Credential Artifacts Management

Rain Zhang edited this page Nov 6, 2025 · 2 revisions

Table of Contents

  1. Introduction
  2. Core Data Structures
  3. Storage Architecture
  4. Artifact Lifecycle Management
  5. Serialization and Deserialization
  6. Metadata Service Integration
  7. Privacy and Data Retention
  8. Troubleshooting Guide

Introduction

The credential artifacts management system handles the storage, retrieval, and validation of WebAuthn credential artifacts throughout their lifecycle. This system manages public keys, attestation objects, and authentication assertions, providing a comprehensive framework for credential data persistence and integrity verification. The implementation supports both local file system storage and Google Cloud Storage (GCS) with seamless fallback mechanisms, ensuring reliability across different deployment environments.

The system is designed to maintain critical metadata including creation timestamps, authenticator AAGUIDs, and signature counters, while providing robust mechanisms for data integrity checks and validation against the FIDO Metadata Service (MDS). The architecture supports post-quantum cryptography and provides extensible interfaces for credential artifact management.

Section sources

  • server/server/credential_artifacts.py
  • server/server/storage.py

Core Data Structures

The credential artifacts management system is built around several key data structures that represent the fundamental components of WebAuthn credentials. These structures are implemented as immutable data classes with strict type checking to ensure data integrity throughout the credential lifecycle.

The AttestedCredentialData class encapsulates the core credential information including the authenticator's AAGUID (Authenticator Attestation GUID), credential ID, and COSE-formatted public key. This structure is parsed from binary data according to the WebAuthn specification, with the AAGUID occupying the first 16 bytes, followed by a 2-byte length prefix for the credential ID, and finally the CBOR-encoded public key.
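
The layout above can be sketched with a small parser. The helper and dataclass names here are illustrative, not the library's API, and the sketch assumes no extension data follows the public key:

```python
import struct
from dataclasses import dataclass


@dataclass(frozen=True)
class ParsedCredentialData:
    aaguid: bytes
    credential_id: bytes
    cose_public_key: bytes  # raw CBOR bytes, left undecoded here


def parse_attested_credential_data(data: bytes) -> ParsedCredentialData:
    """Split the binary layout described above: a 16-byte AAGUID, a
    big-endian 2-byte credential-ID length, the credential ID, then the
    CBOR-encoded COSE public key occupying the remainder (assumes no
    trailing extension data)."""
    aaguid = data[:16]
    (cred_len,) = struct.unpack_from(">H", data, 16)
    credential_id = data[18:18 + cred_len]
    return ParsedCredentialData(aaguid, credential_id, data[18 + cred_len:])
```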

```mermaid
classDiagram
class AttestedCredentialData {
+bytes aaguid
+bytes credential_id
+CoseKey public_key
+__init__(data : bytes)
+create(aaguid : bytes, credential_id : bytes, public_key : CoseKey)
+unpack_from(data : bytes)
+from_ctap1(key_handle : bytes, public_key : bytes)
}
class AuthenticatorData {
+bytes rp_id_hash
+FLAG flags
+int counter
+Optional[AttestedCredentialData] credential_data
+Optional[Mapping] extensions
+is_user_present()
+is_user_verified()
+is_backup_eligible()
+is_backed_up()
+is_attested()
+has_extension_data()
}
class AttestationObject {
+str fmt
+AuthenticatorData auth_data
+Mapping[str, Any] att_stmt
+create(fmt : str, auth_data : AuthenticatorData, att_stmt : Mapping[str, Any])
+from_ctap1(app_param : bytes, registration)
}
class CollectedClientData {
+str type
+bytes challenge
+str origin
+bool cross_origin
+create(type : str, challenge : Union[bytes, str], origin : str, cross_origin : bool)
+property b64
+property hash
}
AttestedCredentialData --> AuthenticatorData : "contained in"
AuthenticatorData --> AttestationObject : "contained in"
CollectedClientData --> AttestationObject : "referenced in"
```

**Diagram sources**

  • fido2/webauthn.py

The AuthenticatorData structure contains the RP ID hash, flags indicating authenticator state (user presence, user verification, etc.), signature counter, and optional attested credential data and extensions. The signature counter is a critical security component that prevents replay attacks by ensuring each authentication operation increments this value.

The AttestationObject serves as a container for the attestation format identifier, authenticator data, and attestation statement. This structure is CBOR-encoded and forms the basis of credential attestation during registration. The attestation statement typically contains cryptographic proof of the credential's authenticity, such as X.509 certificates and signatures in the case of the "fido-u2f" format.

The CollectedClientData structure captures information from the client side of the WebAuthn operation, including the operation type (creation or retrieval), challenge, origin, and cross-origin flag. This data is JSON-encoded and hashed to create the client data hash used in various cryptographic operations.
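
As a rough illustration of how that hash is derived (the field names follow the WebAuthn client-data JSON; the helper names are hypothetical and the real library's serialization may differ in detail):

```python
import hashlib
import json
from base64 import urlsafe_b64encode


def client_data_json(type_: str, challenge: bytes, origin: str,
                     cross_origin: bool = False) -> bytes:
    """Build a client-data JSON payload with the fields described above.
    The challenge is base64url-encoded without padding, per WebAuthn."""
    b64 = urlsafe_b64encode(challenge).rstrip(b"=").decode("ascii")
    return json.dumps({"type": type_, "challenge": b64, "origin": origin,
                       "crossOrigin": cross_origin}).encode("utf-8")


def client_data_hash(client_data: bytes) -> bytes:
    """SHA-256 of the JSON bytes, used as input to signature operations."""
    return hashlib.sha256(client_data).digest()
```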

Section sources

  • fido2/webauthn.py

Storage Architecture

The credential artifacts management system implements a dual-storage architecture that supports both local file system storage and Google Cloud Storage (GCS), with configurable fallback mechanisms. This hybrid approach ensures flexibility across different deployment scenarios while maintaining consistent data access patterns.

```mermaid
graph TD
A[Credential Artifacts System] --> B[Storage Interface]
B --> C[Google Cloud Storage]
B --> D[Local File System]
C --> E[GCS Bucket]
D --> F[Static Directory]
A --> G[Metadata Service]
G --> H[Base Metadata]
G --> I[Session Metadata]
H --> J[Metadata Blob Payload]
I --> K[Session Metadata Items]
A --> L[Client Applications]
L --> M[JavaScript Client]
M --> N[API Endpoints]
N --> A
style A fill:#f9f,stroke:#333
style C fill:#bbf,stroke:#333
style D fill:#bbf,stroke:#333
```

**Diagram sources**

  • server/server/credential_artifacts.py
  • server/server/storage.py
  • server/server/cloud_storage.py

The system uses a hierarchical storage organization with user-specific prefixes and subdirectories. For credential artifacts, the storage path is determined by the session ID and configured prefixes, creating a structure like {session_id}/credential-artifacts/{sha256(storage_id)}.json. This design prevents naming conflicts and enables efficient data isolation between user sessions.
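
A minimal sketch of that path construction, assuming the default prefix shown above (the function name is illustrative, not the server's actual helper):

```python
import hashlib


def artifact_path(session_id: str, storage_id: str,
                  prefix: str = "credential-artifacts") -> str:
    """Build the {session_id}/{prefix}/{sha256(storage_id)}.json layout
    described above, hashing the storage ID so it never appears raw."""
    digest = hashlib.sha256(storage_id.encode("utf-8")).hexdigest()
    return f"{session_id}/{prefix}/{digest}.json"
```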

Two primary storage modules handle different aspects of credential data: credential_artifacts.py manages advanced credential artifacts with rich metadata, while storage.py handles the core credential data used for authentication operations. Both modules implement the same storage interface, allowing them to use either GCS or local file storage based on configuration.

The storage decision is determined by environment variables, particularly FIDO_SERVER_GCS_BUCKET. When GCS is enabled, all operations route through cloud storage functions; otherwise, they fall back to local file operations. This configuration-driven approach allows seamless migration between storage backends without code changes.
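
That decision reduces to a one-line environment check; the helpers below are a hypothetical sketch of the routing, not the server's actual code:

```python
import os


def use_gcs() -> bool:
    """GCS is selected when FIDO_SERVER_GCS_BUCKET is set and non-empty."""
    return bool(os.environ.get("FIDO_SERVER_GCS_BUCKET", "").strip())


def write_record(path: str, payload: bytes) -> None:
    """Route a write to whichever backend the configuration selects."""
    if use_gcs():
        ...  # hand off to the cloud storage upload helper
    else:
        ...  # write to the local static directory
```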

Section sources

  • server/server/credential_artifacts.py
  • server/server/storage.py

Artifact Lifecycle Management

The credential artifacts management system provides a comprehensive lifecycle for credential artifacts through a set of well-defined operations: storage, retrieval, and deletion. These operations are designed to be atomic and thread-safe, ensuring data consistency in concurrent environments.

The store_credential_artifact function is the primary interface for persisting credential artifacts. It accepts a storage ID, payload data, and optional parameters for merging with existing data and specifying the session context. The function normalizes the storage ID, acquires a thread lock to prevent race conditions, and writes the data to the appropriate storage backend.

```mermaid
sequenceDiagram
participant Client as "Client Application"
participant Server as "Server"
participant Storage as "Storage Backend"
Client->>Server : store_credential_artifact(storage_id, payload)
Server->>Server : _normalise_storage_id(storage_id)
Server->>Server : _resolve_session_id(session_id)
Server->>Server : Acquire _LOCK
Server->>Server : _read_record() for merge
Server->>Server : _merge_payload() if merge=True
Server->>Storage : _write_record()
Storage-->>Server : Success/Failure
Server->>Server : Release _LOCK
Server-->>Client : Boolean result
```

**Diagram sources**

  • server/server/credential_artifacts.py

The storage process involves several key steps:

  1. Normalization of the storage ID to ensure consistent formatting
  2. Resolution of the session ID, either from the provided parameter or by creating a new session
  3. Acquisition of a thread lock to ensure atomic operations
  4. Reading existing data if merge functionality is requested
  5. Merging new payload with existing data when specified
  6. Writing the complete record with metadata (creation and update timestamps)
  7. Releasing the lock and returning the operation result

The load_credential_artifact function retrieves stored artifacts by storage ID, following a similar process but focused on reading operations. It validates the storage ID, resolves the session context, reads the record from storage, and extracts the payload from the stored structure. The function handles various error conditions gracefully, returning None for missing or invalid artifacts rather than raising exceptions.

The delete_credential_artifact function removes stored artifacts and returns a boolean indicating success. It follows the same pattern of ID normalization, session resolution, and thread-safe operations to ensure reliable deletion across storage backends.

A critical feature of the artifact management system is the merge functionality, which allows partial updates to existing artifacts. When the merge parameter is set to True, the system shallowly merges the new payload with existing data, preserving unchanged fields while updating modified ones. This enables efficient updates without requiring clients to provide complete artifact representations.
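
A shallow merge is just a top-level dictionary union; the example below (a hypothetical helper) also shows its main caveat, that nested objects are replaced wholesale rather than merged:

```python
def shallow_merge(existing: dict, update: dict) -> dict:
    """Top-level keys in `update` win; nested dicts are replaced
    wholesale, not merged recursively."""
    return {**existing, **update}
```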

Section sources

  • server/server/credential_artifacts.py

Serialization and Deserialization

The credential artifacts management system employs JSON as the primary serialization format for stored artifacts, with CBOR used for encoding binary WebAuthn structures. This dual-format approach leverages the strengths of each format: JSON for human-readable, web-friendly data exchange and CBOR for compact, efficient binary encoding of cryptographic data.

```mermaid
flowchart TD
A[Raw Credential Data] --> B{Data Type}
B --> |Binary/CBOR| C[CBOR Encoding]
B --> |Structured/JSON| D[JSON Serialization]
C --> E[AttestationObject]
C --> F[AuthenticatorData]
D --> G[Credential Artifact Record]
G --> H[Storage]
H --> I{Storage Backend}
I --> |GCS| J[Cloud Storage]
I --> |Local| K[File System]
J --> L[JSON File]
K --> L
L --> M[Retrieval]
M --> N{Data Type}
N --> |JSON Artifact| O[JSON Deserialization]
N --> |CBOR Data| P[CBOR Decoding]
O --> Q[Application Data]
P --> R[WebAuthn Structures]
```

**Diagram sources**

  • fido2/cbor.py
  • server/server/credential_artifacts.py

The JSON serialization process for credential artifacts includes several important transformations to ensure data integrity and compatibility. Binary data such as public keys, signatures, and credential IDs are converted to base64url-encoded strings using the convert_bytes_for_json function. This function recursively traverses data structures, converting all bytes-like objects to web-safe string representations that can be safely included in JSON payloads.
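
A hedged sketch of that recursive conversion (the function name here is illustrative; the server's convert_bytes_for_json may differ in detail):

```python
from base64 import urlsafe_b64encode


def bytes_to_json_safe(value):
    """Recursively replace bytes-like objects with unpadded base64url
    strings so the structure can be serialized as JSON."""
    if isinstance(value, (bytes, bytearray, memoryview)):
        return urlsafe_b64encode(bytes(value)).rstrip(b"=").decode("ascii")
    if isinstance(value, dict):
        return {k: bytes_to_json_safe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [bytes_to_json_safe(v) for v in value]
    return value
```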

The CBOR encoding and decoding functionality is implemented in the cbor.py module, which provides a minimal but complete implementation of the CBOR specification tailored to FIDO2 CTAP requirements. The encoding process uses a registry of serializers for different data types (integers, booleans, strings, bytes, lists, and dictionaries), with special handling for map keys to ensure deterministic encoding.
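
To illustrate deterministic encoding, here is a minimal stand-alone encoder for the basic CBOR types, sorting map keys into the CTAP2 canonical order (shorter encoded key first, then bytewise); it is a sketch of the technique, not the cbor.py implementation, and omits booleans, floats, and None for brevity:

```python
def cbor_encode(data) -> bytes:
    """Encode ints, bytes, text, lists, and maps per RFC 8949, with
    CTAP2 canonical map-key ordering for deterministic output."""
    if isinstance(data, int) and not isinstance(data, bool):
        return _head(0, data) if data >= 0 else _head(1, -1 - data)
    if isinstance(data, bytes):
        return _head(2, len(data)) + data
    if isinstance(data, str):
        enc = data.encode("utf-8")
        return _head(3, len(enc)) + enc
    if isinstance(data, list):
        return _head(4, len(data)) + b"".join(cbor_encode(x) for x in data)
    if isinstance(data, dict):
        # canonical form: sort by encoded key length, then bytewise
        items = sorted(((cbor_encode(k), cbor_encode(v))
                        for k, v in data.items()),
                       key=lambda kv: (len(kv[0]), kv[0]))
        return _head(5, len(data)) + b"".join(k + v for k, v in items)
    raise ValueError(f"unsupported type: {type(data)}")


def _head(mt: int, arg: int) -> bytes:
    """Encode major type + argument in the shortest possible form."""
    if arg < 24:
        return bytes([mt << 5 | arg])
    for ai, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if arg < 1 << (8 * size):
            return bytes([mt << 5 | ai]) + arg.to_bytes(size, "big")
    raise ValueError("argument too large")
```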

```mermaid
classDiagram
class CBORSerializer {
+dump_int(data : int, mt : int)
+dump_bool(data : bool)
+dump_list(data : Sequence[CborType])
+dump_dict(data : Mapping[CborType, CborType])
+dump_bytes(data : bytes)
+dump_text(data : str)
+encode(data : CborType)
}
class CBORDeserializer {
+load_int(ai : int, data : bytes)
+load_nint(ai : int, data : bytes)
+load_bool(ai : int, data : bytes)
+load_bytes(ai : int, data : bytes)
+load_text(ai : int, data : bytes)
+load_array(ai : int, data : bytes)
+load_map(ai : int, data : bytes)
+decode_from(data : bytes)
+decode(data)
}
CBORSerializer --> CBORDeserializer : "symmetric operations"
CBORSerializer --> fido2.webauthn : "used by"
CBORDeserializer --> fido2.webauthn : "used by"
```

**Diagram sources**

  • fido2/cbor.py

The deserialization process reverses these transformations, converting base64url-encoded strings back to binary data and reconstructing the original data structures. The system handles potential parsing errors gracefully, returning None for malformed JSON or CBOR data rather than propagating exceptions to higher layers.

A key consideration in the serialization design is the preservation of data integrity during the encoding/decoding cycle. The system ensures that binary data can be round-tripped through JSON serialization without corruption, maintaining the exact byte sequences required for cryptographic operations.

Section sources

  • fido2/cbor.py
  • server/server/storage.py

Metadata Service Integration

The credential artifacts management system integrates with the FIDO Metadata Service (MDS) to validate authenticator trustworthiness and provide detailed information about registered devices. This integration enables verification of attestation statements against known authenticator metadata, detection of compromised devices, and enforcement of security policies based on authenticator capabilities.

The MDS integration is implemented through the mds3.py module, which provides classes and functions for parsing and validating MDS blobs, as well as verifying attestation trust using the metadata. The core component is the MdsAttestationVerifier class, which maintains a cache of metadata entries and provides methods for looking up authenticator information by AAGUID or certificate chain.

```mermaid
classDiagram
class MetadataBlobPayload {
+str legal_header
+int no
+date next_update
+Sequence[MetadataBlobPayloadEntry] entries
}
class MetadataBlobPayloadEntry {
+Sequence[StatusReport] status_reports
+date time_of_last_status_change
+Optional[str] aaid
+Optional[Aaguid] aaguid
+Optional[Sequence[bytes]] attestation_certificate_key_identifiers
+Optional[MetadataStatement] metadata_statement
+Optional[Sequence[BiometricStatusReport]] biometric_status_reports
+Optional[str] rogue_list_url
+Optional[bytes] rogue_list_hash
}
class MetadataStatement {
+str description
+int authenticator_version
+int schema
+Sequence[Version] upv
+Sequence[str] attestation_types
+Sequence[Sequence[VerificationMethodDescriptor]] user_verification_details
+Sequence[str] key_protection
+Sequence[str] matcher_protection
+Sequence[str] attachment_hint
+Sequence[str] tc_display
+Sequence[bytes] attestation_root_certificates
+Optional[str] legal_header
+Optional[str] aaid
+Optional[Aaguid] aaguid
+Optional[Sequence[bytes]] attestation_certificate_key_identifiers
+Optional[Mapping[str, str]] alternative_descriptions
}
class StatusReport {
+AuthenticatorStatus status
+Optional[date] effective_date
+Optional[int] authenticator_version
+Optional[bytes] certificate
+Optional[str] url
+Optional[str] certification_descriptor
+Optional[str] certificate_number
+Optional[str] certification_policy_version
+Optional[str] certification_requirements_version
}
class MdsAttestationVerifier {
+__init__(blob : MetadataBlobPayload, entry_filter : Optional[EntryFilter], attestation_filter : Optional[LookupFilter])
+find_entry_by_aaguid(aaguid : Aaguid)
+find_entry_by_chain(certificate_chain : Sequence[bytes])
+ca_lookup(attestation_result, auth_data)
+find_entry(attestation_object : AttestationObject, client_data_hash : bytes)
+evaluate_attestation(attestation_object : AttestationObject, client_data_hash : bytes)
}
MetadataBlobPayload --> MetadataBlobPayloadEntry : "contains"
MetadataBlobPayloadEntry --> MetadataStatement : "references"
MetadataBlobPayloadEntry --> StatusReport : "contains"
MdsAttestationVerifier --> MetadataBlobPayload : "uses"
```

**Diagram sources**

  • fido2/mds3.py

The system supports both base metadata from official sources and session-specific metadata that can be uploaded by users for testing or development purposes. The metadata.py module manages this dual-source approach, merging session metadata with base metadata to create a comprehensive view of available authenticator information.

When validating an attestation, the system first attempts to locate the corresponding metadata entry using the authenticator's AAGUID. If no AAGUID is available (as in some legacy authenticators), it falls back to searching by the attestation certificate chain. This lookup process considers the authenticator's status reports, rejecting entries marked as revoked or with compromised attestation keys.
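
The lookup-with-fallback logic might be sketched like this, using plain dicts in the MDS blob's JSON shape. The status names come from the MDS specification, but the helper itself is hypothetical and simplified (it assumes status reports are in chronological order):

```python
# AuthenticatorStatus values from the MDS spec that disqualify an entry
COMPROMISED = {"REVOKED", "ATTESTATION_KEY_COMPROMISE",
               "USER_KEY_REMOTE_COMPROMISE", "USER_KEY_PHYSICAL_COMPROMISE"}


def find_entry(entries, aaguid=None, cert_key_ids=()):
    """Prefer lookup by AAGUID; fall back to attestation-certificate key
    identifiers (for legacy authenticators without an AAGUID); reject
    entries whose latest status report marks them compromised."""
    for entry in entries:
        if aaguid and entry.get("aaguid") == aaguid:
            candidate = entry
        elif cert_key_ids and set(cert_key_ids) & set(
                entry.get("attestationCertificateKeyIdentifiers", [])):
            candidate = entry
        else:
            continue
        statuses = [r["status"] for r in candidate.get("statusReports", [])]
        if statuses and statuses[-1] in COMPROMISED:
            return None  # known device, but no longer trustworthy
        return candidate
    return None
```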

The metadata integration also supports dynamic updates and caching. The system periodically checks for updated metadata blobs and caches them locally to reduce latency and network dependencies. Session-specific metadata entries are stored separately and can be managed through the web interface, allowing users to upload custom metadata for testing purposes.

Section sources

  • fido2/mds3.py
  • server/server/metadata.py

Privacy and Data Retention

The credential artifacts management system implements several privacy-preserving features to protect user data and comply with data protection regulations. These features include data minimization, anonymization, and configurable retention policies that balance security requirements with privacy concerns.

The system employs storage ID normalization to ensure consistent identifier formatting while preventing the storage of potentially sensitive information in raw form. Storage IDs are hashed using SHA-256 before being used as filenames, which prevents direct correlation between storage identifiers and user identities. This hashing mechanism also protects against directory traversal attacks and other security vulnerabilities.
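
A sketch of that normalization (the function name is illustrative, not the server's actual helper):

```python
import hashlib
from typing import Optional


def normalise_storage_id(storage_id) -> Optional[str]:
    """Reject non-strings and empty strings, then hash the trimmed ID
    to a fixed-length hex filename so the raw identifier is never
    stored and cannot be used for directory traversal."""
    if not isinstance(storage_id, str):
        return None
    trimmed = storage_id.strip()
    if not trimmed:
        return None
    return hashlib.sha256(trimmed.encode("utf-8")).hexdigest() + ".json"
```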

```mermaid
flowchart TD
A[Raw Storage ID] --> B[Normalization]
B --> C[Trim whitespace]
C --> D[Validate string]
D --> E{Valid?}
E --> |No| F[Return None]
E --> |Yes| G[SHA-256 Hash]
G --> H[Hexadecimal Digest]
H --> I[.json extension]
I --> J[File Path]
J --> K[Secure Storage]
style E fill:#f96,stroke:#333
style F fill:#f66,stroke:#333
style K fill:#6f9,stroke:#333
```

**Diagram sources**

  • server/server/credential_artifacts.py

Data retention is managed through session-based organization and automatic cleanup of inactive sessions. The system tracks the last access time for each session and periodically removes sessions that have been inactive for more than 14 days. This automatic cleanup helps minimize the amount of stored data and reduces the privacy risks associated with long-term data retention.
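
The retention check reduces to a timestamp comparison against the 14-day window; a hypothetical sketch:

```python
import time
from typing import Optional

RETENTION_SECONDS = 14 * 24 * 3600  # 14-day inactivity window


def expired_sessions(last_access: dict, now: Optional[float] = None) -> list:
    """Return session IDs whose last access is older than the retention
    window; the real cleanup would then delete each session's storage
    prefix."""
    now = time.time() if now is None else now
    return [sid for sid, ts in last_access.items()
            if now - ts > RETENTION_SECONDS]
```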

The system also implements client-side data management through JavaScript interfaces that allow users to control what information is stored and for how long. Users can explicitly delete credential artifacts through the web interface, triggering the server-side deletion process. This user-controlled deletion capability supports data subject rights under privacy regulations like GDPR.

For sensitive data such as attestation certificates and cryptographic material, the system uses selective redaction in certain contexts. The _summarize_stored_credential function, for example, excludes heavy or sensitive keys from credential summaries, reducing the amount of sensitive data exposed in user interfaces and API responses.

The system's privacy design follows the principle of data minimization, storing only the information necessary for WebAuthn operations and security validation. Metadata about authenticators is stored separately from user credential data, and access to this information is controlled through session management and authentication mechanisms.

Section sources

  • server/server/credential_artifacts.py
  • server/server/routes/advanced.py

Troubleshooting Guide

This section provides guidance for diagnosing and resolving common issues related to credential artifacts management, including missing artifacts, format parsing errors, and version compatibility problems.

Missing Artifacts

When credential artifacts cannot be retrieved, verify the following:

  1. Storage ID normalization: Ensure the storage ID is a non-empty string that has been properly normalized. The system rejects non-string identifiers and empty strings.
  2. Session context: Confirm that the correct session ID is being used, especially in multi-user environments. Session IDs are resolved from cookies or explicitly provided parameters.
  3. Storage backend configuration: Check that the storage backend (GCS or local file system) is properly configured and accessible. Environment variables like FIDO_SERVER_GCS_BUCKET must be set correctly.
  4. File permissions: For local file storage, verify that the application has read and write permissions to the storage directory (static/credential-artifacts by default).

```mermaid
flowchart TD
A[Artifact Not Found] --> B{Storage ID valid?}
B --> |No| C[Check ID format and type]
B --> |Yes| D{Session ID resolved?}
D --> |No| E[Verify session establishment]
D --> |Yes| F{Using GCS?}
F --> |Yes| G[Check GCS configuration]
F --> |No| H[Check file permissions]
G --> I[Verify bucket name and credentials]
H --> J[Verify directory exists and is writable]
C --> K[Ensure string input, trim whitespace]
E --> L[Check cookie and session setup]
I --> M[Test GCS connectivity]
J --> N[Create directory if missing]
```

**Diagram sources**

  • server/server/credential_artifacts.py

Format Parsing Errors

JSON and CBOR parsing errors typically indicate corrupted data or version incompatibilities. To resolve these issues:

  1. Validate JSON structure: Ensure that stored artifacts are valid JSON with proper encoding. The system uses UTF-8 encoding and expects well-formed JSON objects.
  2. Check CBOR encoding: Verify that binary WebAuthn structures are properly CBOR-encoded according to the FIDO specifications. Use the cbor.encode and cbor.decode functions for consistent encoding.
  3. Handle binary data: Ensure that binary data (public keys, signatures, etc.) is properly base64url-encoded when included in JSON artifacts.
  4. Version compatibility: Confirm that the artifact format version is compatible with the current system version. The system supports schema version 1 for credential artifacts.

Version Compatibility

When integrating with different components or upgrading the system, consider the following compatibility issues:

  1. Storage format changes: Major version updates may introduce changes to the storage format. Always test artifact migration procedures in a staging environment before deploying to production.
  2. Metadata service updates: The MDS blob format may change between versions. The system includes validation for required fields and default values for missing optional fields.
  3. API endpoint changes: Client-side JavaScript code must be synchronized with server-side API changes. The /api/advanced/credential-artifacts endpoint follows REST conventions for CRUD operations.

For persistent issues, enable detailed logging to capture the exact sequence of operations and error conditions. The system logs warnings and errors related to storage operations, which can provide valuable diagnostic information.

Section sources

  • server/server/credential_artifacts.py
  • server/server/storage.py
  • server/server/static/scripts/shared/credential-artifacts-client.js
