Documentation

How OpenAC Studio works, how to extend it, and what it does not cover.

How It Works

OpenAC Studio is a deterministic, rule-based design tool for anonymous credential presentation flows. There are no LLM calls, no network requests, and no server-side processing — everything runs locally in your browser.

1. Scenario Configuration

You describe your use case by selecting values for 8 parameters: presentation frequency, verifier topology, unlinkability goal, anti-replay strategy, device binding policy, verification target, credential format, and revocation handling.

2. Rule Engine

A deterministic rule engine evaluates your scenario against a set of rules. Each rule checks a condition (e.g., "unlinkability goal is cross-verifiers") and adds or removes modules accordingly. The engine then resolves module dependencies and detects conflicts.
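The evaluation loop described above can be sketched as follows. The Scenario and Rule shapes here are simplified stand-ins for illustration, not OpenAC Studio's actual types:

```typescript
// Illustrative sketch of a deterministic rule engine pass.
// The Scenario and Rule shapes are simplified assumptions, not the
// tool's real types.
interface Scenario {
  unlinkabilityGoal: "none" | "per-verifier" | "cross-verifiers";
  presentationFrequency: "one-time" | "repeated";
}

interface Rule {
  id: string;
  when: (s: Scenario) => boolean;
  addModules: string[];
  explanation: string;
}

const rules: Rule[] = [
  {
    id: "unlinkability-needs-reblind",
    when: (s) => s.unlinkabilityGoal === "cross-verifiers",
    addModules: ["reblind_rerandomize"],
    explanation:
      "Cross-verifier unlinkability requires rerandomized presentations.",
  },
];

function evaluate(scenario: Scenario): Set<string> {
  const selected = new Set<string>();
  for (const rule of rules) {
    if (rule.when(scenario)) {
      rule.addModules.forEach((m) => selected.add(m));
    }
  }
  return selected;
}
```

Because every rule is a pure predicate over the scenario, the same inputs always produce the same module set.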

3. Module Graph

The output is an ordered list of recommended cryptographic modules. Each module includes an explanation of why it was selected and the risk of omitting it. Modules include: attribute commitments, selective disclosure, reblind/rerandomize, verifier nonce, nullifier, device binding, and verification targets.

4. Diagram Generation

From the selected modules, a Mermaid sequence diagram is generated at two detail levels: a high-level overview showing actors and steps, and a crypto-level view showing actual cryptographic operations (commitments, signatures, proofs).

5. Threat Model

A library of 22 threat templates spanning 10 security categories is evaluated against your scenario and selected modules. Each threat has an appliesWhen predicate and a list of mitigations with module dependencies. The generator checks whether each mitigation is satisfied by your current module selection.
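The satisfaction check at the end of that step amounts to a set inclusion test, roughly like this (a sketch; the real generator's signature may differ):

```typescript
// Sketch of the mitigation satisfaction check: a mitigation counts as
// satisfied only when every module it depends on is currently selected.
function mitigationSatisfied(
  dependsOnModules: string[],
  selected: Set<string>,
): boolean {
  return dependsOnModules.every((m) => selected.has(m));
}
```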

Privacy Score

The privacy score measures what percentage of applicable privacy protections your configuration has in place. It is calculated as:

Score = (earned points / applicable points) × 100

Not every factor applies to every scenario. For example, device binding is only scored when it is required and presentations are repeated. The score reflects how well-covered you are given your specific configuration, not a universal privacy rating.
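A minimal sketch of the formula, assuming each factor carries its points, an applicability flag, and whether the needed module is selected:

```typescript
// Minimal sketch of Score = (earned / applicable) × 100.
// The Factor shape is an illustrative assumption.
interface Factor {
  points: number;
  applies: boolean;   // does this factor count for the scenario?
  satisfied: boolean; // is the needed module selected?
}

function privacyScore(factors: Factor[]): number {
  const applicable = factors.filter((f) => f.applies);
  const total = applicable.reduce((sum, f) => sum + f.points, 0);
  if (total === 0) return 100; // nothing applicable: fully covered
  const earned = applicable
    .filter((f) => f.satisfied)
    .reduce((sum, f) => sum + f.points, 0);
  return Math.round((earned / total) * 100);
}
```

Note that inapplicable factors are excluded from the denominator, not scored as zero, which is what keeps the score scenario-relative.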

Scoring Factors

Factor                  Points   When Applicable                                                               Module Needed
Selective Disclosure    15       Always                                                                        selective_disclosure
Attribute Commitments   10       Always                                                                        attribute_commitments
Unlinkability           25       Unlinkability goal is set, and presentations are repeated or multi-verifier   reblind_rerandomize
Verifier Collusion      10       Multi-verifier topology with no explicit unlinkability goal                   reblind_rerandomize
Anti-Replay             15       Any anti-replay strategy is configured                                        verifier_challenge_nonce
Nullifier               10       Anti-replay strategy is "nullifier"                                           nullifier_antireplay
Device Binding          15       Device binding is required and presentations are repeated                     device_binding

Edge Cases

  • If no factors apply (e.g., a minimal one-time, single-verifier, no-anti-replay configuration with all always-on modules present), the score is 100 — there are no applicable risks left to mitigate.
  • The "Unlinkability" and "Verifier Collusion" factors are mutually exclusive — they cannot both be applicable at the same time, since collusion risk only triggers when no unlinkability goal is set.

Key Concepts

Unlinkability & Reblind

Core Privacy

When a credential is presented multiple times, verifiers can correlate presentations using stable proof elements. Rerandomization (reblind) generates fresh randomness for each presentation, making it impossible to link them — even if verifiers collude. This is critical for repeat presentations and multi-verifier scenarios.

Nonce vs. Nullifier Anti-Replay

Anti-Replay

A verifier nonce is a fresh random challenge that binds the proof to a specific session, preventing replay of captured proofs. A nullifier is a deterministic value tied to the credential and context — it prevents double-use (like double-voting or double-spending) while maintaining unlinkability. Nonces prevent replay; nullifiers prevent re-use.
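The deterministic character of a nullifier can be illustrated with a plain hash. Real systems derive the nullifier inside a zero-knowledge proof so the credential secret is never revealed; the hash below only demonstrates the same-context/different-context behavior:

```typescript
import { createHash } from "node:crypto";

// Toy nullifier: a hash of a credential-held secret and a context
// string. Illustrative only; production systems compute this inside
// a ZK proof so the secret stays hidden.
function nullifier(credentialSecret: string, context: string): string {
  return createHash("sha256")
    .update(credentialSecret)
    .update(context)
    .digest("hex");
}
```

The same credential in the same context always yields the same value, so a verifier can detect double-use, while nullifiers from different contexts are unlinkable hash outputs.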

Device Binding

Possession

Device binding ties the credential to a hardware-backed key (e.g., Secure Enclave, TEE). The wallet signs the verifier's challenge with the device key, proving physical possession. Without it, credentials can be exported, shared, or cloned.
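The challenge-response at the heart of device binding can be sketched with an ordinary ECDSA key. In practice the private key lives in the Secure Enclave or TEE and never leaves the device; here a software key stands in for it:

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// Software ECDSA key standing in for a hardware-backed device key.
const { publicKey, privateKey } = generateKeyPairSync("ec", {
  namedCurve: "P-256",
});

// Verifier side: issue a fresh random challenge.
const challenge = randomBytes(32);

// Wallet side: prove possession by signing the challenge with the
// device key.
const signature = sign("sha256", challenge, privateKey);

// Verifier side: check the signature against the registered device key.
const bound = verify("sha256", challenge, publicKey, signature);
```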

On-Chain Verification

Constraints

Verifying proofs on-chain (smart contract) adds transparency and trust but introduces constraints: proof size must fit gas limits, the verifier contract must be audited, and verification key management becomes critical. This tool models on-chain as a verification target but does not simulate gas costs or circuit constraints.

Extending the Tool

Adding a Module

Edit src/lib/modules/registry.ts and add a ModuleDefinition to the registry array. Specify its provides, requires, conflicts, and diagram hooks. The rule engine and diagram generator will automatically pick it up.
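A new entry might look like the following. The object shape is an assumption based on the fields named above; the actual ModuleDefinition type in src/lib/modules/registry.ts is authoritative, and the module shown is hypothetical:

```typescript
// Hypothetical registry entry. Field names follow the ones mentioned
// above (provides, requires, conflicts); check the real
// ModuleDefinition type before copying this.
const auditLogModule = {
  id: "presentation_audit_log",
  provides: ["audit_trail"],
  requires: ["verifier_challenge_nonce"], // depends on session binding
  conflicts: [],
  explanation: "Records presentation events for holder-side review.",
  riskIfOmitted: "Holders cannot detect unexpected presentations.",
};
```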

Adding a Rule

Edit src/lib/rules/ruleset.ts and add a Rule with a predicate function, module adds/removals, and a human-readable explanation. The engine resolves dependencies and conflicts automatically.
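A rule might be shaped like this. The field and scenario names are illustrative assumptions; the real Rule type in src/lib/rules/ruleset.ts is authoritative:

```typescript
// Hypothetical rule. The scenario field names are illustrative;
// consult the real Rule type before copying this.
const onChainNeedsNullifier = {
  id: "onchain-needs-nullifier",
  when: (scenario: { verificationTarget: string; antiReplay: string }) =>
    scenario.verificationTarget === "on-chain" &&
    scenario.antiReplay === "nullifier",
  addModules: ["nullifier_antireplay"],
  removeModules: [],
  explanation:
    "On-chain double-spend protection requires a deterministic nullifier.",
};
```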

Adding a Threat Template

Edit src/lib/threats/templates.ts and add a ThreatTemplate. Define the appliesWhen predicate (receives scenario and selected modules), severity, mitigations with dependsOnModules, detection signals, and references. The generator handles evaluation and satisfaction checks.
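A template might look like the sketch below. The field and scenario names are assumptions based on the description above; the real ThreatTemplate type in src/lib/threats/templates.ts is authoritative:

```typescript
// Hypothetical threat template; check the real ThreatTemplate type
// before copying this shape.
const colludingVerifiers = {
  id: "verifier-collusion-linking",
  category: "Verifier Collusion",
  severity: "high",
  appliesWhen: (scenario: { topology: string }, _modules: string[]) =>
    scenario.topology === "multi-verifier",
  mitigations: [
    {
      description: "Rerandomize every presentation",
      dependsOnModules: ["reblind_rerandomize"],
    },
  ],
  detectionSignals: ["identical proof elements observed at two verifiers"],
  references: [],
};
```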

Threat Categories

  • Soundness / Forgery: credential or proof forgery attacks
  • Zero-Knowledge Leakage: unintended attribute disclosure
  • Unlinkability / Linkability: cross-session or cross-verifier tracking
  • Replay / Double-Spend: proof reuse or double-presentation
  • Device Sharing / Cloning: credential export or device theft
  • Issuer Tracking / Registry: issuer-side holder surveillance
  • Verifier Collusion: multi-verifier correlation attacks
  • Dependency / Status / Revocation: revocation gaps or status failures
  • Implementation / Side-Channels: timing, memory, logging leaks
  • Operational / Key Management: key rotation, contract audit, gas limits

Limitations

Important

  • This is a checklist-based design tool, not a formal security proof or verification system. It identifies common risks based on configuration but does not guarantee completeness.
  • The threat model covers known patterns for anonymous credential systems. Novel or domain-specific threats may not be included.
  • On-chain verification constraints (gas limits, circuit compatibility, verifier contract correctness) are flagged as risks but not simulated.
  • Network-level attacks, issuer misbehavior at issuance time, and physical coercion are out of scope.
  • Always review the output with a qualified security team before production deployment.