Product Guide

AI Trust Registry

Transform discovered tools and agents into risk-prioritized, confidence-scored entities with auditable identity posture.

Last updated Mar 4, 2026

Track: now
Frameworks: Provenance, Mandate, Meridian
Ethira workflow steps: 1 (tool inventory), 2 (agent discovery)

Product Description

AI Trust Registry is the identity and governance intake layer for autonomous systems. It converts inventory and discovery records into actionable trust posture, so security, risk, and platform teams can prioritize onboarding, approvals, and remediation.

The product answers three implementation-critical questions:

  1. Do we know what this asset or agent actually is? (Provenance)
  2. Can humans or policy controls intervene when needed? (Mandate)
  3. Is supporting evidence complete and reliable enough for decisions? (Meridian)

Problem Narrative: Why This Exists

Most organizations discover AI usage in reverse order. Agents and tools appear in production first, while identity, ownership, and control evidence appear later or not at all.

Typical failure sequence:

  1. A business team connects a new agent to customer or financial workflows.
  2. Security teams cannot verify who owns it, what it can access, or how intervention works.
  3. Compliance teams cannot prove governance posture at audit or incident time.
  4. Risk decisions become subjective because confidence in the evidence is unknown.

AI Trust Registry solves this by turning discovery into scored governance posture before broad production trust is granted.

Mathematical Approach Applied

AI Trust Registry operationalizes identity posture as a weighted risk model driven by framework outputs:

P = provenance_score / 100
M = mandate_score / 100
C = meridian_confidence   (0 to 1)

IdentityRisk = 1 - (0.45*P + 0.35*M + 0.20*C)

PriorityScore = 100 * (0.60*IdentityRisk + 0.40*ExposureFactor)

Interpretation:

  • IdentityRisk estimates trust uncertainty from identity, intervention readiness, and evidence quality.
  • ExposureFactor represents business impact scope (data sensitivity, access breadth, blast radius).
  • PriorityScore ranks what must be remediated first.
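The weighted model above can be sketched in Python. The weights and formulas come directly from this guide; the example inputs are illustrative only.

```python
def identity_risk(provenance_score: float, mandate_score: float,
                  meridian_confidence: float) -> float:
    """Trust uncertainty in [0, 1]; higher means less trustworthy."""
    p = provenance_score / 100
    m = mandate_score / 100
    c = meridian_confidence  # already in [0, 1]
    return 1 - (0.45 * p + 0.35 * m + 0.20 * c)

def priority_score(identity_risk_value: float, exposure_factor: float) -> float:
    """Remediation priority in [0, 100]; higher is remediated first."""
    return 100 * (0.60 * identity_risk_value + 0.40 * exposure_factor)

# Example: a reasonably well-identified agent with moderate exposure.
risk = identity_risk(80, 70, 0.9)           # ≈ 0.215
print(round(priority_score(risk, 0.5), 1))  # ≈ 32.9
```

Note how the 0.45/0.35/0.20 weights make identity verification the dominant factor, with evidence confidence acting as a tiebreaker.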

Example policy gate:

if provenance_score < 70 or mandate_score < 60 or meridian_confidence < 0.80:
    status = "conditional"
    require_remediation = True
else:
    status = "production-eligible"
    require_remediation = False

Why This Gap Exists In The Market

This problem is usually split across disconnected tooling categories:

  • Asset inventories tell you what exists, not whether it is governable.
  • IAM and access tools enforce permissions, not trust posture scoring.
  • GRC systems track controls, but often depend on manual evidence assembly.

AI Trust Registry combines identity verification, intervention readiness, and evidence confidence into one quantitative decision layer. That integrated scoring approach is still uncommon in mainstream security and governance products.

Compliance Mapping (EU and US)

This product is a control-enablement layer, not legal advice. It supports auditability and policy enforcement by producing traceable scores and decision evidence.

EU

  • EU AI Act (risk management, governance, and oversight expectations for high-risk systems): maintains a scored inventory and control-readiness state per agent or tool.
  • NIS2 (asset visibility, security governance): provides a structured inventory plus risk-prioritized remediation queues.
  • DORA (ICT third-party and operational resilience controls): improves vendor and agent onboarding governance with scored trust posture.

US

  • NIST AI RMF (Govern, Map, Measure, Manage): supplies measurable identity and control confidence signals for lifecycle governance.
  • FTC Section 5 risk posture (deceptive or unfair AI operations concerns): produces documented governance decisions and intervention pathways.
  • SOC 2-style control evidence programs: provides continuous, evidence-linked score history for audits.

Competitor Overlap Analysis

Potential overlap exists with multiple categories, but coverage is partial in each:

  • CMDB / asset inventory tools overlap on discovery and asset records; AI Trust Registry adds quantitative trust scoring and readiness thresholds for AI operations.
  • IAM / PAM platforms overlap on access control and credential policy; AI Trust Registry adds framework-driven confidence and intervention-readiness scoring for agents and tools.
  • Model registries / MLOps catalogs overlap on model metadata and lineage; AI Trust Registry adds cross-tool and agent governance posture tied to operational risk decisions.
  • GRC platforms overlap on control libraries and workflows; AI Trust Registry adds real-time, score-based prioritization from live technical evidence.

Primary Users

  • Platform security teams deciding which agents can move from sandbox to production.
  • Risk and compliance operators creating exposure views by business unit.
  • Engineering leads triaging low-confidence integrations before launch.

How It Works

  1. Inventory Connectors
  2. Entity Normalization
  3. Framework Scorers
  4. Registry Entity Store
  5. Risk Views + Policy Actions

Emits

  • Provenance score
  • Mandate score
  • Meridian confidence
  • score.updated webhook

Scoring workflow:

  1. POST /v1/assets/score or /v1/agents/score
  2. Write score + confidence to registry
  3. Evaluate policy thresholds
  4. Create alerts and remediation tasks
  5. Deliver webhook notifications
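Step 1 of the workflow can be exercised with a small client sketch. The endpoint path comes from this guide; the host, header set, and payload field names are assumptions for illustration, not a published schema.

```python
import json
import urllib.request

BASE_URL = "https://registry.example.com"  # placeholder host

def build_score_request(asset_id: str, owner_team: str,
                        data_scope: list[str]) -> urllib.request.Request:
    """Build (without sending) a scoring request for one asset."""
    payload = {
        "asset_id": asset_id,                # from the minimum data contract
        "owner_team": owner_team,            # owner/team metadata
        "data_access_scope": data_scope,     # integration permissions
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/assets/score",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_score_request("asset-42", "platform-security", ["crm:read"])
print(req.full_url)  # https://registry.example.com/v1/assets/score
```

Dispatching the request (for example via `urllib.request.urlopen(req)`) would then trigger steps 2 through 5 server-side.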

Detailed Example Use Cases

Use Case 1: New Vendor Agent Approval

A procurement team introduces an external vendor agent with access to customer communication channels.

  1. Discovery connector registers agent_id.
  2. AI Trust Registry scores identity, control readiness, and evidence quality.
  3. Policy requires: provenance >= 70, mandate >= 60, confidence >= 0.80.
  4. Agent fails on Mandate because intervention hooks are incomplete.
  5. Approval is blocked until control points are implemented and rescored.

Outcome: high-risk onboarding prevented before production access.

Use Case 2: Internal Tool Estate Rationalization

A large enterprise has thousands of discovered tools, but no trust prioritization.

  1. Batch scoring processes all assets.
  2. Registry groups entities by risk band + confidence state.
  3. Teams focus first on "high risk + high confidence" and "medium risk + low confidence" clusters.
  4. Low-confidence entities get evidence enrichment tasks.

Outcome: remediation order becomes objective and scalable.
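The clustering in steps 2 and 3 can be sketched as follows. The band boundaries and confidence cutoff below are illustrative assumptions, not product defaults.

```python
from collections import defaultdict

def risk_band(priority_score: float) -> str:
    """Illustrative bands; real thresholds would be policy-defined."""
    if priority_score >= 70:
        return "high"
    if priority_score >= 40:
        return "medium"
    return "low"

def confidence_state(meridian_confidence: float) -> str:
    return "high" if meridian_confidence >= 0.80 else "low"

def cluster(entities: list[dict]) -> dict[tuple[str, str], list[str]]:
    """Group entity IDs by (risk band, confidence state)."""
    groups: dict[tuple[str, str], list[str]] = defaultdict(list)
    for e in entities:
        key = (risk_band(e["priority_score"]), confidence_state(e["confidence"]))
        groups[key].append(e["entity_id"])
    return dict(groups)

inventory = [
    {"entity_id": "tool-a", "priority_score": 82, "confidence": 0.91},
    {"entity_id": "tool-b", "priority_score": 55, "confidence": 0.60},
]
print(cluster(inventory))
# {('high', 'high'): ['tool-a'], ('medium', 'low'): ['tool-b']}
```

Teams would then work the ("high", "high") cluster first and route low-confidence clusters to evidence enrichment.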

Integration Surfaces

  • POST /v1/assets/score
  • POST /v1/agents/score
  • GET /v1/score/{framework}/{entity_id}
  • score.updated webhook

Minimum Data Contract

  • asset_id or agent_id
  • owner/team metadata
  • integration permissions and data access scope
  • available attestations/control points
  • evidence-quality metadata for confidence scoring
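One way to pin the contract down is a typed record. The field names below mirror the bullets above but are assumptions about exact naming, not a published schema.

```python
from typing import TypedDict

class RegistryEntity(TypedDict):
    entity_id: str                       # asset_id or agent_id
    owner_team: str                      # owner/team metadata
    permissions: list[str]               # integration permissions
    data_access_scope: list[str]         # data access scope
    attestations: list[str]              # available attestations/control points
    evidence_quality: dict[str, float]   # inputs to confidence scoring

record: RegistryEntity = {
    "entity_id": "agent-7",
    "owner_team": "payments-platform",
    "permissions": ["email:send"],
    "data_access_scope": ["customer_comms"],
    "attestations": ["sbom", "owner_signoff"],
    "evidence_quality": {"completeness": 0.85},
}
```

Entities missing any of these fields would be the natural candidates for the evidence enrichment tasks described in use case 2.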

KPI Examples

  • Scoring coverage of discovered assets.
  • Low-confidence inventory ratio.
  • Mean time to risk classification.
  • Approval cycle time for production onboarding.
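The first two KPIs are simple ratios; the definitions below are one plausible way to measure them, stated here as assumptions.

```python
def scoring_coverage(scored: int, discovered: int) -> float:
    """Share of discovered assets that have been scored."""
    return scored / discovered if discovered else 0.0

def low_confidence_ratio(low_conf: int, total_scored: int) -> float:
    """Share of scored inventory below the confidence threshold."""
    return low_conf / total_scored if total_scored else 0.0

print(scoring_coverage(930, 1200))     # 0.775
print(low_confidence_ratio(186, 930))  # 0.2
```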

Supporting Documentation

Canonical References

  • docs/source-of-truth/partnerships/ETHIRA_INTEGRATION_AND_PRODUCT_STRATEGY.md
  • docs/source-of-truth/partnerships/ETHIRA_2_WEEK_TECHNICAL_DELIVERY_PLAN.md