Provenance
Context-aware identity verification with cryptographic proof, temporal momentum, and risk overlays.
Last updated Mar 4, 2026
Layer: Agent (certification layer)
Scale: 0–100 with Certified (≥70) / Conditional (≥50) / Uncertified (<50)
Production Tier: Transaction-Grade (<10ms Passport validation, <200ms degraded assessment)
Competitive Edge: Context-sensitive scoring + Cryptographic proof (vs Mnemom's binary verification)
Purpose
The enhanced Provenance metric measures the verifiability, transparency, and completeness of an agent's identity, using context-aware adaptive weighting and temporal dynamics. Unlike Mnemom's binary cryptographic verification, it provides nuanced trust scoring that adapts to operational context while preserving cryptographic guarantees.
Mathematical Methodology
Core Formula with Context Adaptation
PROVENANCE(context) = 100 × Σᵢ(ωᵢ(context) × Pᵢ)
Precondition: P_d ≥ 0.20 (minimum deployment verification required)
Context-Sensitive Weight Functions
Financial Context
ω_financial = {
ω₁(P_d): 0.40, // Deployment verification critical
ω₂(P_c): 0.15, // Capability attestation
ω₃(P_v): 0.20, // Version integrity
ω₄(P_b): 0.10, // Behavioral history
ω₅(P_t): 0.15 // Transparency
}
Research Context
ω_research = {
ω₁(P_d): 0.15, // Deployment verification
ω₂(P_c): 0.25, // Capability attestation critical
ω₃(P_v): 0.15, // Version integrity
ω₄(P_b): 0.10, // Behavioral history
ω₅(P_t): 0.35 // Transparency critical
}
Default Context
ω_default = {
ω₁(P_d): 0.25, // Balanced weights
ω₂(P_c): 0.20,
ω₃(P_v): 0.20,
ω₄(P_b): 0.15,
ω₅(P_t): 0.20
}
Context Detection Algorithm
def detect_context(agent_metadata, transaction_type, environment):
    signals = {
        'financial': [
            'payment' in transaction_type,
            'trading' in agent_metadata.capabilities,
            environment.compliance_level == 'SOC2',
            agent_metadata.value_at_risk > 10000
        ],
        'research': [
            'analysis' in transaction_type,
            'reasoning' in agent_metadata.capabilities,
            environment.domain in ['academic', 'scientific'],
            agent_metadata.output_type == 'report'
        ]
    }
    # Fraction of signals that fire for each candidate context
    financial_score = sum(signals['financial']) / len(signals['financial'])
    research_score = sum(signals['research']) / len(signals['research'])
    if financial_score > 0.6:
        return 'financial'
    elif research_score > 0.6:
        return 'research'
    else:
        return 'default'
Identity Momentum Calculation
Momentum Formula
P_momentum(t) = [P(t) - P(t-Δt)] / Δt ≈ dP/dt
Where:
- Δt = measurement interval (default: 24 hours)
- P(t) = current Provenance score
- P(t-Δt) = previous Provenance score
Momentum Interpretation
def interpret_momentum(P_momentum):
    if P_momentum > 5:
        return "RAPIDLY_IMPROVING"
    elif P_momentum > 1:
        return "IMPROVING"
    elif P_momentum > -1:
        return "STABLE"
    elif P_momentum > -5:
        return "DEGRADING"
    else:
        return "RAPIDLY_DEGRADING"
Weighted Moving Average
P_momentum_smooth(t) = α × P_momentum(t) + (1-α) × P_momentum_smooth(t-1)
Where α = 0.3 (smoothing factor)
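A minimal sketch of the momentum and smoothing formulas above, using the 24-hour default interval and α = 0.3 from the spec; the function names and per-day normalization are assumptions.

```python
def raw_momentum(p_now, p_prev, dt_hours=24):
    """Finite-difference momentum in score points per day."""
    return (p_now - p_prev) / (dt_hours / 24)

def smooth_momentum(momenta, alpha=0.3):
    """Exponentially weighted moving average over a series of raw
    momentum readings, seeded with the first reading."""
    smoothed = momenta[0]
    for m in momenta[1:]:
        smoothed = alpha * m + (1 - alpha) * smoothed
    return smoothed
```

With α = 0.3, a sudden momentum spike contributes only 30% of its magnitude immediately, so a single noisy reading cannot flip the interpretation band on its own.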
Composite Identity Risk Metric
Risk Formula
Identity_Risk = (1 - P/100) × Impact_Score × Context_Multiplier
Where:
- P = Provenance score (0-100)
- Impact_Score = potential damage if identity compromised (0-1000)
- Context_Multiplier = risk amplification by context
Impact Score Calculation
import math

def calculate_impact_score(agent):
    # Boolean/ordinal attributes scaled by the damage each access type enables
    base_impact = {
        'financial_access': agent.can_transfer_funds * 500,
        'data_access': agent.data_sensitivity_level * 200,
        'system_control': agent.admin_privileges * 300,
        'user_interaction': agent.user_facing * 100
    }
    amplifiers = {
        'production': 2.0 if agent.environment == 'production' else 1.0,
        'scale': min(agent.transaction_volume / 1000, 3.0),
        'criticality': agent.business_criticality_score
    }
    raw_impact = sum(base_impact.values())
    amplified = raw_impact * math.prod(amplifiers.values())
    return min(amplified, 1000)  # Cap at 1000
Context Risk Multipliers
context_multipliers = {
'financial': 2.5, # High risk context
'healthcare': 2.0, # Regulated context
'research': 1.0, # Standard risk
'development': 0.5 # Low risk context
}
Risk Categories
Risk_Category = {
Risk < 50: "LOW",
50 <= Risk < 150: "MEDIUM",
150 <= Risk < 300: "HIGH",
Risk >= 300: "CRITICAL"
}
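The risk formula, context multipliers, and category thresholds above compose as follows; the multiplier table and bands are taken from this section, while the function names are assumptions.

```python
# Context risk multipliers from the spec
CONTEXT_MULTIPLIERS = {'financial': 2.5, 'healthcare': 2.0,
                       'research': 1.0, 'development': 0.5}

def identity_risk(provenance, impact, context='research'):
    """Identity_Risk = (1 - P/100) × Impact_Score × Context_Multiplier."""
    return (1 - provenance / 100) * impact * CONTEXT_MULTIPLIERS[context]

def risk_category(risk):
    if risk < 50:
        return "LOW"
    elif risk < 150:
        return "MEDIUM"
    elif risk < 300:
        return "HIGH"
    return "CRITICAL"
```

Note how context dominates: the same agent (P = 60, impact = 400) lands in CRITICAL in a financial context but only MEDIUM in a research context.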
Cryptographic Proof Layer
Dual-Layer Architecture
Unlike Mnemom's binary verification, we implement a two-tier system:
Layer 1: Intelligent Scoring (Primary)
- Context-aware weighted scoring
- Temporal dynamics (momentum)
- Risk-adjusted metrics
- This provides what binary verification cannot: nuanced trust assessment
Layer 2: Cryptographic Attestation (Secondary)
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

class CryptographicAttestation:
    def __init__(self):
        self.signing_key = Ed25519PrivateKey.generate()
        self.verification_key = self.signing_key.public_key()

    def create_attestation(self, provenance_data):
        attestation = {
            'agent_id': provenance_data.agent_id,
            'provenance_score': provenance_data.score,
            'context': provenance_data.context,
            'momentum': provenance_data.momentum,
            'risk_score': provenance_data.risk,
            'timestamp': time.time(),
            'scoring_version': 'v2.0-enhanced',
            'weights_used': provenance_data.weights
        }
        # Deterministic JSON serialization so signatures are reproducible
        canonical = json.dumps(attestation, sort_keys=True, separators=(',', ':'))
        # Ed25519 signature over the canonical bytes
        signature = self.signing_key.sign(canonical.encode())
        # Optional: STARK proof for computation verification (defined elsewhere)
        stark_proof = self.generate_stark_proof(provenance_data)
        return {
            'attestation': attestation,
            'signature': signature.hex(),
            'stark_proof': stark_proof,
            'public_key': self.verification_key.public_bytes(
                Encoding.Raw, PublicFormat.Raw
            ).hex()
        }
Merkle Tree for Historical Proofs
import hashlib

class ProvenanceMerkleTree:
    def __init__(self):
        self.tree = MerkleTree()

    def add_assessment(self, assessment):
        leaf = hashlib.sha256(
            f"{assessment.agent_id}:{assessment.score}:{assessment.timestamp}".encode()
        ).hexdigest()
        self.tree.add_leaf(leaf)

    def generate_proof(self, assessment_index):
        return self.tree.get_proof(assessment_index)

    def get_root(self):
        return self.tree.get_root()
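The `MerkleTree` helper used by `ProvenanceMerkleTree` is not specified in this section. A minimal root-computation sketch, assuming hex-digest leaves and promotion of unpaired nodes (a real helper would also track sibling paths for `get_proof`):

```python
import hashlib

class MerkleTree:
    """Minimal append-only Merkle tree sketch, not the production helper:
    pairs adjacent nodes, hashes their concatenation, promotes odd nodes."""
    def __init__(self):
        self.leaves = []

    def add_leaf(self, leaf_hex):
        self.leaves.append(leaf_hex)

    def get_root(self):
        level = list(self.leaves)
        while len(level) > 1:
            pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
            level = [hashlib.sha256(''.join(p).encode()).hexdigest()
                     if len(p) == 2 else p[0] for p in pairs]
        return level[0] if level else None
```

Anchoring each assessment as a leaf means any later tampering with a historical score changes the published root, which is what makes the historical proofs verifiable.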
Enhanced Scoring Dimensions
1. Deployment Verification (P_d)
Enhanced with cryptographic options:
P_d = base_score × crypto_multiplier
base_scores = {
'self_assertion': 0.20,
'domain_validation': 0.40,
'extended_validation': 0.70,
'cryptographic_pki': 1.00
}
crypto_multiplier = {
'none': 1.0,
'ed25519_signed': 1.1,
'zk_proof': 1.2,
'multi_sig': 1.3
}
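The two tables above combine multiplicatively. A short sketch, with the cap at 1.0 added as an assumption (the raw product can exceed 1.0, e.g. PKI with multi-sig gives 1.3):

```python
# Tables from the spec
BASE_SCORES = {'self_assertion': 0.20, 'domain_validation': 0.40,
               'extended_validation': 0.70, 'cryptographic_pki': 1.00}
CRYPTO_MULTIPLIER = {'none': 1.0, 'ed25519_signed': 1.1,
                     'zk_proof': 1.2, 'multi_sig': 1.3}

def deployment_verification(verification_level, crypto_method='none'):
    """P_d = base_score × crypto_multiplier, capped at 1.0 (assumed cap)."""
    return min(BASE_SCORES[verification_level] * CRYPTO_MULTIPLIER[crypto_method], 1.0)
```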
2. Capability Attestation (P_c)
Enhanced with benchmark verification:
from datetime import datetime

def calculate_capability_score(attestations):
    weights = {
        'self_declared': 0.20,
        'peer_reviewed': 0.50,
        'third_party_tested': 0.80,
        'certified_benchmark': 1.00
    }
    # Temporal decay so stale attestations count for less
    for attestation in attestations:
        age_days = (datetime.now() - attestation.timestamp).days
        attestation.weight *= 0.5 ** (age_days / 365)  # True one-year half-life
    return weighted_average(attestations, weights)
3. Version Integrity (P_v)
Enhanced with continuous monitoring:
import math

def calculate_version_integrity(agent):
    # Base hash verification (1.0 on match, 0.0 on mismatch)
    hash_match = verify_hash(agent.code_hash, agent.registered_hash)
    # Drift detection, clamped so heavy modification cannot go negative
    drift_score = max(1.0 - agent.modifications_count / 100, 0.0)
    # Time-based decay with a 90-day constant
    age_factor = math.exp(-agent.days_since_verification / 90)
    # Continuous monitoring bonus
    monitoring_bonus = 0.1 if agent.continuous_monitoring else 0.0
    return min(hash_match * drift_score * age_factor + monitoring_bonus, 1.0)
Implementation Requirements
Performance Targets
- Context detection: <5ms
- Score calculation: <10ms
- Cryptographic attestation: <50ms
- Full assessment with proof: <200ms
Storage Requirements
per_agent:
current_score: 8 bytes
historical_scores: 365 * 8 bytes # 1 year daily
momentum_data: 30 * 8 bytes # 30 days
attestations: 1KB per attestation
merkle_proofs: 256 bytes per proof
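The fixed-size fields in the table above total just over 3 KB per agent; attestations and proofs dominate at scale. A small sketch of the arithmetic (function name assumed):

```python
def per_agent_storage_bytes(n_attestations, n_proofs):
    """Storage footprint per the table: fixed score history plus
    1 KB per attestation and 256 B per Merkle proof."""
    fixed = 8 + 365 * 8 + 30 * 8  # current score + 1y daily + 30d momentum
    return fixed + n_attestations * 1024 + n_proofs * 256
```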
API Endpoints
GET /provenance/{agent_id}:
response:
score: float
context: string
momentum: float
risk: float
attestation: object
POST /provenance/verify:
request:
agent_id: string
context: string (optional)
response:
full_assessment: object
GET /provenance/history/{agent_id}:
response:
scores: array
momentum_chart: array
risk_timeline: array
Competitive Advantages Over Mnemom
1. Context Intelligence
- VaryOn: Adaptive weights based on operational context
- Mnemom: Fixed binary verification regardless of use case
2. Temporal Dynamics
- VaryOn: Momentum tracking shows improvement/degradation trends
- Mnemom: Static point-in-time verification
3. Risk Integration
- VaryOn: Identity risk scored relative to potential impact
- Mnemom: Pass/fail without risk assessment
4. Granular Trust
- VaryOn: 0-100 scale with meaningful gradations
- Mnemom: Binary trusted/untrusted
5. Dual Assurance
- VaryOn: Intelligent scoring AND cryptographic proof
- Mnemom: Cryptographic proof only
Migration Path
Phase 1: Core Enhancement (Weeks 1-2)
- Implement context detection
- Add momentum calculation
- Deploy risk metrics
Phase 2: Cryptographic Layer (Weeks 3-4)
- Ed25519 attestations
- Merkle tree implementation
- Public key infrastructure
Phase 3: Advanced Features (Weeks 5-6)
- STARK proof integration
- Multi-signature support
- Cross-agent correlation
Conclusion
This enhanced Provenance Framework positions VaryOn as the superior choice for organizations requiring both:
- Intelligent, context-aware trust assessment that adapts to their specific needs
- Cryptographic guarantees that meet compliance and security requirements
While Mnemom offers "trust through cryptography," VaryOn offers "intelligent trust with cryptographic proof"—a fundamentally more valuable proposition for the emerging agent economy.