
foundation.protocols.ai.*

Vendor-neutral AI governance for any agent — Claude, Mistral, Qwen, or future systems. Audit trail, policy gates, kill switch, cost controls, model registry, and EU AI Act compliance.

17 Events · 10 Timeline · 7 State


ESAA Audit Trail Pattern

The core governance model: separate what an agent proposes, the decision to proceed, and what actually happened.

1. An AI agent proposes an action: ai.intention (risk_level, requires_human_decision)
2. A human or policy engine decides: ai.decision (approved / denied / deferred / auto_approved)
3. The agent executes (or aborts): ai.effect (success / failure / partial / aborted)
4. Optional post-hoc correction: ai.correction (factual_error / hallucination / ...)
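
The three-phase contract can be sketched as a small ordering check. This is an illustrative sketch, not part of the protocol: the `esaa_ok` helper is hypothetical, and the event shapes are modelled on the JSON examples in this section.

```python
def esaa_ok(events):
    """Check that an event sequence respects the ESAA contract: an
    ai.effect may only follow an approved (or auto-approved) ai.decision
    for the same intention_id. Event dicts mirror the examples below."""
    decided = {}  # intention_id -> decision string
    for ev in events:
        etype = ev["type"]
        iid = ev["content"]["intention_id"]
        if etype.endswith(".decision"):
            decided[iid] = ev["content"]["decision"]
        elif etype.endswith(".effect"):
            # An effect without a prior approval breaks the audit trail.
            if decided.get(iid) not in ("approved", "auto_approved"):
                return False
    return True
```

A consumer replaying a room timeline could run a check like this to flag agents that executed side effects without a recorded decision.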

A1. Intention / Decision / Effect (ESAA Pattern)

Three-phase audit trail for high-risk AI operations.

Timeline Event

foundation.protocols.ai.intention

An AI agent proposes an action before execution. Emitted before any side-effects occur.

intention_id (string): Unique intention identifier
session_id (string): Session this belongs to
agent_type (string): Agent type (e.g., claude-code)
agent_model (string): Model identifier
proposed_action (string): What the agent wants to do
description (string): Human-readable description
parameters (object): Action parameters
risk_level (string): Risk classification
  Values: low, medium, high, critical
requires_human_decision (boolean): Whether human approval is needed
policy_ref (string): Event ID of the governing ai.policy
{
  "type": "foundation.protocols.ai.intention",
  "content": {
    "intention_id": "int_01ABC...",
    "session_id": "sess_01ABC...",
    "agent_type": "claude-code",
    "agent_model": "claude-sonnet-4-20250514",
    "proposed_action": "execute_shell_command",
    "description": "Delete the dist/ directory to perform a clean build",
    "parameters": { "command": "rm -rf dist/" },
    "risk_level": "high",
    "requires_human_decision": true,
    "policy_ref": "$ai_policy_state_event_id",
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$session_start_event_id"
    }
  }
}
Timeline Event

foundation.protocols.ai.decision

A human or automated system decides whether the proposed action may proceed.

intention_id (string): Links to the intention
session_id (string): Session identifier
decision (string): The decision made
  Values: approved, denied, deferred, auto_approved, timed_out
decided_by (string): Matrix user ID of the decider
decision_method (string): How the decision was made
  Values: human, policy_engine, power_level_check, auto
rationale (string): Reason for the decision
conditions (string[]): Conditions attached to approval (optional)
{
  "type": "foundation.protocols.ai.decision",
  "content": {
    "intention_id": "int_01ABC...",
    "session_id": "sess_01ABC...",
    "decision": "approved",
    "decided_by": "@senior_engineer:matrix.openearth.network",
    "decision_method": "human",
    "rationale": "Clean build is expected at this stage of the pipeline",
    "conditions": [],
    "timestamp": 1711800030000,
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$session_start_event_id",
      "m.in_reply_to": { "event_id": "$intention_event_id" }
    }
  }
}
Timeline Event

foundation.protocols.ai.effect

Records what actually happened after the decision. May differ from the intention if execution failed.

intention_id (string): Links to the original intention
session_id (string): Session identifier
outcome (string): What happened
  Values: success, failure, partial, aborted
description (string): Human-readable outcome description
side_effects (object[]): Observed side effects
duration_ms (integer): Execution duration in milliseconds
reversible (boolean): Whether the action can be undone
{
  "type": "foundation.protocols.ai.effect",
  "content": {
    "intention_id": "int_01ABC...",
    "session_id": "sess_01ABC...",
    "outcome": "success",
    "description": "dist/ directory deleted (42 files removed)",
    "side_effects": [
      { "type": "files_deleted", "count": 42, "path": "dist/" }
    ],
    "duration_ms": 120,
    "reversible": false,
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$session_start_event_id",
      "m.in_reply_to": { "event_id": "$decision_event_id" }
    }
  }
}
Timeline Event

foundation.protocols.ai.correction

Post-hoc correction without destroying the original audit trail. Corrections are new events, not mutations.

corrects_event_id (string): Event ID being corrected
session_id (string): Session identifier
corrected_by (string): Matrix user ID of the corrector
correction_type (string): Type of correction
  Values: factual_error, hallucination, outdated_information, incomplete, safety_concern, policy_violation
description (string): What was wrong and what the correction is
severity (string): Severity of the corrected error
  Values: low, medium, high, critical
{
  "type": "foundation.protocols.ai.correction",
  "content": {
    "corrects_event_id": "$original_effect_or_response_event_id",
    "session_id": "sess_01ABC...",
    "corrected_by": "@senior_engineer:matrix.openearth.network",
    "correction_type": "factual_error",
    "description": "The AI cited Article 6(1)(f) but the correct legal basis is Article 6(1)(a)",
    "severity": "high",
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$session_start_event_id"
    }
  }
}

A2. Hierarchical Grouping (LangFuse Pattern)

Group related operations into named phases for hierarchical observability.

Timeline Event

foundation.protocols.ai.span

Groups related operations into a named phase. Enables hierarchical views (e.g., “research phase” containing 5 MCP queries). Inspired by LangFuse’s trace/span model.

span_id (string): Unique span identifier
session_id (string): Session identifier
name (string): Human-readable phase name
parent_span_id (string|null): Parent span for nesting
status (string): Span status
  Values: started, completed, failed, cancelled
start_timestamp (integer): Unix ms start time
end_timestamp (integer): Unix ms end time
metadata (object): Arbitrary metadata about the phase
{
  "type": "foundation.protocols.ai.span",
  "content": {
    "span_id": "span_01ABC...",
    "session_id": "sess_01ABC...",
    "name": "regulatory_research",
    "parent_span_id": null,
    "status": "completed",
    "start_timestamp": 1711800000000,
    "end_timestamp": 1711800120000,
    "metadata": {
      "mcp_servers_queried": ["eu-regulations", "security-controls"],
      "articles_retrieved": 12
    },
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$session_start_event_id"
    }
  }
}
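
From a stream of ai.span contents, a consumer can derive per-phase durations and nesting depth. The `span_durations` helper below is a hypothetical sketch, assuming well-formed spans whose `parent_span_id` values all resolve within the same session.

```python
def span_durations(spans):
    """Derive duration and nesting depth for each span in a list of
    ai.span event contents (fields as documented above)."""
    by_id = {s["span_id"]: s for s in spans}
    out = {}
    for s in spans:
        # Walk the parent chain to compute nesting depth.
        depth, parent = 0, s["parent_span_id"]
        while parent is not None:
            depth += 1
            parent = by_id[parent]["parent_span_id"]
        out[s["span_id"]] = {
            "duration_ms": s["end_timestamp"] - s["start_timestamp"],
            "depth": depth,
        }
    return out
```

This is enough to render the hierarchical view LangFuse popularised: sort by depth and start time, indent by depth.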

A3. Evaluation and Quality (LangFuse Pattern)

Multi-dimensional scoring and dataset curation from real interactions.

Timeline Event

foundation.protocols.ai.evaluation

Human or automated evaluation of an AI output. Supports multi-dimensional scoring, enabling quality tracking across the commons.

target_event_id (string): Event being evaluated
session_id (string): Session identifier
evaluator (string): Matrix user ID of the evaluator
evaluation_type (string): Type of evaluation
  Values: human, automated, peer_review, audit
scores (object): Dimension → score (0.0–1.0)
comment (string): Evaluator notes
{
  "type": "foundation.protocols.ai.evaluation",
  "content": {
    "target_event_id": "$response_or_insight_event_id",
    "session_id": "sess_01ABC...",
    "evaluator": "@senior_engineer:matrix.openearth.network",
    "evaluation_type": "human",
    "scores": {
      "accuracy": 0.9,
      "completeness": 0.7,
      "safety": 1.0,
      "citation_quality": 0.85
    },
    "comment": "Correct DORA analysis but missed NIS2 cross-reference",
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$session_start_event_id",
      "m.in_reply_to": { "event_id": "$response_or_insight_event_id" }
    }
  }
}
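
Multi-dimensional scores make it easy to flag weak outputs automatically. The sketch below is an assumption, not part of the spec: `needs_review` and the 0.8 threshold are illustrative choices for a consumer that triages evaluations.

```python
def needs_review(evaluation, threshold=0.8):
    """Return the weakest (dimension, score) pair if any dimension of an
    ai.evaluation event falls below the threshold, else None.
    The threshold value is an assumption of this sketch."""
    scores = evaluation["content"]["scores"]
    worst = min(scores, key=scores.get)
    if scores[worst] < threshold:
        return (worst, scores[worst])
    return None
```

Run against the example event above, this would surface `completeness` (0.7) as the dimension needing follow-up.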
Timeline Event

foundation.protocols.ai.dataset.entry

Marks a prompt-response pair as benchmark or training data. Enables commons-wide evaluation datasets curated from real interactions.

source_prompt_event_id (string): Event ID of the prompt
source_response_event_id (string): Event ID of the response
dataset_name (string): Target dataset name
curated_by (string): Matrix user ID of the curator
tags (string[]): Topic/domain tags
expected_output_summary (string): What the ideal response looks like
quality_score (number): Quality score, 0.0–1.0
{
  "type": "foundation.protocols.ai.dataset.entry",
  "content": {
    "source_prompt_event_id": "$prompt_event_id",
    "source_response_event_id": "$response_event_id",
    "dataset_name": "compliance_qa_v1",
    "curated_by": "@data_scientist:matrix.openearth.network",
    "tags": ["dora", "gap_analysis", "iso27001"],
    "expected_output_summary": "Correct identification of 3 DORA gaps with ISO 27001 control mappings",
    "quality_score": 0.92,
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$session_start_event_id"
    }
  }
}

A4. Governance State Events

Persistent room state that configures and enforces AI governance. Read by the Compliance-Proxy.

State Event

foundation.protocols.ai.policy

Machine-readable AI usage policy. The Compliance-Proxy reads this state event and enforces its rules before allowing AI operations.

schema_version (string): Policy schema version
max_cost_per_session (string): Maximum cost per session (string, 6 decimal places)
max_cost_per_day (string): Maximum daily cost
requires_human_decision_above (string): Cost threshold above which human approval is required
currency (string): ISO 4217 currency code
allowed_models (string[]): Approved model identifiers
blocked_tools (string[]): Tools that may not be used
requires_intention_decision_for (string[]): Tools requiring the ESAA flow
max_concurrent_sessions (integer): Maximum simultaneous AI sessions
data_residency (string): Required data residency region
eu_ai_act_risk_level (string): Room-level EU AI Act risk classification
allowed_mcp_servers (string[]): Permitted MCP servers (glob patterns)
auto_approve_risk_levels (string[]): Risk levels auto-approved without a human
evaluation_required_for_risk_levels (string[]): Risk levels requiring post-hoc evaluation
{
  "type": "foundation.protocols.ai.policy",
  "state_key": "",
  "content": {
    "schema_version": "1.0",
    "max_cost_per_session": "5.000000",
    "max_cost_per_day": "50.000000",
    "requires_human_decision_above": "1.000000",
    "currency": "USD",
    "allowed_models": ["claude-sonnet-4-20250514", "claude-opus-4-20250514"],
    "blocked_tools": [],
    "requires_intention_decision_for": ["bash", "write_file", "git_push"],
    "max_concurrent_sessions": 3,
    "data_residency": "eu",
    "eu_ai_act_risk_level": "limited",
    "allowed_mcp_servers": ["eu-regulations", "security-controls", "law-*"],
    "auto_approve_risk_levels": ["low"],
    "evaluation_required_for_risk_levels": ["high", "critical"]
  }
}
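
Two of the proxy's checks can be sketched as follows. The helper names (`decision_required`, `model_allowed`) are hypothetical; the sketch does illustrate why cost fields are strings with six decimal places: they must be compared as exact decimals, never as floats.

```python
from decimal import Decimal

def decision_required(policy, estimated_cost):
    """Does the estimated cost cross the human-approval threshold?
    Costs are 6-decimal strings, so compare with Decimal to avoid
    binary floating-point rounding."""
    threshold = Decimal(policy["requires_human_decision_above"])
    return Decimal(estimated_cost) > threshold

def model_allowed(policy, model_id):
    """Is the model on the policy's approved list?"""
    return model_id in policy["allowed_models"]
```

A fuller enforcement pass would also check blocked_tools, session concurrency, and the daily cost accumulator before letting an operation proceed.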
State Event

foundation.protocols.ai.kill_switch

Emergency halt of all AI operations. Takes precedence over all other policies.

status (string): Kill switch status
  Values: active, inactive
activated_by (string): Matrix user ID of the activator
reason (string): Reason for activation
activated_at (integer): Unix timestamp in milliseconds
scope (string): Scope of the halt
  Values: all_ai_operations, new_sessions_only, specific_models
exceptions (string[]): Model or agent IDs exempted (optional)
{
  "type": "foundation.protocols.ai.kill_switch",
  "state_key": "",
  "content": {
    "status": "active",
    "activated_by": "@ciso:matrix.openearth.network",
    "reason": "Suspected data exfiltration via AI prompts — investigation in progress",
    "activated_at": 1711900000000,
    "scope": "all_ai_operations",
    "exceptions": []
  }
}
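
The precedence rule ("takes precedence over all other policies") can be sketched as a gate evaluated before any policy check. This is an assumed interpretation of the scope values; the `specific_models` scope is left out of the sketch because the spec does not define its target list here.

```python
def operations_halted(kill_switch, agent_id, starting_new_session):
    """Evaluate the kill switch before any other policy. Returns True
    if the operation must halt. Scope semantics are assumptions drawn
    from the field descriptions above."""
    if kill_switch is None or kill_switch["status"] != "active":
        return False
    if agent_id in kill_switch.get("exceptions", []):
        return False  # explicitly exempted model/agent
    scope = kill_switch["scope"]
    if scope == "all_ai_operations":
        return True
    if scope == "new_sessions_only":
        return starting_new_session
    # "specific_models": targeting not modelled in this sketch.
    return False
```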
State Event

foundation.protocols.ai.model_registry

Registry of approved AI models and their risk classifications. Maps EU AI Act risk levels to specific models.

{
  "type": "foundation.protocols.ai.model_registry",
  "state_key": "",
  "content": {
    "models": [
      {
        "model_id": "claude-sonnet-4-20250514",
        "provider": "anthropic",
        "risk_level": "limited",
        "approved_by": "@cto:matrix.openearth.network",
        "approved_at": 1711800000000,
        "dpa_reference": "Anthropic DPA v2025-03",
        "permitted_data_types": ["public", "internal", "confidential"],
        "notes": "Approved for all compliance work including regulated data"
      },
      {
        "model_id": "mistral-7b-instruct",
        "provider": "self-hosted",
        "risk_level": "minimal",
        "approved_by": "@cto:matrix.openearth.network",
        "approved_at": 1711700000000,
        "permitted_data_types": ["public", "internal", "confidential", "restricted"],
        "notes": "Self-hosted on org infrastructure; no data leaves premises"
      }
    ]
  }
}
State Event

foundation.protocols.ai.data_boundary

Defines what data may be sent to AI systems. Enforces GDPR data minimisation at the protocol level.

allowed_data_types (string[]): Data types permitted for AI consumption
blocked_data_types (string[]): Data types that must never reach AI
requires_consent_event (boolean): Require data.consent before extraction
consent_event_type (string): Event type used for consent verification
max_context_tokens_per_query (integer): Token limit per AI query
strip_pii_before_send (boolean): Automatically strip PII from outbound data
audit_all_outbound (boolean): Log all data sent to AI
{
  "type": "foundation.protocols.ai.data_boundary",
  "state_key": "",
  "content": {
    "allowed_data_types": ["aggregated_statistics", "anonymised_records", "public_legislation"],
    "blocked_data_types": ["personal_data", "health_records", "financial_pii"],
    "requires_consent_event": true,
    "consent_event_type": "foundation.protocols.data.consent",
    "max_context_tokens_per_query": 50000,
    "strip_pii_before_send": true,
    "audit_all_outbound": true
  }
}
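
The boundary check itself is a two-list lookup. The sketch below assumes deny-by-default (a type absent from both lists is rejected), which matches the data-minimisation intent but is an assumption, not stated by the spec.

```python
def outbound_allowed(boundary, data_type):
    """May this data type be sent to an AI system under the given
    ai.data_boundary content? Blocked types always lose; anything not
    explicitly allowed is rejected (deny-by-default assumption)."""
    if data_type in boundary["blocked_data_types"]:
        return False
    return data_type in boundary["allowed_data_types"]
```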
State Event

foundation.protocols.ai.vendor_approval

DPA acceptance for a specific AI vendor. State key is the vendor identifier. Required by the DCPAI pathway before any data is sent to a proprietary AI provider.

{
  "type": "foundation.protocols.ai.vendor_approval",
  "state_key": "anthropic",
  "content": {
    "vendor": "Anthropic",
    "service": "Claude API",
    "dpa_signed": true,
    "dpa_reference": "Anthropic DPA v2025-03",
    "dpa_signed_by": "@dpo:matrix.openearth.network",
    "dpa_signed_at": 1711700000000,
    "no_training_guarantee": true,
    "data_retention_days": 0,
    "subprocessors_reviewed": true,
    "approved_for_data_types": ["public", "internal", "confidential"],
    "renewal_date": "2026-03-01"
  }
}
State Event

foundation.protocols.ai.risk_assessment

AI system risk classification per EU AI Act Article 9. Documents the risk management assessment for AI usage in this room or space.

{
  "type": "foundation.protocols.ai.risk_assessment",
  "state_key": "",
  "content": {
    "risk_level": "limited",
    "assessed_by": "@compliance_officer:matrix.openearth.network",
    "assessed_at": 1711800000000,
    "ai_system_description": "Claude Code for compliance gap analysis with MCP-sourced regulatory citations",
    "intended_purpose": "Assist compliance officers with regulatory research and control mapping",
    "eu_ai_act_category": "AI system with transparency obligations (Art. 50)",
    "risks_identified": [
      "Hallucinated legal citations",
      "Incomplete cross-regulation analysis",
      "Over-reliance on AI conclusions without expert review"
    ],
    "mitigations": [
      "All citations verified via MCP servers returning verbatim authoritative text",
      "Mandatory human approval for all outputs above 'low' risk level",
      "Evaluation scoring required for all high-risk outputs"
    ],
    "review_interval_days": 90,
    "next_review_date": "2026-06-30"
  }
}
State Event

foundation.protocols.ai.transparency_card

Model card — capabilities, limitations, and training details. Satisfies EU AI Act Article 13 transparency requirements. State key is the model identifier.

{
  "type": "foundation.protocols.ai.transparency_card",
  "state_key": "claude-sonnet-4-20250514",
  "content": {
    "model_id": "claude-sonnet-4-20250514",
    "provider": "Anthropic",
    "description": "General-purpose LLM with tool use and extended thinking capabilities",
    "capabilities": [
      "text_generation", "code_generation", "tool_use",
      "document_analysis", "mcp_integration"
    ],
    "known_limitations": [
      "May hallucinate citations if MCP server is unavailable",
      "Knowledge cutoff: training data up to early 2025",
      "Cannot verify factual claims against live sources without tools"
    ],
    "training_data_summary": "See Anthropic model card",
    "evaluation_results": {},
    "last_updated": "2025-05-14"
  }
}

EU AI Act Compliance Mapping

How foundation.protocols.ai.* events satisfy specific EU AI Act requirements.

Art. 9 (Risk management throughout lifecycle): ai.risk_assessment, ai.policy, ai.intention, ai.decision, ai.effect
Art. 12 (Automatic record-keeping): entire event timeline; Matrix immutability plus event signing exceeds the requirement
Art. 13 (Transparency to users): ai.transparency_card, ai.model_registry, claude.session.start
Art. 14 (Meaningful human oversight): ai.intention → ai.decision, ai.kill_switch, claude.approval.*
Art. 15 (Accuracy, robustness, security): E2EE (Megolm), event signing, ai.evaluation, ai.correction
Art. 26 (Deployer duties: monitor, log, report): room timeline as canonical log, ai.evaluation, ai.correction, ai.kill_switch
Art. 50 (Marking AI-generated content): all ai.insight, claude.response, and claude.tool.* events are inherently flagged by event type

Related Event Families

ai.claude.*: Vendor-specific session recording layer. 42 events for Claude Code operations, built on top of the governance framework.
ai.cost: Vendor-agnostic cost accumulator. Enforced against ai.policy limits.
data.*: ai.data_boundary enforces which data types reach AI; data.consent gates extraction.
eudr.*: AI-driven EUDR operations are wrapped in the intention → decision → effect flow.
legal.*: Legal AI workflows use ai.policy for tool restrictions and ai.evaluation for output quality.