Engineers building AI decision systems argue constantly about the same concepts without realizing they are arguing. "Is this a rule or a policy?" "Should this live in memory or state?" "Is that a constraint or a guardrail?" The terminology is borrowed from disparate fields — database theory, network engineering, legal compliance, machine learning — and it has never been unified into a coherent vocabulary for the specific problem of making AI decisions explicit, auditable, and safe to change.
That vocabulary problem is not merely academic. When teams lack shared terms, they build incompatible systems. An "audit log" to one engineer is a raw LLM output log; to another it is a structured decision record. A "rule" to a compliance officer is a named, versioned artifact with an owner; to a developer it is an if/else branch in orchestration code. These definitional gaps produce governance failures — not because teams lack intent, but because they lack a shared language for what they are trying to build.
This article establishes six terms that together form a complete vocabulary for AI decision infrastructure. Each term is defined precisely, contrasted with its informal equivalent, and grounded in a concrete example. A closing section shows how the six terms interlock into a coherent system. At the end, we examine how Memrail's SOMA AMI architecture implements all six as first-class platform primitives.
Why the Field Lacks a Shared Vocabulary
The Stanford HAI 2025 AI Index documented a consistent pattern: organizations deploying production AI systems repeatedly cite "lack of standardized governance terminology" as a barrier to cross-team coordination. The problem is structural. AI system design draws on at least four disciplines simultaneously:
- Database and knowledge representation — facts, types, schemas, immutability
- Network and systems architecture — planes, layers, control separation
- Legal and compliance — audit trails, record retention, explainability
- Software deployment engineering — canary, rollback, versioning, promotion
Each discipline brings its own vocabulary. The result is that teams building "the same thing" name it differently, implement it differently, and cannot evaluate each other's work coherently. The six terms below are an attempt to resolve this by grounding each concept in the specific demands of consequential AI decision-making.
The arXiv survey of agentic AI architectures (2512.09458) notes that mature agent systems require a clear separation between the knowledge layer (what the system knows), the inference layer (what the model produces), and the decision layer (what the system commits to doing). The vocabulary below maps directly onto that distinction.
ATOM: The Minimal Unit of Knowledge
Definition
An ATOM is a typed, named, immutable fact — the minimal unit of knowledge that can be evaluated by a decision rule. An ATOM has exactly three properties: a name (what it is), a type (what kind of value it holds), and a value (the current assertion).
The key word is typed. An ATOM is not a free-form string passed from one component to another. It is a structured assertion whose type constrains what operations can be applied to it and what rules can consume it. account.status: enum("active", "trial", "suspended") is an ATOM. transaction.amount: decimal(USD) is an ATOM. "the user seems to be on a trial" is not an ATOM — it is an unstructured inference with no guaranteed type, no fixed name, and no evaluation semantics.
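To make "typed, named, immutable" concrete, here is a minimal Python sketch of an ATOM as a frozen dataclass. The `Atom` class, the `TYPE_CHECKS` table, and the validation scheme are illustrative assumptions, not a Memrail API; the point is that the declared type constrains what values can be asserted, and immutability is enforced at construction.

```python
from dataclasses import dataclass
from typing import Any, Tuple

# Illustrative type checkers keyed by type name (an assumption, not a spec).
TYPE_CHECKS = {
    "enum":    lambda v, args: v in args,
    "integer": lambda v, args: isinstance(v, int),
    "decimal": lambda v, args: isinstance(v, float),
    "string":  lambda v, args: isinstance(v, str),
}

@dataclass(frozen=True)   # frozen=True: the fact cannot change once asserted
class Atom:
    name: str
    type: str
    value: Any
    type_args: Tuple = ()  # e.g. the allowed values of an enum

    def __post_init__(self):
        # Reject assertions whose value does not satisfy the declared type.
        check = TYPE_CHECKS.get(self.type)
        if check is None or not check(self.value, self.type_args):
            raise TypeError(f"{self.name}: {self.value!r} is not a valid {self.type}")

# The example from the text: account.status as a typed enum fact.
status = Atom("account.status", "enum", "trial", ("active", "trial", "suspended"))
```

Attempting to reassign `status.value` later raises an error rather than silently rewriting what the system knew, which is exactly the property the audit record depends on.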
Why "Immutable"?
Immutability is the property that makes ATOMs safe to use in audit records. When a decision is made using ATOM account.status = "trial" at timestamp T, that fact is fixed. Later updates to account status do not retroactively change the ATOM that was evaluated. This is essential for replay fidelity — the ability to reconstruct exactly what the system knew when a decision was made.
Contrast with the Informal Equivalent
The informal equivalent of an ATOM is a "context variable" or "session state" passed through an agent's working memory. The difference is significant: context variables are untyped, mutable, and typically lost when the conversation or task ends. ATOMs are typed, immutable once asserted, and persist in the decision record.
Examples
| ATOM Name | Type | Example Value | Context |
|---|---|---|---|
| account.plan_tier | enum | "growth" | SaaS feature eligibility |
| transaction.amount_usd | decimal | 4250.00 | Approval threshold check |
| customer.days_since_signup | integer | 12 | Trial conversion trigger |
| user.jurisdiction | enum | "EU" | Compliance rule routing |
| agent.identity | string | "cs-agent-prod-v2" | Authorization scope check |
The NIST AI RMF emphasizes that responsible AI systems must be able to document "the data inputs that informed a decision." ATOMs are the technical implementation of that requirement: a named, typed, recoverable record of what the system knew.
EMU: The Named, Versioned Decision Rule
Definition
An EMU (Evaluable Managed Unit) is a named, versioned decision rule that evaluates one or more ATOMs and produces a deterministic outcome. An EMU has: a name (unique identifier), a version (semver or equivalent), a trigger condition (which ATOMs it evaluates), a logic body (the evaluation expression), an outcome set (what it can return), and an owner (who is accountable for it).
The "versioned" property is as important as the "named" property. An EMU named trial_conversion_trigger at version 2.1.0 is a distinct artifact from the same EMU at version 2.0.0. When a decision trace records which EMU was evaluated, it records both the name and the version — making the decision reproducible even after the rule has been updated.
Contrast with the Informal Equivalent
The informal equivalent of an EMU is an if/else branch in orchestration code, or a condition embedded in a prompt. The critical differences:
- Testability: An EMU can be unit-tested against a known ATOM set. An if/else branch in orchestration code typically cannot be tested in isolation without running the full agent.
- Ownership: An EMU has a declared owner. A prompt condition has no formal owner — it belongs to whoever last edited the prompt.
- Rollback: An EMU can be rolled back to a previous version atomically. Rolling back a condition embedded in a prompt requires re-deploying the entire prompt.
- Discoverability: EMUs can be enumerated and queried. Conditions scattered across orchestration code cannot.
Example EMU
A SaaS trial conversion rule expressed as an EMU:
```
EMU: trial_conversion_trigger
Version: 2.1.0
Owner: growth-team
Evaluates:
  - customer.days_since_signup (integer)
  - account.usage_events_last_7d (integer)
  - account.plan_tier (enum)
Logic:
  IF plan_tier == "trial"
     AND days_since_signup >= 21
     AND usage_events_last_7d >= 5
  THEN outcome: "trigger_conversion_sequence"
  ELSE outcome: "continue_trial"
Outcomes: ["trigger_conversion_sequence", "continue_trial"]
```
This rule is readable, testable, owned, and versioned. It evaluates typed ATOMs. It can be promoted, rolled back, or shadow-evaluated against live traffic without touching orchestration code.
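The testability claim can be demonstrated directly. Below, the trial_conversion_trigger logic is sketched as a pure Python function over typed ATOM values so it can be unit-tested against known inputs without running any agent. The function encoding is an illustrative assumption, not the platform's EMU format.

```python
# trial_conversion_trigger (v2.1.0) modeled as a pure function over ATOM
# values. Pure functions of typed inputs are trivially unit-testable.
def trial_conversion_trigger(plan_tier: str,
                             days_since_signup: int,
                             usage_events_last_7d: int) -> str:
    if (plan_tier == "trial"
            and days_since_signup >= 21
            and usage_events_last_7d >= 5):
        return "trigger_conversion_sequence"
    return "continue_trial"

# Unit tests against known ATOM sets:
assert trial_conversion_trigger("trial", 25, 9) == "trigger_conversion_sequence"
assert trial_conversion_trigger("trial", 12, 9) == "continue_trial"   # too early
assert trial_conversion_trigger("growth", 40, 50) == "continue_trial" # not a trial
```

Contrast this with the same condition embedded in a prompt: there is no equivalent way to assert its behavior for a fixed input.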
Decision Plane: The Architectural Layer Where EMUs Live
Definition
The Decision Plane is the architectural layer that houses EMUs, isolated from model inference and from orchestration code. It is the dedicated subsystem responsible for evaluating ATOMs against EMUs and returning deterministic outcomes.
The term borrows from network architecture, where the separation of data plane, control plane, and management plane is a foundational design principle. In AI system design, the equivalent separation is: model inference (generates candidate outputs), decision plane (evaluates those candidates against explicit rules), and execution layer (takes the permitted actions). The decision plane is the control plane of an AI system.
Why Isolation Matters
When decision logic is placed inside the model layer — embedded in prompts, system instructions, or fine-tuning data — it becomes non-deterministic. The model may or may not apply the rule consistently. When decision logic is placed inside orchestration code, it is testable but entangled with infrastructure concerns, making it hard to modify, review, or audit independently.
The Anthropic "Building Effective Agents" research notes that the most reliable agentic systems are those where the model's role is constrained to generating structured proposals, with a separate layer responsible for evaluating those proposals against explicit criteria. The Decision Plane is that separate layer.
Properties of a Well-Formed Decision Plane
- Single responsibility: evaluates ATOMs against EMUs; does not generate, orchestrate, or execute
- Stateless evaluation: given the same ATOMs and the same EMU version, always produces the same outcome
- Explicit interface: accepts typed ATOM inputs; returns typed outcomes and a trace record
- Independent deployability: can be updated, versioned, and rolled back independently of the model or orchestration layer
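The four properties above can be sketched as a single stateless function. In this sketch, EMUs are plain versioned functions over an ATOM dict, and every evaluation returns both the outcomes and a trace record; all names and shapes are illustrative assumptions, not Memrail's interface.

```python
import time
from typing import Any, Callable, Dict, Tuple

# A minimal stateless Decision Plane: evaluate every registered EMU against
# the supplied ATOMs and return outcomes plus a trace record. It does not
# generate, orchestrate, or execute anything.
def evaluate(atoms: Dict[str, Any],
             emus: Dict[str, Tuple[str, Callable[[Dict[str, Any]], str]]],
             agent_identity: str) -> dict:
    emu_results = {
        name: {"version": version, "outcome": fn(atoms)}
        for name, (version, fn) in emus.items()
    }
    return {
        "atoms": dict(atoms),        # what the system knew
        "emus": emu_results,         # which rules fired, at which versions
        "timestamp": time.time(),
        "agent": agent_identity,
    }

# Hypothetical registry holding one EMU at version 2.1.0.
emus = {
    "trial_conversion_trigger": ("2.1.0", lambda a:
        "trigger_conversion_sequence"
        if (a["account.plan_tier"] == "trial"
            and a["customer.days_since_signup"] >= 21
            and a["account.usage_events_last_7d"] >= 5)
        else "continue_trial"),
}
atoms = {"account.plan_tier": "trial",
         "customer.days_since_signup": 25,
         "account.usage_events_last_7d": 9}
trace = evaluate(atoms, emus, "cs-agent-prod-v2")
```

Because `evaluate` holds no state, the same ATOMs and the same EMU versions always produce the same outcome, which is the property the "stateless evaluation" bullet demands.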
Decision Trace: The Append-Only Record That Makes Decisions Defensible
Definition
A Decision Trace is the append-only, immutable record generated every time the Decision Plane produces an outcome. It captures: the input ATOMs (names, types, values), the EMUs evaluated (names and versions), the outcome of each EMU, the final decision outcome, a timestamp, and the identity of the triggering agent or system.
"Append-only" and "immutable" are architectural requirements, not implementation details. A trace that can be modified after the fact is not a trace — it is a mutable log, which has very different legal and compliance properties.
What a Decision Trace Is Not
A Decision Trace is not an LLM output log, a request/response log, or an application event log. Those are observability artifacts — useful for debugging model behavior and monitoring system health. A Decision Trace is an audit artifact: it records the deterministic evaluation that produced a committed decision. The two serve different purposes and should be stored and governed differently.
The Defensibility Property
A Decision Trace is "defensible" when it satisfies three conditions:
- Immutability: the record cannot be altered after creation
- Completeness: every committed decision has a corresponding trace — there are no "silent" decisions
- Replay fidelity: given the same ATOMs and the same EMU versions recorded in the trace, re-evaluating the Decision Plane produces the same outcome
When these three properties hold, a Decision Trace satisfies the evidentiary standard required by frameworks like the EU AI Act's Article 13 transparency requirements and SEC Rule 17a-4 record-keeping obligations. Read more about implementing this pattern in Decision Traces: The Audit Log Pattern That Makes AI Systems Defensible.
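The replay-fidelity condition can be sketched directly from the definition. Assuming a trace records ATOMs, EMU names and versions, and per-EMU outcomes, and assuming a registry mapping (EMU name, version) pairs to evaluation functions (both shapes are hypothetical), re-evaluation should reproduce every recorded outcome:

```python
# Replay-fidelity check: re-run each recorded EMU version over the recorded
# ATOMs and confirm the outcome matches what the trace claims.
def replays_faithfully(trace: dict, registry: dict) -> bool:
    for emu_name, record in trace["emus"].items():
        fn = registry[(emu_name, record["version"])]
        if fn(trace["atoms"]) != record["outcome"]:
            return False
    return True

# Hypothetical example: a v1.0.0 approval rule and a trace it produced.
registry = {
    ("approval_gate", "1.0.0"):
        lambda a: "approve" if a["transaction.amount_usd"] < 5000 else "escalate",
}
trace = {
    "atoms": {"transaction.amount_usd": 4250.00},
    "emus": {"approval_gate": {"version": "1.0.0", "outcome": "approve"}},
}
```

A trace that fails this check is evidence of tampering, of a lost EMU version, or of hidden non-determinism in the Decision Plane, and any of the three undermines defensibility.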
Safe Rollout: The Promotion Protocol for Rule Changes
Definition
Safe Rollout is the structured promotion protocol that governs how an EMU moves from creation to full production enforcement. It defines four stages: Draft, Shadow, Canary, and Active.
| Stage | What Happens | Affects Live Outcomes? | Rollback Cost |
|---|---|---|---|
| Draft | EMU exists; evaluates nothing | No | Zero — delete the draft |
| Shadow | EMU evaluates against live ATOMs; logs results; does not act | No | Near-zero — disable shadow mode |
| Canary | EMU evaluates and acts on a defined traffic slice (e.g. 5%) | Partial | Low — reduce canary percentage to 0% |
| Active | EMU evaluates and acts on all eligible traffic | Yes | Requires version rollback |
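The "Affects Live Outcomes?" column above reduces to a single routing predicate. The sketch below uses a deterministic hash of a decision identifier to slice canary traffic; the hashing scheme and function names are illustrative assumptions, not a platform API.

```python
import hashlib

# Whether an EMU's outcome is allowed to affect a given live decision,
# per the four Safe Rollout stages.
def outcome_is_live(stage: str, decision_id: str, canary_pct: float = 5.0) -> bool:
    if stage == "draft":
        return False                  # evaluates nothing
    if stage == "shadow":
        return False                  # evaluates and logs, never acts
    if stage == "canary":
        # Deterministic slice: hash the decision id into [0, 100) so the
        # same decision always lands in the same bucket.
        bucket = int(hashlib.sha256(decision_id.encode()).hexdigest(), 16) % 10000
        return bucket / 100.0 < canary_pct
    if stage == "active":
        return True
    raise ValueError(f"unknown stage: {stage}")
```

Note that in the shadow stage the EMU still evaluates against live ATOMs; only the gating of its outcome differs, which is what makes shadow divergence measurement possible.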
Why Rules Need Deployment Protocols
The informal equivalent of Safe Rollout is shipping a rule change directly to production and watching for complaints. This approach has a failure mode that is well understood in software deployment — it is called "big bang deployment" — but it is still the norm for business rule changes in most organizations. The reason is historical: business rules were not previously treated as production artifacts with deployment semantics. In AI decision infrastructure, they must be.
A rule change that affects subscription boundaries, approval thresholds, or eligibility criteria is as consequential as a code change to the same logic. The Shadow stage is particularly valuable: it allows teams to observe exactly which live decisions would have been different under the new rule, without committing to those decisions. This is the rule-layer equivalent of shadow deployment in software engineering: measure divergence, then decide whether to promote.
Reachability: Static Analysis for Dead Rules
Definition
Reachability is the property of an EMU that answers the question: "Can this rule ever fire, given the current ATOM set and system configuration?" An EMU is reachable if there exists a valid combination of ATOM values that would cause it to evaluate to a non-null outcome. An EMU is unreachable if no such combination exists.
Why Dead Rules Are a Compliance Risk
An unreachable EMU is not merely a performance concern — it is a compliance and governance risk. If a compliance team believes an EMU named gdpr_data_deletion_gate is enforcing GDPR deletion requirements, but that EMU is unreachable because the ATOM it depends on (user.deletion_requested) is never asserted by the current system, the governance control does not actually exist. The organization believes it has a control; the system does not enforce it.
Reachability analysis is static analysis applied to the Decision Plane: examining the declared EMUs and the declared ATOM sources to identify rules that can never fire. This is analogous to dead code detection in software engineering, but the stakes are higher — a dead security rule is a silent compliance gap, not merely a maintenance burden.
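A first-pass version of this analysis is easy to sketch: an EMU whose required ATOMs are not produced by any declared source can never fire. The dependency encoding below is an illustrative assumption; a production analyzer would also reason about value constraints (e.g. an enum whose triggering value is never asserted).

```python
# Flag EMUs that depend on ATOMs no declared source ever asserts.
def unreachable_emus(emu_deps: dict, asserted_atoms: set) -> dict:
    """emu_deps: EMU name -> set of ATOM names it evaluates.
    Returns each unreachable EMU with its missing ATOMs."""
    return {
        emu: missing
        for emu, deps in emu_deps.items()
        if (missing := deps - asserted_atoms)
    }

# Hypothetical EMU set illustrating the GDPR example from the text:
deps = {
    "gdpr_data_deletion_gate": {"user.deletion_requested", "user.jurisdiction"},
    "trial_conversion_trigger": {"account.plan_tier", "customer.days_since_signup"},
}
atoms = {"user.jurisdiction", "account.plan_tier", "customer.days_since_signup"}
# gdpr_data_deletion_gate is unreachable: user.deletion_requested is never asserted.
```

Running this continuously against the declared EMU and ATOM sets turns the "silent compliance gap" into an alert the moment a dependency disappears.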
Contrast with the Informal Equivalent
The informal equivalent of Reachability is a manual audit of decision logic — reading through conditions and trying to construct scenarios that would trigger them. This is feasible for a handful of rules and impossible at scale. Automated Reachability analysis makes this a continuous, systematic property of the Decision Plane rather than a periodic manual exercise.
How the Six Terms Form a Complete System
These six terms are not independent definitions. Each one implies the others:
- ATOMs are the inputs that EMUs evaluate. Without typed ATOMs, EMUs cannot be deterministic.
- EMUs are the rules that constitute the Decision Plane. Without named, versioned EMUs, the Decision Plane has no discrete units of governance.
- The Decision Plane generates Decision Traces. Without an isolated Decision Plane, traces are incomplete (decisions happen outside it).
- Decision Traces record EMU versions. Without versioned EMUs, traces cannot satisfy replay fidelity.
- Safe Rollout governs the lifecycle of EMUs. Without Safe Rollout, EMU changes go directly to production — bypassing shadow evaluation and canary testing.
- Reachability validates the EMU set against the ATOM set. Without Reachability, the Decision Plane may contain governance controls that never actually execute.
Together, these six concepts define the minimal vocabulary for a production-grade AI decision infrastructure. A system that implements all six has a complete answer to the question every auditor, regulator, and incident responder will ask: "What did the system know, what rules did it apply, what decision did it make, and can you prove it?"
SOMA AMI: The Production Implementation
Memrail's SOMA Adaptive Memory Intelligence (SOMA AMI) architecture is built around all six of these primitives as first-class platform concepts. Rather than requiring teams to design and implement this vocabulary from scratch, SOMA AMI provides each as a managed infrastructure component:
- ATOMs are the foundational knowledge unit in SOMA AMI — typed, named, immutable facts that the platform enforces at the schema level. Teams define ATOM schemas; the platform validates all assertions against them.
- EMUs are the platform's rule primitive — named, versioned, owner-assigned decision rules that the platform stores, versions, and promotes through lifecycle stages. Non-engineers can author and review EMUs through a no-code interface; engineers can define them programmatically.
- Decision Plane is SOMA AMI's core evaluation engine — stateless, isolated from model inference, with an explicit typed interface. Every decision routes through the Decision Plane; no agent action bypasses it.
- Decision Traces are generated automatically for every Decision Plane evaluation — append-only, immutable, with full ATOM and EMU version capture. They are stored separately from observability logs with audit-grade retention policies.
- Safe Rollout is a first-class workflow in the platform — teams promote EMUs through Draft, Shadow, Canary, and Active stages with configurable promotion criteria and automatic rollback triggers.
- Reachability is a continuous analysis that SOMA AMI runs against the active EMU set and ATOM schema — surfacing unreachable rules as a governance alert before they become a compliance gap.
The practical implication is that a team adopting SOMA AMI does not need to design this architecture from scratch or maintain its components independently. The vocabulary described in this article maps directly to the platform's feature set. Explore the full architecture at memrail.com/platform/architecture/.
Starting Point: Audit Your Current Vocabulary
Before adopting a new architecture, a useful diagnostic is to audit your team's current vocabulary for the concepts this article defines. Ask these questions:
- When we say "rule," do we mean a named, versioned artifact with an owner — or an if/else branch somewhere in code?
- When we say "audit log," do we mean a record that captures what the system knew and which rules it applied — or an event stream of what happened?
- When we deploy a rule change, does it go through a shadow stage before it affects live decisions — or does it go directly to production?
- Can we enumerate every active rule in our decision system and verify that each one is reachable?
- If asked to reproduce a specific decision from 90 days ago, could we reconstruct the exact ATOMs evaluated and EMUs fired?
The gap between your current answers and the definitions in this article is the scope of the architectural work ahead. Teams that have closed this gap — either by building these primitives or by adopting a platform that provides them — are the ones able to answer "why did your AI system do that?" without guesswork.
For further reading on how this vocabulary applies to production system design, see Separating Logic from Models: Why Your AI System Needs a Decision Plane and The Agent Governance Stack: Four Layers Every Enterprise Needs Before Going to Production.
