Thinking about deterministic AI
Deep dives into decision infrastructure, governance patterns, auditability, and building agent systems you can trust in production.
Governed AI in Financial Services: What Banks and Fintechs Must Get Right
Covers the specific governance obligations facing AI systems in financial services: SR 11-7 model risk management, explainability requirements for credit decisions, SEC guidance on algorithmic advice, and EU AI Act classification.
LLM Observability Is Not Enough: The Case for Decision-Level Monitoring
Distinguishes LLM-level observability (token usage, latency, prompt/response logs) from decision-level monitoring (which rules fired, what action was authorized, what was executed). Explains why LLM observability tools alone cannot answer compliance questions.
Building a Rule Change Process for AI Systems: From Ad-Hoc Edits to Governed Rollout
Provides a step-by-step governance model for managing rule changes in AI production systems. Covers four stages: authoring and peer review (draft), validation against live traffic (shadow), limited-scope enforcement (canary), and full promotion (active).
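The four stages named above can be sketched as a small state machine. This is an illustrative sketch, not the article's implementation; the stage names come from the summary, the promotion rule (no skipping stages) is an assumption:

```python
from enum import Enum

class RolloutStage(Enum):
    DRAFT = "draft"    # authored, under peer review
    SHADOW = "shadow"  # evaluated against live traffic, not enforced
    CANARY = "canary"  # enforced for a limited traffic slice
    ACTIVE = "active"  # fully promoted

# Assumed promotion rule: each stage may only advance to the next one.
PROMOTIONS = {
    RolloutStage.DRAFT: RolloutStage.SHADOW,
    RolloutStage.SHADOW: RolloutStage.CANARY,
    RolloutStage.CANARY: RolloutStage.ACTIVE,
}

def promote(stage: RolloutStage) -> RolloutStage:
    """Advance a rule one stage; skipping stages is disallowed."""
    if stage not in PROMOTIONS:
        raise ValueError(f"{stage.value} cannot be promoted further")
    return PROMOTIONS[stage]
```

Encoding the promotion order in data rather than scattered `if` checks makes "can this rule go live?" a single lookup.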
SaaS Billing Decision Failures: The Hidden Cost of Unsequenced Enforcement
Examines how billing automation breaks when enforcement decisions lack sequencing, priority rules, and cooldowns — leading to mass suspensions without warning, payment retry storms, and churn caused by the billing system itself.
Decision Plane vs. Orchestration Layer: Why Your Agent Framework Is Not Your Governance
Draws a precise architectural distinction between the orchestration layer (sequencing, routing, tool calls) and the decision plane (policy evaluation, authority, audit). Shows why conflating them creates brittle governance.
The EU AI Act Compliance Gap: What High-Risk AI Systems Must Log
Breaks down what Article 12 of the EU AI Act actually requires high-risk AI systems to log: automatic recording of events, traceability of inputs/outputs, human oversight indicators, and operational data retention.
Agent Authority Models: Who Decides What Your AI Can Do?
Examines four authority model patterns (flat, hierarchical, delegated, coalition) that determine what AI agents can do and on whose behalf. Covers how authority is asserted, verified, and revoked across multi-agent systems.
The SaaS Lifecycle Decision Stack: How Modern Companies Automate Every Customer Touchpoint
Maps the five lifecycle stages (Acquire & Activate, Engage & Retain, Expand & Monetize, Billing & Revenue, Recover & Win Back) as a decision stack, explaining what decisions must fire at each stage and what happens when they are skipped.
Why AI Agents Fail in Production: The Six Root Causes
Synthesizes six repeating patterns behind real-world AI agent failures — goal misgeneralization, context drift, tool boundary violations, cascading approvals, missing audit trails, and unsafe rule changes — with diagnostic questions for each.
How Financial Services Teams Are Governing AI Decisions in 2026
Financial services demands reproducibility, auditability, explainability, and safe change management. This article surveys how banks, insurers, and fintechs are governing AI decision systems.
Safe Rollout for SaaS Decision Rules: How to Change Live Business Logic Without Breaking Customers
Changing a live business rule carries real production risk. This article introduces safe rollout as a first-class concept for business logic: Draft, Shadow, Canary, Active.
How to Build an AI Agent Authorization Model Without Writing a Policy Engine from Scratch
Authorization determines whether your AI agents are deployable in production. This practical guide covers the three authorization primitives and walks through implementation using Cedar.
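Cedar evaluates policies over a principal, an action, and a resource; the toy check below models that shape in plain Python. It is a hand-rolled default-deny sketch, not the Cedar engine or its SDK, and the agent and resource names are invented for illustration:

```python
# Toy policy table over Cedar's principal / action / resource model.
# "*" in the resource slot matches any resource (illustrative convention).
POLICIES = [
    ("agent:billing-bot", "invoice:read", "*"),
    ("agent:billing-bot", "invoice:retry", "customer:tier-paid"),
]

def is_authorized(principal: str, action: str, resource: str) -> bool:
    """Default-deny: permit only if some policy matches all three fields."""
    for p, a, r in POLICIES:
        if p == principal and a == action and r in ("*", resource):
            return True
    return False
```

A real deployment would hand this matching job to Cedar or OPA; the point of the sketch is the shape of the question being asked.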
The EU AI Act's Article 13 Problem: What 'Transparency' Actually Requires from Your AI System
August 2, 2026 is the compliance deadline for high-risk AI systems under the EU AI Act. Most teams do not know whether their system qualifies or what Article 13 actually demands in practice.
Decision Traces: The Audit Log Pattern That Makes AI Systems Defensible
When an AI system makes a consequential decision, someone will ask "why did it do that?" Teams without decision traces cannot answer. Those with decision traces answer in seconds.
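A decision trace is just a structured record written at decision time. The fields below are one plausible minimal set, not the article's schema; the example values are invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionTrace:
    """One audit-log entry per consequential decision (illustrative fields)."""
    decision_id: str
    inputs: dict            # what the system saw
    rules_fired: list[str]  # which rules matched, in order
    action: str             # what was authorized
    actor: str              # which agent or human acted
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

trace = DecisionTrace(
    decision_id="dec-0001",
    inputs={"account": "acme", "invoice_overdue_days": 14},
    rules_fired=["overdue>7d", "tier=paid"],
    action="send_dunning_email",
    actor="agent:billing-bot",
)
```

Answering "why did it do that?" becomes a lookup on `decision_id` rather than an archaeology project.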
ATOMs, EMUs, and the Decision Plane: A Vocabulary for AI Decision Infrastructure
The field of deterministic AI decision infrastructure lacks a shared vocabulary. This article establishes working definitions for ATOMs, EMUs, Decision Plane, Decision Traces, Safe Rollout, and Reachability.
OPA, Cedar, or Custom? Choosing the Right Policy Engine for Your AI Agents
A clear, opinionated comparison of the three dominant policy engine approaches for AI agent authorization: OPA with Rego, AWS Cedar, and custom rules engines. Includes a decision matrix and the honest 2026 recommendation.
The 5 SaaS Workflows Most Broken by Undocumented Decision Logic
Every SaaS company has decisions that live nowhere: in Slack threads, in institutional memory, in dead PRs. This article maps the five workflows where undocumented logic causes the most damage.
Why AI Agents Fail in Production (And What the Architecture Is Missing)
Only 5% of enterprise AI systems reach production. This article diagnoses the real reasons — not model capability, but five structural architectural gaps that deterministic decision infrastructure can fix.
Infrastructure for Deterministic AI Decisions
A complete guide to building decision infrastructure that makes high-stakes AI decisions explicit, auditable, and safe to change. Covers the propose-then-decide architecture, deterministic decision points, audit-grade logging, and safe rule rollout.
What Is a Decision Protocol? The Concept Every SaaS Team Needs in 2026
SaaS teams encode product intent in code, docs, and people — none designed to be queried, tested, or rolled back. A decision protocol is the fourth container: a named, versioned, explicit record of business logic.
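"Named, versioned, explicit" has a direct data-structure reading. A minimal sketch of what one such record might look like; the field names and the example rule are assumptions, not the article's definition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    """A named, versioned, explicit piece of business logic (illustrative)."""
    name: str
    version: int
    condition: str  # human-readable predicate
    action: str     # what happens when the condition holds
    owner: str      # who is allowed to change it

trial_extension = DecisionRecord(
    name="trial-extension",
    version=3,
    condition="trial expired AND usage in last 7 days > 0",
    action="extend trial by 7 days",
    owner="growth-team",
)
```

Unlike a Slack thread or a dead PR, a record like this can be queried, diffed between versions, and rolled back.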
The Agent Governance Stack: Four Layers Every Enterprise Needs Before Going to Production
Most enterprises treat AI governance as a compliance checklist. It is not — it is an architecture. This article introduces the four-layer governance stack every enterprise needs before deploying AI agents to production.
Separating Logic from Models: Why Your AI System Needs a Decision Plane
When teams build AI systems, they put decision logic inside prompts or model calls — fast to demo, impossible to govern. The decision plane concept offers a better model: all consequential logic in an explicit, testable, versionable layer.