Named Accountability Doctrine
Purpose
To establish that no AI system, workflow, decision path, or materially influential process may operate without a specifically named human accountable for its authorized use, oversight, and consequences.
Doctrine Statement
Every AI system, AI-enabled process, or decision pathway must have a specifically named human accountable for its governed use, boundaries, oversight, and consequences. Accountability cannot be assigned to a team, committee, function, vendor, or system. It must attach to an identifiable person with recognized authority.
Rationale
AI governance begins to fail the moment responsibility becomes diffuse.
When ownership is shared vaguely across business, IT, risk, legal, operations, or vendors, accountability weakens before any technical failure becomes visible.
Most organizations can describe participation. Far fewer can identify true accountability.
Participation answers who is involved.
Named accountability answers who is answerable.
Without a named accountable person:
- oversight becomes fragmented
- intervention becomes slower
- stop authority becomes ambiguous
- documentation becomes harder to trust
- reconstruction becomes political rather than factual
- executive assurance becomes weaker than it appears
This doctrine exists because AI does not eliminate the need for human accountability; it concentrates that need.
Applicability
This doctrine applies to:
- AI systems used in business operations
- AI outputs that influence human decisions
- AI-enabled workflows crossing functional or silo boundaries
- AI used in regulated, customer-facing, financial, operational, or compliance-relevant contexts
- any AI capability with material, scalable, or hard-to-reverse impact
It applies whether the AI is internally developed, externally purchased, embedded in software, or introduced informally through departmental use.
Required Conditions
For this doctrine to be satisfied, the organization must be able to identify and evidence:
1. Named Accountable Person
A specific individual is assigned accountability for the AI process, use case, or decision path.
2. Scope of Accountability
The system, process, workflow, or decision domain under that person’s accountability is clearly defined.
3. Authority Boundaries
The accountable person has recognized authority appropriate to the risk, including the ability to constrain use, escalate concerns, and trigger intervention or stop actions through established mechanisms.
4. Oversight Expectation
The accountable person’s monitoring, review, reporting, and governance responsibilities are explicitly understood.
5. Documentation of Assignment
The assignment of accountability is documented in a form that can be verified by leadership, audit, risk, or regulators where applicable.
6. Alignment with Materiality
The accountability assignment reflects the materiality of the process and is not merely nominal or administrative.
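The six conditions above can be modeled as a single register record with a completeness check. This is an illustrative sketch only: the AccountabilityRecord type, its field names, and the team-name heuristic are assumptions for demonstration, not part of the doctrine.

```python
from dataclasses import dataclass

@dataclass
class AccountabilityRecord:
    """One entry in a hypothetical AI governance register (illustrative only)."""
    accountable_person: str      # 1. a named individual, never a team or function
    scope: str                   # 2. the system, workflow, or decision domain covered
    authority: list[str]         # 3. e.g. ["constrain", "escalate", "stop"]
    oversight_duties: list[str]  # 4. monitoring, review, and reporting expectations
    documented_in: str           # 5. where the assignment is evidenced
    materiality: str             # 6. declared materiality of the process, e.g. "high"

    def satisfies_doctrine(self) -> list[str]:
        """Return the required conditions that are missing; empty means satisfied."""
        gaps = []
        # Crude heuristic: a string containing "team" is not a named person.
        if not self.accountable_person or " team" in self.accountable_person.lower():
            gaps.append("named accountable person")
        if not self.scope:
            gaps.append("scope of accountability")
        if "stop" not in self.authority:
            gaps.append("authority to trigger stop actions")
        if not self.oversight_duties:
            gaps.append("oversight expectation")
        if not self.documented_in:
            gaps.append("documentation of assignment")
        if not self.materiality:
            gaps.append("alignment with materiality")
        return gaps
```

Under this sketch, a record naming "the data science team" as owner fails the first check, mirroring the doctrine's rule that accountability cannot attach to a group.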
What Does Not Satisfy This Doctrine
The doctrine is not satisfied by any of the following on their own:
- “the business owns it”
- “IT manages the platform”
- “compliance is aware”
- “legal reviewed it”
- “the vendor is responsible”
- “the committee oversees it”
- “multiple stakeholders share ownership”
- a role title without an identifiable person
- informal understanding without evidence
These may describe involvement. They do not establish named accountability.
Required Evidence
Evidence supporting this doctrine may include:
- named owner listed in AI inventory or governance register
- documented accountability assignment in governance records
- approval records showing accountable ownership
- RACI or equivalent governance mapping tied to a real person
- escalation path tied to the accountable person
- decision authority records showing scope and boundaries
- board, executive, or risk reporting that identifies accountable ownership by name or by formally assigned role tied to a named person
Failure Conditions
This doctrine is violated when:
- no specific person can be named as accountable
- multiple groups claim partial ownership but no one owns the whole
- technical administration is mistaken for accountability
- a vendor is treated as the accountable party for internal business use
- the accountable person lacks practical authority
- accountability is assigned only after an incident occurs
- leadership cannot quickly determine who owns the process and its consequences
Red Flags
Common signs of failure include statements such as:
- “It sits across several teams.”
- “There isn’t really one owner.”
- “The vendor handles that.”
- “It depends who is using it.”
- “We all share responsibility.”
- “That belongs to the business and IT.”
- “I’m not sure who signed off on that.”
These are signals of accountability diffusion.
Governance Implications
Without named accountability, the rest of the control environment weakens.
If no one is clearly accountable:
- admissibility decisions become unstable
- execution-time authority loses force
- intervention becomes delayed
- stop authority becomes harder to exercise
- documentation becomes less credible
- board oversight becomes less defensible
Named accountability is therefore a governance precondition, not an administrative afterthought.
Board-Level Questions
Boards and executive leaders should ask:
- Who is accountable for this AI process by name?
- What specifically are they accountable for?
- What authority do they have over its use?
- Can they escalate, constrain, or stop it?
- Where is this accountability documented?
- Does the accountability assignment match the materiality of the process?
Operational Test
An organization passes this doctrine only if it can answer, immediately and credibly:
Who is the named human accountable for this AI process, what is the scope of that accountability, and what authority and evidence support it?
If that answer is delayed, vague, or redirected to a function instead of a person, the doctrine is not satisfied.
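The operational test can be sketched as a register lookup that must succeed immediately: either a named person with scope, authority, and evidence is produced, or the test fails with the reason. The function name, register shape, and the non-person word list below are illustrative assumptions, not prescribed by the doctrine.

```python
def operational_test(register: dict, process: str) -> tuple[bool, str]:
    """Pass only if a named human with documented scope, authority,
    and evidence can be produced for the process without delay."""
    entry = register.get(process)
    if entry is None:
        return False, "no accountability record exists for this process"
    person = entry.get("accountable_person", "")
    # Crude heuristic: a function, team, or vendor name is not a named person.
    non_persons = {"team", "committee", "business", "it", "vendor",
                   "compliance", "legal"}
    if not person or set(person.lower().split()) & non_persons:
        return False, "answer redirects to a function, not a named person"
    for key in ("scope", "authority", "evidence"):
        if not entry.get(key):
            return False, f"missing {key} for the named person"
    return True, f"{person} is accountable; scope, authority, and evidence on file"
```

An answer such as "the committee" fails the name check, and a named person without documented scope, authority, or evidence fails the remaining checks, matching the red flags listed earlier.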
Relationship to Other Doctrines
This doctrine supports and enables:
- Admissibility Before Execution Doctrine
- Execution-Time Authority Doctrine
- Intervention Before Escalation Doctrine
- Human Controls Must Remain Human Doctrine
- Decision Accountability Doctrine
- Board-Level AI Oversight Doctrine
- AI Governance Leadership Doctrine
Bottom Line
No named person, no real accountability.
No real accountability, no defensible governance.