The Collision That's Already Happening
Finance teams are adopting AI faster than compliance programs are adapting. Machine learning models that forecast revenue, natural language processing tools that draft MD&A sections, automated variance analysis that flags unusual account movements — these tools are now mainstream. The compliance question is no longer "will AI touch our financial reporting process?" It's "how do we maintain SOX-compliant controls over AI-assisted workflows?"
The answer is more nuanced than "don't use AI" or "AI is just another tool." SOX's control framework was designed around human judgment and manual review. Embedding AI in that framework without deliberate redesign of controls creates attestation risk that most compliance officers haven't fully mapped.
Why AI Complicates Section 302 Attestations
Section 302 requires the CEO and CFO to certify, each quarter, that the financial statements don't contain material misstatements and that they've evaluated the effectiveness of disclosure controls and procedures. The certification is personal — it can't be delegated, and it can't be predicated entirely on someone else's assessment.
When AI tools enter the financial reporting workflow, Section 302 creates two distinct problems:
The explainability problem. If a machine learning model produced an output that influenced a reported figure — say, an accounts receivable reserve calculation driven by an AI-based probability-of-collection model — can the CFO explain how that figure was derived? Can they explain what inputs drove it, what the model's error rate is on historical data, and what would have happened if a key assumption changed? If the answer is no, the CFO is certifying a number they can't explain. That's an attestation risk regardless of whether the model is accurate.
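To make the explainability question concrete, a documented sensitivity check is one artifact a CFO can actually review. Below is a minimal sketch assuming a simple probability-of-collection reserve model; the function names, balances, and the five-point shock are illustrative placeholders, not a real model interface.

```python
# Hypothetical sensitivity check for an AI-driven AR reserve. All names and
# numbers are illustrative assumptions, not a real model API.

def reserve_from_model(balances, collection_probs):
    """Reserve = sum of each balance times its probability of non-collection."""
    return sum(b * (1 - p) for b, p in zip(balances, collection_probs))

def sensitivity_to_assumption(balances, collection_probs, shock=0.05):
    """Recompute the reserve with every collection probability shocked down
    five points, so the certifying officer can see what a key-assumption
    change does to the reported figure."""
    baseline = reserve_from_model(balances, collection_probs)
    shocked_probs = [max(p - shock, 0.0) for p in collection_probs]
    stressed = reserve_from_model(balances, shocked_probs)
    return baseline, stressed, stressed - baseline

balances = [120_000.0, 80_000.0, 45_000.0]
probs = [0.97, 0.88, 0.60]  # model-scored probabilities of collection
base, stressed, delta = sensitivity_to_assumption(balances, probs)
print(f"baseline: {base:,.0f}  stressed: {stressed:,.0f}  delta: {delta:,.0f}")
```

Pairing this kind of output with the model's backtested error rate gives the CFO the three answers the certification implicitly demands: what drove the figure, how reliable the model has been, and what happens when an assumption moves.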
The change-detection problem. SOX requires disclosure of material changes to internal controls. If your finance team upgraded an AI model, retrained it on new data, or switched AI vendors, is that a material change to internal controls? If the AI tool is embedded in a process that affects financial reporting, probably yes. But most companies haven't built change management processes for AI model updates with the same rigor they apply to ERP changes.
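One hedged way to operationalize change detection: treat every model update in a reporting-relevant process as a candidate ICFR change and route it for evaluation when behavior shifts. The criteria, field names, and threshold below are assumptions a compliance team would set, not prescribed rules.

```python
# Illustrative screen for AI model updates that may warrant ICFR change
# evaluation and disclosure review. Fields and threshold are assumed.

from dataclasses import dataclass

@dataclass
class ModelUpdate:
    model_name: str
    affects_reported_figures: bool  # sits in an ICFR-relevant process?
    retrained: bool                 # new training data or weights
    vendor_changed: bool            # switched AI providers
    output_shift_pct: float         # change vs. prior version on a fixed test set

def needs_icfr_evaluation(u: ModelUpdate, shift_threshold: float = 1.0) -> bool:
    """Route into the ICFR change-evaluation process when the update touches
    financial reporting and plausibly alters the control's behavior."""
    if not u.affects_reported_figures:
        return False
    return u.vendor_changed or u.retrained or u.output_shift_pct >= shift_threshold

update = ModelUpdate("ar_reserve_model", affects_reported_figures=True,
                     retrained=True, vendor_changed=False, output_shift_pct=2.3)
print(needs_icfr_evaluation(update))  # True -> document and assess for disclosure
```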
Section 404 Controls for AI-Assisted Processes
Section 404 requires management's assessment of internal control over financial reporting (ICFR) and, for accelerated filers, the auditor's attestation on that assessment. The COSO framework — the dominant ICFR framework — has five components: control environment, risk assessment, control activities, information and communication, and monitoring activities. AI complicates all five.
Here's where control gaps are appearing:
Control Activities: The Human-in-the-Loop Question
The most common gap: AI tools embedded in financial processes without a defined human review step for outputs that affect reported figures. The PCAOB has been clear that a control is not adequate if it can't detect or prevent material misstatements — and a control operated by an AI system that a human can't meaningfully review isn't a reliable control.
The fix is straightforward in principle but operationally difficult: every AI-assisted step in a process that touches financial reporting needs a defined review procedure. What does the reviewer check? What threshold triggers an override? Who performs the review and what's their relevant expertise? These need to be documented as formal control procedures with proper segregation of duties, not an informal "someone looks at this."
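As one sketch of what a defined review procedure can look like, the helper below routes an AI-produced figure to a named reviewer and escalates when the period-over-period deviation crosses a threshold. The role names and the 10% trigger are placeholder values a control owner would set.

```python
# Sketch of a documented human-review step for an AI-assisted figure.
# Reviewer roles and the override threshold are illustrative assumptions.

def route_for_review(ai_value: float, prior_period_value: float,
                     override_threshold: float = 0.10) -> dict:
    """Produce a review record: the deviation checked, the named reviewer,
    and whether the deviation triggers escalation to a senior approver."""
    deviation = abs(ai_value - prior_period_value) / max(abs(prior_period_value), 1e-9)
    return {
        "ai_value": ai_value,
        "deviation_vs_prior": round(deviation, 4),
        "reviewer": "revenue_accounting_manager",  # a defined owner, not "someone"
        "escalate_to_controller": deviation >= override_threshold,
    }

print(route_for_review(ai_value=1_240_000, prior_period_value=1_050_000))
# deviation ~ 18% -> escalate_to_controller: True
```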
Risk Assessment: Model Risk Has Landed in Finance
Banks have had model risk management frameworks for decades. Non-bank financial companies and general corporate filers are now discovering that SOX and AI together require something similar. If a machine learning model drives an accounting estimate or a financial forecast, the model itself is a risk. It can be miscalibrated, trained on non-representative data, or silently degraded by drift.
SOX programs need to incorporate model risk assessment: what models are in use in financial reporting processes, what are their known limitations, how often are they validated against actual outcomes, and who owns the validation process?
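A minimal validation sketch under assumed policy values: compare each period's model estimate to the realized outcome and flag when average error exceeds a tolerance. Rising error across quarters is the drift signal the validation owner acts on; the 5% tolerance and the figures are illustrative.

```python
# Backtest sketch: model estimates vs. realized outcomes. The tolerance is
# an assumed policy value, not a prescribed standard.

def validate_against_actuals(predicted: list[float], actual: list[float],
                             tolerance: float = 0.05) -> dict:
    """Mean absolute percentage error against actual outcomes."""
    errors = [abs(p - a) / max(abs(a), 1e-9) for p, a in zip(predicted, actual)]
    mape = sum(errors) / len(errors)
    return {"mape": round(mape, 4), "within_tolerance": mape <= tolerance}

# Four quarters of reserve estimates vs. actual write-offs (invented numbers)
print(validate_against_actuals([410.0, 395.0, 460.0, 520.0],
                               [400.0, 405.0, 430.0, 470.0]))
# -> mape ~ 0.056, outside the assumed 5% tolerance: trigger revalidation
```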
IT General Controls: AI Infrastructure Is New Territory
ITGC testing — access controls, change management, operations — covers the IT systems that support financial reporting. AI infrastructure doesn't fit neatly into traditional ITGC categories:
- Access controls need to cover not just who can query the AI system but who can modify the model, retrain it, or change the prompts that drive its outputs.
- Change management needs to address model updates and retraining events as changes to the IT environment. Most change management policies were written before ML models were part of the financial reporting stack.
- Completeness and accuracy testing for AI-generated outputs requires a different methodology than for traditional transaction processing systems. You can't just test a sample of outputs; you need to test whether the model's outputs are systematically biased or unreliable under specific conditions (a sketch follows this list).
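Here is one hedged illustration of condition-stratified testing: compute error rates per segment rather than one sample-wide rate, so a model that looks fine on average can't hide systematic failure in a specific population. The segments, tolerance, and data are invented for the example.

```python
# Segment-level output testing for an AI model. All data and the tolerance
# are illustrative; the technique is the point, not the numbers.

from collections import defaultdict

def segment_error_rates(records, tolerance=0.05):
    """records: iterable of (segment, predicted, actual). Returns the share
    of out-of-tolerance outputs per segment."""
    misses, totals = defaultdict(int), defaultdict(int)
    for segment, predicted, actual in records:
        totals[segment] += 1
        if abs(predicted - actual) / max(abs(actual), 1e-9) > tolerance:
            misses[segment] += 1
    return {s: misses[s] / totals[s] for s in totals}

sample = [
    ("large_customers", 100.0, 101.0), ("large_customers", 98.0, 97.0),
    ("new_customers", 50.0, 62.0), ("new_customers", 48.0, 55.0),
]
print(segment_error_rates(sample))
# {'large_customers': 0.0, 'new_customers': 1.0}: reliable on average,
# systematically wrong for new customers
```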
What Auditors Are Looking For
The PCAOB's 2025 inspection priorities included AI-assisted audit procedures and, by extension, how auditors respond when their clients use AI in financial reporting. The consistent themes in staff guidance:
- Documentation of AI tool selection rationale. Why this tool? What was the basis for trusting its outputs?
- Evidence of human review. Not "a person was in the loop" but documented evidence of what the person reviewed, when, and what they concluded.
- Model change history. Any material change to an AI tool in the financial reporting stack during the period should be documented and evaluated for ICFR impact.
- Data lineage. Auditors are asking where the data inputs to AI models came from, whether they were validated, and how errors in inputs propagate to outputs (see the sketch after this list).
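As one illustration of a lineage trail auditors could walk, the sketch below hashes the raw extract and records its source, extract time, and validator, tying a model run to the exact inputs it consumed. The schema is an assumption, not an audit requirement.

```python
# Illustrative data-lineage record for one AI model input. Field names are
# assumed; the content hash is what ties outputs back to exact inputs.

import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class InputLineage:
    dataset: str
    source_system: str
    extracted_at: str    # ISO timestamp of the extract
    validated_by: str    # who confirmed completeness and accuracy
    content_sha256: str  # hash of the raw extract bytes

def lineage_for(dataset, source, extracted_at, validated_by, raw_bytes):
    return InputLineage(dataset, source, extracted_at, validated_by,
                        hashlib.sha256(raw_bytes).hexdigest())

rec = lineage_for("ar_aging_2026q1", "erp_gl_extract", "2026-04-02T09:15:00Z",
                  "fin_data_ops", b"...raw extract bytes...")
print(json.dumps(asdict(rec), indent=2))
```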
Building Controls That Survive Attestation
The organizations handling this well have done a few things explicitly. First, they inventoried every AI tool in their financial reporting workflow — not just the obvious ones (ML-based forecasting) but the subtle ones (AI-assisted document review, AI-powered exception flagging in account reconciliation). The inventory is the starting point.
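A minimal inventory entry might look like the sketch below; the fields, example tools, and owners are illustrative placeholders.

```python
# Sketch of an AI-tool inventory. Tool names and fields are invented examples.

from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    process: str                    # where it sits in the reporting workflow
    obvious: bool                   # ML forecasting vs. subtle embedded AI
    outputs_affect_reporting: bool
    owner: str

inventory = [
    AIToolRecord("revenue_forecaster", "FP&A forecasting", True, True, "fpa_lead"),
    AIToolRecord("recon_exception_flagger", "account reconciliation",
                 False, True, "gl_close_manager"),
    AIToolRecord("contract_clause_reviewer", "revenue contract review",
                 False, True, "technical_accounting"),
]
print([t.name for t in inventory if t.outputs_affect_reporting])  # SOX scope
```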
Second, they mapped each AI tool to the relevant financial statement assertion. Which accounts or disclosures does this tool's output affect? What's the magnitude of potential error? Is that magnitude material?
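The mapping step can be as lightweight as a screening record per tool, as in this sketch; the accounts, assertions, error estimate, and materiality threshold are placeholder inputs the assessment team would supply.

```python
# Sketch of an assertion-and-materiality screen for one AI tool. All values
# shown are illustrative inputs, not real thresholds.

def materiality_screen(tool, accounts, assertions,
                       max_plausible_error, materiality):
    return {
        "tool": tool,
        "accounts": accounts,
        "assertions": assertions,
        "max_plausible_error": max_plausible_error,
        "potentially_material": max_plausible_error >= materiality,
    }

print(materiality_screen(
    tool="recon_exception_flagger",
    accounts=["cash", "accounts_receivable"],
    assertions=["completeness", "accuracy"],
    max_plausible_error=2_500_000,  # worst-case missed-exception exposure
    materiality=1_800_000,          # overall materiality for the filer
))
```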
Third, they redesigned controls to include AI-specific review procedures — not just "someone reviews the output" but structured review with documented criteria and a defined escalation path when the AI's output is unexpected.
Fourth, they built change management processes for AI model updates that mirror what they already do for ERP patches: change request, impact assessment, testing, approval, and documentation.
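A sketch of what that mirroring might look like as a change ticket, enforcing the five stages from the paragraph above in order; the schema and the ticket reference are hypothetical.

```python
# Illustrative AI-model change ticket modeled on an ERP change record.
# Stage names follow the text above; everything else is an assumed schema.

from dataclasses import dataclass, field

STAGES = ["change_request", "impact_assessment", "testing",
          "approval", "documentation"]

@dataclass
class ModelChangeTicket:
    model: str
    description: str
    completed: list = field(default_factory=list)

    def complete(self, stage: str, evidence: str):
        # Stages must be evidenced in order, like an ERP change workflow.
        assert stage == STAGES[len(self.completed)], "stages run in order"
        self.completed.append((stage, evidence))

    def ready_to_deploy(self) -> bool:
        return len(self.completed) == len(STAGES)

ticket = ModelChangeTicket("ar_reserve_model", "retrain on 2025 collections data")
ticket.complete("change_request", "ticket FIN-1042 (hypothetical)")
print(ticket.ready_to_deploy())  # False until all five stages have evidence
```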
The Attestation Bottom Line
No AI model can sign a Section 302 certification. The CEO and CFO remain personally responsible for the attestation, regardless of how much of the underlying financial process is AI-assisted. That accountability gap — between AI-driven outputs and human attestation — is where SOX compliance programs need to focus in 2026.
The question every CFO should be able to answer before signing a 302 certification: "For every AI-assisted step in our financial reporting process, do I understand what the AI did, how the output was reviewed, and how we would detect if the output was materially wrong?" If the answer is no, the attestation is on shaky ground.
Use ComplianceStack's SOX compliance tools to assess your ICFR posture, map AI-assisted processes to control frameworks, and identify gaps before your next 404 assessment.