Table of Contents
- Executive Summary
- Why SR 26-2 matters now
- Where APRA stands today
- Where implementation friction appears in practice
- What international peers are doing
- A practical uplift pathway for APRA
- Appendix A: What might an APRA CPG / CPS look like for AI risk management (including model risk addendum)
- Appendix B: What changed in SR 26-2 versus SR 11-7, and why it matters for AI
- References
Executive Summary
- U.S. SR 26-2 supersedes SR 11-7 and SR 21-8, signalling that major jurisdictions are actively refreshing model risk settings.
- APRA’s technology-neutral baseline (CPS 220 and CPS 230) is strong, but many institutions still face practical interpretation gaps when scaling AI.
- International peers are filling similar gaps with more operational guidance, especially on lifecycle governance, proportionality, and assurance.
- A practical APRA uplift, through targeted guidance rather than prescriptive rules, could improve consistency and support safer innovation.
- The aim is not to replace principles-based supervision, but to make implementation clearer for first line, second line, boards, and supervisors.
Why SR 26-2 matters now
As this piece was being finalised, U.S. regulators released SR 26-2 Revised Guidance on Model Risk Management, superseding SR 11-7 and SR 21-8. That timing sharpened the motivation for this post: major jurisdictions are actively updating model risk expectations, while Australian institutions are still stitching together practical AI governance from principles-based standards.
In ongoing discussions with risk, compliance, and innovation teams across banking, insurance, and superannuation, a recurring theme stands out: institutions are keen to harness AI for better customer outcomes and efficiency, yet they often find themselves bridging gaps in practical governance guidance.
Where APRA stands today
APRA’s framework is deliberately technology-neutral. CPS 220 Risk Management includes a high-level expectation that the risk management framework must cover “the process for the validation, approval and use of any models to measure components of risk.” However, unlike many international peers, APRA has no dedicated model risk management standard or practice guide that applies broadly across the institution.
Detailed validation expectations remain largely confined to credit and market risk models used for regulatory capital purposes. CPS 230 Operational Risk Management (effective July 2025) captures AI-related operational disruptions and critical operations, while targeted supervisory engagements on AI are underway with larger firms. This prudent, principles-based stance provides flexibility but leaves many teams interpreting how best to govern modern AI systems consistently.
Where implementation friction appears in practice
In client discussions, the absence of APRA-specific guidance on risk-proportionate validation for broader AI use cases creates real challenges, particularly when scaling beyond capital models into customer-impacting or operational applications.
By contrast, ASIC’s REP 798 (Beware the gap) stops short of creating a new AI-specific rulebook, but it clearly flags a governance gap and sets practical expectations for licensees to strengthen governance, risk management and oversight of AI under existing technology-neutral obligations.
What international peers are doing
Globally, regulators have filled similar gaps with more operational guidance. SR 26-2 is a useful U.S. signal: it reinforces risk-based, proportionate model risk management and clarifies scope, including that generative and agentic AI are currently out of scope for this guidance.
Singapore’s MAS Project MindForge has moved in a complementary but broader direction, with practical toolkits spanning traditional AI, generative AI, and agentic systems. Canada’s OSFI E-23 and the UK PRA’s model risk principles also offer clearer bridges between classic model risk management and AI.
A practical uplift pathway for APRA
APRA is well-placed to issue a Prudential Practice Guide on AI Risk Management (ideally developed collaboratively with industry). Key elements that would help include:
- Practical tools for AI use-case inventories, materiality assessments, and standardised documentation (e.g., AI Cards).
- Consistent language bridging traditional model risk and modern AI, evolving the CPS 220 validation reference into “risk-proportionate independent validation” or “AI-specific pre-deployment review”, with full independent challenge for high-materiality cases and lighter peer reviews for lower-risk ones.
- Targeted expectations for GenAI/agentic risks (hallucinations, adversarial testing, guardrails, drift monitoring, meaningful human oversight), all calibrated to risk materiality.
- Strengthened integration with third-party and lifecycle governance under CPS 230.
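As an illustration of the first dot point, an AI use-case inventory entry (an "AI Card") with a simple materiality rating might be sketched as below. The field names, thresholds, and tier rules are hypothetical, chosen for discussion only, and are not drawn from any APRA standard:

```python
from dataclasses import dataclass

@dataclass
class AICard:
    """Illustrative AI use-case inventory entry; all fields are assumptions."""
    use_case: str
    owner: str                  # accountable first-line owner
    model_type: str             # e.g. "gradient boosting", "LLM"
    customer_impacting: bool
    decision_automation: str    # "advisory", "human-in-the-loop", "automated"
    annual_exposure_aud: float  # approximate financial exposure

    def materiality_tier(self) -> str:
        """Toy materiality rating combining customer impact, automation, and exposure."""
        if self.customer_impacting and self.decision_automation == "automated":
            return "high"
        if self.customer_impacting or self.annual_exposure_aud > 10_000_000:
            return "medium"
        return "low"

card = AICard(
    use_case="hardship assistance triage",
    owner="Retail Credit",
    model_type="LLM",
    customer_impacting=True,
    decision_automation="human-in-the-loop",
    annual_exposure_aud=2_500_000,
)
print(card.materiality_tier())  # medium under these toy rules
```

A standardised artefact like this makes materiality assessments comparable across business units, which is the consistency gap the guidance would address.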
Such guidance would align with the National AI Centre’s 2025 Guidance for AI Adoption and reduce fragmentation, especially for smaller institutions wrestling with shadow AI. The result: faster, safer innovation, greater board confidence, competitive parity with global peers, and stronger prudential outcomes.
APRA’s 2025–26 supervisory focus on AI provides an ideal foundation. A practical uplift in 2026–27, building on the existing CPS 220 and CPS 230 foundations, would equip the industry with the clarity clients consistently tell me they need.
For detail, see Appendix A: What might an APRA CPG / CPS look like for AI risk management, including the model risk addendum. For the SR 26-2 delta, the international comparison, and institution-level actions, see Appendix B, together with SR 26-2 and MAS MindForge: Similarities and differences and Practical implications for institutions.
Appendix A: What might an APRA CPG / CPS look like for AI risk management (including model risk addendum)
Below is a practical draft structure that could preserve APRA’s principles-based approach while giving industry clearer implementation guidance. The considerations listed are indicative, not exhaustive:
Purpose, scope, and proportionality
- Define AI systems and model classes in prudential context.
- Clarify risk-proportionate expectations by materiality and impact.
- Align with existing standards (especially CPS 220 and CPS 230), avoiding duplication.
Governance and accountability
- Set clear board and senior management accountability for AI outcomes.
- Require explicit role clarity across first line, second line, internal audit, and relevant committees.
- Define escalation triggers for high-risk AI use cases.
AI and model lifecycle controls
- Expectations across design, development, testing, deployment, monitoring, change, and decommissioning.
- Minimum standards for data quality, documentation, and reproducibility.
- Controls for drift, degradation, and revalidation triggers.
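One common way to operationalise a drift-based revalidation trigger is the Population Stability Index (PSI) over binned score distributions. The sketch below uses the conventional 0.10/0.25 thresholds, which are industry habit rather than a regulatory requirement:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned proportions.

    expected/actual are per-bin proportions that each sum to 1.
    A small epsilon avoids log(0) for empty bins.
    """
    eps = 1e-6
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

def revalidation_trigger(psi_value: float) -> str:
    """Illustrative thresholds: <0.10 stable, 0.10-0.25 monitor, >0.25 revalidate."""
    if psi_value > 0.25:
        return "revalidate"
    if psi_value > 0.10:
        return "monitor"
    return "stable"

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at validation
current = [0.10, 0.20, 0.30, 0.40]   # score distribution in production
print(revalidation_trigger(psi(baseline, current)))  # monitor
```

Wiring a metric like this into monitoring dashboards gives the "revalidation trigger" a concrete, auditable definition rather than leaving it to ad hoc judgement.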
Independent validation and challenge
- Introduce proportionate independent validation expectations across broader model and AI use cases.
- Distinguish full independent challenge for high-materiality systems from lighter review for lower-risk systems.
- Clarify evidence requirements for approvals and periodic reviews.
Third-party and supply-chain risk
- Extend CPS 230-aligned controls to external AI providers and embedded model services.
- Require due diligence on provider resilience, transparency, and incident response.
- Address concentration risk and vendor lock-in for critical operations.
GenAI and agentic risk controls
- Guardrails for hallucination risk, prompt injection, data leakage, and harmful outputs.
- Expectations for meaningful human oversight in high-impact decisions.
- Ongoing monitoring and assurance tailored to non-deterministic behaviour.
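To make "meaningful human oversight" concrete, one hypothetical guardrail gate could route generated outputs based on a groundedness score and the impact of the decision. The function name, score source, and thresholds below are assumptions for illustration, not an APRA requirement or a real vendor API:

```python
def route_genai_output(groundedness: float, decision_impact: str) -> str:
    """Route a generated response using a groundedness score (0-1, assumed
    to come from an upstream checker) and the decision's impact rating."""
    if groundedness < 0.5:
        return "block"            # likely hallucination: suppress the output
    if decision_impact == "high" or groundedness < 0.8:
        return "human_review"     # meaningful human oversight before use
    return "release"              # low-risk, well-grounded output

print(route_genai_output(0.9, "low"))   # release
print(route_genai_output(0.9, "high"))  # human_review
print(route_genai_output(0.3, "low"))   # block
```

The point of the sketch is the shape of the control, not the numbers: every high-impact decision passes through a human, and poorly grounded outputs never reach one.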
Incident management, assurance, and reporting
- AI-specific incident taxonomy, thresholds, and escalation pathways.
- Board-level reporting expectations, including control effectiveness and residual risk.
- Supervisory engagement expectations for material incidents.
Implementation pathway
- Transitional timelines to support uplift, especially for smaller institutions.
- Suggested artefacts and templates (e.g., AI use-case inventory, AI card, validation pack, monitoring dashboard).
The model risk management addendum below is intended to connect these AI risk governance elements to existing model risk management foundations in institutions, so implementation can build on current model governance rather than run as a separate track.
Model risk addendum (within CPG or as standalone guidance)
- Broaden model risk expectations beyond regulatory capital models to material decision models and AI systems.
- Standardise model tiering and risk-rating criteria across quantitative and AI models.
- Set baseline requirements for model inventory, model ownership, validation frequency, and performance thresholds.
- Clarify treatment of explainability limits and compensating controls where model transparency is constrained.
- Define minimum challenger testing expectations for high-impact models and AI applications.
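The tiering and validation-frequency ideas above might be sketched as a simple matrix. The exposure thresholds, tier cut-offs, and review cycles below are illustrative assumptions for discussion, not prescribed by any regulator:

```python
def model_tier(exposure_aud: float, customer_impacting: bool) -> int:
    """Map financial exposure and customer impact to a risk tier (1 = highest).
    Thresholds are hypothetical placeholders, not regulatory values."""
    if exposure_aud >= 100_000_000:
        return 1
    if customer_impacting or exposure_aud >= 10_000_000:
        return 2
    return 3

# Illustrative review cycles: tier 1 full independent validation annually,
# tier 2 targeted review every two years, tier 3 lighter peer review.
VALIDATION_CYCLE_MONTHS = {1: 12, 2: 24, 3: 36}

tier = model_tier(exposure_aud=250_000_000, customer_impacting=False)
print(tier, VALIDATION_CYCLE_MONTHS[tier])  # 1 12
```

Standardising criteria like these across quantitative and AI models is what lets one inventory, one tiering scale, and one validation calendar cover both.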
Appendix B: What changed in SR 26-2 versus SR 11-7, and why it matters for AI
SR 26-2 does not simply relabel SR 11-7; it modernises the framing in several useful ways:
- Supersession and consolidation: SR 26-2 supersedes SR 11-7 and SR 21-8, creating a single revised interagency anchor for model risk management.
- Sharper proportionality: it explicitly emphasises risk-based tailoring to model risk profile and organisational size and complexity (with expected strongest relevance for larger institutions).
- Clarified model scope: the revised definition focuses on complex quantitative models and excludes simple arithmetic calculations and deterministic rule-based processes/software.
- Materiality focus: it gives more explicit structure to model materiality, linking model exposure and model purpose to the intensity of oversight.
- Guidance posture: it is explicit that the document is principles-based guidance, not a prescriptive rulebook.
On AI treatment specifically, the signal is especially important:
- Included: traditional quantitative/statistical models and non-generative, non-agentic AI models.
- Out of scope (for now): generative AI and agentic AI models.
- Policy direction: agencies have signalled further work, including additional consultation on AI model risk management.
SR 26-2 and MAS MindForge: Similarities and differences
There is meaningful overlap between the two approaches:
- Both emphasise governance, lifecycle controls, and risk-proportionate oversight.
- Both recognise that one-size-fits-all model governance is ineffective.
There are also important differences:
- Scope: SR 26-2 currently excludes generative and agentic AI, whereas MAS MindForge explicitly covers traditional AI, generative AI, and emerging agentic AI.
- Artefact style: SR 26-2 is supervisory principles guidance; MindForge provides operational handbooks and implementation examples that institutions can adopt directly.
- Regulatory trajectory: SR 26-2 signals future AI-specific refinement; MindForge already provides implementation resources aligned to MAS’ evolving AI risk expectations.
Practical implications for institutions (industry perspective, not regulatory guidance)
Recent practitioner thinking, including Moody’s Model Risk Management in the Age of AI, reinforces several practical shifts institutions can start now:
- Move from periodic validation to continuous governance for higher-risk AI use cases.
- Implement dynamic guardrails and real-time monitoring, rather than relying only on fixed thresholds.
- Define explicit third-party model acceptance criteria, version controls, fallback arrangements, and escalation triggers.
- Set clear human-in-the-loop intervention points for material decisions and failure modes.
- Expand validator and decision-maker capability in prompt design, adversarial testing, observability, and system-level behavioural assessment.
For Australian institutions, this contrast is instructive: APRA can preserve technology-neutral principles while still issuing practical implementation guidance that covers modern AI use cases end to end.
References
- Rhizome: In focus: what’s on our mind?
- ASIC REP 798: Beware the gap: Governance arrangements in the face of AI innovation
- APRA: CPS 220 Risk Management
- APRA: CPS 230 Operational Risk Management
- Federal Reserve SR 26-2: Revised Guidance on Model Risk Management
- Federal Reserve/OCC/FDIC Attachment: Supervisory Guidance on Model Risk Management (April 2026)
- Federal Reserve SR 11-7: Guidance on Model Risk Management
- MAS Project MindForge
- MAS media release (Mar 2026): AI Risk Management Toolkit
- Moody’s white paper: Model Risk Management in the Age of AI
What gaps are you finding in current APRA expectations when governing AI and models? I’d welcome perspectives from risk and compliance leaders in the comments.
Declaration and bio note: These are the opinions of the author only and should be interpreted as such. This article reflects the personal views of Michael as a former regulator. Michael served at APRA from 2008 to 2015, including as Head of Operational Risk Analytics and a Responsible Supervisor. This is not regulatory advice and does not represent an official APRA view.