The Rise and Rise of AI Risk

Regulators across the globe are raising major concerns over the rapid adoption of AI and taking steps to build in guardrails to temper this juggernaut.

In the European Union, there is clear recognition that existing legislation is insufficient to address the specific challenges AI systems may bring, and regulators are advancing a Regulatory Framework that identifies four levels of risk in AI:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal or no risk

Financial services is just one of several critical-infrastructure sectors seeing a marked uptick in regulatory activity.

In Canada, the Office of the Superintendent of Financial Institutions (OSFI) sees model risks being exacerbated by digitalization and the use of advanced analytics, including AI/ML. In response, it plans to expand the scope of Guideline E-23 and to clarify its expectation that all federally regulated financial institutions (FRFIs) and federally regulated pension plans (FRPPs) appropriately assess and manage model risks at the enterprise level.

In the UK, the third-party risks posed by AI initiatives that rely on cloud and vendor partnerships to build new AI/ML models are raising alarm bells. The Bank of England's model risk rules may well need to address real-time monitoring of AI, requiring banks to repeat performance testing on models that recalibrate dynamically.