The risk management imperative in an age of ubiquitous AI

The preceding sections illustrate that advances in AI may go wrong in myriad ways. They also demonstrate that regulatory bodies are taking extensive action to mitigate and manage the possibility of adverse outcomes. And it is not only public agencies that recognise the need for such measures; increasingly, private sector actors have taken notice. These include developers of AI models and AI-powered applications, enterprises adopting or embedding AI across industries, and institutional investors whose portfolios are increasingly exposed to AI's many implementations. Beyond a shared interest in shaping AI development to advance human prosperity, these actors are also driven by more immediate incentives to pursue effective risk management, which encompasses both technological solutions and organisational practices that can minimise losses from damaged assets, harm and the resulting litigation, service disruptions, insurance costs, mistrust among end-users, lack of access to markets, and more 1.

For AI developers and companies intent on deploying AI-based applications, there is another clear value case for tackling and mitigating AI-related risks. Non-compliance with the prohibited AI practices set out in Article 5 of the EU AI Act may result in administrative fines of up to €35 MM or 7% of the company's total worldwide annual turnover, whichever is higher. Violations of other provisions may incur fines of up to €15 MM or 3% of total turnover, whichever is higher. Supplying incorrect information in response to requests from notified bodies and national competent authorities may result in fines of up to €7.5 MM or 1% of total turnover, whichever is higher 2.
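
To make the "whichever is higher" mechanics concrete, the sketch below computes the applicable ceiling for each tier using the figures cited above. The function and tier names are illustrative conveniences, not terms from the Act, and this is of course not legal advice.

```python
# Sketch of the EU AI Act's tiered fine ceilings, per the figures above.
# Each tier pairs a fixed cap with a share of total worldwide annual
# turnover; the applicable ceiling is whichever of the two is higher.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
    "other_provisions": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine_eur(annual_turnover_eur: float, tier: str) -> float:
    """Return the maximum administrative fine (EUR) for a violation tier."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A firm with EUR 1 billion in turnover: 7% (EUR 70 MM) exceeds the
# EUR 35 MM floor, so the turnover-based ceiling applies.
print(max_fine_eur(1_000_000_000, "prohibited_practices"))  # 70000000.0
```

For a smaller firm, say one with €100 MM in turnover, 7% comes to only €7 MM, so the fixed €35 MM ceiling applies instead.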

The risk management imperative also stems from the simple fact that companies gain a significant market advantage from providing reliable and trustworthy services. If AI-related risks are insufficiently mitigated and AI-powered products cause harm to clients or wider society, the companies that developed and deployed these technologies stand to accrue sizable reputational costs. Effective risk management is thus a vital process for enterprises seeking sustained business success, all the more so when they operate in high-risk and hard-to-predict environments 3. Moreover, companies, and the world at large, have a collective interest in ensuring the safe development and deployment of AI, as this will drive popular support and market demand for AI-powered tools and services.

Beyond individual company founders, executives, or funders, institutional investors with large or cross-border portfolios are increasingly concerned with limiting AI-related portfolio risks. These investors commonly rely on diversification strategies to build robust portfolios. Yet once AI use has scaled into the core functions of most sectors, the risks that come with deploying the technology will be spread across the economy, and thus across the assets of even the most expertly diversified investment portfolio 4. One of the main hedges against cascading risks in this new world of ubiquitous AI will be adherence to contemporary risk management strategies, namely those tailored to or informed by AI-related risks 5.
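
The limits of diversification against a correlated risk can be made precise with a textbook result from portfolio theory; the following is a standard illustration rather than an analysis drawn from the cited sources. For an equally weighted portfolio of $n$ assets, each with return variance $\sigma^2$ and pairwise correlation $\rho$, the portfolio variance is

$$\mathrm{Var}(R_p) \;=\; \frac{\sigma^2}{n} \;+\; \frac{n-1}{n}\,\rho\,\sigma^2 \;\longrightarrow\; \rho\,\sigma^2 \quad \text{as } n \to \infty.$$

Adding assets drives the idiosyncratic term $\sigma^2/n$ towards zero, but the correlated term $\rho\sigma^2$ persists however large $n$ grows. If ubiquitous AI raises the effective correlation $\rho$ across sectors, diversification cannot remove that floor; only reducing the underlying risk itself can, which is precisely what AI-informed risk management aims to do.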

It is thus not surprising that recent years have seen a proliferation of regulatory recommendations, technical standards negotiations, and industry guidelines and best practices. While these initiatives are important and welcomed by most commentators, there is also a widespread sense that AI-related rules and recommendations remain at a nascent stage, offering little actionable guidance to the companies that seek to develop, deploy, and use AI tools 6.

"Regulations can create barriers or shift the slope of incentives in any market; likewise, regulation plays a vital role in all frontier tech and future problems—we have seen the shift in climate, and I expect that legislation, such as the EU AI Act, will lubricate the market conditions for AI Assurance Technology."

Hampus Jakobsson
General Partner, Pale Blue Dot

A common refrain has been that compliance with existing and emerging regulations will require a sophisticated risk management ecosystem that integrates societal, ethical, and technical means into a comprehensive response to AI-related risks 7. This ecosystem will rely on input from regulators and standards bodies as well as from AI developers and deployers. To bridge the two, it will depend on an innovative and technically savvy industry of AI assurance technology solution providers. Where possible, these companies will help translate principles of trustworthy AI into concrete requirements and become integral partners in efforts to address those requirements efficiently and effectively. In the following section, “The AI Assurance Technology Landscape”, we break down the major categories of solutions we can expect from this emerging and vital new industry.

Footnotes

  1. Douglas Broom, “AI: These Are the Biggest Risks to Businesses and How to Manage Them,” World Economic Forum, July 27, 2023, https://www.weforum.org/agenda/2023/07/ai-biggest-risks-how-to-manage-them/.

  2. European Commission, Regulation Of The European Parliament And Of The Council Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts, 2021/0106 (COD), Art. 71.

  3. Broom, “AI: These Are the Biggest Risks to Businesses and How to Manage Them.”

  4. Chen Yu, “AI as Critical Infrastructure: Safeguarding National Security in the Age of Artificial Intelligence,” OSF Preprints, 2024, https://doi.org/10.31219/osf.io/u4kdq.

  5. There are precedents for investment activism directed towards reducing systemic and societal risks with the goal of decreasing concomitant financial risk: James Hawley and Jon Lukomnik, “Beyond Modern Portfolio Theory – How Investors Can Mitigate Systemic Risk through the Portfolio,” RI Quarterly Vol. 12: Highlights from the Academic Network Conference and PRI in Person 2017, August 10, 2017, https://www.unpri.org/research/beyond-modern-portfolio-theory-how-investors-can-mitigate-systemic-risk-through-the-portfolio/538.article; and World Economic Forum, “Transformational Investment: Converting Global Systemic Risks into Sustainable Returns,” White Paper, May 2020, https://www.weforum.org/publications/transformational-investment-converting-global-systemic-risks-into-sustainable-returns/.

  6. “Frontier AI: Capabilities and Risks,” Discussion paper (Department of Science, Innovation and Technology (Government of the United Kingdom), October 25, 2023), https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/frontier-ai-capabilities-and-risks-discussion-paper, section: “What risks do frontier AI present?”; and Hadrien Pouget, “What Will the Role of Standards Be in AI Governance?,” Ada Lovelace Institute, April 5, 2023, https://www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/.

  7. Michelle Donelan, “A Pro-Innovation Approach to AI Regulation: Government Response. Ministerial Foreword,” Consultation Outcome presented to Parliament by the Secretary of State for Science, Innovation and Technology by Command of His Majesty (Department of Science, Innovation and Technology, Government of the United Kingdom, February 6, 2024), Unique Reference: E03019481 02/24, https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response; and Kay Firth-Butterfield and Lofred Madzou, “Rethinking Risk and Compliance for the Age of AI,” World Economic Forum, September 30, 2020, https://www.weforum.org/agenda/2020/09/rethinking-risk-management-and-compliance-age-of-ai-artificial-intelligence/.