An indispensable new approach to risk management is rapidly emerging. The general public, government institutions, and business leaders now recognise the vast array of potential harms from artificial intelligence (AI) deployment. In response, legislative and business imperatives are demanding contemporary systems of governance, fit for a world with AI embedded at scale. And yet, a common concern is that conformity with these technology governance imperatives will require sophisticated risk management techniques that do not yet exist at scale.

To bridge this gap, a new industry is emerging: AI Assurance Technology (AIAT). AIAT is the software, hardware, and services that enable organisations to more effectively, more efficiently, or more precisely mitigate the risks of AI. AIAT companies support AI labs, downstream developers, cloud computing providers, AI-adopting enterprises, governments, insurance companies, and more.

As these organisations continue to grasp the potential impact of AI risks on economic growth and stability, they will demand new solutions. And as an instrumental enabler of contemporary risk management techniques, AIAT will see exponential growth over the next decade.

To capitalise on this burgeoning market, this report highlights why investors, entrepreneurs, and policymakers should, respectively:

  • Invest in early-stage and growth-stage AI risk-mitigating solutions
  • Found new businesses to safeguard, audit, govern, or verify in the age of AI
  • Ratify market-shaping policies, enabling a well-functioning AIAT market

[Figure: Market Size]

The Age of AI

👉 See Chapter 1: The Age of AI

Recent advances in AI signal that a so-called Fourth Industrial Revolution, also known as "The Intelligence Revolution", is upon us. AI is bound to generate world-changing transformations for our societies. These global changes will soon position AI within a class of major technological inventions that includes automobiles, aeroplanes, personal computers, and the internet. Further still, AI technologies may hold the potential to revolutionise the world more than any of these past innovations. For instance, it has been estimated that AI's contribution to the global economy could reach the scale of Europe's current total GDP.¹ Imagine a world with generally intelligent, adaptive, or autonomous systems embedded in:

  • Transportation (e.g., Self-driving delivery fleets)
  • Healthcare (e.g., Personalised diagnosis and prognosis)
  • Manufacturing (e.g., Adaptive collaborative robots, or cobots)
  • Pharmaceuticals (e.g., Precision medicine)
  • Consumer Goods (e.g., Dynamic retail ads)
  • Professional Services (e.g., Personalised legal advice)
  • Finance (e.g., Automated underwriting)
  • Telecommunications (e.g., AI-native 6G networks)
  • Energy (e.g., Predictive maintenance)
  • And more …

These advanced AI applications may arrive far sooner than we think. While this prospect affords hope and excitement for our future, it is also clear that a desirable version of this imminent post-AI world depends on contemporary approaches to risk management. The AI Assurance Technology market, in particular, will be vital to ensuring that increasingly powerful AI technologies lead to stable industry transformations rather than unmanageable economic disruptions.

Risk Management in the Age of AI

👉 See Chapter 2: AI Risk Management in the Age of AI

Fortunately, recent regulatory and private sector efforts have demonstrated a shared imperative to prevent adverse outcomes from AI adoption. A core driver of these efforts, alongside ethical, environmental, and social concerns, comes from the recognition that global economic stability will soon be coupled to AI-centric systems. This coupling signals an imminent new risk management paradigm, and it necessitates systematic governance. Despite the nascency of AI-related regulations and enterprise risk management capabilities, there are a handful of known risks and compliance priorities:

  1. Technical failures of AI: unplanned consequences of AI under-performing relative to its intended use, either through goal Misalignment or the underlying Unreliability of the trained models. In response, regulators may require companies to:
     1. implement measures to address errors, faults, or inconsistencies
     2. implement standardised protocols and tools to identify and mitigate systemic risks
     3. implement mandatory human oversight mechanisms
     4. provide comprehensive instructions and documentation
     5. implement robust data governance frameworks
     6. record accidents and report them to regulatory authorities
     7. monitor and document incidents (a minimal sketch of an internal incident record follows this list)
  2. Misuse of AI: intentional exploitation of AI to aid in the process of, or directly cause, Physical harm, Digital harm, or Informational harm. In response, regulators may require companies to:
     1. monitor and attest to AI model robustness against misuse
     2. disclose large-scale cloud computing usage and authenticate users
     3. identify systemic risks and implement standardised risk mitigation measures
     4. maintain documentation on risks and report to regulatory authorities
     5. authenticate content and report on provenance or uncertainty
  3. Vulnerability of AI to exogenous interference: forces that interrupt or damage an AI system and originate externally, whether from Natural hazards (e.g., earthquakes), Accidental collateral damage (e.g., kinetic military action), or intended Adversarial attacks. In response, regulators may require companies to:
     1. design and implement suitable security measures for AI and IT infrastructure
     2. monitor AI systems for unauthorised alterations, adversarial attacks, and tampering risks
     3. regularly assess potential risks linked to AI usage in critical infrastructure
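
Several of these obligations, such as recording accidents and documenting incidents, reduce in practice to structured record-keeping. Below is a minimal sketch of what an internal AI incident record might capture; the schema and field names are hypothetical illustrations, not a structure prescribed by any regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical schema for an internal AI incident record. The fields are
# illustrative only; no regulation prescribes this exact structure.
@dataclass
class AIIncidentRecord:
    incident_id: str                  # internal tracking identifier
    system_name: str                  # the AI system involved
    category: str                     # "technical_failure", "misuse", or "exogenous_interference"
    description: str                  # what happened, in plain language
    detected_at: datetime             # when the incident was first observed
    harm_observed: bool = False       # whether real-world harm occurred
    mitigations: list = field(default_factory=list)   # corrective actions taken
    reported_to_regulator: bool = False               # reporting status

# Example record for a hypothetical underwriting-model failure.
record = AIIncidentRecord(
    incident_id="INC-0042",
    system_name="loan-underwriting-model",
    category="technical_failure",
    description="Model produced inconsistent risk scores after a data pipeline change.",
    detected_at=datetime.now(timezone.utc),
    mitigations=["rolled back pipeline change", "re-ran validation suite"],
)
print(record.incident_id, record.category)
```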

The AI Assurance Technology Landscape

👉 See Chapter 3: The AI Assurance Tech Landscape

Globally, there is growing demand for products and services that can support the integration of safety, security, and compliance into industry AI applications. Regulations, corporate safety standards, and AIAT solutions will work in concert to address the risks of AI, each correcting for the others' weaknesses. In a similar vein, AIAT serves a critical role comparable to that of air traffic control in aviation, crash test programmes in automotive, fraud detection in banking, or quality control labs in pharmaceuticals. Lessons from the growth of these similar markets suggest that the following four AIAT categories will be the most pressing prerequisites for a world with responsible AI.

  • AI-Resilient IT Security: Solutions for safeguarding compute hardware, data centres, network infrastructure, AI models, AI systems, and all other component parts of an AI application tech stack, through both hardware security and cybersecurity protocols. AI resilience refers to preparation for the novel security threats from AI and the unique vulnerabilities of AI-driven systems. Beyond compliance, organisations need security to minimise the costs of accidents, insurance, asset damage, cyber attacks, corporate espionage, and their related reputational consequences.

    • Hardware security: e.g., Hardware-integrated monitoring, tamper-proof device enclosures
    • Data privacy and cybersecurity: e.g., AI firewalls, data encryption, privacy-enhancing technology
  • AI Trustworthiness: Solutions for auditing desirable qualities (e.g., robustness, bias, fairness, safety) of AI data inputs, AI models, AI systems, and other components of the application tech stack, through automated testing, benchmarking, expert evaluations, and other quality assessments. The legal definitions and thresholds that constitute trustworthy AI are unfolding in real time; however, it is clear that organisations can improve their reputation, competitiveness, and AI performance through frequent external evaluations. (A toy bias-check sketch appears after this list.)

    • Data, model, or system evaluations: e.g., Algorithmic bias detection, adversarial testing for robustness, explainability and interpretability tools
    • External compliance audits: e.g., Pre-deployment audits, organisational governance procedure audits, data-focused compliance audits, hardware-specific conformity audits
  • AI-Centric Risk Management: Solutions for documenting organisational standards and compliance information, tracking applicable standards, and provisioning access for external auditors, as well as solutions that support post-deployment operations, including AI observability, risk monitoring, and incident response. AI-centric organisations need help managing the sometimes cumbersome, complicated responsibilities that come with AI adoption and its corresponding governance, risk management, and compliance requirements.

    • Quality, conformity, and audit management: e.g., Policy libraries and testing tools, reporting and conformity management systems, structured access tools for external audits
    • Observability, monitoring, and incident response: e.g., AI application observability platforms, AI infrastructure monitoring systems, AI-centric incident management platforms
  • AI-Aware Digital Authenticity: Solutions for verifying the origins or legitimacy of digital identities, avatars, news, and other forms of media accessed through public or private networks, through detection systems, provenance tracking, watermarking techniques, and other authentication tools. Media sites and other networks will rely on these solutions to uphold their responsibility to protect impressionable minds, maintain trust in online interactions, and prevent AI-enabled deception. (A toy provenance sketch also appears after this list.)

    • Identity and content authentication: e.g., verifiable digital credentialing, AI watermarking techniques, digital asset and provenance tracking platforms, AI-aware trust & safety moderation services
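
To make two of these categories concrete, the first sketch below illustrates one simple "algorithmic bias detection" check from the AI Trustworthiness category: a demographic parity gap computed over hypothetical model outputs. Real audits use far richer metrics and tooling; the data and threshold here are invented for illustration.

```python
# Demographic parity gap: the difference in positive-outcome rates between
# demographic groups. All data below is hypothetical.

def demographic_parity_gap(predictions, groups):
    """Return the spread between the groups' positive-outcome rates."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outputs (1 = approved) for groups A and B.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # hypothetical internal policy threshold
    print("Gap exceeds threshold; flag for review.")
```

The second sketch illustrates the provenance-tracking idea from the AI-Aware Digital Authenticity category: binding a content hash to origin metadata. Production systems, such as C2PA-style manifests, use cryptographically signed and standardised records; this unsigned dictionary is only a toy.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, origin: str) -> dict:
    """Bind a content fingerprint to a claimed origin and timestamp."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # content fingerprint
        "origin": origin,                               # claimed source of the media
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(b"example media bytes", origin="newsroom-camera-17")
print(json.dumps(record, indent=2))
```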

AI Assurance Technology Market Forecasts

👉 See Chapter 4: AI Assurance Tech Market Forecasts

By the year 2030, we estimate the global AIAT market could reach approximately USD 276 billion. This is based on a starting estimate of USD 1.63 billion for the year 2023, a figure commensurate with comparable market research reports at the time of this writing.² Three different modelling methods were developed to size the AIAT market, and they resulted in an average compound annual growth rate (CAGR) of 108%. That is to say, we expect the AIAT market to slightly more than double each year through the remainder of this decade (the sketch after the list below works through this arithmetic). This rapid growth rate will be due to potent market forces, such as:

  • Private sector investment in AI technology adoption,
  • Breakthroughs from AI intersecting with other innovative fields,
  • Public recognition of the potential harms from AI,
  • AI-related compliance requirements around the globe, and
  • Enterprise imperatives to address known risk management gaps.
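
The compounding behind these headline figures can be checked directly. The short sketch below reproduces the 2030 estimate from the 2023 base and the 108% CAGR, and also derives the global AI market size implied by the roughly 15% share discussed next; all inputs come from this report's own figures.

```python
# Reproducing the report's headline forecast arithmetic.
base_2023 = 1.63        # AIAT market size in 2023, USD billions (report estimate)
cagr = 1.08             # 108% compound annual growth rate, i.e. x2.08 per year
years = 2030 - 2023     # seven compounding periods

size_2030 = base_2023 * (1 + cagr) ** years
print(f"Projected 2030 AIAT market: ~${size_2030:.0f}B")  # ~$275B, i.e. roughly $276B

# The report places this at nearly 15% of the 2030 global AI market, implying:
implied_global_ai_2030 = size_2030 / 0.15
print(f"Implied global AI market in 2030: ~${implied_global_ai_2030:.0f}B")  # ~$1.8T
```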

By comparison, the above figures represent an AIAT market equal to nearly 15% of the global AI market in 2030. This percentage aligns with our expectations, given that forthcoming regulatory requirements are anticipated to establish a comparable market floor for compliance-related AIAT solutions. Moreover, these market proportions are comparable to those of cybersecurity, safety testing, and risk management in mission-critical sectors such as aviation or banking.

For these reasons, the true value of the future AIAT market is likely far larger than other research estimates may suggest. It is worth noting, however, that market projections for novel, transformative technologies are subject to a considerable degree of uncertainty. For instance, if all AI technologies were to generate a market five times as large as the projections from global market research firms, our resulting AIAT market modelling would exceed $1 trillion by 2030.

Footnotes

  1. PwC. “Sizing the Prize: What's the Real Value of AI for Your Business and How Can You Capitalise?” 2017. Accessed April 3, 2024. https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf.

  2. Grand View Research. "AI Trust, Risk, & Security Management Market Size & Trends Report." Accessed April 17, 2024. https://www.grandviewresearch.com/industry-analysis/ai-trust-risk-security-management-market-report; SNS Insider. "AI Trust, Risk, and Security Management [AI TRISM] Market Size to Cross USD 6.02 Billion by 2030 Due to Rising Demand for Ethical AI - Research by SNS Insider." GlobeNewswire, January 19, 2024. Accessed April 17, 2024. https://www.globenewswire.com/en/news-release/2024/01/19/2812347/0/en/AI-Trust-Risk-and-Security-Management-AI-TRISM-Market-Size-to-Cross-USD-6-02-Billion-by-2030-due-to-Rising-Demand-for-Ethical-AI-Research-by-SNS-Insider.html; Allied Market Research. "AI Trust, Risk, and Security Management (AI-TRISM) Market." Accessed April 17, 2024. https://www.alliedmarketresearch.com/ai-trust-risk-and-security-management-ai-trism-market-A97526.