Existing and forthcoming legislation, as well as contemporary risk management imperatives, demands a new class of AI safety, security, and governance solutions. These demands apply to cloud compute providers, data vendors, AI labs, downstream AI application developers, and many of the industries looking to adopt advanced AI use cases.
Sub-standard safety and security protocols carry sizable economic ramifications in the imminent age of AI. While this may suggest a need for strict limitations, licensing regimes, and other external governance infrastructures, there exists a vast array of AI harms that we expect to require a different answer: AI Assurance Technology (AIAT). The AIAT market refers to the products (whether software, hardware, or some combination) and services that commercial and state organisations can procure as a means to mitigate AI risks more effectively, more efficiently, or more precisely. As we enter the age of AI, many advocate for a multi-layered approach to defence 1, meaning that a combination of regulations, corporate safety standards, and a variety of AIAT solutions will work in concert to attenuate the risks of AI, with each layer correcting for the weaknesses of the others.
Of the many ways to divide this emerging and evolving market 2, at present we view it as comprising the following four domains (and seven sub-domains):
🛡️ AI-Resilient IT Security | Solutions for safeguarding against damage, access, theft, and other threats to compute hardware, data centres, network infrastructure, AI models, AI systems, and all component parts of an AI application tech stack, through both physical and virtual means.

💯 AI Trustworthiness | Solutions for auditing desirable qualities (e.g., robustness, bias, safety) of AI data inputs, AI models, AI systems, and other components of the application tech stack, through automated testing, benchmarking, expert evaluations, and other quality assessments.

🏛️ AI-Centric Risk Management | Solutions for governing the documentation of organisational standards or compliance information, tracking applicable standards, and provisioning access to external auditors, as well as solutions to support post-deployment management of AI observability, risk monitoring, incident alerts, or the allocation of organisational resources to address known issues.

🤖 AI-Aware Digital Authenticity | Solutions for verifying the origins, permissibility, or legitimacy of digital identities, avatars, news, and other forms of media accessed through public or private networks, through detection systems, provenance tracking, watermarking techniques, and more.
While this report divides the AI Assurance Technology market into discrete groupings ("solution domains"), each focused on the function of safeguarding, auditing, governing, or verifying respectively, the AIAT companies that occupy this emerging market do not always map cleanly to just one sub-domain or even one solution domain. This is because each company often offers multiple product and service lines, where each offering can be classed as providing a different function.
While in some instances AIAT solutions may be aided or driven by AI technology, they do not always rely upon AI to achieve their assurance functionality. In our view, the unifying feature of all AIAT companies is their direct or indirect contribution to the mitigation of AI-caused harms. Additionally, it is worth noting that AI Assurance Technology is not synonymous with the categories of Public Interest Technologies 3, Public Interest AI 4, and AI for Good 5. While there may at times be overlaps between them, because AIAT might inadvertently lead to positive outcomes for non-AI issues, AIAT herein is expressly targeted towards the risks brought about or exacerbated by AI technologies.
Each solution domain is examined through the following lenses:

What is it? | A short definition of the solution domain and its sub-domains.

Why is it imperative? | A recap of noteworthy AI risks and compliance actions that businesses in a given solution domain are likely to mitigate and address.

How might it work? | A non-exhaustive sampling of the hardware, software, and services involved in the process of safeguarding, auditing, governing, or verifying.

Who needs it? | A description of the business case for expenditure towards a given solution domain, and the likely buyers that startups should target when scaling.

What else do investors & founders need to know? | A review of the current state of external governance infrastructure and the likely legislative measures that will enable a well-functioning AIAT market.

Sample companies from the Market Landscape Scan | A small sampling of early-stage or growth-stage startups in each sub-domain of the AIAT market.
Footnotes

1. SafeAI. "AI Risk: The Swiss Cheese Model." Accessed April 18, 2024. https://www.safe.ai/ai-risk ↩

2. Note: While this report details the four most promising categories of AIAT products and services, additional AI risk-mitigating businesses might include organisations improving public awareness or employee education regarding AI risks, as well as risk-focused consultancies that can help AI-adopting sectors comply with emerging legislation, adhere to industry standards, or achieve competitive differentiation through risk management. ↩

3. Public Interest Technologies (def.): An emergent discipline that encompasses a broad array of technologies developed, disseminated, and utilised to benefit all segments of society inclusively and equitably. [New America] ↩

4. Public Interest AI (def.): Similar to PIT, but specifically focuses on the application and implications of AI in best serving public well-being and long-term survival. [Public Interest AI] ↩

5. AI for Good (def.): The use of AI to address global challenges and improve the wellbeing of humanity (e.g., AI applications in healthcare, environmental protection, education, cancer research). [International Telecommunication Union] ↩