
Appendix I | Final Remarks from the Authors

As we conclude this report, we acknowledge that several uncertainties remain that would benefit from further investigation, given additional time and resources. The following non-exhaustive list highlights considerations that should be taken into account when interpreting our analyses and findings. These points emphasise the evolving nature of AIAT and the need for continuous research to keep our analyses relevant and informative.

  • Disambiguate the AIAT solution typology: Our initial AI Assurance Technology (AIAT) solution typology distinguished between "Data, Model, and System Evaluations." As we discovered AIAT companies offering more comprehensive solutions, we consolidated these categories. However, we also recognize that data inputs, foundation models, and downstream AI systems or applications sometimes carry different risks and therefore call for different evaluation solutions. With more resources, we would explore re-separating these categories to clarify the distinctions.
  • Analyse and refine the AIAT terminology: The industry lacks alignment on the meaning of terms like "Observability", "Evaluation", "Explainability", and "Interpretability", with many companies using these terms variably to describe their solutions. This misalignment may have influenced our market typology and led to inaccuracies in our company landscape classifications. More thorough scrutiny of how these terms are used across different contexts could clarify their meanings and improve the accuracy of our classifications. Moreover, this exercise may help to better distinguish proactive "AI Data, Model, and System Evaluations", aimed at safety and security throughout AI development, from the more post-hoc or reactive evaluations we typically see in the "Observability, Monitoring, and Incident Alerts" category.
  • Confirm company focus on AI risk mitigation: Many companies built to improve AI system performance may indirectly enhance safety and security. It is imperative to discern whether these companies primarily aim to mitigate AI risks or whether these benefits are merely a byproduct. This distinction is critical for accurately categorising companies either within the AIAT sector or more broadly within the AI market.
  • Validate the Market Growth Estimates: A more detailed bottom-up analysis could help validate our projections by disaggregating and closely examining each part of the AIAT market: safeguarding, auditing, governing, and verifying (see the sketch after this list). Such an analysis would also lend itself to evaluating each category's relative proportion of the market and any shifts in this composition that might be expected. Nevertheless, both this bottom-up approach and our existing models rest on assumptions that are subject to variability, such as the pace of regulatory developments and AI adoption rates.
  • Deep-dive on DeepTech and Hardware Security: Our exploration of opportunities in hardware and compute governance has been limited. A dedicated analysis of how DeepTech startups could intersect with AIAT solutions in hardware security might reveal untapped opportunities, especially given the complex technical knowledge required in this area.
  • Review our Risk Framework and Solution Typology: Our frameworks for AI risks and AIAT solutions draw from widely recognized sources and public discussions; we aimed to take an observational or empirical approach rather than a normative one. However, emerging risks and solutions, especially those from less prominent or non-English-language sources, may not be fully captured herein. Regular updates incorporating a broader range of sources are necessary to ensure our frameworks continue to accurately reflect the latest developments in the AI and AIAT landscapes.
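To illustrate the structure of the bottom-up validation mentioned above, the following minimal sketch sums per-segment revenue estimates across the four AIAT categories. All figures (buyer counts and average contract values) are hypothetical placeholders for exposition, not estimates from this report.

```python
# A minimal sketch of a bottom-up market-size check. Every number below is a
# hypothetical placeholder, not an estimate from this report.

# Per-segment assumptions: (number of buyers, average annual contract value in USD).
segments = {
    "safeguarding": (1_000, 50_000),
    "auditing": (400, 120_000),
    "governing": (600, 80_000),
    "verifying": (200, 150_000),
}

# Bottom-up total: sum of buyers x contract value across segments.
total = sum(buyers * acv for buyers, acv in segments.values())
print(f"Illustrative total AIAT market: ${total:,.0f}")

# Relative composition per category, useful for spotting expected shifts.
for name, (buyers, acv) in segments.items():
    revenue = buyers * acv
    print(f"  {name:12s} ${revenue:>13,.0f}  ({revenue / total:.1%} of total)")
```

Sensitivity to the underlying assumptions, such as regulatory pace and AI adoption rates, could then be probed by re-running the calculation over ranges of these placeholder inputs.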