AI-Centric Risk Management

What is it?

AI-Centric Risk Management is a crucial concept that emphasises governing AI technologies through formalised operational oversight of their deployment and maintenance. This market category is founded on the principle that responsible AI management requires companies to adhere to robust safety standards, monitor technology implementations diligently, and respond swiftly to any arising incidents. It involves software platforms designed to assist with technology management and organisational governance throughout the development and deployment stages of AI systems. This AIAT domain can be further categorised into two main sub-domains: (1) Quality, Conformity, & Audit Management, and (2) Observability, Monitoring, and Incident Response.

Quality, Conformity, & Audit Management platforms are instrumental in ensuring conformity with industry best practices as well as legislated compliance requirements, facilitating internal or external reviews of AI systems through tools and procedures reminiscent of quality management systems (QMS) or enterprise GRC platforms. These platforms help organisations track compliance, review documentation, and understand relevant policies. Solutions in this sub-domain may support internal audits, the preparation of documentation for external auditors, and the provisioning of structured access to AI evaluators.

Observability, Monitoring, and Incident Response solutions, on the other hand, focus on tracking AI deployments, identifying and managing risks, and facilitating incident response through efficient resource and staff allocation. This includes monitoring live production environments to detect and address risks, ensuring the safety and reliability of AI functions. Solutions in this sub-domain tend to revolve around operationalising risk management of AI systems post-deployment.

Why is it imperative?

Both sub-domains of AI-Centric Risk Management help to address AI risks by assuring that compliance and risk management practices are effectively implemented, documented, and monitored. This helps to prevent AI Misuse, Technical Failures of AI, and Vulnerabilities to Exogenous Interference. By way of regulation, products and services in this solution category could address requirements such as the following, to name just a few:

  • Implement mandatory human oversight mechanisms
  • Provide comprehensive instruction and documentation
  • Maintain documentation on risks and report to regulatory authorities
  • Regularly assess potential risks linked to AI usage in critical infrastructure

How might it work?

| Sub-Domain | Example Solution Ideas | Solution Type |
| --- | --- | --- |
| Quality, Conformity, and Audit Management | Policy libraries & testing tools | Software |
| Quality, Conformity, and Audit Management | Policy and conformity management systems | Software |
| Quality, Conformity, and Audit Management | Tools for structured access provisioning | Software |
| Observability, Monitoring, and Incident Response | Real-time model observability platforms | Software |
| Observability, Monitoring, and Incident Response | AI infrastructure monitoring systems | Software |
| Observability, Monitoring, and Incident Response | AI model lifecycle management systems | Software |
| Observability, Monitoring, and Incident Response | Incident management platforms | Software |

Quality, Conformity, & Audit Management

In short, these are software platforms allowing for internal documentation, comparisons of internal and external documentation, or provisioning access to review documentation. Policy libraries and testing tools simulate real-world scenarios and edge cases to evaluate the robustness of internal corporate policies. They also assist with the continuous iteration of organisational governance protocols, aligning them with an evolving regulatory landscape [1]. By emulating edge cases and potential policy breaches within a controlled environment, these tools enable organisations to rectify weaknesses in their compliance stance and minimise risk exposure. Businesses in this sub-domain may also offer pre-packaged software with reporting and conformity management systems. Such a system may offer accessible starting points for organisational compliance teams, with pre-made or sometimes tailor-made templates [2]. Organisations can leverage these templates to save time as they keep track of policy updates, or to facilitate interoperable compliance efforts across multinational business activity, industry mergers, or joint ventures that must manage complex AI risk management requirements. Moreover, these features can help to streamline the process of conducting internal audits or internal risk assessments [3]. In this way, AI-centric governance tools can help organisations establish and manage a comprehensive risk management program.
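
To make this concrete, below is a minimal sketch of how a policy library with automated conformity checks might be structured, assuming policies are encoded as machine-readable controls, each carrying a testable check that is run against collected audit evidence. The names used (PolicyControl, run_conformity_checks, the HO-01 control) are purely illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical policy-library sketch: each control pairs a requirement with a
# programmatic check evaluated against collected audit evidence.

@dataclass
class PolicyControl:
    control_id: str
    description: str
    check: Callable[[dict], bool]  # True if the evidence satisfies the control

@dataclass
class ConformityReport:
    passed: list = field(default_factory=list)
    failed: list = field(default_factory=list)

def run_conformity_checks(controls: list, evidence: dict) -> ConformityReport:
    """Evaluate every control against the evidence and sort the results."""
    report = ConformityReport()
    for control in controls:
        bucket = report.passed if control.check(evidence) else report.failed
        bucket.append(control.control_id)
    return report

# Example control: high-risk systems must document a human oversight mechanism.
controls = [
    PolicyControl(
        control_id="HO-01",
        description="High-risk AI systems document a human oversight mechanism",
        check=lambda e: e.get("risk_tier") != "high" or bool(e.get("human_oversight_doc")),
    ),
]

report = run_conformity_checks(controls, {"risk_tier": "high", "human_oversight_doc": None})
print(report.failed)  # ['HO-01'] -- flagged for the internal audit trail
```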

Finally, there exists a need for a secure solution to the novel challenges that arise when allowing external parties to conduct assessments of frontier AI models, training design, cybersecurity, or hardware governance. High-quality tools or APIs for structured external access provisioning could help minimise risks of proliferation and leakage of critical information (e.g., model weights). Such technologies could facilitate AI Trustworthiness services for proprietary AI models by giving minimal but sufficient access to third-party solution providers [4].
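
The following sketch illustrates one way a provider might issue scoped, time-limited access grants so that an external evaluator can query a model without any route to its weights. The token scheme and names (issue_grant, authorise, the query:v1 scope) are hypothetical simplifications; a production system would add rate limiting, audit logging, and proper key management.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical structured-access sketch: signed, expiring grants scope what an
# external evaluator may do (e.g. query a model) and exclude everything else.

SECRET_KEY = secrets.token_bytes(32)  # held server-side by the model provider

def issue_grant(evaluator_id: str, scope: str, ttl_seconds: int) -> str:
    """Issue a signed, time-limited grant for one narrow scope."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{evaluator_id}|{scope}|{expiry}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def authorise(token: str, required_scope: str) -> bool:
    """Verify the signature and expiry, and confirm the scope matches."""
    payload, _, signature = token.rpartition("|")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    _, scope, expiry = payload.split("|")
    return scope == required_scope and time.time() < int(expiry)

token = issue_grant("third-party-lab-7", scope="query:v1", ttl_seconds=3600)
assert authorise(token, "query:v1")          # permitted: model queries only
assert not authorise(token, "weights:read")  # denied: no path to the weights
```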

Observability, Monitoring, and Incident Response

"Observability" in ML is akin to monitoring the vital signs of an AI system. AI observability platforms involve continuously collecting real-time data across the entire AI system—including the model, its data, application infrastructure, and performance—to oversee its overall health. They do this through dynamic dashboards, visualising key metrics about an AI system's inputs, internal states, and decisions 5. Observability techniques enable MLOps practitioners to identify and address potential reliability issues in the ML pipeline, such as deteriorating data quality, model drift, declining performance, or unexpected behaviours 6. While observability may include testing or evaluations, it primarily focuses on enhancing system reliability, efficiency, and performance, rather than explicitly measuring safety, security, or robustness. However, end-to-end performance improvements can also reduce the risks associated with AI Technical Failures, AI system Vulnerabilities, or the potential for AI Misuse.

AI infrastructure monitoring systems ensure that both AI-centric systems and AI-enabling infrastructure (e.g., cloud computing services) continue to operate as intended without fail, avoiding added costs, security issues, and more. These platforms measure usage patterns, hardware temperatures, resource consumption, or system downtime risks [7]. Monitoring helps maintain the overall stability of AI services, ensuring organisations are notified before AI systems are derailed from continuous or optimal operation. This is especially relevant to high-trust or mission-critical sectors (e.g., finance, health, critical public infrastructure).
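
The following is a minimal sketch of the kind of threshold-based health check such platforms run continuously. The metric names and limits are hypothetical; a real deployment would scrape them from hardware agents or a cloud provider's metrics API and page an on-call rotation rather than print to the console.

```python
# Illustrative threshold-based health check of the kind an AI infrastructure
# monitoring agent runs on a fixed interval. Metric names and limits here are
# hypothetical placeholders for values scraped from real hardware exporters.

THRESHOLDS = {
    "gpu_temperature_c": 85.0,      # sustained heat risks throttling or damage
    "gpu_memory_used_pct": 95.0,    # near-OOM conditions often precede outages
    "request_error_rate_pct": 2.0,  # rising error rates often precede incidents
}

def read_metrics() -> dict:
    """Stub for a metrics scrape; a real agent would query hardware or cloud APIs."""
    return {"gpu_temperature_c": 88.1, "gpu_memory_used_pct": 71.0,
            "request_error_rate_pct": 0.4}

def check_once(notify) -> None:
    """Compare the latest scrape against limits and alert on every breach."""
    metrics = read_metrics()
    for name, limit in THRESHOLDS.items():
        if metrics.get(name, 0.0) > limit:
            notify(f"{name}={metrics[name]} exceeds limit {limit}")

check_once(notify=print)  # a monitoring agent repeats this on a schedule and
                          # routes alerts to an on-call rotation, not stdout
```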

Another crucial layer of AI governance is that of incident management platforms. These platform-based solutions support organisations with the identification and remediation of AI-caused incidents. They can also empower individuals—be they employees, users, or broader stakeholder groups—to report issues with an AI system. Such mechanisms should be made easily accessible and, in some cases, anonymous. Many will expect AI developers and AI-adopting enterprises to have robust and ethical practices for the identification, communication, and remediation of AI-related incidents, and some regulators may also set minimum standards to harmonise these protocols [8].
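
To illustrate, the sketch below models an incident intake record that supports anonymous reporting. The field names, severity scale, and status flow are illustrative choices rather than an industry standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional
import uuid

# Hypothetical incident-intake sketch: anonymity is preserved simply by
# omitting the reporter field; triage and remediation track the status field.

@dataclass
class IncidentReport:
    incident_id: str
    system_name: str
    description: str
    severity: str                   # e.g. "low" | "medium" | "high" | "critical"
    reporter: Optional[str] = None  # None keeps the reporter anonymous
    status: str = "open"            # "open" -> "triaged" -> "remediated"
    reported_at: str = ""

def file_report(system_name: str, description: str, severity: str,
                reporter: Optional[str] = None) -> IncidentReport:
    """Create a timestamped report; identity is only stored if supplied."""
    return IncidentReport(
        incident_id=str(uuid.uuid4()),
        system_name=system_name,
        description=description,
        severity=severity,
        reporter=reporter,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )

# An employee flags a model behaviour anonymously; triage sees no identity.
report = file_report("loan-scoring-v3", "Model denies applicants from one postcode",
                     severity="high")
print(report.incident_id, report.reporter)  # reporter is None
```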

Who needs it?

| Prospective Customer | Est. WTP for Quality & Conformity Reporting | Est. WTP for Observability & Monitoring |
| --- | --- | --- |
| High-Tech Manufacturers | High | Low |
| Data Collectors & Brokers | High | Low |
| Cloud Compute Providers | High | Low |
| Frontier AI Labs | High | Medium |
| AI Systems Developers | High | High |
| Industry Enterprises | High | High |

Collectively, managing AI's potential risks, impending regulations, and industry-specific standards requires comprehensive oversight. Organisations across the AI lifecycle are recognising the need for solutions that improve efficiency and ensure conformity throughout their governance, risk management, and compliance processes. Quality and conformity reporting platforms are particularly crucial for Chief Risk Officers, compliance managers, and their teams. These roles are tasked with keeping up with the latest policies and standards relevant to their specific industries. Such tools become indispensable not only for conducting periodic internal audits but also for preparing documentation for external auditors. In heavily regulated sectors adopting AI-centric systems, the efficiency of these platforms supports risk management and compliance teams in maintaining stringent adherence to industry norms and regulatory requirements. Ultimately, this helps companies minimise their regulatory burden and avoid penalties for non-compliance.

Demand for observability, monitoring, and incident response platforms is concentrated in sectors with deployed in-production AI applications, where they provide real-time insights into AI system performance and security. Chief Information Security Officers, IT operations managers, and incident response teams in industries like finance, healthcare, and automotive might be the key buyers, depending on the mix of features in a given software platform. These professionals are responsible for managing the risks of integrating AI into their operational fabric and stand to benefit significantly from the ability to swiftly detect anomalies, manage risks, and respond to incidents. By facilitating efficient resource allocation and engaging staff during incidents, these solutions play a pivotal role in the post-deployment governance of AI systems, upholding system integrity and user trust while minimising losses from any incidents that do arise.

What else do investors & founders need to know?

The significance of data privacy and information security extends into the realm of AI Quality, Conformity, and Access solution providers. These vendors engage in activities that inherently involve handling sensitive data, necessitating stringent security measures. The potential for cyberattacks highlights the critical need for advanced governance protocols, including user authorisation, encryption, and comprehensive cybersecurity measures. This is particularly vital for vendors serving safety-critical sectors, where the stakes for information security are significantly higher. In jurisdictions like the UK and the US, adherence to national cybersecurity standards will be legally mandated for vendors looking to participate in public tenders or offer services to public institutions. The UK Cyber Essentials Scheme [9] and the Federal Risk and Authorization Management Program (FedRAMP) [10] in the US are examples of cybersecurity accreditation schemes that AI-Centric Risk Management vendors must comply with. These frameworks exemplify the growing recognition of the importance of cybersecurity measures in the context of AI technologies, setting a baseline for security practices that vendors must meet to operate within these markets.

Observability, Monitoring, and Incident Response platforms make it easier for companies, auditors, and regulators to identify and respond to incidents, vulnerabilities, or policy breaches. The EU AI Act, which mandates AI entities to report significant incidents, highlights a growing discussion on whether third parties should also report such incidents [11]. While current regulations may not universally require third-party reporting, it is prudent for solution providers to anticipate potential mandates. Additionally, vendors in this domain may have to adhere to existing whistleblower protection regulations, like the European Whistleblower Directive [12] or the Justice Department's Civil Cyber-Fraud Initiative in the United States [13], which enable workers to report wrongdoing and protect them from corporate retaliation. In addition to identifying vulnerabilities, some vendors in this domain will be responsible for compiling and inventorying vulnerabilities of AI applications. While aimed at strengthening AI systems, these databases must be handled with care to prevent misuse by malicious actors [14]. Along with transparency, implementing effective processes for confidentiality and data privacy will be critical for solution providers.

CASE STUDY

Supply Chain Attacks: Lessons from the SolarWinds Incident

  • Year: 2019
  • Company: SolarWinds
  • Industry: IT performance management and monitoring

The SolarWinds breach marks a pivotal moment for third-party tech monitoring providers, emphasising the need for stringent cybersecurity measures. As a company that managed and monitored IT infrastructure, SolarWinds had access to sensitive data and workflow details.

By injecting malicious code into SolarWinds' Orion platform updates, attackers gained access to as many as 18,000 customer systems worldwide, including sensitive information from US government agencies, Microsoft, and Cisco [15]. Detection of the breach was delayed by over a year due to the sophisticated nature of the attack, which mimicked legitimate network traffic and evaded detection mechanisms [16].

SolarWinds and its Chief Information Security Officer were subsequently charged by the SEC with fraud and internal control failures for repeatedly ignoring red flags [17].

This incident serves as a stark reminder of the critical need for robust cybersecurity defences in information-sensitive environments—especially for vendors of technology monitoring and management solutions, which, if compromised, can serve as a gateway for widespread espionage and data theft.

It underscores the importance of secure software development practices, continuous monitoring for security threats, and transparent communication with stakeholders about cybersecurity risks and incidents.

For AI-related platforms that support conformity reporting, AI observability, and AI application monitoring, the SolarWinds breach highlights the necessity of stringent security measures and adherence to evolving standards that protect sensitive data.

Sample Companies from the Venture Landscape:

As at 15 April 2024: of the 100 startups in our AI Assurance Technology landscape scan, we uncovered 44 offering AI-Centric Risk Management solutions—representing the majority of AI Assurance Technology companies discovered to date. These companies included 39 seed/early-stage startups and 5 growth/late-stage companies. Here's a brief sampling of those startups:

Simplify AI Safety, Security, and Compliance.

GRACE Governance for Large Language Models (LLMs). Address and mitigate the concerns and risks associated with the use of LLMs in your organization.

Credo AI is the intelligence layer for AI projects across your organization. Track, assess, report, and manage AI systems you build, buy, or use to ensure they are effective, compliant, and safe.

A governance tool to help you build and deploy safe, ethical, and transparent AI.

The AI Observability Platform for Enterprise ML Teams.

One platform to guide and govern the entire lifecycle of your AI.

Note

The above companies may also offer products and services that fit in one of the other three solution domains. All relevant domain classifications and the full list of companies surfaced through our landscape scan can be reviewed in the Appendix: "AIAT Landscape Logo Map".

Footnotes

  1. Preamble. "Solution." Last modified 2023. https://www.preamble.com/solution. Accessed March 13, 2024.

  2. Cabrera, Ángel Alexander, Abraham J. Druck, Jason I. Hong, and Adam Perer. "Discovering and Validating AI Errors with Crowdsourced Failure Reports." Proceedings of the ACM on Human-Computer Interaction 5, no. CSCW2 (2021): 1-22. https://doi.org/10.1145/3479569; and Saidot. "Homepage." Last modified 2023. https://www.saidot.ai/. Accessed March 13, 2024.

  3. Risk Assessment (def.): A process typically carried out by internal risk management teams to proactively pinpoint, review, and prepare for the potential risks to an organisation's operations, finances, compliance, and more, before such risks manifest.

  4. Bucknall, Benjamin S., and Robert F. Trager. "Structured Access for Third-Party Research on Frontier AI Models: Investigating Researchers' Model Access Requirements." (2023). https://cdn.governance.ai/Structured_Access_for_Third-Party_Research.pdf

  5. Dynatrace. “AI/ML Observability.” Last modified 2024. https://docs.dynatrace.com/docs/observe-and-explore/dynatrace-for-ai-observability. Accessed April 11, 2024.

  6. Censius. "AI Observability." https://censius.ai/. Accessed April 4, 2024.

  7. Fiddler AI. "ML Model Monitoring." Accessed April 2, 2024. https://www.fiddler.ai/ml-model-monitoring

  8. Schuett, Jonas, Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, and Ben Garfinkel. "Towards best practices in AGI safety and governance: A survey of expert opinion." arXiv preprint arXiv:2305.07153 (2023). https://doi.org/10.48550/arXiv.2305.07153

  9. National Cyber Security Centre, "Cyber Essentials Scheme," accessed April 6, 2024, https://www.ncsc.gov.uk/cyberessentials/overview.

  10. U.S. General Services Administration. "Federal Risk and Authorization Management Program (FedRAMP)." Accessed April 6, 2024. https://www.fedramp.gov/.

  11. European Commission, "Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts," Article 64, accessed April 6, 2024, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.

  12. Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who report breaches of Union law, Official Journal of the European Union L 305 (November 26, 2019): 17-56.

  13. Özlü Dolma, "Cybersecurity Whistleblower Protection: A Comparison of the US and the EU Approaches," Pamukkale University Journal of Business Research 10, no. 2 (2023): 615-631.

  14. Miles Brundage et al., "Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims" (arXiv preprint, April 2020), https://arxiv.org/abs/2004.07213.

  15. Simplilearn. "All About the SolarWinds Attack." Accessed April 2, 2024. https://www.simplilearn.com/tutorials/cryptography-tutorial/all-about-solarwinds-attack#what_is_solarwinds

  16. Government Accountability Office (GAO). "SolarWinds Cyberattack Demands Significant Federal and Private Sector Response." GAO Blog. November 10, 2021. Accessed April 2, 2024. https://www.gao.gov/blog/solarwinds-cyberattack-demands-significant-federal-and-private-sector-response-infographic

  17. Securities and Exchange Commission (SEC). "SEC Charges SolarWinds and Chief Information Security Officer with Fraud, Internal Control Failures." Press Release 2023-227. October 30, 2023. Accessed April 2, 2024. https://www.sec.gov/news/press-release/2023-227