AI-Resilient IT Security

What is it?

The primary function of this solution domain is safeguarding against unethical or unauthorised access to information, damage or disruption to digital infrastructure, and other forms of interference in either AI development or AI deployment. By AI deployment, we also mean the public and commercial sectors where organisations are implementing AI-driven use cases (e.g., self-driving vehicles, precision medicine, 6G internet). The term Information Technology (IT) Security refers to physically protecting hardware as well as virtually defending software environments against deliberate threats 1. Resiliency in this context means preparing for the unique security vulnerabilities surrounding AI-driven systems and processes, as well as for novel security threats arising from potential AI misuse 2. Accordingly, AI-Resilient IT Security can be broken down into two sub-domains: (1) hardware security; and (2) data privacy and cybersecurity.

Hardware security involves safeguards to prevent unauthorised access, data theft, the tampering with or augmentation of devices, and the infliction of physical damage. With regard to the AI tech stack, these solutions aim to mitigate potential vulnerabilities in compute chipsets, network architecture, cloud compute infrastructure, data centres, AI training environments, and local application hardware (e.g., network devices in a connected and automated vehicle).

Data privacy and cybersecurity involves safeguards to protect against attempts to access private information, damage or corrupt an AI-driven process, make unauthorised use of AI, or covertly surveil the inputs and outputs of an AI system. Moreover, given modern-day advances in AI, the possibility of AI-driven cyberattacks, whether from human-led misuse or from an AI agent's misaligned instrumental goals, demands modernised security safeguards.

Why is it imperative?

AI-Resilient IT Security helps to mitigate the risks we have classified as AI Misuse in all forms (i.e., Physical, Digital, or Informational), Technical Failure (Misalignment), and Vulnerabilities to Exogenous Interference. Inadequate security in mission-critical sectors like autonomous vehicles, finance, or healthcare could carry severe operational, legal, and reputational consequences, underscoring the need for robust AI security investments. As described in Chapter 2 above, regulators are attempting to combat such concerns by asking companies to, among other things:

  • Disclose large-scale cloud computing and authenticate users;
  • Record accidents and report them to regulatory authorities;
  • Monitor AI systems for unauthorised alterations, adversarial attacks, and tampering risks; and
  • Design and implement suitable security measures for AI and IT infrastructure.

How might it work?

| Sub-Domain | Example Solution Ideas | Solution Type |
| --- | --- | --- |
| Hardware Security | Hardware-integrated monitoring mechanisms | Hardware |
| Hardware Security | Tamper-proof device enclosures | Hardware |
| Hardware Security | Specialised chips to compute encrypted data | Hardware |
| Data / Cybersecurity | PETs with Access Control Mechanisms | Software |
| Data / Cybersecurity | Data Encryption Tools | Software |
| Data / Cybersecurity | AI Firewalls | Software |

Hardware Security

When it comes to data centres, technical standards like ISO/IEC 27001 describe extensively the traditional measures necessary for safeguarding against unauthorised access to and use of computing resources 3, but the rapid pace of AI advances calls for continual planning for new threats and vulnerabilities 4. Hardware-integrated monitoring mechanisms could, for example, detect network activity patterns indicative of AI model training and be used to trace AI chips across their lifecycle 5. To further safeguard advanced chips from being stolen or obtained through illegitimate means, we may also need tamper-proof enclosures: physical enclosures designed to prevent intrusion without adversely affecting chip functionality or performance 6. Additionally, hardware-integrated logging mechanisms could be embedded directly into AI processing units, significantly enhancing efforts to track the scale of compute being utilised at any given moment. As advanced computing resources become more prevalent, it is crucial for organisations with their own compute clusters to have robust mechanisms in place for detecting and swiftly responding to any unauthorised or illegitimate use of their resources. Moreover, this kind of hardware-enabled governance can validate whether cloud computing customers are performing large-scale training runs within the terms of their licence 7. Methods to maintain the integrity of AI supply chains may involve uniquely fingerprinting physical devices 8, or the ability to remotely disable unauthorised workloads 9. For instance, there are now novel on-device attestation mechanisms that safeguard intellectual property by embedding a device-specific fingerprint in a DNN's weights 10. This fingerprint ensures that only authorised DNNs can run on the device, thereby preventing unauthorised use of compute hardware.
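To illustrate the attestation concept, the sketch below binds a model to a single device by checking a keyed fingerprint of its weights. This is a minimal illustration of the general idea, not the DeepAttest implementation; the device secret, helper names, and weight serialisation are all invented for the example (a real system would derive the secret from a PUF or secure enclave and embed the fingerprint in the weights themselves).

```python
import hashlib
import hmac

# Hypothetical device-unique secret; in practice this would come from a
# physically unclonable function (PUF) or a secure enclave, not from code.
DEVICE_SECRET = b"device-unique-secret"

def fingerprint(weights: bytes) -> bytes:
    """Derive a device-specific fingerprint over serialised model weights."""
    return hmac.new(DEVICE_SECRET, weights, hashlib.sha256).digest()

def attest_and_run(weights: bytes, expected_fp: bytes) -> None:
    """Refuse to execute a model whose fingerprint does not match this device."""
    if not hmac.compare_digest(fingerprint(weights), expected_fp):
        raise PermissionError("Model not authorised for this device")
    print("Fingerprint verified; model may run.")

weights = b"\x01\x02\x03"               # stand-in for serialised DNN weights
authorised_fp = fingerprint(weights)    # issued when the model is licensed
attest_and_run(weights, authorised_fp)  # passes only on the provisioned device
```

Because the fingerprint is keyed to the device secret, the same weights copied to another device would fail verification, which is the property that makes this a deterrent against unauthorised compute use.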

Finally, it is also plausible that there will be growing demand for specialised chips capable of computing on encrypted data 11. Such chips could greatly enhance privacy by enabling AI models to be trained or fine-tuned without the need to first decrypt sensitive information. While it is likely that chip design and component manufacturing would be handled by incumbent high-tech firms, there may be certain cases where outsourced R&D, corporate acquisitions, or service provider partnerships involve deep-tech third-party innovations.
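To make "computing on encrypted data" concrete, the toy sketch below exploits textbook RSA's multiplicative homomorphism: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The parameters are deliberately tiny and the scheme is insecure, so treat this purely as an illustration of the principle; production homomorphic encryption relies on lattice-based schemes (e.g., BFV, CKKS), which such specialised chips would accelerate.

```python
# Toy multiplicative homomorphism with textbook RSA -- insecure,
# well-known demo parameters, for illustrating the principle only.
n = 3233           # n = 61 * 53
e, d = 17, 2753    # public and private exponents

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 6, 7
c = (encrypt(a) * encrypt(b)) % n  # the computation touches only ciphertexts
assert decrypt(c) == a * b         # recovers 42 without exposing a or b mid-computation
```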

Data Privacy and Cybersecurity

The ISO/IEC 27001 standard also stipulates the need to examine an organisation's information security risks and account for threats to networks, computing resources, proprietary data, and endpoints. The same goes for cloud security measures, such as privacy-enhancing technologies (PETs) with access control mechanisms, which allow organisations to grant access permissions to specific users and enforce related information security policies. Access controls are of critical importance because they help safeguard one of the most attractive targets for data theft: the model parameters of the most advanced foundation models 12. Protecting this information will require a combination of both traditional and novel security solutions.
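As a simple illustration of the access control side, a minimal role-based policy check might look like the sketch below; the roles, users, and permission strings are assumptions invented for this example.

```python
# Minimal role-based access control (RBAC) sketch; roles, users, and
# resources here are illustrative, not from any particular product.
POLICY = {
    "ml-engineer": {"read:training-data", "read:model-weights"},
    "security-admin": {"read:audit-logs", "grant:permissions"},
    "contractor": {"read:public-docs"},
}

USERS = {"alice": "ml-engineer", "bob": "contractor"}

def is_authorised(user: str, permission: str) -> bool:
    """Check whether the user's role grants the requested permission."""
    role = USERS.get(user)
    return permission in POLICY.get(role, set())

assert is_authorised("alice", "read:model-weights")      # granted to ml-engineer
assert not is_authorised("bob", "read:model-weights")    # denied to contractor
```

The point of such mechanisms is that access to the most sensitive artefacts, such as foundation model weights, is mediated by an explicit, auditable policy rather than by ad hoc credentials.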

When organisations rely on external providers for AI services, safeguarding data confidentiality becomes challenging, especially as data traverses cloud platforms hosted in various, potentially undisclosed, countries 13. Traditional and improved data encryption methods will be integral to keeping competitive intelligence and personally identifiable information secure. Encryption solutions scramble sensitive data, rendering it uninterpretable during transmission and, in some cases, allowing it to be processed without ever being read 14.
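For basic encryption of data in transit or at rest, the sketch below uses the Fernet symmetric scheme from Python's widely used cryptography library; the payload and in-code key handling are simplifying assumptions (production systems would rely on TLS for transport and a managed key service for storage).

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative only: in practice keys live in a KMS/HSM, never in code.
key = Fernet.generate_key()
f = Fernet(key)

# The sensitive record is unreadable while in transit or at rest...
token = f.encrypt(b"customer_id=4521; diagnosis=...")

# ...and only a holder of the key can recover it.
assert f.decrypt(token) == b"customer_id=4521; diagnosis=..."
```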

AI foundation models are themselves affording attackers new means of attack. For instance, LLM-based spear-phishing, in which targets are manipulated into divulging sensitive information, is but one realistic example 15. A broader set of solutions is also required for detecting and classifying malware from AI-driven traffic, or adversarial attacks targeting AI (e.g., poisoning attacks, inference attacks) 16. AI Firewalls represent one component of a holistic security suite for modern-day organisations, offering real-time validation of API-mediated inputs and outputs. This can help shield deployed AI models from threats injected into AI prompts, preventing unwanted uses of proprietary AI applications or the unintended release of confidential information 17 18.
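A minimal sketch of the kind of input/output validation an AI firewall performs is shown below. The deny-list patterns and redaction rule are invented for illustration; commercial AI firewalls typically use trained classifiers and policy engines rather than simple regexes.

```python
import re

# Invented, illustrative deny-list of prompt-injection cues and an output
# filter for credential-like strings.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def screen_input(prompt: str) -> str:
    """Block prompts matching known injection cues before they reach the model."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("Prompt blocked by AI firewall")
    return prompt

def screen_output(completion: str) -> str:
    """Redact credential-like strings from model output before release."""
    return SECRET_PATTERN.sub("[REDACTED]", completion)

screen_input("Summarise this quarterly report.")           # allowed through
print(screen_output("The api_key: sk-123 was rotated."))   # credential redacted
```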

Who needs it?

| Prospective Customer | Est. WTP for H/w Security | Est. WTP for Cybersecurity |
| --- | --- | --- |
| High-Tech Manufacturers | High | High |
| Data Collectors & Brokers | High | High |
| Cloud Compute Providers | High | High |
| Frontier AI Labs | High | High |
| AI Systems Developers | High | High |
| Industry Enterprises | High | High |

AI-Resilient IT Security solutions are a necessity for organisations spanning technology supply chains, AI development ecosystems, and industries adopting AI applications. This broad spectrum of prospective customers, including cloud compute providers, high-tech companies, and insurance companies, underscores a substantial addressable market for modern security solutions in the age of AI. These entities have prioritised, and will continue to prioritise, robust security systems for regulatory adherence, to prevent unauthorised training runs, to protect intellectual property, and as a crucial investment to mitigate the financial risks associated with potential AI-driven security incidents.

The financial ramifications of inadequate security measures can be profound, ranging from operational disruptions that directly impact revenue streams to significant expenditures on legal settlements following data breaches. In high-stakes sectors like autonomous vehicle manufacturing, a single security lapse could lead to accidents, endangering lives and incurring massive liability and insurance costs. For financial institutions leveraging AI for trading or fraud detection, security breaches could result in substantial financial losses from digital asset corruption, fraudulent transactions, or regulatory fines. Healthcare organisations, relying on AI for patient care and data analysis, face the risk of compromised patient data, leading to erosion of trust and penalties under privacy laws. By investing in AI-Resilient IT Security, AI-adopting enterprises can avert not only direct financial losses but also indirect costs such as reputational damage, competitive disadvantage due to eroded customer trust, or limited commercial access to countries and sectors with heightened security concerns.

Newer AI-Resilient IT Security companies breaking into the space should look towards the CISO's department (i.e., information security teams), or its industry equivalent on the org chart. These teams oversee the security of IT infrastructure and data through both internal investments and the procurement of third-party products and services. Insurance companies must be concerned not only with their own security but also with that of their client organisations; AIAT offerings will therefore be of interest to insurance risk actuaries and product development managers building new risk frameworks and insurance products that account for the level of AI-Resilient IT Security measures implemented by their insured clients.

What else do investors & founders need to know?

The impending adoption of AI-centric systems is prompting regulatory bodies to turn their attention towards hardware as a particularly effective point of intervention in the governance of AI technologies 19. Examples such as NIST IR 8320, the EU Cyber Resilience Act 20, the EU Cloud Certification Scheme, and ENISA's reporting showcase this increasing legislative focus 21. The wave of regulatory attention includes the Digital Operational Resilience Act (DORA) 22, which emphasises the need for robust ICT security measures for financial entities and third-party ICT solution providers (including "hardware as a service" vendors). DORA contains additional provisions for "critical third-party providers" of essential services to the finance sector, including mandatory incident reporting, vulnerability assessments, and compliance with technical and implementation standards determined by the European Supervisory Authorities (ESAs). As evidenced by DORA, hardware-related standards for AIAT companies in this domain will be stricter, and more important, for critical infrastructure such as banking, national defence, or the public sector. Vendors specialising in hardware security solutions stand to gain a significant competitive edge by adhering to the latest information security standards and achieving the necessary accreditations.

Likewise, modernised data and information security standards are a cross-industry imperative for all companies developing or making use of AI. Privacy and cybersecurity solution vendors must navigate existing regulations such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) 23. Considering the consequential role of AI-resilient security companies, it is crucial that they comply with current and evolving standards, and help their clients do so as well. Industry standards like ISO/IEC 27001, sector-specific regulations such as DORA, and accreditation schemes resembling those of the EU's Cybersecurity Act 24 become essential tools for a well-functioning ecosystem of safeguards against cyberattacks, data breaches, and other AI-driven risks.

CASE STUDY

When Trust is Breached: Re-evaluating Cybersecurity Partnerships

  • Year: 2013
  • Companies: Target Corporation, Trustwave
  • Industry: Managed Security Solutions (MSS)

The 2013 data breach at Target Corporation, involving the loss of 40 million credit and debit card numbers and 70 million other customer details, starkly illustrates the high stakes involved in cybersecurity partnerships.

This case, involving Trustwave, Target's Managed Security Service (MSS) provider, underscores the pivotal role these third-party vendors play in safeguarding sensitive information. Despite Trustwave's engagement for security and monitoring services, critical vulnerabilities within Target's systems went undetected, allowing attackers unfettered access for weeks 25.

The breach not only highlighted Trustwave's failure in identifying and responding to suspicious activities but also raised questions about the effectiveness of existing legislative and industry standards for managed security solutions providers.

This data breach significantly influenced the governance of third-party security providers, spurring uptake of Third-Party Risk Management (TPRM) 26 and the Cloud Security Alliance's updates to MSS industry guidance.

Both frameworks emphasise the responsibility organisations bear for third-party risk assessment, monitoring, and incident response 27. Similarly, the International Organization for Standardization has focused on bolstering security certifications and frameworks for managed service providers 28.

Moreover, regulatory bodies such as the New York Department of Financial Services (NYDFS) implemented stringent cybersecurity regulations for financial institutions, mandating thorough vetting of third-party vendors' security practices 29.

This situation underscores the importance of the regulatory landscape and industry standards surrounding both IT infrastructure and third-party security providers.

As it relates to AI-Resilient IT Security, we should expect similar, if not stronger governance frameworks that aim to ensure market-driven security solutions are up to the task of safeguarding against AI-driven security threats.

Staying ahead means not just adhering to current regulations but actively participating in discussions and developments around standards that address AI risks. Aligning with such standards not only mitigates legal and operational risks but also positions companies as leaders in a future where AI plays both a central role in, and poses a central threat to, security.

Sample Companies from the Venture Landscape:

As of 15 April 2024: Of the 100 startups in our AI Assurance Tech landscape scan, we uncovered 41 offering AI-Resilient IT Security solutions. These comprised 32 seed/early-stage startups and 9 growth/late-stage companies. Despite the risk mitigation potential of hardware governance innovations 30, the firms uncovered through our landscape scan were all focused on non-hardware solutions for data privacy or cybersecurity. Here's a brief sampling of those startups:

Mithril Security

Helping businesses accelerate model operations and making AI better for everyone.

Leverage the power of Conversational AI without fearing your data could be accessed or used for training.

Skyflow is a data privacy vault that integrates with any tech stack and makes it easy to enforce privacy policies across any app, any data cloud, and any LLM.


Seamless Plug-and-Play Privacy Infrastructure for Enterprise.

The AI-Native Data Leak Prevention Platform.

Empower your AI cybersecurity with data-driven insights. Our AI Cyber Risk Analytics shines a light on vulnerabilities before they're exploited.

Note

The above companies may also offer products and services that fit in one of the other three solution domains. All relevant domain classifications and the full list of companies surfaced through our landscape scan can be reviewed in the Appendix: "AIAT Landscape Logo Map".

Footnotes

  1. Gollmann, Dieter. "Security for cyber-physical systems." In Mathematical and Engineering Methods in Computer Science: 8th International Doctoral Workshop, MEMICS 2012, Znojmo, Czech Republic, October 25-28, 2012, Revised Selected Papers 8, pp. 12-14. Springer Berlin Heidelberg, 2013.

  2. OpenAI. "OpenAI Cybersecurity Grant Program." OpenAI Blog. February 13, 2023. Accessed April 18, 2024. https://openai.com/blog/openai-cybersecurity-grant-program

  3. International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). 2013. ISO/IEC 27001: Information technology — Security techniques — Information security management systems — Requirements. Geneva, Switzerland: ISO. https://www.iso.org/standard/27001

  4. Tan, Benjamin, and Ramesh Karri. "Challenges and new directions for ai and hardware security." In 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 277-280. IEEE, 2020. 10.1109/MWSCAS48704.2020.9184612.

  5. Sastry, Girish, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O'Keefe et al. "Computing Power and the Governance of Artificial Intelligence." arXiv preprint arXiv:2402.08797 (2024). https://doi.org/10.48550/arXiv.2402.08797

  6. Isaacs, Phil, Thomas Morris Jr, Michael J. Fisher, and Keith Cuthbert. "Tamper proof, tamper evident encryption technology." In Pan Pacific Symposium. 2013.

  7. Shavit, Yonadav. "What does it take to catch a Chinchilla? Verifying rules on large-scale neural network training via compute monitoring." arXiv preprint arXiv:2303.11341 (2023). https://doi.org/10.48550/arXiv.2303.11341

  8. Maes, Roel. Physically Unclonable Functions: Concept and Constructions. Springer Berlin Heidelberg, 2013. https://doi.org/10.1007/978-3-642-41395-7_2

  9. Mistral Solutions. "Rootkits, Kill-switches, and Backdoors: Implications for Homeland Security." https://www.mistralsolutions.com/articles/rootkits-kill-switches-backdoors-implications-homeland-security/. Accessed March 19, 2024.

  10. Microsoft. “DeepAttest: An End-to-End Attestation Framework for Deep Neural Networks.” Last updated 2019. https://www.microsoft.com/en-us/research/publication/deepattest-an-end-to-end-attestation-framework-for-deep-neural-networks/. Accessed April 10, 2024.

  11. IEEE Spectrum. "Homomorphic Encryption." Last updated 2024. https://spectrum.ieee.org/homomorphic-encryption. Accessed April 10, 2024.

  12. Model parameters (def.): All aspects of an AI model that are learned from its training data and proprietary training procedures, as well as adjustments to a model's latent biases or neural network weights (i.e., the values that determine the influence of different types of input data, and largely determine the model's behaviour).

  13. De Capitani di Vimercati, Sabrina, Robert F. Erbacher, Sara Foresti, Sushil Jajodia, Giovanni Livraga, and Pierangela Samarati. "Encryption and fragmentation for data confidentiality in the cloud." Foundations of Security Analysis and Design VII: FOSAD 2012/2013 Tutorial Lectures (2014): 212-243. https://doi.org/10.1007/978-3-319-10082-1_8; and Annex, I. "AI Watch European Landscape on the Use of Artificial Intelligence by the Public Sector." (2022). https://econpapers.repec.org/RePEc:ipt:iptwpa:jrc129301

  14. Timan, Tjerk, and Zoltan Mann. "Data protection in the era of artificial intelligence: trends, existing solutions and recommendations for privacy-preserving technologies." In The Elements of Big Data Value: Foundations of the Research and Innovation Ecosystem, pp. 153-175. Cham: Springer International Publishing, 2021. https://doi.org/10.1007/978-3-030-68176-0; and Pratomo, Arief Budi, Sabil Mokodenseho, and Adit Mohammad Aziz. "Data encryption and anonymization techniques for enhanced information system security and privacy." West Science Information System and Technology 1, no. 01 (2023): 1-9. https://doi.org/10.58812/wsist.v1i01.176

  15. Hazell, Julian. "Spear phishing with large language models." arXiv preprint arXiv:2305.06972 (2023). https://arxiv.org/pdf/2305.06972.pdf

  16. Sangwan, Raghvinder S., Youakim Badr, and Satish M. Srinivasan. "Cybersecurity for AI systems: A survey." Journal of Cybersecurity and Privacy 3, no. 2 (2023): 166-190. https://doi.org/10.3390/jcp3020010

  17. Cohen, Stav, Ron Bitton, and Ben Nassi. "Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications." arXiv preprint arXiv:2403.02817 (2024).

  18. Robust Intelligence. "AI Firewall." Last modified 2024. https://www.robustintelligence.com/platform/ai-firewall. Accessed March 12, 2024.

  19. Sastry, Girish, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, Diane Coyle, “Computing Power and the Governance of Artificial Intelligence” arXiv, February 20, 2024. Accessed April 18, 2024. https://arxiv.org/pdf/2402.08797.pdf

  20. EU Cyber Resilience Act (def.): The Cyber Resilience Act (CRA) is a regulatory proposal put forth by the European Commission to enhance cybersecurity and cyber resilience across the EU by establishing uniform standards for products containing digital components. It is expected to enter into force in 2024.

  21. Bartock, Michael, Murugiah Souppaya, Ryan Savino, Tim Knoll, Uttam Shetty, Mourad Cherfaoui, Raghu Yeluri, Akash Malhotra, Don Banks, Michael Jordan, Dimitrios Pendarakis, J. R. Rao, Peter Romness, Karen Scarfone, "Hardware-Enabled Security: Enabling a Layered Hardware-Enabled Security Strategy," NIST Interagency or Internal Report (IR) 8320 (September 2020), https://doi.org/10.6028/NIST.IR.8320; European Commission, "Proposal for a Regulation...on Horizontal Cybersecurity Requirements for Products with Digital Elements and Amending Regulation...," COM (2022) 454 final (September 15, 2022), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0454; and Peets, Lisa, Marty Hansen, Mark Young, Aleksander Aleksiev, Bart Szewczyk & Matthieu Coget. "Implications of the EU Cybersecurity Scheme for Cloud Services." November 29, 2023. Accessed April 18, 2024. https://www.globalpolicywatch.com/2023/11/implications-of-the-eu-cybersecurity-scheme-for-cloud-services/

  22. Digital Operational Resilience Act (DORA) (def.): The Digital Operational Resilience Act (DORA), passed in December 2022, aims to enhance the operational resilience of the EU's financial sector by establishing requirements for digital operational resilience and incident reporting. [Regulation (EU) 2022/2554]

  23. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1.; and California Consumer Privacy Act, Cal. Civ. Code §§ 1798.100-1798.199 (2018).

  24. Peets, Lisa, Marty Hansen, Mark Young, Aleksander Aleksiev, Bart Szewczyk & Matthieu Coget. "Implications of the EU Cybersecurity Scheme for Cloud Services." November 29, 2023. Accessed April 18, 2024. https://www.globalpolicywatch.com/2023/11/implications-of-the-eu-cybersecurity-scheme-for-cloud-services/

  25. Prince, Brian. "Trustwave Hit with Lawsuit Tied to Target Breach." SecurityWeek. June 2, 2014. Accessed April 18, 2024. https://www.securityweek.com/trustwave-hit-lawsuit-tied-target-breach/

  26. Keskin, Omer F., Kevin Matthe Caramancion, Irem Tatar, Owais Raza, and Unal Tatar. "Cyber third-party risk management: A comparison of non-intrusive risk scoring reports." Electronics 10, no. 10 (2021): 1168. https://doi.org/10.3390/electronics10101168

  27. Cloud Security Alliance. "New Guidance From Cloud Security Alliance Aims to Help Cloud Service Customers Better Evaluate Service Level Agreements." Last modified 2023. https://cloudsecurityalliance.org/press-releases/2021/11/30/new-guidance-from-cloud-security-alliance-aims-to-help-cloud-service-customers-better-evaluate-service-level-agreements. Accessed March 20, 2024.

  28. International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). 2023. ISO/IEC/IEEE 23026:2023 Systems and software engineering. Geneva, Switzerland: ISO. https://www.iso.org/standard/81896.html

  29. Simon, Joseph D., and Elizabeth A. Murphy. "Cybersecurity Regulation for Financial Services Companies: New York State Leads the Way." Journal of Taxation & Regulation of Financial Institutions 30, no. 4 (2017).

  30. Sastry, Girish, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, Diane Coyle, “Computing Power and the Governance of Artificial Intelligence” arXiv, February 20, 2024. Accessed April 18, 2024. https://arxiv.org/pdf/2402.08797.pdf; Heim, Lennart, Tim Fist, Janet Egan, Sihao Huang, Stephen Zekany, Robert Trager, Michael A. Osborne and Noa Zilberman "Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation." Accessed April 18, 2024. https://cdn.governance.ai/Governing-Through-the-Cloud_The-Intermediary-Role-of-Compute-Providers-in-AI-Regulation.pdf