
Appendix H | Glossary of Terms

Term | Definition
Accurate AI | "Accuracy is defined by ISO/IEC TS 5723:2022 as “closeness of results of observations, computations, or estimates to the true values or the values accepted as being true.”"
Adversarial testing | "[Simulation of] real-world hacks on [an] organisation’s data and networks [to] spotlight vulnerabilities that help organisations strengthen security." "Adversarial testing" is used synonymously with "red-teaming" in this report (as is common in the existing literature).
AI application tech stack | The combination of machine learning frameworks, data processing tools, development and deployment environments, specialised hardware, network infrastructure, and other elements involved in implementing an AI use case.
AI Assurance Technology (AIAT) | Artificial Intelligence Assurance Technology (AIAT) companies develop products—whether software, hardware, or some combination—and service offerings that commercial and state organisations can procure as a means to more effectively, more efficiently, or more precisely mitigate AI hazards.
AI for Good | The use of AI to address global challenges and improve the wellbeing of humanity (e.g., AI applications in healthcare, environmental protection, education, cancer research).
AI Governance Technology | AI GovTech encompasses solutions for handling the ethical, legal, and societal ramifications of AI systems as AI becomes more common across industries. As a result, platforms and tools for AI governance have proliferated, including explainable AI (XAI) solutions, tools for bias detection and reduction, and frameworks for AI ethics.
AI Trust, Risk, and Security Management | AI TRiSM ensures AI model governance, trustworthiness, fairness, reliability, robustness, efficacy and data protection. This includes solutions and techniques for model interpretability and explainability, AI data protection, model operations and adversarial attack resistance.
AI Trustworthiness solutions | Solutions for auditing desirable qualities (e.g., robustness, biases, fairness, safety) of AI data inputs, AI models, AI systems, and other components of the application tech stack, through automated testing, benchmarking, expert evaluations, and other quality assessments.
AI-Aware Digital Authenticity | Solutions for verifying the origins or the legitimacy of digital identities, avatars, content, and other forms of media accessed through public or private networks, through detection systems, provenance tracking, watermarking techniques, and other authentication tools.
AI-centric Risk Management | Solutions for governing the documentation of organisational standards or compliance information, tracking applicable standards, and provisioning access to external auditors, as well as solutions to support post-deployment management of AI observability, risk monitoring, incident alerts, or the allocation of organisational resources to address known issues.
AI-Resilient IT Security | Solutions for safeguarding compute hardware, data centres, network infrastructure, AI models, AI systems, and all other component parts of an AI application tech stack, through both hardware security and cybersecurity protocols.
Automation bias | Automation bias refers to the tendency of humans to rely excessively on automated systems, without critically evaluating their outputs or recommendations.
Bias in AI | "Bias is broader than demographic balance and data representativeness. NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive. Each of these can occur in the absence of prejudice, partiality, or discriminatory intent. Systemic bias can be present in AI datasets, the organisational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems. Computational and statistical biases can be present in AI datasets and algorithmic processes, and often stem from systematic errors due to non-representative samples. Human-cognitive biases relate to how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about purposes and functions of an AI system."
CAC Provisions on Deep Synthesis | The regulations implemented by China aim to oversee the utilisation of deep synthesis technologies, which encompass AI-based tools employed for generating text, video, and audio content, commonly referred to as deepfakes.
California Consumer Privacy Act (CCPA) | The California Consumer Privacy Act (CCPA), passed in June 2018, grants California residents rights over their personal information held by businesses and imposes obligations on companies regarding data privacy and transparency. [AB-375]
Content Provenance | Knowledge of the origins, history, ownership, and authenticity of media. Facilitated through documentation, it establishes the source of content and whether it may have undergone alterations over time. Provenance is crucial for verifying the integrity and ownership of media in digital or online environments.
Dangerous Capability Risks | The potential for AI technologies to directly or indirectly cause extreme harms by way of their capacities for deception, persuasion and manipulation, weapons design, cyber attacks, long-horizon planning, self-proliferation, and more.
Data labelling | The process of annotating text, images, and other data types with machine-interpretable information (or metadata), so that AI models can be trained or fine-tuned using that data.
Data privacy and cybersecurity | Safeguards to protect against attempts to access private information, damage or corrupt an AI-driven process, the unauthorised use of AI, or efforts to covertly surveil the inputs and outputs of an AI system.
Data-, model-, or system-focused evaluations | Rigorous tests to identify and mitigate bias, errors, and potential hazards in AI datasets, models and systems; such tests may be repeatedly performed above and beyond the specifications of legislation for reasons related to competitive advantages, product-market fit, risk management, and more.
Deep Neural Network (DNN) | "[A network composed of] multiple nonlinear computational units or neurons organised in a layer-wise fashion to extract high-level, deeper, robust, and discriminative features from the underlying data." (An illustrative sketch follows this glossary.)
Digital Operational Resilience Act (DORA) | The Digital Operational Resilience Act (DORA), adopted in December 2022, aims to enhance the operational resilience of the EU's financial sector by establishing requirements for digital operational resilience and incident reporting. [Regulation (EU) 2022/2554]
Dual-use Technology | Innovations that can serve multiple purposes; namely, those initially designed for civilian or commercial interests that can also inadvertently provide capabilities suitable for military or malicious uses.
Electronic Identification, Authentication and Trust Services Regulation (eIDAS) | The Electronic Identification, Authentication and Trust Services (eIDAS) regulation, passed in July 2014, establishes a framework for electronic identification and trust services within the European Union, ensuring their legal recognition and cross-border interoperability. [Regulation (EU) No 910/2014]
Emergent capabilities in AI models | Unexpected jumps in AI capabilities which render AI tools more potent assistants to malicious activities and increase the amount of harm caused when powerful AI systems misbehave due to misalignment, accident, or adversarial attacks.
EU AI Act | The EU AI Act aims to promote the development of reliable AI within Europe and globally. It seeks to guarantee that AI systems uphold fundamental rights, safety, and ethical standards, while also confronting the risks posed by highly influential and consequential AI models. [P9_TA(2024)0138]
EU Cyber Resilience Act | The Cyber Resilience Act (CRA) is a regulatory proposal put forth by the European Commission to enhance cybersecurity and cyber resilience across the EU by establishing uniform standards for products containing digital components. It is expected to enter into force in 2024.
Evasion attack | Adversarial attack against an AI system or model, where the environment encountered by an AI is tampered with to cause poor performance in a specific deployment situation.
Explainable and interpretable AI | "Explainability refers to a representation of the mechanisms underlying AI systems’ operation, whereas interpretability refers to the meaning of AI systems’ output in the context of their designed functional purposes. Together, explainability and interpretability assist those operating or overseeing an AI system, as well as users of an AI system, to gain deeper insights into the functionality and trustworthiness of the system, including its outputs. The underlying assumption is that perceptions of negative risk stem from a lack of ability to make sense of, or contextualise, system output appropriately. Explainable and interpretable AI systems offer information that will help end users understand the purposes and potential impact of an AI system."
External compliance audits | Third-party compliance audits may include on-site inspections of an AI application's technology stack, reviewing data certifications, or scrutinising the results of evaluations. They are conducted to ensure that AI technologies and their deploying organisations meet requisite laws and industry standards, covering both pre-deployment readiness and periodic reviews after deployment.
Extraction attack | Adversarial attack against an AI system or model, where the attacker seeks to deduce information about an AI’s training data or about its model architecture through prompt engineering and other strategic interactions with the AI product.
Fair AI | "Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination. Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures and may shift depending on application."
Fine-tuning | Additional AI model training where the model learns from a smaller, bespoke dataset. This process harnesses the model's fundamental strengths, but also recalibrates it for specific applications. (An illustrative sketch follows this glossary.)
Formal Verification | Testing to assert whether an AI model or system satisfies pre-specified criteria, often through automated methods or systematic mathematical validations. (An illustrative sketch follows this glossary.)
Fourth Industrial Revolution (a.k.a. the Intelligence Revolution) | A new technologically-mediated transformation, akin in scale to the agricultural revolution, first industrial revolution, scientific-technical revolution, and digital revolution, which is brought about by advances in artificial intelligence (AI).
G7 Hiroshima Processes | International guiding principles on artificial intelligence and a voluntary Code of Conduct for AI developers, prepared for the 2023 Japanese G7 Presidency and the G7 Digital and Tech Working Group, as of 7 September 2023.
General Data Protection Regulation (GDPR) | The General Data Protection Regulation (GDPR) is a comprehensive data protection law enacted by the European Union to safeguard individuals' personal data and regulate its processing by organisations. [Regulation (EU) 2016/679]
General-purpose AI (GPAI) | "An AI system that can accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained."
Hardware security | Safeguards to prevent unauthorised access, misuse of AI, data theft, tampering with or augmentation of devices, or infliction of physical damage.
Identity and content authentication tools | Technologies that verify or track the origins and authenticity of digital content and identities using methods like detection systems, provenance tracking, and watermarking across public or private networks.
Large Language Model (LLM) | "A category of foundation models trained on immense amounts of data making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks."
Machine Learning (ML) | A technique that enables AI systems to learn from input data, develop versatile capabilities, and respond autonomously to situations that were never specified by a human developer.
Misuse of AI | Misuse risks stem from the way in which AI can be used as a powerful tool to facilitate and strengthen attacks against material objects (e.g., physical infrastructure, physical health), digital systems (e.g., personal data, software applications), or informational targets (e.g., online debate fora, personalised communications). In combination or individually, these AI-assisted attacks enhance mal-intentioned (or misguided) actors’ ability to inflict serious political, economic, and social harm.
Misuse--Digital Harm | Risks that fall into the subcategory “Digital Harm” are those that result from an ill-intentioned individual or group using AI in an adversarial cyber operation. This includes attacks to disrupt the data or digital infrastructure of a country, corporation, or other important actor/community, as well as attempts to steal or corrupt digitally-stored information. The “Digital Harm” sub-category does not include the use of AI for spreading disinformation or hate speech—i.e., acts that intend to degrade our shared information commons.
Misuse--Informational Harm | Risks that fall into the subcategory "Informational Harm" cause harm that is less tangible than the harm that would result from direct attacks against physical and digital targets. This risk category encompasses any intentional use of AI to pollute information environments or to injure individuals’ mental and emotional wellbeing.
Misuse--Physical Harm | Risks that fall into the subcategory “Physical Harm” are those that result from an ill-intentioned individual or group using AI in an attack against a physical target.
Model Parameters | All aspects of an AI model that are learned from its training data and proprietary training procedures, as well as adjustments to a model's latent biases or neural network weights (i.e., the values that determine the influence of different types of input data, and largely determine the model's behaviour).
New York City Bias Auditing Law | The New York City Bias Auditing Law (Local Law 144 of 2021) mandates independent bias audits of automated employment decision tools used by employers and employment agencies, to ensure fairness and accountability. [NYC Local Law 144-21]
Observability, Monitoring, and Incident Response solutions | Tools to track AI deployments, identify and manage risks, and facilitate incident response through efficient resource and staff allocation.
Opacity of AI models | Lack of transparency and explainability of automated decisions in advanced AI systems built on deep learning technologies.
Poisoning attack | Adversarial attack against an AI system or model, where training data or model algorithms are tampered with to degrade performance. (An illustrative sketch follows this glossary.)
Privacy-enhanced AI | "Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals’ agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation). [...] Privacy-enhancing technologies (“PETs”) for AI, as well as data minimizing methods such as de-identification and aggregation for certain model outputs, can support design for privacy-enhanced AI systems."
Private AI investments | “A private placement is a private sale of newly issued securities (equity or debt) by a company to a selected investor or a selected group of investors. The stakes that buyers take in private placements are often minority stakes (under 50%), although it is possible to take control of a company through a private placement as well, in which case the private placement would be a majority stake investment.”
Public Interest AI | Similar to PIT, but specifically focuses on the application and implications of AI in best serving public well-being and long-term survival.
Public Interest Technologies (PIT) | An emergent discipline that encompasses a broad array of technologies developed, disseminated, and utilised to benefit all segments of society inclusively and equitably.
Quality, Conformity, & Audit Management | Platforms that are instrumental in ensuring compliance with legal standards and industry regulations, facilitating internal or external reviews of AI systems through tools and procedures reminiscent of quality management systems (QMS) or Enterprise GRC platforms.
Red-teaming | Using manual or automated methods to adversarially probe a language model for harmful outputs. "Red-teaming" is used synonymously with "adversarial testing" in this report (as is common in the existing literature).
Reliable AI | "Reliability is defined in the same standard as the “ability of an item to perform as required, without failure, for a given time interval, under given conditions” (Source: ISO/IEC TS 5723:2022). Reliability is a goal for overall correctness of AI system operation under the conditions of expected use and over a given period of time, including the entire lifetime of the system."
Resiliency | Preparation for the unique security vulnerabilities of AI-driven processes as well as novel security threats from misused AI.
Resilient AI | "AI systems, as well as the ecosystems in which they are deployed, may be said to be resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary." (Adapted from: ISO/IEC TS 5723:2022)
Risk Assessment | A process typically carried out by internal risk management teams to proactively pinpoint, review, and prepare for the potential risks to an organisation's operations, finances, compliance, and more, before such risks manifest.
Robust AI | "Robustness or generalizability is defined as the “ability of a system to maintain its level of performance under a variety of circumstances” (Source: ISO/IEC TS 5723:2022). Robustness is a goal for appropriate system functionality in a broad set of conditions and circumstances, including uses of AI systems not initially anticipated. Robustness requires not only that the system perform exactly as it does under expected uses, but also that it should perform in ways that minimise potential harms to people if it is operating in an unexpected setting."
Safe AI | "[Safe] AI systems should “not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered” (Source: ISO/IEC TS 5723:2022)."
Secure AI | "AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorised access and use may be said to be secure."
Technical failure of AI | The second category in our framework—Technical failure of AI—encapsulates the idea that AI systems themselves may cause harm if they function in a way that no human—whether developer, deployer, or user—intended.
Technical failure--Misalignment | Risks in the "Misalignment" subcategory stem from the so-called alignment problem. A misaligned AI system effectively pursues the objective function it was given by humans (e.g., maximising the time social media users spend on a certain platform), but it causes harm in the process either because the objective itself is not what humans actually want (e.g., more hours on social media ≠ more enjoyment of social media) or because the strategy the AI uses to achieve the objective has deeply undesirable side-effects (e.g., if the best way to maximise time on social media turns out to be personalised content feeds that optimise for outrage and fear).
Technical failure--Unreliability | Risks in the "Unreliability" subcategory are those that stem from AI systems' occasional inaccuracies or failure to complete their given task, whether due to biased or incomplete input data, technical glitches, or the inherent probabilistic nature of machine learning-based decisions.
Transparent AI | "Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system – regardless of whether they are even aware that they are doing so. Meaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system."
Trustworthy AI | "For AI systems to be trustworthy, they often need to be responsive to a multiplicity of criteria that are of value to interested parties. [...] Characteristics of trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. [...] Addressing AI trustworthiness characteristics individually will not ensure AI system trustworthiness; tradeoffs are usually involved, rarely do all characteristics apply in every setting, and some will be more or less important in any given situation. Ultimately, trustworthiness is a social concept that ranges across a spectrum and is only as strong as its weakest characteristics."
UK AI Regulation White Paper | The UK AI Regulation White Paper is a proposal of five cross-sectoral principles for existing regulators to interpret and apply within their remits in order to drive safe, responsible AI innovation. [E03019481 02/24]
Vulnerability of AI systems to exogenous factors | Risks related to the vulnerability of AI systems arise when highly relied-upon AI systems are not resilient against exogenous interference—whether an unintended disruption, or intentionally-caused breach.
Vulnerability--Accident | Harms resulting from an accidental disruption of an AI system can be the consequence of a natural hazard (e.g., a flood, an earthquake) or the collateral damage of proximate human-induced attacks (e.g., bombing raids)—i.e., the AI system was not the target, but was caught in the cross-fire and suffered material damage.
Vulnerability--Attack | Vulnerability of AI systems to adversarial (cyber)attacks leads to hazards of data theft, skewed performance, and system breakdown, with the attendant consequences in terms of financial, reputational, and human injury as well as negative effects on the business environment, incentives for innovation, and geopolitical tensions.
Watermarking | "A way to identify the source, creator, owner, distributor, or authorized consumer of a document or image. Its objective is to permanently and unalterably mark the image so that the credit or assignment is beyond dispute."
White House Executive Order 14110 | The United States Executive Order aims to advance the establishment and utilisation of consistent procedures and tools to comprehend and address risks associated with the adoption of AI. This initiative particularly emphasises concerns regarding biosecurity, cybersecurity, national security, and risks to critical infrastructure. [88 Federal Register 75191]
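
Illustrative sketches for selected terms. The first sketch relates to the "Deep Neural Network (DNN)" entry above: a stack of layers of nonlinear units that progressively transforms raw inputs into higher-level features. The layer sizes, ReLU nonlinearity, and random weights are illustrative assumptions only, not a reference implementation.

```python
# Minimal sketch of a deep neural network: nonlinear units organised layer-wise.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]  # input -> two hidden layers -> output (hypothetical sizes)
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass the input through each layer in turn: a linear map followed by a
    nonlinearity, yielding progressively more abstract features."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ w + b)
    return h @ weights[-1] + biases[-1]  # final linear layer (no nonlinearity)

print(forward(rng.normal(size=8)))
```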
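The next sketch relates to the "Fine-tuning" entry: parameters assumed to come from earlier large-scale pretraining are further adjusted with a short training run on a small, bespoke dataset. The tiny linear model, synthetic data, and learning rate are hypothetical choices for illustration, not a depiction of any production fine-tuning pipeline.

```python
# Minimal sketch of fine-tuning: continue training pretrained parameters on a
# small task-specific dataset.
import numpy as np

rng = np.random.default_rng(1)

# Pretend these weights came from large-scale pretraining on a broad corpus.
pretrained_w = rng.normal(size=5)

# Small bespoke dataset for the downstream task (20 synthetic examples).
X_task = rng.normal(size=(20, 5))
y_task = X_task @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=20)

w = pretrained_w.copy()
lr = 0.05
for step in range(200):              # a short additional training run
    grad = 2 * X_task.T @ (X_task @ w - y_task) / len(y_task)  # mean-squared-error gradient
    w -= lr * grad                   # recalibrate the pretrained weights

print("shift from pretrained weights:", np.round(w - pretrained_w, 2))
```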
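The next sketch relates to the "Formal Verification" entry. It uses interval bound propagation, one example of a systematic mathematical validation, to check a pre-specified criterion for every input in a region rather than testing individual points. The toy network weights, input box, and output threshold are assumptions for illustration.

```python
# Minimal sketch of formal verification via interval bound propagation: compute
# sound lower/upper bounds on a tiny ReLU network's output over a whole box of
# inputs, then check the criterion "output never exceeds the threshold".
import numpy as np

W1 = np.array([[1.0, -0.5], [0.5, 1.0]])   # first layer weights (2 -> 2), hypothetical
b1 = np.array([0.0, 0.1])
w2 = np.array([1.0, 1.0])                  # output layer weights (2 -> 1), hypothetical
b2 = -0.2

def interval_affine(lo, hi, W, b):
    """Sound bounds for x @ W + b when each x[i] lies in [lo[i], hi[i]]."""
    pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return lo @ pos + hi @ neg + b, hi @ pos + lo @ neg + b

# Input region to verify: x in [-0.1, 0.1] x [-0.1, 0.1]
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone, so bounds carry through
out_lo, out_hi = interval_affine(lo, hi, w2.reshape(2, 1), np.array([b2]))

threshold = 0.5
print(f"output guaranteed to lie in [{out_lo[0]:.3f}, {out_hi[0]:.3f}]")
print("criterion holds for ALL inputs in the box:", out_hi[0] <= threshold)
```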
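The final sketch relates to the "Poisoning attack" entry: an attacker injects mislabelled points into the training data, and the model trained on the tampered dataset performs markedly worse on clean test data. The toy nearest-centroid classifier and synthetic Gaussian data are illustrative assumptions.

```python
# Minimal sketch of a data-poisoning attack: mislabelled injected points drag a
# class centroid away from its true cluster, degrading test accuracy.
import numpy as np

rng = np.random.default_rng(2)

def make_data(n):
    """Two Gaussian clusters: class 0 around (-1, -1), class 1 around (+1, +1)."""
    X0 = rng.normal(loc=-1.0, size=(n, 2))
    X1 = rng.normal(loc=+1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def train_centroids(X, y):
    """'Training' here just means computing one centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    preds = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return (preds == y).mean()

X_train, y_train = make_data(100)
X_test, y_test = make_data(100)
clean_acc = accuracy(train_centroids(X_train, y_train), X_test, y_test)

# Poisoning step: inject 30 far-away points falsely labelled as class 0.
X_poison = rng.normal(loc=8.0, size=(30, 2))
X_tampered = np.vstack([X_train, X_poison])
y_tampered = np.concatenate([y_train, np.zeros(30, dtype=int)])
poisoned_acc = accuracy(train_centroids(X_tampered, y_tampered), X_test, y_test)

print(f"accuracy with clean training data:   {clean_acc:.2f}")
print(f"accuracy after poisoned training:    {poisoned_acc:.2f}")
```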