Drivers of risk and vulnerability

Concern over AI-related risks is heightened by several factors that can exacerbate risk (by raising either the severity or the likelihood of a given risk) or hamper effective risk management:

  1. Race dynamics: The potential of artificial intelligence to transform economies and societies in positive ways turns it into a powerful instrument coveted by state and corporate actors alike. Whether the competition is for national strength or business advantage, the perception of highly capable AI as a vital asset can lead to race dynamics, where competitors in either sphere feel compelled to take the lead in advancing AI. Competitive pressure of that kind can encourage more reckless steps in the development, release and adoption of AI systems, trading off effective risk management practices and long-run prosperity for short-term gains 1.

  2. Automation bias: Automation bias refers to the tendency of humans to rely excessively on automated systems without critically evaluating their outputs or recommendations 2. If high levels of faith in AI systems stem from biased thinking rather than a clear-eyed assessment of the quality and relevance of their capabilities, this can create a false sense of certainty, inhibiting forward-looking risk management, early detection of harm, and sound judgements about whether an AI system is suited to a given task.

  3. Opacity of AI systems: The deep neural networks underpinning most advanced AI systems make automated decisions difficult to interpret and explain 3. This opacity exacerbates risk because it reduces our ability to predict how an AI system will behave, where it might go wrong, and how it might be used in malicious or erroneous ways (see previous point). It also makes it harder to assign accountability for adverse outcomes, which can inhibit effective risk management both ex ante (unclear responsibility for mitigating risks) and ex post (unclear liability for insufficiently mitigated risks, unclear responsibility for alleviating harms) 4.

  4. Hard-to-foresee capability advancements: Advances in the strength and versatility of AI systems across diverse tasks are difficult to predict. In recent months, we have seen unexpected jumps in AI capabilities, with GPT-4 achieving above-average scores on several tests of human knowledge on which GPT-3.5 had performed poorly 5. Emergent and unanticipated capabilities exacerbate the risks outlined above, making AI tools more potent aids to malicious activity and increasing the harm caused when powerful AI systems misbehave due to misalignment, accidents, or adversarial attacks 6.

  5. Compounding risks: Thus far, we have examined various AI risks discretely; however, these risks can interact with and exacerbate one another. For example, AI-driven job displacement or misinformation could weaken community resilience, making societies more susceptible to AI-enhanced cyberattacks or to the fallout from AI technical failures. AI can also act as a catalyst for non-AI risks. For instance, AI may accelerate advances in fields like genetic engineering or nanotechnology, and these rapid developments may outpace proper governance, leading to unintended consequences such as engineered organisms dominating native species or the accidental release of nanoparticles that pose undetectable health threats. Moreover, all of the above challenges have the potential to cause further cascading hazards and to strain institutional resources, increasing socioeconomic vulnerability to risk.

Footnotes

  1. Dan Hendrycks, Mantas Mazeika, and Thomas Woodside, “An Overview of Catastrophic AI Risks,” arXiv preprint (Center for AI Safety, October 2023), https://arxiv.org/abs/2306.12001; and Kelsey Piper, “Are We Racing toward AI Catastrophe?,” Vox, February 9, 2023, https://www.vox.com/future-perfect/23591534/chatgpt-artificial-intelligence-google-baidu-microsoft-openai.

  2. David Lyell, “Automation Can Leave Us Complacent, and That Can Have Dangerous Consequences,” The Conversation, July 28, 2016, http://theconversation.com/automation-can-leave-us-complacent-and-that-can-have-dangerous-consequences-62429.

  3. Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms,” Big Data & Society 3, no. 1 (June 1, 2016), https://doi.org/10.1177/2053951715622512; and “Frontier AI: Capabilities and Risks,” Discussion paper (Department for Science, Innovation and Technology, Government of the United Kingdom, October 25, 2023), https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/frontier-ai-capabilities-and-risks-discussion-paper, section: “What risks do frontier AI present?”.

  4. Simon Burton et al., “Mind the Gaps: Assuring the Safety of Autonomous Systems from an Engineering, Ethical, and Legal Perspective,” Artificial Intelligence 279 (February 1, 2020), https://doi.org/10.1016/j.artint.2019.103201.

  5. OpenAI et al., “GPT-4 Technical Report” (arXiv, December 18, 2023), https://doi.org/10.48550/arXiv.2303.08774.

  6. “Frontier AI: Capabilities and Risks,” Discussion paper (Department for Science, Innovation and Technology, Government of the United Kingdom, October 25, 2023), https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/frontier-ai-capabilities-and-risks-discussion-paper, section: “What risks do frontier AI present?”.