Other risks resulting from the development and concentrated ownership of AI
In addition to the risks contained in our risk framework, recent developments in AI give rise to a further, less AI-specific set of concerns. These centre on the byproducts of the way in which AI technologies are developed, produced, owned, and used as economic or strategic assets; in other words, they capture the potential “collateral damage” resulting from the widespread integration of AI into our societies.
In the immediate term, concerns revolve around the excessive energy demands of AI model training 1, exploitative labour conditions in the AI value chain 2, and alleged copyright and privacy infringements in AI data collection practices 3.
Looking ahead, the application of AI could lead to significant breakthroughs in fields such as climate engineering, quantum computing, nanotechnology, synthetic biology, and genetic engineering. While these advancements are not intended to cause harm, they may pose serious risks or carry dual-use potential 4.
In the longer run, there are worries about widespread job displacement due to automation and a crisis of meaning resulting from the ubiquity of ever more capable and human-like machines 5. Moreover, frontier AI development is currently driven by a small number of leading-edge companies and countries 6. There are concerns that, if this dynamic continues, global economic and political power will become concentrated in the hands of just a few actors 7. Such power shifts also disrupt international relations 8, with the potential to exacerbate competitive dynamics and geopolitical tensions. Adverse effects of such tensions include a deterioration of the international trade and business environment, supply chain disruptions 9, commercial domiciliation risks, and a higher chance of conflict escalation between states.
Discussions around all of the above occupy policymakers and private actors alike. With the exception of labour protection laws, however, these discussions have not yet led to clear and binding regulatory requirements in most jurisdictions (China is an outlier in this regard) 10, nor to well-developed industry standards or best practices. Responsible and farsighted corporate leadership can play a vital role in grappling with the societal challenges mentioned in this section, but such contributions are unlikely to rely primarily on technical solutions from AI Assurance providers. For this reason, the present report does not focus heavily on risk management strategies for the socioeconomic risks described in the preceding paragraphs.
Footnotes
1. Alesia Zhuk, “Artificial Intelligence Impact on the Environment: Hidden Ecological Costs and Ethical-Legal Issues,” Journal of Digital Technologies and Law 1, no. 4 (December 15, 2023): 932–54, https://doi.org/10.21202/jdtl.2023.40.
2. Mary L. Gray and Siddharth Suri, Ghost Work (HarperCollins Publishers, 2019); Niamh Rowe, “Underage Workers Are Training AI,” Wired, November 15, 2023, https://www.wired.co.uk/article/artificial-intelligence-data-labeling-children; Billy Perrigo, “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” Time Magazine, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/; and Karen Hao and Andrea Paola Hernández, “How the AI Industry Profits from Catastrophe,” MIT Technology Review, April 20, 2022, https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/.
3. Gil Appel, Juliana Neelbauer, and David A. Schweidel, “Generative AI Has an Intellectual Property Problem,” Harvard Business Review, April 7, 2023, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.
4. Dual-use technology (def.): innovations that can serve multiple purposes; in particular, technologies initially designed for civilian or commercial applications that can also, even inadvertently, provide capabilities suitable for military or malicious uses.
5. Yuval Noah Harari, “Reboot for the AI Revolution,” Nature 550, no. 7676 (October 2017): 324–27, https://doi.org/10.1038/550324a.
6. Note: Some of the most prominent frontier AI developers include (sorted alphabetically) Alibaba (China), Anthropic (U.S.), Baidu (China), ByteDance (China), Google DeepMind (U.S.), Inflection (U.S.), OpenAI (U.S.), Technology Innovation Institute (UAE), and Tencent AI Lab (China).
7. Amba Kak and Sarah Myers West, “AI Now 2023 Landscape: Confronting Tech Power” (AI Now Institute, April 11, 2023), https://ainowinstitute.org/2023-landscape; and Yuval Noah Harari, “Chapter 9: The Great Decoupling,” in Homo Deus: A History of Tomorrow (Harper, 2017).
8. Ioana Puscas, “AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures,” Research Report, Confidence-Building Measures for Artificial Intelligence (Geneva: United Nations Institute for Disarmament Research (UNIDIR), December 10, 2023), https://unidir.org/publication/ai-and-international-security-understanding-the-risks-and-paving-the-path-for-confidence-building-measures/, pp. 44, 51; Amandeep Singh Gill, “Artificial Intelligence and International Security: The Long View,” Ethics & International Affairs 33, no. 2 (July 2019): 169–79, https://doi.org/10.1017/S0892679419000145; and Ilaria Carrozza, Nicholas Marsh, and Gregory M. Reichberg, “Dual-Use AI Technology in China, the US and the EU: Strategic Implications for the Balance of Power,” PRIO Paper (Peace Research Institute Oslo (PRIO)), accessed February 15, 2024, https://www.prio.org/publications/13150.
9. Bradley Martin, “Supply Chain Disruptions: The Risks and Consequences” (RAND Corporation, November 15, 2021), https://www.rand.org/pubs/commentary/2021/11/supply-chain-disruptions-the-risks-and-consequences.html; and Jeremy Kingsley, “The Business Costs of Supply Chain Disruption,” Economist Impact - Perspectives, February 25, 2021, https://impact.economist.com/perspectives/sustainability/business-costs-supply-chain-disruption-1.
10. We are currently witnessing discussions around copyright and data privacy requirements for AI companies, with uncertain outcomes: João Pedro Quintais, “Generative AI, Copyright and the AI Act,” Kluwer Copyright Blog, Institute for Information Law (Instituut voor Informatierecht, IVIR), May 9, 2023, https://dev.ivir.nl/publications/generative-ai-copyright-and-the-ai-act/; and James Vincent, “The Scary Truth about AI Copyright Is Nobody Knows What Will Happen Next,” The Verge, November 15, 2022, https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data. Authorities in the US and EU have also stated publicly that they are considering taking action against potentially anti-competitive behaviour among frontier AI developers: Reshmi Rampersad and Pauline Kuipers, “Artificial Intelligence and Competition Law: Shaping the Future Landscape in the EU,” Bird & Bird, January 17, 2024, https://competitionlawinsights.twobirds.com//post/102ixb6/artificial-intelligence-and-competition-law-shaping-the-future-landscape-in-the; and Melanie Martin and Nazli Cansin Karga, “Managing the Competition Law Risks of AI” (Dentons, November 17, 2023), https://www.dentons.com/en/insights/articles/2023/november/17/managing-the-competition-law-risks-of-ai. In addition, high-level officials in the European Union have expressed concerns around environmental damage resulting from large training runs and have called on AI developers to voluntarily embrace requirements related to environmental sustainability (e.g., see Recitals 28a, 72c, and 81 of the EU AI Act). In China, “Businesses must not use algorithms for monopolistic or unfair business practices”; see Matt Sheehan, “Tracing the Roots of China’s AI Regulations,” Paper (Carnegie Endowment for International Peace, February 27, 2024), https://carnegieendowment.org/2024/02/27/tracing-roots-of-china-s-ai-regulations-pub-91815.