AI-Aware Digital Authenticity
What is it?
This domain refers to Identity and Content Authentication solutions designed to verify and track the origins and legitimacy of digital identities, avatars, creative works, writing, and other media across public or private digital networks. While identity and content can be viewed as two distinct aspects, each requiring authentication, verifying one often involves verifying the other (e.g., detecting AI-generated content undermines the claim of the party asserting authorship). Digital authenticity technologies may employ detection algorithms, provenance-tracking platforms, watermarking techniques, and other verification tools to ensure the authenticity of online interactions. Furthermore, the emerging industry of Trust & Safety service companies will be integral to a multifaceted defence against AI-driven digital or informational harms. In an era where AI will enhance the capacity for impersonation, sophisticated malware, illegitimate financial transactions, IP infringement, targeted phishing campaigns, false attribution, and the creation of synthetic creative works, these products and services will be vital to a well-functioning web.
Why is it imperative?
AI-Aware Digital Authenticity solutions are necessary to enhance trust in online information and interactions and to maintain social cohesion by validating the origins of creative works (both human- and AI-generated), moderating online content for safety, preventing the spread of misinformation, and simplifying the management of verifiable digital credentials. In doing so, these solutions not only protect against the misuse of AI in generating deceptive content but also uphold copyright and intellectual property laws, thereby ensuring a secure and trustworthy digital environment. Under emerging regulations, products and services in this solution category could help companies to: monitor and attest to AI model robustness against misuse; provide comprehensive instruction and documentation; and authenticate content and report on provenance or uncertainty, to name a few.
How might it work?
| EXAMPLE SOLUTION IDEAS | SOLUTION TYPE |
|---|---|
| Digital signatures and forensic watermarking tools | Software |
| Visual search technology platforms | Software |
| Digital asset management and provenance tracking | Software |
| AI-aware trust & safety moderation services | Service / Software |
Identity and Content Authentication
One class of emerging solutions for the integrity of digital content and identity works, in essence, by embedding verifiable digital documentation into data files. For instance, digital signatures can carry content provenance information and confirm whether data files have been tampered with, since producing a valid signature requires an uncompromised private key 1. These cryptographically secure signatures are verifiable using public-key cryptography, allowing any user to trace the origin of a file and determine its authenticity 2. Applied to news sites or photojournalism, these signatures can help media companies maintain trust with their audiences amidst increasing concerns around mis- and disinformation 3. Blockchain technology may also lend itself to enhancing provenance tracking with its secure, transparent ledger: the ability to store immutable records can support digital asset management and provenance-tracking solutions 4.
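As a minimal sketch of this idea (not any particular vendor's implementation), the snippet below uses the Python `cryptography` package to sign a small provenance manifest with an Ed25519 private key and verify it with the corresponding public key. The manifest fields are hypothetical; the point is that any post-signing modification causes verification to fail.

```python
# A minimal sketch of signature-based provenance using the Python
# "cryptography" package (Ed25519). Manifest fields are illustrative.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The publisher holds the private key; anyone may hold the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A hypothetical provenance manifest attached to a media file.
manifest = json.dumps({
    "creator": "Example Newsroom",
    "created": "2024-04-11T09:30:00Z",
    "tool": "camera-firmware-1.2",
}, sort_keys=True).encode()

signature = private_key.sign(manifest)

# Verification succeeds for the untouched manifest...
public_key.verify(signature, manifest)  # no exception -> authentic

# ...and fails if even one byte is altered after signing.
tampered = manifest.replace(b"Example Newsroom", b"Someone Else")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampering detected: signature does not match content")
```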
Another solution that embeds verifiable digital documentation is watermarking. Forensic watermarking, whether visible or covert, embeds content provenance and ownership information into digital media files 5. This technique, in effect capable of turning an image into an invisible QR code, can withstand duplication and distribution, and plays a vital role in safeguarding media from misuse or copyright infringement. Watermarking methods vary by media type (audio, video, images, or digital documents) to account for each format's data characteristics, encoding needs, and resistance to compression or editing. Some watermarks are made "robust", designed to withstand efforts to manipulate a file; others are intentionally "fragile", designed to be destroyed or altered in order to signal that a file has been tampered with.
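For intuition, here is a toy sketch of a fragile least-significant-bit (LSB) watermark using NumPy. Production forensic watermarking uses far more sophisticated, compression-resistant encodings, so treat this only as an illustration of the embed/extract idea and of why a fragile mark breaks under editing.

```python
# Toy fragile watermark: hide payload bits in the least-significant bit of
# pixel values. Any edit that rewrites pixels destroys the mark, which is
# exactly the "fragile" behaviour described above. Illustrative only.
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first len(bits) pixels with the payload."""
    out = pixels.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # clear LSB, set bit
    return out.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the payload back out of the LSBs."""
    return pixels.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, size=128, dtype=np.uint8)

marked = embed(image, payload)
assert np.array_equal(extract(marked, payload.size), payload)

# Simulate an edit (a simple horizontal blur): the fragile mark breaks.
edited = ((marked.astype(np.uint16)
           + np.roll(marked, 1, axis=1).astype(np.uint16)) // 2).astype(np.uint8)
print("intact after edit?", np.array_equal(extract(edited, payload.size), payload))
```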
Visual search technology platforms can leverage computer vision to identify visually identical or similar components across multiple pictures or videos 6. Integrated with APIs for logo detection, object recognition, or text detection, this technology analyses attributes such as colour, object type, and markings to detect similarities between media. It can be especially useful for discovering unauthorised uses of intellectual property by scanning and flagging items for legal review. That said, post-hoc techniques for detecting AI-generated content are quickly being outpaced by advances in AI technologies.
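Vendor implementations vary, but one common building block for near-duplicate detection is a perceptual hash. Below is a hedged sketch of a difference hash (dHash) using NumPy and Pillow: visually similar images yield hashes with a small Hamming distance, which can flag candidate matches for review. The file paths and threshold are assumptions for illustration.

```python
# A minimal perceptual-hash (dHash) sketch for near-duplicate image
# detection. Real visual-search platforms combine many signals (embeddings,
# logo/object detectors, text detection); this is illustrative only.
import numpy as np
from PIL import Image

def dhash(image: Image.Image, hash_size: int = 8) -> np.ndarray:
    """Downscale, grayscale, and compare horizontally adjacent pixels."""
    small = image.convert("L").resize((hash_size + 1, hash_size))
    px = np.asarray(small, dtype=np.int16)
    return (px[:, 1:] > px[:, :-1]).ravel()  # hash_size**2 boolean bits

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits; a small distance suggests similar images."""
    return int(np.count_nonzero(a != b))

original = Image.open("original.jpg")    # hypothetical file paths
candidate = Image.open("candidate.jpg")

distance = hamming(dhash(original), dhash(candidate))
if distance <= 10:  # threshold is a tunable assumption
    print(f"Possible match (distance={distance}); flag for legal review")
```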
Lastly, AI-Aware Digital Authenticity relates closely to the burgeoning market and function of Trust & Safety 7: teams that focus on enforcing platform policies to ensure online authenticity, safety, and compliance 8. Modern multilingual, AI-aware trust & safety moderation services may combine semantic analysis, computer vision, and human expertise to monitor, detect, and respond to harmful online content (e.g., bullying, exploitation, disinformation, hate speech, profanity, fraud, and violent extremism) 9. Such solutions not only allow companies to prioritise and manage risks more effectively but also enhance the scalability of moderation processes. This capability is critical for protecting brand integrity and reputation, while also helping companies avoid non-compliance penalties under regulations like the GDPR or the UK's Online Safety Bill 10.
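While architectures differ across providers, a typical pattern routes content by model confidence: high-confidence harms are actioned automatically, uncertain cases are escalated to human reviewers, and the rest pass through. The sketch below shows that triage logic with a hypothetical classifier; the thresholds and category names are assumptions, not any provider's actual policy.

```python
# A hedged sketch of confidence-based triage in a moderation pipeline.
# `classify` stands in for a real multilingual model (text and/or vision);
# all thresholds and category names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Verdict:
    category: str   # e.g., "hate_speech", "fraud", "none"
    score: float    # model confidence in [0, 1]

def classify(content: str) -> Verdict:
    # Placeholder for a semantic-analysis / computer-vision model.
    raise NotImplementedError

def triage(verdict: Verdict,
           remove_at: float = 0.95,
           review_at: float = 0.60) -> str:
    """Route content: auto-remove, human review queue, or allow."""
    if verdict.category == "none":
        return "allow"
    if verdict.score >= remove_at:
        return "remove"          # high-confidence harm: act automatically
    if verdict.score >= review_at:
        return "human_review"    # uncertain: escalate to human expertise
    return "allow"               # low confidence: do not action

# Example: an uncertain hate-speech flag is escalated, not auto-removed.
print(triage(Verdict("hate_speech", 0.72)))  # -> "human_review"
```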
Who needs it?
| Prospective Customer | Est. WTP for Identity & Content Verification |
|---|---|
| High-Tech Manufacturers | Low |
| Data Collectors & Brokers | Medium |
| Cloud Compute Providers | Low |
| Frontier AI Labs | Low |
| AI Systems Developers | Low |
| Industry Enterprises | High |
AI-aware digital authenticity solutions, or identity and content verification technologies, are crucial for sectors aiming to safeguard online interactions, transactions, and the dissemination of information. The incentives for adopting digital authenticity solutions are significant, with an estimated $78 billion lost annually to disinformation 11. Potential savings and revenue protection from early adoption are substantial, especially as stringent regulations and industry standards loom. Authenticity technologies not only offer a compliance advantage but also serve as a strategic competitive edge, attracting more users and advertisers to platforms that uphold high standards. Absent the development and adoption of effective identity and content verification, the age of AI will see an increasing lack of trust, integrity, and authenticity in online environments. Such an outcome could decrease business and consumer spending, and slow national economic growth. This risk is making digital authenticity solutions a critical investment for forward-thinking organisations and nations.
Prospective buyers of these solutions tend to be concentrated in the AI use phase, that is, the post-deployment phase of AI. That said, there remains significant potential for rapid business scalability given the network benefits of highly connected, interoperable authenticity systems. Industries such as retail, gaming, dating, e-commerce, and online marketplaces will benefit from mitigating losses related to fake accounts and fraudulent transactions. Authenticity solution vendors can approach team members reporting to Chief Technology Officers or Chief Security Officers, as these departments are responsible for overseeing technological infrastructure and safeguarding against cyber threats. Financial institutions and FinTech companies are a heavily regulated sector often already equipped with Know Your Customer (KYC) solutions to combat identity theft and financial fraud; there, dedicated compliance officers and risk management teams would procure AI-aware Digital Authenticity solutions to counter novel threats from AI-driven fraudulent activity. In healthcare environments and at government agencies, data protection officers and information security teams will require modern authenticity tools to upgrade the safeguards around confidential health data or electoral information. Most notably, big tech companies operating social media platforms, gaming platforms, and news websites, in an era rife with mis- and disinformation, will require identity and content verification tools to manage multimedia content authenticity, maintain credibility, and secure advertising revenue by demonstrating their commitment to the truth. In these contexts, product managers or heads of "content integrity" would typically drive the business case for embedding improved authenticity solutions.
What else do investors & founders need to know?
The use of AI-enabled identity and content authentication tools may involve opaque, and possibly arbitrary, decisions about what is labelled authentic, truthful, or accurate. For instance, when online content refers to complex or indeterminate issues, it is often inappropriate to express clear-cut certainty about what counts as misinformation. Providers in this domain may be pressured, both by regulators and the public, to maximise transparency and interpretability in automated or procedural decision-making about validity or authenticity. This approach also aligns with the legislative emphasis on Explainable AI (XAI) 12, as highlighted by various guidelines, including NIST proposals 13, the U.S. Algorithmic Accountability Act 14, and ISO/IEC CD TS 6254 15.
When attempting to detect AI-generated fake or deceptive media, vendors may face uncertainty regarding content veracity or the legitimacy of user identities. Institutional pressure and even negligence might cause news sites, social media companies, and other online networks to withhold this uncertainty, or worse, to falsely authenticate AI-generated media as genuine. The absence of clear standards for disclosing uncertainty could result in an online social and informational landscape characterised by manipulation and mistrust. Recent regulatory frameworks, such as the Hiroshima Process 7 or the CAC Provisions on Deep Synthesis, could be interpreted to mandate the disclosure of uncertainty when companies are unable to determine provenance. Considering the affordances of generative AI, such disclosures will become increasingly critical to online issues such as the spread of mis- and disinformation 16. Likewise, there have been notable attempts to standardise watermarking solutions by the Digital Watermarking Alliance 17 and the Content Authenticity Initiative 18, both of which aim to make content ownership protection interoperable across different media and devices.
Finally, where applicable, identity verification solution providers in this domain must protect personally identifiable information (PII) through adherence to the US CCPA 19, the EU GDPR 20, or the Electronic Identification, Authentication and Trust Services (eIDAS) regulation 21. For example, the GDPR's data minimisation principle requires vendors to restrict data usage to what is strictly necessary for a specific objective 22. These regulations not only provide a framework for ethical and responsible handling of data but also aim to foster trust in AI-driven systems, thus promoting safeguards for individual data privacy rights.
Mt. Gox Cyber Attack
- Year: 2014
- Companies: Mt. Gox
- Industry: Cryptocurrency
The Mt. Gox cryptocurrency trading platform, once handling over 70% of the world's bitcoin trades, was the site of a pivotal event in the cryptocurrency realm when a massive hack led to the loss of 744,408 bitcoins from customers and an additional 100,000 bitcoins belonging to the company itself 23 24. This breach exposed not only the vulnerabilities within Mt. Gox's system but also the dire need for robust identity verification measures within any cryptocurrency exchange platform. As a consequence, it underscored the significance of stringent Know Your Customer (KYC) and Anti-Money Laundering (AML) laws aimed at preventing such fraudulent activities by ensuring stringent checks on the identities of individuals engaging in financial transactions. This event led to a reevaluation and tightening of regulatory requirements around the globe for cryptocurrency exchanges, aiming to protect investors and stabilise the financial market by removing similar vulnerabilities.
Russian Interference with US Elections
- Year: 2016
- Companies: Facebook
- Industry: Social Media
The 2016 Russian interference with the US elections through manipulative ads and posts on Facebook's social media platform highlighted the critical challenge of verifying content origin and legitimacy in the digital age 25 26. This incident prompted an increasing demand for mechanisms to validate content provenance and authenticate the identities behind content creation, without infringing on privacy or anonymity. It illuminated the urgent need for digital platforms to deploy advanced solutions capable of distinguishing between genuine and AI-generated or manipulated content, thereby safeguarding the integrity of information disseminated online. These events collectively signal a growing market for companies specialising in identity verification and content authenticity, highlighting the crucial role such entities play in ensuring the security and trustworthiness of online ecosystems.
Sample Companies from the Venture Landscape:
As at 15 April 2024: Of the 100 startups in our AI Assurance Tech landscape scan, we uncovered 11 offering AI-Aware Digital Authenticity solutions. These companies included 7 seed/early stage startups and 4 growth/late stage companies. Here's a brief sampling of those startups:
- "Making it simple for organizations to build, manage and present instantly verifiable digital credentials."
- "Access a full suite of content moderation services trusted by the leading brands and customized to your needs."
- "Trust and Safety Solutions for User and AI-Generated Content."
- "Secure your assets against leaks."
- "Empowering Originality & Inspiring Authenticity."
- "We are an open and decentralized network designed to ensure provenance for all types of creative works created by humans & AI."
The above companies may also offer products and services that fit in one of the other three solution domains. All relevant domain classifications and the full list of companies surfaced through our landscape scan can be reviewed in the Appendix: "AIAT Landscape Logo Map".
Footnotes
1. Roy, Abhishek, and Sunil Karforma. "A Survey on Digital Signatures and Its Applications." Journal of Computer and Information Technology 3, no. 1 (2012): 45-69. https://www.academia.edu/download/30180978/J.ofComp._I.T._45-d__(1)12_(1).pdf
2. Content Authenticity Initiative. "How It Works." Last modified 2024. https://contentauthenticity.org/how-it-works. Accessed April 11, 2024.
3. Strickland, Eliza. "This Election Year, Look for Content Credentials: Media Organisations Combat Deepfakes and Disinformation with Digital Manifests." IEEE Spectrum 61, no. 01 (2024): 24-27. https://doi.org/10.1109/MSPEC.2024.10380467
4. Lautert, Filipe, Daniel Fernandes Gonçalves Pigatto, and Luiz Celso Gomes-Jr. "Blockchain-based Data Provenance." In Anais do III Workshop em Blockchain: Teoria, Tecnologia e Aplicações, 120-125. SBC, 2020. https://doi.org/10.5753/wblockchain.2020.12975; and Numbers Protocol. "Solutions." Last modified 2023. https://www.numbersprotocol.io/. Accessed March 14, 2024.
5. Barni, Mauro, Franco Bartolini, Vito Cappellini, and Alessandro Piva. "Copyright Protection of Digital Images by Embedded Unperceivable Marks." Image and Vision Computing 16, no. 12-13 (1998): 897-906. https://doi.org/10.1016/S0262-8856(98)00058-4; Berghel, Hal, and Lawrence O'Gorman. "Protecting Ownership Rights through Digital Watermarking." Computer 29, no. 7 (1996): 101-103. https://doi.org/10.1109/2.511977; and Furon, Teddy. "A Survey of Watermarking Security." In Digital Watermarking, edited by Mauro Barni, Ingemar Cox, Ton Kalker, and Hyoung-Joong Kim, 201-215. Springer Berlin Heidelberg, 2005. https://doi.org/10.1007/11551492_16
6. Szeliski, Richard. Computer Vision: Algorithms and Applications. 2nd ed. Texts in Computer Science. Cham, Switzerland: Springer, 2022.
7. Perspective Economics. "Towards a Safer Nation: The United States 'Safety Tech' Market." Paladin Capital Group, 2022. https://www.paladincapgroup.com/wp-content/uploads/2022/01/US_Safety-Tech_Market_Report.pdf; and Perspective Economics. "The UK Safety Tech Sector: 2023 Analysis." Perspective Economics, 2023. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1160085/uk_safety_tech_analysis_2023.pdf
8. ActiveFence. "The Trust & Safety Glossary." Accessed April 19, 2024. https://www.activefence.com/the-trust-safety-glossary/#def-trust-safety
9. Microsoft Azure. "Public Preview: Azure AI Content Safety." Microsoft. May 24, 2023. https://azure.microsoft.com/en-us/updates/announcing-azure-ai-content-safety
10. "Overview of Expected Impact of Changes to the Online Safety Bill." GOV.UK. 18 January 2023. https://www.gov.uk/government/publications/online-safety-bill-supporting-documents/overview-of-expected-impact-of-changes-to-the-online-safety-bill
11. Castillo, Michelle. "Exclusive: Fake News Is Costing the World $78 Billion a Year." Cheddar. November 18, 2019. Accessed April 3, 2024. https://cheddar.com/media/exclusive-fake-news-is-costing-the-world-billion-a-year
12. Explainable AI (def.): "Explainable AI is a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models, natively integrated with a number of Google's products and services." [Google Cloud]
13. National Institute of Standards and Technology (NIST). SP 800-63A: Enrollment and Identity Proofing. https://doi.org/10.6028/NIST.SP.800-63a
14. U.S. Congress. Senate. S.3572, Algorithmic Accountability Act of 2022, 117th Cong., 2nd sess. (introduced in Senate February 3, 2022). https://www.congress.gov/bill/117th-congress/senate-bill/3572
15. Phillips, P. Jonathon, Carina A. Hahn, Peter C. Fontana, Amy N. Yates, Kristen Greene, David A. Broniatowski, and Mark A. Przybocki. "Four Principles of Explainable Artificial Intelligence." NIST Special Publication (SP) 1265. Gaithersburg, MD: National Institute of Standards and Technology, 2021. https://www.govinfo.gov/app/details/GOVPUB-C13-7848d8b02b0f9467e09670d6f7531430; and Panigutti, Cecilia, Ronan Hamon, Isabelle Hupont, Daniel Fernandez Llorca, Danilo Fano Yela, Henry Junklewitz, Sara Scalzo, Giuseppe Mazzini, Idoia Sanchez, Juan Soler Garrido, and Esperanza Gomez. "The Role of Explainable AI in the Context of the AI Act." In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), June 12-15, 2023, Chicago, IL, USA. New York, NY: ACM. https://doi.org/10.1145/3593013.3594069
16. Villasenor, John. "Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth." Brookings Institution Report, February 2019. https://www.brookings.edu/research/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/
17. Digital Watermarking Alliance. "Digital Watermarking Alliance." Accessed April 2, 2024. https://digitalwatermarkingalliance.org/
18. Content Authenticity Initiative. "How It Works." Last modified 2024. https://contentauthenticity.org/how-it-works. Accessed April 11, 2024.
19. California Consumer Privacy Act (CCPA) (def.): The California Consumer Privacy Act (CCPA), passed in June 2018, grants California residents rights over their personal information held by businesses and imposes obligations on companies regarding data privacy and transparency. [AB-375]
20. European Parliament and Council of the European Union. "Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)." Official Journal of the European Union L119 (May 4, 2016): 1-88.
21. Electronic Identification, Authentication and Trust Services Regulation (eIDAS) (def.): The Electronic Identification, Authentication and Trust Services (eIDAS) regulation, passed in July 2014, establishes a framework for electronic identification and trust services within the European Union, ensuring their legal recognition and cross-border interoperability. [Regulation (EU) No 910/2014]
22. "Article 5, General Data Protection Regulation." Official Journal of the European Union L119 (2016): 35-36.
23. Frunza, Marius-Cristian. Solving Modern Crime in Financial Markets: Analytics and Case Studies. Academic Press, 2015.
24. Bloomberg. "Mt. Gox Seeks Bankruptcy After $480 Million Bitcoin Loss." Last modified 2014. https://www.bloomberg.com/news/articles/2014-02-28/mt-gox-exchange-files-for-bankruptcy. Accessed March 20, 2024.
25. Schick, Nina. Deep Fakes and the Infocalypse: What You Urgently Need to Know. Hachette UK, 2020.
26. US Department of Justice, Office of Public Affairs. "Russian Project Lakhta Member Charged with Wire Fraud Conspiracy." Last modified 2020. https://www.justice.gov/opa/pr/russian-project-lakhta-member-charged-wire-fraud-conspiracy. Accessed March 20, 2024.