In a groundbreaking move to combat the rising threat of deepfakes and disinformation, major AI players, including OpenAI, Google, and Meta Platforms, have united under voluntary commitments to implement watermarking technology for AI-generated content. The commitments, made in response to growing concerns about the potential misuse of AI tools, were announced by the Biden administration, which aims to make the technology safer and more trustworthy for users.
While it is not yet clear exactly how the watermark will work, the administration expects it to be embedded in the content itself so that users can trace its origin to the AI tool used to generate it.
The voluntary commitments were made by seven prominent AI companies: OpenAI, Microsoft, Google, Meta, Amazon, Anthropic, and Inflection. The primary objective of watermarking AI-generated content is to ensure that users can easily trace the origins of the content and verify its authenticity. This proactive step aims to safeguard users from being misled or deceived by maliciously generated AI content, be it text, video, audio, or images.
While the exact workings of the watermarking technology have not been detailed, it is expected to be seamlessly embedded within the AI-generated content. This embedded watermark will allow users to identify the AI tools responsible for creating the content, thereby reducing the potential for disinformation and malicious use.
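Since the companies have not disclosed how their watermarks will work, the following is only an illustrative sketch, not any provider's actual scheme. It assumes a simple design in which the provider attaches a signed provenance tag to generated content: the tag names the generating tool, binds itself to the content via a hash, and is authenticated with a provider-held key, so anyone with a verification endpoint can check both origin and integrity. All names here (`watermark_tag`, `verify_tag`, `ExampleImageModel`) are hypothetical.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical provider-held signing key; real schemes would use proper
# key management (and likely public-key signatures rather than HMAC).
SECRET_KEY = b"provider-signing-key"

def watermark_tag(content: bytes, tool_name: str) -> str:
    """Produce a provenance tag binding the content to the generating tool."""
    payload = json.dumps({
        "tool": tool_name,
        "digest": hashlib.sha256(content).hexdigest(),
    })
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    record = json.dumps({"payload": payload, "sig": sig})
    return base64.b64encode(record.encode()).decode()

def verify_tag(content: bytes, tag: str) -> bool:
    """Check that the tag is authentic and still matches the content."""
    record = json.loads(base64.b64decode(tag))
    payload, sig = record["payload"], record["sig"]
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tag was forged or altered
    return json.loads(payload)["digest"] == hashlib.sha256(content).hexdigest()

content = b"AI-generated image bytes"
tag = watermark_tag(content, "ExampleImageModel")
print(verify_tag(content, tag))            # True: authentic, untampered
print(verify_tag(b"edited bytes", tag))    # False: content no longer matches
```

A metadata tag like this is easy to strip, which is why real proposals also discuss watermarks embedded directly into the pixels or audio samples; the sketch only shows the verification idea.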
In a statement, OpenAI said, “Audiovisual content that is readily distinguishable from reality or that is designed to be readily recognizable as generated by a company’s AI system—such as the default voices of AI assistants—is outside the scope of this commitment.”
Deepfakes, in particular, have emerged as a pressing concern for both Internet users and policymakers. An incident involving the image generator Midjourney, which was used to create fake images of Donald Trump's arrest, highlights the need for such measures. Had a watermark been present, viewers could have immediately recognized the images as AI-generated, and their creator, Eliot Higgins, might have avoided the fallout that followed. The episode underscores the role watermarking can play in responsible content creation and sharing.
The implications of AI-generated disinformation go beyond mere pranksterism. Scammers have taken advantage of AI voice-generating software to defraud individuals of significant amounts of money, while the FBI has issued warnings about the growing use of AI-generated deepfakes in sextortion schemes. These malicious applications underscore the urgency of implementing measures to protect Internet users from potential harm.
AI companies, including OpenAI and Google, have embraced the White House’s call for proactive measures to enhance AI safety and security. OpenAI, in a blog post, pledged to develop robust mechanisms, including watermarking systems, for audio and visual content. Moreover, they plan to provide tools and APIs that can determine whether specific content originated from their AI systems.
According to President Biden, “We’ll see more technology change in the next 10 years, or even in the next few years than we’ve seen in the last 50 years. That has been an astounding revelation to me, quite frankly.”
Certain exceptions apply, however: default voices of AI assistants, which are already readily recognizable as generated by a company's AI system, will not be subject to watermarking. Google, for its part, has committed not only to watermarking but also to integrating metadata and other innovative techniques to promote trustworthy information.
This united effort by major AI players is commendable, as it signals the acknowledgment of the need for industry-wide collaboration to address AI misuse effectively. President Joe Biden’s meeting with tech companies further demonstrates the administration’s commitment to tackling AI-related challenges and developing comprehensive legislation to regulate the technology.
While the watermarking commitment is an essential step in the right direction, the companies have also made a range of other voluntary commitments to bolster AI safeguards. These commitments include conducting rigorous internal and external testing of AI systems before release, investing more in cybersecurity, and fostering information sharing across the industry to reduce AI risks.
Moreover, tech companies have pledged to protect users’ privacy as AI continues to evolve and ensure that the technology remains free from bias and discrimination against vulnerable groups. Additionally, they are committed to using AI for scientific advancements, such as medical research and climate change mitigation.
The move by these AI titans to establish watermarking technology and pursue responsible AI governance sets a significant precedent for the industry globally. It underscores the critical importance of maintaining AI’s potential for creativity and innovation while safeguarding against its potential for misuse and harm.
As AI technology advances rapidly, it is paramount that regulatory efforts keep pace with the transformative changes it brings. The EU has already taken steps to address AI regulation, including requirements for disclosure of AI-generated content and safeguards against illegal content. The U.S. can take inspiration from such initiatives to develop comprehensive legislation that ensures responsible and secure AI adoption.
President Biden’s leadership in bringing the tech industry together to formulate concrete steps for AI’s safe, secure, and beneficial deployment is laudable. The evolving landscape of AI technology demands vigilance and collaboration from all stakeholders to navigate the future of AI responsibly.
As the AI industry unites to fight deepfakes and disinformation through watermarking, the promise of AI remains bright, with the potential to drive significant advancements in various fields. By taking meaningful and effective steps to govern AI responsibly, AI companies can usher in an era of technology that is not only cutting-edge but also safe, trustworthy, and beneficial for all.