Assurances include watermarking AI-generated content, reporting on capabilities and risks, investing in safeguards against bias and more.
Some of the largest generative AI companies operating in the U.S. plan to watermark their content, a fact sheet from the White House revealed on Friday, July 21. Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI agreed to eight voluntary commitments around the use and oversight of generative AI, including watermarking.
The agreement follows a March statement in which the White House expressed concerns about the misuse of AI. It also comes as regulators work out procedures for managing generative artificial intelligence's impact on technology and the way people interact with it, a process underway since ChatGPT put AI content in the public eye in November 2022.
The eight AI safety commitments include:

1. Internal and external security testing of AI systems before their release.
2. Sharing information on managing AI risks across the industry and with governments, civil society and academia.
3. Investing in cybersecurity and insider-threat safeguards, particularly to protect unreleased model weights.
4. Facilitating third-party discovery and reporting of vulnerabilities.
5. Developing technical mechanisms, such as watermarking, to let users know when content is AI-generated.
6. Publicly reporting AI systems' capabilities, limitations and areas of appropriate and inappropriate use.
7. Prioritizing research on societal risks, including avoiding harmful bias and discrimination and protecting privacy.
8. Developing and deploying AI systems to help address society's greatest challenges.
The watermark commitment involves generative AI companies developing a way to mark text, audio or visual content as machine-generated; it will apply to any publicly available generative AI content created after the watermarking system is locked in. Because that system hasn't been created yet, it will be some time before a standard way to tell whether content is AI-generated becomes publicly available.
SEE: Hiring kit: Prompt engineer (TechRepublic Premium)
Former Microsoft Azure global vice president and current Cognite chief product officer Moe Tanabian supports government regulation of generative AI. In a conversation with TechRepublic, he compared the current era of generative AI with the rise of social media, including possible downsides like the Cambridge Analytica data privacy scandal and the spread of misinformation during the 2016 U.S. presidential election.
“There are a lot of opportunities for malicious actors to take advantage of [generative AI], and use it and misuse it, and they are doing it. So, I think, governments have to have some watermarking, some root of trust element that they need to instantiate and they need to define,” Tanabian said.
“For example, phones should be able to detect if malicious actors are using AI-generated voices to leave fraudulent voice messages,” he said.
“Technologically, we’re not disadvantaged. We know how to [detect AI-generated content],” Tanabian said. “Requiring the industry and putting in place those regulations so that there is a root of trust that we can authenticate this AI generated content is the key.”