Google wants to fight deepfakes with a special badge

In just a few short years, AI-generated deepfakes of celebrities and politicians have graduated from the confines of academic journals to trending pages on major social media sites. Misinformation experts warn these tools, when combined with strained moderation teams at social media platforms, could add a layer of chaos and confusion to an already contentious 2024 election season. 

Now, Google is officially adding itself to a rapidly growing coalition of tech and media companies working to standardize a digital badge that reveals whether or not images were created using generative AI tools. If rolled out widely, the “Content Credential” spearheaded by the Coalition for Content Provenance and Authenticity (C2PA) could help bolster consumer trust in the provenance of photos and video amid a rise in deceptive AI-generated political deepfakes spreading on the internet. Google will join the C2PA as a steering member this month, which puts it in the same company as Adobe, Microsoft, Intel, and the BBC. 

In an email, a Google spokesperson told PopSci that the company is currently exploring ways to use the standard in its suite of products and will have more to share “in the coming months.” The spokesperson said Google is already exploring incorporating Content Credentials into the “About this image” feature in Google Image search. Google’s support of these credentials could drive up their popularity, but their overall use remains voluntary in the absence of any binding federal deepfake legislation. That lack of consistency gives deepfake creators an advantage. 

What are Content Credentials?

The C2PA is a global standards body created in 2019 with the main goal of developing technical standards that certify who created a piece of digital content, as well as where and how it was made. Adobe, which led the Content Authenticity Initiative (CAI), and its partners were already concerned about the ways AI-generated media could erode public trust and amplify misinformation online years before massively popular consumer generative AI tools like OpenAI’s DALL-E gained momentum.

That concern catalyzed the creation of Content Credentials, a small badge companies and creators can choose to attach to an image’s metadata that discloses who created the image and when it was made. It also discloses to viewers whether or not the digital content was created using a generative AI model, even naming the particular model used, as well as whether the content was digitally edited or modified later. 

Content Credential supporters argue the tool creates a “tamper-resistant metadata” record that travels with digital content and can be verified at any point along its life cycle. In practice, most users will see this “icon of transparency” pop up as a small badge with the letters “CR” appearing in the corner of the image. Microsoft, Intel, ARM, and the BBC are also all members of the C2PA steering committee.

“With digital content becoming the de facto means of communication, coupled with the rise of AI-enabled creation and editing tools, the public urgently needs transparency behind the content they encounter at home, in schools, in the workplace, wherever they are,” Adobe General Counsel and Chief Trust Officer Dana Rao said in a statement sent to PopSci. “In a world where all digital content could be fake, we need a way to prove what’s true.” 

Users who come across an image tagged with a Content Credential can click on the badge to inspect when it was created and any edits that may have occurred since then. Each new edit is then bound to the photo or video’s original manifest, which travels with it across the web. 

If a reporter were to crop a photo that was previously edited using Photoshop, for example, both of those changes to the image would be noted in the final manifest. The CAI says the tool won’t prevent anyone from taking a screenshot of an image; however, that screenshot would not include the CAI metadata from the original file, which could be a hint to viewers that it is not the original. The symbol is visible on the image but is also included in its metadata, which, in theory, should prevent a troublemaker from using Photoshop or another editing tool to remove the badge. 
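To make that chain of custody concrete, here is a rough Python sketch of how an edit history might accumulate in a manifest. The field names and the record_edit helper are simplified stand-ins for illustration, not the actual C2PA manifest format or a real library:

```python
# Schematic illustration of how a Content Credentials manifest accumulates an
# edit history. The field names and helper below are simplified stand-ins,
# loosely modeled on the C2PA idea, not the exact format real tools produce.

manifest = {
    "claim_generator": "Adobe Firefly",  # tool that originally made the image
    "actions": [
        {
            "action": "created",
            "when": "2024-01-15T10:02:00Z",
            "digital_source_type": "trainedAlgorithmicMedia",  # i.e. generative AI
        },
    ],
}

def record_edit(manifest: dict, action: str, tool: str, when: str) -> None:
    """Append a new edit so the full history travels with the file's metadata."""
    manifest["actions"].append({"action": action, "tool": tool, "when": when})

# A Photoshop retouch followed by a reporter's crop both land in the same history.
record_edit(manifest, "edited", "Adobe Photoshop", "2024-01-16T09:30:00Z")
record_edit(manifest, "cropped", "Adobe Photoshop", "2024-02-01T14:12:00Z")

for entry in manifest["actions"]:
    print(entry["when"], entry["action"], entry.get("tool", manifest["claim_generator"]))
```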

If an image does not have a visible badge on it, users can copy it and upload it to the Content Credentials Verify site to inspect its credentials and see if it has been altered over time. If the media was edited in a way that didn’t meet the C2PA’s specification during some part of its life cycle, users will see a “missing” or “incomplete” marker. The Content Credential feature dates back to 2021. Adobe has since made it available to Photoshop users and creators producing images with Adobe’s Firefly AI image generator. Microsoft plans to use the badge with images created by its Bing AI image generators. Meta, which owns Facebook and Instagram, similarly announced it would add a new feature to let users disclose when they share AI-generated video or audio on its platforms. Meta said it would begin applying these labels “in the coming months.” 
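The “missing” and “incomplete” outcomes amount to a validation check on that manifest. The sketch below shows the general idea in Python; the verdict strings and fields are assumptions for illustration, not the Verify service’s real behavior:

```python
# Toy provenance check in the spirit of Content Credentials Verify. The
# statuses and field names are simplified assumptions for illustration, not
# the service's real API or output.

def check_credentials(manifest: dict | None) -> str:
    """Return a rough provenance verdict for a piece of media."""
    if manifest is None:
        # Screenshots and re-saved copies typically carry no manifest at all.
        return "missing: no Content Credentials found"
    history = manifest.get("actions", [])
    if not history:
        # Metadata is present, but the edit history can't be validated end to end.
        return "incomplete: edit history could not be verified"
    ai_made = any(
        step.get("digital_source_type") == "trainedAlgorithmicMedia"
        for step in history
    )
    return "verified (AI-generated content declared)" if ai_made else "verified"

# A bare screenshot arrives with no manifest:
print(check_credentials(None))
# An AI-generated image whose history survived intact:
print(check_credentials(
    {"actions": [{"action": "created", "digital_source_type": "trainedAlgorithmicMedia"}]}
))
```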

Why Google joining C2PA matters

Google’s involvement in the C2PA is important, first and foremost, because of the search giant’s massive digital footprint online. The company is already exploring ways of using these badges across its wide range of online products and services, which notably includes YouTube. The C2PA believes Google’s participation could put the credentials in front of more eyeballs, which could drive broader awareness of the tool as an actionable way to verify digital content, especially as political deepfakes and manipulated media gain traction online. Rao described Google’s partnership as a “watershed moment” for driving awareness of Content Credentials. 

“Google’s industry expertise, deep research investments, and global reach will help us strengthen our standard to address the most pressing issues around the use of content provenance and reach even more consumers and creators everywhere,” Rao said. “With support and adoption from companies like Google, we believe Content Credentials can become what we need: a simple, harmonized, universal way to understand content.” 

The partnership comes three months after Google announced it would use SynthID to attach a digital watermark to audio created using its DeepMind AI music model, Lyria. In that case, DeepMind says the audio watermark shouldn’t be audible to the human ear and shouldn’t disrupt a user’s listening experience. Instead, it should serve as a more transparent safeguard to protect musicians from AI-generated replicas of themselves or to prove whether a questionable clip is genuine or AI-generated. 

Deepfake-caused confusion could make the already contentious 2024 elections worse 

Tech and media companies are rushing to establish trusted ways to verify the provenance of digital media online ahead of what misinformation experts warn could be a mind-bending 2024 election cycle. Major political figures, like Republican presidential candidate Donald Trump and Florida Governor Ron DeSantis, have already used generative AI tools to attack each other. More recently in New Hampshire, AI vocal cloning technology was used to make it appear as if President Joe Biden was calling residents and urging them not to vote in the January primary election. The state’s attorney general’s office has since linked the robocalls to two companies based in Texas.

But the threats extend beyond elections, too. For years, researchers have warned that the rampant spread of increasingly convincing AI-generated deepfake images and videos online could lead to a phenomenon called the “Liar’s Dividend,” where consumers doubt whether anything they see online is actually as it seems. Lawyers, politicians, and police officers have already falsely claimed legitimate images and videos were AI-generated to try to win a case or seal a conviction. 

Content Credentials could help, but they lack teeth 

Even with Google’s support, Content Credentials remain entirely voluntary. Neither Adobe nor any regulatory body is forcing tech companies or their users to dutifully add provenance credentials to their content. And even if Google and Microsoft do use these markers to disclose content made using their own AI generators, nothing currently stops political bad actors from cobbling together a deepfake using other open-source AI tools and then trying to spread it via social media.

In the US, the Biden Administration has instructed the Commerce Department to create new guidelines for AI watermarking and safety standards that tech firms building generative AI models would have to adhere to. Lawmakers in Congress have also proposed federal legislation requiring AI companies to include identifiable watermarks on all AI-generated content, though it’s unclear whether that would work in practice. 

Tech companies are working quickly to put safeguards against deepfakes in place, but with a major presidential election less than seven months away, experts agree that confusing or misleading AI material will likely play some role.
