Google is preparing to introduce new labels in its search results that identify content generated or edited with artificial intelligence (AI). The move aims to increase transparency and help users make more informed decisions about the content they come across online, particularly as AI-generated material becomes more widespread.

In response to the growth of AI-generated content, Google announced that it will use technology from the Coalition for Content Provenance and Authenticity (C2PA), whose steering committee it sits on, to label content with information showing whether it was created or edited using AI tools.

Over the next few months, these labels will appear in products such as Google Search, Images, and Lens. When users come across images or other media that carry C2PA metadata, they will be able to open the “About this image” feature to see whether an image was generated or edited with AI. The feature aims to provide useful context about where an image came from, helping users better understand the content they view online.
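For readers curious about what this metadata looks like under the hood, the short Python sketch below checks a local image for the AI-related source types that C2PA manifests can declare. It assumes the open-source `c2patool` command-line tool is installed and prints a file's manifest store as JSON; the file name and the simple substring check are illustrative only, not a description of Google's own pipeline.

```python
import subprocess

# IPTC digitalSourceType URIs that C2PA manifests use to flag AI involvement.
AI_SOURCE_TYPES = (
    # fully AI-generated media
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    # media composited or edited with AI
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
)


def declares_ai(path: str) -> bool:
    """Return True if the file's C2PA manifest declares AI generation or editing.

    Assumes the open-source `c2patool` CLI is on PATH and prints the manifest
    store as JSON when given a file path.
    """
    result = subprocess.run(
        ["c2patool", path], capture_output=True, text=True, check=True
    )
    # A substring scan is enough for a yes/no signal here; a real checker would
    # parse the JSON, walk the assertions, and verify the manifest signatures.
    return any(uri in result.stdout for uri in AI_SOURCE_TYPES)


if __name__ == "__main__":
    print(declares_ai("example.jpg"))  # hypothetical local file
```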

AI-generated content | Image source: the Adobe Blog

For transparency, Google will also label AI-generated content in ads.

Google will extend its AI-content labeling to its ads systems as well. By integrating C2PA metadata, Google can check that ads containing AI-generated content comply with its policies. The step is designed to make ad enforcement more effective and the platform more trustworthy for both users and advertisers.

Google plans to label AI-generated videos on YouTube.

Google is considering adding labels to AI-generated or AI-edited videos on YouTube to give viewers greater transparency, with more updates on this feature expected later in the year. To underpin these labels, Google and its C2PA partners have developed Content Credentials, a technical standard that records how a piece of content was created and edited. These credentials can help verify whether a photo or video was captured by a specific camera, edited, or generated with AI, and they are designed to resist tampering so the record of a file's origin stays reliable.

Google is also broadening its transparency efforts by continuing to develop SynthID, a watermarking tool from Google DeepMind that embeds imperceptible marks to help identify AI-generated text, images, audio, and video. With these updates, Google aims to make online content clearer and more trustworthy as AI plays a larger role in creating media.
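As a rough illustration of how such a watermark is applied to text, the sketch below uses the open-source SynthID Text integration in Hugging Face Transformers (version 4.46 or later). The model name, watermark keys, and prompt are placeholder assumptions, not Google's production configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

model_name = "google/gemma-2-2b"  # assumption: any causal LM supported by Transformers works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is keyed: only a detector initialised with the same keys can
# later recognise the statistical signal embedded in the sampled tokens.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # illustrative key values
    ngram_len=5,
)

inputs = tokenizer(["Write a short product description."], return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # embeds the SynthID signal during sampling
    do_sample=True,
    max_new_tokens=60,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```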
