Google will start requiring political advertisements on its platforms to disclose the use of artificial intelligence (AI) in generating synthetic imagery, audio, or video content. This new policy is set to go into effect in November 2023, approximately one year before the next US presidential election.
Policy Aims to Provide Transparency in Election Advertising
The implementation comes in response to growing concerns over the proliferation of AI tools that can create increasingly realistic fake media, known as "deepfakes." With the 2024 election nearing, there are fears such technologies could be exploited to mislead voters through political ads.
Google currently prohibits election-related ads from manipulating media to deceive people about politics, social issues, or matters of public concern. Demonstrably false claims undermining trust in elections are also banned. However, the rise of generative AI has prompted the company to expand its policies.
New Disclosure Requirement
Under the new rules, political ads containing digitally altered or AI-generated content that depicts events or portrays individuals must prominently disclose that fact. Suggested language includes phrases like "This image does not depict real events" or "This audio was synthetically generated."
The disclosures must be sufficiently clear and conspicuous, placed where users are likely to notice them. This could apply to imagery of politicians doing or saying things they did not, or to depictions of hypothetical future scenarios generated through AI.
Google will make exceptions for minor edits like cropping, color correction, and other alterations that do not fundamentally misrepresent reality. The requirements focus specifically on content portraying realistic people or events.
Concerns Around Deepfakes
Requiring the disclosure of synthetic content in ads addresses growing concerns that AI-generated deepfakes could be used to mislead voters in the 2024 US presidential election.
Some political campaigns are already experimenting with AI to create political ads. A campaign video for Ron DeSantis, for example, used AI-generated images in an attack on former US President Donald Trump.
Lawmakers have started discussing potential regulations around AI-generated political content. Google's move aims to increase transparency rather than ban deepfakes outright.
Google says it will invest heavily in technology to detect synthetic content. It plans to remove ads with misleading AI-generated content, even if disclosures are present.
While not banning deepfakes completely, Google's new policy represents an important step toward mitigating disinformation in political advertising. The disclosure requirement creates more accountability for advertisers using AI-generated content.