Google said on Wednesday that political adverts on its platforms will be required to disclose when imagery or audio has been edited or generated using techniques such as artificial intelligence (AI). The change to Google's ad policy takes effect in November, almost a year before a US presidential election expected to be contentious, and comes amid growing concern that generative AI could be used to mislead voters.
"For years, we've provided additional levels of transparency for election ads," a Google spokeswoman told AFP. "Given the increasing prevalence of tools that generate synthetic content, we're expanding our policies to require advertisers to disclose when their election ads contain material that has been digitally altered or generated."
In June, an AFP Fact Check team found that visuals in a Ron DeSantis campaign ad targeting former US President Donald Trump had been produced using AI.
According to AFP Fact Check, the video, posted on X when the platform was still known as Twitter, included photographs edited to depict Trump embracing Anthony Fauci, a key member of the US coronavirus task force, and kissing him on the cheek. Google's advertising standards already prohibit the use of manipulated media to deceive or mislead people about politics, social issues, or matters of public concern.
Google's ad policy also prohibits demonstrably false claims that could undermine participation or trust in the electoral process. The company already requires political ads to disclose who paid for them, and it archives information about their messaging in an online ads library.
According to Google, the coming update will require election-related ads to "prominently disclose" when they contain "synthetic content" that depicts real or realistic-looking people or events. The technology giant said it will continue to invest in tools to detect and remove such content.
Google said disclosures of digitally altered content in election ads must be "clear and conspicuous," and placed where viewers are likely to notice them. Content that would warrant a label includes synthetic imagery or audio showing a person saying or doing something they did not say or do, or depicting an event that did not occur.
Google suggested labels such as "This image does not depict real events" or "This video content was synthetically generated."