With election season around the corner, Google and YouTube are keeping a close eye on AI-altered political ads, a growing concern as campaigning gears up and political candidates lean into generative AI.
According to a new update to Google’s political content policy, any advertising materials featuring “synthetic” or artificially altered people, voices, or events must “prominently disclose” their use within the advertisement itself.
Google already bans deepfake content in advertising, but the expanded disclosure rules now apply to any AI alteration beyond minor edits, the Washington Post reported. The policy exempts synthetic content altered or generated in a way that’s “inconsequential to the ad’s claims,” and AI can still be used for routine video and photo editing, such as image resizing, cropping, color correction, defect correction, or background edits.
Political ads and their intersection with Big Tech are becoming a significant part of the upcoming 2024 election. Elon Musk recently announced that X (formerly Twitter) will once again allow political ads from candidates and political parties — reversing a sweeping, four-year-old ban on all political ads — just as platform users report a rise in unlabeled advertisements appearing across their feeds.
A September report from Media Matters for America found that Meta platforms are failing to enforce the company’s political ads policy, citing unlabeled right-wing advertisements appearing on Facebook and Instagram.
Google’s new policy takes effect in November and applies to election ads on Google’s platforms, including YouTube, as well as third-party sites that are part of the company’s ad network.