Google took a step towards transparency in AI-generated images today. Google DeepMind announced SynthID, a watermarking and identification tool for generative art. The company says the technology embeds a digital watermark, invisible to the human eye, directly into an image’s pixels. SynthID is rolling out first to “a limited number” of customers using Imagen, Google’s art generator available on its suite of cloud-based AI tools.
One of the many issues with generative art — apart from the ethical implications of training on artists’ work — is the potential for creating deepfakes. For example, the pope’s hot new hip-hop attire (an AI image created with Midjourney) going viral on social media was an early example of what could become more commonplace as generative tools evolve. It doesn’t take much imagination to see how political ads built on AI-generated art could do far more damage than a funny image circulating on Twitter. “Watermarking audio and visual content to help make it clear that content is AI-generated” was one of the voluntary commitments that seven AI companies agreed to develop after a July meeting at the White House. Google is the first of those companies to launch such a system.
Google doesn’t go too far into the weeds about SynthID’s technical implementation (likely to prevent workarounds), but it says the watermark can’t be easily removed through simple editing techniques. “Finding the right balance between imperceptibility and robustness to image manipulations is difficult,” the company wrote in a DeepMind blog post published today. “We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colours, and saving with various lossy compression schemes — most commonly used for JPEGs,” DeepMind’s SynthID project leaders Sven Gowal and Pushmeet Kohli wrote.
The identification part of SynthID rates an image at one of three digital watermark confidence levels: detected, not detected and possibly detected. Since the watermark is embedded in the image’s pixels, Google says its system can work alongside metadata-based approaches, like the one Adobe uses with its Photoshop generative features, currently available in an open beta.
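Google hasn’t published SynthID’s detector API or the thresholds behind those verdicts, but the three-level scheme can be sketched as a simple mapping from a detector confidence score to a label. The function name, score range and cutoff values below are illustrative assumptions, not Google’s actual implementation:

```python
def classify_watermark(score: float,
                       detect_threshold: float = 0.9,
                       possible_threshold: float = 0.5) -> str:
    """Map a hypothetical detector confidence score in [0, 1] to one of
    the three verdicts Google describes. The thresholds are invented for
    illustration; SynthID's real cutoffs are not public."""
    if score >= detect_threshold:
        return "detected"
    if score >= possible_threshold:
        return "possibly detected"
    return "not detected"

print(classify_watermark(0.95))  # detected
print(classify_watermark(0.70))  # possibly detected
print(classify_watermark(0.10))  # not detected
```

The middle “possibly detected” band matters in practice: a heavily edited or recompressed image may weaken the embedded signal without erasing it, so a binary yes/no verdict would force the detector to overcommit.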
SynthID includes a pair of deep learning models: one for watermarking and the other for identifying. Google says the two were trained together on a diverse set of images, culminating in a combined ML model. “The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content,” Gowal and Kohli wrote.
Google acknowledged that SynthID isn’t a perfect solution, adding that it “isn’t foolproof against extreme image manipulations.” But it describes the watermark as “a promising technical approach for empowering people and organisations to work with AI-generated content responsibly.” The company says the tool could expand to other AI models, including those tasked with generating text (like ChatGPT), video and audio.
Although watermarks could help with deepfakes, it’s easy to imagine digital watermarking turning into an arms race with hackers, with services that adopt SynthID requiring continual updates. In addition, the open-source nature of Stable Diffusion, one of the leading generative tools, could make industry-wide adoption of SynthID or any similar solution a tall order: countless custom builds that run on local PCs are already out in the wild. Regardless, Google hopes to make SynthID available to third parties “in the near future” to at least improve AI transparency industry-wide.