Google Is Getting Thousands of Deepfake Porn Complaints

The number of nonconsensual deepfake porn videos online has exploded since 2017. As the harmful videos have spread, thousands of women—including Twitch streamers, gamers, and other content creators—have complained to Google about websites hosting the videos and tried to get the tech giant to remove them from its search results.

A WIRED analysis of copyright claims regarding websites that host deepfake porn videos reveals that thousands of takedown requests have been made, with the frequency of complaints increasing. More than 13,000 copyright complaints—encompassing almost 30,000 URLs—have been made to Google concerning content on a dozen of the most popular deepfake websites.

The complaints, which have been made under the Digital Millennium Copyright Act (DMCA), have resulted in thousands of nonconsensual videos being delisted from Google’s search results. Two of the most prominent deepfake video websites have been the subject of more than 6,000 and 4,000 complaints each, data published by Google and Harvard University’s Lumen database shows. Across all the deepfake platforms analyzed, around 82 percent of complaints resulted in URLs being removed from Google, the company’s copyright transparency data shows.

Millions of people find and access deepfake video websites by searching for deepfakes, often alongside the names of celebrities or content creators. WIRED is not naming the specific websites to limit the exposure they receive. However, lawyers and companies combating deepfakes online, including by systematically making DMCA complaints, say the number of copyright complaints and high percentage of removals are a sign that Google should take more action against the specific websites. This should include removing them from search results entirely, they say.

“If the sole purpose of these websites is to abuse and manipulate a person’s personal brand, or take their autonomy away from them, or host simple revenge porn, they shouldn’t be there,” says Dan Purcell, the founder and CEO of Ceartas, a firm that helps creators remove their content when it is being used without permission.

For the biggest deepfake video website alone, Google has received takedown requests for 12,600 URLs, 88 percent of which have been removed from its search results. Purcell says that given the large volume of offending content, the tech company should be examining why the site is still in search results. “If you remove 12,000 links for infringement, why are they not just completely removed?” He adds: “They should not be crawled. They’re of no public interest.”

In the seven years since nonconsensual deepfake porn videos first emerged, tech companies and lawmakers have been slow to act. At the same time, machine learning improvements have made it easier to create deepfakes. Today, explicit deepfake content takes a few forms: videos where a person’s face is swapped onto existing consensual pornography, apps that can “undress” a person or place their face onto a nude image, and generative AI tools that can create entirely new explicit deepfake images, such as the images of Taylor Swift that spread online in January.

Each method is weaponized—almost always against women—to degrade, harass, or cause shame, among other harms. Julie Inman Grant, Australia’s eSafety commissioner, says her office is starting to see more deepfakes reported to its image-based abuse complaints scheme, alongside other AI-generated content, such as “synthetic” child sexual abuse material and children using apps to create sexualized videos of their classmates. “We know it’s a really underreported form of abuse,” Inman Grant says.

As the number of videos on deepfake websites has grown, content creators—such as streamers and adult models—have turned to DMCA takedown requests. The DMCA allows people who own the intellectual property of certain content to request that it be removed from the websites directly or from search results. More than 8 billion takedown requests, covering everything from gaming to music, have been made to Google.

“The DMCA historically has been an important way for victims of image-based sexual abuse to get their content removed from the internet,” says Carrie Goldberg, a victims’ rights attorney. Goldberg says newer criminal laws and civil law procedures make it easier to get some image-based sexual abuse removed, but deepfakes complicate the situation. “While platforms tend to have no empathy for victims of privacy violations, they do respect copyright laws,” Goldberg says.

WIRED’s analysis of deepfake websites, which covered 14 sites, shows that Google has received DMCA takedown requests about all of them in the past few years. Many of the websites host only deepfake content and often focus on celebrities. The websites themselves include DMCA contact forms where people can directly request to have content removed, although they do not publish any statistics, and it is unclear how effective they are at responding to complaints. One website says it contains videos of “actresses, YouTubers, streamers, TV personas, and other types of public figures and celebrities.” It hosts hundreds of videos with “Taylor Swift” in the video title.

The vast majority of DMCA takedown requests linked to deepfake websites listed in Google’s data relate to two of the biggest sites. Neither responded to written questions sent by WIRED. For most of the 14 websites, more than 80 percent of complaints led to content being removed by Google. Some copyright takedown requests sent by individuals indicate the distress the videos can cause. “It is done to demean and bully me,” one request says. “I take this very seriously and I will do anything and everything to get it taken down,” another says.

“It has such a huge impact on someone’s life,” says Yvette van Bekkum, the CEO of Orange Warriors, a firm that helps people remove leaked, stolen, or nonconsensually shared images online, including through DMCA requests. Van Bekkum says the organization is seeing an increase in deepfake content online, and victims face hurdles coming forward to ask that their content be removed. “Imagine going through a hiring process and people Google your name, and they find that kind of explicit content,” van Bekkum says.

Google spokesperson Ned Adriance says its DMCA process allows “rights holders” to protect their work online, and that the company has separate tools for dealing with deepfakes—including a dedicated form and removal process. “We have policies for nonconsensual deepfake pornography, so people can have this type of content that includes their likeness removed from search results,” Adriance says. “And we’re actively developing additional safeguards to help people who are affected.” Google says that when it receives a high volume of valid copyright removals about a website, it uses those as a signal that the site may not be providing high-quality content. The company also says it has created a system to remove duplicates of nonconsensual deepfake porn once it has removed one copy, and that it has recently updated its search results to limit the visibility of deepfakes when people aren’t searching for them.

The DMCA is an imperfect tool, particularly when it comes to deepfakes. Goldberg says it requires someone to “affirm under penalty of perjury” that they are the copyright holder of the video or images. “But the process of creating a deepfake can transform the image so much that the resulting image is not the same intellectual property as the images it was sourced from,” Goldberg says. Ultimately, this means the person who created the deepfake video may hold the copyright to the abusive content. “Our firm has long advocated that the copyrighting of illegal works should revert to the victims so they can exercise control over them,” Goldberg says. “But the law has not yet caught up.”

Purcell, from Ceartas, says that the law is “not fit for purpose” and can be very “easily weaponized” by deepfake websites filing counternotices against DMCA requests. “At that point, the only option is to sue them,” Purcell says, which raises its own problems. The websites often don’t include contact details or information about who created them, and they can be based in countries with difficult legal regimes. “They will do anything to stay anonymous. They will hide themselves,” van Bekkum says. “They are using, on purpose, hosting companies offshore.”

Inman Grant, the Australian regulator, says her office works with technology platforms and can also issue orders for content to be removed. The office has also targeted individuals who upload videos to deepfake websites. Last year, Inman Grant’s office pursued a man who uploaded images of Australian public figures to one of the largest deepfake websites. He was ordered to remove the material and delete it from his devices.

“I am not a resident of Australia. The removal notice means nothing to me. Get an arrest warrant if you think you are right,” he emailed back after receiving the legal order, according to court documents. Months later, border officials told Inman Grant that the man had entered Australia, and he was ultimately charged with contempt of court.

The incident was a rare example of a regulator or law enforcement body taking successful action against someone creating deepfakes. Adam Dodge, a lawyer and founder of Endtab (Ending Technology-Enabled Abuse), says technology companies should fund greater education efforts in schools and communities about the harms of creating and sharing deepfakes, while legislators should put in place laws that don’t push the burden of getting content removed onto victims. “It needs to be considered as harmful, as devastating, and as important to internet safety as other forms of banned or criminal or regulated material that people find to be abhorrent and unacceptable,” Dodge says.

Matt Burgess
