MOUNTAIN VIEW, Calif. — Google has been adjusting its search engine to downrank any websites featuring a high volume of content that has been subject to deepfake takedown requests.

The market-leading search engine has been adjusted “to reduce the prevalence of sexually explicit fake content high in results, responding to the explosion in non-consensual unsavory content people have created using generative artificial intelligence (AI) tools,” Bloomberg reported last week, after Google published a blog post outlining the changes.

When Google accepts a deepfake takedown request, it will now filter all explicit results on similar searches and remove duplicates of the reported content. As a result, explicit searches are tuned to prioritize adult content pages that do not host deepfakes.

Google has specified that it intends to be careful in the way it purges explicit content, to make sure that non-consensual content like deepfakes is filtered, but legal, consensual adult content is not.

Google Product Manager Emma Higham told Bloomberg, “We’ve long had policies to enable people to remove this content if they find it in search, but we’re in the middle of a technology shift. As with every technology shift, we’re also seeing new abuses.”

The change follows earlier press reports focused on the site MrDeepfakes.com, which relied on its privileged position at the top of Google Search results as one of its main traffic funnels.

According to Bloomberg, since the adjustments were implemented in the spring, “U.S.-based search traffic to the top two deepfake pornography websites plummeted.”

Higham added that the company has to be careful about “not taking too blunt an approach and having unintended consequences on access to information. But when we’re seeing high-risk queries and a high risk of fake explicit content showing up non-consensually, we are taking strong action now.”