President Trump signs Take It Down Act, addressing nonconsensual
deepfakes. What is it?
[May 20, 2025]
By BARBARA ORTUTAY
President Donald Trump on Monday signed the Take It Down Act, bipartisan
legislation that enacts stricter penalties for the distribution of
non-consensual intimate imagery, sometimes called “revenge porn,” as
well as deepfakes created by artificial intelligence.
The measure, which goes into effect immediately, was introduced by Sen.
Ted Cruz, a Republican from Texas, and Sen. Amy Klobuchar, a Democrat
from Minnesota, and later gained the support of First Lady Melania
Trump. Critics of the measure, which addresses both real and artificial
intelligence-generated imagery, say the language is too broad and could
lead to censorship and First Amendment issues.
What is the Take It Down Act?
The law makes it illegal to “knowingly publish” or threaten to publish
intimate images without a person's consent, including AI-created “deepfakes.”
It also requires websites and social media companies to remove such
material within 48 hours of notice from a victim. The platforms must
also take steps to delete duplicate content. Many states have already
banned the dissemination of sexually explicit deepfakes or revenge porn,
but the Take It Down Act is a rare example of federal regulators
imposing requirements on internet companies.

Who supports it?
The Take It Down Act has garnered strong bipartisan support and has been
championed by Melania Trump, who lobbied on Capitol Hill in March saying
it was “heartbreaking” to see what teenagers, especially girls, go
through after they are victimized by people who spread such content.
Cruz said the measure was inspired by Elliston Berry and her mother, who
visited his office after Snapchat refused for nearly a year to remove an
AI-generated “deepfake” of the then 14-year-old.
Meta, which owns and operates Facebook and Instagram, supports the
legislation.
“Having an intimate image – real or AI-generated – shared without
consent can be devastating and Meta developed and backs many efforts to
help prevent it,” Meta spokesman Andy Stone said in March.
The Information Technology and Innovation Foundation, a tech
industry-supported think tank, said in a statement following the bill's
passage last month that it “is an important step forward that will help
people pursue justice when they are victims of non-consensual intimate
imagery, including deepfake images generated using AI.”
“We must provide victims of online abuse with the legal protections they
need when intimate images are shared without their consent, especially
now that deepfakes are creating horrifying new opportunities for abuse,”
Klobuchar said in a statement. “These images can ruin lives and
reputations, but now that our bipartisan legislation is becoming law,
victims will be able to have this material removed from social media
platforms and law enforcement can hold perpetrators accountable.”

Klobuchar called the law's passage “a major victory for victims of
online abuse” and said it gives people “legal protections and tools
for when their intimate images, including deepfakes, are shared
without their consent, and enabling law enforcement to hold
perpetrators accountable.”
“This is also a landmark move towards establishing common-sense
rules of the road around social media and AI,” she added.
Cruz said “predators who weaponize new technology to post this
exploitative filth will now rightfully face criminal consequences,
and Big Tech will no longer be allowed to turn a blind eye to the
spread of this vile material.”
What are the censorship concerns?
Free speech advocates and digital rights groups say the bill is too
broad and could lead to the censorship of legitimate images,
including legal pornography and LGBTQ content, as well as government
critics.
“While the bill is meant to address a serious problem, good
intentions alone are not enough to make good policy,” said the
nonprofit Electronic Frontier Foundation, a digital rights advocacy
group. “Lawmakers should be strengthening and enforcing existing
legal protections for victims, rather than inventing new takedown
regimes that are ripe for abuse.”
The takedown provision in the bill “applies to a much broader
category of content — potentially any images involving intimate or
sexual content” than the narrower definitions of non-consensual
intimate imagery found elsewhere in the text, EFF said.
“The takedown provision also lacks critical safeguards against
frivolous or bad-faith takedown requests. Services will rely on
automated filters, which are infamously blunt tools,” EFF said.
“They frequently flag legal content, from fair-use commentary to
news reporting. The law’s tight time frame requires that apps and
websites remove speech within 48 hours, rarely enough time to verify
whether the speech is actually illegal.”
As a result, the group said online companies, especially smaller
ones that lack the resources to wade through a lot of content, “will
likely choose to avoid the onerous legal risk by simply depublishing
the speech rather than even attempting to verify it.”

The measure, EFF said, also pressures platforms to “actively monitor
speech, including speech that is presently encrypted” to address
liability threats.
The Cyber Civil Rights Initiative, a nonprofit that helps victims of
online crimes and abuse, said it has “serious reservations” about
the bill. It called its takedown provision “unconstitutionally vague,
unconstitutionally overbroad, and lacking adequate safeguards
against misuse.”
For instance, the group said, platforms could be obligated to remove
a journalist’s photographs of a topless protest on a public street,
photos of a subway flasher distributed by law enforcement to locate
the perpetrator, commercially produced sexually explicit content or
sexually explicit material that is consensual but falsely reported
as being nonconsensual.
All contents © copyright 2025 Associated Press. All rights reserved.