TL;DR
A Yale Law School clinic has sued to force the shutdown of ClothOff, an app that uses AI to generate non-consensual sexual imagery, after a New Jersey teen's Instagram photos were turned into explicit deepfakes. The case illustrates the practical and legal hurdles in holding platform operators accountable for deepfake sexual content, especially across international borders.
What happened
For more than two years, an app called ClothOff has been used to generate non-consensual sexual images of young women. Although the app was removed from major mobile app stores and is banned on many social platforms, it remains accessible via the web and a Telegram bot. In October, a clinic at Yale Law School filed a civil suit on behalf of an anonymous New Jersey high school student whose Instagram photos were altered into explicit images; she was 14 when the original photos were taken, which makes the AI-generated images child sexual abuse material (CSAM) under the law. The complaint alleges the service is incorporated in the British Virgin Islands and is believed to be run by a brother and sister in Belarus, complicating efforts to serve the defendants and obtain evidence. Local law enforcement declined to prosecute, citing evidentiary challenges, and the clinic is now attempting to serve notice on the operators and obtain a court judgment, amid broader questions about platform liability and free-speech protections for general-purpose AI.
Why it matters
- Victims of AI-enabled non-consensual imagery face barriers to enforcement even when content meets legal standards for CSAM.
- Cross-border corporate structures and anonymous hosting make it hard for plaintiffs to identify and serve platform operators.
- Existing laws that ban deepfake pornography can be difficult to apply to general-purpose AI services without clear evidence of intent.
- Regulatory and legal gaps leave victims reliant on slow civil litigation or inconsistent criminal responses.
Key facts
- ClothOff has been active for over two years and is still reachable on the web and via a Telegram bot despite removals from major app stores.
- Yale Law School’s clinic filed a lawsuit in October representing an anonymous New Jersey high school student whose images were altered.
- The plaintiff was 14 when the original Instagram photos were taken, which classifies the altered images as child sexual abuse material (CSAM).
- Local authorities declined to pursue criminal charges in the plaintiff’s case, citing difficulties obtaining evidence from suspects’ devices.
- The complaint states ClothOff is incorporated in the British Virgin Islands and is believed to be operated by a brother and sister in Belarus.
- U.S. laws such as the Take It Down Act ban deepfake pornography, but proving platform-level liability often requires showing intent or reckless disregard.
- xAI’s Grok and other general-purpose AI tools pose different legal challenges because they can be used for many lawful purposes, complicating claims of liability.
- Several countries have taken regulatory steps against AI chatbots alleged to generate sexual imagery — Indonesia and Malaysia blocked access to Grok, and the U.K. has opened an investigation; the EU, France, Ireland, India and Brazil have taken preliminary actions.
- No U.S. regulatory agency has issued an official response to the recent wave of AI-generated non-consensual imagery, according to the reporting.
What to watch next
- Whether Yale Law School’s clinic is able to serve the defendants and secure a court judgment against ClothOff.
- How courts handle claims of intent or recklessness against operators of general-purpose AI tools.
- Outcomes of regulatory investigations into xAI/Grok in the U.K. and any actions by other national regulators.
- Whether U.S. law enforcement or regulators take formal steps in response to the recent wave of AI-generated non-consensual imagery.
Quick glossary
- Deepfake: Media—often images or video—synthetically altered or generated by AI to depict people saying or doing things they did not actually do.
- Child Sexual Abuse Material (CSAM): Images or video depicting the sexual abuse or exploitation of minors; producing, distributing, or possessing such material is illegal in most jurisdictions.
- Take It Down Act: U.S. legislation referenced in coverage that addresses non-consensual deepfake sexual content; specifics depend on statutory language and enforcement practice.
- First Amendment: The U.S. constitutional protection for freedom of speech; courts balance it against criminal prohibitions such as those covering CSAM.
Reader FAQ
Is ClothOff still available?
Yes. The app was removed from major app stores and banned on many social platforms, but it remains accessible via the web and a Telegram bot.
Did prosecutors charge anyone over the altered images?
Local authorities declined to pursue criminal charges in the case described, citing difficulty obtaining evidence from suspects’ devices.
Does U.S. law prohibit deepfake pornography?
Yes — laws such as the Take It Down Act address deepfake pornography, though applying them to platforms and general-purpose AI can be legally complex.
Is xAI or Elon Musk confirmed to be responsible for producing illegal imagery?
Not confirmed in the source. The coverage notes legal and regulatory scrutiny of xAI/Grok and cites reports that Musk directed changes to safeguards, but it does not establish legal responsibility.

Sources
- A New Jersey lawsuit shows how hard it is to fight deepfake porn
- States race to restrict deepfake porn as it becomes easier …
- Why New Jersey is joining the battle against deepfakes
- Deepfake Charges & Penalties in New Jersey