
San Francisco Wages Legal Battle Against AI Deepfake Nude Websites

Saturday 17 August 2024 - 11:40

In a groundbreaking move, San Francisco has launched a legal offensive against websites that use artificial intelligence to create non-consensual nude images, commonly known as deepfakes. This unprecedented lawsuit, filed on behalf of the people of California, targets multiple internationally based services that promise to "undress any photo" within seconds.

David Chiu, San Francisco's City Attorney, spearheaded this legal action against websites operating from Estonia, Serbia, the United Kingdom, and other locations. Chiu emphasized the devastating impact of these AI-generated images on victims, citing issues such as bullying, humiliation, threats, and severe mental health consequences.

The lawsuit alleges violations of California state laws pertaining to fraudulent business practices, nonconsensual pornography, and child sexual abuse. However, the case faces significant challenges, particularly in identifying and prosecuting the operators of these services, which are often shrouded in anonymity.

This legal action follows a distressing incident in Almendralejo, Spain, where AI-generated nude images of high school girls circulated widely, leading to 15 classmates being sentenced to a year's probation. Despite that outcome, the AI tool used in the Spanish case remains easily accessible online.

Dr. Miriam al Adib Mendiri, a physician whose daughter was among the victims in Spain, applauded San Francisco's initiative while calling for broader responsibility from tech giants like Meta Platforms and its subsidiary WhatsApp, which were used to spread the images in the Spanish case.

The lawsuit's potential to set a legal precedent has caught the attention of organizations combating AI-generated child sexual abuse material. Emily Slifer, Director of Policy at Thorn, an organization fighting child sexual exploitation, highlighted the case's significance in shaping future legal approaches to this issue.

However, experts like Stanford University researcher Riana Pfefferkorn point out the challenges in prosecuting international defendants. Pfefferkorn suggests that even if the defendants ignore the lawsuit, a default win for San Francisco could lead to actions against domain-name registrars, web hosts, and payment processors, potentially shuttering these sites.

The European Union's response to the Almendralejo incident earlier this year indicated that the app used did not fall under the bloc's new online safety rules due to its small size, highlighting the complexities in regulating such tools.

As the case unfolds, it is likely to serve as a litmus test for legal strategies against AI-powered privacy violations and non-consensual imagery. The outcome could have far-reaching implications for how jurisdictions worldwide tackle the growing threat of AI-generated deepfakes and their impact on individuals, particularly women and minors.

This legal battle underscores the urgent need for comprehensive approaches to combat the misuse of AI technology, balancing technological innovation with robust protections for individual privacy and dignity in the digital age.
