San Francisco Wages Legal Battle Against AI Deepfake Nude Websites
In a groundbreaking move, San Francisco has launched a legal offensive against websites that use artificial intelligence to create non-consensual nude images, commonly known as deepfakes. This unprecedented lawsuit, filed on behalf of the people of California, targets multiple internationally-based services that promise to "undress any photo" within seconds.
David Chiu, San Francisco's City Attorney, spearheaded this legal action against websites operating from Estonia, Serbia, the United Kingdom, and other locations. Chiu emphasized the devastating impact of these AI-generated images on victims, citing issues such as bullying, humiliation, threats, and severe mental health consequences.
The lawsuit alleges violations of California state laws pertaining to fraudulent business practices, nonconsensual pornography, and child sexual abuse. However, the case faces significant challenges, particularly in identifying and prosecuting the operators of these services, which are often shrouded in anonymity.
This legal action follows a distressing incident in Almendralejo, Spain, where AI-generated nude images of high school girls circulated widely, leading to 15 classmates being sentenced to a year's probation. Despite those convictions, the AI tool used in the Spanish case remains easily accessible online.
Dr. Miriam al Adib Mendiri, a physician whose daughter was among the victims in Spain, applauded San Francisco's initiative while calling for broader responsibility from tech giants like Meta Platforms and its subsidiary WhatsApp, which were used to spread the images in the Spanish case.
The lawsuit's potential to set a legal precedent has caught the attention of organizations combating AI-generated child sexual abuse material. Emily Slifer, Director of Policy at Thorn, an organization fighting child sexual exploitation, highlighted the case's significance in shaping future legal approaches to this issue.
However, experts like Stanford University researcher Riana Pfefferkorn point out the challenges in prosecuting international defendants. Pfefferkorn suggests that even if the defendants ignore the lawsuit, a default win for San Francisco could lead to actions against domain-name registrars, web hosts, and payment processors, potentially shuttering these sites.
The European Union's response to the Almendralejo incident earlier this year indicated that the app used did not fall under the bloc's new online safety rules due to its small size, highlighting the complexities in regulating such tools.
As the case unfolds, it is likely to serve as a litmus test for legal strategies against AI-powered privacy violations and non-consensual imagery. The outcome could have far-reaching implications for how jurisdictions worldwide tackle the growing threat of AI-generated deepfakes and their impact on individuals, particularly women and minors.
This legal battle underscores the urgent need for comprehensive approaches to combat the misuse of AI technology, balancing technological innovation with robust protections for individual privacy and dignity in the digital age.