
Grok issues apology after serious content moderation failure

Saturday 03 January 2026 - 15:00
By: Sahili Aya

Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, has issued a public apology following a major controversy linked to the circulation of illegal content generated by its image system. The incident sparked widespread criticism and renewed concerns over the safeguards used in generative AI technologies.

In a statement released on its platform, Grok acknowledged serious shortcomings in its safety mechanisms and admitted that existing protections had failed to prevent the creation of prohibited material. The statement promised a review of internal procedures and pledged to strengthen moderation systems to avoid similar incidents in the future.

The apology, however, was met with skepticism online. Many users argued that an automated system cannot take responsibility or express remorse, insisting that accountability lies with the developers and executives who designed, approved, and deployed the technology. Critics said the response appeared to shift blame away from human decision-makers.

Elon Musk and xAI’s leadership were also targeted by critics who said the apology should have come directly from the company’s management. According to these voices, responsibility for defining ethical standards, technical limits, and moderation policies rests with those overseeing the platform, not the software itself.

The incident has intensified calls for stricter oversight of AI tools. Experts and users alike warned that without deep technical changes and robust filters, similar failures could occur again. Some have even suggested suspending the tool until stronger guarantees are in place.

More broadly, the controversy has reignited debate over legal and ethical responsibility in artificial intelligence, particularly regarding the duty of AI creators to ensure their systems do not generate illegal or harmful content, especially when child protection is involved.

