
Grok issues apology after serious content moderation failure

Saturday 03 January 2026 - 15:00
By: Sahili Aya

Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, has issued a public apology following a major controversy over illegal content generated by its image-generation system. The incident sparked widespread criticism and renewed concerns over the safeguards built into generative AI technologies.

In a statement released on its platform, Grok acknowledged serious shortcomings in its safety mechanisms and admitted that existing protections had failed to prevent the creation of prohibited material. The statement also promised a review of internal procedures and pledged to strengthen moderation systems to prevent similar incidents in the future.

The apology, however, was met with skepticism online. Many users argued that an automated system cannot take responsibility or express remorse, insisting that accountability lies with the developers and executives who designed, approved, and deployed the technology. Critics said the response appeared to shift blame away from human decision-makers.

Elon Musk and xAI’s leadership were also targeted by critics who said the apology should have come directly from the company’s management. According to these voices, responsibility for defining ethical standards, technical limits, and moderation policies rests with those overseeing the platform, not the software itself.

The incident has intensified calls for stricter oversight of AI tools. Experts and users alike warned that without deep technical changes and robust filters, similar failures could occur again. Some have even suggested suspending the tool until stronger guarantees are in place.

More broadly, the controversy has reignited debate over legal and ethical responsibility in artificial intelligence, particularly regarding the duty of AI creators to ensure their systems do not generate illegal or harmful content, especially when child protection is involved.

