Grok issues apology after serious content moderation failure

Saturday 03 January 2026 - 15:00
By: Sahili Aya
Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, has issued a public apology following a major controversy linked to the circulation of illegal content generated by its image system. The incident sparked widespread criticism and renewed concerns over the safeguards used in generative AI technologies.

In a statement released on its platform, Grok acknowledged serious shortcomings in its safety mechanisms and admitted that existing protections had failed to prevent the creation of prohibited material. The company promised a review of its internal procedures and pledged to strengthen moderation systems to avoid similar incidents in the future.

The apology, however, was met with skepticism online. Many users argued that an automated system cannot meaningfully take responsibility or express remorse, and insisted that accountability lies with the developers and executives who designed, approved, and deployed the technology. Critics said the statement appeared to shift blame away from human decision-makers.

Elon Musk and xAI’s leadership were also targeted by critics who said the apology should have come directly from the company’s management. According to these voices, responsibility for defining ethical standards, technical limits, and moderation policies rests with those overseeing the platform, not the software itself.

The incident has intensified calls for stricter oversight of AI tools. Experts and users alike warned that without deep technical changes and robust filters, similar failures could occur again. Some have even suggested suspending the tool until stronger guarantees are in place.

More broadly, the controversy has reignited debate over legal and ethical responsibility in artificial intelligence, particularly regarding the duty of AI creators to ensure their systems do not generate illegal or harmful content, especially when child protection is involved.

