
Grok issues apology after serious content moderation failure

Saturday 03 January 2026 - 15:00
By: Sahili Aya
Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, has issued a public apology following a major controversy linked to the circulation of illegal content generated by its image system. The incident sparked widespread criticism and renewed concerns over the safeguards used in generative AI technologies.

In a statement released on its platform, Grok acknowledged serious shortcomings in its safety mechanisms and admitted that existing protections had failed to prevent the creation of prohibited material. xAI promised a review of its internal procedures and pledged to strengthen moderation systems to avoid similar incidents in the future.

The apology, however, was met with skepticism online. Many users argued that an automated system cannot take responsibility or express remorse, insisting that accountability lies with the developers and executives who designed, approved, and deployed the technology. Critics said the response appeared to shift blame away from human decision-makers.

Elon Musk and xAI’s leadership were also targeted by critics who said the apology should have come directly from the company’s management. According to these voices, responsibility for defining ethical standards, technical limits, and moderation policies rests with those overseeing the platform, not the software itself.

The incident has intensified calls for stricter oversight of AI tools. Experts and users alike warned that without deep technical changes and robust filters, similar failures could occur again. Some have even suggested suspending the tool until stronger guarantees are in place.

More broadly, the controversy has reignited debate over legal and ethical responsibility in artificial intelligence, particularly regarding the duty of AI creators to ensure their systems do not generate illegal or harmful content, especially when child protection is involved.

