
Grok issues apology after serious content moderation failure

Saturday 03 January 2026 - 15:00
By: Sahili Aya

Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, has issued a public apology following a major controversy linked to the circulation of illegal content generated by its image system. The incident sparked widespread criticism and renewed concerns over the safeguards used in generative AI technologies.

In a statement released on its platform, Grok acknowledged serious shortcomings in its safety mechanisms and admitted that existing protections had failed to prevent the creation of prohibited material. xAI promised a review of its internal procedures and pledged to strengthen its moderation systems to prevent similar incidents in the future.

The apology, however, was met with skepticism online. Many users argued that an automated system cannot take responsibility or express remorse, insisting that accountability lies with the developers and executives who designed, approved, and deployed the technology. Critics said the response appeared to shift blame away from human decision-makers.

Elon Musk and xAI’s leadership were also targeted by critics who said the apology should have come directly from the company’s management. According to these voices, responsibility for defining ethical standards, technical limits, and moderation policies rests with those overseeing the platform, not the software itself.

The incident has intensified calls for stricter oversight of AI tools. Experts and users alike warned that without deep technical changes and robust filters, similar failures could occur again. Some have even suggested suspending the tool until stronger guarantees are in place.

More broadly, the controversy has reignited debate over legal and ethical responsibility in artificial intelligence, particularly regarding the duty of AI creators to ensure their systems do not generate illegal or harmful content, especially when child protection is involved.



