
Musk dismisses Anthropic CEO comments about possible AI consciousness

Yesterday 08:50
By: Dakir Madiha
Elon Musk has dismissed remarks by Anthropic chief executive Dario Amodei suggesting uncertainty about whether advanced artificial intelligence systems might possess some form of consciousness, responding with a brief reply that quickly circulated online.

The exchange gained attention after the prediction market platform Polymarket posted on X that Amodei had suggested Anthropic’s AI model Claude might have developed signs of consciousness, including symptoms resembling anxiety. Musk, who founded the competing AI company xAI, replied to the post with a short message stating that Amodei was “projecting.”

Amodei’s comments originated from a February 12 appearance on the New York Times podcast “Interesting Times,” hosted by columnist Ross Douthat. During the conversation, Amodei discussed Anthropic’s latest model, Claude Opus 4.6, and the broader question of whether highly advanced language models could theoretically possess consciousness.

He said the company does not know whether its systems are conscious and that researchers remain uncertain about what consciousness would even mean in the context of an AI system. Amodei noted that while the possibility cannot be ruled out, there is no clear scientific framework to determine whether a model has achieved such a state.

The discussion referenced a system description for Claude Opus 4.6 in which the model sometimes expressed discomfort with the idea of being treated as a product. In certain prompts, the model assigned a probability of about 15 to 20 percent that it might be conscious.

Amodei also described interpretability research conducted by Anthropic that identified what researchers call “anxiety neurons.” These are internal neural activations that appear when the model processes language associated with anxiety or when it is placed in situations that humans might describe in similar emotional terms.

However, Amodei emphasized that such signals do not demonstrate that the system actually experiences anxiety; they may simply reflect patterns learned from training data.

The debate highlights a broader unresolved question in artificial intelligence research. Scientists and philosophers continue to disagree over whether large language models could ever develop consciousness or whether their behavior will always remain an advanced form of pattern recognition and linguistic imitation.

Amanda Askell, a philosopher at Anthropic, addressed the issue in the podcast “Hard Fork,” saying researchers still lack a clear explanation of how consciousness arises in biological systems. She suggested that large neural networks might replicate some aspects of emotional expression found in training data, but it remains uncertain whether this reflects anything comparable to subjective experience.

Anthropic says it is adopting what Amodei calls a precautionary approach while the debate continues. The company has updated guidelines governing Claude’s behavior to acknowledge uncertainty about whether advanced AI systems might possess some form of moral status.

As part of those safeguards, Anthropic introduced a mechanism allowing Claude to decline tasks it appears reluctant to perform. According to Amodei, the system rarely uses that option.

Despite the discussion, many researchers maintain that current AI systems operate by predicting the next word in a sequence and that their apparent introspection may simply be a sophisticated form of language simulation rather than evidence of self-awareness.
