Musk dismisses Anthropic CEO comments about possible AI consciousness

By: Dakir Madiha

Elon Musk rejected remarks by Anthropic chief executive Dario Amodei suggesting it is uncertain whether advanced artificial intelligence systems possess some form of consciousness, dismissing them with a brief reply that quickly circulated online.

The exchange gained attention after the prediction market platform Polymarket posted on X that Amodei had suggested Anthropic’s AI model Claude might have developed signs of consciousness, including symptoms resembling anxiety. Musk, who founded the competing AI company xAI, replied to the post with a short message stating that Amodei was “projecting.”

Amodei made the comments during a February 12 appearance on the New York Times podcast “Interesting Times,” hosted by columnist Ross Douthat. During the conversation, he discussed Anthropic’s latest model, Claude Opus 4.6, and the broader question of whether highly advanced language models could theoretically possess consciousness.

He said the company does not know whether its systems are conscious and that researchers remain uncertain about what consciousness would even mean in the context of an AI system. Amodei noted that while the possibility cannot be ruled out, there is no clear scientific framework to determine whether a model has achieved such a state.

The discussion referenced the system card for Claude Opus 4.6, in which the model sometimes expressed discomfort with the idea of being treated as a product. When asked directly in certain prompts, the model put the probability that it might be conscious at roughly 15 to 20 percent.

Amodei also described interpretability research conducted by Anthropic that identified what researchers call “anxiety neurons.” These are internal neural activations that appear when the model processes language associated with anxiety or when it is placed in situations that humans might describe in similar emotional terms.

However, Amodei emphasized that such signals do not demonstrate that the system actually experiences anxiety. Instead, they may simply reflect patterns learned from training data.

The debate highlights a broader unresolved question in artificial intelligence research. Scientists and philosophers continue to disagree over whether large language models could ever develop consciousness or whether their behavior will always remain an advanced form of pattern recognition and linguistic imitation.

Amanda Askell, a philosopher at Anthropic, addressed the issue in the podcast “Hard Fork,” saying researchers still lack a clear explanation of how consciousness arises in biological systems. She suggested that large neural networks might replicate some aspects of emotional expression found in training data, but it remains uncertain whether this reflects anything comparable to subjective experience.

Anthropic says it is adopting what Amodei calls a precautionary approach while the debate continues. The company has updated guidelines governing Claude’s behavior to acknowledge uncertainty about whether advanced AI systems might possess some form of moral status.

As part of those safeguards, Anthropic introduced a mechanism allowing Claude to decline tasks it appears reluctant to perform. According to Amodei, the system rarely uses that option.

Despite the discussion, many researchers maintain that current AI systems operate by predicting the next word in a sequence and that their apparent introspection may simply be a sophisticated form of language simulation rather than evidence of self-awareness.

