Anthropic identifies three bugs behind weeks of Claude Code issues
Artificial intelligence company Anthropic has acknowledged that a series of product-level changes caused a sustained decline in the performance of its coding assistant, Claude Code. The admission follows weeks of complaints from developers who reported inconsistent outputs and weaker reasoning capabilities. A detailed internal review traced the issue to three separate modifications deployed between early March and mid-April.
The first change reduced the model’s default reasoning effort from a high setting to a medium level on March 4. The adjustment aimed to lower latency and improve response speed. Internal tests suggested only a minor drop in capability, but users quickly detected a decline in output quality. The company reversed the decision on April 7, later describing it as a flawed trade-off between speed and intelligence.
A second issue emerged on March 26 with a caching bug that disrupted how Claude Code retained session memory. An optimization intended to clear outdated reasoning blocks once per session restart instead triggered at every interaction. This behavior effectively erased the model's working memory during conversations, degrading performance across tasks. The flaw also drove up usage costs, as no request could benefit from cache reuse. Anthropic said the bug passed multiple layers of review, including automated checks and internal testing, before being fixed on April 10.
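The failure mode described above can be sketched in a few lines. This is a purely illustrative reconstruction, not Anthropic's actual code: the `Session` class and its fields are hypothetical, and the sketch only shows how a cache purge that should fire once per restart wipes reuse entirely when it fires on every request.

```python
# Hypothetical sketch of the cache-invalidation bug described in the
# post-mortem. All names here are illustrative, not Anthropic's code.

class Session:
    def __init__(self):
        self.cache = {}        # cached reasoning blocks from earlier turns
        self.restarted = True  # set when the session is (re)started

    def handle_request(self, request, buggy=True):
        # Intended behavior: purge stale blocks only once, on restart.
        # Buggy behavior: the restart guard is bypassed, so the cache
        # is wiped on every interaction and nothing is ever reused.
        if buggy or self.restarted:
            self.cache.clear()
            self.restarted = False
        hit = request in self.cache
        self.cache[request] = f"reasoning for {request!r}"
        return hit  # True only if the cached block survived to be reused
```

With `buggy=True`, a repeated request is always a cache miss, matching the reported symptoms: lost working memory within a conversation and extra usage, since every request is recomputed from scratch.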
The third regression occurred on April 16, when a system prompt was introduced to limit responses between tool calls to 25 words. The constraint aimed to reduce verbosity in the recently released Opus 4.7 model. Subsequent evaluations showed a measurable decline of around 3 percent in coding benchmarks. The instruction was removed on April 20 after its negative impact became clear.
The problems unfolded amid growing tension between the company and its developer community. A detailed analysis submitted by an AMD executive examined thousands of sessions and tool interactions, identifying clear signs of reduced reasoning depth and altered code-reading behavior. Company representatives initially disputed these findings and denied any intentional degradation. The post-mortem confirmed that the underlying model weights had not changed, reinforcing that the issues stemmed from product-level adjustments rather than core model retraining.
In response, Anthropic has reset usage limits for subscribers and introduced stricter safeguards. These include expanded internal testing before public releases, improved evaluation frameworks, tighter review of system prompts, and staged rollouts for changes that could affect model performance. The company has also launched a dedicated communication channel to provide clearer updates on product decisions and ongoing investigations.