Anthropic restricted AI model raises new cybersecurity governance concerns

By: Dakir Madiha

Anthropic’s decision to withhold public access to its Claude Mythos Preview model is intensifying debate across the cybersecurity sector and among financial regulators. The system, described by the company as capable of autonomously identifying thousands of high-severity zero-day vulnerabilities across major operating systems and web browsers, remains accessible only to a limited group of organizations under a controlled initiative known as Project Glasswing.

The model was introduced on April 7 alongside the launch of the program, which Anthropic framed as an urgent effort to strengthen defenses before similar AI capabilities become widespread. The company committed up to $100 million in usage credits and $4 million in direct funding to support open-source security projects. Early access was granted to a group of major technology and financial institutions, including Amazon Web Services, Apple, Cisco, Google, Microsoft, and Nvidia, along with more than 40 additional organizations tasked with testing and securing critical software infrastructure. Anthropic classified the system under its highest internal safety threshold, ASL-3, indicating an unprecedented level of capability.

The controlled rollout encountered an early setback after the company confirmed it was investigating unauthorized access through a third-party provider. An individual reportedly inferred the model’s online location based on knowledge of internal file systems and enabled limited access to a small group of users. Anthropic stated that no cybersecurity-related queries were submitted during that period, but the incident has heightened scrutiny of access controls and supply-chain security in AI deployment.

Regulators in Europe have also raised alarms. The European Securities and Markets Authority warned in its March 2026 risk-monitoring report that cyber threats are becoming a major amplifier of financial market stress, with artificial intelligence accelerating both the scale and speed of potential disruptions. Industry experts point to a dual-use dilemma: the same system that can expose hidden vulnerabilities could also lower the barrier to executing sophisticated attacks. Nearly half of the cybersecurity professionals surveyed in recent industry research now identify agentic AI as the leading attack vector for 2026.

The emergence of Claude Mythos highlights a broader structural challenge for the AI industry. While restricted access may reduce immediate exposure, distributing such powerful tools across dozens of partners introduces new layers of risk. Some analysts see strong commercial implications, with expectations of increased demand for cybersecurity services as organizations race to adapt. The central question remains unresolved: whether controlled access to advanced AI systems is sufficient to contain their potential misuse or merely shifts the risk into more complex channels.

