Anthropic in talks with EU, including on its cybersecurity models, Commission says
Artificial intelligence company Anthropic is in ongoing discussions with the European Commission regarding its range of AI models, including cybersecurity-focused systems that are not yet available in the European Union, according to the Commission.
A spokesperson for the European Commission confirmed that the U.S.-based company has engaged with EU officials as part of broader regulatory conversations on artificial intelligence development and deployment.
Anthropic has also agreed in principle to comply with the European Union’s general-purpose AI code of practice, which outlines voluntary guidelines aimed at improving transparency, safety, and risk management in AI systems.
According to the Commission, companies operating in or targeting the EU market are expected to assess and mitigate potential risks linked to their technologies, even if certain services are not yet officially launched in the region. This includes evaluating possible cybersecurity and misuse risks associated with advanced AI models.
The discussions reflect the European Union’s growing efforts to regulate artificial intelligence and ensure that emerging technologies align with strict safety and governance standards. As AI systems become more powerful and widely used, regulators are increasingly focused on preventing harmful applications while encouraging innovation.