CrowdStrike unveils Falcon AI detection service to defend against prompt-based cyberattacks
CrowdStrike has introduced a new artificial intelligence security solution, Falcon AI Detection and Response (AIDR), aimed at safeguarding enterprises from a fast-emerging threat: the manipulation of AI models through malicious prompts and hidden instructions.
The California-based cybersecurity firm described prompts as the “new malware,” warning that adversaries are increasingly exploiting generative AI tools to alter outputs, exfiltrate sensitive data, and bypass security filters. The launch marks a strategic step in strengthening defenses against AI-driven social engineering and code injections.
Securing the AI interaction layer
Falcon AIDR functions as a protective layer between users and AI systems, continuously scanning prompts, responses, and automated agent activities. The service automatically blocks prompt injection attempts, jailbreak commands, and unsafe content by leveraging security research covering more than 180 known prompt manipulation methods.
It also stops confidential or regulated data, such as login credentials or personal identifiers, from entering any AI model or leaving internal networks. Beyond direct protection, the platform offers detailed visibility into how employees and AI agents interact with AI tools, maintaining runtime logs for compliance and audits.
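To make the general technique concrete: an interaction-layer guard of this kind typically checks each prompt against known injection patterns and redacts sensitive strings before anything reaches the model. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not CrowdStrike's Falcon AIDR API, and the pattern lists are invented examples, far simpler than the 180+ manipulation methods the product reportedly covers.

```python
import re

# Hypothetical sketch of a prompt-inspection layer: block known injection
# phrasings, redact sensitive data, and pass everything else through.
# Pattern lists here are illustrative only, not a real detection corpus.

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
    r"you are now in developer mode",
]

SENSITIVE_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def inspect_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, sanitized_prompt).

    Blocks prompts matching an injection pattern; otherwise redacts
    sensitive substrings before the prompt is forwarded to a model.
    """
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, ""  # block suspected injection/jailbreak attempts
    sanitized = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        sanitized = re.sub(pattern, f"[REDACTED {label}]", sanitized)
    return True, sanitized
```

A production system would sit inline between users (or agents) and the model endpoint, log every decision for audit, and rely on far richer detection than static regexes, but the block/redact/forward flow is the same.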
Built upon strategic acquisition
The service builds on CrowdStrike’s September acquisition of Pangea, an AI security startup purchased for about $260 million. The deal expanded the company’s Falcon platform to 32 modules, positioning it to deliver what CrowdStrike calls the first unified security framework for enterprise AI infrastructure.
Company executives have emphasized that every AI prompt now represents a potential entry point for threat actors, reinforcing the need for continuous monitoring and protection across all AI workflows.
Addressing the rise of shadow AI
The release coincides with growing corporate anxiety over unregulated AI use. Studies indicate that nearly half of employees experiment with AI tools without informing supervisors, while most use free-tier models that may store or expose data. This “shadow AI” practice has become a major concern for information security leaders.
Prompt-based vulnerabilities currently top the 2025 risk list compiled by the OWASP Gen AI Security Project, which found these flaws in more than two-thirds of AI systems deployed in enterprise environments.
To advance awareness and best practices, CrowdStrike plans to hold a series of virtual AI security summits in January 2026 dedicated to safe AI adoption and threat prevention strategies.