
CrowdStrike unveils Falcon AI detection service to defend against prompt-based cyberattacks

10:50
By: Dakir Madiha

CrowdStrike has introduced a new artificial intelligence security solution, Falcon AI Detection and Response (AIDR), aimed at safeguarding enterprises from a fast-emerging threat: the manipulation of AI models through malicious prompts and hidden instructions.

The California-based cybersecurity firm described prompts as the “new malware,” warning that adversaries are increasingly exploiting generative AI tools to alter outputs, exfiltrate sensitive data, and bypass security filters. The launch marks a strategic step in strengthening defenses against AI-driven social engineering and code injections.

Securing the AI interaction layer

Falcon AIDR functions as a protective layer between users and AI systems, continuously scanning prompts, responses, and automated agent activities. The service automatically blocks prompt injection attempts, jailbreak commands, and unsafe content by leveraging security research covering more than 180 known prompt manipulation methods.
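The idea of screening prompts against a library of known manipulation patterns can be illustrated with a toy sketch. This is a generic, assumption-laden illustration of the concept, not CrowdStrike's detection engine; the patterns and function names here are hypothetical, and real products use far richer, continuously updated detection logic than a few regular expressions.

```python
import re

# Hypothetical examples of known prompt-manipulation patterns.
# A production system would maintain a much larger, curated set.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I),
    re.compile(r"you are now an unrestricted model", re.I),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known manipulation pattern."""
    return any(pattern.search(prompt) for pattern in INJECTION_PATTERNS)
```

A gateway built on this idea would sit between the user and the model, rejecting or flagging any prompt for which `screen_prompt` returns True before it is ever forwarded.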

It also prevents the transfer of confidential or regulated data, such as login credentials or personal identifiers, before this information enters any AI model or leaves internal networks. Beyond direct protection, the platform offers detailed visibility into how employees and AI agents interact, maintaining runtime logs for compliance and audits.
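The redaction step described above can be sketched in a few lines: scrub obvious secrets and identifiers from text before it is sent to an external model. The patterns below are illustrative assumptions for this sketch, not Falcon AIDR's actual rules.

```python
import re

# Toy redaction rules: (pattern, replacement). Illustrative only.
REDACTIONS = [
    # credential assignments such as "password: hunter2"
    (re.compile(r"\b(?:password|passwd|pwd)\s*[:=]\s*\S+", re.I), "password=[REDACTED]"),
    # US Social Security number shape, e.g. 123-45-6789
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN-REDACTED]"),
    # email addresses
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL-REDACTED]"),
]

def redact(text: str) -> str:
    """Replace sensitive substrings before text leaves the network."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

Running outbound prompts through a filter like this before they reach a third-party model is the basic data-loss-prevention pattern the article describes.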

Built upon strategic acquisition

The service builds on CrowdStrike’s September acquisition of Pangea, an AI security startup purchased for about $260 million. The deal expanded the company’s Falcon platform to 32 modules, positioning it to deliver what CrowdStrike calls the first unified security framework for enterprise AI infrastructure.

Company executives have emphasized that every AI prompt now represents a potential entry point for threat actors, reinforcing the need for continuous monitoring and protection across all AI workflows.

Addressing the rise of shadow AI

The release coincides with growing corporate anxiety over unregulated AI use. Studies indicate that nearly half of employees experiment with AI tools without informing supervisors, while most use free-tier models that may store or expose data. This “shadow AI” practice has become a major concern for information security leaders.

Prompt-based vulnerabilities currently top the 2025 risk list compiled by the OWASP Gen AI Security Project, which found these flaws in more than two-thirds of AI systems deployed in enterprise environments.

To advance awareness and best practices, CrowdStrike plans to hold a series of virtual AI security summits in January 2026 dedicated to safe AI adoption and threat prevention strategies.


