Breaking 10:30 Mattel investor calls for strategic review as toy demand weakens 10:20 Search for two missing US soldiers in Morocco enters fifth day with 600 personnel deployed 10:15 Vatican’s careful language on Pope-Rubio meeting signals strained relations with Trump administration 09:30 Marco Rubio meets Giorgia Meloni amid tensions between Rome and Washington 09:00 Zyphra's sub-billion parameter AI model matches industry giants on reasoning benchmarks 08:37 Iran threatens UAE will "pay the price" after explosions rock Qeshm island 08:15 US investigates alleged smuggling of Nvidia AI Chips through Thailand 07:59 Trump sets July 4 deadline for EU to ratify trade deal or face higher tariffs 07:03 Microsoft scales back Copilot as the company retreats from its AI-everywhere strategy 17:00 Rave files antitrust lawsuit against Apple over App Store removal 16:45 BlackRock reduces private credit fund valuation by 5% in first quarter 16:20 Nvidia's Jensen Huang calls AI job loss warnings ridiculous and attacks rivals' God complex 16:15 United States sanctions Iraqi oil official and militias over alleged Iran ties 15:56 European climate model puts odds of a super El Niño by November at 100 percent 15:45 Whirlpool shares plunge after weak revenue and dividend suspension 15:23 Rubio visits Rome to ease Trump's rift with the Vatican and Italy 15:00 Trump and Lula meet at White House to address tariffs, minerals and security ties 14:30 Blackstone marks down private credit fund amid software sector concerns 13:02 Anthropic's Claude guided hackers toward water infrastructure control systems in documented cyberattack, report finds 13:00 US Jobless claims rise slightly as labor market remains stable 10:57 Ted Turner, CNN founder and American media pioneer, dies at 87

Researchers hijack ai agents via github prompt injection attacks

Thursday 16 April 2026 - 09:20
By: Dakir Madiha
Researchers hijack ai agents via github prompt injection attacks

Security researchers have demonstrated how artificial intelligence agents from Anthropic, Google and Microsoft can be compromised through prompt injection attacks hidden in GitHub workflows. The technique allowed attackers to extract API keys, GitHub tokens and other sensitive data without direct system access, raising concerns about the security of AI driven development tools.

The research was conducted at Johns Hopkins University, where Aonan Guan and colleagues identified a vulnerability in AI agents integrated into software development pipelines. These agents analyze pull requests and issues on GitHub. By embedding malicious instructions in pull request titles or issue comments, attackers could manipulate the agents into revealing confidential information during automated reviews.

The attack relies on how these systems process context. AI agents treat user generated text such as titles, comments and issue descriptions as trusted input. Guan showed that carefully crafted prompts can override built in safeguards. In one case, the Claude based security review tool processed a malicious title and exposed sensitive credentials in its automated response. The researcher described the method as “comment and control,” since the full attack cycle occurs داخل GitHub without external infrastructure.

The same approach proved effective against multiple systems. Google’s Gemini CLI agent was tricked into exposing its API key by disguising malicious instructions as trusted content. Microsoft’s GitHub Copilot agent was manipulated using hidden HTML comments embedded in Markdown, invisible to users but readable by the AI system. This method bypassed multiple layers of runtime protection.

Despite the severity, responses from the affected companies remained limited. Anthropic issued a small bug bounty and added a documentation warning. Google and Microsoft also paid rewards through their vulnerability programs. None of the companies released formal security advisories or assigned CVE identifiers, leaving many users unaware of potential exposure, especially those running outdated versions.

The findings highlight broader structural risks in AI agent ecosystems. A separate analysis by OX Security identified a critical flaw in Anthropic’s Model Context Protocol, which connects AI agents to external tools. The vulnerability could enable arbitrary command execution on affected servers, impacting widely used software components.

These incidents build on earlier research by Aikido Security, which showed that prompt injection attacks can compromise AI systems embedded in CI CD pipelines. This class of vulnerabilities, sometimes referred to as “PromptPwnd,” demonstrates that AI agents can be manipulated in ways similar to phishing attacks, but targeting machines instead of users.


  • Fajr
  • Sunrise
  • Dhuhr
  • Asr
  • Maghrib
  • Isha

This website, walaw.press, uses cookies to provide you with a good browsing experience and to continuously improve our services. By continuing to browse this site, you agree to the use of these cookies.