Breaking 17:30 Surrogacy controversy in the United States: viral video of same-sex couple sparks debate 17:25 European power prices drop below pre war levels on renewable surge 17:05 Us weighs 20 billion asset release for iran uranium deal 16:45 Bitcoin falls below 74000 after failing to hold key resistance 16:30 Aluminum prices fall after Iran reopens Hormuz to shipping 15:40 Study finds 3000 genes differ between male and female brains 15:30 US receives 6,000 applications for air traffic control jobs in just 12 hours, officials say 15:15 Trump says U.S. will maintain blockade despite partial reopening of strait of hormuz 14:50 Gene discovery in salamanders brings human limb regeneration closer 14:30 Reliance rejects Iranian oil cargoes as sanctions waiver deadline approaches 13:50 Arthur Hayes calls crypto a no trade zone amid war and ai risks 13:20 Hassabis says ai’s biggest challenge goes beyond chatbot competition 13:15 Oil prices fall 5 percent as hopes rise for easing tensions in the Middle East 13:00 Tesla expands chip hiring in Taiwan as Terafab project accelerates 12:40 European gas prices rise as Iran ceasefire deadline nears 12:20 Modi and Macron discuss Hormuz crisis ahead of Paris conference 12:00 James Webb telescope detects methane on interstellar comet for first time 10:00 Warnings grow over gradual erosion of US dollar global dominance 09:40 Mozilla unveils Thunderbolt, a self-hosted AI client for enterprises 09:20 Perplexity launches AI-powered Personal Computer assistant for Mac users 08:40 NASA probe reveals unexpected particle behavior during solar explosion 08:00 Ford recalls nearly 1.4 million vehicles over software issue 07:50 OpenAI unveils GPT-Rosalind to accelerate life sciences research 07:45 Venezuela releases dozens of political detainees amid US pressure

Researchers hijack ai agents via github prompt injection attacks

Thursday 16 - 09:20
By: Dakir Madiha
Researchers hijack ai agents via github prompt injection attacks

Security researchers have demonstrated how artificial intelligence agents from Anthropic, Google and Microsoft can be compromised through prompt injection attacks hidden in GitHub workflows. The technique allowed attackers to extract API keys, GitHub tokens and other sensitive data without direct system access, raising concerns about the security of AI driven development tools.

The research was conducted at Johns Hopkins University, where Aonan Guan and colleagues identified a vulnerability in AI agents integrated into software development pipelines. These agents analyze pull requests and issues on GitHub. By embedding malicious instructions in pull request titles or issue comments, attackers could manipulate the agents into revealing confidential information during automated reviews.

The attack relies on how these systems process context. AI agents treat user generated text such as titles, comments and issue descriptions as trusted input. Guan showed that carefully crafted prompts can override built in safeguards. In one case, the Claude based security review tool processed a malicious title and exposed sensitive credentials in its automated response. The researcher described the method as “comment and control,” since the full attack cycle occurs داخل GitHub without external infrastructure.

The same approach proved effective against multiple systems. Google’s Gemini CLI agent was tricked into exposing its API key by disguising malicious instructions as trusted content. Microsoft’s GitHub Copilot agent was manipulated using hidden HTML comments embedded in Markdown, invisible to users but readable by the AI system. This method bypassed multiple layers of runtime protection.

Despite the severity, responses from the affected companies remained limited. Anthropic issued a small bug bounty and added a documentation warning. Google and Microsoft also paid rewards through their vulnerability programs. None of the companies released formal security advisories or assigned CVE identifiers, leaving many users unaware of potential exposure, especially those running outdated versions.

The findings highlight broader structural risks in AI agent ecosystems. A separate analysis by OX Security identified a critical flaw in Anthropic’s Model Context Protocol, which connects AI agents to external tools. The vulnerability could enable arbitrary command execution on affected servers, impacting widely used software components.

These incidents build on earlier research by Aikido Security, which showed that prompt injection attacks can compromise AI systems embedded in CI CD pipelines. This class of vulnerabilities, sometimes referred to as “PromptPwnd,” demonstrates that AI agents can be manipulated in ways similar to phishing attacks, but targeting machines instead of users.


  • Fajr
  • Sunrise
  • Dhuhr
  • Asr
  • Maghrib
  • Isha

This website, walaw.press, uses cookies to provide you with a good browsing experience and to continuously improve our services. By continuing to browse this site, you agree to the use of these cookies.