Breaking 12:15 Bny reports higher profit driven by strong fees and interest income growth 11:20 Solana teaser on XRP fuels speculation over potential blockchain integration 11:00 Let cofounder Amir Hamza critically wounded in Lahore shooting 10:45 Chanel expands in California with the acquisition of a new vineyard estate 10:20 Gartner warns most ai driven mainframe migrations will fail 09:40 Bitcoin proposal seeks to freeze satoshi era coins over quantum risk 09:20 Researchers hijack ai agents via github prompt injection attacks 09:00 Mars bathtub ring discovery points to long lasting ancient ocean 08:40 Largest gravity test confirms Newton and Einstein across cosmic scales 08:20 Ai models can pass hidden traits through unrelated data study finds 07:50 Nikkei hits record high as US Iran talks lift markets 17:20 Apple expands ads in maps as unified business platform rolls out 17:00 Robinhood and Webull jump after US SEC approves removal of day-trading limits for small investors 16:30 Big advertising agencies settle US FTC probe over alleged boycott of political content 16:20 VW warns China car market may shrink for first time since 2018 16:00 Steve Aoki exits crypto holdings as Bored Ape NFTs lose 88% value 15:40 Anthropic shifts to usage pricing for enterprise AI customers 15:20 European farmers cut crops as Iran war disrupts fertilizer supply 15:00 Tesla completes AI5 chip design with mass production targeted for 2027 14:40 Renewables offset Hormuz crisis as fossil power output falls 14:20 Unitree launches $8,200 humanoid robot globally via AliExpress 14:00 Donald Trump threatens to reconsider trade deal with the United Kingdom

Researchers hijack ai agents via github prompt injection attacks

09:20
By: Dakir Madiha
Researchers hijack ai agents via github prompt injection attacks

Security researchers have demonstrated how artificial intelligence agents from Anthropic, Google and Microsoft can be compromised through prompt injection attacks hidden in GitHub workflows. The technique allowed attackers to extract API keys, GitHub tokens and other sensitive data without direct system access, raising concerns about the security of AI driven development tools.

The research was conducted at Johns Hopkins University, where Aonan Guan and colleagues identified a vulnerability in AI agents integrated into software development pipelines. These agents analyze pull requests and issues on GitHub. By embedding malicious instructions in pull request titles or issue comments, attackers could manipulate the agents into revealing confidential information during automated reviews.

The attack relies on how these systems process context. AI agents treat user generated text such as titles, comments and issue descriptions as trusted input. Guan showed that carefully crafted prompts can override built in safeguards. In one case, the Claude based security review tool processed a malicious title and exposed sensitive credentials in its automated response. The researcher described the method as “comment and control,” since the full attack cycle occurs داخل GitHub without external infrastructure.

The same approach proved effective against multiple systems. Google’s Gemini CLI agent was tricked into exposing its API key by disguising malicious instructions as trusted content. Microsoft’s GitHub Copilot agent was manipulated using hidden HTML comments embedded in Markdown, invisible to users but readable by the AI system. This method bypassed multiple layers of runtime protection.

Despite the severity, responses from the affected companies remained limited. Anthropic issued a small bug bounty and added a documentation warning. Google and Microsoft also paid rewards through their vulnerability programs. None of the companies released formal security advisories or assigned CVE identifiers, leaving many users unaware of potential exposure, especially those running outdated versions.

The findings highlight broader structural risks in AI agent ecosystems. A separate analysis by OX Security identified a critical flaw in Anthropic’s Model Context Protocol, which connects AI agents to external tools. The vulnerability could enable arbitrary command execution on affected servers, impacting widely used software components.

These incidents build on earlier research by Aikido Security, which showed that prompt injection attacks can compromise AI systems embedded in CI CD pipelines. This class of vulnerabilities, sometimes referred to as “PromptPwnd,” demonstrates that AI agents can be manipulated in ways similar to phishing attacks, but targeting machines instead of users.


  • Fajr
  • Sunrise
  • Dhuhr
  • Asr
  • Maghrib
  • Isha

This website, walaw.press, uses cookies to provide you with a good browsing experience and to continuously improve our services. By continuing to browse this site, you agree to the use of these cookies.