Breaking 16:20 OpenAI launches a $10 billion joint venture to embed AI in private equity firms 16:00 Wildfires spread across the Northern Hemisphere weeks ahead of schedule 15:38 Iranian supertanker carrying $220 million in crude breaks through US naval blockade 15:20 Bitcoin stalls near $78,000 as Binance logs five days of stablecoin outflows 14:55 Germany maps US potash dependency as potential lever in trade standoff 14:37 Oil shock and Wall Street euphoria put global economy on recession watch 13:42 US backs Lai after surprise Eswatini visit draws sharp rebuke from Beijing 13:20 Dubai airport traffic collapses 66 percent in March as regional war disrupts Gulf aviation 13:03 Rockstar Games developers allege unpaid overtime amid GTA 6 crunch at India studio 11:45 Fifa faces world cup broadcast crisis as India and China deals remain uncertain 11:21 Jet fuel crisis grounds airlines worldwide as Spirit Airlines shuts down operations 11:00 Pakistan facilitates return of Iranian cargo ship crew seized by the United States 10:30 New Mexico seeks changes to Meta platforms in youth harm trial 10:04 United Airlines Boeing 767 strikes lamppost and truck while landing at Newark airport 09:30 AI chipmaker Cerebras targets strong valuation in US IPO push 09:04 Chanel Cruise 2026/27 backstage beauty looks reveal key makeup trends 08:15 German carmakers hit by new US tariff increase 08:00 The Kremlin tightens security around Putin amid fears of internal coup 07:42 Apple tests a streamlined Modular dial for watchOS 27

Researchers hijack ai agents via github prompt injection attacks

Thursday 16 April 2026 - 09:20
By: Dakir Madiha
Researchers hijack ai agents via github prompt injection attacks

Security researchers have demonstrated how artificial intelligence agents from Anthropic, Google and Microsoft can be compromised through prompt injection attacks hidden in GitHub workflows. The technique allowed attackers to extract API keys, GitHub tokens and other sensitive data without direct system access, raising concerns about the security of AI driven development tools.

The research was conducted at Johns Hopkins University, where Aonan Guan and colleagues identified a vulnerability in AI agents integrated into software development pipelines. These agents analyze pull requests and issues on GitHub. By embedding malicious instructions in pull request titles or issue comments, attackers could manipulate the agents into revealing confidential information during automated reviews.

The attack relies on how these systems process context. AI agents treat user generated text such as titles, comments and issue descriptions as trusted input. Guan showed that carefully crafted prompts can override built in safeguards. In one case, the Claude based security review tool processed a malicious title and exposed sensitive credentials in its automated response. The researcher described the method as “comment and control,” since the full attack cycle occurs داخل GitHub without external infrastructure.

The same approach proved effective against multiple systems. Google’s Gemini CLI agent was tricked into exposing its API key by disguising malicious instructions as trusted content. Microsoft’s GitHub Copilot agent was manipulated using hidden HTML comments embedded in Markdown, invisible to users but readable by the AI system. This method bypassed multiple layers of runtime protection.

Despite the severity, responses from the affected companies remained limited. Anthropic issued a small bug bounty and added a documentation warning. Google and Microsoft also paid rewards through their vulnerability programs. None of the companies released formal security advisories or assigned CVE identifiers, leaving many users unaware of potential exposure, especially those running outdated versions.

The findings highlight broader structural risks in AI agent ecosystems. A separate analysis by OX Security identified a critical flaw in Anthropic’s Model Context Protocol, which connects AI agents to external tools. The vulnerability could enable arbitrary command execution on affected servers, impacting widely used software components.

These incidents build on earlier research by Aikido Security, which showed that prompt injection attacks can compromise AI systems embedded in CI CD pipelines. This class of vulnerabilities, sometimes referred to as “PromptPwnd,” demonstrates that AI agents can be manipulated in ways similar to phishing attacks, but targeting machines instead of users.


  • Fajr
  • Sunrise
  • Dhuhr
  • Asr
  • Maghrib
  • Isha

This website, walaw.press, uses cookies to provide you with a good browsing experience and to continuously improve our services. By continuing to browse this site, you agree to the use of these cookies.