Hidden Text in Emails Can Manipulate Gmail’s Gemini AI, Raising New Cybersecurity Concerns
A researcher found that Gmail's Gemini AI can be tricked through hidden prompt injection, potentially enabling phishing scams. Google is aware of the issue and working on safeguards.

A cybersecurity researcher identified a weakness in Gmail's AI assistant, Gemini, that could be exploited to carry out phishing attacks. Gemini, which provides useful capabilities such as email summarization and message rewriting, can potentially be tricked through a technique known as prompt injection.
The finding was made by Marco Figueroa, GenAI Bug Bounty Programs Manager at Mozilla, as part of its AI-focused bug bounty project, 0din. According to Figueroa, attackers can mislead Gemini into displaying phishing content by quietly embedding harmful instructions in an email rather than relying on links or attachments.
The technique is known as indirect prompt injection. In this scenario, the attacker sends a normal-looking email but conceals malicious instructions at the bottom of the message. The text might be set in white type on a white background, making it invisible to the human reader. Other approaches include using extremely small font sizes or HTML and CSS rules that push the text off-screen.
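For illustration only, the hypothetical Python sketch below shows how such hidden text could sit inside an otherwise ordinary HTML email. The addresses, wording, and styling are invented examples, not the researcher's actual test case.

```python
# Hypothetical sketch of the hiding techniques described above: white-on-white
# text, a tiny font size, and CSS that moves the element off-screen. All names
# and wording here are invented for illustration.
from email.mime.text import MIMEText

visible_body = "<p>Hi, the quarterly report is ready for review.</p>"

# A human reading the rendered email never sees this span, but software that
# processes the raw HTML (such as an AI summarizer) still receives the text.
hidden_instruction = (
    '<span style="color:#ffffff; font-size:1px; position:absolute; left:-9999px;">'
    "(attacker-chosen instructions would go here)"
    "</span>"
)

msg = MIMEText(visible_body + hidden_instruction, "html")
msg["Subject"] = "Quarterly report"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"

print(msg.as_string())
```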
When the user selects Gemini's "Summarize Email" option, the AI reads the full email, including the hidden text, and follows the attacker's instructions. Because the resulting summary comes from Gemini rather than a suspicious sender, users may be more inclined to trust it and fall for the scam.
In response, Google stated that it has not seen this technique exploited in the wild. The company acknowledged the issue, however, and said it is working on ways to further protect users from prompt injection attacks.
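Google has not described what form those safeguards will take. Purely as an illustration of one conceivable class of defense, the sketch below strips HTML elements whose inline styles suggest they are invisible before any text reaches a summarizer. It assumes the third-party BeautifulSoup library and would be far from complete in practice, since external stylesheets, CSS classes, zero opacity, and many other tricks are not covered.

```python
# Minimal sketch of one conceivable pre-summarization safeguard (not Google's
# actual mitigation): drop HTML elements whose inline styles make them
# invisible to a human reader before passing the text to an AI assistant.
import re

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Inline-style patterns that commonly hide text from human readers.
INVISIBLE_STYLE = re.compile(
    r"color:\s*#fff(?:fff)?\b"   # white-on-white text
    r"|font-size:\s*[01]px"      # zero- or one-pixel fonts
    r"|left:\s*-\d+px"           # pushed off-screen
    r"|display:\s*none",         # hidden outright
    re.IGNORECASE,
)

def visible_text(html: str) -> str:
    """Return only the text a human reader would plausibly see."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(style=INVISIBLE_STYLE):
        tag.decompose()  # remove the hidden element and its contents
    return soup.get_text(separator=" ", strip=True)

if __name__ == "__main__":
    sample = (
        "<p>Hi, the quarterly report is ready for review.</p>"
        '<span style="color:#ffffff">(hidden instructions)</span>'
    )
    print(visible_text(sample))  # -> only the visible paragraph remains
```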
This discovery serves as a reminder that even useful AI features can be exploited if they are not sufficiently protected. Users should exercise caution, even when messages appear to come from trusted tools such as Gemini.
This article is based on information from Gadgets 360.