Critical Zero-Click AI Vulnerability Allows Hidden Data Access via Google Docs and Enterprise Workflows
A critical AI security flaw called GeminiJack exposes enterprise data through Google Gemini and Vertex AI Search, enabling silent data leaks via prompt injection, raising serious concerns over AI security, data protection and enterprise cloud privacy.
A recently discovered security weakness raises severe concerns about the safety of enterprise AI tools.The vulnerability, known as GeminiJack, affects Google Gemini Enterprise and Vertex AI Search and may allow attackers to secretly access important company data without recognition, user action, or security alert. What makes this situation problematic is that it does not function like a typical cyberattack. There are no phishing links, malware downloads, or stolen login credentials. Instead, the flaw is in how enterprise AI systems interpret and handle information. Simply put, the AI becomes an entry point for data theft.
The attack begins when a hacker inserts harmful instructions into common files like Google Docs, emails, or calendar invites. These files appear normal and may already be in a company's Google Workspace. When an employee asks Gemini Enterprise a routine question, such as reviewing budgets or reports, the AI searches these hidden files. Due to a flaw in the system's design, Gemini considers the hidden instructions acceptable commands. The AI then searches emails, documents, calendars, and other linked data sources for sensitive information. This data is silently sent out via what appears to be a standard image request, making it nearly impossible for security systems to detect. There are no alerts, no strange login activity, and no indication that anything went wrong.
This means that a single successful attack might reveal years of emails, whole calendar records, internal documents, contracts, and confidential corporate information. Employees continue to work as usual, and security teams report no unusual activity. This stealthy behavior makes the GeminiJack vulnerability particularly risky for businesses that use AI-powered search tools.
After the issue was identified, Google collaborated with security researchers to confirm the findings and swiftly implemented improvements. One key step was to separate Vertex AI Search from Gemini Enterprise, so that they no longer share AI workflows. Google also changed how these systems handle obtained content to limit the likelihood of prompt injection attacks.
This demonstrates a growing risk associated with enterprise AI use. As businesses increasingly rely on AI technologies for search, analysis, and productivity, AI security, data protection, and quick injection avoidance become as critical as traditional cybersecurity safeguards. This event serves as a warning that businesses must carefully consider how AI systems access, analyze, and protect important corporate data.
Information referenced in this article is from GB Hackers