By Contributor,Zak Doffman
Copyright forbes
New AI hack attacks Gmail accounts.
dpa/picture alliance via Getty Images
This threat “is not specific to Google,” the company told me, after a new attack was shown to use AI to hack into Gmail accounts. “It illustrates why developing robust protections against prompt injection attacks is important.”
Direct and indirect prompt injection attacks hide instructions for AI assistants in emails, messages, websites, attachments and calendar invites. You won’t see them, but your AI assistant will. And all too often that assistant will do as it’s told.
ForbesMicrosoft Windows Deadline—30 Days To Update Or Stop Using Your PCBy Zak Doffman
“We got ChatGPT to leak your private email data,” Eito Miyamura posted on X, attaching a video of an attack on Gmail. “All you need? The victim’s email address.” Because “AI agents like ChatGPT follow your commands, not your common sense,” he warned, “with just your email, we managed to exfiltrate all your private information.”
Google warned of this type of attack in June, and says it’s relevant to this latest attack. “A new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves.” This affects “emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions,” which “demands immediate attention and robust security measures.”
MORE FOR YOU
This latest attack is a proof of concept, but it shows what’s behind the raft of AI hack warnings we have seen in 2025. This particular hack starts with a malicious calendar invite with “no need for the victim to accept the invite.”
When ChatGPT is asked to help prepare the user for the day ahead “by looking at their calendar,” the AI assistant is “hijacked by the attacker and will act on the attacker’s command, searching your private emails and sending the data to the attacker’s email.”
The first thing you need to do, Google says, is ensure the “known senders” setting is enabled in Google Calendar. “We’ve found this to be a particularly effective approach to helping users prevent malicious or spam events appearing on their calendar grid. The specific calendar invite would not have landed automatically unless the user has had prior interactions with the bad actor or changed the default settings.”
Then it’s down to the security of the AI models themselves. “Our model training with adversarial data significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models,” Google says, albeit this attack does not use Gemini.
What is really needed is a filter for the prompt injection attacks themselves. Google says it is “rolling out proprietary machine learning models that can detect malicious prompts and instructions within various formats, such as emails and files.”
Forbes‘Travel Hack’ — Do Not Use These Networks On Your SmartphoneBy Zak Doffman
The company gives the example of an email “that includes malicious instructions; our content classifiers help to detect and disregard malicious instructions, then generate a safe response for the user. This is in addition to built-in defenses in Gmail that automatically block more than 99.9% of spam, phishing attempts, and malware.”
“Remember,” Miyamura warns, “AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data.”
Editorial StandardsReprints & Permissions