Today, AI-powered services are already part of the mainstream. This means that more and more people are actively using them, integrating them into their daily lives. However, just like when browsing the internet, there are certain precautions to take when using AI that many people overlook. The end result will be to compromise your own data security. In a new development, a cybersecurity firm has found a flaw in ChatGPT that could have allowed hackers to steal private information from a user’s Gmail account.
ChatGPT vulnerability found that could have exposed Gmail data
To start, Radware is the team that discovered the vulnerability. OpenAI already sent a patch for it. However, the findings highlight a new and complex threat landscape.
The cybersecurity team found the issue within ChatGPT’s “deep research” function. If you are not aware, this feature can analyze large amounts of information from connected apps like Gmail and Google Drive. According to Radware, a hacker could have crafted a special email with secret instructions. If a user asked ChatGPT to perform a deep research query on their Gmail inbox, the chatbot could be tricked into scanning and leaking sensitive data to a hacker-controlled website. This data could range from names to addresses.
What makes this vulnerability particularly notable is that it’s an example of an AI agent being manipulated to steal data (as reported by Bloomberg. Usually, bad actors use AI as a tool to launch a direct attack. The process was difficult to execute. It required a very specific scenario. The user would have to ask a query related to a topic (like HR) that the malicious email was designed to exploit.
The threat was also difficult for traditional security tools to detect. Radware noted that standard security measures could not see or stop the sending of data. This is because the data exfiltration originated from OpenAI’s own infrastructure, and not from the user’s device or browser.
Vulnerability patched quickly, but reflects a new threat category
OpenAI was notified of the flaw and patched it in August before acknowledging it in September. A spokesperson for the company stated that the safety of its models is a top priority. They also said that they welcome research from the community that helps them improve their technology.
Fortunately, this vulnerability is no longer exploitable. But, the discovery points to a new category of security risks. As AI agents become more powerful and integrated into our daily lives, ensuring they are not a vector for data theft will be a critical challenge for developers and cybersecurity professionals alike.