Flaws in ChatGPT extensions allowed access to sensitive data
New threat research from Salt Labs has uncovered critical security flaws within ChatGPT plugins, highlighting a new risk for enterprises.
Plugins give AI chatbots like ChatGPT access and permissions to perform tasks on behalf of users on third-party websites, such as committing code to GitHub repositories or retrieving data from an organization's Google Drive.
These security flaws introduce a new attack vector and could enable bad actors to gain control of an organization's account on third-party websites, or to access personally identifiable information (PII) and other sensitive user data stored within third-party applications.
The Salt Labs team uncovered three different types of vulnerabilities within ChatGPT plugins. The first exploits ChatGPT's code-approval process, allowing an attacker to install a malicious plugin on a victim's account and thereby gain access to that account.
The second was in PluginLab (pluginlab.ai), a framework developers and companies use to build plugins for ChatGPT. Researchers found that PluginLab didn't properly authenticate user accounts, meaning an attacker could insert another user's ID and obtain an authorization code representing the victim, leading to account takeover on the plugin.
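The flaw described above follows a well-known broken-authentication pattern: an endpoint mints an authorization code for whatever member ID the client supplies, instead of deriving identity from the authenticated session. The sketch below illustrates that pattern in general terms; the function names, the signing scheme, and the session layout are assumptions for the example, not PluginLab's actual code.

```python
# Illustrative sketch of the broken-authentication pattern: trusting a
# client-supplied user ID when minting an authorization code.
import hashlib
import hmac

SECRET = b"demo-secret"  # placeholder signing key for the sketch


def issue_auth_code(member_id: str) -> str:
    # Stand-in for minting an OAuth-style code bound to a member ID.
    return hmac.new(SECRET, member_id.encode(), hashlib.sha256).hexdigest()


def issue_code_vulnerable(request: dict) -> str:
    # VULNERABLE: trusts the client-supplied member ID, so an attacker
    # who submits a victim's ID receives a code representing the victim.
    return issue_auth_code(request["member_id"])


def issue_code_fixed(request: dict, session: dict) -> str:
    # FIXED: identity comes only from the authenticated session;
    # any client-supplied member ID is ignored.
    return issue_auth_code(session["user_id"])
```

In the vulnerable variant, nothing ties the issued code to the caller's own identity, which is exactly what makes the account takeover possible.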
The third vulnerability, uncovered within several plugins, was OAuth (Open Authorization) redirect manipulation. The affected plugins don't validate redirect URLs, which means an attacker can insert a malicious URL and steal user credentials.
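The standard defense against this class of attack is to check the redirect target against an exact allowlist of registered callback URIs before sending the user (and the authorization code) anywhere. The sketch below shows one minimal way to do that; the allowlist entries and function name are hypothetical, not taken from any affected plugin.

```python
# Illustrative sketch of the exact-match redirect check the affected
# plugins were missing. Allowlist entries are hypothetical examples.
from urllib.parse import urlparse

ALLOWED_REDIRECTS = {
    ("https", "plugin.example.com", "/oauth/callback"),  # hypothetical registered URI
}


def is_safe_redirect(url: str) -> bool:
    # Compare scheme, host, and path against the registered callbacks.
    # Without this check, an attacker-controlled redirect URL receives
    # the OAuth code and can use it to take over the user's session.
    p = urlparse(url)
    return (p.scheme, p.hostname, p.path) in ALLOWED_REDIRECTS
```

Exact matching on the full (scheme, host, path) tuple is deliberate: prefix or substring checks are a classic source of open-redirect bypasses.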
On discovering the vulnerabilities, Salt Labs' researchers followed coordinated disclosure practices with OpenAI and the third-party vendors. All issues were quickly remediated and there's no evidence that these flaws have been exploited in the wild.
"Generative AI tools like ChatGPT have rapidly captivated the attention of millions across the world, boasting the potential to drastically improve efficiencies within both business operations as well as daily human life," says Yaniv Balmas, vice president of research at Salt Security. "As more organizations leverage this type of technology, attackers are too pivoting their efforts, finding ways to exploit these tools and subsequently gain access to sensitive data. Our recent vulnerability discoveries within ChatGPT illustrate the importance of protecting the plugins within such technology to ensure that attackers cannot access critical business assets and execute account takeovers."
You can read more on the Salt Security blog.
Image credit: Arwagula/Dreamstime.com