OpenAI announced its Mac desktop app for ChatGPT a few weeks ago with much fanfare, but it turned out to have a pretty serious security flaw: user chats were stored in plaintext, making them available to malicious parties if they gained access to your computer.
As Threads user Pedro José Pereira Vieito noted earlier this week, “the OpenAI ChatGPT app on macOS is not sandboxed and stores all conversations in plaintext in an unprotected location.” This means that “any other running app/process/malware can read all your ChatGPT conversations without asking your permission.”
He added:
macOS has been blocking access to personal user data since macOS Mojave 10.14 (6 years ago!). Any app that accesses personal user data (Calendar, Contacts, Mail, Photos, third-party sandboxed apps, etc.) now requires explicit user permissions.
OpenAI has chosen not to use the sandbox and instead stores the conversations in plaintext in an unprotected location, thus disabling any built-in defenses.
OpenAI has since updated the app to encrypt local chats, but it is still not sandboxed. (The app is distributed only as a direct download from OpenAI’s website, not through Apple’s App Store, which requires apps to be sandboxed.)
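The exposure Vieito describes can be sketched in a few lines. This is an illustration only: the directory and file names below are hypothetical, not the app’s actual storage path. The point is that files outside the sandbox and outside macOS’s TCC-protected folders (Documents, Photos, Contacts, and so on) can be read by any process running as the user, with ordinary file I/O and no permission prompt.

```python
# Sketch of reading plaintext files from an unprotected location.
# The path passed in is hypothetical; nothing here is specific to ChatGPT.
from pathlib import Path


def read_plaintext_chats(storage_dir: str) -> list[str]:
    """Read every conversation file in an unprotected directory.

    Plain open()/read_text() is enough -- no entitlement, sandbox
    exception, or user consent dialog is involved for such paths.
    """
    return [p.read_text() for p in sorted(Path(storage_dir).glob("*.json"))]
```

Encrypting the files at rest, as the updated app now does, means a casual reader like this sees only ciphertext; sandboxing would additionally wall the directory off from other apps.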
Many people now use ChatGPT the way they use Google: to ask important questions and work through problems. Those conversations often contain sensitive personal information.
It’s not a good look for OpenAI, which recently partnered with Apple to build chatbot services into Siri queries across Apple’s operating systems. Apple detailed some of the security measures surrounding those queries at WWDC last month, and they’re far stricter than what OpenAI did (or, more accurately, didn’t do) with its Mac app, which is a separate initiative from the partnership.
If you have used the app recently, make sure to update it as soon as possible.