OpenAI has revealed that a recent bug, which allowed some people to view other users’ chat history with ChatGPT, has also potentially leaked payment information for its paid users.
In a blog post, the company announced that earlier this week it fixed the bug, a flaw that forced it to temporarily take the chatbot offline. The bug let some people accidentally see other users’ chat history, an unusual breach of privacy you wouldn’t normally think to worry about, especially since there is currently no way to link your personal account to anyone else’s, as you might in a team environment.
In addition to that issue, OpenAI has also revealed that the bug “may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers.” While full payment card numbers were not exposed, other billing details were, including users’ names and addresses.
> Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window. In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time.
OpenAI says that this issue was also addressed with the bug fix and that any potentially impacted ChatGPT Plus subscribers are being notified. For paid subscribers, the company says it has taken the following actions:
- Extensively tested our fix to the underlying bug.
- Added redundant checks to ensure the data returned by our Redis cache matches the requesting user.
- Programmatically examined our logs to make sure that all messages are only available to the correct user.
- Correlated several data sources to precisely identify the affected users so that we can notify them.
- Improved logging to identify when this is happening and fully confirm it has stopped.
- Improved the robustness and scale of our Redis cluster to reduce the likelihood of connection errors at extreme load.
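OpenAI hasn’t published how its fix is implemented, but the second item on the list, a redundant check that cached data matches the requesting user, can be sketched in a few lines. The example below is a hypothetical illustration only: it uses an in-memory dict in place of the real Redis cluster, and the key names and helper functions are invented for the sketch.

```python
import json

# In-memory stand-in for the Redis cache (hypothetical; the real system
# uses a Redis cluster). Values are JSON payloads that record which
# user the cached data belongs to.
cache = {}

def cache_set(key, user_id, data):
    # Store the owner's ID alongside the payload so reads can verify it.
    cache[key] = json.dumps({"user_id": user_id, "data": data})

def cache_get(key, requesting_user_id):
    # Redundant check: even when the cache returns an entry, refuse to
    # serve it unless the recorded owner matches the requesting user.
    raw = cache.get(key)
    if raw is None:
        return None
    entry = json.loads(raw)
    if entry["user_id"] != requesting_user_id:
        # Ownership mismatch: evict the entry and treat the lookup as a
        # cache miss instead of returning another user's data.
        cache.pop(key, None)
        return None
    return entry["data"]
```

The design choice here is fail-closed: a mismatch is handled as a cache miss, so the worst case is a slower response rather than a cross-user data leak.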
While the company says it is notifying impacted users, it has not publicly said if it is offering any kind of program to ensure that affected users’ information is protected by a third-party service. When other companies have a data leak or breach, it’s quite standard to give consumers free access to a third-party data or identity protection service — at least for a certain amount of time. We’ll have to see if those impacted by the data exposure will be happy with OpenAI’s approach to addressing it.
The reveal comes in the same week that the company announced the launch of plugins for its popular chatbot, ChatGPT, to provide users with a wider range of possible use cases. Microsoft also recently integrated GPT-4 into its Azure OpenAI Service.