OpenAI, the company behind the popular chatbot ChatGPT, has confirmed a data breach that exposed users' personal information. The breach, which occurred on March 20, was caused by a bug in redis-py, an open-source Redis client library used by ChatGPT. As a consequence, some users were able to see chat data belonging to other users, and sensitive payment-related details of 1.2% of ChatGPT Plus subscribers were exposed. While OpenAI has reached out to affected users and says there is no ongoing risk, the incident underscores the importance of properly testing and securing the open-source libraries used in software development.
GreyNoise, a threat intelligence company, has issued a warning about ChatGPT plugins, a feature that extends the chatbot's information-gathering capabilities. OpenAI provided customers with code examples that include a Docker image for the MinIO distributed object storage system. However, the MinIO version used in OpenAI's example image is affected by CVE-2023-28432, an information disclosure vulnerability, and GreyNoise has already observed attempts to exploit it in the wild.
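One practical defense against shipping a vulnerable image is checking the pinned release tag against the patched build. The sketch below illustrates this for MinIO, whose release tags embed a UTC timestamp and therefore compare lexicographically; the patched tag shown is taken from MinIO's advisory for CVE-2023-28432, and you should verify it against the advisory before relying on it.

```python
# Sketch: flag MinIO release tags that predate the fix for CVE-2023-28432.
# MinIO tags releases as RELEASE.<UTC timestamp>, so string comparison of
# tags matches chronological order.

# First release containing the fix, per MinIO's security advisory
# (verify against the advisory before relying on this constant).
PATCHED_TAG = "RELEASE.2023-03-20T20-16-18Z"

def is_vulnerable(release_tag: str) -> bool:
    """Return True if the given MinIO release predates the patched build."""
    if not release_tag.startswith("RELEASE."):
        raise ValueError(f"unexpected MinIO tag format: {release_tag!r}")
    return release_tag < PATCHED_TAG

# An older build is flagged; the patched build and anything later is not.
print(is_vulnerable("RELEASE.2023-03-13T19-46-17Z"))  # older, vulnerable build
print(is_vulnerable(PATCHED_TAG))                     # patched build
```

The same pattern applies to any image whose tags sort chronologically; for arbitrary version schemes, a proper version parser is needed instead of string comparison.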
These incidents highlight the importance of proper security testing and monitoring for software systems and the libraries they depend on. Developers should test for potential vulnerabilities and stay up to date with security patches for open-source dependencies. Organizations should also have an incident response plan in place, including timely communication with affected users and additional security measures to prevent future breaches.
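Staying current with patches starts with knowing which pinned dependencies fall below a known-safe version. The minimal sketch below audits `name==X.Y.Z` pins against a version floor; the advisory map is hypothetical and for illustration only, since in practice tools such as pip-audit query real vulnerability databases like OSV.

```python
# Sketch: a minimal audit of pinned requirements against minimum safe
# versions. The MIN_SAFE map below is hypothetical, for illustration only;
# real tooling (e.g. pip-audit) pulls advisories from databases such as OSV.

# Hypothetical minimum safe versions, keyed by package name.
MIN_SAFE = {
    "redis": (4, 5, 4),    # illustrative floor, not an actual advisory
    "minio": (7, 1, 14),   # illustrative floor, not an actual advisory
}

def parse_pin(line: str) -> tuple[str, tuple[int, ...]]:
    """Parse a 'name==X.Y.Z' requirement pin into (name, version tuple)."""
    name, _, version = line.strip().partition("==")
    return name.lower(), tuple(int(part) for part in version.split("."))

def audit(requirements: list[str]) -> list[str]:
    """Return names of pinned packages below their minimum safe version."""
    flagged = []
    for line in requirements:
        name, version = parse_pin(line)
        floor = MIN_SAFE.get(name)
        if floor is not None and version < floor:
            flagged.append(name)
    return flagged

print(audit(["redis==4.5.1", "minio==7.1.14"]))  # only the stale redis pin
```

Running such a check in continuous integration turns "stay up to date with patches" from a policy statement into an enforced build gate.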