- Samsung bans employees from using AI programs like ChatGPT after a data leak exposed sensitive internal source code, highlighting growing concerns about the security risks of generative AI platforms.
- The company is developing its own AI program and aims to create a secure environment for using generative AI, while emphasizing adherence to security guidelines and warning employees of potential disciplinary actions in case of breaches.
Samsung has issued a ban on employees using artificial intelligence (AI) programs such as ChatGPT after a data leak exposed sensitive company source code, according to a Bloomberg report on Tuesday. The leak occurred when some staff members reportedly uploaded confidential source code to ChatGPT, raising concerns that the uploaded data could be exposed to other users.
An internal memo obtained by Bloomberg News informed Samsung employees that the use of AI programs like ChatGPT was prohibited due to cybersecurity risks. The memo also noted the difficulty of retrieving and deleting data once it has been uploaded to such platforms.
“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung stated in the memo. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
Last month, the company conducted an internal survey in which 65% of respondents identified AI tools as a security risk. In response, Samsung has banned the use of AI programs on company devices and urged employees not to submit company information through these platforms on their personal devices.
Samsung is currently developing its own AI program and is committed to creating “a secure environment for safely using generative AI,” as stated in the memo. The company emphasized the importance of adhering to security guidelines, warning that breaches or compromises of company information could result in disciplinary action, including termination of employment.