Samsung Electronics has joined a growing number of major global companies, including JPMorgan, Bank of America, Goldman Sachs, and Citigroup, in restricting or banning the use of generative artificial intelligence (AI) tools such as ChatGPT over security concerns. Samsung’s decision to bar employees from using generative AI came after sensitive internal code was uploaded to ChatGPT.
The company cited concerns that data sent to AI platforms may be stored on external servers, where it is difficult to retrieve or erase. While Samsung is reviewing security measures to create an environment in which generative AI can be used safely to enhance productivity and efficiency, the company has temporarily restricted its use until such measures are in place.
The ban covers the use of generative AI tools on Samsung-owned devices and internal networks. Samsung has also asked employees who use such tools on personal devices not to submit any company information, warning that violations risk disciplinary action up to and including termination of employment.
In an internal survey conducted in April, 65% of Samsung employees who responded said they believed generative AI tools pose a security risk. Samsung is nevertheless developing its own AI tools, including one for translating and summarizing documents. It remains to be seen whether other companies will follow suit and build their own AI tools, or continue to restrict or ban generative AI over security concerns.
Samsung is developing its own AI tool
The bans imposed by major global companies reflect growing concern over the technology’s security risks. While AI tools can enhance productivity and efficiency, the storage and handling of sensitive data remain major worries for companies.
Moreover, the restrictions are not an outright rejection of the technology: companies such as JPMorgan and Samsung are developing in-house alternatives. This, however, raises questions about the consistency of AI security measures and how effective they are at protecting sensitive data.
Another concern is the risk of falling behind competitors who use generative AI tools to enhance their operations. While avoiding such tools may be safer, forgoing the innovation and productivity gains they offer could ultimately be detrimental to a company’s long-term success.
As the use of AI tools continues to expand across various industries, it is crucial for companies to ensure that proper security measures are in place to prevent data breaches and protect sensitive information. While some companies may opt to ban the use of generative AI tools altogether, others may develop their own tools to reap the benefits of the technology while mitigating potential risks.
Beyond security, generative AI has also raised ethical questions in recent years. The technology can be used to produce fake news, deepfakes, and other misleading or malicious content, with serious consequences for individuals and society as a whole.