Tech giant Apple Inc. has halted the use of OpenAI's AI chatbot, ChatGPT, across its operations. The decision comes amid rising concerns that sensitive company data could be compromised.
Internal caution at Apple
In an internal directive, Apple has barred employees from using ChatGPT, which is developed by OpenAI and backed by industry rival Microsoft, along with comparable AI tools.
This embargo has been enforced during a period of Apple’s own AI technology development, indicating an intensifying focus on internal innovation and security.
The company’s apprehensions revolve around the possibility of employees inadvertently exposing proprietary company information through their interactions with these AI systems.
Notably, Copilot, the AI tool from Microsoft-owned GitHub that helps automate software code writing, has also been restricted within Apple's walls.
The internal prohibition comes just days after ChatGPT launched on iOS in Apple's App Store. The app is currently available to iPhone and iPad users in the U.S., with plans to expand to other regions in the near future.
The growing trend of AI tool restrictions
Apple isn’t alone in its stance, as major corporations worldwide have begun to curtail the internal usage of AI chatbots like ChatGPT.
For instance, electronics giant Samsung sent a memo to employees in early May prohibiting the use of generative AI tools after an incident in which sensitive source code was uploaded to ChatGPT.
Likewise, prominent financial institutions such as JPMorgan, Bank of America, Goldman Sachs, and Citigroup have implemented similar measures, banning their employees from using these AI tools.
It’s important to note that many of these companies, while restricting third-party AI utilities, are in the process of developing their own bespoke applications.
Risks and opportunities in AI utilization
The apprehension surrounding AI tools like ChatGPT and GitHub's Copilot stems from how they handle user-submitted data, which carries the risk of exposing proprietary code or other confidential information.
The risk is compounded by the fact that these platforms are owned or financially backed by competitors such as Microsoft. Even so, many firms are embracing AI in their workflows.
In fact, Goldman Sachs has disclosed its utilization of generative AI tools to assist in software code writing and testing. Similarly, management consulting firm Bain & Company revealed plans to integrate OpenAI’s generative tools into their management systems.
This highlights the dual nature of AI, offering significant efficiencies and cost savings, yet necessitating stringent security measures.
The limitation on the use of AI tools by some corporations does not necessarily stem from broader concerns about artificial intelligence, but rather from how third-party AI platform providers, like OpenAI, Google, and Microsoft, handle proprietary data shared on these services.
According to reports, Apple's decision is part of a wider strategic shift toward building its own AI tools, a development worth watching in the industry's competitive landscape.