Apple limits employees’ use of ChatGPT for fear of data breaches
Apple has banned employees from using AI tools such as OpenAI’s ChatGPT over fears that confidential information entered into these systems will be leaked or collected.
According to a report in The Wall Street Journal, Apple employees have also been warned against using GitHub’s AI programming assistant Copilot. Bloomberg reporter Mark Gurman tweeted that ChatGPT had been on Apple’s restricted software list “for months.”
Apple has good reason to be wary. By default, OpenAI saves all interactions between users and ChatGPT. These conversations are collected to train OpenAI’s systems and can be inspected by moderators for violations of the company’s terms of service.
Back in April, OpenAI launched a feature that lets users turn off their chat history (coincidentally, not long after several EU countries began investigating the tool for possible privacy violations). But even with this setting enabled, OpenAI still retains conversations for 30 days, with the option to review them “for misuse,” before permanently deleting them.
Given ChatGPT’s usefulness for tasks such as improving code and brainstorming ideas, Apple has legitimate reason to worry that employees will enter information about confidential projects into the system, where it could then be viewed by one of OpenAI’s moderators. Research also shows that it is possible to extract training data from some language models via the chat interface, although there is no evidence that ChatGPT itself is vulnerable to such attacks.
Apple’s ban is noteworthy, however, as OpenAI launched its iOS app for ChatGPT just this week. The app is free to use, supports voice input, and is currently available only in the US. OpenAI says it will launch the app in other countries soon, along with an Android version.