Zoom, the popular video conferencing application, has responded to growing privacy concerns raised by users over recent changes to its terms of service. The changes included language requiring users to grant Zoom permission to use their data for training artificial intelligence (AI) models. In response to the backlash, Zoom announced that it would not use customer data for AI training without explicit consent, seeking to alleviate fears about data privacy.
Controversial clause in Zoom’s terms of service
The controversy stemmed from Section 10.4 of Zoom’s updated terms of service, which users were required to agree to. This section granted Zoom a broad license for various purposes, including machine learning, AI training, and product improvement. This raised concerns about the potential misuse of users’ audio, video, and chat content without their knowledge or consent.
AI applications in Zoom and privacy concerns
Zoom had incorporated AI into its services, including features like Zoom IQ Meeting Summary and automated scanning of webinar invitations to detect spam. While these AI features had their merits, they also triggered concerns about how user data was being used to train the underlying AI models.
Zoom’s response to privacy concerns
To address these concerns, Zoom released a blog post emphasizing that users have the choice to enable or disable AI features. Meeting administrators were given the option to opt out of sharing meeting summary data with Zoom. Additionally, non-administrator participants were informed about the new data-sharing policies and were given the choice to accept or decline.
A spokesperson from Zoom confirmed that the company updated its terms of service to explicitly state that it would not utilize customer content for AI model training without their consent. This move was intended to reassure users that their data would not be exploited for AI purposes without explicit permission.
Mixed reactions and persistent concerns
Despite Zoom’s efforts to allay concerns, data privacy advocates and some users remained skeptical. Some users threatened to cancel their Zoom accounts, while others demanded more comprehensive revisions to the terms of service. A key point of contention was that only meeting administrators could opt out of data usage for AI training, leaving other participants without a similar choice.
This highlights the ongoing scrutiny of AI technologies and the broader debate about the ethical use of personal data to train AI models.
Public scrutiny of AI and data privacy
The backlash against Zoom’s AI data collection practices reflects a broader trend of public scrutiny over the use of AI and data privacy. The concern extends beyond just Zoom’s terms of service and encompasses long-standing anxieties surrounding the use of personal data to train AI models.
Janet Haven, the executive director of Data & Society, emphasized that these concerns are not limited to Zoom but are symptomatic of a larger issue. She highlighted the lack of robust legal protections for data privacy in society, leaving individuals to navigate complex terms of service on an individual basis.
Response from users and organizations
Some individuals and organizations chose to take action in response to the controversy. Aric Toler, the director of training and research at Bellingcat, a research publication, announced that the organization would no longer use Zoom Pro due to concerns about data privacy. Despite Zoom’s assurances, Toler felt it was better to disassociate from the platform to avoid potential future issues.
Bellingcat, which had relied on Zoom for hosting training workshops and webinars, decided to explore alternative video communication platforms like Jitsi Meet, Google Meet, and Microsoft Teams, while also evaluating their data usage policies.
A call for transparency and change
Data privacy advocates and experts stressed the need for increased transparency and public discourse around how companies integrate AI into their products and services. They argued that terms of service documents are often complex and intentionally written in ways that discourage users from scrutinizing them. There is a general lack of awareness and notification about changes to these documents, which places the burden on consumers to navigate this complexity alone.