Stanford Study Finds AI Chatbots Use Customer Data by Default
The research revealed that all six leading U.S. AI companies use customer conversations for model training, often without clear user consent.
A Pervasive Privacy Problem
A study released by Stanford University's Institute for Human-Centered Artificial Intelligence found that all six leading U.S. artificial intelligence companies use customer chat data by default to train or improve their models. The research highlights a significant gap between industry practices and consumer privacy expectations, creating a trust deficit that could hinder long-term adoption of the technology.
The study examined the privacy policies of Google, Meta, OpenAI, Microsoft, Amazon, and Anthropic. Researchers found that these companies often use customer conversations for model training without clear disclosure and, in some cases, retain the data indefinitely.