OpenAI and Meta Announce New AI Safety Features for Teens
Following recent legal pressure, both companies on September 2 detailed plans for new parental controls and content restrictions for younger users.
OpenAI and Meta announced on September 2, 2025, that they are adjusting their AI chatbots to better protect teenage users, particularly those showing signs of mental distress. The moves come in the wake of a landmark wrongful death lawsuit filed against OpenAI and growing public and regulatory pressure on technology companies to ensure their AI products are safe.[31]
A Response to Legal and Regulatory Pressure
The announcements follow a series of events that have put AI safety in the spotlight. On August 26, the parents of a 16-year-old filed the first known wrongful death lawsuit against OpenAI, alleging …