OpenAI and Meta Announce New AI Safety Features for Teens

Following recent legal pressure, both companies on September 2 detailed plans for new parental controls and content restrictions for younger users.

Olivia Sharp
OpenAI and Meta announced on September 2, 2025, that they are adjusting their AI chatbots to better protect teenage users, particularly those showing signs of mental distress. The moves come in the wake of a landmark wrongful death lawsuit filed against OpenAI and growing public and regulatory pressure on technology companies to ensure the safety of their AI products.[31]

The announcements follow a series of events that have put AI safety in the spotlight. On August 26, the parents of a 16-year-old filed the first known wrongful death lawsuit against OpenAI, alleging …
