Researchers Develop Method to Erase Data From Trained AI Models

A new "source-free certified unlearning" technique from UC Riverside offers a path for AI firms to comply with "right to be forgotten" laws.

Olivia Sharp
Computer scientists at the University of California, Riverside, have developed a method for erasing specific private or copyrighted data from a trained AI model without access to the original training dataset. The breakthrough, announced on August 28, 2025, addresses a major technical and legal challenge for the AI industry and offers a practical way for companies to comply with "right to be forgotten" mandates in privacy laws.

"Source-Free Certified Unlearning"

The technique, called "source-free certified unlearning," compels an AI model to "forget" selected information while preserving its performance on the remaining data. This is a significant advance …
