Tools & Products

Meta Releases SAM 3 and SAM 3D to Reconstruct the Physical World

New open-source models allow users to segment and build 3D objects from simple text prompts.

Olivia Sharp · 1 min read
Meta launched SAM 3 and SAM 3D, enabling text-based object segmentation and 3D reconstruction from single images for developers.

Meta AI has advanced the field of computer vision with the release of SAM 3 (Segment Anything Model 3) and SAM 3D. The models push beyond 2D image recognition, giving developers the ability to reconstruct three-dimensional objects and human bodies from single flat images.

SAM 3: Promptable Segmentation

The third iteration of the Segment Anything Model introduces "Promptable Concept Segmentation." Unlike previous versions, which required clicks or bounding boxes, SAM 3 can identify and segment objects based on abstract text descriptions (e.g., "yellow school bus") or image exemplars.

* Benchmark: To validate performance, Meta released the SA-Co benchmark, featuring over …
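To make the calling pattern concrete, here is a minimal sketch of what a text-prompted segmentation interface could look like. The class and method names below are hypothetical illustrations, not Meta's actual SAM 3 API; the stub returns a fixed result so the example is runnable without model weights.

```python
# Hypothetical sketch of a text-prompted ("concept") segmentation interface.
# ConceptSegmenter, Mask, and segment() are illustrative names, not Meta's API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Mask:
    label: str                    # the concept this mask matches
    score: float                  # model confidence for the match
    pixels: List[Tuple[int, int]]  # (row, col) coordinates inside the mask

class ConceptSegmenter:
    """Stub standing in for a SAM-3-style model: given an image and a text
    prompt, return one mask per object instance matching the concept."""

    def segment(self, image, prompt: str) -> List[Mask]:
        # A real model would run vision and text encoders here; the stub
        # just returns a fixed mask so the calling pattern executes.
        return [Mask(label=prompt, score=0.97, pixels=[(0, 0), (0, 1)])]

# Usage: the prompt names a concept, and every matching instance is
# segmented, rather than a single clicked object.
masks = ConceptSegmenter().segment(image=None, prompt="yellow school bus")
for m in masks:
    print(m.label, round(m.score, 2))
```

The key design difference from click-based prompting is that the prompt is reusable across images: the same text concept can be applied to a whole dataset without per-image user interaction.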

