Google DeepMind Unveils Advanced AI Models for Autonomous Robots
The new Gemini Robotics 1.5 models add agentic capabilities, allowing robots to reason, plan multi-step tasks, and use digital tools like a web browser.
A Leap in Embodied AI
Google's DeepMind research unit announced a major update to its robotics models on September 25, 2025, introducing new agentic capabilities that allow robots to perform complex, multi-step physical tasks. The new models, Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, enable robots to move beyond single-instruction execution to "genuine understanding and problem-solving," said Carolina Parada, head of robotics at DeepMind.
The update positions Google in direct competition with other companies developing general-purpose robotics software, including Nvidia's Project GR00T, Tesla's Optimus, and startups like Figure AI. The models can now perform tasks like packing a suitcase by …