Boston Dynamics Led a Robot Revolution. Now Its Machines Are Teaching Themselves New Tricks

Boston Dynamics founder Marc Raibert says reinforcement learning is helping his creations gain more independence.
Photo-Illustration: WIRED Staff/Getty Images

Marc Raibert, the founder and chairman of Boston Dynamics, gave the world a menagerie of two- and four-legged machines capable of jaw-dropping parkour, infectious dance routines, and industrious shelf stacking.

Raibert is now looking to lead a revolution in robot intelligence as well as acrobatics. And he says that recent advances in machine learning have accelerated his robots’ ability to learn how to perform difficult moves without human help. “The hope is that we'll be able to produce lots of behavior without having to handcraft everything that robots do,” Raibert told me recently.

Boston Dynamics might have pioneered legged robots, but it’s now part of a crowded pack of companies offering robot dogs and humanoids. Just this week, a startup called Figure showed off a new humanoid called Helix, which can apparently unload groceries. Another company, 1X, showed off a muscly-looking humanoid called NEO Gamma doing chores around the home. A third, Apptronik, said it plans to scale up manufacturing of its humanoid, called Apollo. Demos can be misleading, though. Few companies disclose how much their humanoids cost, and it is unclear how many of them really expect to sell the machines as home helpers.

The real test for these robots will be how much they can do independent of human programming and direct control. And that will depend on advancements like the ones Raibert is touting. Last November I wrote about efforts to create entirely new kinds of models for controlling robots. If that work starts to bear fruit, we may see humanoids and quadrupeds advance more rapidly.

Boston Dynamics' Spot RL Sim in action. Credit: Boston Dynamics

Boston Dynamics sells a four-legged robot called Spot that is used on oil rigs, construction sites, and other places where wheels struggle with the terrain. The company also makes a humanoid called Atlas for research. Raibert says Boston Dynamics used an artificial intelligence technique called reinforcement learning to upgrade Spot’s ability to run, so that it moves three times faster. The same method is also helping Atlas walk more confidently, Raibert says.

Reinforcement learning is a decades-old way of having a computer learn to do something through experimentation combined with positive or negative feedback. It came to the fore last decade when Google DeepMind showed it could produce algorithms capable of superhuman strategy and gameplay. More recently, AI engineers have used the technique to get large language models to behave themselves.
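For readers curious what that trial-and-error loop actually looks like, here is a minimal, purely illustrative sketch in Python (not Boston Dynamics’ code): a tabular Q-learning agent that learns, from reward feedback alone, to march toward a goal on a tiny one-dimensional track. Real locomotion work replaces the lookup table with a deep neural-network policy and the toy environment with a high-fidelity physics simulator, but the feedback loop is the same.

```python
import random

# Toy reinforcement-learning loop: an agent on a 1-D track (states 0..4)
# learns to reach the goal at state 4 purely from experimentation plus
# positive or negative feedback.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise pick the action currently rated best.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else -0.01   # the feedback signal
        # Nudge the value estimate for (state, action) toward what was observed.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy steps right toward the goal from every state.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)])
```

The same principle scales up: instead of a handful of states and two actions, a legged robot’s policy maps joint angles and body pose to motor commands, and the reward encodes things like forward speed and staying upright.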

Raibert says highly accurate new simulations have sped up what can be an arduous learning process by allowing robots to practice their moves in silico. “You don't have to get as much physical behavior from the robot [to generate] good performance,” he says.

Several academic groups have published work that shows how reinforcement learning can be used to improve legged locomotion. A team at UC Berkeley used the approach to train a humanoid to walk around their campus. Another group at ETH Zurich is using the method to guide quadrupeds across treacherous ground.

Boston Dynamics has been building legged robots for decades, based on Raibert’s pioneering insights into how animals balance dynamically using the kind of low-level control provided by their nervous systems. As nimble-footed as the company’s machines are, however, more advanced behaviors, including dancing, doing parkour, and simply navigating around a room, normally require either careful programming or some kind of human remote control.

In 2024 Raibert founded the Robotics and AI (RAI) Institute to explore ways of increasing the intelligence of legged and other robots so that they can do more on their own. While we wait for robots to actually learn how to do the dishes, AI should make them less accident-prone. “You break fewer robots when you actually come to run the thing on the physical machine,” says Al Rizzi, chief technology officer at the RAI Institute.

What do you make of the many humanoid robots now being demoed? What kinds of tasks do you think they should do? Write to us at [email protected] or comment below.
