One thing robots are notoriously bad at is learning by doing. You can pack plenty of information into a robotic brain, but ask a bot to teach itself a new motor task, even one as simple as stacking blocks or unscrewing a water bottle, and you're probably shit out of luck.
That, however, might be changing very soon. Researchers at UC Berkeley are now developing algorithms that robots can use to learn all sorts of tasks through trial and error, just like humans do. In practical terms, this could eventually lead to home service robots capable of handling any number of tedious tasks we'd rather not do: screwing in lightbulbs, plunging toilets, folding laundry.
Traditionally, robots make their way through the world with a vast amount of pre-programming that equips them to handle a range of scenarios. While this works reasonably well in controlled environments (laboratories or medical facilities, for instance), learning to adapt to the unknown is a critical step our robots will need to take if they're ever going to become more integrated into our daily lives.
To that end, researchers involved in Berkeley's "People and Robotics Initiative" are turning to a new branch of artificial intelligence known as deep learning, which draws inspiration from how the human brain's neural circuitry perceives and interacts with the world.
"For all our versatility, humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife, and we do not need to be programmed," said robotics researcher Sergey Levine in a press release. "Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system, that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own."
If you've ever used Siri, Google's speech-to-text program, or Google Street View, you've already benefited from recent advances in this field. But applying deep learning to motor skills has proven much more challenging. In terms of complexity, physical tasks go far beyond passive recognition of sights or sounds.
In recent experiments, researchers have been working with a small personal robot they call the "Berkeley Robot for the Elimination of Tedious Tasks," or BRETT. For his training, BRETT is presented with a series of simple motor tasks, such as placing pegs into holes or stacking LEGO bricks. The algorithm controlling BRETT's learning includes a reward function that scores BRETT based on how well he learns a new task. That reward system is key: Movements that bring BRETT closer to completing his task score higher than those that don't, and this information is relayed across thousands of parameters in his neural net.
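The core idea of reward-driven trial and error can be illustrated with a toy sketch. The snippet below is not the Berkeley team's algorithm (which trains a deep neural network with thousands of parameters); it is a deliberately simple stand-in, with made-up numbers, in which a "policy" is just a list of joint offsets, the reward is higher the closer the resulting hand position lands to a target, and random perturbations are kept only when they score better:

```python
import random

# Hypothetical 1-D stand-in for a motor task. All names and numbers
# here are illustrative, not from the Berkeley work.
TARGET = 3.0  # where we want the "hand" to end up

def reward(params):
    # Movements that land closer to the target score higher
    # (reward is the negative distance to the target).
    position = sum(params)
    return -abs(position - TARGET)

def train(steps=2000, n_params=5, noise=0.1, seed=0):
    rng = random.Random(seed)
    params = [0.0] * n_params
    best = reward(params)
    for _ in range(steps):
        # Trial: perturb the current parameters slightly.
        candidate = [p + rng.gauss(0, noise) for p in params]
        score = reward(candidate)
        # Error: keep the change only if it scored better.
        if score > best:
            params, best = candidate, score
    return params, best

params, best = train()
```

After a couple of thousand trials, the summed offsets sit very close to the target. The real system replaces this random hill climbing with gradient-based updates to a neural network, which is what lets it scale from a toy like this to vision-guided manipulation.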
So far, the results of BRETT's training have been astounding. Given the location of objects in a scene, BRETT is typically able to master a new assignment within ten minutes. If BRETT doesn't have the location of objects and instead needs to learn vision and motor control together, the process can take several hours.
"We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch," said Pieter Abbeel of UC Berkeley's Department of Electrical Engineering and Computer Sciences. "In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work."
As someone who's about to commit three solid days to spring cleaning, I find this very heartening news.

Top image: BRETT learning how to screw a cap onto a water bottle, via UC Berkeley Robot Learning Lab
Follow Maddie on Twitter or contact her at maddie.stone@gizmodo.com