4 Keys to Making the Robots of Our Imagination a Reality

‘The robots of reality are starting to get a lot closer to the robots of our imagination,’ said Sarah Bergbreiter. In her talk on advanced robotics at Singularity University’s Exponential Manufacturing Summit, Bergbreiter elaborated on how modern robots have already come to resemble the most fantastic robots humans have imagined.
 
Bergbreiter joined the University of Maryland, College Park in 2008 as an Assistant Professor of Mechanical Engineering, with a joint appointment in the Institute for Systems Research. She received the DARPA Young Faculty Award in 2008, the NSF CAREER Award in 2011, and the Presidential Early Career Award for Scientists and Engineers (PECASE) Award in 2013 for her research on engineering robotic systems down to sub-millimeter size scales.
 
Below are four key areas Bergbreiter thinks roboticists need to hone to make sure their robots add maximum value to our jobs, our homes, and our lives.
 
1. Focus on how they interact with humans
 
At the Tesla plant in Fremont, California, there are dozens of robots, but they’re all caged off from people, with robots and employees performing completely separate tasks. Robots programmed to perform a task or series of tasks over and over are already widespread, but enabling robots to work with people is still a major manufacturing challenge.
 
Robots need to be able to understand what people are doing, and vice-versa. How do we get robots to understand social cues and display them back to us?
 
The Advanced Robotics for Manufacturing Institute (ARM Institute) focuses on collaborative robotics, or robots complementing a person’s job to enhance productivity. The institute’s mission is to lower the barriers for companies to adopt robotics technology, and in the process, bring currently off-shored production back onshore.
 
Robots that work with people rather than instead of them will not only save jobs, they’ll also bring new advances in efficiency and innovation. But we need to keep people in the equation as we develop them.
 
2. Make them softer
 
When you picture a robot, whether it currently exists or is a product of your imagination, you’re most likely picturing a rigid machine with a lot of right angles and not much squishiness or pliability. That’s because the field of soft robotics is just starting to take off, with the first-ever completely soft autonomous robot unveiled in December 2016.
 
One of the problems with traditional robots is that they tend to be clunky, heavy, and limited in their movement. Soft robots can do things rigid robots can’t, like manipulate objects more precisely, climb, grow, or stretch.
 
Having robots perform these actions is useful across a variety of settings, from exoskeletons, which are beginning to be used to augment people in a manufacturing context, to rescue robots that could grasp and turn a valve or climb through rubble in places humans can’t access.
 
Soft robots are also more compliant and safer around humans; if you can touch a robot, there’s a lot more you can do in terms of programming it. And the best part is, making robots soft actually lowers their cost. This will enable robotic manufacturing in places that couldn’t do it before.
 
3. Give soft robots sensors
 
Soft robots have a lot of advantages over rigid ones, but they’re still stuck with one major drawback: they’re harder to control. Soft sensors are thus a crucial research area in robotics right now.
 
San Francisco startup Pneubotics makes robots out of fabric and air, with the goal of making robots that can interact with and react to the world. Their robots move by shifting air around to different compartments inside the fabric. To improve their precision and reactive capability, they’ll be equipped with sensors tailored to their function or task.
 
And there is some progress there. Recently, University of Minnesota researchers said they’ve created a process to 3D print flexible sensors. Something like this may act as a kind of “skin” for future robots.
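The combination Bergbreiter describes, pneumatic actuation plus flexible sensing, boils down to a feedback loop: read the sensor, compare against the desired shape, adjust compartment pressure. Here is a minimal sketch of that loop in Python; the function names and the linear sensor model are purely illustrative assumptions, not any real Pneubotics or University of Minnesota API.

```python
# Hypothetical sketch: proportional feedback for one pneumatic
# compartment of a soft actuator. Assumes a printed flex sensor whose
# reading scales linearly with pressure (an illustrative model only).

def read_flex_sensor(pressure):
    # Stand-in for a flexible "skin" sensor: bend angle tracks pressure.
    return 0.8 * pressure

def control_step(pressure, target_bend, gain=0.5):
    """Nudge compartment pressure toward a desired bend angle."""
    error = target_bend - read_flex_sensor(pressure)
    return pressure + gain * error  # new pressure command

pressure = 0.0
for _ in range(50):
    pressure = control_step(pressure, target_bend=30.0)

print(round(read_flex_sensor(pressure), 1))  # converges toward 30.0
```

Without the sensor there is no error signal to correct against, which is why soft robots that lack sensing are hard to control precisely.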
 
Sensors will allow soft robots with their expanded capabilities to take on the precision of rigid robots, bringing the best of these two robotics worlds together for completely new applications.
 
4. Connect them
 
When we think of robots putting together cars or zooming around a warehouse to find a product, we often assume each individual robot is “smart.” That doesn’t have to be the case, though.
 
Robots can now network and interact with the cloud, eliminating the need for individual robots to be smart. The computation for the 45,000 robots Amazon uses in their warehouses happens in a central system, meaning not all 45,000 bots need to house all that computation inside their own “heads”—they just need to be able to coordinate with the system.
 
Especially for large-scale operations like this, it’s cheaper and more efficient to have ‘dumb’ robots taking instructions from a single, centralized piece of software than to equip every robot with more advanced software and hardware of its own.
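The "dumb robots, smart coordinator" pattern described above can be sketched in a few lines: all task state and routing logic live in one central system, and each robot simply asks for its next instruction. This is a hypothetical illustration of the architecture, not Amazon's actual warehouse software.

```python
# Hypothetical sketch: a central coordinator owns the task queue,
# while individual robots carry no planning logic of their own.
from collections import deque

class Coordinator:
    """Central software that holds all tasks and assignments."""
    def __init__(self, tasks):
        self.tasks = deque(tasks)
        self.assignments = {}

    def request_task(self, robot_id):
        # A robot polls for work; all decision-making happens here.
        if self.tasks:
            task = self.tasks.popleft()
            self.assignments[robot_id] = task
            return task
        return None  # nothing left to do

coordinator = Coordinator(["fetch shelf A3", "fetch shelf B7", "charge"])
for robot_id in ("bot-1", "bot-2", "bot-3"):
    print(robot_id, "->", coordinator.request_task(robot_id))
```

Because the robots only execute single instructions, adding a new robot to the fleet requires no change to the robot itself, only another client polling the coordinator.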
 
We are moving towards a manufacturing environment where robots will both work closely with humans and be able to do things in less-structured environments without human intervention.
 
As Bergbreiter said in closing, “It’s a fascinating time for robots.”