Collaborative learning for artificial intelligence and robots

Researchers from MIT’s Laboratory for Information and Decision Systems have developed an algorithm in which distributed agents, such as robots exploring a building, collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.
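The article stops at that high-level description, so the following is only a rough sketch of the general pattern, not the team's actual algorithm: each agent summarizes its own observations with simple running statistics (count, sum, sum of squares), and two agents that meet merge their summaries so both leave with the combined estimate. All class and function names here are illustrative assumptions.

```python
import numpy as np

class Agent:
    """Illustrative agent: summarizes its own observations with simple
    sufficient statistics (count, sum, sum of squares)."""

    def __init__(self, name):
        self.name = name
        self.n = 0       # number of observations seen
        self.s = 0.0     # running sum
        self.ss = 0.0    # running sum of squares

    def observe(self, x):
        # Analyze local data independently: fold each reading into the summary.
        self.n += 1
        self.s += x
        self.ss += x * x

    def estimate(self):
        # Mean and variance implied by the current summary.
        mean = self.s / self.n
        return mean, self.ss / self.n - mean ** 2


def exchange(a, b):
    """When two agents meet, merge their summaries so both carry the
    combined statistics afterwards."""
    n, s, ss = a.n + b.n, a.s + b.s, a.ss + b.ss
    a.n, a.s, a.ss = n, s, ss
    b.n, b.s, b.ss = n, s, ss


# Two robots each observe a different slice of the same environment.
rng = np.random.default_rng(0)
r1, r2 = Agent("robot-1"), Agent("robot-2")
for x in rng.normal(5.0, 2.0, size=100):
    r1.observe(x)
for x in rng.normal(5.0, 2.0, size=100):
    r2.observe(x)

exchange(r1, r2)                      # the robots pass each other in the hall
print(r1.estimate(), r2.estimate())   # both now reflect all 200 readings
```

One wrinkle this toy version ignores is double counting: if the same two robots meet twice, naively summing their summaries again would count shared data more than once, which is part of what makes a principled algorithm for this setting nontrivial.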
 
In experiments involving several different data sets, reported in a paper posted to arXiv, the researchers’ distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location.

Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It’s also the technique that autonomous robots typically use to build models of their environments.
 
That type of model-building gets complicated, however, in cases in which clusters of robots work as teams. The robots may have gathered information that, collectively, would produce a good model but which, individually, is almost useless. If constraints on power, communication, or computation mean that the robots can’t pool their data at one location, how can they collectively build a model?
 
The researchers will present the new algorithm at the Uncertainty in Artificial Intelligence conference, July 23 to 27. “A single computer has a very difficult optimization problem to solve in order to learn a model from a single giant batch of data, and it can get stuck at bad solutions,” says Trevor Campbell, a graduate student in aeronautics and astronautics at MIT.
 
He wrote the new paper with his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics. “If smaller chunks of data are first processed by individual robots and then combined, the final model is less likely to get stuck at a bad solution.”
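The paper’s specific combination step isn’t described in the article, but the “fit locally, then combine” pattern How describes can be sketched as follows, assuming a simple clustering task: each robot runs k-means on its own chunk, and only the resulting cluster centers, far smaller than the raw data, are pooled and clustered again to form the shared model. The data, parameter choices, and helper names are hypothetical.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns the k cluster centers."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

# Three robots each hold a different chunk of the observations.
rng = np.random.default_rng(1)
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
chunks = [true_centers[rng.integers(3, size=200)] + rng.normal(0, 0.5, (200, 2))
          for _ in range(3)]

# Step 1: each robot fits a small model to its own chunk.
local_centers = [kmeans(chunk, k=3, seed=i) for i, chunk in enumerate(chunks)]

# Step 2: the local summaries are combined by clustering the pooled centers.
combined = kmeans(np.vstack(local_centers), k=3, seed=42)
print(combined)
```

Each local fit is a smaller, easier optimization, and the combination step operates on compact summaries rather than the full data set, which is the intuition behind How’s remark about avoiding bad solutions.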