Google advances AI with ‘one model to learn them all’

Google has published a research paper called “One Model to Learn Them All” that lays out a template for how to create a single machine learning model that can address multiple tasks well. The MultiModel, as the Google researchers call it, was trained on a variety of tasks, including translation, language parsing, speech recognition, image recognition, and object detection.
 
While its results don’t show radical improvements over existing approaches, they illustrate that training a machine learning system on a variety of tasks could help boost its overall performance.
 
For example, the MultiModel’s accuracy on machine translation, speech recognition, and parsing improved when it was trained on all of its tasks at once, compared with being trained on any one task alone.
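As a rough illustration of why joint training can help (this is not Google’s implementation, and every name and number below is invented), here is a minimal NumPy sketch: two synthetic regression tasks share a common parameter vector, and alternating updates let the data-poor task benefit from structure learned on the data-rich one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic regression tasks whose true weights share a common
# direction. "low_resource" has far less data, mimicking the setting in
# which the paper reports the biggest gains. All of this is illustrative.
def make_task(n_examples, true_weights):
    X = rng.normal(size=(n_examples, 8))
    y = X @ true_weights + 0.1 * rng.normal(size=n_examples)
    return X, y

shared_direction = rng.normal(size=8)
tasks = {
    "high_resource": make_task(1000, shared_direction + 0.1 * rng.normal(size=8)),
    "low_resource": make_task(20, shared_direction + 0.1 * rng.normal(size=8)),
}

W = np.zeros(8)                         # shared "body" parameters
heads = {name: 1.0 for name in tasks}   # one scalar "head" per task
lr = 0.01

for step in range(2000):
    # Alternate between tasks: every update to the shared body is driven by
    # one task's data, which is the essence of joint multi-task training.
    name = list(tasks)[step % len(tasks)]
    X, y = tasks[name]
    idx = rng.integers(0, len(y), size=min(16, len(y)))
    Xb, yb = X[idx], y[idx]

    h = Xb @ W
    err = heads[name] * h - yb                       # prediction error
    W -= lr * heads[name] * (Xb.T @ err) / len(yb)   # update shared body
    heads[name] -= lr * np.mean(err * h)             # update task head

for name, (X, y) in tasks.items():
    mse = np.mean((heads[name] * (X @ W) - y) ** 2)
    print(f"{name}: mse = {mse:.4f}")
```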
 
Google’s paper could provide a template for the development of future machine learning systems that are more broadly applicable, and potentially more accurate, than the narrow solutions that populate much of the market today. What’s more, these techniques (or those they spawn) could help reduce the amount of training data needed to create a viable machine learning algorithm.
 
That’s because the team’s results show that when the MultiModel is trained on all of its tasks, its accuracy improves on the tasks with less training data. That’s important, since it can be difficult to accumulate a sufficiently large training set in some domains.
 
However, Google doesn’t claim to have a master algorithm that can learn everything at once. As its name implies, the MultiModel network includes sub-systems that are tailor-made for different challenges, along with systems that direct input to those expert algorithms. What the research does show is that Google’s approach could be useful for building future systems that span multiple domains.
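The paper’s architecture pairs modality-specific sub-networks with mixture-of-experts layers that route representations to suitable experts. The toy NumPy sketch below captures only that routing shape; the linear maps, dimensions, and gate are illustrative stand-ins, not the paper’s actual components.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16  # width of the shared representation space (illustrative)

# "Modality nets": small input-specific front-ends that project raw text,
# image, or audio features into one common space. Plain linear maps here
# stand in for the paper's convolutional and attention sub-networks.
modality_nets = {
    "text":  rng.normal(size=(32, D)) * 0.1,
    "image": rng.normal(size=(64, D)) * 0.1,
    "audio": rng.normal(size=(48, D)) * 0.1,
}

# A bank of "expert" transforms plus a gate that decides how much each
# expert contributes to a given input: a toy mixture-of-experts router.
experts = [rng.normal(size=(D, D)) * 0.1 for _ in range(4)]
gate = rng.normal(size=(D, len(experts))) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(modality, raw_input):
    h = raw_input @ modality_nets[modality]   # into the shared space
    weights = softmax(h @ gate)               # route to experts
    return sum(w * (h @ E) for w, E in zip(weights, experts))

# Inputs from different modalities come out in the same representation.
print(forward("image", rng.normal(size=64)).shape)  # (16,)
print(forward("text", rng.normal(size=32)).shape)   # (16,)
```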
 
It’s also worth noting that there’s plenty more testing to be done. Google’s results haven’t been independently verified, and it’s hard to know how well this research generalizes to other fields. The Google Brain team has released the MultiModel code as part of the TensorFlow open source project, so others can experiment with it and find out.
 
Google also has some clear paths to improvement. The team pointed out that they didn’t spend much time optimizing some of the system’s fixed settings (known as “hyperparameters” in machine learning parlance), and more extensive tuning could improve its accuracy in the future.
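The paper doesn’t prescribe a tuning method, but a simple random search like the sketch below is one common way such fixed settings get explored; the search space and the scoring function here are invented for the example.

```python
import random

random.seed(0)

# A hypothetical search space; none of these values come from the paper.
SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "dropout": [0.0, 0.1, 0.2, 0.3],
    "num_experts": [2, 4, 8],
}

def validation_score(config):
    """Stand-in for training with `config` and measuring held-out accuracy.
    Invented so the sketch runs end to end; a real run trains the model."""
    return (0.9
            - 50 * abs(config["learning_rate"] - 1e-3)
            - 0.1 * config["dropout"]
            + 0.01 * config["num_experts"])

# Random search: sample a handful of configurations and keep the best one.
best_score, best_config = float("-inf"), None
for _ in range(20):
    config = {name: random.choice(values) for name, values in SPACE.items()}
    score = validation_score(config)
    if score > best_score:
        best_score, best_config = score, config

print("best config:", best_config, f"(score {best_score:.3f})")
```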