Google open-sources mobile-first computer vision models for TensorFlow

Google is helping smartphones better recognize images without requiring massive power consumption. Called MobileNets, the family of pre-trained image recognition models lets developers pick the trade-off between size and accuracy that best suits their application's needs.
 
Right now, much of the machine learning inside mobile apps works by shipping data off to cloud services for processing and returning the resulting insights to users over the network. That approach makes it possible to use powerful data center hardware and relieve the smartphone of the processing burden. The drawback is that both latency and privacy suffer.
 
By processing data on a user’s smartphone, it’s possible to return results far faster, and the data never has to leave the phone. However, optimizing a machine learning model for mobile use is a tall order: a model that drains the battery with computationally intensive operations is a non-starter.
 
That’s where MobileNets come in: Google has handled the optimization ahead of time, so developers just need to implement a model in their application. The models range from one that uses 569 million multiply-add operations per image to one that uses just 14 million.
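As a rough illustration of what implementing one of the models looks like, here is a minimal Python sketch that loads a frozen MobileNet graph with stock TensorFlow (1.x-era API) and classifies a single image. The file name and tensor names (`input:0`, `MobilenetV1/Predictions/Reshape_1:0`) are assumptions based on the released checkpoints, not details stated in this article.

```python
import numpy as np
import tensorflow as tf

# Assumed file name for one of the released frozen MobileNet graphs;
# swap in whichever size/accuracy variant your application needs.
MODEL_PATH = "mobilenet_v1_1.0_224_frozen.pb"

# Load the frozen GraphDef and import it into a fresh graph.
with tf.gfile.GFile(MODEL_PATH, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Tensor names are assumptions based on the MobileNet v1 release.
input_tensor = graph.get_tensor_by_name("input:0")
output_tensor = graph.get_tensor_by_name("MobilenetV1/Predictions/Reshape_1:0")

# Stand-in for a real preprocessed photo: 224x224 RGB, scaled to [-1, 1].
image = np.random.uniform(-1.0, 1.0, size=(1, 224, 224, 3)).astype(np.float32)

with tf.Session(graph=graph) as sess:
    predictions = sess.run(output_tensor, feed_dict={input_tensor: image})

print("Top class index:", predictions[0].argmax())
```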
 
The more operations a MobileNet model uses, the higher its accuracy, in exchange for a heavier load on the device’s resources.
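The arithmetic behind that 569-million-to-14-million range isn’t spelled out here, but the MobileNets paper parameterizes the family with a width multiplier and an input resolution, with computational cost shrinking roughly with the square of each. A back-of-the-envelope check in Python (the formula and figures are from the paper and release, not this article):

```python
# Rough cost model from the MobileNets paper: multiply-adds scale
# approximately with the square of the width multiplier (alpha) and
# the square of the input resolution.
BASE_MACS = 569e6              # largest model: alpha = 1.0, 224x224 input
alpha, resolution = 0.25, 128  # smallest released variant

estimated_macs = BASE_MACS * alpha**2 * (resolution / 224) ** 2
print(f"~{estimated_macs / 1e6:.0f} million multiply-adds")
# Prints ~12 million, in the same ballpark as the reported 14 million;
# the gap comes from layers whose cost scales only linearly with alpha.
```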
 
It’s a move by Google to capitalize on the trend toward doing more machine learning processing locally. The news comes a month after the company revealed TensorFlow Lite, its framework for running TensorFlow models more efficiently on low-power Android devices.
 
Developers can deploy the models now using TensorFlow Mobile, a system designed to run TensorFlow models on Android, iOS, and Raspberry Pi.