DeepMind’s AI Can Now Recognize Something After Seeing It Only Once

DeepMind has developed a "one-shot learning" algorithm capable of recognizing objects in images, handwriting, and even language after seeing just one example. "Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches," the researchers claim.
Until now, machines typically required thousands of hand-labeled examples from databases like ImageNet to become familiar with an object or a word. Gathering that data is time-consuming and expensive, and it makes such AI systems hard to scale. A driverless-car system, for instance, needs to study thousands of cars before it can recognize them, and it would be impractical for a robot to explore an unfamiliar home for countless hours before getting familiar with it.
Oriol Vinyals, a research scientist at Google DeepMind, the U.K.-based Alphabet subsidiary focused on artificial intelligence, added a new memory component to a deep-learning system: a large neural network trained to recognize things by adjusting the sensitivity of many layers of interconnected components, roughly analogous to the neurons in a brain.
The new software still needs to analyze several hundred categories of images, but after that it can learn to recognize new objects from just one picture. In effect, it learns which characteristics make an image unique. After seeing a single example, the algorithm recognized images of dogs with accuracy close to that of a conventional data-hungry system.
The system is human-like in another respect as well: the team found that one-shot learning becomes much easier if you explicitly train the network to do one-shot learning. In addition, non-parametric structures in a neural network (components that store examples directly rather than folding everything into fixed weights) make it easier for the network to remember and adapt to new training sets within the same tasks.
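The non-parametric idea described above can be illustrated with a toy sketch: instead of retraining weights for each new class, the classifier keeps a small "support set" of one labeled embedding per class and classifies a query by attending over it with cosine similarity. This is a simplified, hypothetical NumPy illustration of the matching idea only; in the actual system the embedding itself is a deep network trained end-to-end, which this sketch replaces with fixed random vectors.

```python
import numpy as np

def cosine_similarity(query, support):
    # Cosine similarity between one query vector and each row of the support matrix.
    return (support @ query) / (
        np.linalg.norm(support, axis=1) * np.linalg.norm(query) + 1e-8
    )

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def one_shot_predict(query_emb, support_embs, support_labels, n_classes):
    """Classify a query by attending over a one-example-per-class support set."""
    attention = softmax(cosine_similarity(query_emb, support_embs))
    # Weighted vote: spread each support example's attention onto its class label.
    one_hot = np.eye(n_classes)[support_labels]
    class_probs = attention @ one_hot
    return int(np.argmax(class_probs))

# Toy demo: three "classes", each represented by a single 4-D embedding.
rng = np.random.default_rng(0)
support = rng.normal(size=(3, 4))                # one embedding per class
labels = np.array([0, 1, 2])
query = support[1] + 0.05 * rng.normal(size=4)   # a noisy example near class 1
print(one_shot_predict(query, support, labels, n_classes=3))  # → 1
```

Because the support set is just stored data, adding a brand-new class is a matter of appending one more row and label, with no gradient updates required, which is what lets this style of model absorb a new category from a single example.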
The work could be especially useful for quickly picking up the meaning of a new word. That matters for Google, Vinyals says, since it could allow a system to rapidly learn the meaning of a new search term.
"We feel this is an area with exciting challenges which we hope to keep improving in future work," concluded the researchers in their paper.