Biggest Neural Network Ever Pushes AI Deep Learning

Researchers are trying to harness artificial intelligence by training brain-inspired neural networks to better represent the real world. Digital Reasoning, a cognitive computing company, announced that it has trained a neural network consisting of 160 billion parameters, more than 10 times larger than previous neural networks.
 
The Digital Reasoning neural network easily surpassed previous records held by Google’s 11.2-billion-parameter system and Lawrence Livermore National Laboratory’s 15-billion-parameter system. But it also showed improved accuracy over previous neural networks on an “industry-standard dataset” of 20,000 word analogies. Digital Reasoning’s model achieved an accuracy of almost 86 percent, significantly higher than Google’s previous record of just over 76 percent and Stanford University’s 75 percent.
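
Word-analogy benchmarks of this kind are typically scored with vector arithmetic over word embeddings (for example, king − man + woman ≈ queen). The sketch below shows that standard evaluation in minimal form; the `embeddings` dictionary and the scoring code are illustrative assumptions, not Digital Reasoning’s actual system.

```python
import numpy as np

def solve_analogy(embeddings, a, b, c):
    """Return the word d such that a : b :: c : d, via vector arithmetic.

    embeddings: hypothetical dict mapping word -> 1-D NumPy vector.
    Candidates are ranked by cosine similarity to (b - a + c).
    """
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target /= np.linalg.norm(target)

    best_word, best_score = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):  # exclude the query words themselves
            continue
        score = np.dot(vec, target) / np.linalg.norm(vec)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

def analogy_accuracy(embeddings, questions):
    """Fraction of (a, b, c, d) analogy questions answered correctly."""
    correct = sum(solve_analogy(embeddings, a, b, c) == d
                  for a, b, c, d in questions)
    return correct / len(questions)
```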
 
“We are extremely proud of the results we have achieved, and the contribution we are making daily to the field of deep learning,” said Matthew Russell, chief technology officer for Digital Reasoning, in a press release. “Deep learning” involves building learning machines from five or more layers of artificial neural networks. (“Deep” refers to the depth of the layers, rather than any depth of knowledge.) Yann LeCun, head of the Artificial Intelligence Research Lab at Facebook, has described the idea of deep learning as “machines that learn to represent the world.”
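
As a rough illustration of what “five or more layers” means in code, here is a minimal NumPy sketch of a feedforward network with five hidden layers. The layer sizes and the ReLU nonlinearity are illustrative assumptions, not details of Digital Reasoning’s model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes only: an input layer, five hidden layers, and an output layer.
layer_sizes = [300, 512, 512, 256, 256, 128, 10]

# Randomly initialized weights and biases for each layer (no training shown here).
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Forward pass: each hidden layer applies a ReLU nonlinearity."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)        # hidden layer with ReLU
    return h @ weights[-1] + biases[-1]       # linear output layer

logits = forward(rng.normal(size=300))        # run one example input vector
```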
 
Digital Reasoning’s neural network was trained overnight on three multi-core computers to achieve its accuracy on the word-analogy dataset. The company’s researchers plan to test the system on larger datasets and vocabularies in the near future. Their results so far have been detailed in a paper on the preprint server arXiv and in the Journal of Machine Learning.