Google researchers create AI that maps the brain's neurons

Mapping the structure of biological networks in the nervous system is computationally intensive. The human brain contains around 86 billion neurons networked through 100 trillion synapses, and imaging a single cubic millimeter of tissue can generate more than 1,000 terabytes of data. Luckily, artificial intelligence can help.
 
In a paper ("High-Precision Automated Reconstruction of Neurons with Flood-Filling Networks") published in the journal Nature Methods, scientists at Google and the Max Planck Institute of Neurobiology demonstrated a recurrent neural network, a type of machine learning algorithm often used in handwriting and speech recognition, tailor-made for connectomics analysis.
 
Google researchers aren’t the first to apply machine learning to connectomics; in March, Intel partnered with the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to develop a “next-gen” brain image processing pipeline. But the Google team claims its model improves accuracy by “an order of magnitude” over previous deep learning techniques.
 
The researchers employed an edge detector algorithm that identified the boundaries of neurites (outgrowths from the body of a neuron), as well as a recurrent convolutional neural network, a subcategory of recurrent neural network, that grouped together and highlighted the pixels in the scans belonging to each neuron.
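In broad strokes, the flood-filling approach resembles region growing: start from a seed point, let the network predict which nearby pixels belong to the same neuron, and move the field of view outward along confident predictions. The sketch below illustrates that loop; `predict_mask` is a hypothetical stand-in for the trained recurrent convolutional network, and the loop is a simplification of the method described in the paper, not Google's released code.

```python
import numpy as np

def neighbors(pos, shape):
    """6-connected 3D neighborhood, clipped to the volume bounds."""
    z, y, x = pos
    for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
        nz, ny, nx = z + dz, y + dy, x + dx
        if 0 <= nz < shape[0] and 0 <= ny < shape[1] and 0 <= nx < shape[2]:
            yield (nz, ny, nx)

def flood_fill_segment(image, seed, predict_mask, threshold=0.9, max_steps=1000):
    """Illustrative flood-filling loop for one neuron (a simplified sketch).

    `predict_mask(image, mask, pos)` stands in for the recurrent network:
    given the image and the current object-probability map, it returns an
    updated probability map refined around `pos`. The real model is trained,
    stateful, and far more involved.
    """
    mask = np.zeros(image.shape, dtype=np.float32)  # object-probability map
    mask[seed] = 0.95                               # seed the object
    frontier = [seed]                               # field-of-view positions to visit
    visited = set()

    for _ in range(max_steps):
        if not frontier:
            break
        pos = frontier.pop()
        if pos in visited:
            continue
        visited.add(pos)

        # The network refines the mask in a field of view around `pos`.
        mask = predict_mask(image, mask, pos)

        # Move the field of view to confidently segmented neighbors,
        # extending the traced neurite one step at a time.
        for nb in neighbors(pos, image.shape):
            if mask[nb] > threshold and nb not in visited:
                frontier.append(nb)

    return mask > threshold  # final binary segment for this neuron
```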
 
To measure accuracy, the team developed “expected run length” (ERL), a metric that, given a random point within a random neuron in a 3D image of a brain, measures how far the algorithm can trace that neuron before making a mistake. In a scan of 1 million cubic microns of zebra finch brain tissue, the model performed “much better” than previous algorithms, the team reported.
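In simplified form, ERL can be read as a length-weighted average: sample a starting point uniformly along all ground-truth neuron skeletons (so longer neurons are sampled proportionally more often) and average how far tracing proceeds before the first error. The sketch below, with hypothetical inputs, shows that calculation; the paper's actual metric is computed over skeletonized ground truth and is more involved.

```python
def expected_run_length(skeleton_lengths, error_free_runs):
    """Simplified ERL: length-weighted average of error-free run lengths.

    skeleton_lengths[i]: total path length of ground-truth neuron i (microns).
    error_free_runs[i]:  how far the reconstruction traces neuron i before
                         its first merge/split error (microns). Both inputs
                         are assumed given; deriving them from skeletonized
                         ground truth is the harder part in practice.
    """
    total = sum(skeleton_lengths)
    # Sampling a starting point uniformly over all skeletons weights each
    # neuron by its length, hence the length-weighted mean.
    return sum(l / total * r for l, r in zip(skeleton_lengths, error_free_runs))

# Example: three neurons; the longest neuron dominates the expectation.
print(expected_run_length([100.0, 50.0, 10.0], [80.0, 50.0, 5.0]))  # ~65.9
```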
 
“By combining these automated results with a small amount of additional human effort required to fix the remaining errors, [researchers] at the Max Planck Institute are now able to study the songbird connectome to derive new insights into how zebra finch birds sing their song and test theories related to how they learn their song,” Viren Jain and Michal Januszewski, Google researchers and lead authors on the paper, wrote in a blog post.
 
In addition to the paper, the team published the model’s TensorFlow code on GitHub, along with the WebGL-based 3D software they used to visualize the dataset and improve the reconstruction results. They plan to refine the system, with the aim of fully automating synapse-resolution connectomics and “contributing to projects at the Max Planck Institute and elsewhere.”