US Bets $100 Million on Machines That Think Like Humans

In recent years, big science has become increasingly focused on the brain. Now the government is pushing forward what is perhaps the most high-risk, high-reward project of our time: brain-like artificial intelligence. And it’s betting $100 million that the effort will succeed within the next decade.
 
Led by IARPA and part of the larger BRAIN Initiative, the MICrONS (Machine Intelligence from Cortical Networks) project seeks to revolutionize machine learning by reverse-engineering the algorithms of the mammalian cortex.
 
It’s a hefty goal, but one that may sound familiar. The controversial European Union-led Human Brain Project (HBP), a nearly $2 billion investment, also sits at the crossroads between brain mapping, computational neuroscience and machine learning. But MICrONS is fundamentally different, both technically and logistically.
 
Rather than building a simulation of the human brain, as the HBP set out to do, MICrONS is working in reverse. By mapping the intricate connections that neurons form during visual learning and observing how they change over time, the project hopes to distill sensory computation into mathematical “neural codes” that can be fed into machines, giving them the power to identify, discriminate among, and generalize across visual stimuli.
 
The end goal: smarter machines that can process images and video at human-level proficiency. The bonus: better understanding of the human brain and new technological tools to take us further. It’s a solid strategy. And the project’s got an all-star team to see it through. Here’s what to expect from this “Apollo Project of the Brain.”
 
Much of today’s AI is inspired by the human brain. Take deep reinforcement learning, a strategy based on artificial neural networks that’s transformed AI in the last few years. This class of algorithm powers much of today’s technology: self-driving cars, Go-playing computers, automated voice and facial recognition, just to name a few.
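To see the basic machinery behind these systems, here is a minimal sketch in Python of the artificial-neural-network idea they build on: layers of weighted sums passed through nonlinearities. The layer sizes and weights below are arbitrary placeholders, not any production system:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers.
    return np.maximum(0.0, x)

# One hidden layer: 8 inputs -> 16 hidden units -> 4 outputs.
# Real deep networks stack many such layers and learn the weights
# by gradient descent; these are random placeholders.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

def forward(x):
    h = relu(W1 @ x + b1)   # hidden representation
    return W2 @ h + b2      # output scores (e.g., action values)

x = rng.normal(size=8)      # a stand-in "sensory" input
print(forward(x))
```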
 
Nature has also inspired new computing hardware, such as IBM’s neuromorphic SyNAPSE chip, which mimics the brain’s computing architecture and promises lightning-fast computing with minimal energy consumption.
 
Even a rough idea of how the brain works has given us powerful AI systems. MICrONS takes the logical next step: instead of guessing, let’s figure out how the brain actually works, find out what AI’s currently missing, and add it in.
 
MICrONS plans to dissect one cubic millimeter of mouse cortex at nanoscale resolution. And it’s recruited Drs. David Cox and Jeff Lichtman, both neurobiologists at Harvard University, to head the task.
 
Last July, Lichtman published the first complete three-dimensional reconstruction of a crumb-sized cube of mouse cortex. The effort covered just 1,500 cubic microns, roughly 600,000 times smaller than MICrONS’ goal.
 
It’s an incredibly difficult multi-step procedure. First, the team uses a diamond blade to slice the cortex into thousands of pieces. Then the pieces are rolled onto a strip of special plastic tape at a rate of 1,000 sections a day.
 
These ultrathin sections are then imaged with a scanning electron microscope, which can capture synapses in such fine detail that tiny vesicles containing neurotransmitters in the synapses are visible.
 
Mapping the tissue at this level of detail is like “creating a road map of the U.S. by measuring every inch,” MICrONS project manager Jacob Vogelstein told Scientific American. Lichtman’s original reconstruction took over six long years.
 
That said, the team is optimistic. According to Dr. Christof Koch, president of the Allen Institute for Brain Science, the technologies involved will speed up tremendously thanks to new tools developed under the BRAIN Initiative. Lichtman and Cox hope to make major headway in the next five years.
 
“We’re hoping to observe the activity of 100,000 neurons simultaneously while a mouse is learning a new visual task,” explained Cox. It’s like wiretapping the brain: the scientists will watch neural computations happen in real time as the animal learns.
 
To achieve the formidable task, Cox plans on using two-photon microscopy, which relies on fluorescent proteins that only glow in the presence of calcium. When a neuron fires, calcium rushes into the cell and activates those proteins, and their light can be observed with a laser-scanning microscope. This gives scientists a direct visual of neural network activation.
 
The technique’s been around for a while. But so far, it’s only been used to visualize tiny portions of neural networks. If Cox successfully adapts it for wide-scale imaging, it may well be revolutionary for functional brain mapping and connectomics.
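To make the signal-extraction step concrete, here is a minimal sketch in Python of how activity might be read out of such recordings: compute ΔF/F, the normalized change in fluorescence, and flag the calcium transients. The data, sizes, and threshold are synthetic stand-ins, not the team’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for two-photon data: fluorescence for 5 neurons
# over 1,000 frames -- baseline noise plus one decaying calcium transient.
F = 100 + rng.normal(0, 2, size=(5, 1000))
F[2, 400:420] += 60 * np.exp(-np.arange(20) / 8.0)

def delta_f_over_f(F, baseline_percentile=20):
    # Delta-F/F0: normalized fluorescence change, a common activity proxy.
    F0 = np.percentile(F, baseline_percentile, axis=1, keepdims=True)
    return (F - F0) / F0

dff = delta_f_over_f(F)
events = dff > 0.3           # crude threshold as a stand-in "firing" detector
print(events[2, 395:425].astype(int))
```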
 
Meanwhile, MICrONS project head Dr. Tai Sing Lee at Carnegie Mellon University is taking a faster, if untraveled, route to map the mouse connectome.
 
According to Scientific American, Lee plans to tag individual neurons with unique barcodes made of short chains of random nucleotides, the molecules that make up DNA. By chemically linking barcode pairs together across synapses, he hopes to reconstruct neural circuits quickly.
 
If it works, the process will be much faster than nanoscale microscopy and may give us a rough draft of the cortex (one cubic millimeter of it) within the decade.
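As a rough illustration of the logic, not Lee’s actual protocol, here is a sketch in Python of how sequenced barcode pairs might be assembled back into a connectivity graph; the reads and barcode strings are invented for the example:

```python
from collections import defaultdict

# Hypothetical sequencing output: each record is a pair of neuron barcodes
# recovered from one synapse (pre-synaptic barcode, post-synaptic barcode).
reads = [
    ("ACGT", "TTAG"),
    ("ACGT", "TTAG"),
    ("ACGT", "GGCA"),
    ("TTAG", "GGCA"),
]

# Rebuild a weighted connectivity graph: repeated pairs suggest a
# connection sampled more than once (or a stronger connection).
connectome = defaultdict(int)
for pre, post in reads:
    connectome[(pre, post)] += 1

for (pre, post), count in sorted(connectome.items()):
    print(f"{pre} -> {post}: {count} linked barcode pair(s)")
```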
 
A computer scientist and machine learning expert, Lee will likely play a major role in the next phase of the project: making sense of all the data and extracting information useful for developing new AI algorithms.
 
Going from neurobiological data to theories to computational models will be the really tough part. But according to Cox, there is one guiding principle that’s a good place to start: Bayesian inference.
 
During learning, the cortex actively integrates past experiences with present learning, building a constantly shifting representation of the world that allows us to predict incoming data and possible outcomes.
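A toy example makes the principle concrete: in Bayesian terms, the brain’s running model of the world acts as a prior that is multiplied by the likelihood of each new observation and renormalized into a posterior. The categories and numbers below are purely illustrative:

```python
# Prior: belief about each visual category before the next observation.
prior = {"cat": 0.5, "dog": 0.5}

# Likelihood: probability of the observed feature ("pointy ears")
# under each hypothesis.
likelihood = {"cat": 0.9, "dog": 0.3}

# Posterior is proportional to prior times likelihood, renormalized.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)  # {'cat': 0.75, 'dog': 0.25}
```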
 
It’s likely that whatever algorithms the teams distill will be Bayesian in nature. If they succeed, the next step is to thoroughly test the reverse-engineered models.
 
Vogelstein acknowledges that many current algorithms already rely on Bayesian principles. The crucial difference between what we have now and what we may get from mapping the brain is implementation.
 
There are millions of choices a programmer makes to translate Bayesian theory into executable code, says Vogelstein. Some will be good, others not so much. Instead of guessing at those parameters and features in software, as we have been doing, it makes sense to extract the settings from the brain and narrow the optimal implementations down to a smaller set that we can test.
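A small sketch shows what those implementation choices look like in practice: the code below applies the same Bayesian smoothing rule with two different prior strengths and gets noticeably different answers. The function, parameters, and values are hypothetical, chosen only to illustrate the point:

```python
import numpy as np

# Observed evidence for two classes, A and B.
counts = np.array([3, 1])

def posterior_mean(counts, pseudo=1.0):
    # Dirichlet-smoothed posterior mean; `pseudo` (the prior strength)
    # is one of the free design choices a programmer must make.
    return (counts + pseudo) / (counts + pseudo).sum()

print(posterior_mean(counts, pseudo=0.1))   # weak prior:   ~[0.74, 0.26]
print(posterior_mean(counts, pseudo=10.0))  # strong prior: ~[0.54, 0.46]
```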
 
With this data-driven, ground-up approach to modeling the brain, MICrONS hopes to succeed where the HBP stumbled.
 
“We think it’s a critical challenge,” says Vogelstein. If MICrONS succeeds, it may “achieve a quantum leap in machine learning.”