Google’s DeepMind is using neural nets to explore dopamine’s role in learning

Deep learning algorithms can outperform humans in a number of areas, from classifying images and reading lips to making predictions about the future. But despite their superhuman levels of proficiency, they are at a distinct disadvantage in how quickly they learn: some of the best machine learning algorithms take hundreds of hours to master classic video games that the average person can pick up in an afternoon.
It might have something to do with the neurotransmitter dopamine, according to a paper published by Alphabet subsidiary DeepMind in the journal Nature Neuroscience.
Meta-learning, the process of learning how to learn by quickly extracting rules from past examples and applying them to new tasks, is thought to be one of the reasons humans attain new knowledge more efficiently than their computer counterparts. But the underlying mechanisms of meta-learning remain poorly understood.
In an attempt to shed light on the process, researchers at DeepMind in London modeled human physiology using a recurrent neural network, a type of neural network that can internalize past actions and observations and draw on those experiences during training. The network's reward prediction error, the gap between the reward the algorithm expects and the reward it actually receives, which drives its trial-and-error optimization, stood in for dopamine, the brain chemical that affects emotions, movement, and sensations of pain and pleasure, and is thought to play a key role in learning.
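To make the analogy concrete, here is a toy sketch (not DeepMind's actual code) of how a reward prediction error drives learning: the error is simply the gap between the reward received and the reward expected, and repeatedly applying it pulls the expectation toward reality, just as dopamine signaling is thought to do.

```python
def update_value(value, reward, learning_rate=0.1):
    """One trial of simple reward learning driven by prediction error."""
    rpe = reward - value          # dopamine-like prediction error signal
    value += learning_rate * rpe  # expectation shifts toward reality
    return value, rpe

# Repeated rewards shrink the prediction error as the estimate converges.
value = 0.0
for _ in range(50):
    value, rpe = update_value(value, reward=1.0)

print(round(value, 3))  # close to 1.0 after 50 rewarded trials
```

In the paper's setup, a signal of this kind trains the recurrent network itself; the surprise is that, once trained, the network's recurrent activity can keep solving new tasks even without further weight updates.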
The researchers set the system loose on six neuroscientific meta-learning experiments, comparing its performance to that of the animals that had been subjected to the same tests. One of the tests, known as the Harlow Experiment, tasked the algorithm with choosing between two randomly selected images, one of which was associated with a reward. In the original experiment, the subjects (a group of monkeys) quickly learned a strategy for picking objects: choosing randomly on the first trial, then choosing the reward-associated object on every trial thereafter.
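The strategy the monkeys discovered can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: it assumes a two-option "left/right" task in which one option is secretly rewarded, and an agent that guesses once and then sticks with whatever the first outcome revealed.

```python
import random

def run_problem(n_trials=6, rng=random):
    """One Harlow-style problem: guess on trial 1, exploit thereafter."""
    rewarded = rng.choice(["left", "right"])  # hidden reward assignment
    known_good = None
    correct = []
    for _ in range(n_trials):
        # Pick randomly until the rewarded side is known, then stick with it.
        choice = known_good if known_good else rng.choice(["left", "right"])
        # Either outcome of the first trial reveals the rewarded side.
        if choice == rewarded:
            known_good = choice
        else:
            known_good = "left" if choice == "right" else "right"
        correct.append(choice == rewarded)
    return correct

# From trial 2 onward the strategy is always correct,
# regardless of which image happens to be rewarded.
print(all(run_problem()[1:]))  # True
```

The key point is that this rule works on brand-new image pairs: nothing about a specific image is memorized, only the abstract structure of the task.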
The algorithm performed much like the animal subjects, making reward-associated choices from new images it hadn’t seen before. Moreover, the researchers noted, the learning took place in the recurrent neural network, supporting the theory that dopamine plays a key role in meta-learning.
The AI system behaved the same even when the weights, the connection strengths between neural network nodes, akin to the amount of influence one firing neuron in the brain has on another, were frozen. In animals, dopamine is believed to reinforce behaviors by strengthening synaptic links in the prefrontal cortex. But the consistency of the neural network's behavior suggests that dopamine also conveys and encodes information about tasks and rule structures, according to the researchers.
“Neuroscientists have long observed similar patterns of neural activations in the prefrontal cortex, which is quick to adapt and flexible, but have struggled to find an adequate explanation for why that’s the case,” the DeepMind team wrote in a blog post. “The idea that the prefrontal cortex isn’t relying on slow synaptic weight changes to learn rule structures, but is using abstract model-based information directly encoded in dopamine, offers a more satisfactory reason for its versatility.”
Above: DeepMind’s neural network shifts its gaze toward the reward-associated image.
The idea that AI systems mimic human biology isn’t new, of course. A study conducted by researchers at Radboud University in the Netherlands found that recurrent neural networks can predict how the human brain processes sensory information, particularly visual stimuli. But for the most part, those discoveries have informed machine learning rather than neuroscientific research.
Last year, for example, DeepMind built a partial anatomical model of the human brain with complementary algorithms: a neural network that mimicked the behavior of the prefrontal cortex and a “memory” network that played the part of the hippocampus. The result was an AI machine that significantly outperformed most neural nets. More recently, DeepMind turned its attention to so-called rational machinery, producing synthetic neural networks capable of applying humanlike reasoning skills and logic to problem-solving.
The dopamine study, the paper’s authors wrote, shows that neuroscience has as much to gain from neural network research as computer science does.
“Leveraging insights from AI which can be applied to explain findings in neuroscience and psychology highlights the value each field can offer the other,” the DeepMind team wrote. “Going forward, we anticipate that much benefit can be gained in the reverse direction, by taking guidance from specific organization of brain circuits in designing new models for learning in reinforcement learning agents.”