[Image: Illustration of brain neurons]

Advances in artificial intelligence research have often been fostered by advances in neuroscience. Indeed, the two fields have frequently borrowed ideas from each other and there remain many fruitful opportunities for doing so in the future.

In a recent review paper published in Science, Stanford University biology and neurobiology professor Liqun Luo summarizes our current understanding of neural circuits in the brain and how they fit together into the brain’s architecture. The review also suggests additional opportunities for artificial intelligence to learn from neuroscience.  

“I wanted to set out what’s known and what’s unknown, to stimulate both neuroscience and AI researchers,” he says.

Read the review: Architectures of Neuronal Circuits

Luo’s message to AI researchers is this: Neuroscientists still have a long way to go to understand the brain’s various circuit motifs and architectures and how they interact with one another. But the groundwork has been laid for AI researchers to consider using a greater variety of motifs and architectures than they currently do, and perhaps even to connect multiple circuit architectures together to create the kinds of synergies we see in the brain.

From Neurons, to Circuit Motifs, to Architectures

Luo likens the brain’s structure to the building blocks of language. If individual neurons are letters, then circuit motifs are the words they spell, and circuit architectures are the sentences created by a series of words. At each level, Luo says, AI researchers stand to benefit from a better understanding of how the various parts of the brain connect and communicate with each other. 

Synaptic connectivity patterns, the ways that neurons connect to one another, spell out the first level of generalized information-processing principles in the brain: the circuit motifs. These include some of the most fundamental kinds of neural circuitry, such as feed-forward excitation, which was built into the earliest artificial neural networks, the perceptrons, and remains central to deep neural nets.
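
To make the motif concrete, here is a minimal sketch in Python (using NumPy) of a feed-forward excitatory layer of the kind a perceptron formalizes. The rectified-linear activation, the negative bias standing in for a firing threshold, and the random weights are illustrative assumptions, not details from the review.

```python
import numpy as np

def feedforward_excitation(x, weights, bias):
    # Clip weights at zero to model purely excitatory synapses:
    # presynaptic activity can only increase downstream drive.
    drive = np.maximum(weights, 0.0) @ x + bias
    # Rectified-linear output: a unit is active only above threshold.
    return np.maximum(drive, 0.0)

rng = np.random.default_rng(0)
x = rng.random(8)        # presynaptic activity (8 input neurons)
W = rng.random((4, 8))   # excitatory weights onto 4 downstream neurons
print(feedforward_excitation(x, W, bias=-1.0))
```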

But Luo describes other motifs as well, including feedback inhibition, lateral inhibition, and mutual inhibition. Although these motifs might arise on their own in AI systems that use unsupervised learning, where weights are adjusted during the learning process, Luo wonders whether deliberately incorporating them into an AI system’s architecture could further improve its performance.
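
As one example, the sketch below implements lateral inhibition, in which each unit suppresses its neighbors so that strongly driven units stand out. The ring layout (ends wrap around) and the inhibition strength are illustrative assumptions.

```python
import numpy as np

def lateral_inhibition(rates, strength=0.4):
    # Treat the units as a ring; each unit suppresses its two neighbors.
    neighbors = np.roll(rates, 1) + np.roll(rates, -1)
    # Firing rates cannot go negative, so rectify the result.
    return np.maximum(rates - strength * neighbors, 0.0)

signal = np.array([0.2, 0.3, 1.0, 0.4, 0.2])
print(lateral_inhibition(signal))  # the peak at index 2 now stands out alone
```

Running this yields [0, 0, 0.72, 0, 0]: the weakly driven units are silenced and only the strongest survives, the contrast-sharpening effect the motif is known for.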

At one level above circuit motifs, Luo says, are the “sentences” these motifs create when organized together into specific brain architectures. For example, continuous topographic mapping is an architecture in which nearby units in one layer of the brain are connected to nearby units in the next layer. This approach has been incorporated into AI systems that use convolutional neural nets. Similarly, parallel processing is a type of neural circuit architecture that has been widely adopted in computing generally as well as in a variety of AI systems. 
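
The locality behind topographic mapping is easy to see in code. In this sketch (the one-dimensional layer and hand-picked kernel are illustrative assumptions), each output unit pools only a small neighborhood of nearby input units, which is exactly the operation a convolutional layer performs.

```python
import numpy as np

def topographic_layer(activity, kernel):
    # Each output unit pools a small neighborhood of nearby input
    # units, preserving the spatial order of the input layer.
    return np.convolve(activity, kernel, mode="valid")

activity = np.array([0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0])
kernel = np.array([0.25, 0.5, 0.25])  # local, overlapping receptive field
print(topographic_layer(activity, kernel))
```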

An additional important circuit architecture is dimensionality expansion, in which inputs from a layer with a small number of units are connected to an intermediate layer with many more units, so that subtle differences in the input layer become more apparent in the intermediate layer and easier for the output layer to distinguish. Also important are recurrent networks, in which neurons connect back to themselves, often through intermediaries. The brain concatenates dimensionality expansion and recurrent processing in a highly structured manner across multiple regions, and understanding and exploiting the design principles governing these combinations of circuit motifs could help AI researchers build more capable systems.
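
A rough sketch of how these two architectures might be concatenated: a random projection stands in for the expansion and a scaled-down random matrix for the recurrence. Both choices, along with the layer sizes, are illustrative assumptions rather than a model from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 4, 64  # few inputs, many intermediate units

# Dimensionality expansion: projecting into a much larger layer
# makes subtly different inputs easier to separate downstream.
W_expand = rng.normal(size=(n_hidden, n_in))

# Recurrence: the expanded layer also feeds back onto itself
# (scaled down to keep the dynamics stable).
W_recur = 0.1 * rng.normal(size=(n_hidden, n_hidden))

def step(x, h):
    # One update: feed-forward drive plus recurrent feedback.
    return np.tanh(W_expand @ x + W_recur @ h)

x = rng.normal(size=n_in)  # a single input pattern
h = np.zeros(n_hidden)
for _ in range(5):         # let the recurrent dynamics settle
    h = step(x, h)
print(h[:5])
```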

In general, Luo says, “Using my language metaphor, I would say that AI researchers tend to use letters and jump directly to articles without writing the words and sentences in between.” In essence, he says, without knowing the intermediates, they still get things to work by using brute force and lots of computational power. Perhaps neuroscience can help AI researchers open that black box, Luo says.

Moving Forward: Assemble Multiple Architectures

AI researchers should broaden their approaches, Luo says. In the brain, a variety of architectures coexist and work together to generate general intelligence, whereas most AI systems rely on a single type of circuit architecture.

“Perhaps if AI researchers explore the variety of architectures that exist in the brain, they will be inspired to design new ways of putting multiple architectures together to build better systems than are possible with a single architecture alone,” he says. 
