How computationally complex is a neuron?


Our squishy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: “We are not interested in the fact that the brain has the consistency of cold porridge.” In other words, the medium doesn’t matter; only the computational capacity does.

Today, the most powerful artificial intelligence systems use a type of machine learning called deep learning. Their algorithms learn by processing huge amounts of data through hidden layers of interconnected nodes known as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with nodes modeled on real neurons — or at least on what neuroscientists knew about neurons in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has expanded dramatically, and biological neurons are now known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to match the complexity of a single biological neuron.

Even the authors did not anticipate such complexity. “I thought it would be simpler and smaller,” Beniaguev said. He expected that three or four layers would be enough to capture the computations performed within the cell.

Timothy Lillicrap, who designs decision-making algorithms at DeepMind, Google’s artificial intelligence company, said the new result suggests that the old tradition of comparing a neuron in the brain to a neuron in the context of machine learning may need to be rethought. “This paper really helps force us to think about this more carefully and grapple with the extent to which you can make those analogies,” he said.

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long, treelike branches, called dendrites, and the neuron’s decision to send out a signal.
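The “simple calculation” of a classic artificial neuron can be sketched as a weighted sum of inputs passed through a nonlinear activation function. This is an illustrative sketch of the standard textbook model, not code from the study:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A classic artificial neuron: form a weighted sum of the
    inputs, then pass it through a nonlinear activation
    (here, a sigmoid) to decide how strongly to 'fire'."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # close to 1 when z >> 0

# Example: a neuron weighing two incoming signals.
output = artificial_neuron([0.5, 1.2], [0.8, -0.4], bias=0.1)
print(round(output, 3))
```

A biological neuron’s input-output function, by contrast, depends on nonlinear interactions across its dendritic tree, which is exactly what the study’s deep network needed several layers to capture.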

This input-output function is what the authors of the new work trained an artificial deep neural network to mimic in order to determine its complexity. They began by creating a massive simulation of the input-output function of a pyramidal neuron, a neuron type with distinct dendritic trees at its top and bottom, taken from the cortex of a rat. They then fed the simulation’s data to a deep neural network with up to 256 artificial neurons in each layer, and kept increasing the number of layers until the network could predict, at millisecond resolution, the mapping between the simulated neuron’s inputs and outputs with 99 percent accuracy. The deep neural network succeeded with at least five — but no more than eight — layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
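The search procedure described above — keep adding layers until the network hits the accuracy target — can be sketched as a simple loop. The `accuracy_for_depth` function below is a hypothetical stand-in for the expensive step of training a network of that depth and evaluating it; its toy numbers are not the study’s results:

```python
def accuracy_for_depth(depth):
    """Hypothetical placeholder for training a deep network with
    `depth` layers on the simulated neuron's input-output data and
    measuring its prediction accuracy. Here, a toy curve in which
    accuracy saturates as depth grows."""
    return 1.0 - 0.5 ** depth

def minimal_depth(target=0.99, max_depth=10):
    """Increase the number of layers until the target accuracy
    (99 percent in the study) is reached."""
    for depth in range(1, max_depth + 1):
        if accuracy_for_depth(depth) >= target:
            return depth
    return None  # target never reached within max_depth

print(minimal_depth())
```

In the actual study, of course, each step of this loop meant training a full deep network (with up to 256 units per layer) rather than evaluating a formula.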

Neuroscientists now know that the computational complexity of a neuron, like the pyramidal neuron at left, relies on the dendritic tree branches that are bombarded with incoming signals. These signals cause local changes in voltage, represented by the neuron’s changing colors (red means high voltage, blue means low voltage), before the neuron decides whether to send its own signal, called a “spike.” This neuron spikes three times, as shown by the traces of individual branches at right, where the colors represent the locations of the dendrites from top (red) to bottom (blue).

Video: David Beniaguev

“[The result] forms a bridge from biological neurons to artificial neurons,” said Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.

But the study’s authors caution that it is not yet a direct correspondence. “The relationship between how many layers you have in a neural network and the complexity of the network is not obvious,” London said. So we can’t really say how much more complexity is gained by going from, say, four layers to five. Nor can we say that the need for 1,000 artificial neurons means a biological neuron is exactly 1,000 times more complex. Ultimately, it’s possible that using exponentially more artificial neurons in each layer would eventually yield a deep neural network with a single layer — but it would likely require far more data and time for the algorithm to learn.
