Tuesday, December 7, 2021

How complex is a single biological neuron?

For an audience well versed in machine learning and deep learning these days, the complexity of a single artificial neuron is familiar from building complex architectures. A single artificial neuron typically comprises a linear block and a non-linear activation. The linear block simply computes a weighted linear combination of its inputs, and the non-linear activation block computes the output using the chosen non-linear activation function. It could be a simple sigmoid, a tanh, a softmax, a ReLU, a leaky ReLU, or any other. It is often said that artificial neural networks were inspired by the brain. So just how complex is a typical biological neuron, which inspired us all, in comparison to an artificial neuron?
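To make the comparison concrete, here is a minimal sketch of the single artificial neuron described above: a weighted linear combination followed by a non-linearity. The weights, bias, and inputs below are made-up example values, not anything from the paper.

```python
import numpy as np

def artificial_neuron(x, w, b, activation):
    """One artificial neuron: a linear block followed by a non-linear activation."""
    z = np.dot(w, x) + b   # linear block: weighted combination of inputs plus a bias
    return activation(z)   # non-linear activation block

# Two common activation choices mentioned above
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(0.0, z)

# Illustrative inputs and parameters (arbitrary example values)
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
b = 0.2
out = artificial_neuron(x, w, b, sigmoid)
```

That is the entire computation of one unit in a standard deep network, which is what makes the result below so striking.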

Let us clarify the notion of “complexity”, at least as it was used in the work that David Beniaguev, Idan Segev, and Michael London at the Hebrew University of Jerusalem carried out. They trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They published their work as “Single cortical neurons as deep artificial neural networks” (ref https://www.sciencedirect.com/science/article/abs/pii/S0896627321005018).

They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of one single biological neuron.

The paper says that “This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.”

The authors also hope that their result will change the present state-of-the-art deep network architecture in AI. “We call for the replacement of the deep network technology to make it closer to how the brain works by replacing each simple unit in the deep network today with a unit that represents a neuron, which is already—on its own—deep,” said Segev. In this replacement scenario, AI researchers and engineers could plug in a five-layer deep network as a “mini network” to replace every artificial neuron.
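As a purely hypothetical sketch of that replacement idea, each simple unit could be swapped for a small multi-layer network of its own. The class below is illustrative only: the layer sizes, depth, and random initialization are my assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(0.0, z)

class DeepNeuron:
    """A hypothetical 'neuron that is itself deep': a small 5-layer MLP
    standing in for one simple unit. Sizes here are illustrative guesses,
    not the architecture used by Beniaguev et al."""
    def __init__(self, n_inputs, hidden=4, depth=5):
        sizes = [n_inputs] + [hidden] * (depth - 1) + [1]
        self.weights = [rng.standard_normal((m, n)) * 0.5
                        for n, m in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(m) for m in sizes[1:]]

    def __call__(self, x):
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(W @ x + b)        # hidden layers of the "mini network"
        W, b = self.weights[-1], self.biases[-1]
        return (W @ x + b)[0]          # one scalar output, like a single unit

# One layer of such deep neurons would replace a layer of simple units
neurons = [DeepNeuron(n_inputs=3) for _ in range(4)]
x = np.array([0.5, -1.0, 2.0])
layer_out = np.array([n(x) for n in neurons])
```

The point of the sketch is just the structural change: each scalar unit becomes a small network, so the overall model gains depth without changing its outward interface.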

This might also provide insight when comparing artificial architectures to real brains, especially on image classification tasks. If, say, a network of 100 artificial neurons turned out to be equivalent to just 20 biological neurons, then those 20 neurons would be all the brain needs to complete the same classification task!

So, I guess it is fair to claim that the brain (especially the visual cortex) inspired the artificial neural network architecture, but unfair to say that they are equivalent!

Data and code availability

As mentioned in the paper cited above, all data and pre-trained networks used in this work are available on the Kaggle datasets platform (https://doi.org/10.34740/kaggle/ds/417817) at the following link:

https://www.kaggle.com/selfishgene/single-neurons-as-deep-nets-nmda-test-data

Additionally, the dataset was deposited to Mendeley Data (https://doi.org/10.17632/xjvsp3dhzf.2) at the link:

https://data.mendeley.com/datasets/xjvsp3dhzf/2

A GitHub repository of all the simulation, fitting, and evaluation code can be found at the following link:

https://github.com/SelfishGene/neuron_as_deep_net.

Additionally, the authors provide a Python script that loads a pretrained artificial network and makes a prediction on the entire NMDA test set, replicating the main result of the paper (Figure 2):

https://www.kaggle.com/selfishgene/single-neuron-as-deep-net-replicating-key-result.

Also, a Python script that loads and explores the dataset (Figure S1) can be found at the following link: https://www.kaggle.com/selfishgene/exploring-a-single-cortical-neuron.
