A well-known research team at the Salk Institute, led by the pioneering neuroscientist Terry Sejnowski, published a fascinating study on the memory and computational abilities of the brain. The approach combines laboratory experiments with theoretical analysis, revising previous estimates of the scale of our abilities by an order of magnitude.
The authors claim that the brain can process and store volumes of information comparable to the petabytes of the entire World Wide Web. At the same time, they address a question that has troubled generations of neuroscientists: how does the brain achieve its remarkable energy efficiency, consuming no more power than a 20-watt light bulb?
How our brains are organized
Our memories and thoughts result from organized patterns of electrochemical activity in our seemingly chaotic brains. An important part of this activity takes place in the branches of neurons that look like entangled electrical wires, the so-called dendrites. Dendrites receive incoming pieces of information at particular contact points known as synapses. The “output” of one neuron becomes an “input” for another, and signals travel rapidly along these dendritic avenues.
On a larger scale, each neuron can carry thousands of these synapses, connecting it with thousands of other neurons. Following the same bottom-up path, we arrive at a system of organized networks of some 85 billion neurons and perhaps 100 trillion synapses. The result of this complexity is, simply, ourselves.
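A quick back-of-envelope check shows that the article's two figures are consistent with each other. Dividing the approximate synapse count by the neuron count gives roughly a thousand synapses per neuron, matching the claim that each neuron carries thousands of connections (the figures themselves are the article's rounded estimates, not exact counts):

```python
# Rough consistency check of the article's figures:
# ~85 billion neurons and ~100 trillion synapses.
neurons = 85e9
synapses = 100e12

# Average number of synapses per neuron.
per_neuron = synapses / neurons
print(f"~{per_neuron:,.0f} synapses per neuron on average")
```

This lands at a bit under 1,200 synapses per neuron on average, consistent with the "thousands" in the text, since the averages hide wide variation across neuron types.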
Finding the pieces of the puzzle
To understand the mysterious organization and function of the synapses, the researchers set out to reconstruct a small piece of brain tissue from electron-microscopy images. When all the dendrites and synapses were placed in their correct positions, like pieces of a neurobiological puzzle, they composed a three-dimensional image so complex that even the researchers themselves were surprised. Kristen Harris, a co-author of the study and a professor in the Department of Neuroscience at the University of Texas, says: “Although I hoped to extract some fundamental principles of brain organization from these detailed representations, I was really surprised at the incomparable detail of the analyses we recorded.”
To draw conclusions from the experimental data, the researchers used complex algorithms to reconstruct the connectivity, volumes, and shapes of these structures. Until then, synapses had been divided into just three categories (small, medium, and large) based on their size and respective contribution to the transmission of electrical signals. After these exciting discoveries, 26 categories were needed to put order to the apparent chaos.
The future of computers and AI
To understand why this matters, consider that each category can store a certain amount of information: the more categories a synapse can reliably occupy, the more information it can store. Memories are stored as distributed patterns of electrochemical activity, so memory capacity ultimately depends on the range of underlying synaptic sizes. In other words, memory is much larger than we thought.
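The link between category count and storage can be made concrete with standard information theory. As a minimal sketch, assuming a synapse can reliably sit in any one of N distinguishable size states, it encodes log2(N) bits; plugging in the 26 categories from the study gives the per-synapse figure (the equal-reliability assumption is ours, for illustration):

```python
import math

# Information stored by one synapse, assuming it can reliably
# occupy any one of N distinguishable size categories.
categories = 26
bits_per_synapse = math.log2(categories)
print(f"{bits_per_synapse:.2f} bits per synapse")  # ≈ 4.70
```

Going from 3 categories (log2(3) ≈ 1.6 bits) to 26 (≈ 4.7 bits) roughly triples the information each synapse can hold, which is how finer size resolution translates into the order-of-magnitude jump in estimated capacity.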
The researchers also calculated that the smallest synapses require about 1,500 incoming signals to trigger a change in their size, while for the largest synapses just 100 signaling events suffice. This means that every 2 to 20 minutes the synapses adjust to the complex sensory signals that the brain must process effectively to meet the demands of the environment.
A new chapter in research on the mechanisms of learning and memory can now begin. As the authors put it: “The memory capacity is up to an order of magnitude higher than we had imagined. The effects of what we have discovered are amazing. Beneath the apparent chaos and clutter of the brain, there is great accuracy in the size and shape of synapses, which we could not detect until recently. This brain trick clearly shows a more efficient way to better design computers as well.”
The new knowledge could find numerous applications in computing, especially in artificial neural networks, “deep learning,” and other sophisticated learning and analysis software that automatically recognizes speech in sound and objects in images, translates between languages, and extracts meaning from written text.