
Nanowire networks act like a human brain

They can learn and remember

Researchers from the University of Sydney and other institutions have shown that nanowire networks can work like the human brain’s short- and long-term memory.

Nanowires are wire-shaped nanostructures with lengths ranging from a few micrometers to centimeters and diameters on the order of nanometers (10⁻⁹ meters). Their length is essentially unconstrained, while their thickness or diameter is limited to tens of nanometers or less. Metals, semiconductors, and oxides are just a few of the many materials from which nanowires can be made. Their distinctive electrical, optical, and mechanical properties make them useful for a variety of applications, including sensors, transistors, solar cells, and batteries.

The study, conducted with Japanese collaborators, was published in the journal Science Advances under the direction of Dr. Alon Loeffler, who received his Ph.D. in the School of Physics.

“In this research, we found higher-order cognitive function, which we normally associate with the human brain, can be emulated in non-biological hardware”, Dr. Loeffler said.

“This work builds on our previous research in which we showed how nanotechnology could be used to build a brain-inspired electrical device with neural network-like circuitry and synapse-like signaling”.

“Our current work paves the way towards replicating brain-like learning and memory in non-biological hardware systems and suggests that the underlying nature of brain-like intelligence may be physical”.

Invisible to the unaided eye, nanowire networks are a type of nanotechnology typically built from tiny, highly conductive silver wires dispersed across one another like a mesh. The wires model aspects of the networked physical structure of the human brain.

Many practical applications, such as improving robots or sensing systems that must make quick decisions in unexpected surroundings, could be ushered in by advancements in nanowire networks.

“This nanowire network is like a synthetic neural network because the nanowires act like neurons, and the places where they connect with each other are analogous to synapses”, senior author Professor Zdenka Kuncic, from the School of Physics, said.

“Instead of implementing some kind of machine learning task, in this study, Dr. Loeffler has actually taken it one step further and tried to demonstrate that nanowire networks exhibit some kind of cognitive function”.

The N-Back task, a common memory test used in studies on human psychology, was employed by the researchers to examine the capabilities of the nanowire network.

The n-back task, developed by Wayne Kirchner in 1958, is a continuous performance task frequently used in psychological and cognitive neuroscience assessments to evaluate part of working memory and working memory capacity. Participants must determine whether each item in a sequence of letters or images matches the item presented n items earlier.
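The matching rule described above is simple to state precisely. As a minimal sketch (the stimulus sequence and function name here are illustrative, not from the study), the task reduces to comparing each item with the one n positions before it:

```python
# Toy illustration of the n-back task: given a stream of stimuli,
# report a match wherever the current item equals the one n steps back.

def n_back_matches(stream, n):
    """Return a list of booleans: True where stream[i] == stream[i - n]."""
    return [i >= n and stream[i] == stream[i - n] for i in range(len(stream))]

stimuli = list("ABAACAACB")
print(n_back_matches(stimuli, 2))
# The 'A' at positions 2 and 5 each match the item two steps earlier.
```

A higher n demands holding more items in working memory at once, which is why the n a subject can sustain serves as a memory score.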

A person who recognizes the same image that appeared seven steps earlier receives an N-Back score of 7, the average score for people. The researchers found that the nanowire network could “remember” a desired endpoint in an electric circuit seven steps back, matching the human score.

“What we did here is manipulate the voltages of the end electrodes to force the pathways to change, rather than letting the network just do its own thing. We forced the pathways to go where we wanted them to go”, Dr. Loeffler said.

“When we implement that, its memory had much higher accuracy and didn’t really decrease over time, suggesting that we’ve found a way to strengthen the pathways to push them towards where we want them, and then the network remembers it”.

“Neuroscientists think this is how the brain works, certain synaptic connections strengthen while others weaken, and that’s thought to be how we preferentially remember some things, how we learn, and so on.”
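The use-it-or-lose-it dynamic described above can be caricatured in a few lines. This is a hypothetical toy model, not the researchers' actual method: pathways (here, plain weights) that are repeatedly driven grow stronger, while unused ones decay; the `reinforce` function, rates, and bounds are all illustrative assumptions.

```python
# Toy sketch of pathway reinforcement: used pathways strengthen toward a
# ceiling of 1.0, unused ones slowly decay toward 0.0, loosely mirroring
# how synaptic connections strengthen or weaken with use.

def reinforce(weights, used, rate=0.2, decay=0.05):
    """Return new weights: boost pathways in `used`, decay the rest."""
    return [
        min(1.0, w + rate) if i in used else max(0.0, w - decay)
        for i, w in enumerate(weights)
    ]

w = [0.5, 0.5, 0.5]
for _ in range(5):              # repeated training favors pathway 0
    w = reinforce(w, used={0})
print(w)                        # pathway 0 saturates; the others fade
```

Once a pathway saturates at the ceiling it stays there without further boosts, a crude analogue of the consolidation the researchers report next.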

According to the researchers, the nanowire network can accumulate information in memory to the point where it no longer requires reinforcement because it has been consolidated.

“It’s kind of like the difference between long-term memory and short-term memory in our brains”, Professor Kuncic said.

Long-term memories last for years. Short-term memories last seconds to hours and come into play when, for instance, the name of a new acquaintance, a statistic, or some other detail is consciously processed and retained briefly. We also have a working memory, which lets us keep something in mind for a limited time by repeating it.

If we want to remember the information for a long time, we must continually train our brains to consolidate it; otherwise, it gradually fades away.

“One task showed that the nanowire network can store up to seven items in memory at substantially higher than chance levels without reinforcement training and near-perfect accuracy with reinforcement training”.

Artificial intelligence, then, may soon have hardware capable of supporting software-based neural networks. This points to further developments in robotics: the creation of genuinely artificial brains able to simulate the human one, albeit within limits.

Dan Brokenhouse
