
Benefits of neuromorphic chips for AI training efficiency

Neural Network Training simplified by intelligent hardware solutions

Overview of Neuromorphic Chip Technology

AI and Neuromorphic Chips

Large-scale neural network models underpin numerous AI technologies, including those built on neuromorphic chips inspired by the human brain. Training these networks can be arduous, time-consuming, and energy-inefficient, as the model is typically trained on a computer before being transferred to the chip. This process restricts the application and efficiency of neuromorphic chips.

On-Chip Training Innovation

TU/e researchers have developed a neuromorphic device that facilitates on-chip training, removing the requirement to transfer trained models to the chip. This breakthrough could lead to the creation of efficient and dedicated AI chips.

Inspiration and Teamwork Behind the Technology

Mimicking the Human Brain

"Contemplate the marvel that is the human brain--a remarkable computational marvel known for its speed, dynamism, adaptability, and exceptional energy efficiency."

"Researchers at TU/e, led by Yoeri Van de Burgt, have been inspired by these attributes to replicate neural processes in technologies critical for learning, including AI applications in transport, communication, and healthcare."

Neural Networks and Challenges

According to Van de Burgt, an associate professor in the Department of Mechanical Engineering at TU/e, a neural network is typically central to these AI systems.

Neural networks are computational models inspired by the brain. In the human brain, neurons communicate through synapses, and the more frequently two neurons interact, the stronger their connection becomes. Similarly, in neural network models composed of nodes, the strength of the connection between any two nodes is quantified by a value known as the weight.
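As a rough illustration of the weight idea, the toy sketch below (plain Python, with invented names and numbers) strengthens a connection each time its two nodes are active together, loosely in the spirit of Hebbian learning; none of these values come from the research itself:

```python
# Toy illustration: a connection weight that strengthens when two
# nodes are active together (a loose, Hebbian-style caricature).
# All names and numbers are invented for illustration.

weight = 0.1           # strength of the connection between node A and node B
learning_rate = 0.05

activity = [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]  # (node_a, node_b) firing

for a, b in activity:
    weight += learning_rate * a * b   # co-activity strengthens the link

print(f"final weight: {weight:.2f}")  # higher than the initial 0.1
```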

Van de Burgt explains that while neural networks can tackle intricate problems given substantial data, scaling them up leads to growing energy demands and hardware limitations. Nevertheless, a promising alternative in hardware--neuromorphic chips--offers a solution.

The Neuromorphic Issue

Neuromorphic chips, much like neural networks, are based on brain functions, but they achieve a higher degree of imitation. In the brain, a neuron fires and sends electrical charges to other neurons when its own electrical charge changes. Neuromorphic chips mirror this process.

"In neuromorphic chips, there are memristors--memory resistors--that can retain information about the electrical charge that has flowed through them, Van de Burgt states. This functionality is crucial for devices designed to emulate the information storage and communication of brain neurons."

However, a challenge remains, and it pertains to the two methods used for training hardware based on neuromorphic chips. The first is to conduct the training on a computer and then transfer the resulting network weights to the chip hardware. The alternative is to conduct the training in situ, directly within the hardware; however, current devices must be programmed individually and then error-checked. This is necessary because most memristors behave stochastically, making it impossible to update a device without verification.
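The "program, then error-check" step described above is essentially a write-verify loop. Here is a minimal sketch under assumed conditions; the noise model, tolerance, and device behavior are invented for illustration, not taken from the paper:

```python
# Sketch of a write-verify loop for a stochastic analog weight.
# Each write lands near, but not exactly at, the requested value,
# so the device must be read back and corrected until it is within
# tolerance. Noise model and tolerance are assumptions.
import random

def write_with_verify(target, tolerance=0.01, max_attempts=50):
    value = 0.0
    for attempt in range(1, max_attempts + 1):
        # each corrective write overshoots or undershoots randomly
        value += (target - value) + random.gauss(0, 0.05)
        if abs(value - target) <= tolerance:   # read back and check
            return value, attempt
    return value, max_attempts

stored, attempts = write_with_verify(0.8)
print(f"stored {stored:.3f} after {attempts} write-verify cycles")
```

Because each weight may take several such cycles, programming a chip with many stochastic devices quickly becomes expensive in time and energy, which is the cost Van de Burgt points to next.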

These strategies are costly in terms of time, energy, and computing resources, Van de Burgt indicates. To achieve the full energy-efficiency potential of neuromorphic chips, training should be conducted directly on them.

The TU/e Solution

This is precisely what Van de Burgt and his collaborators at TU/e have accomplished, as detailed in their recent publication in Science Advances. "This was a true team effort, spearheaded by co-first authors Tim Stevens and Eveline van Doremaele," Van de Burgt notes.

This research began with Tim Stevens' master's project. "I became fascinated by this subject during my master's research," says Stevens. "Our work shows that training can be executed directly on hardware, without needing to transfer a trained model to the chip. This advancement could lead to more efficient AI chips."

Van de Burgt, Stevens, and Van Doremaele--who completed her Ph.D. thesis on neuromorphic chips in 2023--required assistance with the hardware design. Consequently, they sought the expertise of Marco Fattori from the Department of Electrical Engineering.

"My team assisted with the circuit design elements of the chip, "Fattori states. "It was fantastic to collaborate on this interdisciplinary project, bridging the gap between chip developers and software engineers."

Van de Burgt learned from this project that valuable ideas can arise at any academic stage. "Tim identified the broader potential of our device properties during his master's studies. There's an important lesson here for all projects."

Dual-Layer Training

The researchers faced a central challenge: integrating the critical components necessary for on-chip training onto a single neuromorphic chip. "One of the major tasks was integrating components such as electrochemical random-access memory (EC-RAM)," explains Van de Burgt. "These components replicate the electrical charge storage and firing characteristics of neurons."

The researchers developed a two-layer neural network using EC-RAM components fabricated from organic materials. They evaluated the hardware with a variant of the widely adopted training algorithm, backpropagation with gradient descent. "The conventional algorithm is commonly used to refine neural network accuracy, but it's not suited for our hardware, leading us to create our own variant," states Stevens.
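For orientation, the sketch below shows what conventional backpropagation with gradient descent looks like in software for a two-layer network (here in NumPy, on the XOR problem). This is only the textbook baseline the team adapted; their on-chip variant--"progressive gradient descent" in the paper's title--works differently on the hardware itself.

```python
# Software baseline: a two-layer network trained with standard
# backpropagation and gradient descent on XOR. This is the textbook
# algorithm, not the TU/e team's on-chip variant.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # layer 1: 2 inputs -> 4 hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # layer 2: 4 hidden -> 1 output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)           # forward pass, output layer
    d_out = (out - y) * out * (1 - out)  # backprop: output error signal
    d_h = (d_out @ W2.T) * h * (1 - h)   # backprop: hidden error signal
    W2 -= lr * h.T @ d_out               # gradient descent updates
    b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]
```

Note that every update here assumes exact, noise-free arithmetic, which analog devices do not provide; that mismatch is what the hardware-adapted variant has to overcome.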

Moreover, as the energy consumption of AI in many domains becomes increasingly unsustainable, the prospect of training neural networks on hardware with minimal energy expenditure becomes attractive for a wide range of applications--from ChatGPT to weather forecasting.

The Subsequent Phase

While the researchers have shown the efficacy of the new training approach, the next logical step is to scale up, innovate further, and refine the technology.

"We have demonstrated the feasibility with a small two-layer network," states Van de Burgt. "Our next objective is to collaborate with industry and major research institutions to scale up to larger networks of hardware devices and validate them with real-world data challenges."

This progression would enable the researchers to demonstrate the high efficiency of these systems in both training and operating useful neural networks and AI systems. "We aim to deploy this technology across various practical scenarios," remarks Van de Burgt. "My vision is for such technologies to become the standard in future AI applications."

More information: The article "Hardware implementation of backpropagation using progressive gradient descent for in situ training of multilayer neural network" by Eveline R. W. van Doremaele et al. appears in Science Advances (2024). DOI: 10.1126/sciadv.ado8999

