Sunday, July 28, 2024

AI-Enhanced Magnetic Tunnel Junction

Engineers Innovate with Magnetic Tunnel Junction Device to Enhance AI Energy Efficiency


Introduction to a State-of-the-Art Hardware Device

Energy Consumption in AI

A state-of-the-art hardware device developed by University of Minnesota Twin Cities researchers could decrease AI computing energy consumption by at least 1,000 times.

Research Overview and Significance

Published Research and Patents

The research, featured in npj Unconventional Computing under the title 'Experimental Demonstration of Magnetic Tunnel Junction-Based Computational Random-Access Memory,' highlights technology for which the researchers hold multiple patents.

AI Demand and Energy Efficiency

As the demand for AI applications rises, researchers are exploring methods to enhance energy efficiency while maintaining high performance and cost-effectiveness. AI processes typically consume significant power because data is transferred back and forth between logic (where information is processed) and memory (where data is stored).

Introduction of Computational Random-Access Memory (CRAM)

Researchers at the University of Minnesota College of Science and Engineering have introduced a novel model known as computational random-access memory (CRAM), where data remains within the memory throughout the process.

Experimental Validation and Impact

First Experimental Validation of CRAM

According to Yang Lv, postdoctoral researcher and primary author at the University of Minnesota's Department of Electrical and Computer Engineering, 'This study marks the first experimental validation of CRAM, where data remains within the memory array throughout the processing, without needing to leave the information grid.'

Global AI Energy Consumption Forecast

According to a forecast by the International Energy Agency (IEA) published in March 2024, global energy use for AI is expected to more than double from 460 terawatt-hours (TWh) in 2022 to 1,000 TWh in 2026, roughly matching the total electricity consumption of Japan.

Projected Energy Savings with CRAM

The authors of the recent study project that a CRAM-based machine learning inference accelerator could achieve energy improvements on the order of 1,000 times. Another example in the study showed energy and time savings of 2,500 and 1,700 times, respectively, relative to traditional methods.

Research and Development Journey

Two Decades of Research Effort

The research has spanned more than two decades of effort.

Jian-Ping Wang, the senior author of the paper and a Distinguished McKnight Professor and Robert F. Hartmann Chair at the University of Minnesota's Department of Electrical and Computer Engineering, reflected, 'Our original idea to utilize memory cells directly for computing was deemed unconventional two decades ago.'

Collaborative Research Success

According to Wang, 'The collaboration of an evolving student cohort and a diverse faculty team at the University of Minnesota, spanning physics, materials science, engineering, computer science, modeling, and hardware development, has led to successful results, confirming that this technology is both viable and ready for integration.'

Magnetic Tunnel Junctions (MTJs) Development

MRAM

This research forms a continuation of a sustained and cohesive effort, expanding on Wang's and his collaborators' pioneering, patented work with Magnetic Tunnel Junctions (MTJs). These nanostructured devices enhance hard drives, sensors, and various microelectronics systems, including Magnetic Random Access Memory (MRAM), which is utilized in embedded systems like microcontrollers and smartwatches.

CRAM Architecture and Its Advantages

Breaking the von Neumann Architecture Barrier

The CRAM architecture facilitates genuine computation within the memory itself, effectively dismantling the barrier between computation and memory that characterizes the traditional von Neumann architecture, the theoretical framework that underpins nearly all contemporary computers.

CRAM as an Energy-Efficient In-Memory Computing Substrate

Ulya Karpuzcu, an expert in computing architecture, co-author of the paper, and Associate Professor in the Department of Electrical and Computer Engineering at the University of Minnesota, explained, 'CRAM, as an exceptionally energy-efficient in-memory computing substrate, offers remarkable flexibility by enabling computation at any point within the memory array. This adaptability allows us to tailor CRAM configurations to meet the performance requirements of various AI algorithms.'

"It exhibits superior energy efficiency compared to conventional building blocks used in current AI systems."

Execution of Computations Within Memory Cells

Karpuzcu explained that CRAM executes computations directly within memory cells, leveraging the array's structure to eliminate the need for slow and energy-consuming data transfers.
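The contrast Karpuzcu describes can be illustrated with a deliberately simple toy model. The sketch below is an assumption-laden illustration, not the authors' design: it merely counts the operand transfers a conventional read-compute-write cycle incurs versus an in-place operation, which is the source of the data-movement energy cost CRAM avoids.

```python
# Toy model (illustrative only, NOT the CRAM design from the paper):
# contrast data movement in a conventional von Neumann-style cycle
# with an in-memory scheme. In the conventional model each logic
# operation reads operands from memory into a processor and writes
# the result back; in the in-memory model the operation happens
# inside the array, so no operand crosses a memory-to-logic bus.

def conventional_and(memory, i, j, out):
    """Read two bits into the 'processor', compute, write back."""
    transfers = 2            # two operand reads: memory -> logic
    a, b = memory[i], memory[j]
    result = a & b           # computation happens outside the memory
    memory[out] = result
    transfers += 1           # one result write: logic -> memory
    return transfers

def in_memory_and(memory, i, j, out):
    """Compute directly inside the array: zero bus transfers."""
    memory[out] = memory[i] & memory[j]
    return 0

mem = [1, 1, 0, 0]
print(conventional_and(mem, 0, 1, 2))  # 3 transfers per operation
print(in_memory_and(mem, 0, 1, 3))     # 0 transfers per operation
```

Over the millions of operations in a machine learning workload, eliminating those per-operation transfers is where the reported energy savings accumulate.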

Spintronic Devices and Future Development

Advantages of Magnetic Tunnel Junction (MTJ)

While the most energy-efficient short-term random-access memory (RAM) devices typically use four or five transistors to represent a single binary digit, a single Magnetic Tunnel Junction (MTJ), a spintronic device, achieves the same functionality with significantly reduced energy consumption, enhanced speed, and greater resilience to harsh environments. Spintronic devices utilize electron spin rather than electrical charge to store data, offering a more efficient alternative to traditional transistor-based chips.
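To make the storage principle concrete, the toy below models how an MTJ-style cell holds a bit: the relative orientation of its two magnetic layers (parallel vs. antiparallel) is read out as low or high electrical resistance. The resistance values and threshold are invented for illustration and do not correspond to any real device.

```python
# Illustrative toy (numbers are assumptions, not measured device data):
# an MTJ stores a bit as the relative orientation of two magnetic
# layers, sensed as low (parallel) or high (antiparallel) resistance.

R_PARALLEL = 1_000      # ohms; assumed low-resistance state  -> logical 0
R_ANTIPARALLEL = 2_500  # ohms; assumed high-resistance state -> logical 1

class ToyMTJ:
    def __init__(self):
        self.antiparallel = False  # free layer starts aligned with fixed layer

    def write(self, bit):
        # In hardware, a spin-polarized current or field flips the free
        # layer; here we simply record the resulting orientation.
        self.antiparallel = bool(bit)

    def resistance(self):
        return R_ANTIPARALLEL if self.antiparallel else R_PARALLEL

    def read(self):
        # Reading compares the sensed resistance against a threshold
        # midway between the two states.
        threshold = (R_PARALLEL + R_ANTIPARALLEL) / 2
        return 1 if self.resistance() > threshold else 0

cell = ToyMTJ()
cell.write(1)
print(cell.read())  # 1
```

Because the bit is held by magnetic orientation rather than stored charge, the state persists without refresh power, which is one reason a single MTJ can replace several transistors.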

Collaboration with the Semiconductor Industry

The team is now coordinating with leading figures in the semiconductor industry, including key players in Minnesota, to conduct large-scale demonstrations and develop hardware that will enhance AI capabilities.

Contributors to the Research

Research Team Members

Besides Lv, Wang, and Karpuzcu, the team included Robert Bloom and Husrev Cilasun from the University of Minnesota's Department of Electrical and Computer Engineering, Distinguished McKnight Professor Sachin Sapatnekar, and former postdoctoral researchers Brandon Zink, Zamshed Chowdhury, and Salonik Resch. Researchers from the University of Arizona (Pravin Khanal, Ali Habiboglu, and Professor Weigang Wang) also participated in the research.
