Friday, August 2, 2024

AI Computational Performance Enhancement through ECRAM Devices

Introduction

  • A research team has revealed that using ECRAM devices in analog hardware can significantly boost AI computational performance, underscoring the technology's commercialization prospects. The study is published in Science Advances.

AI Technology Development and Hardware Challenges

  • The fast-paced development of AI technology, such as generative AI, has pushed the scalability boundaries of traditional digital hardware (CPUs, GPUs, ASICs, etc.). This challenge has led to active exploration of analog hardware designed specifically for AI computation.
  • Analog hardware modulates semiconductor resistance through an external voltage or current and employs a cross-point array structure of intersecting memory devices to execute AI computations in parallel (a minimal sketch of this operation follows this list). Despite its advantages over digital hardware for certain computational tasks and continuous data processing, meeting the varied demands of computational learning and inference poses significant challenges.
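
The parallelism comes from the physics of the array itself: every cell's conductance encodes a weight, and applying voltages to the rows makes each column sum its currents, so a full matrix-vector product emerges in a single analog step. The NumPy sketch below illustrates the idea; the array size matches the study's 64×64, but the conductance and voltage ranges are illustrative assumptions.

```python
import numpy as np

# Idealized cross-point (crossbar) array: each cell stores a conductance
# G[i, j]; driving the rows with voltages v makes column j collect the
# current sum_i G[i, j] * v[i] (Ohm's law per cell, Kirchhoff's current
# law per column). The whole product y = G.T @ v happens in parallel.

rng = np.random.default_rng(0)

n_rows, n_cols = 64, 64                        # array size used in the study
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))  # cell conductances in siemens (assumed range)
v = rng.uniform(-0.1, 0.1, n_rows)             # read voltages on the rows (assumed range)

i_out = G.T @ v    # ideal column currents: one analog matrix-vector product
print(i_out[:4])
```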

Addressing Analog Hardware Shortcomings

  • Addressing the shortcomings of analog hardware memory devices, a research team led by Professor Seyoung Kim of the Department of Materials Science and Engineering and the Department of Semiconductor Engineering investigated Electrochemical Random Access Memory (ECRAM), which controls electrical conductivity through the movement and concentration of ions.

ECRAM Device Structure and Performance

  • Unlike standard semiconductor memory, these devices employ a three-terminal structure with separate data read and write channels, facilitating low-power operation.
  • The team fabricated ECRAM devices based on three-terminal semiconductors in a 64×64 array. Experimental results showed that the hardware exhibited excellent electrical and switching characteristics along with high yield and uniformity; a toy model of a single cell is sketched below.
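
To make the three-terminal idea concrete, here is a toy model of a single cell: gate pulses nudge the channel conductance up or down in small steps (standing in for ion motion), while reads happen on the separate source-drain channel and leave the stored state untouched. This is an illustrative abstraction, not the published device physics, and all parameter values are assumptions.

```python
import numpy as np

class ECRAMCell:
    """Toy three-terminal ECRAM cell (illustrative abstraction).

    Write pulses on the gate shift the channel conductance in small
    steps, mimicking ion insertion/extraction; reads use the separate
    source-drain channel, so they do not disturb the stored state.
    """

    def __init__(self, g_min=1e-6, g_max=1e-4, step=1e-6):
        self.g_min, self.g_max, self.step = g_min, g_max, step
        self.g = (g_min + g_max) / 2   # start mid-range

    def write(self, n_pulses):
        # Positive pulses raise conductance, negative pulses lower it.
        self.g = float(np.clip(self.g + n_pulses * self.step,
                               self.g_min, self.g_max))

    def read(self, v_read=0.1):
        # Non-destructive read on the source-drain channel: I = G * V.
        return self.g * v_read

cell = ECRAMCell()
cell.write(+10)       # potentiate with 10 gate pulses
print(cell.read())    # read current at 0.1 V
cell.write(-5)        # depress with 5 opposite-polarity pulses
print(cell.read())
```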

Integration of the Tiki-Taka Algorithm

  • The study further paired the high-yield hardware with the Tiki-Taka algorithm, a pioneering analog-specific learning algorithm, which notably improved the accuracy of AI neural network training (a simplified sketch of the idea follows below).
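
Tiki-Taka (introduced by Gokmen and Haensch at IBM) trains analog arrays by splitting the weights across two crossbars: a fast auxiliary array absorbs the noisy rank-one gradient updates, and its contents are periodically transferred into a slow core array, which makes training tolerant of imperfect, asymmetric devices. The sketch below compresses that idea into a few lines of NumPy; the sizes, learning rates, target mapping, and transfer schedule are illustrative assumptions, and the real algorithm includes refinements omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 8, 4
A = np.zeros((n_in, n_out))   # fast auxiliary array: absorbs noisy gradient updates
C = np.zeros((n_in, n_out))   # slow core array: accumulates the learned weights
lr, transfer_lr = 0.1, 0.05   # assumed rates

def effective_weights():
    # The network computes with the combination of both arrays.
    return C + A

W_true = rng.normal(size=(n_in, n_out))   # assumed target mapping to learn

for step in range(200):
    x = rng.normal(size=n_in)                    # input activation
    err = x @ effective_weights() - x @ W_true   # forward pass and error signal
    A -= lr * np.outer(x, err)                   # rank-one update, as a crossbar applies it

    # Periodic transfer: read one slice of A into C, then partially reset it.
    if step % 10 == 0:
        row = (step // 10) % n_in
        C[row] += transfer_lr * A[row]
        A[row] *= 0.9

# Mean error typically shrinks as training proceeds.
print(np.abs(effective_weights() - W_true).mean())
```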

Weight Retention Property and Learning Improvement

  • The researchers also showcased how the hardware's 'weight retention' property, its ability to hold programmed states between updates, affects learning, demonstrating that their method does not overburden artificial neural networks and underscoring the technology's commercialization potential (a toy retention experiment follows below).
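
The retention point can be illustrated with a toy experiment: if each stored conductance relaxes toward a baseline between updates, trained weights erode over time, so a device that holds its state preserves what the network has learned. The decay model and numbers below are assumptions for illustration, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

weights = rng.normal(0.0, 1.0, 1000)   # trained weights stored as conductances
baseline = 0.0                         # state the devices relax toward (assumed)

for retention in (1.0, 0.99, 0.9):     # 1.0 = ideal non-volatile storage
    w = weights.copy()
    for _ in range(50):                # 50 idle intervals between updates
        w = baseline + retention * (w - baseline)
    drift = np.mean(np.abs(w - weights))
    print(f"retention={retention:.2f} -> mean weight drift {drift:.3f}")
```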

Advancement in ECRAM Device Arrays

  • This research marks a key advance over the 10×10 ECRAM device arrays previously reported in the literature, successfully scaling up to a 64×64 array with diverse device characteristics.

Conclusion

Professor Seyoung Kim from POSTECH remarked, 'Through the advancement of large-scale arrays with innovative memory technologies and the creation of analog-specific AI algorithms, we have uncovered a significant potential for AI computational performance and energy efficiency that exceeds current digital methods.'

Source
