II395: The Next Frontier Of AI MAC Hardware Accelerator Architecture

Lee Yang Yang Universiti Sains Malaysia

MIIX24 | Intermediate Innovator


The increasing demand for matrix multiply-accumulate (MAC) operations in Artificial Intelligence (AI) workloads is pushing traditional binary computing architectures to their limits, where gains in speed often come at the expense of computational precision. This invention harnesses the inherent randomness and probabilistic nature of stochastic computing (SC) to redefine the computing paradigm: complex MAC operations are realized entirely with simple multiplexers (MUX), departing from conventional binary logic.

In AI convolutional neural network (CNN) image-classification tasks, the design achieves energy-efficiency gains of over 368x and a 31x increase in data throughput, with a mere 0.14% accuracy loss compared to binary methods. Exploiting the progressive precision of SC pushes energy efficiency beyond 1400x with minimal compromise in accuracy, making the approach exceptionally attractive for edge-computing AI hardware acceleration. In layman's terms, imagine your phone counter-intuitively speeding up its performance while its battery is running low.

This innovation is protected by patent, and its scientific findings are disclosed through an open-access publication in a prestigious international journal indexed by the US NIH and NASA ADS digital libraries. The recent awarding of the 2024 Abel Prize (often called the Nobel Prize of mathematics) and the 2023 Turing Award (the Nobel Prize of computing) served as pivotal acknowledgements of the critical role randomness plays in both mathematical theory and computational practice, firmly affirming the relevance of SC.
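To give a flavour of how SC performs MAC operations with such simple hardware, here is a minimal software sketch of a textbook unipolar stochastic-computing MAC. It is my own illustration under standard SC conventions, not the patented MUX-only circuit of this invention: values in [0, 1] are encoded as random bitstreams, multiplication of independent streams reduces to a bitwise AND, and a multiplexer driven by a 0.5-probability select stream performs scaled addition.

```python
import random

def to_stream(p, n, rng):
    # Encode a value p in [0, 1] as a unipolar bitstream of length n:
    # each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    # Decode a bitstream: the encoded value is the fraction of 1s.
    return sum(bits) / len(bits)

def sc_multiply(a, b):
    # Unipolar SC multiplication: bitwise AND of independent streams
    # computes P(a=1) * P(b=1).
    return [x & y for x, y in zip(a, b)]

def sc_scaled_add(a, b, select):
    # MUX-based scaled addition: with a select stream encoding 0.5,
    # the output stream encodes (A + B) / 2.
    return [x if s else y for x, y, s in zip(a, b, select)]

rng = random.Random(42)
n = 100_000
a = to_stream(0.8, n, rng)
b = to_stream(0.5, n, rng)
c = to_stream(0.6, n, rng)
d = to_stream(0.3, n, rng)
sel = to_stream(0.5, n, rng)

# MAC of two products: encodes (0.8*0.5 + 0.6*0.3) / 2 ≈ 0.29,
# scaled by the MUX adder's factor of 1/2.
mac = sc_scaled_add(sc_multiply(a, b), sc_multiply(c, d), sel)
print(from_stream(mac))
```

Note the trade-off the abstract alludes to: a longer bitstream `n` gives higher precision at the cost of latency, which is exactly the progressive-precision knob that SC exposes to edge devices.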