ΔNN: Power-efficient neural network acceleration using differential weights
Mahdiani, H.; Khadem, A.; Ghanbari, A.; Modarressi, M.; Fattahi-Bayat, F.; Daneshtalab, M. (2020). ΔNN: Power-efficient neural network acceleration using differential weights. IEEE Micro, 40 (1), 67−74. DOI: 10.1109/MM.2019.2948345.
IEEE Micro
1.1. Scholarly articles indexed in the Web of Science databases Science Citation Index Expanded, Social Sciences Citation Index, Arts & Humanities Citation Index, Emerging Sources Citation Index and/or in Scopus (excluding proceedings volumes)
University of Tehran; Mälardalens högskola
The enormous and ever-increasing complexity of state-of-the-art neural networks has impeded the deployment of deep learning on resource-limited embedded and mobile devices. To reduce the complexity of neural networks, this article presents ΔNN, a power-efficient architecture that leverages a combination of the approximate value locality of neuron weights and the algorithmic structure of neural networks. ΔNN keeps each weight as its difference (Δ) to the nearest smaller weight: each weight reuses the calculations of the smaller weight, followed by a calculation on the Δ value to make up the difference. We also round the Δ up or down to the closest power of two to further reduce complexity. The experimental results show that ΔNN boosts average performance by 14%-37% and reduces average power consumption by 17%-49% over several state-of-the-art neural network designs.
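The abstract's differential-weight idea can be sketched in software: if the weights of a neuron are sorted, each weight equals the next-smaller weight plus a small Δ, so a dot product can be computed by multiplying each Δ by a running suffix sum of the inputs, reusing the work done for the smaller weight. The sketch below is illustrative only (the function name and scalar implementation are assumptions, not the paper's hardware design); the paper additionally rounds each Δ to the nearest power of two so the remaining multiplies become shifts.

```python
def delta_dot(weights, inputs):
    """Illustrative sketch of a differential-weight dot product.

    Each weight is represented as its delta to the nearest smaller
    weight; the accumulation done for the smaller weight is reused
    via a suffix sum of the inputs, so each delta is multiplied once.
    """
    # Sort (weight, input) pairs by weight so each delta is the gap
    # to the next-smaller weight (nonnegative after the first entry).
    pairs = sorted(zip(weights, inputs))

    # suffixes[i] = sum of inputs whose weight rank is >= i
    suffixes = []
    running = 0.0
    for _, x in reversed(pairs):
        running += x
        suffixes.append(running)
    suffixes.reverse()

    total, prev_w = 0.0, 0.0
    for (w, _), s in zip(pairs, suffixes):
        delta = w - prev_w   # difference to the nearest smaller weight
        total += delta * s   # one multiply per delta, reusing prior work
        prev_w = w
    return total

# Matches the ordinary dot product:
# delta_dot([3.0, 1.0, 2.0], [4.0, 5.0, 6.0]) → 29.0  (= 3*4 + 1*5 + 2*6)
```

Because the deltas between sorted weights are small, quantizing them to powers of two (the paper's second step) loses little accuracy while replacing multipliers with shifters in hardware.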
Deep neural network | Differential computation | Hardware acceleration