Neuromorphic computing for sustainable AI: Energy-efficient architectures for resource-constrained environments

Abstract

This paper explores the convergence of neuromorphic computing and sustainable AI, proposing novel architectures designed specifically for resource-constrained environments. Despite significant advances in artificial intelligence, current models consume substantial energy, a problem that is especially acute in edge computing and IoT applications. We introduce a hybrid neuromorphic framework that combines spike-based processing with selective-precision computing to achieve large energy savings while maintaining computational performance. Our experimental results demonstrate up to an 87% reduction in energy consumption compared with conventional deep learning implementations, with minimal accuracy trade-offs. We further propose adaptive power-scaling techniques that respond dynamically to computational demand. This approach represents a significant step toward sustainable AI systems that can operate effectively in environments with limited power resources.
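The abstract names two mechanisms: spike-based processing combined with selective-precision computing, and adaptive power scaling. The paper's own implementation is not reproduced here; the sketch below is a minimal illustration of how such ideas typically fit together, assuming a leaky integrate-and-fire (LIF) neuron model, per-layer int8 weight quantization as a stand-in for "selective precision," and an input-activity heuristic as a proxy for power scaling. The names (LIFLayer, adaptive_timesteps) and all parameter choices are illustrative assumptions, not the authors' code.

```python
import numpy as np

class LIFLayer:
    """Leaky integrate-and-fire layer with selectable weight precision.

    Binary spikes mean each timestep only accumulates the rows of W
    selected by active inputs, which is where event-driven hardware
    saves energy relative to dense multiply-accumulate.
    """

    def __init__(self, n_in, n_out, tau=0.9, threshold=1.0,
                 precision="int8", seed=0):
        rng = np.random.default_rng(seed)
        w = (0.1 * rng.standard_normal((n_in, n_out))).astype(np.float32)
        if precision == "int8":
            # Selective precision: store weights as 8-bit integers with a
            # single per-layer scale recovered at accumulation time.
            self.scale = float(np.abs(w).max()) / 127.0
            self.w = np.round(w / self.scale).astype(np.int8)
        else:
            self.scale = 1.0
            self.w = w
        self.tau = tau              # membrane leak factor per step
        self.threshold = threshold  # firing threshold
        self.v = np.zeros(n_out, dtype=np.float32)

    def step(self, spikes_in):
        # Dequantize on the fly; real neuromorphic hardware would keep
        # the accumulation in integer arithmetic instead.
        current = spikes_in.astype(np.float32) @ (self.w.astype(np.float32) * self.scale)
        self.v = self.tau * self.v + current       # leaky integration
        spikes_out = (self.v >= self.threshold).astype(np.uint8)
        self.v[spikes_out == 1] -= self.threshold  # soft reset after firing
        return spikes_out


def adaptive_timesteps(input_rate, t_min=5, t_max=50):
    """Crude proxy for adaptive power scaling: spend fewer simulation
    steps (hence fewer synaptic operations) on low-activity inputs."""
    r = min(max(float(input_rate), 0.0), 1.0)
    return int(round(t_min + (t_max - t_min) * r))


# Example: sparse binary input, with the step count scaled to its activity.
rng = np.random.default_rng(1)
layer = LIFLayer(64, 16)
x = (rng.random(64) < 0.2).astype(np.uint8)  # ~20% of inputs active
steps = adaptive_timesteps(x.mean())         # fewer steps for sparse input
counts = sum(layer.step(x) for _ in range(steps))
print(steps, counts)                         # output spike counts over the window
```

In this toy setting the natural energy proxy is the number of synaptic operations (steps x active inputs x fan-out); a reduction like the 87% the abstract reports would presumably come from combining spike sparsity, reduced precision, and fewer timesteps on easy inputs, measured on real hardware rather than in a simulation like this one.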
