- A comprehensive list of publications is available here.
- An open-source GitHub repository of projects from our lab can be found here.
2024
- [C] Abhiroop Bhattacharjee, Ruokai Yin, Abhishek Moitra, and Priyadarshini Panda. Are SNNs Truly Energy-efficient? A Hardware Perspective. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024).
- [C] Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. HaLo-FL: Hardware-Aware Low-Precision Federated Learning. In Design, Automation, and Test in Europe (DATE) Conference (2024).
- [C] Donghyun Lee, Ruokai Yin+, Youngeun Kim, Abhishek Moitra, Yuhang Li, and Priyadarshini Panda. TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training. In Design, Automation, and Test in Europe (DATE) Conference (2024).
2023
- [C] Yuhang Li, Tamar Geller, Youngeun Kim+, and Priyadarshini Panda. SEENN: Towards Temporal Spiking Early Exit Neural Networks. In Advances in Neural Information Processing Systems (NeurIPS) (2023). (Link, Code)
- [J] Yuhang Li, Youngeun Kim, Hyoungseob Park, and Priyadarshini Panda. Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient. Transactions on Machine Learning Research (2023). (Link, Code)
- [J] Abhishek Moitra, Abhiroop Bhattacharjee, Runcong Kuang, Gokul Krishnan, Yu Cao, and Priyadarshini Panda. SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023). (Link, Code)
- [C] Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, and Priyadarshini Panda. XPert: Peripheral Circuit & Neural Architecture Co-search for Area and Energy-efficient Xbar-based Computing. In Design Automation Conference (2023). (Link, Code)
- [J] Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars. ACM Transactions on Embedded Computing Systems (2023).
- [C] Yuhang Li, Abhishek Moitra, Tamar Geller, and Priyadarshini Panda. Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing. In Design Automation Conference (2023). (Link, Code)
- [C] Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, and Priyadarshini Panda. Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing. In GLSVLSI (2023). (Link)
- [C] Abhishek Moitra, Ruokai Yin, and Priyadarshini Panda. Hardware Accelerators for Spiking Neural Networks for Energy-Efficient Edge Computing. In GLSVLSI (2023). (Link)
- [C] Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, and Priyadarshini Panda. Exploring Temporal Information Dynamics in Spiking Neural Networks. In AAAI Conference on Artificial Intelligence (2023). (Link, Code)
- [C] Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. DeepCAM: A Fully CAM-based Inference Accelerator with Variable Hash Lengths for Energy-efficient Deep Neural Networks. In Design, Automation and Test in Europe Conference (2023).
2022
- [J] Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, and Priyadarshini Panda. SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks. In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) (2022). (Link, Code)
- [J] Abhiroop Bhattacharjee, and Priyadarshini Panda. SwitchX: Gmin-Gmax Switching for Energy-Efficient and Robust Implementation of Binary Neural Networks on Memristive Xbars. In ACM Transactions on Design Automation of Electronic Systems (2022).
- [C] Yuhang Li, Ruokai Yin, Hyoungseob Park, Youngeun Kim, Priyadarshini Panda. Wearable-based Human Activity Recognition with Spatio-Temporal Spiking Neural Networks. In NeurIPS 2022 Workshops (2022). Spotlight (Link)
- [J] Youngeun Kim, Joshua Chough, and Priyadarshini Panda. Beyond classification: Directly training spiking neural networks for semantic segmentation. In IOP Neuromorphic Computing and Engineering (2022) (Link)
- [C] Abhiroop Bhattacharjee, Youngeun Kim, Abhishek Moitra and Priyadarshini Panda. Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars. In ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED) (2022). (Link) Best Paper Award!
- [C] Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, and Priyadarshini Panda. Neural Architecture Search for Spiking Neural Networks. In European Conference on Computer Vision (ECCV) (2022). (Link, Code)
- [C] Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, and Priyadarshini Panda. Neuromorphic Data Augmentation for Training Spiking Neural Networks. In European Conference on Computer Vision (ECCV) (2022). (Link, Code will be available soon)
- [C] Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, and Priyadarshini Panda. Exploring Lottery Ticket Hypothesis in Spiking Neural Networks. In European Conference on Computer Vision (ECCV) (2022). Oral Presentation. (Link, Code)
- [C] Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Abhishek Moitra and Priyadarshini Panda. MIME: Adapting a Single Neural Network for Multi-task Inference with Memory-efficient Dynamic Pruning. In Design Automation Conference (2022). (Link)
- [C] Youngeun Kim, Hyoungseob Park, Abhishek Moitra, Abhiroop Bhattacharjee, Yeshwanth Venkatesha, and Priyadarshini Panda. Rate Coding Or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2022). (Link, Code)
- [C] Youngeun Kim, Yeshwanth Venkatesha, and Priyadarshini Panda. PrivateSNN: Fully Privacy-Preserving Spiking Neural Networks. In AAAI Conference on Artificial Intelligence (2022). (Link, Code)
- [C] Abhiroop Bhattacharjee, Lakshya Bhatnagar, and Priyadarshini Panda. Examining and Mitigating the Impact of Crossbar Non-idealities for Accurate Implementation of Sparse Deep Neural Networks. In Design, Automation and Test in Europe Conference (2022). (Link)
- [C] Youngeun Kim, Hyunsoo Kim, Seijoon Kim, Sang Joon Kim, and Priyadarshini Panda. Gradient-based Bit Encoding Optimization for Noise-Robust Binary Memristive Crossbar. In Design, Automation and Test in Europe Conference (2022). (Link)
2021 & 2020
- [J] Youngeun Kim, and Priyadarshini Panda. Revisiting Batch Normalization for Training Low-latency Deep Spiking Neural Networks from Scratch. In Frontiers in Neuroscience (2021) (Code, Slides)
- [J] Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, and Priyadarshini Panda. Federated Learning with Spiking Neural Networks. In IEEE Transactions on Signal Processing (2021) (Link, Code)
- [J] Youngeun Kim, and Priyadarshini Panda. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing. In Neural Networks (Elsevier) (2021) (Link)
- [J] Youngeun Kim, and Priyadarshini Panda. Visual Explanations from Spiking Neural Networks using Interspike Intervals. In Nature Scientific Reports (2021) (Link, Code, Slides)
- [J] Abhishek Moitra, and Priyadarshini Panda. DetectX: Adversarial Input Detection using Current Signatures in Memristive XBar Arrays. In IEEE Transactions on Circuits and Systems I: Regular Papers (2021) (Link, Code)
- [J] Rachel Sterneck, Abhishek Moitra, and Priyadarshini Panda. Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks. In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2021) (Link)
- [J] Abhiroop Bhattacharjee, Lakshya Bhatnagar, Youngeun Kim, and Priyadarshini Panda. NEAT: Non-linearity Aware Training for Accurate and Energy-Efficient Implementation of Neural Networks on 1T-1R Memristive Crossbars. In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2021) (Link)
- Abhishek Moitra, and Priyadarshini Panda. Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks. arXiv preprint arXiv:2011.13392 (2020)
- [C] Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks. In Design, Automation and Test in Europe Conference (2021) (Link)
- [C] Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks. In Design, Automation and Test in Europe Conference (2021) (Link)
- [C] Timothy Foldy-Porto, Yeshwanth Venkatesha, and Priyadarshini Panda. Activation Density driven Energy-Efficient Pruning in Training. In International Conference on Pattern Recognition (2020) (Link, Slides)
- [C] Priyadarshini Panda. QUANOS: Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks. In ACM/IEEE International Symposium on Low Power Electronics and Design (2020) (Link, Slides)
Selected Works from Prof. Panda’s Ph.D.
- [J] Kaushik Roy, Akhilesh Jaiswal, and Priyadarshini Panda. Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617 (2019). doi:10.1038/s41586-019-1677-2.
- An influential article on the promise of spiking neural networks and neuromorphic computing to enable a future with green and sustainable AI.
- An online tutorial covering the article's perspectives on the neuromorphic computing field is available on YouTube.
- Presentation slides giving an overview of spiking neural networks are available here.
- [C] Priyadarshini Panda, Abhronil Sengupta, and Kaushik Roy. Conditional Deep Learning for Energy-efficient and Enhanced Pattern Recognition. In Design, Automation & Test in Europe Conference & Exhibition (DATE) (2016).
- First-of-its-kind work showcasing the early-exit and adaptive-computation philosophy for deep learning architectures.
- The early-exit technique was adopted in Distiller, Intel's open-source Python library for neural network compression (a minimal illustrative sketch of the early-exit idea follows below).
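
For readers unfamiliar with the early-exit idea behind this line of work, here is a minimal, illustrative PyTorch sketch: an auxiliary classifier attached to an early layer lets confident ("easy") inputs return a prediction without running the remaining layers. The two-stage architecture, module names, and the 0.9 confidence threshold are illustrative assumptions, not the paper's or Distiller's implementation.

```python
# Minimal sketch of early-exit (conditional) inference.
# All layer sizes and the confidence threshold are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Toy two-stage CNN with one auxiliary early-exit classifier."""

    def __init__(self, num_classes: int = 10, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold  # confidence needed to exit early (assumed value)
        # Stage 1: 3x32x32 input -> 16x16x16 feature map
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        self.exit1 = nn.Linear(16 * 16 * 16, num_classes)  # early-exit head
        # Stage 2: 16x16x16 -> 32x8x8 feature map
        self.stage2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        self.exit2 = nn.Linear(32 * 8 * 8, num_classes)  # final classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h1 = self.stage1(x)
        logits1 = self.exit1(h1.flatten(1))
        # At inference, stop here if the early head is confident enough.
        # (During training, both heads would normally be optimized jointly.)
        if not self.training:
            conf1 = F.softmax(logits1, dim=1).max(dim=1).values
            if bool((conf1 >= self.threshold).all()):
                return logits1
        h2 = self.stage2(h1)
        return self.exit2(h2.flatten(1))


# Usage: a single 32x32 RGB image, so the per-sample exit test is unambiguous.
model = EarlyExitNet().eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```

In practice the auxiliary and final heads are trained jointly (for example, with a weighted sum of their losses), and the exit threshold trades a small amount of accuracy for lower average compute per input.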