• A comprehensive list of publications is available here.
  • An open-source GitHub repository for selected projects from our lab can be found here.

2023

  • [J] Li, Yuhang, Youngeun Kim, Hyoungseob Park, and Priyadarshini Panda. Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient. Transactions on Machine Learning Research (2023). (Link, Code)
  • [J] Moitra, Abhishek, Abhiroop Bhattacharjee, Runcong Kuang, Gokul Krishnan, Yu Cao, and Priyadarshini Panda. SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks. IEEE Transactions on Computer-Aided Design (2023). (Link, Code)
  • [C] Moitra, Abhishek, Abhiroop Bhattacharjee, Youngeun Kim, and Priyadarshini Panda. XPert: Peripheral Circuit & Neural Architecture Co-search for Area and Energy-efficient Xbar-based Computing. Accepted in Design Automation Conference (2023). (Link, Code) Acceptance Rate 23%.
  • [J] Bhattacharjee, Abhiroop, Abhishek Moitra, and Priyadarshini Panda. XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars. ACM Transactions on Embedded Computing Systems (2023). 
  • [C] Li, Yuhang, Abhishek Moitra, Tamar Geller, and Priyadarshini Panda. Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing. Accepted in Design Automation Conference (2023). (Link, Code will be made available soon) Acceptance Rate 23%.
  • [C] Bhattacharjee, Abhiroop, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, and Priyadarshini Panda. Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing. In GLSVLSI (2023). (Link)
  • [C] Moitra, Abhishek, Ruokai Yin, and Priyadarshini Panda. Hardware Accelerators for Spiking Neural Networks for Energy-Efficient Edge Computing. In GLSVLSI (2023). (Link)
  • [C] Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda. Exploring Temporal Information Dynamics in Spiking Neural Networks. Accepted to AAAI Conference on Artificial Intelligence (2023). (Link, Code) Acceptance Rate 19.6%
  • [C] Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra and Priyadarshini Panda. DeepCAM: A fully CAM-based inference accelerator with variable hash lengths for energy-efficient deep neural networks. Accepted to Design, Automation and Test in Europe Conference (2023). Acceptance Rate 25%

2022

  • [J] Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, and Priyadarshini Panda. SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks. In IEEE Transactions on Computer-aided Design of Integrated Circuits and Systems (TCAD) (2022). (Link, Code)
  • [J] Abhiroop Bhattacharjee and Priyadarshini Panda. SwitchX: Gmin-Gmax Switching for Energy-Efficient and Robust Implementation of Binary Neural Networks on Memristive Xbars. In ACM Transactions on Design Automation of Electronic Systems (2022).
  • [C] Yuhang Li, Ruokai Yin, Hyoungseob Park, Youngeun Kim, Priyadarshini Panda. Wearable-based Human Activity Recognition with Spatio-Temporal Spiking Neural Networks. In NeurIPS 2022 Workshops (2022). Spotlight (Link)
  • [J] Youngeun Kim, Joshua Chough, and Priyadarshini Panda. Beyond classification: Directly training spiking neural networks for semantic segmentation. In IOP Neuromorphic Computing and Engineering (2022) (Link)
  • [C] Abhiroop Bhattacharjee, Youngeun Kim, Abhishek Moitra and Priyadarshini Panda. Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars. In ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED) (2022). (Link) Acceptance Rate 22%, Best Paper Award!
  • [C] Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, and Priyadarshini Panda. Neural Architecture Search for Spiking Neural Networks. In European Conference on Computer Vision (ECCV) (2022). Acceptance Rate 28% (Link, Code)
  • [C] Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, and Priyadarshini Panda. Neuromorphic Data Augmentation for Training Spiking Neural Networks. In European Conference on Computer Vision (ECCV) (2022). Acceptance Rate 28%  (Link, Code will be available soon)
  • [C] Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, and Priyadarshini Panda. Exploring Lottery Ticket Hypothesis in Spiking Neural Networks. In European Conference on Computer Vision (ECCV) (2022). Oral Presentation. Acceptance Rate 2.7% (Link, Code)
  • [C] Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Abhishek Moitra and Priyadarshini Panda. MIME: Adapting a Single Neural Network for Multi-task Inference with Memory-efficient Dynamic Pruning. In Design Automation Conference (2022). (Link) Acceptance Rate 23%
  • [C] Youngeun Kim, Hyoungseob Park, Abhishek Moitra, Abhiroop Bhattacharjee, Yeshwanth Venkatesha, and Priyadarshini Panda. Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2022). (Link, Code)
  • [C] Youngeun Kim, Yeshwanth Venkatesha, and Priyadarshini Panda. PrivateSNN: Fully Privacy-Preserving Spiking Neural Networks. In AAAI Conference on Artificial Intelligence (2022). (Link, Code) Acceptance Rate 15%
  • [C] Abhiroop Bhattacharjee, Lakshya Bhatnagar, and Priyadarshini Panda. Examining and Mitigating the Impact of Crossbar Non-idealities for Accurate Implementation of Sparse Deep Neural Networks. In Design, Automation and Test in Europe Conference (2022). (Link) Acceptance Rate 22%
  • [C] Youngeun Kim, Hyunsoo Kim, Seijoon Kim, Sang Joon Kim, and Priyadarshini Panda. Gradient-based Bit Encoding Optimization for Noise-Robust Binary Memristive Crossbar. In Design, Automation and Test in Europe Conference (2022). (Link) Acceptance Rate 22%

2021 & 2020

  • [J] Youngeun Kim, and Priyadarshini Panda. Revisiting Batch Normalization for Training Low-latency Deep Spiking Neural Networks from Scratch. In Frontiers in Neuroscience (2021) (Code, Slides)
  • [J] Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, and Priyadarshini Panda. Federated Learning with Spiking Neural Networks. In IEEE Transactions on Signal Processing (2021) (Link, Code)
  • [J] Youngeun Kim, and Priyadarshini Panda. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing. In Neural Networks (Elsevier) (2021) (Link)
  • [J] Youngeun Kim, and Priyadarshini Panda. Visual Explanations from Spiking Neural Networks using Interspike Intervals. In Nature Scientific Reports (2021) (Link, Code, Slides)
  • [J] Abhishek Moitra, and Priyadarshini Panda. DetectX–Adversarial Input Detection using Current Signatures in Memristive XBar Arrays. In IEEE Transactions on Circuits and Systems I: Regular Papers (2021) (Link, Code)
  • [J] Rachel Sterneck, Abhishek Moitra, and Priyadarshini Panda. Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks. In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2021) (Link)
  • [J] Abhiroop Bhattacharjee, Lakshya Bhatnagar, Youngeun Kim, and Priyadarshini Panda. NEAT: Non-linearity Aware Training for Accurate and Energy-Efficient Implementation of Neural Networks on 1T-1R Memristive Crossbars. In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2021) (Link)
  • Abhishek Moitra, and Priyadarshini Panda. Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks. arXiv preprint arXiv:2011.13392 (2020)
  • [C] Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks. In Design, Automation and Test in Europe Conference (2021) (Link)
  • [C] Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks. In Design, Automation and Test in Europe Conference (2021) (Link)
  • [C] Timothy Foldy-Porto, Yeshwanth Venkatesha, and Priyadarshini Panda. Activation Density driven Energy-Efficient Pruning in Training. In International Conference on Pattern Recognition (2020) (Link, Slides)
  • [C] Priyadarshini Panda. QUANOS: Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks. In ACM/IEEE International Symposium on Low Power Electronics and Design (2020) (Link, Slides)

Selected Publications from 2019 and Prior


  • Kaushik Roy, Akhilesh Jaiswal, and Priyadarshini Panda. Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617 (2019) doi:10.1038/s41586-019-1677-2.  
    • An online tutorial covering the article's perspectives on the neuromorphic computing field is available on YouTube.
    • Presentation slides giving an overview of spiking neural networks are available here.
  • Fan Zuo*, Priyadarshini Panda*, Michele Kotiuga, Jiarui Li, Mingu Kang, Claudio Mazzoli, Hua Zhou et al. Habituation based synaptic plasticity and organismic learning in a quantum perovskite. Nature Communications 8, no. 1 (2017): 240 (*Equal author contributions).
  • Priyadarshini Panda, Swagath Venkataramani, Abhronil Sengupta, Anand Raghunathan, and Kaushik Roy. Energy-efficient object detection using semantic decomposition. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, doi:10.1109/TVLSI.2017.2707077, 25(9):2673–2677, Sept 2017.
  • Priyadarshini Panda, Indranil Chakraborty, and Kaushik Roy. Discretization based Solutions for Secure Machine Learning against Adversarial Attacks. IEEE Access (2019).
  • Abhronil Sengupta, Priyadarshini Panda, Parami Wijesinghe, Yusung Kim, and Kaushik Roy. Magnetic tunnel junction mimics stochastic cortical spiking neurons. Scientific Reports (2016): 30039.
  • Deboleena Roy, Priyadarshini Panda, and Kaushik Roy. Tree-CNN: A hierarchical deep convolutional neural network for incremental learning. arXiv preprint arXiv:1802.05800, 2018; In Neural Networks (Elsevier), 2019.
  • Chankyu Lee, Priyadarshini Panda, Gopalakrishnan Srinivasan, and Kaushik Roy. Training deep spiking convolutional neural networks with STDP-based unsupervised pre-training followed by supervised fine-tuning. Frontiers in Neuroscience, 12:435, 2018.


  • Priyadarshini Panda, and Kaushik Roy. Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. In 2016 International Joint Conference on Neural Networks (IJCNN), pp. 299-306. IEEE, 2016.
  • Priyadarshini Panda and Kaushik Roy. Implicit generative modeling of random noise during training for adversarial robustness. arXiv preprint arXiv:1807.02188; In ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning.
  • Priyadarshini Panda, Abhronil Sengupta, Syed Shakib Sarwar, Gopalakrishnan Srinivasan, Swagath Venkataramani, Anand Raghunathan, and Kaushik Roy. Cross-layer approximations for neuromorphic computing: From devices to circuits and systems. In 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), pp. 1-6. IEEE, 2016.