Focus & Approach

The Intelligent Computing Lab at Yale focuses on enabling enhanced learning capabilities in the next generation of intelligent computing systems through an algorithm-hardware co-design approach, as outlined below.
  • Designing algorithmic methods to optimize the compute complexity of large-scale deep learning/spiking networks based on principled and interpretable metrics for robust, explainable and energy-efficient AI.

  • Exploring novel non-von Neumann architecture solutions (such as analog dot-product engines) with both standard CMOS and emerging technologies for improved, energy-efficient AI hardware.

  • Designing novel hardware solutions (such as discretization) to address the algorithmic vulnerabilities (such as adversarial attacks and non-interpretability) of today's AI systems, and vice versa.

  • Exploring bio-plausible algorithms and hardware guided by natural intelligence (how the brain learns, the internal fabric of the brain, etc.) to define the next generation of robust and efficient AI systems that go beyond static vision recognition tasks, with the ability to perceive, reason, and decide autonomously in real time.

  • Building adaptive learning solutions that are secure and reliable for enabling ubiquitous intelligence centered around IoT devices.

Our goal is to co-design solutions that optimize the three-dimensional tradeoff between energy, accuracy/capability, and robustness through advances in algorithms, architectures, and theory.
 

 

Select directions from Priya Panda’s past research:

Energy-efficient implementation of deep learning networks

Here, we designed algorithms and modified network architectures to reduce the computational requirements of a given network without compromising its accuracy. Our proposals are based on the observation that current training processes often produce deep learning networks that are more complex than necessary: for example, they spend the same computational effort on every input, even though the inherent difficulty of classification varies greatly across inputs. We demonstrated the hardware energy benefits on a custom CMOS neuromorphic engine designed to take advantage of our proposed scalable classification strategy.

[DATE 2016] [IEEE TCAD 2017]  [ISLPED 2017] [DAC 2016]
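To make the scalable-effort idea concrete, below is a minimal Python sketch of a confidence-gated cascade: cheap classifier stages handle easy inputs, and only hard inputs propagate to the more expensive stages. The stage callables, the softmax-confidence criterion, and the 0.9 threshold are illustrative assumptions, not the exact formulation from the cited papers.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cascade_predict(x, stages, conf_threshold=0.9):
    """Run increasingly expensive classifier stages until one is confident.

    `stages` is a list of callables mapping an input to class logits,
    ordered cheapest to costliest (an illustrative assumption). Easy
    inputs exit early, so average compute tracks input difficulty.
    """
    for i, stage in enumerate(stages):
        probs = softmax(stage(x))
        if probs.max() >= conf_threshold or i == len(stages) - 1:
            return int(probs.argmax()), i  # predicted class, exit stage index

# Toy usage: three random linear "stages" over a 16-dimensional input.
rng = np.random.default_rng(0)
stages = [lambda v, W=rng.standard_normal((16, 10)): v @ W for _ in range(3)]
pred, exit_stage = cascade_predict(rng.standard_normal(16), stages)
```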

Learning algorithms for spiking neural networks (supervised/unsupervised)

We explored both supervised backpropagation and unsupervised spike-timing-dependent plasticity (STDP) training to develop deep convolutional spiking frameworks for image classification tasks. We demonstrated the energy-accuracy tradeoff benefits of spiking networks over conventional deep learning frameworks using a reconfigurable, scalable architecture comprising memristive (or emerging-technology) crossbar arrays. We also explored the suitability of emerging technologies (particularly spintronic devices) for mimicking neuronal/synaptic functions.

[IJCNN 2016] [ACM JETC 2018] [Frontiers in Neurosci. 2018] [DAC 2017]
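As a pointer to what these frameworks compute, here is a minimal numpy sketch of two of the ingredients mentioned above: a leaky integrate-and-fire (LIF) layer driven by binary input spikes, and a pair-based STDP weight update. The leak factor, threshold, and STDP constants are illustrative placeholders rather than values from the cited works.

```python
import numpy as np

def lif_layer(spikes_in, weights, v_th=1.0, leak=0.95):
    """Leaky integrate-and-fire layer: accumulate weighted input spikes,
    leak the membrane potential each step, and fire/reset at threshold."""
    t_steps, n_out = spikes_in.shape[0], weights.shape[1]
    v = np.zeros(n_out)
    spikes_out = np.zeros((t_steps, n_out))
    for t in range(t_steps):
        v = leak * v + spikes_in[t] @ weights   # integrate weighted input
        fired = v >= v_th
        spikes_out[t] = fired
        v[fired] = 0.0                          # reset membrane after a spike
    return spikes_out

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise (exponential timing window)."""
    dt = t_post - t_pre
    if dt >= 0:
        return w + a_plus * np.exp(-dt / tau)
    return w - a_minus * np.exp(dt / tau)
```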

An online tutorial on an upcoming Nature paper covering perspectives on the neuromorphic computing field is available at https://www.youtube.com/watch?v=HnxkQvPcdXs. Recommended for anyone interested in learning about neuromorphic algorithms and hardware.

Continual/Incremental Learning Strategies for Deep Learning/Spiking Neural Networks

A fundamental feature of learning in animals is the "ability to forget," which allows an organism to perceive, model, and make decisions from disparate streams of information and adapt to changing environments. Against this backdrop, we presented a novel unsupervised learning mechanism that improves recognition with spiking networks for real-time online learning in a dynamic environment. We also devised an incremental learning strategy to add new classes and grow deep learning models without catastrophic forgetting.

[Nature Commun. 2017] [JETCAS 2017] [Neural Networks Journal 2019]
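The class-growing idea can be sketched as follows: append one output column per new class and fit only that column on the new-class data, leaving the previously learned columns (and a frozen feature extractor upstream) untouched so earlier classes are not overwritten. Freezing everything old and using a squared-error fit is the simplest possible illustration, not the strategy of the cited papers.

```python
import numpy as np

class GrowingClassifier:
    """Linear readout on frozen features; grows one column per new class."""

    def __init__(self, feature_dim):
        self.w = np.zeros((feature_dim, 0))   # one column per known class

    def add_class(self, feats_new, lr=0.1, epochs=50):
        """Append one output column and fit it on examples of the new class
        only, leaving old columns untouched (so old classes are preserved)."""
        col = 0.01 * np.random.randn(self.w.shape[0], 1)
        self.w = np.hstack([self.w, col])
        for _ in range(epochs):
            scores = feats_new @ self.w                # (n_samples, n_classes)
            target = np.zeros_like(scores)
            target[:, -1] = 1.0                        # new class is the last column
            grad = feats_new.T @ (scores - target) / len(feats_new)
            self.w[:, -1:] -= lr * grad[:, -1:]        # update only the new column

    def predict(self, feats):
        return (feats @ self.w).argmax(axis=1)
```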

 

 

Adversarial Attacks and Robustness in Deep Learning/Spiking Neural Networks

We explore the efficacy of discretization techniques, such as input and weight quantization, and the implications of using stochastic spiking models (primarily employed to deploy energy-efficient networks) for defending against adversarial attacks. We also introduce techniques for training neural networks that are intrinsically robust to adversarial attacks.

[ICML 2019 Workshop] [IEEE Access 2019] [IJCNN 2019]
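Below is a minimal sketch of the discretization idea as a deployment-time step: uniformly quantizing inputs and weights to a few levels, so that small adversarial perturbations are often snapped back into the original quantization bin. The 4-level / 4-bit settings and the uniform quantizers are illustrative assumptions, not the exact schemes studied in the cited papers.

```python
import numpy as np

def quantize_input(x, levels=4):
    """Uniformly quantize inputs in [0, 1] to `levels` discrete values.
    Small adversarial perturbations often fall within one quantization bin
    and are rounded back to the clean pixel value."""
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * (levels - 1)) / (levels - 1)

def quantize_weights(w, num_bits=4):
    """Symmetric uniform weight quantization to `num_bits` bits."""
    q_max = 2 ** (num_bits - 1) - 1
    w_abs_max = np.abs(w).max()
    scale = w_abs_max / q_max if w_abs_max > 0 else 1.0
    return np.round(w / scale).clip(-q_max, q_max) * scale
```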

 

Using Liquid State Machines as an energy-efficient alternative for recurrent learning tasks

We designed learning algorithms for a novel spiking computing model, the Liquid State Machine (LSM), which offers an efficient, low-complexity architecture for sequential spatio-temporal recognition tasks compared to recurrent neural networks (RNNs). We also derived metrics to quantify the sparse connectivity of LSMs and relate it to their energy-accuracy tradeoff benefits.

[Frontiers in Neurosci. 2017] [Frontiers in Neurosci. 2018] [Frontiers in Neurosci. 2019]
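For readers unfamiliar with the model, here is a minimal numpy sketch of the LSM pipeline: a fixed, sparsely and randomly connected recurrent reservoir maps an input sequence into a high-dimensional state trajectory, and only a linear readout on those states is trained. The rate-style tanh units, the sparsity level, and the ridge-regression readout are simplifications of the spiking reservoirs in the cited works.

```python
import numpy as np

def run_reservoir(inputs, n_reservoir=200, sparsity=0.1, leak=0.3, seed=0):
    """Drive a fixed, sparsely connected random reservoir with an input
    sequence (shape: time x features) and return the state at every step;
    only the readout on top of these states is trained."""
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    w_in = rng.standard_normal((n_in, n_reservoir)) * 0.5
    w_res = rng.standard_normal((n_reservoir, n_reservoir))
    w_res *= rng.random((n_reservoir, n_reservoir)) < sparsity   # sparse recurrence
    w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))      # scale for stability
    states, x = [], np.zeros(n_reservoir)
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(u @ w_in + x @ w_res)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-3):
    """Ridge-regression readout mapping reservoir states to targets."""
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + ridge * np.eye(n), states.T @ targets)
```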