Focus & Approach

The Intelligent Computing Lab at Yale focuses on enabling enhanced learning capabilities in the next generation of intelligent computing systems through an algorithm-hardware co-design approach, outlined below.
  • Designing algorithmic methods to optimize the computational complexity of large-scale deep learning/spiking networks based on principled and interpretable metrics for robust, explainable, and energy-efficient AI.

  • Exploring novel non-von Neumann architecture solutions (such as analog dot-product engines) with both standard CMOS and emerging technologies for improved, energy-efficient AI hardware.

  • Designing novel hardware solutions (such as discretization) to address the algorithmic vulnerabilities (such as adversarial attacks and non-interpretability) of today's AI systems, and vice versa.

  • Exploring bio-plausible algorithms and hardware guided by natural intelligence (how the brain learns, the internal fabric of the brain, etc.) to define the next generation of robust and efficient AI systems that go beyond static vision recognition tasks, with the ability to perceive, reason, and decide autonomously in real time.

  • Building adaptive learning solutions that are secure and reliable for enabling ubiquitous intelligence centered on IoT devices.

Our goal is to co-design solutions that optimize the three-dimensional tradeoff between energy, accuracy/capability, and robustness through advancements in algorithms, architectures, and theory.
 

Research directions - Algorithms:

Learning algorithms for Accurate, Robust and Interpretable Deep Spiking Networks:

Spiking neural networks (SNNs) have emerged as next-generation deep learning models due to their substantial energy-efficiency benefits and biological plausibility. However, energy-efficient training and interpretability for SNNs remain immature compared to conventional deep learning, which limits their utility in real-world applications. To fill this gap, we designed new algorithms and architectures that improve the accuracy, robustness, and interpretability of deep SNNs. We demonstrated that SNNs have strong potential to match the performance of conventional deep learning frameworks. We also proposed a visualization tool for SNNs, the Spike Activation Map (SAM), and observed that SNNs provide reliable accuracy and explanations (i.e., heatmaps) under adversarial attacks.

[BNTT paper] [SAM paper] [Invited Talk by Prof. Panda]

Figure. (a) Illustration of an SNN with our proposed BNTT. (b) The average value of the BNTT parameter γ at each layer over all time-steps. (c) An early-exit time can be determined, since the γ values at every layer fall below the threshold after time-step 20.
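To make the scheme concrete, below is a minimal PyTorch sketch of the BNTT idea: each simulation time-step of a leaky integrate-and-fire (LIF) layer gets its own batch-norm (and hence its own learnable γ). The class and parameter names are ours, and the LIF dynamics are simplified; this is a sketch, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class BNTTConv(nn.Module):
    """Conv layer with Batch Norm Through Time (BNTT): one batch-norm,
    with its own learnable gamma, per simulation time-step.
    Minimal sketch; not the paper's reference implementation."""
    def __init__(self, in_ch, out_ch, timesteps, threshold=1.0, leak=0.95):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bntt = nn.ModuleList(nn.BatchNorm2d(out_ch) for _ in range(timesteps))
        self.threshold, self.leak = threshold, leak

    def forward(self, x_t, mem, t):
        # Accumulate the time-step-specific normalized input into the leaky membrane.
        mem = self.leak * mem + self.bntt[t](self.conv(x_t))
        spikes = (mem >= self.threshold).float()  # fire where the threshold is crossed
        mem = mem - spikes * self.threshold       # soft reset of fired neurons
        return spikes, mem

# Usage: run over T time-steps on a rate-coded (Bernoulli) input.
T = 25
layer = BNTTConv(3, 16, timesteps=T)
img = torch.rand(8, 3, 32, 32)
mem = torch.zeros(8, 16, 32, 32)
for t in range(T):
    x_t = (torch.rand_like(img) < img).float()  # stochastic spike encoding
    out, mem = layer(x_t, mem, t)
```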

Beyond Vision Applications with Spiking Neural Networks:

Spiking Neural Networks (SNNs) are one of the emerging technologies providing an energy-efficient alternative to traditional neural networks. As these are often targeted towards low-power edge devices, the models need to be regularly updated as new data is generated, without leaking sensitive data. Federated Learning offers a practical solution to train neural networks on edge devices while preserving data privacy. We explore state-of-the-art training methods in a federated setting and study the advantages of SNNs over ANNs.

Figure. Schematic diagram of Federated Learning.
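Below is a minimal sketch of the federated-averaging (FedAvg) server loop we assume in this setting. The helper names are ours, and standard PyTorch models and data loaders are assumed; the same loop applies whether the clients train SNNs or ANNs.

```python
import copy
import torch

def fedavg(global_model, client_loaders, rounds=10, local_epochs=1, lr=0.01):
    """Federated averaging: each client trains on its private data, and
    only model weights travel to the server; raw data never leaves the
    device. The loop is identical for SNN and ANN clients."""
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(rounds):
        client_states = []
        for loader in client_loaders:
            local = copy.deepcopy(global_model)       # ship global weights to the client
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            for _ in range(local_epochs):
                for x, y in loader:
                    opt.zero_grad()
                    loss_fn(local(x), y).backward()
                    opt.step()
            client_states.append(local.state_dict())  # only weights are shared
        # Server step: element-wise average of the client weights.
        avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
               for k in client_states[0]}
        global_model.load_state_dict(avg)
    return global_model
```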

Research directions - Hardware:

 

Improving Adversarial Robustness in Digital CMOS-based Accelerators / Analog Crossbar Arrays:

We use algorithm-hardware co-design approaches to improve adversarial robustness and to build energy-efficient solutions for adversarial input detection. Specific layers in digital CMOS-based accelerators can be designed using hybrid 8T-6T SRAM memories [1]. Likewise, using algorithmic metrics such as adversarial perturbations in intermediate layers can lead to lightweight and more structured adversarial input detection [2].
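As an illustrative sketch of such structured detection (simplified, with hypothetical names; not the exact method of [2]), one can flag inputs whose intermediate-layer activations deviate strongly, in z-score terms, from statistics collected on clean data:

```python
import torch

@torch.no_grad()
def fit_layer_stats(model, layer, clean_loader):
    """Collect the mean/std of one intermediate layer's activations on clean data."""
    feats = []
    hook = layer.register_forward_hook(lambda m, i, o: feats.append(o.flatten(1)))
    for x, _ in clean_loader:
        model(x)
    hook.remove()
    f = torch.cat(feats)
    return f.mean(0), f.std(0) + 1e-6

@torch.no_grad()
def flag_adversarial(model, layer, x, mean, std, z_thresh=4.0):
    """Flag inputs whose layer activations sit far (in mean |z-score|) from clean statistics."""
    feats = []
    hook = layer.register_forward_hook(lambda m, i, o: feats.append(o.flatten(1)))
    model(x)
    hook.remove()
    z = ((feats[0] - mean) / std).abs().mean(dim=1)  # one score per input
    return z > z_thresh                              # True = suspected adversarial
```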

Memristive crossbars based on emerging devices have been explored for the energy-efficient and compact implementation of neural networks on hardware. We investigate the cumulative impact of various resistive and device-level non-idealities in memristive crossbar arrays on the adversarially robust implementation of Deep Neural Networks (DNNs) on hardware [3]. To this end, we also explore a non-ideality- and Xbar-synapse-driven approach (termed SwitchX) for mapping weights onto crossbar arrays that increases the proportion of high-resistance synapses, achieving energy-efficient and robust implementations of DNNs at the same time [4]. Further, we extend our study to 1T-1R crossbar arrays and explore a non-linearity-aware DNN training method (NEAT) to mitigate the impact of transistor-induced non-linearities at lower gate voltages during DNN inference and arrive at an energy-efficient weight-mapping strategy [5]. We plan to extend this study further to consider the impact of resistive crossbar non-idealities on imparting robustness to DNNs on 1T-1R crossbars.
 
Figure: Hardware non-idealities inadvertently lead to defense via gradient obfuscation against adversarial perturbations, thereby bringing out adversarial resilience.
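A toy simulation of the effect under study: weights are mapped onto a differential pair of crossbar conductances and perturbed with a simple multiplicative read-noise model. The mapping scheme and noise parameters below are illustrative assumptions, not measured device values:

```python
import torch

def crossbar_matvec(W, x, g_min=1e-6, g_max=1e-4, read_noise=0.05):
    """Approximate y = W @ x on a memristive crossbar.

    Weights are linearly mapped onto a differential pair of conductances
    in [g_min, g_max]; multiplicative read noise stands in for device-level
    variations. Real non-idealities (IR drop, sneak paths, 1T-1R
    non-linearity) are richer than this toy model."""
    scale = (g_max - g_min) / W.abs().max()
    g_pos = g_min + scale * W.clamp(min=0)     # positive weights on one column
    g_neg = g_min + scale * (-W).clamp(min=0)  # negative weights on the other
    noisy = lambda g: g * (1 + read_noise * torch.randn_like(g))
    i_out = noisy(g_pos) @ x - noisy(g_neg) @ x  # analog differential dot product
    return i_out / scale                         # currents back to weight units

W, x = torch.randn(64, 128), torch.rand(128)
print((crossbar_matvec(W, x) - W @ x).abs().mean())  # deviation from the ideal result
```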
 
 

Hardware evaluation tool for large-scale Spiking Neural Networks:

We are designing an accurate energy-area-latency model for analog crossbar-based hardware accelerators custom-built for benchmarking Spiking Neural Networks. The model exploits the temporal and asynchronous properties of SNNs.
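A first-order sketch of the modeling principle, with illustrative (not measured) energy constants: because SNN inputs are binary spikes, a crossbar row is charged only when its input neuron fires, so the estimated energy scales with spike rate, array size, and the number of time-steps:

```python
def snn_crossbar_energy(spike_rates, layer_dims, timesteps,
                        e_read_per_row=5e-12, e_neuron=1e-12):
    """First-order inference-energy estimate (in joules) for an SNN
    mapped onto analog crossbars. A crossbar row is charged only when
    its input neuron spikes, so energy scales with spike rate, rows,
    and time-steps, unlike an ANN where every row is read per input.
    The per-row and per-neuron energy constants are illustrative."""
    total = 0.0
    for rate, (rows, cols) in zip(spike_rates, layer_dims):
        active_rows = rate * rows                           # expected spiking rows per step
        total += timesteps * (active_rows * e_read_per_row  # analog dot-product reads
                              + cols * e_neuron)            # neuron update per column
    return total

# Example: three fully-connected layers, 5-10% spike rates, 25 time-steps.
print(snn_crossbar_energy([0.10, 0.08, 0.05],
                          [(784, 512), (512, 256), (256, 10)], timesteps=25))
```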