Focus & Approach

An open-source GitHub repository for selected projects from our lab can be found here.

The Intelligent Computing Lab at Yale is focused on enabling enhanced learning capabilities in the next generation of intelligent computing systems through a neuromorphic algorithm-hardware co-design approach, as outlined below.

  • Designing algorithms for large-scale spiking neural networks (SNNs) and deep neural networks (DNNs), including transformers, to optimize their compute complexity during training and inference based on principled metrics such as robustness, energy efficiency, and memory usage for on-device applications.
  • Designing energy-aware algorithm-hardware co-search and co-optimization frameworks for DNNs/SNNs that perform pruning, quantization, neural architecture search, input-aware dynamic computation, etc., for learning or inference in edge AI.
  • Developing compute-in-memory accelerators and benchmarking simulators based on emerging non-volatile memory (eNVM) technologies and standard CMOS memory (SRAM) for energy-efficient and non-ideality-aware AI hardware.
  • Neuromorphic custom IC design and prototype chip tapeout of in-memory and near-memory processors with energy-efficient co-design strategies for SNN/DNN workloads.
  • Exploring novel computing scenarios such as event-data processing, multi-modal learning, continual learning, and federated/distributed learning, guided by natural intelligence (how the brain learns, the internal fabric of the brain, etc.), to define the next generation of robust and efficient AI systems with the ability to perceive, reason, and decide in real time.

These goals require co-designed solutions that optimize the three-dimensional tradeoff between energy, accuracy, and robustness through advances spanning algorithms, architectures, systems, and circuits.

Research Projects

Algorithms, Applications and Fundamental Theory for SNNs

Spiking neural networks (SNNs) have emerged as next-generation deep learning models due to their substantial energy-efficiency benefits and biological plausibility. However, energy-efficient training for SNNs is still immature compared to conventional deep learning, which limits their utility in real-world applications. To fill this gap, we are designing new algorithms, architectures, and applications, grounded in fundamental theory, that improve the accuracy, robustness, and efficiency of SNNs.
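
To give a flavor of the underlying mechanics, below is a minimal sketch of a leaky integrate-and-fire (LIF) neuron trained with a surrogate gradient, one common approach to SNN training; the decay, threshold, and surrogate shape here are illustrative assumptions, not the settings used in our work.

    import torch

    class SpikeFn(torch.autograd.Function):
        """Heaviside spike in the forward pass; smooth surrogate in the backward pass."""
        @staticmethod
        def forward(ctx, mem, threshold):
            ctx.save_for_backward(mem)
            ctx.threshold = threshold
            return (mem >= threshold).float()

        @staticmethod
        def backward(ctx, grad_out):
            (mem,) = ctx.saved_tensors
            # Fast-sigmoid surrogate: the gradient peaks at the firing threshold.
            surrogate = 1.0 / (1.0 + 10.0 * (mem - ctx.threshold).abs()) ** 2
            return grad_out * surrogate, None

    def lif_step(x, mem, decay=0.9, threshold=1.0):
        """One LIF timestep: leaky integration, spike generation, soft reset."""
        mem = decay * mem + x                  # leak, then integrate input current
        spike = SpikeFn.apply(mem, threshold)  # fire where membrane crosses threshold
        mem = mem - spike * threshold          # soft reset of neurons that fired
        return spike, mem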

Hardware Accelerators and Benchmarking Simulators for SNNs

To truly leverage the energy-efficiency benefits of SNNs for large-scale AI applications, we need benchmarking tools and hardware simulators that enable tradeoff analyses across energy, latency, area, and accuracy for different SNN models on various tasks. We are designing end-to-end hardware benchmarking accelerators to understand the benefits and limitations of SNNs and, in turn, to develop novel hardware-aware algorithms.
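
As a toy version of such an analysis, the sketch below estimates layer energy from spike counts; the per-operation energy values are rough figures commonly quoted for ~45 nm CMOS, and every parameter is an assumption chosen for illustration.

    # Illustrative per-operation energies (order of magnitude, ~45 nm CMOS).
    E_AC = 0.9e-12   # J per accumulate (spike-driven, SNN)
    E_MAC = 4.6e-12  # J per multiply-accumulate (dense DNN baseline)

    def snn_layer_energy(spike_rate, fan_out, neurons, timesteps):
        """SNN energy scales with the number of spike-triggered accumulates."""
        accumulates = spike_rate * neurons * fan_out * timesteps
        return accumulates * E_AC

    def dnn_layer_energy(fan_out, neurons):
        """Dense baseline: every synapse performs one MAC per inference."""
        return neurons * fan_out * E_MAC

    # One tradeoff point: a sparse SNN (10% spike rate, 4 timesteps)
    # versus its dense ANN counterpart for the same layer shape.
    snn = snn_layer_energy(spike_rate=0.1, fan_out=1024, neurons=512, timesteps=4)
    dnn = dnn_layer_energy(fan_out=1024, neurons=512)
    print(f"SNN: {snn:.2e} J, DNN: {dnn:.2e} J, ratio: {dnn / snn:.1f}x")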

Energy-Efficient and Robust Algorithm-Hardware Co-Design Methodologies for Edge AI

Developing energy-efficient and robust algorithm-hardware co-exploration/co-design methods for edge AI is crucial for unlocking AI's potential at the edge. This means algorithms such as SNNs, CNNs, and Transformers working hand in hand with specialized hardware such as systolic arrays, compute-in-memory (CiM), FPGAs, and Raspberry Pi-class devices, making edge devices more capable, robust, and sustainable. These advances will transform areas like IoT, autonomous vehicles, and healthcare, among others.
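
As a minimal sketch of two of these knobs, the snippet below applies magnitude pruning and uniform symmetric quantization to a weight tensor; the sparsity level and bit width are arbitrary examples, not a recipe from our frameworks.

    import torch

    def magnitude_prune(weight, sparsity=0.5):
        """Zero out the smallest-magnitude weights (simple global pruning)."""
        k = int(weight.numel() * sparsity)
        threshold = weight.abs().flatten().kthvalue(k).values
        mask = (weight.abs() > threshold).float()
        return weight * mask, mask

    def quantize_uniform(weight, bits=8):
        """Symmetric uniform quantization to signed integers."""
        qmax = 2 ** (bits - 1) - 1
        scale = weight.abs().max() / qmax
        q = torch.clamp(torch.round(weight / scale), -qmax, qmax)
        return q * scale  # dequantized view, handy for accuracy evaluation

    w = torch.randn(256, 256)
    w_pruned, mask = magnitude_prune(w, sparsity=0.7)  # keep largest 30% of weights
    w_quant = quantize_uniform(w_pruned, bits=4)       # then quantize to 4 bits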

Practical and Novel Learning Scenarios for On-Device or Edge AI

Training AI models directly on edge devices, with both tangible and forward-looking methods, is crucial for on-device intelligence. We investigate practical, context-aware learning schemes tailored to different AI paradigms, integrating multi-modal and continual learning into the edge computing landscape. By emphasizing realistic on-device learning scenarios, we support the effective deployment of SNNs, DNNs, and Transformers for edge applications; a small continual-learning sketch follows the list below.

  • Multi-Modal Learning
  • Continual Learning
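
As one concrete example behind the continual-learning bullet above, here is a minimal sketch of a reservoir-sampled rehearsal buffer, a standard way to mitigate catastrophic forgetting on memory-constrained devices; the capacity and batch size are illustrative assumptions.

    import random
    import torch

    class ReplayBuffer:
        """Rehearsal memory: replay a small sample of past data alongside new tasks."""
        def __init__(self, capacity=500):
            self.capacity, self.seen, self.data = capacity, 0, []

        def add(self, x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((x, y))
            else:  # reservoir sampling keeps a uniform sample over everything seen
                i = random.randrange(self.seen)
                if i < self.capacity:
                    self.data[i] = (x, y)

        def sample(self, batch_size=32):
            batch = random.sample(self.data, min(batch_size, len(self.data)))
            xs, ys = zip(*batch)
            return torch.stack(xs), torch.stack(ys)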

Federated Learning and Distributed Edge-Cloud Efficiency

This research explores collaborative machine learning, focusing on federated learning techniques and their optimization for distributed edge-cloud environments. We study how to train machine learning models across a network of decentralized edge devices while ensuring efficiency, privacy, and security.
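
As a concrete reference point, the sketch below implements one aggregation round of federated averaging (FedAvg), the canonical algorithm in this space; the helper name and the assumption of float-valued parameters are ours for illustration.

    import copy
    import torch

    def federated_average(global_model, client_states, client_sizes):
        """One FedAvg round: average client state_dicts weighted by local data size."""
        total = sum(client_sizes)
        avg = copy.deepcopy(client_states[0])
        for key in avg:
            # Weighted sum; assumes float parameters (integer buffers need care).
            avg[key] = sum(
                state[key] * (n / total)
                for state, n in zip(client_states, client_sizes)
            )
        global_model.load_state_dict(avg)
        return global_model

    # Usage with two hypothetical clients sharing one architecture:
    model = torch.nn.Linear(8, 2)
    clients = [copy.deepcopy(model).state_dict() for _ in range(2)]
    model = federated_average(model, clients, client_sizes=[100, 300])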

Neuromorphic Custom Circuit Design & Chip Tapeout

The integration of on-chip learning with a hybrid combination of Spiking Neural Networks (SNNs) and Artificial Neural Networks (ANNs) opens exciting possibilities for versatile and adaptable AI systems. Additionally, the implementation of compute-in-memory techniques and SNN prototypes marks tangible progress toward energy-efficient, high-performance neuromorphic hardware. These efforts will not only advance our understanding of neuro-inspired computing but also hold great promise for applications ranging from robotics to cognitive computing; a simplified crossbar sketch follows the list below.

  • On-chip Learning with SNN-ANN Hybrid Macros
  • Compute-In-Memory Implementation & Prototype of SNN
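
As one simplified view of the compute-in-memory bullet above, the sketch below emulates an analog matrix-vector multiply on a differential eNVM crossbar with read noise; the conductance range and noise level are illustrative assumptions, not measured device data.

    import torch

    def crossbar_matvec(weight, x, g_min=1e-6, g_max=1e-4, noise_std=0.05):
        """Analog MAC on a crossbar: weights as conductances, noise as non-ideality."""
        w_max = weight.abs().max()
        # Differential pair: positive and negative weights on separate columns.
        g_pos = g_min + (g_max - g_min) * torch.clamp(weight, min=0) / w_max
        g_neg = g_min + (g_max - g_min) * torch.clamp(-weight, min=0) / w_max
        # Multiplicative read noise on every device conductance.
        g_pos = g_pos * (1 + noise_std * torch.randn_like(g_pos))
        g_neg = g_neg * (1 + noise_std * torch.randn_like(g_neg))
        # Ohm's law + Kirchhoff's current law: currents sum along each bitline.
        i_out = (g_pos - g_neg) @ x
        # Rescale the column currents back to the weight domain.
        return i_out * w_max / (g_max - g_min)

    w = torch.randn(128, 256)          # layer weights mapped onto the array
    x = torch.rand(256)                # input activations as read voltages
    y_noisy = crossbar_matvec(w, x)    # compare against the ideal w @ x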