Priyadarshini (Priya) Panda


Yale University

H-index: 36

North America-United States

About Priyadarshini (Priya) Panda

Priyadarshini (Priya) Panda, a distinguished researcher at Yale University with an h-index of 36 overall and 35 since 2020, specializes in Spiking Neural Networks, Neuromorphic Computing, Robust Deep Learning, and In-memory Computing.

Her recent articles reflect a diverse array of research interests and contributions to the field:

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

One-stage Prompt-based Continual Learning

RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems

Rethinking skip connections in Spiking Neural Networks with Time-To-First-Spike coding

Are SNNs Truly Energy-efficient?—A Hardware Perspective

SEENN: Towards Temporal Spiking Early Exit Neural Networks

PIVOT-Input-aware Path Selection for Energy-efficient ViT Inference

ClipFormer: Key-Value Clipping of Transformers on Memristive Crossbars for Write Noise Mitigation

Priyadarshini (Priya) Panda Information

University

Yale University

Position

Assistant Professor, Electrical Engineering

Citations (all)

5628

Citations (since 2020)

5372

Cited By

1418

h-index (all)

36

h-index (since 2020)

35

i10-index (all)

63

i10-index (since 2020)

62

Email

University Profile Page

Yale University

Priyadarshini (Priya) Panda Skills & Research Interests

Spiking Neural Networks

Neuromorphic Computing

Robust Deep Learning

In-memory Computing

Top articles of Priyadarshini (Priya) Panda

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

Authors

Donghyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda

Journal

arXiv preprint arXiv:2401.08001

Published Date

2024/1/15

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks, owing to their sparse binary activations. However, SNNs suffer from memory and computation overhead due to spatio-temporal dynamics and multiple backpropagation computations across timesteps during training. To address this issue, we introduce Tensor Train Decomposition for Spiking Neural Networks (TT-SNN), a method that reduces model size through trainable weight decomposition, resulting in reduced storage, FLOPs, and latency. In addition, we propose a parallel computation pipeline as an alternative to the typical sequential tensor computation, which can be flexibly integrated into various existing SNN architectures. To the best of our knowledge, this is the first such application of tensor decomposition in SNNs. We validate our method using both static and dynamic datasets, CIFAR10/100 and N-Caltech101, respectively. We also propose a TT-SNN-tailored training accelerator to fully harness the parallelism in TT-SNN. Our results demonstrate substantial reductions in parameter size (7.98X), FLOPs (9.25X), training time (17.7%), and training energy (28.3%) during training for the N-Caltech101 dataset, with negligible accuracy degradation.
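To make the parameter-saving argument concrete, the sketch below factors a 256x256 fully-connected weight into two small tensor-train (TT) cores and compares parameter counts; the mode sizes and TT-rank are illustrative assumptions, not values from the paper.

import numpy as np

# Factor a 256x256 weight as a 4-D tensor with modes (16, 16) x (16, 16).
in_modes, out_modes, rank = (16, 16), (16, 16), 8

# Two TT cores: G1 has shape (1, i1, o1, r), G2 has shape (r, i2, o2, 1).
g1 = np.random.randn(1, in_modes[0], out_modes[0], rank) * 0.01
g2 = np.random.randn(rank, in_modes[1], out_modes[1], 1) * 0.01

def tt_to_dense(g1, g2):
    """Contract the two TT cores back into a dense (256, 256) weight."""
    w4d = np.einsum('aijr,rklb->ikjl', g1, g2)      # (i1, i2, o1, o2)
    return w4d.reshape(in_modes[0] * in_modes[1], out_modes[0] * out_modes[1])

dense_params = 256 * 256
tt_params = g1.size + g2.size
print(f"dense: {dense_params} params, TT: {tt_params} params "
      f"({dense_params / tt_params:.1f}x smaller)")
print("reconstructed weight shape:", tt_to_dense(g1, g2).shape)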

One-stage Prompt-based Continual Learning

Authors

Youngeun Kim, Yuhang Li, Priyadarshini Panda

Journal

arXiv preprint arXiv:2402.16189

Published Date

2024/2/25

Prompt-based Continual Learning (PCL) has gained considerable attention as a promising continual learning solution, as it achieves state-of-the-art performance while preventing privacy violation and memory overhead issues. Nonetheless, existing PCL approaches face significant computational burdens because of two Vision Transformer (ViT) feed-forward stages: one is the query ViT that generates a prompt query to select prompts from a prompt pool; the other is the backbone ViT that mixes information between the selected prompts and image tokens. To address this, we introduce a one-stage PCL framework that directly uses an intermediate layer's token embedding as the prompt query. This design removes the need for an additional feed-forward stage for the query ViT, resulting in ~50% computational cost reduction for both training and inference with a marginal accuracy drop of <1%. We further introduce a Query-Pool Regularization (QR) loss that regulates the relationship between the prompt query and the prompt pool to improve representation power. The QR loss is only applied during training, so it adds no computational overhead at inference. With the QR loss, our approach maintains the ~50% computational cost reduction during inference while also outperforming prior two-stage PCL methods by ~1.4% on public class-incremental continual learning benchmarks including CIFAR-100, ImageNet-R, and DomainNet.
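A minimal sketch of the prompt-selection step under the one-stage idea described above: the query is a token embedding taken from an intermediate layer rather than the output of a separate query-ViT pass. The pool size, prompt length, and top-k below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
embed_dim, pool_size, prompt_len, top_k = 768, 10, 5, 3

prompt_keys = rng.normal(size=(pool_size, embed_dim))              # learnable keys
prompt_pool = rng.normal(size=(pool_size, prompt_len, embed_dim))  # prompt tokens

def select_prompts(intermediate_token):
    """Pick the top-k prompts whose keys best match the intermediate-layer query."""
    q = intermediate_token / np.linalg.norm(intermediate_token)
    k = prompt_keys / np.linalg.norm(prompt_keys, axis=1, keepdims=True)
    sims = k @ q                                       # cosine similarity per key
    chosen = np.argsort(sims)[-top_k:]
    return prompt_pool[chosen].reshape(-1, embed_dim)  # tokens prepended to image tokens

query = rng.normal(size=embed_dim)   # stand-in for a mid-layer token embedding
print("selected prompt tokens:", select_prompts(query).shape)  # (top_k * prompt_len, embed_dim)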

RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems

Authors

Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

Journal

IEEE Transactions on Emerging Topics in Computational Intelligence

Published Date

2024/2/16

In practical cloud-edge scenarios, where a resource-constrained edge device performs data acquisition and a cloud system (with sufficient resources) performs inference with a deep neural network (DNN), adversarial robustness is critical for reliability and ubiquitous deployment. Adversarial detection is a prime adversarial defense technique used in prior literature. However, in prior detection works, the detector is attached to the classifier model, and the two operate in tandem to perform adversarial detection; this requires a high computational overhead that is not available at the low-power edge. Therefore, prior works can only perform adversarial detection at the cloud and not at the edge. This means that, in case of adversarial attacks, the unfavourable adversarial samples must be communicated to the cloud, which leads to energy wastage at the edge device. Therefore, a low-power edge-friendly …

Rethinking skip connections in Spiking Neural Networks with Time-To-First-Spike coding

Authors

Youngeun Kim, Adar Kahana, Ruokai Yin, Yuhang Li, Panos Stinis, George Em Karniadakis, Priyadarshini Panda

Journal

Frontiers in Neuroscience

Published Date

2024/2/14

Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency, closely mimicking the behavior of biological neurons. In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct types of skip connection architectures: (1) addition-based skip connections, and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in terms of spike timing. On the other hand, concatenation-based skip connections circumvent this delay but produce time gaps between the post-convolution and skip-connection paths, thereby restricting the effective mixing of information from these two paths. To mitigate these issues, we propose a novel approach involving a learnable delay for skip connections in the concatenation-based architecture. This approach successfully bridges the time gap between the convolutional and skip branches, facilitating improved information mixing. We conduct experiments on public datasets including MNIST and Fashion-MNIST, illustrating the advantage of skip connections in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding beyond image recognition by extending it to scientific machine-learning tasks, broadening the potential uses of SNNs.
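The toy example below illustrates, under assumed values, how TTFS coding maps input intensity to a spike time and how a delay on the skip branch can close the time gap with the slower convolutional path; the linear latency code and the scalar delay are assumptions for illustration only.

import numpy as np

T = 16  # number of timesteps

def ttfs_encode(x):
    """Stronger inputs fire earlier: spike time = (1 - x) * (T - 1)."""
    x = np.clip(x, 0.0, 1.0)
    return np.round((1.0 - x) * (T - 1)).astype(int)

def delay_branch(spike_times, learnable_delay):
    """Shift a branch's spike times so they align with the slower path."""
    return np.clip(spike_times + int(round(learnable_delay)), 0, T - 1)

pixels = np.array([0.9, 0.5, 0.1])
skip_times = ttfs_encode(pixels)              # bright pixels spike first
conv_times = skip_times + 4                   # conv path adds processing latency
aligned_skip = delay_branch(skip_times, 4.0)  # learnable delay closes the gap
print("skip:", skip_times, "conv:", conv_times, "delayed skip:", aligned_skip)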

Are SNNs Truly Energy-efficient?—A Hardware Perspective

Authors

Abhiroop Bhattacharjee, Ruokai Yin, Abhishek Moitra, Priyadarshini Panda

Published Date

2024/4/14

Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities, utilizing bio-inspired activation functions and sparse binary spike-data representations. While recent SNN algorithmic advances achieve high accuracy on large-scale computer vision tasks, their energy-efficiency claims rely on certain impractical estimation metrics. This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim. SATA is a sparsity-aware systolic-array accelerator, while SpikeSim evaluates SNNs implemented on In-Memory Computing (IMC) based analog crossbars. Using these tools, we find that the actual energy-efficiency improvements of recent SNN algorithmic works differ significantly from their estimated values due to various hardware bottlenecks. We identify and address key roadblocks to efficient SNN deployment on hardware …

SEENN: Towards Temporal Spiking Early Exit Neural Networks

Authors

Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda

Journal

Advances in Neural Information Processing Systems

Published Date

2024/2/13

Spiking Neural Networks (SNNs) have recently become more popular as a biologically plausible substitute for traditional Artificial Neural Networks (ANNs). SNNs are cost-efficient and deployment-friendly because they process inputs in both a spatial and a temporal manner using binary spikes. However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff. In this work, we study a fine-grained adjustment of the number of timesteps in SNNs. Specifically, we treat the number of timesteps as a variable conditioned on different input samples to reduce redundant timesteps for certain data. We call our method Spiking Early-Exit Neural Networks (SEENNs). To determine the appropriate number of timesteps, we propose SEENN-I, which uses confidence score thresholding to filter out uncertain predictions, and SEENN-II, which determines the number of timesteps by reinforcement learning. Moreover, we demonstrate that SEENN is compatible with both directly trained SNNs and ANN-SNN conversion. By dynamically adjusting the number of timesteps, our SEENN achieves a remarkable reduction in the average number of timesteps during inference. For example, our SEENN-II ResNet-19 can achieve 96.1% accuracy with an average of 1.08 timesteps on the CIFAR-10 test dataset. Code is shared at https://github.com/Intelligent-Computing-Lab-Yale/SEENN.
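A hedged sketch of the confidence-thresholding flavor of early exit (in the spirit of SEENN-I): accumulate the output logits over timesteps and stop as soon as the softmax confidence clears a threshold. The threshold, the accumulation rule, and the shapes are illustrative assumptions.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_inference(per_timestep_logits, threshold=0.9):
    """Return (prediction, timesteps used) once confidence clears the threshold."""
    accumulated = np.zeros_like(per_timestep_logits[0])
    for t, logits in enumerate(per_timestep_logits, start=1):
        accumulated += logits              # accumulate the SNN output over time
        probs = softmax(accumulated / t)
        if probs.max() >= threshold:
            return int(probs.argmax()), t  # confident enough: exit early
    return int(probs.argmax()), t          # otherwise use all timesteps

rng = np.random.default_rng(1)
logits_over_time = [rng.normal(size=10) + 2.0 * np.eye(10)[3] for _ in range(8)]
print(early_exit_inference(logits_over_time))   # e.g. (3, <=8 timesteps)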

PIVOT-Input-aware Path Selection for Energy-efficient ViT Inference

Authors

Abhishek Moitra, Abhiroop Bhattacharjee, Priyadarshini Panda

Journal

arXiv preprint arXiv:2404.15185

Published Date

2024/4/10

The attention module in vision transformers (ViTs) performs intricate spatial correlations, contributing significantly to accuracy and delay. It is thereby important to modulate the number of attentions according to the input feature complexity for optimal delay-accuracy tradeoffs. To this end, we propose PIVOT - a co-optimization framework which selectively performs attention skipping based on the input difficulty. For this, PIVOT employs a hardware-in-loop co-search to obtain optimal attention skip configurations. Evaluations on the ZCU102 MPSoC FPGA show that PIVOT achieves 2.7x lower EDP at 0.2% accuracy reduction compared to LVViT-S ViT. PIVOT also achieves 1.3% and 1.8x higher accuracy and throughput than prior works on traditional CPUs and GPUs. The PIVOT project can be found at https://github.com/Intelligent-Computing-Lab-Yale/PIVOT.

ClipFormer: Key-Value Clipping of Transformers on Memristive Crossbars for Write Noise Mitigation

Authors

Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Journal

arXiv preprint arXiv:2402.02586

Published Date

2024/2/4

Transformers have revolutionized various real-world applications from natural language processing to computer vision. However, the traditional von Neumann computing paradigm faces memory and bandwidth limitations in accelerating transformers owing to their massive model sizes. To this end, In-memory Computing (IMC) crossbars based on Non-volatile Memories (NVMs), due to their ability to perform highly parallelized Matrix-Vector-Multiplications (MVMs) with high energy-efficiencies, have emerged as a promising solution for accelerating transformers. However, analog MVM operations in crossbars introduce non-idealities, such as stochastic read & write noise, which affect the inference accuracy of the deployed transformers. Specifically, we find pre-trained Vision Transformers (ViTs) to be vulnerable on crossbars due to the impact of write noise on the dynamically-generated Key (K) and Value (V) matrices in the attention layers, an effect not accounted for in prior studies. We, thus, propose ClipFormer, a transformation on the K and V matrices during inference, to boost the non-ideal accuracies of pre-trained ViT models. ClipFormer requires no additional hardware or training overhead and is amenable to transformers deployed on any memristive crossbar platform. Our experiments on the ImageNet-1k dataset using pre-trained DeiT-S transformers, subjected to standard training and variation-aware training, show >10-40% higher non-ideal accuracies in the high write noise regime by applying ClipFormer.
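A minimal sketch of the clipping idea, assuming a symmetric clip range and a toy write-noise model whose perturbation scales with the stored magnitude; neither matches the paper's exact transformation or noise characterization.

import numpy as np

rng = np.random.default_rng(0)

def clip_kv(k, v, clip_range=2.0):
    """Bound the dynamically generated K and V before the crossbar write."""
    return np.clip(k, -clip_range, clip_range), np.clip(v, -clip_range, clip_range)

def noisy_write(mat, noise_frac=0.1):
    """Toy write-noise model: perturbation proportional to the stored magnitude."""
    return mat + rng.normal(size=mat.shape) * noise_frac * np.abs(mat)

k = rng.normal(scale=3.0, size=(4, 8))          # stand-in Key activations
v = rng.normal(scale=3.0, size=(4, 8))          # stand-in Value activations
k_clip, v_clip = clip_kv(k, v)

err_raw = np.abs(noisy_write(k) - k).mean()
err_clip = np.abs(noisy_write(k_clip) - k_clip).mean()
print(f"mean write error, unclipped K: {err_raw:.3f}, clipped K: {err_clip:.3f}")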

A collective AI via lifelong learning and sharing at the edge

Authors

Andrea Soltoggio,Eseoghene Ben-Iwhiwhu,Vladimir Braverman,Eric Eaton,Benjamin Epstein,Yunhao Ge,Lucy Halperin,Jonathan How,Laurent Itti,Michael A Jacobs,Pavan Kantharaju,Long Le,Steven Lee,Xinran Liu,Sildomar T Monteiro,David Musliner,Saptarshi Nath,Priyadarshini Panda,Christos Peridis,Hamed Pirsiavash,Vishwa Parekh,Kaushik Roy,Shahaf Shperberg,Hava T Siegelmann,Peter Stone,Kyle Vedder,Jingfeng Wu,Lin Yang,Guangyao Zheng,Soheil Kolouri

Journal

Nature Machine Intelligence

Published Date

2024/3

One vision of a future artificial intelligence (AI) is where many separate units can learn independently over a lifetime and share their knowledge with each other. The synergy between lifelong learning and sharing has the potential to create a society of AI systems, as each individual unit can contribute to and benefit from the collective knowledge. Essential to this vision are the abilities to learn multiple skills incrementally during a lifetime, to exchange knowledge among units via a common language, to use both local data and communication to learn, and to rely on edge devices to host the necessary decentralized computation and data. The result is a network of agents that can quickly respond to and learn new tasks, that collectively hold more knowledge than a single agent and that can extend current knowledge in more diverse ways than a single agent. Open research questions include when and what knowledge …

Workload-balanced pruning for sparse spiking neural networks

Authors

Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda

Journal

arXiv preprint arXiv:2302.06746

Published Date

2023/2/13

Pruning for Spiking Neural Networks (SNNs) has emerged as a fundamental methodology for deploying deep SNNs on resource-constrained edge devices. Though existing pruning methods can provide extremely high weight sparsity for deep SNNs, that high weight sparsity brings a workload-imbalance problem. Specifically, workload imbalance happens when different numbers of non-zero weights are assigned to hardware units running in parallel, which results in low hardware utilization and thus imposes longer latency and higher energy costs. In preliminary experiments, we show that sparse SNNs (98% weight sparsity) can suffer utilization as low as 59%. To alleviate the workload-imbalance problem, we propose u-Ticket, where we monitor and adjust the weight connections of the SNN during Lottery Ticket Hypothesis (LTH) based pruning, thus guaranteeing that the final ticket achieves optimal utilization when deployed onto the hardware. Experiments indicate that our u-Ticket can guarantee up to 100% hardware utilization, thus reducing latency by up to 76.9% and energy cost by up to 63.8% compared to the non-utilization-aware LTH method.
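The sketch below illustrates the utilization metric and the effect of balancing per-unit non-zero counts; global magnitude pruning and per-row top-k pruning are used here as simplified stand-ins for LTH-based pruning and u-Ticket's utilization-aware adjustment.

import numpy as np

rng = np.random.default_rng(4)

def utilization(nonzeros_per_unit):
    """Utilization = average per-unit workload / maximum per-unit workload."""
    nz = np.asarray(nonzeros_per_unit, dtype=float)
    return nz.mean() / nz.max()

w = rng.normal(size=(8, 128))                 # 8 parallel units, one row each
sparsity = 0.9
keep_total = int(w.size * (1 - sparsity))     # weights kept in the whole layer

# Unstructured (global top-k) pruning: rows end up with uneven non-zero counts.
flat_order = np.argsort(np.abs(w).ravel())
global_mask = np.zeros(w.size, dtype=bool)
global_mask[flat_order[-keep_total:]] = True
w_unstructured = w * global_mask.reshape(w.shape)

# Workload-balanced pruning: every row keeps the same number of weights.
keep_per_row = keep_total // w.shape[0]
balanced_mask = np.zeros_like(w, dtype=bool)
for r in range(w.shape[0]):
    balanced_mask[r, np.argsort(np.abs(w[r]))[-keep_per_row:]] = True
w_balanced = w * balanced_mask

print("utilization, unstructured:", round(utilization((w_unstructured != 0).sum(axis=1)), 2))
print("utilization, balanced    :", round(utilization((w_balanced != 0).sum(axis=1)), 2))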

Mint: Multiplier-less integer quantization for spiking neural networks

Authors

Ruokai Yin, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

Journal

arXiv preprint arXiv:2305.09850

Published Date

2023/5/16

We propose Multiplier-less INTeger (MINT) quantization, an efficient uniform quantization scheme for the weights and membrane potentials in spiking neural networks (SNNs). Unlike prior SNN quantization works, MINT quantizes the memory-hungry membrane potentials to an extremely low bit-width (2-bit) to significantly reduce the total memory footprint. Additionally, MINT quantization shares the quantization scale between the weights and membrane potentials, eliminating the need for multipliers and floating-point arithmetic units, which are required by standard uniform quantization. Experimental results demonstrate that our proposed method achieves accuracy that matches other state-of-the-art SNN quantization works while outperforming them on total memory footprint and hardware cost at deployment time. For instance, 2-bit MINT VGG-16 achieves 48.6% accuracy on TinyImageNet (0.28% better than the full-precision baseline) with an approximately 93.8% reduction in total memory footprint from the full-precision model; meanwhile, our model reduces area by 93% and dynamic power by 98% compared to other SNN quantization counterparts.
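A hedged sketch of the shared-scale idea: when weights and the membrane potential share one quantization scale, the LIF accumulation stays in plain integer arithmetic with no rescaling multipliers. The bit-width, scale, and threshold below are illustrative assumptions (MINT itself quantizes the membrane potential down to 2 bits).

import numpy as np

rng = np.random.default_rng(5)

def quantize(x, scale, bits):
    """Uniform symmetric quantization to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)

scale = 0.05                                        # one scale shared by weights and potential
w_int = quantize(rng.normal(loc=0.15, scale=0.1, size=8), scale, bits=4)
theta_int = int(round(1.0 / scale))                 # firing threshold in the shared scale
spikes_in = np.array([1, 0, 1, 1, 0, 1, 0, 1])      # binary presynaptic spikes

u_int = 0                                           # integer membrane potential
for t in range(4):
    u_int += int(w_int[spikes_in == 1].sum())       # additions only: no multipliers
    if u_int >= theta_int:
        print(f"t={t}: spike")
        u_int = 0                                   # hard reset after firing
    else:
        print(f"t={t}: no spike (u_int={u_int})")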

Hardware Accelerators for Spiking Neural Networks for Energy-Efficient Edge Computing

Authors

Abhishek Moitra, Ruokai Yin, Priyadarshini Panda

Published Date

2023/6/5

Spiking Neural Networks (SNNs) have gained significant attention as an energy-efficient machine learning solution. SNNs process data over multiple timesteps using biologically inspired non-linear activation functions such as Leaky-Integrate-and-Fire (LIF) neurons. At each timestep, the data is encoded as spike (binary 1) or no-spike (binary 0) information. The sparse temporal spike-data representation can potentially offer several hardware benefits: 1) it entails a completely multiplier-less computation unit (only accumulators are required), unlike artificial neural networks (ANNs), which require multipliers and accumulators for matrix-vector multiplications; 2) the binary nature of SNNs significantly reduces the on-chip memory required to store intermediate layer activations during SNN processing. These features add to the energy efficiency of SNN algorithms. Fortunately, over the last few years, there have been huge …
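A small sketch of why binary spikes make an SNN layer multiplier-less: the weighted input sum reduces to adding together the weight rows of the inputs that spiked. The layer sizes, leak factor, and threshold are assumed values, not ones from this work.

import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(scale=0.5, size=(16, 32))     # 16 inputs -> 32 LIF neurons
leak, threshold, T = 0.9, 1.0, 8

in_spikes = rng.random((T, 16)) < 0.2        # binary input spike trains
u = np.zeros(32)                             # membrane potentials

for t in range(T):
    # Equivalent to in_spikes[t] @ W, but in hardware only additions are needed,
    # because each input is either 0 (skip) or 1 (add the corresponding weight row).
    u = leak * u + W[in_spikes[t]].sum(axis=0)
    out_spikes = u >= threshold              # binary output spikes
    u = np.where(out_spikes, 0.0, u)         # reset the neurons that fired
    print(f"t={t}: {int(out_spikes.sum())} output neurons spiked")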

Robustness for Embedded Machine Learning Using In-Memory Computing

Authors

Priyadarshini Panda, Abhiroop Bhattacharjee, Abhishek Moitra

Published Date

2023/10/7

Deep Neural Networks (DNNs) have achieved superhuman-like performance in several real-world applications such as classification and segmentation, among others. Recently, analog crossbar architectures have been proposed as a viable in-memory computing alternative to improve the compute efficiency of DNNs for low-power embedded applications. Although DNNs have achieved high performance, recent works have shown that they are vulnerable to adversarial attacks, where small, imperceptible noise added to the input data can degrade DNN performance. To this end, prior works have proposed algorithmic strategies such as adversarial classification and detection to mitigate the effect of adversarial attacks. However, these approaches are not energy-efficient and suffer from performance degradation when naively implemented on analog crossbars, which inherently exhibit various non-idealities. To this end, in …

DeepCAM: A Fully CAM-based Inference Accelerator with Variable Hash Lengths for Energy-efficient Deep Neural Networks

Authors

Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Published Date

2023/4/17

With ever-increasing depth and width in deep neural networks to achieve state-of-the-art performance, deep learning computation has grown significantly, and dot-products remain dominant in overall computation time. Most prior works are built on the conventional dot-product, where a weighted input summation is used to represent the neuron operation. However, another implementation of the dot-product, based on the notion of angles and magnitudes in Euclidean space, has attracted limited attention. This paper proposes DeepCAM, an inference accelerator built on two critical innovations to alleviate the computation time bottleneck of convolutional neural networks. The first innovation is an approximate dot-product built on computations in Euclidean space that can replace addition and multiplication with simple bit-wise operations. The second innovation is a dynamic size content addressable memory-based (CAM-based …
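The following is a loose, hedged illustration of a dot-product estimated from angles and magnitudes: random-hyperplane sign hashes (an assumption made here purely for illustration, not the paper's hashing or CAM design) give bit-wise comparable codes whose Hamming distance approximates the angle between two vectors.

import numpy as np

rng = np.random.default_rng(7)

def angle_hash(x, planes):
    """One bit per random hyperplane: the sign pattern of the projections."""
    return (planes @ x) >= 0

def approx_dot(a, b, hash_len=256):
    planes = rng.normal(size=(hash_len, a.size))
    ha, hb = angle_hash(a, planes), angle_hash(b, planes)
    theta = np.pi * np.mean(ha != hb)          # Hamming fraction -> estimated angle
    return np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

a, b = rng.normal(size=64), rng.normal(size=64)
print("exact:", round(float(a @ b), 2), " approx:", round(float(approx_dot(a, b)), 2))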

Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

Authors

Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

Journal

arXiv preprint arXiv:2304.13098 (Accepted to TMLR 2023)

Published Date

2023/4/25

Spiking Neural Networks (SNNs) are recognized as a candidate for next-generation neural networks due to their bio-plausibility and energy efficiency. Recently, researchers have demonstrated that SNNs are able to achieve nearly state-of-the-art performance in image recognition tasks using surrogate gradient training. However, some essential questions pertaining to SNNs remain little studied: Do SNNs trained with surrogate gradient learn different representations from traditional Artificial Neural Networks (ANNs)? Does the time dimension in SNNs provide unique representation power? In this paper, we aim to answer these questions by conducting a representation similarity analysis between SNNs and ANNs using Centered Kernel Alignment (CKA). We start by analyzing the spatial dimension of the networks, including both the width and the depth. Furthermore, our analysis of residual connections shows that SNNs learn a periodic pattern, which rectifies the representations in SNNs to be ANN-like. We additionally investigate the effect of the time dimension on SNN representation, finding that deeper layers encourage more dynamics along the time dimension. We also investigate the impact of input data such as event-stream data and adversarial attacks. Our work uncovers a host of new findings about representations in SNNs. We hope this work will inspire future research to fully comprehend the representation power of SNNs. Code is released at https://github.com/Intelligent-Computing-Lab-Yale/SNNCKA.
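For reference, a minimal implementation of linear Centered Kernel Alignment (CKA), the similarity index used in this kind of analysis; the activation shapes are illustrative.

import numpy as np

def linear_cka(x, y):
    """Linear CKA between two activation matrices of shape (num_examples, num_features)."""
    x = x - x.mean(axis=0)                        # center each feature
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, 'fro') ** 2    # cross-covariance energy
    norm_x = np.linalg.norm(x.T @ x, 'fro')
    norm_y = np.linalg.norm(y.T @ y, 'fro')
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(8)
acts_ann = rng.normal(size=(100, 64))
acts_snn = acts_ann @ rng.normal(size=(64, 64)) * 0.1 + rng.normal(size=(100, 64))
print("CKA(ann, snn):", round(float(linear_cka(acts_ann, acts_snn)), 3))
print("CKA(ann, ann):", round(float(linear_cka(acts_ann, acts_ann)), 3))  # identical reps -> 1.0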

Examining the role and limits of batchnorm optimization to mitigate diverse hardware-noise in in-memory computing

Authors

Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

Published Date

2023/6/5

In-Memory Computing (IMC) platforms such as analog crossbars are gaining focus as they facilitate the acceleration of low-precision Deep Neural Networks (DNNs) with high area- & compute-efficiencies. However, the intrinsic non-idealities in crossbars, which are often non-deterministic and non-linear, degrade the performance of the deployed DNNs. In addition to quantization errors, most frequently encountered non-idealities during inference include crossbar circuit-level parasitic resistances and device-level non-idealities such as stochastic read noise and temporal drift. In this work, our goal is to closely examine the distortions caused by these non-idealities on the dot-product operations in analog crossbars and explore the feasibility of a nearly training-less solution via crossbar-aware fine-tuning of batchnorm parameters in real-time to mitigate the impact of the non-idealities. This enables reduction in hardware …
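A hedged sketch of the batchnorm-only adaptation idea: keep the weights fixed and re-estimate the batchnorm statistics from a small calibration batch run through the non-ideal hardware. The gain/offset/noise model of the crossbar below is a simplified assumption, not the paper's non-ideality characterization.

import numpy as np

rng = np.random.default_rng(9)

def crossbar_matmul(x, w, gain, offset, noise_frac=0.05):
    """Toy non-ideal MVM: per-column gain/offset drift plus proportional read noise."""
    y = (x @ w) * gain + offset
    return y + rng.normal(size=y.shape) * noise_frac * np.abs(y)

def batchnorm(y, mean, var, eps=1e-5):
    """Normalize with given statistics (affine parameters omitted for brevity)."""
    return (y - mean) / np.sqrt(var + eps)

x = rng.normal(size=(256, 32))                       # calibration batch
w = rng.normal(size=(32, 16)) * 0.2
gain = rng.normal(loc=0.85, scale=0.05, size=16)     # assumed device drift per column
offset = rng.normal(scale=0.1, size=16)

ideal = x @ w
noisy = crossbar_matmul(x, w, gain, offset)

stale = batchnorm(noisy, ideal.mean(axis=0), ideal.var(axis=0))    # software-trained stats
adapted = batchnorm(noisy, noisy.mean(axis=0), noisy.var(axis=0))  # re-estimated on-hardware
print("stale BN   -> mean %.3f, std %.3f" % (stale.mean(), stale.std()))
print("adapted BN -> mean %.3f, std %.3f" % (adapted.mean(), adapted.std()))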

Overview of Recent Advancements in Deep Learning and Artificial Intelligence

Authors

Vijaykrishnan Narayanan, Yu Cao, Priyadarshini Panda, Nagadastagiri Reddy Challapalle, Xiaocong Du, Youngeun Kim, Gokul Krishnan, Chonghan Lee, Yuhang Li, Jingbo Sun, Yeshwanth Venkatesha, Zhenyu Wang, Yi Zheng

Journal

Advances in Electromagnetics Empowered by Artificial Intelligence and Deep Learning

Published Date

2023/9/26

Artificial intelligence (AI) systems have made a significant impact on society in recent years across a wide range of fields, including healthcare, transportation, and finance. In healthcare, AI systems are used to diagnose diseases and develop new medicines. Autonomous vehicles based on AI solutions are being developed to revolutionize transportation. These advances have been driven by the development of deep learning algorithms, the availability of large amounts of data, and new designs of powerful computing hardware. This chapter explores the milestone methods and algorithms in deep learning, as well as the different hardware designs that can be used to accelerate these algorithms.

MCAIMem: a Mixed SRAM and eDRAM Cell for Area and Energy-efficient on-chip AI Memory

Authors

Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Journal

arXiv preprint arXiv:2312.03559

Published Date

2023/12/6

AI chips commonly employ SRAM memory as buffers for their reliability and speed, which contribute to high performance. However, SRAM is expensive and demands significant area and energy consumption. Previous studies have explored replacing SRAM with emerging technologies like non-volatile memory, which offers fast read access and a small cell area. Despite these advantages, non-volatile memory's slow write access and high write energy prevent it from surpassing SRAM performance in AI applications with extensive memory access requirements. Some research has also investigated eDRAM as an area-efficient on-chip memory with access times similar to SRAM. Still, refresh power remains a concern, leaving the trade-off between performance, area, and power consumption unresolved. To address this issue, our paper presents a novel mixed CMOS cell memory design that balances performance, area, and energy efficiency for AI memory by combining SRAM and eDRAM cells. We use a ratio of one SRAM cell to seven eDRAM cells in the memory to achieve area reduction with the mixed CMOS cell memory. Additionally, we capitalize on the characteristics of DNN data representation and integrate asymmetric eDRAM cells to lower energy consumption. To validate our proposed MCAIMem solution, we conduct extensive simulations and benchmarking against traditional SRAM. Our results demonstrate that MCAIMem significantly outperforms these alternatives in terms of area and energy efficiency. Specifically, our MCAIMem can reduce area by 48% and energy consumption by 3.4x compared to SRAM designs, without incurring any accuracy loss.

Efficient Human Activity Recognition with Spatio-Temporal Spiking Neural Networks

Authors

Yuhang Li, Ruokai Yin, Youngeun Kim, Priyadarshini Panda

Journal

Frontiers in Neuroscience

Published Date

2023

In this study, we explore Human Activity Recognition (HAR), a task that aims to predict individuals' daily activities utilizing time series data obtained from wearable sensors for health-related applications. Although recent research has predominantly employed end-to-end Artificial Neural Networks (ANNs) for feature extraction and classification in HAR, these approaches impose a substantial computational load on wearable devices and exhibit limitations in temporal feature extraction due to their activation functions. To address these challenges, we propose the application of Spiking Neural Networks (SNNs), an architecture inspired by the characteristics of biological neurons, to HAR tasks. SNNs accumulate input activation as presynaptic potential charges and generate a binary spike upon surpassing a predetermined threshold. This unique property facilitates spatio-temporal feature extraction and confers the advantage of low-power computation attributable to binary spikes. We conduct rigorous experiments on three distinct HAR datasets using SNNs, demonstrating that our approach attains competitive or superior performance relative to ANNs, while concurrently reducing energy consumption by up to 94%.

Neurobench: Advancing neuromorphic computing through collaborative, fair and representative benchmarking

Authors

Jason Yik,Soikat Hasan Ahmed,Zergham Ahmed,Brian Anderson,Andreas G Andreou,Chiara Bartolozzi,Arindam Basu,Douwe den Blanken,Petrut Bogdan,Sander Bohte,Younes Bouhadjar,Sonia Buckley,Gert Cauwenberghs,Federico Corradi,Guido de Croon,Andreea Danielescu,Anurag Daram,Mike Davies,Yigit Demirag,Jason Eshraghian,Jeremy Forest,Steve Furber,Michael Furlong,Aditya Gilra,Giacomo Indiveri,Siddharth Joshi,Vedant Karia,Lyes Khacef,James C Knight,Laura Kriener,Rajkumar Kubendran,Dhireesha Kudithipudi,Gregor Lenz,Rajit Manohar,Christian Mayr,Konstantinos Michmizos,Dylan Muir,Emre Neftci,Thomas Nowotny,Fabrizio Ottati,Ayca Ozcelikkale,Noah Pacik-Nelson,Priyadarshini Panda,Sun Pao-Sheng,Melika Payvand,Christian Pehle,Mihai A Petrovici,Christoph Posch,Alpha Renner,Yulia Sandamirskaya,Clemens JS Schaefer,André van Schaik,Johannes Schemmel,Catherine Schuman,Jae-sun Seo,Sumit Bam Shrestha,Manolis Sifalakis,Amos Sironi,Kenneth Stewart,Terrence C Stewart,Philipp Stratmann,Guangzhi Tang,Jonathan Timcheck,Marian Verhelst,Craig M Vineyard,Bernhard Vogginger,Amirreza Yousefzadeh,Biyan Zhou,Fatima Tuz Zohora,Charlotte Frenkel,Vijay Janapa Reddi

Journal

arXiv preprint arXiv:2304.04640

Published Date

2023/4/10

The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and the industry, to define benchmarks for neuromorphic computing: NeuroBench. The goals of NeuroBench are to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions, and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.


Priyadarshini (Priya) Panda FAQs

What is Priyadarshini (Priya) Panda's h-index at Yale University?

Priyadarshini (Priya) Panda's h-index is 36 overall and 35 since 2020.

What are Priyadarshini (Priya) Panda's top articles?

The top articles of Priyadarshini (Priya) Panda at Yale University include:

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

One-stage Prompt-based Continual Learning

RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems

Rethinking skip connections in Spiking Neural Networks with Time-To-First-Spike coding

Are SNNs Truly Energy-efficient?—A Hardware Perspective

SEENN: Towards Temporal Spiking Early Exit Neural Networks

PIVOT-Input-aware Path Selection for Energy-efficient ViT Inference

ClipFormer: Key-Value Clipping of Transformers on Memristive Crossbars for Write Noise Mitigation

...


What are Priyadarshini (Priya) Panda's research interests?

The research interests of Priyadarshini (Priya) Panda are: Spiking Neural Networks, Neuromorphic Computing, Robust Deep Learning, and In-memory Computing.

What is Priyadarshini (Priya) Panda's total number of citations?

Priyadarshini (Priya) Panda has 5,628 citations in total.

Who are the co-authors of Priyadarshini (Priya) Panda?

The co-authors of Priyadarshini (Priya) Panda are Kaushik Roy, Anand Raghunathan, Karin M. Rabe, Shriram Ramanathan, Fan Zuo, and Abhronil Sengupta.

Co-Authors

Kaushik Roy, Purdue University (h-index: 123)

Anand Raghunathan, Purdue University (h-index: 84)

Karin M. Rabe, Rutgers, The State University of New Jersey (h-index: 80)

Shriram Ramanathan, Purdue University (h-index: 71)

Fan Zuo, Indiana State University (h-index: 37)

Abhronil Sengupta, Penn State University (h-index: 35)
