Ramalingam Chellappa

Johns Hopkins University

H-index: 143

United States (North America)

Professor Information

University: Johns Hopkins University
Position: Bloomberg Distinguished Professor
Citations (all): 97,152
Citations (since 2020): 32,026
Cited by: 77,567
h-index (all): 143
h-index (since 2020): 77
i10-index (all): 718
i10-index (since 2020): 360
University profile page: Johns Hopkins University

Research Interests

Image Analysis
Artificial Intelligence
Biometrics
Computer Vision
Biomedical Data Science

Top articles of Ramalingam Chellappa

DiffProtect: Generate adversarial examples with diffusion models for facial privacy protection

Increasingly pervasive facial recognition (FR) systems raise serious concerns about personal privacy, especially for the billions of users who have publicly shared their photos on social media. Several attempts have been made to protect individuals from being identified by unauthorized FR systems by using adversarial attacks to generate encrypted face images. However, existing methods suffer from poor visual quality or low attack success rates, which limits their utility. Recently, diffusion models have achieved tremendous success in image generation. In this work, we ask: can diffusion models be used to generate adversarial examples that improve both visual quality and attack performance? We propose DiffProtect, which utilizes a diffusion autoencoder to generate semantically meaningful perturbations on FR systems. Extensive experiments demonstrate that DiffProtect produces more natural-looking encrypted images than state-of-the-art methods while achieving significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.

Authors

Jiang Liu,Chun Pong Lau,Rama Chellappa

Journal

arXiv preprint arXiv:2305.13625

Published Date

2023/5/23
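
The core idea in the abstract above, perturbing a compact latent code and decoding it back to an image so that a face recognition model no longer matches the original identity, can be illustrated with a short sketch. Everything below is a minimal, hypothetical stand-in: the tiny Encoder and Decoder substitute for the diffusion autoencoder, fr_model stands in for a real FR embedder, and the loss weights and step counts are arbitrary.

    # Illustrative sketch only: latent-space adversarial optimization against a face
    # recognition (FR) embedding. All modules below are toy placeholders, not DiffProtect.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):            # placeholder for a semantic encoder
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):            # placeholder for a conditional decoder
        def __init__(self):
            super().__init__()
            self.net = nn.Linear(128, 3 * 64 * 64)
        def forward(self, z):
            return torch.sigmoid(self.net(z)).view(-1, 3, 64, 64)

    encoder, decoder = Encoder(), Decoder()
    fr_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))  # stand-in FR embedder

    face = torch.rand(1, 3, 64, 64)                      # image to protect
    target_emb = F.normalize(fr_model(face).detach(), dim=-1)

    z = encoder(face).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=0.01)

    for _ in range(50):
        protected = decoder(z)                           # decode the perturbed semantic code
        emb = F.normalize(fr_model(protected), dim=-1)
        # Push the protected image's FR embedding away from the original identity,
        # while keeping the decoded image close to the input for visual quality.
        loss = F.cosine_similarity(emb, target_emb).mean() + 10.0 * F.mse_loss(protected, face)
        opt.zero_grad()
        loss.backward()
        opt.step()

    protected_image = decoder(z).detach()                # "encrypted" face to share

The two loss terms mirror the trade-off the abstract describes: the cosine term drives the attack, while the reconstruction term preserves visual quality.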

GADER: GAit DEtection and Recognition in the Wild

Gait recognition holds the promise of robustly identifying subjects based on their walking patterns rather than color information. While previous approaches have performed well for curated indoor scenes, their applicability is significantly impeded in unconstrained situations, e.g., outdoor, long-distance scenes. We propose an end-to-end GAit DEtection and Recognition (GADER) algorithm for human authentication in challenging outdoor scenarios. Specifically, GADER leverages a Double Helical Signature to detect the fragment of human movement and incorporates a novel gait recognition method, which learns representations by distilling from an auxiliary RGB recognition model. At inference time, GADER uses only the silhouette modality but benefits from a more robust representation. Extensive experiments on indoor and outdoor datasets demonstrate that the proposed method outperforms the state of the art for gait recognition and verification, with a significant 20.6% improvement on unconstrained, long-distance scenes.

Authors

Yuxiang Guo,Cheng Peng,Ram Prabhakar,Chun Pong Lau,Rama Chellappa

Journal

arXiv preprint arXiv:2307.14578

Published Date

2023/7/27
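
The recognition part of the abstract hinges on distillation: a silhouette encoder is trained to match the features of an auxiliary RGB model so that only silhouettes are needed at test time. The sketch below illustrates that idea under stated assumptions; the MLP encoders, clip shape, and cosine-based distillation loss are placeholders, not GADER's actual architecture.

    # Illustrative sketch only: distilling a silhouette-based gait encoder from a
    # frozen auxiliary RGB model, so inference needs silhouettes alone.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    T, H, W = 16, 32, 22                                   # toy clip size (assumed)

    rgb_encoder = nn.Sequential(nn.Flatten(), nn.Linear(T * 3 * H * W, 256))   # frozen teacher
    sil_encoder = nn.Sequential(nn.Flatten(), nn.Linear(T * 1 * H * W, 256))   # student
    for p in rgb_encoder.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(sil_encoder.parameters(), lr=1e-3)

    for _ in range(100):                                   # toy training loop on random data
        rgb_clip = torch.rand(8, T, 3, H, W)               # paired RGB frames
        sil_clip = torch.rand(8, T, 1, H, W)               # paired silhouettes
        with torch.no_grad():
            teacher = F.normalize(rgb_encoder(rgb_clip), dim=-1)
        student = F.normalize(sil_encoder(sil_clip), dim=-1)
        # Distillation: align silhouette features with the richer RGB features.
        loss = (1.0 - F.cosine_similarity(student, teacher)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Inference uses only the silhouette branch.
    query_feat = F.normalize(sil_encoder(torch.rand(1, T, 1, H, W)), dim=-1)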

MOST: Multiple Object localization with Self-supervised Transformers for object discovery

We tackle the challenging task of unsupervised object localization in this work. Recently, transformers trained with self-supervised learning have been shown to exhibit object localization properties without being trained for this task. In this work, we present Multiple Object localization with Self-supervised Transformers (MOST), which uses features of transformers trained with self-supervised learning to localize multiple objects in real-world images. MOST analyzes the similarity maps of the features using box counting, a fractal analysis tool, to identify tokens lying on foreground patches. The identified tokens are then clustered together, and the tokens of each cluster are used to generate bounding boxes on foreground regions. Unlike recent state-of-the-art object localization methods, MOST can localize multiple objects per image and outperforms SOTA algorithms on several object localization and discovery benchmarks on the PASCAL VOC 07, PASCAL VOC 12, and COCO20k datasets. Additionally, we show that MOST can be used for self-supervised pretraining of object detectors and yields consistent improvements for fully supervised and semi-supervised object detection and unsupervised region proposal generation. Our project is publicly available at rssaketh.github.io/most.

Authors

Sai Saketh Rambhatla,Ishan Misra,Rama Chellappa,Abhinav Shrivastava

Published Date

2023
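
Box counting, the fractal-analysis step the abstract mentions, simply counts how many boxes of increasing size contain at least one active cell of a thresholded similarity map. The sketch below shows one way that could look; the random ViT-style features, the mean threshold, the box sizes, and the final selection rule are all assumptions for illustration, not MOST's actual criterion.

    # Illustrative sketch only: box counting on thresholded token-similarity maps.
    import torch
    import torch.nn.functional as F

    def box_count(binary_map: torch.Tensor, box_sizes=(1, 2, 4, 8)):
        """Count occupied boxes of each size in a 2D binary map."""
        counts = []
        for s in box_sizes:
            pooled = F.max_pool2d(binary_map[None, None], kernel_size=s, stride=s)
            counts.append(pooled.sum().item())            # boxes containing any active cell
        return counts

    h = w = 14                                             # token grid of a ViT (assumed)
    tokens = F.normalize(torch.randn(h * w, 384), dim=-1)  # stand-in self-supervised features

    scores = []
    for i in range(h * w):
        sim = (tokens @ tokens[i]).view(h, w)              # similarity of token i to all tokens
        binary = (sim > sim.mean()).float()                # threshold the similarity map
        counts = box_count(binary)
        # Slope of log(count) vs. log(box size) summarizes how the active region fills
        # space across scales; it serves here as a simple per-token score.
        x = torch.log(torch.tensor([1.0, 2.0, 4.0, 8.0]))
        y = torch.log(torch.tensor(counts).clamp(min=1.0))
        slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
        scores.append(slope.item())

    # Toy selection rule for illustration only: keep tokens below the median score.
    median = sorted(scores)[len(scores) // 2]
    foreground_tokens = [i for i, s in enumerate(scores) if s < median]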

Robust and scalable vehicle re-identification via self-supervision

Many state-of-the-art solutions for vehicle re-identification (re-id) focus mainly on improving accuracy on existing re-id benchmarks using additional annotated data. To balance the demands of accuracy, availability of annotated data, and computational efficiency, we propose a simple yet effective hybrid solution empowered by self-supervised learning, which is free of the intricate and computationally demanding add-on attention modules often seen in state-of-the-art approaches. Through extensive experimentation, we show that our approach, termed Self-Supervised and Boosted VEhicle Re-Identification (SSBVER), is on par with state-of-the-art alternatives in terms of accuracy without introducing any additional overhead during deployment. Additionally, we show that our approach generalizes to different backbone architectures, which accommodates various resource constraints and consistently results in a significant accuracy boost. Our code is available at https://github.com/Pirazh/SSBVER.

Authors

Pirazh Khorramshahi,Vineet Shenoy,Rama Chellappa

Published Date

2023
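
At a high level, a hybrid of the kind described above pairs the usual supervised re-id objective with a self-supervised term that needs no extra modules at deployment. The toy training step below is a generic rendering of that combination, not SSBVER's actual recipe; the backbone, the augmentation, and the loss weight are assumptions.

    # Illustrative sketch only: supervised ID loss + self-supervised view consistency.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    num_ids = 100
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))   # stand-in backbone
    id_head = nn.Linear(256, num_ids)
    opt = torch.optim.Adam(list(backbone.parameters()) + list(id_head.parameters()), lr=1e-3)

    def augment(x):
        return x + 0.05 * torch.randn_like(x)              # toy augmentation

    for _ in range(100):
        images = torch.rand(16, 3, 64, 64)
        labels = torch.randint(0, num_ids, (16,))
        v1, v2 = backbone(augment(images)), backbone(augment(images))
        id_loss = F.cross_entropy(id_head(v1), labels)      # supervised re-id branch
        ssl_loss = (1 - F.cosine_similarity(F.normalize(v1, dim=-1),
                                            F.normalize(v2, dim=-1))).mean()  # view consistency
        loss = id_loss + 0.5 * ssl_loss                      # weight is an assumed hyperparameter
        opt.zero_grad()
        loss.backward()
        opt.step()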

PDRF: progressively deblurring radiance field for fast scene reconstruction from blurry images

We present Progressively Deblurring Radiance Field (PDRF), a novel approach for efficiently reconstructing high-quality radiance fields from blurry images. While current state-of-the-art (SoTA) scene reconstruction methods achieve photo-realistic renderings from clean source views, their performance suffers when the source views are affected by blur, which is commonly observed in the wild. Previous deblurring methods either do not account for 3D geometry or are computationally intensive. To address these issues, PDRF uses a progressively deblurring scheme for radiance field modeling, which can accurately model blur with 3D scene context. PDRF further uses an efficient importance sampling scheme that results in fast scene optimization. We perform extensive experiments and show that PDRF is 15x faster than the previous SoTA while achieving better performance on both synthetic and real scenes.

Authors

Cheng Peng,Rama Chellappa

Journal

Proceedings of the AAAI Conference on Artificial Intelligence

Published Date

2023/6/26
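
One common way to "model blur with 3D scene context", as the abstract puts it, is to render several slightly perturbed rays per pixel and learn how to blend them so their weighted sum reproduces the blurry observation. The sketch below illustrates that generic idea only; the tiny radiance field, the number of rays, and the learned offsets are assumptions and do not reproduce PDRF's progressive scheme or its importance sampling.

    # Illustrative sketch only: fit a toy radiance field so a learned blend of
    # perturbed rays matches blurry observations; a sharp render uses the center ray.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyRadianceField(nn.Module):                    # stand-in for a radiance field
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
        def forward(self, ray_dirs):                       # toy: color from ray direction only
            return torch.sigmoid(self.net(ray_dirs))

    K = 5                                                  # rays blended per pixel (assumed)
    field = TinyRadianceField()
    ray_offsets = nn.Parameter(0.01 * torch.randn(K, 3))   # learned ray jitter
    blend_logits = nn.Parameter(torch.zeros(K))            # learned blending weights
    opt = torch.optim.Adam(list(field.parameters()) + [ray_offsets, blend_logits], lr=1e-3)

    for _ in range(200):
        rays = F.normalize(torch.randn(1024, 3), dim=-1)    # base ray directions
        blurry_target = torch.rand(1024, 3)                 # observed blurry colors
        colors = torch.stack([field(rays + ray_offsets[k]) for k in range(K)], dim=1)  # (N, K, 3)
        weights = torch.softmax(blend_logits, dim=0).view(1, K, 1)
        rendered_blur = (weights * colors).sum(dim=1)        # weighted sum approximates the blur
        loss = F.mse_loss(rendered_blur, blurry_target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # A sharp rendering afterwards uses only the unperturbed rays: field(rays)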

Attribute-guided encryption with facial texture masking

Increasingly pervasive facial recognition (FR) systems raise serious concerns about personal privacy, especially for the billions of users who have publicly shared their photos on social media. Several attempts have been made to protect individuals from being identified by unauthorized FR systems, utilizing adversarial attacks to generate encrypted face images. However, existing methods suffer from poor visual quality or low attack success rates, which limits their usability in practice. In this paper, we propose Attribute Guided Encryption with Facial Texture Masking (AGE-FTM), which performs a dual-manifold adversarial attack on FR systems to achieve both good visual quality and high black-box attack success rates. In particular, AGE-FTM utilizes a high-fidelity generative adversarial network (GAN) to generate natural on-manifold adversarial samples by modifying facial attributes, and performs a facial texture masking attack to generate imperceptible off-manifold adversarial samples. Extensive experiments on the CelebA-HQ dataset demonstrate that our proposed method produces more natural-looking encrypted images than state-of-the-art methods while achieving competitive attack performance. We further evaluate the effectiveness of AGE-FTM in the real world using a commercial FR API and validate its usefulness in practice through a user study.

Authors

Chun Pong Lau,Jiang Liu,Rama Chellappa

Journal

arXiv preprint arXiv:2305.13548

Published Date

2023/5/22
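
The dual-manifold attack described above combines two ingredients: an on-manifold edit obtained by optimizing an attribute latent fed to a generator, and an off-manifold perturbation confined to a facial-texture mask. The sketch below jointly optimizes both against a face embedding; the tiny generator, the random mask, the stand-in FR model, and the loss weights are all hypothetical.

    # Illustrative sketch only: attribute-latent edit (on-manifold) plus a masked
    # pixel perturbation (off-manifold), optimized against an FR embedding.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    generator = nn.Sequential(nn.Linear(64, 3 * 64 * 64), nn.Sigmoid())    # stand-in GAN generator
    fr_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))    # stand-in FR embedder

    attr_latent = torch.zeros(1, 64, requires_grad=True)     # on-manifold attribute code
    delta = torch.zeros(1, 3, 64, 64, requires_grad=True)    # off-manifold perturbation
    mask = (torch.rand(1, 1, 64, 64) > 0.7).float()          # assumed facial-texture mask

    source = torch.rand(1, 3, 64, 64)
    source_emb = F.normalize(fr_model(source).detach(), dim=-1)
    opt = torch.optim.Adam([attr_latent, delta], lr=0.01)

    for _ in range(50):
        edited = generator(attr_latent).view(1, 3, 64, 64)    # attribute-edited face
        protected = (edited + mask * delta).clamp(0, 1)       # add masked texture perturbation
        emb = F.normalize(fr_model(protected), dim=-1)
        # Lower similarity to the original identity while staying close to the source image.
        loss = F.cosine_similarity(emb, source_emb).mean() + 5.0 * F.mse_loss(protected, source)
        opt.zero_grad()
        loss.backward()
        opt.step()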

HaLP: Hallucinating latent positives for skeleton-based self-supervised learning of actions

Supervised learning of skeleton sequence encoders for action recognition has received significant attention in recent times. However, learning such encoders without labels continues to be a challenging problem. While prior works have shown promising results by applying contrastive learning to pose sequences, the quality of the learned representations is often observed to be closely tied to the data augmentations used to craft the positives. However, augmenting pose sequences is a difficult task, as the geometric constraints among the skeleton joints need to be enforced to make the augmentations realistic for that action. In this work, we propose a new contrastive learning approach to train models for skeleton-based action recognition without labels. Our key contribution is a simple module, HaLP, which Hallucinates Latent Positives for contrastive learning. Specifically, HaLP explores the latent space of poses in suitable directions to generate new positives. To this end, we present a novel optimization formulation to solve for the synthetic positives with explicit control over their hardness. We propose approximations to the objective, making them solvable in closed form with minimal overhead. We show via experiments that using these generated positives within a standard contrastive learning framework leads to consistent improvements across benchmarks such as NTU-60, NTU-120, and PKU-II on tasks like linear evaluation, transfer learning, and kNN evaluation. Our code can be found at https://github.com/anshulbshah/HaLP.

Authors

Anshul Shah,Aniket Roy,Ketul Shah,Shlok Mishra,David Jacobs,Anoop Cherian,Rama Chellappa

Published Date

2023
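
The key idea of hallucinating positives directly in latent space can be pictured as follows: move an anchor embedding toward a nearby prototype by a hardness-controlled amount and treat the result as an extra positive in a standard InfoNCE-style loss. This is only an illustration of the idea under stated assumptions; the prototypes, the interpolation rule, and the hardness value here are not HaLP's closed-form construction.

    # Illustrative sketch only: synthesize a latent positive and use it alongside the
    # real positive in a contrastive loss.
    import torch
    import torch.nn.functional as F

    def info_nce(anchor, positive, negatives, tau=0.1):
        pos = (anchor * positive).sum(-1, keepdim=True) / tau        # (N, 1)
        neg = anchor @ negatives.t() / tau                            # (N, M)
        logits = torch.cat([pos, neg], dim=1)                         # positive sits at index 0
        return F.cross_entropy(logits, torch.zeros(len(anchor), dtype=torch.long))

    N, D, M, P = 32, 128, 64, 8
    anchor = F.normalize(torch.randn(N, D), dim=-1)          # embeddings of one augmented view
    positive = F.normalize(torch.randn(N, D), dim=-1)        # embeddings of the other view
    negatives = F.normalize(torch.randn(M, D), dim=-1)       # queue of negatives
    prototypes = F.normalize(torch.randn(P, D), dim=-1)      # assumed latent-space prototypes

    # Hallucinate a latent positive: move each anchor toward its closest prototype.
    # hardness in [0, 1]: larger values push the synthetic positive further from the anchor.
    hardness = 0.3
    nearest = prototypes[(anchor @ prototypes.t()).argmax(dim=1)]
    hallucinated = F.normalize((1 - hardness) * anchor + hardness * nearest, dim=-1)

    loss = info_nce(anchor, positive, negatives) + info_nce(anchor, hallucinated, negatives)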

STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos

We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key step extraction. We propose a training objective, the Bootstrapped Multi-Cue Contrastive (BMC2) loss, to learn discriminative representations for various steps without any labels. Different from prior works, we develop techniques to train a lightweight temporal module that uses off-the-shelf features for self-supervision. Our approach can seamlessly leverage information from multiple cues, such as optical flow, depth, or gaze, to learn discriminative features for key steps, making it amenable to AR applications. We finally extract key steps via a tunable algorithm that clusters and samples the representations. We show significant improvements over prior works for the task of key step localization and phase classification. Qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent the various steps of the procedural tasks.

Authors

Anshul Shah,Benjamin Lundell,Harpreet Sawhney,Rama Chellappa

Published Date

2023
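
The extraction stage described at the end of the abstract can be pictured as a cluster-then-sample step over learned frame features. The sketch below uses k-means with a fixed number of clusters and keeps the frame closest to each centroid; the random features stand in for representations learned with the BMC2 loss, and the clustering choices are assumptions rather than the paper's tunable algorithm.

    # Illustrative sketch only: cluster frame features and keep one representative
    # frame per cluster as a candidate key step.
    import numpy as np
    from sklearn.cluster import KMeans

    num_frames, dim, num_key_steps = 300, 256, 6
    features = np.random.randn(num_frames, dim).astype(np.float32)   # stand-in frame features

    kmeans = KMeans(n_clusters=num_key_steps, n_init=10, random_state=0).fit(features)

    key_step_frames = []
    for c in range(num_key_steps):
        members = np.where(kmeans.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[c], axis=1)
        key_step_frames.append(int(members[dists.argmin()]))          # frame nearest the centroid

    key_step_frames.sort()                                            # report key steps in temporal order
    print(key_step_frames)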

Professor FAQs

What is Ramalingam Chellappa's h-index at Johns Hopkins University?

Ramalingam Chellappa's h-index is 143 overall and 77 when counting only citations received since 2020.

What are Ramalingam Chellappa's research interests?

Ramalingam Chellappa's research interests are image analysis, artificial intelligence, biometrics, computer vision, and biomedical data science.

What is Ramalingam Chellappa's total number of citations?

Ramalingam Chellappa has 97,152 citations in total.

Who are the co-authors of Ramalingam Chellappa?

Frequent co-authors of Ramalingam Chellappa include Larry Davis, Vishal M. Patel, B.S. Manjunath, Amit K. Roy-Chowdhury, and Nasser M. Nasrabadi.

Co-Authors

Larry Davis, University of Maryland (h-index: 139)
Vishal M. Patel, Johns Hopkins University (h-index: 82)
B.S. Manjunath, University of California, Santa Barbara (h-index: 79)
Amit K. Roy-Chowdhury, University of California, Riverside (h-index: 64)
Nasser M. Nasrabadi, West Virginia University (h-index: 60)
