Terrance E. Boult

University of Colorado Colorado Springs

H-index: 64

North America-United States

About Terrance E. Boult

Terrance E. Boult is a distinguished researcher at the University of Colorado Colorado Springs with an exceptional h-index of 64 overall and 37 since 2020. He specializes in open-set recognition, machine perception, statistical learning, computer vision, and biometric security.

His recent articles reflect a diverse array of research interests and contributions to the field:

DaliID: Distortion-Adaptive Learned Invariance for Identification–a Robust Technique for Face Recognition and Person Re-Identification.

Comparative study on chromatin loop callers using Hi-C data reveals their effectiveness

Open-set face recognition with maximal entropy and Objectosphere loss

Large-scale Fully-Unsupervised Re-Identification

AG-ReID 2023: Aerial-Ground Person Re-identification Challenge Results

Machine learning systems and methods for improved localization of image forgery

Novelty in image classification

Cast: Conditional attribute subsampling toolkit for fine-grained evaluation

Terrance E. Boult Information

University

University of Colorado Colorado Springs

Position

El Pomar Prof. of Innovation and Security

Citations(all)

17816

Citations(since 2020)

8244

Cited By

12334

hIndex(all)

64

hIndex(since 2020)

37

i10Index(all)

184

i10Index(since 2020)

91

University Profile Page

University of Colorado Colorado Springs

Terrance E. Boult Skills & Research Interests

Open-set recognition

Machine Perception

Statistical Learning

Computer Vision

Biometric Security

Top articles of Terrance E. Boult

DaliID: Distortion-Adaptive Learned Invariance for Identification–a Robust Technique for Face Recognition and Person Re-Identification.

Authors

Wes Robbins,Gabriel Bertocco,Terrance E Boult

Journal

arXiv preprint arXiv:2302.05753

Published Date

2023/2/11

In unconstrained scenarios, face recognition and person re-identification are subject to distortions such as motion blur, atmospheric turbulence, or upsampling artifacts. To improve robustness in these scenarios, we propose a methodology called Distortion-Adaptive Learned Invariance for Identification (DaliID) models. We contend that distortion augmentations, which degrade image quality, can be successfully leveraged to a greater degree than has been shown in the literature. Aided by an adaptive weighting schedule, a novel distortion augmentation is applied at severe levels during training. This training strategy increases feature-level invariance to distortions and decreases domain shift to unconstrained scenarios. At inference, we use a magnitude-weighted fusion of features from parallel models to retain robustness across the range of images. DaliID models achieve state-of-the-art (SOTA) for both face recognition and person re-identification on seven benchmark datasets, including IJB-S, TinyFace, DeepChange, and MSMT17. Additionally, we provide recaptured evaluation data at a distance of 750+ meters and further validate on real long-distance face imagery.

Comparative study on chromatin loop callers using Hi-C data reveals their effectiveness

Authors

HMA Mohit Chowdhury,Terrance Boult,Oluwatosin Oluwadare

Journal

BMC bioinformatics

Published Date

2024/3/21

Background: Chromosomes are among the most fundamental structures in cell biology, where DNA holds hierarchical information. DNA compacts its size by forming loops, and these regions house various proteins, including CTCF, SMC3, and the H3 histone. Numerous sequencing methods, such as Hi-C, ChIP-seq, and Micro-C, have been developed to investigate these properties. Utilizing these data, scientists have developed a variety of loop prediction techniques that have greatly improved methods for characterizing loops and related aspects. Results: In this study, we categorized 22 loop calling methods and conducted a comprehensive study of 11 of them. Additionally, we have provided detailed insights into the methodologies underlying these algorithms for loop detection, categorizing them into five distinct groups based on their fundamental approaches. Furthermore, we have included critical information …

Open-set face recognition with maximal entropy and Objectosphere loss

Authors

Rafael Henrique Vareto,Yu Linghu,Terrance E Boult,William Robson Schwartz,Manuel Günther

Journal

Image and Vision Computing

Published Date

2023/11/14

Open-set face recognition characterizes a scenario where unknown individuals, unseen during the training and enrollment stages, appear at operation time. This work concentrates on watchlists, an open-set task that is expected to operate at a low false-positive identification rate and generally includes only a few enrollment samples per identity. We introduce a compact adapter network that benefits from additional negative face images when combined with distinct cost functions, such as Objectosphere Loss (OS) and the proposed Maximal Entropy Loss (MEL). MEL modifies the traditional cross-entropy loss in favor of increasing the entropy for negative samples and attaches a penalty to known target classes in pursuance of gallery specialization. The proposed approach adopts pre-trained deep neural networks (DNNs) for face recognition as feature extractors. Then, the adapter network takes deep feature …
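The idea of increasing entropy for negative samples can be pictured with a small sketch. This is a simplified, hypothetical NumPy illustration, not the paper's Maximal Entropy Loss: known samples incur standard cross-entropy, while negatives are scored against a uniform target, which is minimized exactly when the posterior has maximal entropy.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def max_entropy_style_loss(logits, labels):
    """Hypothetical sketch: cross-entropy for known classes,
    uniform-target cross-entropy for negatives. A label of -1
    marks a negative/unknown sample."""
    probs = softmax(logits)
    losses = []
    for p, y in zip(probs, labels):
        if y == -1:
            # Push the posterior toward uniform: average CE against 1/C targets,
            # minimized when the distribution has maximal entropy.
            losses.append(-np.mean(np.log(p + 1e-12)))
        else:
            # Standard cross-entropy for a known class.
            losses.append(-np.log(p[y] + 1e-12))
    return float(np.mean(losses))
```

For a negative sample with a uniform posterior over C classes, this loss reaches its minimum value log C; confident predictions on negatives are penalized more heavily.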

Large-scale Fully-Unsupervised Re-Identification

Authors

Gabriel Bertocco,Fernanda Andaló,Terrance E Boult,Anderson Rocha

Journal

arXiv preprint arXiv:2307.14278

Published Date

2023/7/26

Fully-unsupervised Person and Vehicle Re-Identification have received increasing attention due to their broad applicability in surveillance, forensics, event understanding, and smart cities, without requiring any manual annotation. However, most of the prior art has been evaluated in datasets that have just a couple thousand samples. Such small-data setups often allow the use of costly techniques in time and memory footprints, such as Re-Ranking, to improve clustering results. Moreover, some previous work even pre-selects the best clustering hyper-parameters for each dataset, which is unrealistic in a large-scale fully-unsupervised scenario. In this context, this work tackles a more realistic scenario and proposes two strategies to learn from large-scale unlabeled data. The first strategy performs a local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships. A second strategy leverages a novel Re-Ranking technique, which has a lower time upper bound complexity and reduces the memory complexity from O(n^2) to O(kn) with k << n. To avoid the pre-selection of specific hyper-parameter values for the clustering algorithm, we also present a novel scheduling algorithm that adjusts the density parameter during training, to leverage the diversity of samples and keep the learning robust to noisy labeling. Finally, due to the complementary knowledge learned by different models, we also introduce a co-training strategy that relies upon the permutation of predicted pseudo-labels, among the backbones, with no need for any hyper-parameters or weighting optimization. The proposed methodology …
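The memory reduction from O(n²) to O(kn) mentioned above can be pictured with a simple sketch. The following hypothetical NumPy snippet is not the paper's Re-Ranking algorithm; it only shows how storing each sample's k nearest neighbors, instead of the full pairwise similarity matrix, keeps memory linear in n.

```python
import numpy as np

def knn_sparse_similarities(features, k=5):
    """Illustrative sketch: keep only each sample's k nearest neighbors,
    so storing pairwise cosine similarities costs O(k*n) memory instead
    of the full O(n^2) matrix."""
    # L2-normalize so a dot product equals cosine similarity.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    n = f.shape[0]
    neighbor_ids = np.empty((n, k), dtype=np.int64)
    neighbor_sims = np.empty((n, k), dtype=np.float32)
    for i in range(n):
        sims = f @ f[i]            # one row at a time: O(n) transient memory
        sims[i] = -np.inf          # exclude self-similarity
        top = np.argpartition(-sims, k)[:k]
        order = np.argsort(-sims[top])
        neighbor_ids[i] = top[order]
        neighbor_sims[i] = sims[top[order]]
    return neighbor_ids, neighbor_sims
```

With k << n, the two returned (n, k) arrays are the only persistent storage, which is the asymptotic saving the abstract refers to.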

AG-ReID 2023: Aerial-Ground Person Re-identification Challenge Results

Authors

Kien Nguyen,Clinton Fookes,Sridha Sridharan,Feng Liu,Xiaoming Liu,Arun Ross,Dana Michalski,Huy Nguyen,Debayan Deb,Mahak Kothari,Manisha Saini,Dawei Du,Scott McCloskey,Gabriel Bertocco,Fernanda Andaló,Terrance E Boult,Anderson Rocha,Haidong Zhu,Zhaoheng Zheng,Ram Nevatia,Zaigham Randhawa,Sinan Sabri,Gianfranco Doretto

Published Date

2023/9/25

Person re-identification (Re-ID) on aerial-ground platforms has emerged as an intriguing topic within computer vision, presenting a plethora of unique challenges. The high flying altitudes of aerial cameras make persons appear different in viewpoint, pose, and resolution compared to images of the same person viewed from ground cameras. Despite its potential, few algorithms have been developed for person re-identification on aerial-ground data, mainly due to the absence of comprehensive datasets. In response, we have collected a large-scale dataset and organized the Aerial-Ground person Re-IDentification Challenge (AG-ReID2023) to foster advancements in the field. The dataset comprises 100,502 images with 1,615 unique identities, including 51,530 training images featuring 807 identities. The test set is divided into two subsets: Aerial to Ground (808 ids, 4,348 query images, 19,259 gallery …

Machine learning systems and methods for improved localization of image forgery

Published Date

2023/5/30

A system for improved localization of image forgery. The system generates a variational information bottleneck objective function and works with input image patches to implement an encoder-decoder architecture. The encoder-decoder architecture controls an information flow between the input image patches and a representation layer. The system utilizes the information bottleneck to learn useful residual noise patterns and to ignore semantic content present in each input image patch. The system trains a neural network to learn a representation indicative of a statistical fingerprint of a source camera model from each input image patch while excluding semantic content thereof. The system can determine a splicing manipulation localization by the trained neural network.

Novelty in image classification

Authors

A Shrivastava,P Kumar,Anubhav,C Vondrick,W Scheirer,DS Prijatelj,M Jafarzadeh,T Ahmad,S Cruz,R Rabinowitz,A Al Shami,T Boult

Published Date

2023/8/2

In this chapter, we introduce real-world data in the form of RGB images and expand on the application of the underlying theory. We divide the world of images into known and novel sets and then use a sampling process to generate a large number of novelty experiments. There is a learning-based novelty-aware classification agent, but it does not actively interact with the world. For the image classification task, the perceptual operators are defined as the features computed from a Deep Neural Network (DNN) trained on the classification task using the known classes.

Cast: Conditional attribute subsampling toolkit for fine-grained evaluation

Authors

Wes Robbins,Steven Zhou,Aman Bhatta,Chad Mello,Vítor Albiero,Kevin W Bowyer,Terrance E Boult

Published Date

2023

Thorough evaluation is critical for developing models that are fair and robust. In this work, we describe the Conditional Attribute Subsampling Toolkit (CAST) for selecting data subsets for fine-grained scientific evaluations. Our toolkit efficiently filters data given an arbitrary number of conditions for metadata attributes. The purpose of the toolkit is to allow researchers to easily evaluate models on targeted test distributions. The functionality of CAST is demonstrated on the WebFace42M face recognition dataset. We calculate over 50 attributes for this dataset including race, image quality, facial features, and accessories. Using our toolkit, we create over a hundred test sets conditioned on one or multiple attributes. Results are presented for subsets of various demographics and image quality ranges. Using eleven different subsets, we build a face recognition 1:1 verification benchmark called C11 that exclusively contains pairs that are near the decision threshold. Evaluation on C11 with state-of-the-art methods demonstrates the suitability of the proposed benchmark. The toolkit is publicly available at https://github.com/WesRobbins/CAST.
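As a rough illustration of conditional attribute subsampling (this is not the CAST API; the function name and predicate style are invented for the example), filtering records on an arbitrary number of metadata conditions might look like:

```python
def conditional_subsample(records, **conditions):
    """Toy illustration of conditional attribute subsampling: keep
    records whose metadata satisfies every given condition, where each
    condition is a predicate applied to one attribute's value."""
    return [r for r in records
            if all(pred(r[attr]) for attr, pred in conditions.items())]

# Hypothetical usage: keep only high-quality images with glasses.
records = [
    {"quality": 0.9, "glasses": True},
    {"quality": 0.2, "glasses": True},
    {"quality": 0.8, "glasses": False},
]
subset = conditional_subsample(records,
                               quality=lambda q: q >= 0.5,
                               glasses=lambda g: g)
```

Conjoining independent per-attribute predicates like this is what makes it cheap to generate many targeted test sets from one attribute table.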

A Unifying Framework for Novelty

Authors

T Boult,DS Prijatelj,W Scheirer

Published Date

2023/8/2

“What is novel?” is an important AI research question that informs the design of agents tolerant to novel inputs. Is a noticeable change in the world that does not impact an agent’s task performance a novelty? How about a change that impacts performance but is not directly perceptible? If the world has not changed, but the agent senses a random error that produces an input that leads to an unexpected state, is that novel?

Consistency and accuracy of celeba attribute values

Authors

Haiyu Wu,Grace Bezold,Manuel Günther,Terrance Boult,Michael C King,Kevin W Bowyer

Published Date

2023

We report the first systematic analysis of the experimental foundations of facial attribute classification. Two annotators independently assigning attribute values shows that only 12 of 40 common attributes are assigned values with >= 95% consistency, and three (high cheekbones, pointed nose, oval face) have essentially random consistency. Of 5,068 duplicate face appearances in CelebA, attributes have contradicting values on between 10 and 860 of the 5,068 duplicates. A manual audit of a subset of CelebA estimates error rates as high as 40% for (no beard = false), even though the labeling consistency experiment indicates that no beard could be assigned with >= 95% consistency. Selecting mouth slightly open (MSO) for deeper analysis, we estimate the error rate for (MSO = true) at about 20% and (MSO = false) at about 2%. A corrected version of the MSO attribute values enables learning a model that achieves higher accuracy than previously reported for MSO. Corrected values for CelebA MSO are available at https://github.com/HaiyuWu/CelebAMSO.

Novelty in 3D CartPole Domain

Authors

Terrance Boult,NM Windesheim,S Zhou,C Pereyda,LB Holder

Published Date

2023/8/2

This chapter expands on the underlying theory and applies it to a 3D CartPole domain. We introduce a Weibull Open World control-agent (WOW-agent) that uses Extreme Value Theory (EVT) to convert dissimilarity to probability of novelty. While novelty is something sufficiently dissimilar to past experience, that does not mean the WOW-agent cannot reason about the novelty. While there have been many books addressing out-of-distribution detection, novel class discovery, and open-world learning, not all novelty is out-of-distribution data or a novel class.

Pose2Trajectory: using transformers on body pose to predict tennis player’s trajectory

Authors

Ali AlShami,Terrance Boult,Jugal Kalita

Journal

Journal of Visual Communication and Image Representation

Published Date

2023/12/1

Tracking the trajectory of tennis players can help camera operators in production. Predicting future movement enables cameras to automatically track and predict a player’s future trajectory without human intervention. It is also intellectually satisfying to predict future human movement in the context of complex physical tasks. Swift advancements in sports analytics and the wide availability of videos for tennis have inspired us to propose a novel method called Pose2Trajectory, which predicts a tennis player’s future trajectory as a sequence derived from their body joints’ data and ball position. Demonstrating impressive accuracy, our approach capitalizes on body joint information to provide a comprehensive understanding of the human body’s geometry and motion, thereby enhancing the prediction of the player’s trajectory. We use encoder–decoder Transformer architecture trained on the joints and trajectory …

Novelty in 2D CartPole Domain

Authors

PA Grabowicz,C Pereyda,K Clary,R Stern,T Boult,D Jensen,LB Holder

Published Date

2023/8/2

In this chapter, we apply the constructs defined in Chapter 1 to the relatively simple domain of 2D CartPole. We consider different versions of CartPole agents to highlight the different effects of novelties on an agent. The results generally show that the novelty theoretic framework allows the estimation of different novelties’ impact on the performance of agents, which can be used for testing open-world learning hypotheses. This chapter presents a few surprising results that help further inform the framework.

DOERS: Distant Observation Enhancement and Recognition System

Authors

Dawei Du,Cole Hill,Gabriel Bertocco,Mauricio Pamplona Segundo,Wes Robbins,Brandon RichardWebster,Roderic Collins,Sudeep Sarkar,Terrance Boult,Scott McCloskey

Published Date

2023/9/25

In order to recognize people across long distances and from elevated viewpoints, biometric systems must handle the challenges of imaging through atmospheric turbulence and non-frontal presentations, in addition to the traditional A-PIE challenges of aging, pose, illumination, and expression. While individual biometric modalities such as facial appearance, gait, and whole body appearance each have a role to play, no single modality can address all of these challenges. This paper describes a novel multi-modal biometric recognition system that addresses the challenges of atmospheric turbulence, occlusions, and elevated viewpoints by combining these modalities. We demonstrate our system on RGB video-based identity verification and on both open- and closed-world search.

Weibull-open-world (wow) multi-type novelty detection in cartpole3d

Authors

Terrance E Boult,Nicolas M Windesheim,Steven Zhou,Christopher Pereyda,Lawrence B Holder

Journal

Algorithms

Published Date

2022/10/18

Algorithms for automated novelty detection and management are of growing interest but must address the inherent uncertainty from variations in non-novel environments while detecting the changes from the novelty. This paper expands on a recent unified framework to develop an operational theory for novelty that includes multiple (sub)types of novelty. As an example, this paper explores the problem of multi-type novelty detection in a 3D version of CartPole, wherein the cart Weibull-Open-World control-agent (WOW-agent) is confronted by different sub-types/levels of novelty from multiple independent agents moving in the environment. The WOW-agent must balance the pole and detect and characterize the novelties while adapting to maintain that balance. The approach develops static, dynamic, and prediction-error measures of dissimilarity to address different signals/sources of novelty. The WOW-agent uses Extreme Value Theory, applied per dimension of the dissimilarity measures, to detect outliers and combines different dimensions to characterize the novelty. In blind/sequestered testing, the system detects nearly 100% of the non-nuisance novelties, detects many nuisance novelties, and shows it is better than novelty detection using a Gaussian-based approach. We also show the WOW-agent’s lookahead collision-avoiding control is significantly better than a baseline Deep Q-learning Network-trained controller.
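The per-dimension EVT step can be sketched in the abstract: fit a Weibull to dissimilarity scores observed under non-novel conditions, then read a probability of novelty for a new score off the fitted CDF. The SciPy snippet below is an assumption-laden simplification of that idea, not the WOW-agent's actual procedure.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_tail_model(dissimilarities):
    """Illustrative EVT-style sketch: fit a Weibull (location fixed at 0,
    since dissimilarities are non-negative) to scores collected under
    non-novel conditions."""
    shape, loc, scale = weibull_min.fit(dissimilarities, floc=0.0)
    return shape, loc, scale

def novelty_probability(score, params):
    """Map a new dissimilarity score to a probability of novelty via the
    fitted CDF: scores deep in the tail approach probability 1."""
    shape, loc, scale = params
    return float(weibull_min.cdf(score, shape, loc=loc, scale=scale))
```

A detector built this way would flag an input as novel when the probability exceeds a chosen threshold, and per-dimension fits can be combined to characterize which signal changed.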

Systems and methods for machine classification and learning that is robust to unknown inputs

Published Date

2022/4/5

The invention includes systems and methods, including computer programs encoded on computer storage media, for classifying inputs as belonging to a known or unknown class as well as for updating the system to improve its performance. In one system, there is a desired feature representation for unknown inputs, e.g., a zero vector, and the system includes transforming input data to produce a feature representation, using that to compute dissimilarity with the desired feature representation for unknown inputs and combining dissimilarity with other transformations of the feature representation to determine if the input is from a specific known class or if it is unknown. In one embodiment, the system transforms the magnitude of the feature representation into a confidence score. In an update method to improve performance, the system transforms inputs into feature representations which go through a scoring means and …

Few-shot class incremental learning leveraging self-supervised features

Authors

Touqeer Ahmad,Akshay Raj Dhamija,Steve Cruz,Ryan Rabinowitz,Chunchun Li,Mohsen Jafarzadeh,Terrance E Boult

Published Date

2022

Few-Shot Class Incremental Learning (FSCIL) is a recently introduced Class Incremental Learning (CIL) setting that operates under more constrained assumptions: only very few samples per class are available in each incremental session, and the number of samples/classes is known ahead of time. Due to limited data for class incremental learning, FSCIL suffers more from over-fitting and catastrophic forgetting than general CIL. In this paper we study leveraging the advances due to self-supervised learning to remedy over-fitting and catastrophic forgetting and significantly advance the state of the art in FSCIL. We explore training a lightweight feature fusion plus classifier on a concatenation of features emerging from supervised and self-supervised models. The supervised model is trained on data from a base session, where a relatively larger amount of data is available in FSCIL, whereas the self-supervised model is learned using an abundance of unlabeled data. We demonstrate a classifier trained on the fusion of such features outperforms classifiers trained independently on either of these representations. We experiment with several existing self-supervised models and provide results for three popular benchmarks for FSCIL including Caltech-UCSD Birds-200-2011 (CUB200), miniImageNet, and CIFAR100, where we advance the state-of-the-art for each benchmark. Code is available at: https://github.com/TouqeerAhmad/FeSSSS

Enhanced Performance of Pre-Trained Networks by Matched Augmentation Distributions

Authors

Touqeer Ahmad,Mohsen Jafarzadeh,Akshay Raj Dhamija,Ryan Rabinowitz,Steve Cruz,Chunchun Li,Terrance E Boult

Published Date

2022/7/18

There exists a distribution discrepancy between training and testing in the way images are fed to modern CNNs. Recent work tried to bridge this gap either by fine-tuning or re-training the network at different resolutions. However, retraining a network is rarely cheap and not always viable. To this end, we propose a simple solution to address the train-test distributional shift and enhance the performance of pre-trained models, which commonly ship as a package with deep learning platforms, e.g., PyTorch. Specifically, we demonstrate that running inference on the center crop of an image is not always the best, as important discriminatory information may be cropped off. Instead, we propose to combine results for multiple random crops for a test image. This not only matches the train-time augmentation but also provides full coverage of the input image. We explore combining representations of random crops through …

Variable few shot class incremental and open world learning

Authors

Touqeer Ahmad,Akshay Raj Dhamija,Mohsen Jafarzadeh,Steve Cruz,Ryan Rabinowitz,Chunchun Li,Terrance E Boult

Published Date

2022

Prior work on few-shot class incremental learning has operated with an unnatural assumption: the number of ways and number of shots are assumed to be known and fixed, e.g., 10-ways 5-shots, 5-ways 5-shots, etc. Hence, we refer to this setting as Fixed-Few-Shot Class Incremental Learning (FFSCIL). In practice, the pre-specified fixed number of classes and examples per class may not be available, meaning one cannot update the model. Evaluation of FSCIL approaches in such unnatural settings renders their applicability questionable for practical scenarios where such assumptions do not hold. To mitigate the limitation of FFSCIL, we propose Variable-Few-Shot Class Incremental Learning (VFSCIL) and demonstrate it with Up-to N-Ways, Up-to K-Shots class incremental learning; wherein each incremental session, a learner may have up to N classes and up to K samples per class. Consequently, conventional FFSCIL is a special case of the herein introduced VFSCIL. Further, we extend VFSCIL to a more practical problem of Variable-Few-Shot Open-World Learning (VFSOWL), where an agent is not only required to perform incremental learning, but must detect unknown samples and enroll only those that it detects correctly. We formulate and study VFSCIL and VFSOWL on two benchmark datasets conventionally employed for FFSCIL, i.e., Caltech-UCSD Birds-200-2011 (CUB200) and miniImageNet. First, to serve as a strong baseline, we extend the state-of-the-art FSCIL approach to operate in Up-to N-Ways, Up-to K-Shots class incremental and open-world settings. Then, we propose a novel but simple approach for VFSCIL/VFSOWL where we …

System and method for transforming video data into directional object count

Published Date

2022/5/26

The present invention is a computer-implemented system and method for transforming video data into directional object counts. The method of transforming video data is uniquely efficient in that it uses only a single column or row of pixels in a video camera to define the background from a moving object, count the number of objects and determine their direction. By taking an image of a single column or row every frame and concatenating them together, the result is an image of the object that has passed, referred to herein as a sweep image. In order to determine the direction, two different methods can be used. Method one involves constructing another image using the same method. The two images are then compared, and the direction is determined by the location of the object in the second image compared to the location of the object in the first image. Due to this recording method, elongation or compression of …
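The sweep-image construction described in the abstract is simple to sketch: sample the same single pixel column from every frame and concatenate the columns side by side, so an object crossing that line appears as a shape in the resulting image. The function below is a hypothetical NumPy rendering of that idea, not the patented implementation.

```python
import numpy as np

def build_sweep_image(frames, column=0):
    """Sketch of the sweep-image idea: take one fixed pixel column from
    every frame and stack the columns left to right, giving an image of
    height H (the frame height) and width equal to the frame count."""
    return np.stack([frame[:, column] for frame in frames], axis=1)
```

Per the abstract, building a second sweep image from a different column and comparing where the object appears in each would determine the direction of travel.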

Terrance E. Boult FAQs

What is Terrance E. Boult's h-index at University of Colorado Colorado Springs?

The h-index of Terrance E. Boult has been 37 since 2020 and 64 in total.

What are Terrance E. Boult's top articles?

The articles with the titles of

DaliID: Distortion-Adaptive Learned Invariance for Identification–a Robust Technique for Face Recognition and Person Re-Identification.

Comparative study on chromatin loop callers using Hi-C data reveals their effectiveness

Open-set face recognition with maximal entropy and Objectosphere loss

Large-scale Fully-Unsupervised Re-Identification

AG-ReID 2023: Aerial-Ground Person Re-identification Challenge Results

Machine learning systems and methods for improved localization of image forgery

Novelty in image classification

Cast: Conditional attribute subsampling toolkit for fine-grained evaluation

...

are the top articles of Terrance E. Boult at University of Colorado Colorado Springs.

What are Terrance E. Boult's research interests?

The research interests of Terrance E. Boult are: Open-set recognition, Machine Perception, Statistical Learning, Computer Vision, Biometric Security

What is Terrance E. Boult's total number of citations?

Terrance E. Boult has 17,816 citations in total.

What are the co-authors of Terrance E. Boult?

The co-authors of Terrance E. Boult are Shree Nayar, Peter Belhumeur, Dr. Anderson Rocha (Full-Professor), Visvanathan Ramesh, Walter Scheirer, Manuel Günther.

    Co-Authors

    H-index: 134
    Shree Nayar

    Columbia University in the City of New York

    H-index: 62
    Peter Belhumeur

    Columbia University in the City of New York

    H-index: 51
    Dr. Anderson Rocha (Full-Professor)

    Universidade Estadual de Campinas

    H-index: 49
    Visvanathan Ramesh

    Goethe-Universität Frankfurt am Main

    H-index: 43
    Walter Scheirer

    University of Notre Dame

    H-index: 28
    Manuel Günther

    Universität Zürich
