Daniel Kifer

Penn State University

H-index: 43

North America-United States

About Daniel Kifer

Daniel Kifer is a distinguished researcher at Penn State University specializing in privacy and machine learning, with an exceptional h-index of 43 overall and a recent h-index of 39 (since 2020).

His recent articles reflect a diverse array of research interests and contributions to the field:

PatchRefineNet: Improving Binary Segmentation by Incorporating Signals from Optimal Patch-wise Binarization

Reply to Muralidhar et al., Kenny et al., and Hotz et al.: The benefits of engagement with external research teams

An optimal and scalable matrix mechanism for noisy marginals under convex loss functions

Exact Privacy Analysis of the Gaussian Sparse Histogram Mechanism

Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network

On the Tensor Representation and Algebraic Homomorphism of the Neural State Turing Machine

Physics-guided machine learning for laboratory earthquake prediction

On the computational complexity and formal hierarchy of second order recurrent neural networks

Daniel Kifer Information

University

Penn State University

Position

___

Citations (all)

17432

Citations (since 2020)

8102

Cited By

12921

h-index (all)

43

h-index (since 2020)

39

i10-index (all)

94

i10-index (since 2020)

83

Email

University Profile Page

Penn State University

Daniel Kifer Skills & Research Interests

privacy

machine learning

Top articles of Daniel Kifer

PatchRefineNet: Improving Binary Segmentation by Incorporating Signals from Optimal Patch-wise Binarization

Authors

Savinay Nagendra, Daniel Kifer

Published Date

2024

The purpose of binary segmentation models is to determine which pixels belong to an object of interest (e.g., which pixels in an image are part of roads). The models assign a logit score (i.e., probability) to each pixel and these are converted into predictions by thresholding (i.e., each pixel with logit score >= t is predicted to be part of a road). However, a common phenomenon in current and former state-of-the-art segmentation models is spatial bias: in some patches, the logit scores are consistently biased upwards and in others they are consistently biased downwards. These biases cause false positives and false negatives in the final predictions. In this paper, we propose PatchRefineNet (PRN), a small network that sits on top of a base segmentation model and learns to correct its patch-specific biases. Across a wide variety of base models, PRN consistently helps them improve mIoU by 2-3%. One of the key ideas behind PRN is the addition of a novel supervision signal during training. Given the logit scores produced by the base segmentation model, each pixel is given a pseudo-label that is obtained by optimally thresholding the logit scores in each image patch. Incorporating these pseudo-labels into the loss function of PRN helps correct systematic biases and reduce false positives/negatives. Although we mainly focus on binary segmentation, we also show how PRN can be extended to saliency detection and few-shot segmentation. We also discuss how the ideas can be extended to multi-class segmentation. Source code is available at https://github.com/savinay95n/PatchRefineNet.
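
As a rough sketch of the pseudo-label idea described above (not the authors' implementation), the following Python snippet computes a per-patch optimal threshold against the ground truth and binarizes each patch with its own threshold; the function name, patch size, and threshold grid are illustrative assumptions, and the model scores are treated as probabilities in [0, 1]:

```python
import numpy as np

def patchwise_pseudo_labels(logits, gt, patch=64, thresholds=np.linspace(0, 1, 21)):
    """For each patch, pick the threshold that maximizes IoU against the
    ground truth, then binarize that patch with its own optimal threshold.
    The resulting map is the pseudo-label used as an extra training signal."""
    H, W = logits.shape
    pseudo = np.zeros((H, W), dtype=bool)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            lp = logits[i:i + patch, j:j + patch]
            gp = gt[i:i + patch, j:j + patch].astype(bool)
            best_iou, best_t = -1.0, 0.5
            for t in thresholds:
                pred = lp >= t
                union = np.logical_or(pred, gp).sum()
                iou = np.logical_and(pred, gp).sum() / union if union else 1.0
                if iou > best_iou:
                    best_iou, best_t = iou, t
            pseudo[i:i + patch, j:j + patch] = lp >= best_t
    return pseudo
```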

Reply to Muralidhar et al., Kenny et al., and Hotz et al.: The benefits of engagement with external research teams

Authors

Ron S Jarmin, John M Abowd, Robert Ashmead, Ryan Cumings-Menon, Nathan Goldschlag, Michael Hawes, Sallie Ann Keller, Daniel Kifer, Philip Leclerc, Jerome P Reiter, Rolando A Rodríguez, Ian Schmutte, Victoria A Velkoff, Pavel I Zhuravlev

Journal

Proceedings of the National Academy of Sciences

Published Date

2024/3/12

We thank Kenny et al. (1), Hotz et al. (2), and Muralidhar et al. (3) for taking the time to comment on our recent paper [Jarmin et al. (4)]. The modernization of disclosure avoidance technology at the Census Bureau has elicited substantial response from academia, and we are delighted to see that the letters to the editor exhibit the passion that the authors hold for their research and for census data. Feedback from these and other scholars resulted in tangible and notable improvements to the 2020 Census Disclosure Avoidance System, and we are confident that continued discussion and debate on these issues will serve to further benefit the Census Bureau and its statistical products moving forward. For that, we are extremely grateful.

An optimal and scalable matrix mechanism for noisy marginals under convex loss functions

Authors

Yingtai Xiao, Guanlin He, Danfeng Zhang, Daniel Kifer

Journal

Advances in Neural Information Processing Systems

Published Date

2024/2/13

Noisy marginals are a common form of confidentiality-protecting data release and are useful for many downstream tasks such as contingency table analysis, construction of Bayesian networks, and even synthetic data generation. Privacy mechanisms that provide unbiased noisy answers to linear queries (such as marginals) are known as matrix mechanisms. We propose ResidualPlanner, a matrix mechanism for marginals with Gaussian noise that is both optimal and scalable. ResidualPlanner can optimize for many loss functions that can be written as a convex function of marginal variances (prior work was restricted to just one predefined objective function). ResidualPlanner can optimize the accuracy of marginals in large scale settings in seconds, even when the previous state of the art (HDMM) runs out of memory. It even runs on datasets with 100 attributes in a couple of minutes. Furthermore, ResidualPlanner can efficiently compute variance/covariance values for each marginal (prior methods quickly run out of memory, even for relatively small datasets).
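
The matrix-mechanism setting is easy to illustrate on a toy example. The sketch below, under assumed parameters (a 3 x 2 domain, an identity strategy matrix, sigma = 1), shows how a Gaussian matrix mechanism answers a strategy and reconstructs unbiased noisy marginals, and why per-query variances are a function of the strategy that a planner like ResidualPlanner optimizes; it is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data vector x: counts over a 3 x 2 attribute domain, flattened.
x = np.array([5, 3, 2, 7, 1, 4], dtype=float)

# Workload W: both one-way marginals, written as linear queries over x.
W = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],   # marginal of attribute A (3 levels)
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],   # marginal of attribute B (2 levels)
], dtype=float)

# A matrix mechanism answers a "strategy" matrix A with Gaussian noise and
# reconstructs the workload answers as W A^+ (A x + noise), which is unbiased.
A = np.eye(6)                        # naive strategy: noisy cell counts
sigma = 1.0                          # calibrated to the privacy budget in practice
noisy = A @ x + rng.normal(0, sigma, size=A.shape[0])
answers = W @ np.linalg.pinv(A) @ noisy

# Per-query variances are a convex function of the strategy choice;
# optimizing A against such loss functions is what a planner automates.
variances = sigma**2 * np.diag(W @ np.linalg.pinv(A.T @ A) @ W.T)
```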

Exact Privacy Analysis of the Gaussian Sparse Histogram Mechanism

Authors

Arjun Wilkins, Daniel Kifer, Danfeng Zhang, Brian Karrer

Journal

Journal of Privacy and Confidentiality

Published Date

2024/2/11

Sparse histogram methods can be useful for returning differentially private counts of items in large or infinite histograms, large group-by queries, and more generally, releasing a set of statistics with sufficient item counts. We consider the Gaussian version of the sparse histogram mechanism and study the exact (epsilon, delta) differential privacy guarantees satisfied by this mechanism. We compare these exact (epsilon, delta) parameters to the simpler overestimates used in prior work to quantify the impact of their looser privacy bounds.
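
A minimal sketch of the release rule (not the paper's analysis) may help: noise every observed count, publish only counts above a threshold. Calibrating sigma and the threshold to exact (epsilon, delta) guarantees is precisely what the paper studies; the values below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_sparse_histogram(counts, sigma, threshold):
    """Add Gaussian noise to every observed item count; release an item's
    noisy count only if it clears the threshold. Items absent from the data
    are never released, which is what makes the mechanism 'sparse' and
    drives the delta term in its privacy analysis."""
    released = {}
    for item, c in counts.items():
        noisy = c + rng.normal(0, sigma)
        if noisy >= threshold:
            released[item] = noisy
    return released

print(gaussian_sparse_histogram({"a": 40, "b": 3, "c": 17}, sigma=2.0, threshold=10.0))
```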

Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network

Authors

Neisarg Dave, Daniel Kifer, C Lee Giles, Ankur Mali

Journal

arXiv preprint arXiv:2402.02627

Published Date

2024/2/4

This paper analyzes two competing rule extraction methodologies: quantization and equivalence query. We trained RNN models, extracting DFA with a quantization approach (k-means and SOM) and DFA by equivalence query (L*) methods across initialization seeds. We sampled the datasets from Tomita and Dyck grammars and trained them on RNN cells: LSTM, GRU, O2RNN, and MIRNN. The observations from our experiments establish the superior performance of O2RNN and quantization-based rule extraction over the others. L*, primarily proposed for regular grammars, performs similarly to quantization methods for Tomita languages when neural networks are perfectly trained. However, for partially trained RNNs, L* shows instability in the number of states in the extracted DFA; e.g., for the Tomita 5 and Tomita 6 languages, it produced DFA with far more states than the ground truth. In contrast, quantization methods result in rules with a number of states very close to the ground truth DFA. Among RNN cells, O2RNN consistently produces stable DFA compared to the other cells. For Dyck languages, we observe that although GRU outperforms other RNNs in network performance, the DFA extracted by O2RNN has higher performance and better stability. Stability is computed as the standard deviation of accuracy on test sets for networks trained across seeds. On Dyck languages, quantization methods outperformed L* with better stability in accuracy and the number of states. L* often showed substantially larger instability in accuracy for GRU and MIRNN, while the deviation for quantization methods remained small. In many instances with LSTM and GRU, the DFA extracted by …
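
For readers unfamiliar with quantization-based extraction, here is a minimal Python sketch of the general recipe (cluster hidden states, then read transitions off the traces). The trace format, cluster count, and majority-vote rule are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

def extract_dfa(traces, n_states=8):
    """traces: per-string lists of (h_t, symbol_t, h_next) triples collected
    by running a trained RNN (this format is an assumption of the sketch).
    Cluster hidden states into DFA states, then take a majority vote over
    observed (state, symbol) -> next-state transitions."""
    H = np.array([h for tr in traces for (h, _, _) in tr] +
                 [tr[-1][2] for tr in traces])
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(H)
    votes = defaultdict(lambda: defaultdict(int))
    for tr in traces:
        for h, sym, h_next in tr:
            s = km.predict(h.reshape(1, -1))[0]
            s_next = km.predict(h_next.reshape(1, -1))[0]
            votes[(s, sym)][s_next] += 1
    return {key: max(v, key=v.get) for key, v in votes.items()}  # the DFA
```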

On the Tensor Representation and Algebraic Homomorphism of the Neural State Turing Machine

Authors

Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles

Journal

arXiv preprint arXiv:2309.14690

Published Date

2023/9/26

Recurrent neural networks (RNNs) and transformers have been shown to be Turing-complete, but this result assumes infinite precision in their hidden representations, positional encodings for transformers, and unbounded computation time in general. In practical applications, however, it is crucial to have real-time models that can recognize Turing complete grammars in a single pass. To address this issue and to better understand the true computational power of artificial neural networks (ANNs), we introduce a new class of recurrent models called the neural state Turing machine (NSTM). The NSTM has bounded weights and finite-precision connections and can simulate any Turing Machine in real-time. In contrast to prior work that assumes unbounded time and precision in weights to demonstrate equivalence with TMs, we prove that a bounded tensor RNN with a fixed number of neurons, coupled with third-order synapses, can model any TM class in real-time. Furthermore, under the Markov assumption, we provide a new theoretical bound for a non-recurrent network augmented with memory, showing that a tensor feedforward network with higher-order finite-precision weights is equivalent to a universal TM.

Physics-guided machine learning for laboratory earthquake prediction

Authors

Parisa Shokouhi, Prabhav Borate, Jacques Riviere, Ankur Mali, Dan Kifer

Journal

EGU General Assembly Conference Abstracts

Published Date

2023/5

Recent laboratory studies of fault friction have shown that deep learning can accurately predict the magnitude and timing of stick-slip sliding events, the laboratory equivalent of earthquakes, from the preceding acoustic emission (AE) events or time-lapse active-source ultrasonic signals. While there are observations that provide insight into the physics of these predictions, the underlying precursory mechanisms are not fully understood. Furthermore, these purely data-driven models require a large amount of training data and may not generalize well outside their training domain. Here, we present a physics-guided machine learning approach, which incorporates the relevant physics directly in the prediction model architecture, with the objectives of enhancing model predictions and generalizability as well as reducing the amount of required training data. We use data from well-controlled double-direct shear laboratory …

On the computational complexity and formal hierarchy of second order recurrent neural networks

Authors

Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles

Journal

arXiv preprint arXiv:2309.14691

Published Date

2023/9/26

Artificial neural networks (ANNs) with recurrence and self-attention have been shown to be Turing-complete (TC). However, existing work has shown that these ANNs require multiple turns or unbounded computation time, even with unbounded precision in weights, in order to recognize TC grammars. Moreover, under constraints such as fixed- or bounded-precision neurons and time, ANNs without memory struggle to recognize even context-free languages. In this work, we extend the theoretical foundation for the second-order recurrent network (2nd-order RNN) and prove there exists a class of 2nd-order RNN that is Turing-complete with bounded time. This model is capable of directly encoding a transition table into its recurrent weights, enabling bounded-time computation, and is interpretable by design. We also demonstrate that 2nd-order RNNs, without memory, under bounded weight and time constraints, outperform modern models such as vanilla RNNs and gated recurrent units in recognizing regular grammars. We provide an upper bound and a stability analysis on the maximum number of neurons required by 2nd-order RNNs to recognize any class of regular grammar. Extensive experiments on the Tomita grammars support our findings, demonstrating the importance of tensor connections in crafting computationally efficient RNNs. Finally, we show 2nd-order RNNs are also interpretable by extraction and can extract state machines with higher success rates as compared to first-order RNNs. Our results extend the theoretical foundations of RNNs and offer promising avenues for future explainable AI research.
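
The "transition table encoded in recurrent weights" construction can be sketched directly. The toy below, assuming one-hot states and symbols and a sharpened sigmoid, runs a 2-state parity DFA inside a second-order (tensor) RNN; the weight values and gain constant are illustrative assumptions:

```python
import numpy as np

def second_order_step(W, h, x, gain=20.0):
    """One step of a second-order (tensor) RNN:
    h'[j] = sigmoid( gain * (sum_{i,k} W[j,i,k] * h[i] * x[k] - 0.5) )."""
    pre = np.einsum('jik,i,k->j', W, h, x)
    return 1.0 / (1.0 + np.exp(-gain * (pre - 0.5)))

# Encode a 2-state parity DFA over {0, 1} directly in the tensor weights:
# delta(q0,0)=q0, delta(q0,1)=q1, delta(q1,0)=q1, delta(q1,1)=q0.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W = np.zeros((2, 2, 2))                 # indexed [next_state, state, symbol]
for (q, a), q_next in delta.items():
    W[q_next, q, a] = 1.0

h = np.array([1.0, 0.0])                # one-hot start state q0
for symbol in [1, 1, 0, 1]:             # read the string "1101"
    h = second_order_step(W, h, np.eye(2)[symbol])
print(np.argmax(h))                     # 1: odd number of 1s, ends in q1
```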

Differentiable modeling to unify machine learning and physical models and advance Geosciences

Authors

Chaopeng Shen, Alison P Appling, Pierre Gentine, Toshiyuki Bandai, Hoshin Gupta, Alexandre Tartakovsky, Marco Baity-Jesi, Fabrizio Fenicia, Daniel Kifer, Li Li, Xiaofeng Liu, Wei Ren, Yi Zheng, Ciaran J Harman, Martyn Clark, Matthew Farthing, Dapeng Feng, Praveen Kumar, Doaa Aboelyazeed, Farshid Rahmani, Hylke E Beck, Tadd Bindas, Dipankar Dwivedi, Kuai Fang, Marvin Höge, Chris Rackauckas, Tirthankar Roy, Chonggang Xu, Binayak Mohanty, Kathryn Lawson

Journal

arXiv preprint arXiv:2301.04027

Published Date

2023/1/10

Process-Based Modeling (PBM) and Machine Learning (ML) are often perceived as distinct paradigms in the geosciences. Here we present differentiable geoscientific modeling as a powerful pathway toward dissolving the perceived barrier between them and ushering in a paradigm shift. For decades, PBM offered benefits in interpretability and physical consistency but struggled to efficiently leverage large datasets. ML methods, especially deep networks, presented strong predictive skills yet lacked the ability to answer specific scientific questions. While various methods have been proposed for ML-physics integration, an important underlying theme -- differentiable modeling -- is not sufficiently recognized. Here we outline the concepts, applicability, and significance of differentiable geoscientific modeling (DG). "Differentiable" refers to accurately and efficiently calculating gradients with respect to model variables, critically enabling the learning of high-dimensional unknown relationships. DG refers to a range of methods connecting varying amounts of prior knowledge to neural networks and training them together, capturing a different scope than physics-guided machine learning and emphasizing first principles. Preliminary evidence suggests DG offers better interpretability and causality than ML, improved generalizability and extrapolation capability, and strong potential for knowledge discovery, while approaching the performance of purely data-driven ML. DG models require less training data while scaling favorably in performance and efficiency with increasing amounts of data. With DG, geoscientists may be better able to frame and investigate …
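
A minimal sketch conveys the core mechanic: write a process-based simulation in an autodiff framework and learn its unknown parameters by gradient descent through the physics. The toy linear-reservoir model and all constants below are assumptions for illustration, not any model from the paper:

```python
import torch

torch.manual_seed(0)

def simulate(k, P):
    """Linear reservoir: S_{t+1} = S_t + P_t - k*S_t, streamflow Q_t = k*S_t."""
    S, Q = torch.zeros(()), []
    for p in P:
        Q.append(k * S)
        S = S + p - k * S
    return torch.stack(Q)

P = torch.rand(200)                         # synthetic rainfall forcing
Q_obs = simulate(torch.tensor(0.3), P)      # "observations" from k_true = 0.3
Q_obs = Q_obs + 0.01 * torch.randn(200)

k = torch.tensor(0.1, requires_grad=True)   # unknown physical parameter
opt = torch.optim.Adam([k], lr=0.02)
for _ in range(300):
    opt.zero_grad()
    loss = ((simulate(k, P) - Q_obs) ** 2).mean()
    loss.backward()                         # gradients flow through the physics
    opt.step()
print(k.item())                             # approaches the true value 0.3
```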

The 2010 Census Confidentiality Protections Failed, Here's How and Why

Authors

John M Abowd, Tamara Adams, Robert Ashmead, David Darais, Sourya Dey, Simson L Garfinkel, Nathan Goldschlag, Daniel Kifer, Philip Leclerc, Ethan Lew, Scott Moore, Rolando A Rodríguez, Ramy N Tadros, Lars Vilhuber

Published Date

2023/12/25

Using only 34 published tables, we reconstruct five variables (census block, sex, age, race, and ethnicity) in the confidential 2010 Census person records. Using the 38-bin age variable tabulated at the census block level, at most 20.1% of reconstructed records can differ from their confidential source on even a single value for these five variables. Using only published data, an attacker can verify that all records in 70% of all census blocks (97 million people) are perfectly reconstructed. The tabular publications in Summary File 1 thus have prohibited disclosure risk similar to the unreleased confidential microdata. Reidentification studies confirm that an attacker can, within blocks with perfect reconstruction accuracy, correctly infer the actual census response on race and ethnicity for 3.4 million vulnerable population uniques (persons with nonmodal characteristics) with 95% accuracy, the same precision as the confidential data achieve and far greater than statistical baselines. The flaw in the 2010 Census framework was the assumption that aggregation prevented accurate microdata reconstruction, justifying weaker disclosure limitation methods than were applied to 2010 Census public microdata. The framework used for 2020 Census publications defends against attacks that are based on reconstruction, as we also demonstrate here. Finally, we show that alternatives to the 2020 Census Disclosure Avoidance System with similar accuracy (enhanced swapping) also fail to protect confidentiality, and those that partially defend against reconstruction attacks (incomplete suppression implementations) destroy the primary statutory use case: data for …
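
The reconstruction logic can be illustrated at toy scale (the numbers below are invented, and the real attack works from 34 published tables, not two): enumerate all record sets consistent with the published marginals; when few candidates survive, the tables effectively pin down the microdata.

```python
from itertools import product, combinations_with_replacement
from collections import Counter

# Invented toy block: 3 people, two published marginal tables.
sex_counts = {"M": 2, "F": 1}
age_counts = {"30-39": 1, "40-49": 2}

domain = list(product(sex_counts, age_counts))   # all (sex, age) cells
solutions = [
    combo for combo in combinations_with_replacement(domain, 3)
    if Counter(s for s, _ in combo) == Counter(sex_counts)
    and Counter(a for _, a in combo) == Counter(age_counts)
]
print(solutions)  # only 2 candidate record sets survive these two tables;
                  # with more tables (race, ethnicity, ...) one often remains
```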

A Floating-Point Secure Implementation of the Report Noisy Max with Gap Mechanism

Authors

Zeyu Ding, John Durrell, Daniel Kifer, Prottay Protivash, Guanhong Wang, Yuxin Wang, Yingtai Xiao, Danfeng Zhang

Journal

arXiv preprint arXiv:2308.08057

Published Date

2023/8/15

The Noisy Max mechanism and its variations are fundamental private selection algorithms that are used to select items from a set of candidates (such as the most common diseases in a population), while controlling the privacy leakage in the underlying data. A recently proposed extension, Noisy Top-k with Gap, provides numerical information about how much better the selected items are compared to the non-selected items (e.g., how much more common are the selected diseases). This extra information comes at no privacy cost but crucially relies on infinite precision for the privacy guarantees. In this paper, we provide a finite-precision secure implementation of this algorithm that takes advantage of integer arithmetic.
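
A floating-point sketch of the underlying mechanism (not the paper's secure implementation, which replaces this sampling with integer arithmetic) looks like the following; the Laplace noise scale shown is a common textbook choice for sensitivity-1 counts and is an assumption here:

```python
import numpy as np

rng = np.random.default_rng(0)

def report_noisy_max_with_gap(counts, eps):
    """Add Laplace noise to each candidate count, then release (a) the index
    of the largest noisy count and (b) the gap between the top two noisy
    counts. The point of the gap variant is that this extra number comes at
    no additional privacy cost."""
    noisy = np.asarray(counts, dtype=float) + rng.laplace(scale=2.0 / eps,
                                                          size=len(counts))
    order = np.argsort(noisy)[::-1]
    return int(order[0]), float(noisy[order[0]] - noisy[order[1]])

print(report_noisy_max_with_gap([120, 95, 110], eps=1.0))
```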

Answering Private Linear Queries Adaptively using the Common Mechanism

Authors

Daniel Kifer

Published Date

2023

This entry is the software artifact accompanying the VLDB 2023 paper "Answering Private Linear Queries Adaptively using the Common Mechanism" by Yingtai Xiao, Guanhong Wang, Danfeng Zhang, and Daniel Kifer. It is deposited as open-access program code in ScholarSphere (Penn State University Libraries) under the MIT License, with Yingtai Xiao and Guanhong Wang credited as primary developers. The deposit (commonmech2.zip, 67.3 MB, deposited June 2023) also includes the Age, Gender, and HispRace datasets extracted from the 2010 Census Summary File 1.

Disclosure Avoidance for the 2020 Census Demographic and Housing Characteristics File

Authors

Ryan Cumings-Menon, Robert Ashmead, Daniel Kifer, Philip Leclerc, Matthew Spence, Pavel Zhuravlev, John M Abowd

Published Date

2023/12/18

In "The 2020 Census Disclosure Avoidance System TopDown Algorithm," Abowd et al. (2022) describe the concepts and methods used by the Disclosure Avoidance System (DAS) to produce formally private output in support of the 2020 Census data product releases, with a particular focus on the DAS implementation that was used to create the 2020 Census Redistricting Data (P.L. 94-171) Summary File. In this paper we describe the updates to the DAS that were required to release the Demographic and Housing Characteristics (DHC) File, which provides more granular tables than other data products, such as the Redistricting Data Summary File. We also describe the final configuration parameters used for the production DHC DAS implementation, as well as subsequent experimental data products to facilitate development of tools that provide confidence intervals for confidential 2020 Census tabulations.

Differentiable modelling to unify machine learning and physical models for geosciences

Authors

Chaopeng Shen, Alison P Appling, Pierre Gentine, Toshiyuki Bandai, Hoshin Gupta, Alexandre Tartakovsky, Marco Baity-Jesi, Fabrizio Fenicia, Daniel Kifer, Li Li, Xiaofeng Liu, Wei Ren, Yi Zheng, Ciaran J Harman, Martyn Clark, Matthew Farthing, Dapeng Feng, Praveen Kumar, Doaa Aboelyazeed, Farshid Rahmani, Yalan Song, Hylke E Beck, Tadd Bindas, Dipankar Dwivedi, Kuai Fang, Marvin Höge, Chris Rackauckas, Binayak Mohanty, Tirthankar Roy, Chonggang Xu, Kathryn Lawson

Published Date

2023/8

Process-based modelling offers interpretability and physical consistency in many domains of geosciences but struggles to leverage large datasets efficiently. Machine-learning methods, especially deep networks, have strong predictive skills yet are unable to answer specific scientific questions. In this Perspective, we explore differentiable modelling as a pathway to dissolve the perceived barrier between process-based modelling and machine learning in the geosciences and demonstrate its potential with examples from hydrological modelling. ‘Differentiable’ refers to accurately and efficiently calculating gradients with respect to model variables or parameters, enabling the discovery of high-dimensional unknown relationships. Differentiable modelling involves connecting (flexible amounts of) prior physical knowledge to neural networks, pushing the boundary of physics-informed machine learning. It offers better …

Estimating Uncertainty in Landslide Segmentation Models

Authors

Savinay Nagendra, Chaopeng Shen, Daniel Kifer

Journal

arXiv preprint arXiv:2311.11138

Published Date

2023/11/18

Landslides are a recurring, widespread hazard. Preparation and mitigation efforts can be aided by a high-quality, large-scale dataset that covers global at-risk areas. Such a dataset currently does not exist and is impossible to construct manually. Recent automated efforts focus on deep learning models for landslide segmentation (pixel labeling) from satellite imagery. However, it is also important to characterize the uncertainty or confidence levels of such segmentations. Accurate and robust uncertainty estimates can enable low-cost (in terms of manual labor) oversight of auto-generated landslide databases to resolve errors, identify hard negative examples, and increase the size of labeled training data. In this paper, we evaluate several methods for assessing pixel-level uncertainty of the segmentation. Three methods that do not require architectural changes were compared, including Pre-Threshold activations, Monte-Carlo Dropout and Test-Time Augmentation -- a method that measures the robustness of predictions in the face of data augmentation. Experimentally, the quality of the latter method was consistently higher than the others across a variety of models and metrics in our dataset.
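
Test-Time Augmentation is simple to sketch: run the model on augmented copies of the input, undo each augmentation on the output, and treat per-pixel disagreement as uncertainty. The interface below (a `predict` callable and flip-only augmentations) is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

def tta_uncertainty(predict, image, flips=((), (0,), (1,), (0, 1))):
    """Run `predict` on flipped copies of the input, undo each flip on the
    output, and report the per-pixel mean (prediction) and standard
    deviation (uncertainty) of the probability maps."""
    probs = []
    for axes in flips:
        aug = np.flip(image, axis=axes) if axes else image
        out = predict(aug)
        probs.append(np.flip(out, axis=axes) if axes else out)
    probs = np.stack(probs)
    return probs.mean(axis=0), probs.std(axis=0)
```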


Backpropagation-free deep learning with recursive local representation alignment

Authors

Alexander G Ororbia, Ankur Mali, Daniel Kifer, C Lee Giles

Journal

Proceedings of the AAAI Conference on Artificial Intelligence

Published Date

2023/6/26

Training deep neural networks on large-scale datasets requires significant hardware resources whose costs (even on cloud platforms) put them out of reach of smaller organizations, groups, and individuals. Backpropagation (backprop), the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize. Furthermore, researchers must continually develop various specialized techniques, such as particular weight initializations and enhanced activation functions, to ensure stable parameter optimization. Our goal is to seek an effective, neuro-biologically plausible alternative to backprop that can be used to train deep networks. In this paper, we propose a backprop-free procedure, recursive local representation alignment, for training large-scale architectures. Experiments with residual networks on CIFAR-10 and the large benchmark, ImageNet, show that our algorithm generalizes as well as backprop while converging sooner due to weight updates that are parallelizable and computationally less demanding. This is empirical evidence that a backprop-free algorithm can scale up to larger datasets.
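
As a much-simplified sketch of the local-learning idea (not the paper's recursive local representation alignment procedure), each layer below forms a local target from a fixed feedback projection of the output error and updates its weights from purely local quantities, so no error signal is chained through the whole network:

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(0, 0.1, (8, 4))      # input -> hidden
W2 = rng.normal(0, 0.1, (4, 2))      # hidden -> output
E2 = rng.normal(0, 0.1, (2, 4))      # fixed feedback (error) projection
lr, beta = 0.05, 0.1

def step(x, y):
    """One local-learning step: each layer updates from its own input,
    activation, and a locally formed error, with no end-to-end backprop."""
    global W1, W2
    h1 = np.tanh(x @ W1)             # forward pass
    h2 = np.tanh(h1 @ W2)
    e2 = h2 - y                      # output error
    e1 = beta * (e2 @ E2)            # hidden mismatch via fixed feedback
    W2 -= lr * np.outer(h1, e2)      # the two updates are independent and
    W1 -= lr * np.outer(x, e1)       # could run in parallel
    return float((e2 ** 2).mean())

x, y = rng.normal(size=8), np.array([1.0, 0.0])
for _ in range(200):
    mse = step(x, y)                 # error typically shrinks over steps
```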

An in-depth examination of requirements for disclosure risk assessment

Authors

Ron S Jarmin, John M Abowd, Robert Ashmead, Ryan Cumings-Menon, Nathan Goldschlag, Michael B Hawes, Sallie Ann Keller, Daniel Kifer, Philip Leclerc, Jerome P Reiter, Rolando A Rodríguez, Ian Schmutte, Victoria A Velkoff, Pavel Zhuravlev

Journal

Proceedings of the National Academy of Sciences

Published Date

2023/10/24

The use of formal privacy to protect the confidentiality of responses in the 2020 Decennial Census of Population and Housing has triggered renewed interest and debate over how to measure the disclosure risks and societal benefits of the published data products. We argue that any proposal for quantifying disclosure risk should be based on prespecified, objective criteria. We illustrate this approach to evaluate the absolute disclosure risk framework, the counterfactual framework underlying differential privacy, and prior-to-posterior comparisons. We conclude that satisfying all the desiderata is impossible, but counterfactual comparisons satisfy the most while absolute disclosure risk satisfies the fewest. Furthermore, we explain that many of the criticisms levied against differential privacy would be levied against any technology that is not equivalent to direct, unrestricted access to confidential data. More research is …

Free gap estimates from the exponential mechanism, sparse vector, noisy max and related algorithms

Authors

Zeyu Ding, Yuxin Wang, Yingtai Xiao, Guanhong Wang, Danfeng Zhang, Daniel Kifer

Journal

The VLDB Journal

Published Date

2023/1

Private selection algorithms, such as the exponential mechanism, noisy max and sparse vector, are used to select items (such as queries with large answers) from a set of candidates, while controlling privacy leakage in the underlying data. Such algorithms serve as building blocks for more complex differentially private algorithms. In this paper we show that these algorithms can release additional information related to the gaps between the selected items and the other candidates for free (i.e., at no additional privacy cost). This free gap information can improve the accuracy of certain follow-up counting queries by up to 66%. We obtain these results from a careful privacy analysis of these algorithms. Based on this analysis, we further propose novel hybrid algorithms that can dynamically save additional privacy budget.
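
To make the "free gap" idea concrete, here is a toy sparse-vector-style sketch (not the paper's calibrated algorithms): alongside each above-threshold report, the noisy margin over the noisy threshold is released as well. The noise scales are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_vector_with_gap(queries, threshold, eps, k):
    """Compare noisy query answers to a noisy threshold; for each of up to k
    'above' reports, also release the noisy margin over the threshold."""
    noisy_T = threshold + rng.laplace(scale=2.0 / eps)
    results, above = [], 0
    for q in queries:
        noisy_q = q + rng.laplace(scale=4.0 * k / eps)
        if noisy_q >= noisy_T:
            results.append(noisy_q - noisy_T)    # the gap, released for free
            above += 1
            if above >= k:
                break                            # budget for k answers is spent
        else:
            results.append(None)                 # report only 'below threshold'
    return results

print(sparse_vector_with_gap([30, 5, 42, 18, 60], threshold=25, eps=1.0, k=2))
```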

Using a physics-informed neural network and fault zone acoustic monitoring to predict lab earthquakes

Authors

Prabhav Borate, Jacques Rivière, Chris Marone, Ankur Mali, Daniel Kifer, Parisa Shokouhi

Journal

Nature communications

Published Date

2023/6/21

Predicting failure in solids has broad applications including earthquake prediction which remains an unattainable goal. However, recent machine learning work shows that laboratory earthquakes can be predicted using micro-failure events and temporal evolution of fault zone elastic properties. Remarkably, these results come from purely data-driven models trained with large datasets. Such data are equivalent to centuries of fault motion rendering application to tectonic faulting unclear. In addition, the underlying physics of such predictions is poorly understood. Here, we address scalability using a novel Physics-Informed Neural Network (PINN). Our model encodes fault physics in the deep learning loss function using time-lapse ultrasonic data. PINN models outperform data-driven models and significantly improve transfer learning for small training datasets and conditions outside those used in training. Our work …
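
The loss-encoding pattern of a PINN can be sketched with a stand-in constraint (the paper's constraint couples time-lapse ultrasonic data to fault physics; here a toy ODE dy/dt = -k*y takes its place, and the network size and optimizer settings are assumptions):

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
k = 1.5                                            # known physical constant

# A few noisy observations of the true solution y(t) = exp(-k t).
t_obs = torch.tensor([[0.0], [0.5], [1.0]])
y_obs = torch.exp(-k * t_obs) + 0.01 * torch.randn(3, 1)

t_col = torch.linspace(0, 2, 64).reshape(-1, 1).requires_grad_(True)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    data_loss = ((net(t_obs) - y_obs) ** 2).mean()       # fit the data
    y = net(t_col)                                       # collocation points
    dy_dt = torch.autograd.grad(y.sum(), t_col, create_graph=True)[0]
    physics_loss = ((dy_dt + k * y) ** 2).mean()         # enforce dy/dt = -k*y
    (data_loss + physics_loss).backward()
    opt.step()
```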

See the list of professors at Daniel Kifer's university (Penn State University)

Daniel Kifer FAQs

What is Daniel Kifer's h-index at Penn State University?

Daniel Kifer's h-index is 43 in total and 39 since 2020.

What are Daniel Kifer's top articles?

The top articles of Daniel Kifer at Penn State University include:

PatchRefineNet: Improving Binary Segmentation by Incorporating Signals from Optimal Patch-wise Binarization

Reply to Muralidhar et al., Kenny et al., and Hotz et al.: The benefits of engagement with external research teams

An optimal and scalable matrix mechanism for noisy marginals under convex loss functions

Exact Privacy Analysis of the Gaussian Sparse Histogram Mechanism

Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network

On the Tensor Representation and Algebraic Homomorphism of the Neural State Turing Machine

Physics-guided machine learning for laboratory earthquake prediction

On the computational complexity and formal hierarchy of second order recurrent neural networks

...


What are Daniel Kifer's research interests?

Daniel Kifer's research interests are privacy and machine learning.

What is Daniel Kifer's total number of citations?

Daniel Kifer has 17,432 citations in total.
