Alexey Svyatkovskiy

Princeton University

H-index: 200

North America – United States

Description

Alexey Svyatkovskiy, a distinguished researcher at Princeton University with an exceptional h-index of 200 and a recent h-index of 99 (since 2020), specializes in the fields of Particle physics, Machine learning, and Computational Science.

Professor Information

University

Princeton University

Position

Microsoft; Purdue University

Citations (all)

159,712

Citations (since 2020)

51,449

Cited by

125,775

h-index (all)

200

h-index (since 2020)

99

i10-index (all)

604

i10-index (since 2020)

495

University Profile Page

Princeton University

Research & Interests List

Particle physics

Machine learning

Computational Science

Top articles of Alexey Svyatkovskiy

Transfer learning system for automated software engineering tasks

A transfer learning system is used for the development of neural transformer models pertaining to software engineering tasks. The transfer learning system trains source code domain neural transformer models with attention, in various configurations, on a large unsupervised training dataset of source code programs and/or source code-related natural language text. A web service provides the trained models for use in developing a model that may be fine-tuned on a supervised training dataset associated with a software engineering task, thereby generating a tool to perform the software engineering task.
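
As a rough illustration of that pre-train-then-fine-tune flow, the sketch below fine-tunes a small publicly available sequence-to-sequence model on one toy supervised pair. The model name, task prompt, and data are placeholders; the patent does not specify an implementation.

```python
# Minimal sketch of fine-tuning a pre-trained model on a supervised
# software engineering task (hypothetical data; t5-small is a stand-in).
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # pre-trained weights

# One toy supervised pair for a hypothetical bug-fixing task.
src = tokenizer("fix bug: if x = 1:", return_tensors="pt")
tgt = tokenizer("if x == 1:", return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
loss = model(input_ids=src.input_ids,
             attention_mask=src.attention_mask,
             labels=tgt).loss        # standard supervised fine-tuning objective
loss.backward()
optimizer.step()
print(float(loss))
```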

Published Date

2024/2/13

Custom models for source code generation via prefix-tuning

Custom source code generation models are generated by tuning a pre-trained deep learning model by freezing the model parameters and optimizing a prefix. The tuning process is distributed across a user space and a model space where the embedding and output layers are performed in the user space and the execution of the model is performed in a model space that is isolated from the user space. The tuning process updates the embeddings of the prefix across the separate execution spaces in a manner that preserves the privacy of the data used in the tuning process.
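
A minimal sketch of the prefix-tuning idea follows, assuming a Hugging Face-style GPT-2 as a stand-in for the pre-trained code model: all model parameters are frozen and only a short prefix of virtual-token embeddings is optimized. The user-space/model-space split described in the abstract is not reproduced here.

```python
# Prefix-tuning sketch: freeze the model, train only the prefix embeddings.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False                       # frozen model parameters

prefix_len, hidden = 8, model.config.n_embd
prefix = torch.nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)
opt = torch.optim.Adam([prefix], lr=1e-3)         # only the prefix is optimized

ids = tok("def add(a, b): return a + b", return_tensors="pt").input_ids
embeds = model.transformer.wte(ids)               # token embeddings
inputs = torch.cat([prefix.unsqueeze(0), embeds], dim=1)
labels = torch.cat([torch.full((1, prefix_len), -100), ids], dim=1)  # ignore prefix positions

loss = model(inputs_embeds=inputs, labels=labels).loss
loss.backward()                                   # gradients flow only to the prefix
opt.step()
```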

Published Date

2023/5/25

Unit test case generation with transformers

A unit test generation system employs a neural transformer model with attention to generate candidate unit test sequences given a focal method of a programming language. The neural transformer model is pre-trained with source code programs and natural language text and fine-tuned with mapped test case pairs. A mapped test case pair includes a focal method and a unit test case for the focal method. In this manner, the neural transformer model is trained to learn the semantics and statistical properties of a natural language, the syntax of a programming language and the relationships between the code elements of the programming language and the syntax of a unit test case.
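
The sketch below shows roughly how a focal method might be serialized and passed to a fine-tuned sequence-to-sequence model to sample candidate unit test sequences. The prompt format and model are assumptions, not the patented design.

```python
# Candidate unit test generation from a focal method (illustrative only).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")   # stand-in for the fine-tuned model
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# One half of a "mapped test case pair": the focal method under test.
focal_method = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"
inputs = tok("generate test: " + focal_method, return_tensors="pt")

# Sample several candidate unit test sequences via beam search.
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3,
                         max_length=64)
for o in outputs:
    print(tok.decode(o, skip_special_tokens=True))
```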

Published Date

2024/2/6

Debugging tool for code generation neural language models

A debugging tool identifies the smallest subset of an input sequence, or rationales, that influenced a neural language model to generate an output sequence. The debugging tool uses the rationales to understand why the model made its predictions and, in particular, which input tokens had the most impact on the output sequence. In the case of erroneous output, the rationales are used to alter the input sequence to avoid the error or to tailor a new training dataset to retrain the model to improve its performance.
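
A toy sketch of one way to compute rationales is input ablation: greedily rank input tokens by how much removing each one changes the model's output score. The actual method in the patent may differ; the scoring callback here is a hypothetical stand-in for a neural language model.

```python
# Rationale extraction by token ablation (toy illustration).
def extract_rationale(tokens, score, k=3):
    base = score(tokens)
    impact = []
    for i in range(len(tokens)):
        ablated = tokens[:i] + tokens[i + 1:]
        impact.append((base - score(ablated), i))   # drop in score when token i is removed
    top = [t for t in sorted(impact, reverse=True)[:k] if t[0] > 0]
    return [tokens[i] for _, i in sorted(top, key=lambda t: t[1])]

# Dummy scorer: pretends "return" and "sum" drive the model's prediction.
toy = lambda ts: 0.4 * ("return" in ts) + 0.4 * ("sum" in ts) + 0.1
print(extract_rationale(
    ["def", "f", "(", "xs", ")", ":", "return", "sum", "(", "xs", ")"], toy))
```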

Published Date

2024/3/28

Multi-lingual line-of-code completion system

A code completion tool uses a neural transformer model to generate candidate sequences to complete a line of source code. The neural transformer model is trained using a conditional language modeling objective on a large unsupervised dataset that includes source code programs written in several different programming languages. The neural transformer model is used within a beam search that predicts the most likely candidate sequences for a code snippet under development.
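
The beam search itself can be sketched independently of the model. Below is a minimal pure-Python version over a toy next-token distribution; the real system scores tokens with the trained neural transformer.

```python
# Beam search over candidate line completions (toy next-token model).
import math

def beam_search(next_probs, beam_width=3, steps=4, eol="\n"):
    beams = [([], 0.0)]                              # (tokens, log-probability)
    for _ in range(steps):
        grown = []
        for toks, lp in beams:
            if toks and toks[-1] == eol:             # completed line: keep as-is
                grown.append((toks, lp))
                continue
            for tok, p in next_probs(toks).items():
                grown.append((toks + [tok], lp + math.log(p)))
        beams = sorted(grown, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

# Toy distribution: always proposes the same three continuations.
toy = lambda toks: {"x": 0.5, " = ": 0.3, "\n": 0.2}
for toks, lp in beam_search(toy):
    print(repr("".join(toks)), round(lp, 2))
```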

Published Date

2024/1/25

Code generation through reinforcement learning using code-quality rewards

A deep learning model trained to predict source code is tuned for a target source code generation task through reinforcement learning, using a reward score that considers the quality of the source code predicted during the tuning process. The reward score is adjusted to consider code-quality factors and source code metrics. The code-quality factors account for the predicted source code having syntactic correctness, successful compilation, successful execution, successful invocation, readability, functional correctness, and coverage. The source code metrics generate a score based on how close the predicted source code is to a ground truth code.
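
A hedged sketch of such a composite reward follows, combining one code-quality factor (syntactic correctness) with one source code metric (similarity to the ground truth). The factors and weights are illustrative, not those in the patent.

```python
# Composite code-quality reward for RL tuning (illustrative weights).
import ast
import difflib

def code_quality_reward(predicted: str, ground_truth: str) -> float:
    reward = 0.0
    try:
        ast.parse(predicted)          # code-quality factor: syntactic correctness
        reward += 0.5
    except SyntaxError:
        return 0.0                    # unparsable predictions earn nothing
    # Source code metric: closeness to the ground truth (diff ratio here).
    reward += 0.5 * difflib.SequenceMatcher(None, predicted, ground_truth).ratio()
    return reward

print(code_quality_reward("def f(x):\n    return x + 1\n",
                          "def f(x):\n    return x + 1\n"))   # -> 1.0
```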

Published Date

2024/3/26

Syntax subtree code strengthening

During software development, embodiments find various kinds of weak spots in source code and automatically suggest fixes to strengthen the code, without requiring developers to expressly select weakness finder mechanisms or fixer mechanisms by navigating a development tool's menu system. Weakness finders may analyze code using items such as hole detection, diagnostic errors, test results, changed code matches, prospective code discrepancies, generated code confidence scores, generated suggestion competition, and artificial intelligence. Weak spots and their context are submitted to weak spot fixers, which may generate fix suggestions using functionalities such as code synthesis, refactoring, autocompletion, retesting, and artificial intelligence. Fix candidate sets may be evaluated for consistency, diagnostic errors, and discrepancies. Snippets may be dynamically filled for presentation to a user.
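
As a toy illustration of the finder/fixer pipeline shape, the sketch below scans a syntax tree for one kind of weak spot (a bare except clause) and suggests a strengthened replacement; the patent covers far broader finder and fixer mechanisms than this.

```python
# Toy "weakness finder": flag bare except clauses in a syntax tree.
import ast

SOURCE = "try:\n    work()\nexcept:\n    pass\n"

def find_weak_spots(source: str):
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            yield node.lineno, "bare except", "except Exception:"

for line, weakness, fix in find_weak_spots(SOURCE):
    print(f"line {line}: {weakness} -> suggest {fix!r}")
```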

Published Date

2024/1/4

Automatic generation of assert statements for unit test cases

An assert statement generator employs a neural transformer model with attention to generate candidate assert statements for a unit test method that tests a focal method. The neural transformer model is pre-trained with source code programs and natural language text and fine-tuned with test-assert triplets. A test-assert triplet includes a source code snippet that includes: (1) a unit test method with an assert placeholder; (2) the focal method; and (3) a corresponding assert statement. In this manner, the neural transformer model is trained to learn the semantics and statistical properties of a natural language, the syntax of a programming language, and the relationships between the code elements of the programming language and the syntax of an assert statement.
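
The sketch below shows one plausible way a test-assert triplet could be serialized for fine-tuning; the placeholder token and layout are assumptions rather than the patented format.

```python
# Serializing a test-assert triplet: masked test + focal method -> assert.
ASSERT_PLACEHOLDER = "<ASSERT>"   # hypothetical mask token

def build_triplet_input(test_method: str, focal_method: str) -> str:
    # Model input: the unit test with its assert masked, plus the focal method.
    return f"{test_method}\n{focal_method}"

test = f"def test_inc():\n    result = inc(1)\n    {ASSERT_PLACEHOLDER}"
focal = "def inc(x):\n    return x + 1"
target = "assert result == 2"     # the assert statement the model learns to emit

print(build_triplet_input(test, focal))
print("target:", target)
```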

Published Date

2023/11/28

Professor FAQs

What is Alexey Svyatkovskiy's h-index at Princeton University?

Alexey Svyatkovskiy's h-index is 200 overall and 99 since 2020.

What are Alexey Svyatkovskiy's research interests?

Alexey Svyatkovskiy's research interests are: Particle physics, Machine learning, and Computational Science.

What is Alexey Svyatkovskiy's total number of citations?

Alexey Svyatkovskiy has 159,712 citations in total.
