Christos H PAPADIMITRIOU

Columbia University in the City of New York

H-index: 131

United States (North America)

Description

Christos H PAPADIMITRIOU is a distinguished researcher at Columbia University in the City of New York, specializing in Algorithms, Complexity, Game Theory, Evolution, and Computational Neuroscience. He has an exceptional h-index of 131 overall and a recent h-index of 54 (since 2020).

Professor Information

University

Columbia University in the City of New York

Position

___

Citations(all)

93854

Citations(since 2020)

17188

Cited By

87803

hIndex(all)

131

hIndex(since 2020)

54

i10Index(all)

331

i10Index(since 2020)

205

Research & Interests List

Algorithms

Complexity

Game Theory

Evolution

Computational Neuroscience

Top articles of Christos H PAPADIMITRIOU

Computation with Sequences of Assemblies in a Model of the Brain

Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain’s learning capabilities remain unmatched. How cognition arises from neural activity is the central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou (2020) and has been subsequently shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, navigation, to list a few). Here we show that, in the same model, time can be captured naturally as precedence through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. If the stimulus sequence is presented to two brain areas simultaneously, a scaffolded representation is created, resulting in more efficient memorization and recall, in agreement with cognitive experiments. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences …

Authors

Max Dabagia,Christos Papadimitriou,Santosh Vempala

Published Date

2024/3/15
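
The sequence-memorization mechanism described in this abstract invites a toy simulation. The sketch below is a minimal illustration under assumed parameters (a single area of n neurons, assemblies formed by k-winner-take-all, a flat Hebbian increment beta), not the authors' model or code:

import numpy as np

# Toy assembly model (illustrative only): an assembly is the set of k most
# strongly driven neurons, and Hebbian updates strengthen synapses from the
# currently firing assembly to its successor, encoding temporal precedence.
rng = np.random.default_rng(0)
n, k = 200, 20                       # neurons in the area, assembly size (assumed)
W = rng.uniform(0, 0.1, (n, n))      # random initial synaptic weights
beta = 1.0                           # Hebbian plasticity increment (assumed)

def cap(drive):
    """k-winner-take-all: the k most strongly driven neurons fire."""
    return set(np.argsort(drive)[-k:])

# Three fixed stimulus assemblies representing the sequence A -> B -> C.
seq = [set(rng.choice(n, k, replace=False)) for _ in range(3)]

# "Presentation": repeatedly co-activate consecutive assemblies so that each
# one comes to excite the next.
for _ in range(5):
    for prev, nxt in zip(seq, seq[1:]):
        for i in prev:
            for j in nxt:
                W[i, j] += beta

# "Recall": ignite the first assembly and let activity flow forward.
active = seq[0]
for step, target in enumerate(seq[1:], start=1):
    drive = np.zeros(n)
    for i in active:
        drive += W[i]                # input each neuron receives from the firing set
    active = cap(drive)
    print(f"step {step}: overlap with stored assembly = {len(active & target) / k:.0%}")

Running this prints near-100% overlap at every step: igniting the first assembly replays the whole stored sequence, which is the qualitative behavior the paper proves for its model.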

On Limitations of the Transformer Architecture

What are the root causes of hallucinations in large language models (LLMs)? We use Communication Complexity to prove that the Transformer layer is incapable of composing functions (e.g., identifying a grandparent of a person in a genealogy) if the domains of the functions are large enough; we show through examples that this inability is already empirically present when the domains are quite small. We also point out that several mathematical tasks at the core of the so-called compositional tasks thought to be hard for LLMs are unlikely to be solvable by Transformers, for large enough instances and assuming that certain well-accepted conjectures in the field of Computational Complexity are true.

Authors

Binghui Peng,Srini Narayanan,Christos Papadimitriou

Journal

arXiv preprint arXiv:2402.08164

Published Date

2024/2/13
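
To make the composition barrier concrete, here is a hypothetical rendering of the "grandparent" example from this abstract; the names (make_parent_map, grandparent) and the domain size are illustrative, not the paper's formal construction:

import random

def make_parent_map(n, seed=0):
    """A random functional relation parent: {0..n-1} -> {0..n-1}."""
    rng = random.Random(seed)
    return {x: rng.randrange(n) for x in range(n)}

def grandparent(parent, x):
    """The composed function parent(parent(x)): one extra lookup 'hop'."""
    return parent[parent[x]]

parent = make_parent_map(n=1000)
print(grandparent(parent, 42))  # ground truth for the query posed to the model

The two-hop lookup is trivial for any program with random access to the table; the paper's communication-complexity argument is that a single Transformer layer cannot reliably compute it once the domain size n exceeds what the layer's activations can carry.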

The complexity of non-stationary reinforcement learning

The problem of continual learning in the domain of reinforcement learning, often called non-stationary reinforcement learning, has been identified as an important challenge to the application of reinforcement learning. We prove a worst-case complexity result, which we believe captures this challenge: Modifying the probabilities or the reward of a single state-action pair in a reinforcement learning problem requires an amount of time almost as large as the number of states in order to keep the value function up to date, unless the strong exponential time hypothesis (SETH) is false; SETH is a widely accepted strengthening of the P ≠ NP conjecture. Recall that the number of states in current applications of reinforcement learning is typically astronomical. In contrast, we show that just adding a new state-action pair is considerably easier to implement.

Authors

Christos Papadimitriou,Binghui Peng

Journal

arXiv preprint arXiv:2307.06877

Published Date

2023/7/13
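
The cost the lower bound formalizes shows up already in the naive baseline. The sketch below is an illustrative example on a random MDP, not the paper's reduction: after editing the reward of one state-action pair, the straightforward way to keep the value function current is a fresh sweep over all states.

import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 50, 4, 0.9                    # state count, action count, discount (assumed)
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] is a distribution over next states
R = rng.uniform(0, 1, (S, A))               # immediate rewards

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator to a fixed point."""
    V = np.zeros(S)
    while True:
        Q = R + gamma * P @ V               # Q[s, a] = R[s, a] + gamma * E[V(next state)]
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new

V = value_iteration(P, R, gamma)
R[7, 2] += 0.5                              # modify the reward of ONE state-action pair
V_updated = value_iteration(P, R, gamma)    # naive fix touches every state again
print("states whose value changed:", int(np.sum(~np.isclose(V, V_updated))))

Virtually every state's value shifts after the single edit, so a sweep proportional to the number of states is hard to avoid; the SETH-based result makes this intuition precise.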

Computation with Sequences in the Brain

Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain's learning capabilities remain unmatched. How cognition arises from neural activity is a central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou [2020] and has been subsequently shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, navigation, to list a few). Here we show that, in the same model, time can be captured naturally as precedence through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences. Through an extension of this mechanism, the model can be shown to be capable of universal computation. We support our analysis with a number of experiments to probe the limits of learning in this model in …

Authors

Max Dabagia,Christos H Papadimitriou,Santosh S Vempala

Journal

arXiv preprint arXiv:2306.03812

Published Date

2023/6/6
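
As a purely symbolic illustration of the finite-state-machine claim (the paper realizes this with neural assemblies, not with the dictionary code below), a deterministic transition function can be reconstructed from example runs through the machine:

def learn_fsm(runs):
    """Each run is a list of (state, input_symbol, next_state) observations."""
    delta = {}
    for run in runs:
        for state, symbol, nxt in run:
            delta[(state, symbol)] = nxt    # deterministic machine: last write wins
    return delta

def execute(delta, start, inputs):
    """Run the learned machine on an input string."""
    state = start
    for symbol in inputs:
        state = delta[(state, symbol)]
    return state

# A two-state parity machine, learned from two short runs.
runs = [[("even", 1, "odd"), ("odd", 1, "even"), ("even", 0, "even")],
        [("odd", 0, "odd")]]
delta = learn_fsm(runs)
print(execute(delta, "even", [1, 0, 1, 1]))  # -> "odd": three 1s have been seen

Once every transition has appeared in some presented sequence, the learned table reproduces the machine exactly; the paper's contribution is showing that assemblies and plasticity can achieve the same, and that extending the mechanism yields universal computation.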

An impossibility theorem in game dynamics

The Nash equilibrium—a combination of choices by the players of a game from which no self-interested player would deviate—is the predominant solution concept in game theory. Even though every game has a Nash equilibrium, it is not known whether there are deterministic behaviors of the players who play a game repeatedly that are guaranteed to converge to a Nash equilibrium of the game from all starting points. If one assumes that the players’ behavior is a discrete-time or continuous-time rule whereby the current mixed strategy profile is mapped to the next, this question becomes a problem in the theory of dynamical systems. We apply this theory, and in particular Conley index theory, to prove a general impossibility result: There exist games, for which all game dynamics fail to converge to Nash equilibria from all starting points. The games which help prove this impossibility result are degenerate, but we …

Authors

Jason Milionis,Christos Papadimitriou,Georgios Piliouras,Kelly Spendlove

Journal

Proceedings of the National Academy of Sciences

Published Date

2023/10/10
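
For concreteness, the objects the theorem quantifies over are discrete-time (or continuous-time) rules mapping the current mixed strategy profile to the next. The sketch below, with an assumed step size eta, runs multiplicative weights (a discrete relative of replicator dynamics) on Matching Pennies, one well-known dynamic whose iterates cycle around the unique Nash equilibrium (1/2, 1/2) instead of settling there:

import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])               # row player's payoffs; column player gets -A
eta = 0.1                                 # learning rate (assumed)

def mw_step(x, y):
    """One simultaneous multiplicative-weights update of both mixed strategies."""
    x_new = x * np.exp(eta * (A @ y))     # row player's payoff vector is A y
    y_new = y * np.exp(eta * (-A.T @ x))  # column player's payoff vector is -A^T x
    return x_new / x_new.sum(), y_new / y_new.sum()

x = np.array([0.9, 0.1])                  # start away from equilibrium
y = np.array([0.5, 0.5])
for _ in range(2000):
    x, y = mw_step(x, y)
print(x, y)                               # still far from (1/2, 1/2) after 2000 steps

The impossibility theorem is much stronger than this single example: it exhibits games for which every dynamic of this kind, not just multiplicative weights, fails to converge to a Nash equilibrium from all starting points.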

Neuroscience needs network science

The brain is a complex system comprising a myriad of interacting neurons, posing significant challenges in understanding its structure, function, and dynamics. Network science has emerged as a powerful tool for studying such interconnected systems, offering a framework for integrating multiscale data and complexity. To date, network methods have significantly advanced functional imaging studies of the human brain and have facilitated the development of control theory-based applications for directing brain activity. Here, we discuss emerging frontiers for network neuroscience in the brain atlas era, addressing the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease. We underscore the importance of fostering interdisciplinary opportunities through workshops, conferences, and funding initiatives, such as …

Authors

Dániel L Barabási,Ginestra Bianconi,Ed Bullmore,Mark Burgess,SueYeon Chung,Tina Eliassi-Rad,Dileep George,István A Kovács,Hernán Makse,Thomas E Nichols,Christos Papadimitriou,Olaf Sporns,Kim Stachenfeld,Zoltán Toroczkai,Emma K Towlson,Anthony M Zador,Hongkui Zeng,Albert-László Barabási,Amy Bernard,György Buzsáki

Journal

Journal of Neuroscience

Published Date

2023/8/23

The Architecture of a Biologically Plausible Language Organ

We present a simulated biologically plausible language organ, made up of stylized but realistic neurons, synapses, brain areas, plasticity, and a simplified model of sensory perception. We show through experiments that this model succeeds in an important early step in language acquisition: the learning of nouns, verbs, and their meanings, from the grounded input of only a modest number of sentences. Learning in this system is achieved through Hebbian plasticity, and without backpropagation. Our model goes beyond a parser previously designed in a similar environment, with the critical addition of a biologically plausible account for how language can be acquired in the infant's brain, not just processed by a mature brain.

Authors

Daniel Mitropolsky,Christos H Papadimitriou

Journal

arXiv preprint arXiv:2306.15364

Published Date

2023/6/27

Nash, Conley, and computation: Impossibility and incompleteness in game dynamics

Under what conditions do the behaviors of players, who play a game repeatedly, converge to a Nash equilibrium? If one assumes that the players' behavior is a discrete-time or continuous-time rule whereby the current mixed strategy profile is mapped to the next, this becomes a problem in the theory of dynamical systems. We apply this theory, and in particular the concepts of chain recurrence, attractors, and Conley index, to prove a general impossibility result: there exist games for which any dynamics is bound to have starting points that do not end up at a Nash equilibrium. We also prove a stronger result for ε-approximate Nash equilibria: there are games such that no game dynamics can converge (in an appropriate sense) to ε-Nash equilibria, and in fact the set of such games has positive measure. Further numerical results demonstrate that this holds for any ε between zero and 0.09. Our results establish that, although the notions of Nash equilibria (and its computation-inspired approximations) are universally applicable in all games, they are also fundamentally incomplete as predictors of long-term behavior, regardless of the choice of dynamics.

Authors

Jason Milionis,Christos Papadimitriou,Georgios Piliouras,Kelly Spendlove

Journal

arXiv preprint arXiv:2203.14129

Published Date

2022/3/26

Professor FAQs

What is Christos H PAPADIMITRIOU's h-index at Columbia University in the City of New York?

Christos H PAPADIMITRIOU's h-index is 131 overall and 54 since 2020.

What are Christos H PAPADIMITRIOU's research interests?

The research interests of Christos H PAPADIMITRIOU are: Algorithms, Complexity, Game Theory, Evolution, and Computational Neuroscience.

What is Christos H PAPADIMITRIOU's total number of citations?

Christos H PAPADIMITRIOU has 93,854 citations in total.

Who are the co-authors of Christos H PAPADIMITRIOU?

The co-authors of Christos H PAPADIMITRIOU include Scott Shenker, Jon Kleinberg, John Tsitsiklis, Mihalis Yannakakis, Joseph S. B. Mitchell, and Santosh S. Vempala.

Co-Authors

Scott Shenker (H-index: 161), University of California, Berkeley

Jon Kleinberg (H-index: 122), Cornell University

John Tsitsiklis (H-index: 96), Massachusetts Institute of Technology

Mihalis Yannakakis (H-index: 94), Columbia University in the City of New York

Joseph S. B. Mitchell (H-index: 74), Stony Brook University

Santosh S. Vempala (H-index: 71), Georgia Institute of Technology
