John P.A. Ioannidis

Stanford University

H-index: 254

United States (North America)

Professor Information

University

Stanford University

Position

Professor of Medicine/Health Research & Policy/Biomedical Data Science/Statistics

Citations(all)

540,658

Citations(since 2020)

322,700

Cited By

343,045

hIndex(all)

254

hIndex(since 2020)

178

i10Index(all)

1,282

i10Index(since 2020)

1,012

University Profile Page

Stanford University

Research Interests

meta-research

clinical epidemiology

evidence-based medicine

research methods

meta-analysis

Top articles of John P.A. Ioannidis

Guidance to best tools and practices for systematic reviews

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how …

Authors

Kat Kolaski,Lynne Romeiser Logan,John PA Ioannidis

Published Date

2024/1

Consolidated guidance for behavioral intervention pilot and feasibility studies

In the behavioral sciences, conducting pilot and/or feasibility studies (PFS) is a key step that provides essential information used to inform the design, conduct, and implementation of a larger-scale trial. There are more than 160 published guidelines, reporting checklists, frameworks, and recommendations related to PFS. All of these publications offer some form of guidance on PFS, but many focus on one or a few topics. This makes it difficult for researchers wanting to gain a broader understanding of all the relevant and important aspects of PFS and requires them to seek out multiple sources of information, which increases the risk of missing key considerations to incorporate into their PFS. The purpose of this study was to develop a consolidated set of considerations for the design, conduct, implementation, and reporting of PFS for interventions conducted in the behavioral sciences. To develop this consolidation, we undertook a review of the published guidance on PFS in combination with expert consensus (via a Delphi study) from the authors who wrote such guidance to inform the identified considerations. A total of 161 PFS-related guidelines, checklists, frameworks, and recommendations were identified via a review of recently published behavioral intervention PFS and backward/forward citation tracking of well-known PFS literature (e.g., the CONSORT extension for PFS). Authors of all 161 PFS publications were invited to complete a three-round Delphi survey, which informed the creation of a consolidated list of considerations for the design, conduct, and reporting of PFS conducted by researchers in the behavioral sciences. A total of …

Authors

Christopher D Pfledderer,Lauren von Klinggraeff,Sarah Burkart,Alexsandra da Silva Bandeira,David R Lubans,Russell Jago,Anthony D Okely,Esther MF van Sluijs,John Ioannidis,James F Thrasher,Xiaoming Li,Michael W Beets

Journal

Pilot and Feasibility Studies

Published Date

2024/12

Footprint of publication selection bias on meta-analyses in medicine, environmental sciences, psychology, and economics

Publication selection bias undermines the systematic accumulation of evidence. To assess the extent of this problem, we survey over 68,000 meta-analyses containing over 700,000 effect size estimates from medicine (67,386/597,699), environmental sciences (199/12,707), psychology (605/23,563), and economics (327/91,421). Our results indicate that meta-analyses in economics are the most severely contaminated by publication selection bias, closely followed by meta-analyses in environmental sciences and psychology, whereas meta-analyses in medicine are contaminated the least. After adjusting for publication selection bias, the median probability of the presence of an effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to 29.7% in medicine. The median absolute effect sizes (in terms of standardized mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d = 0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental sciences, and from d = 0.24 to d = 0.13 in medicine.

Authors

Frantisek Bartos,Maximilian Maier,Eric-Jan Wagenmakers,Franziska Nippold,Hristos Doucouliagos,John Ioannidis,Willem M Otte,Martina Sladekova,Teshome K Deressa,Stephan Bruns,Daniele Fanelli,TD Stanley

Published Date

2024
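The effect sizes in the abstract above are standardized mean differences (Cohen's d): the difference between two group means divided by their pooled standard deviation. A minimal sketch of that calculation, using made-up group summaries rather than data from the study:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) based on the
    pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Illustrative numbers only: two groups of 50 with equal SDs.
d = cohens_d(mean1=12.0, sd1=4.0, n1=50, mean2=10.0, sd2=4.0, n2=50)
print(round(d, 2))  # (12 - 10) / 4 = 0.5
```

With equal group sizes and equal SDs, the pooled SD reduces to the common SD, so d here is simply the mean difference of 2.0 divided by 4.0.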

Leading researchers in the leadership of leading research universities: meta-research analysis

It is unknown to what extent leading researchers are currently involved in the leadership of leading research universities as presidents or as executive board members. The academic administrative leader (president or equivalent role) of each of the 146 Carnegie tier 1 USA universities and of any of the top-100 universities per Times Higher Education (THE) 2024 ranking, and the members of the executive governing bodies (Board of Trustees, Council, Corporation, or similar) for each of the top-20 universities per THE 2024 ranking, were examined for high citation impact in their scientific subfield. Highly-cited was defined as the top 2% of a composite citation indicator (that considers citations, h-index, co-authorship-adjusted hm-index, and citations to papers as single, first, or last author) in their main scientific subfield based on career-long impact until end-2022 among all scholars focusing on the same subfield and having published ≥5 full papers. Very highly-cited was similarly defined as the top 0.2%. Science was divided into 174 fields per the Science-Metrix classification. 38/146 (26%) tier 1 USA university leaders as of end-2023 were highly-cited and 5/146 (3%) were very highly-cited. The respective figures for the top-100 THE 2024 universities globally were 43/100 and 12/100. For the 13 US universities among the top-20 of THE 2024, the probability of their leader being highly-cited was lower (6/13, 46%) than the probability of a randomly chosen active full tenured professor from their faculty being highly-cited (52-77%). Across 444 board members of 14 top-10 THE 2024 universities with data, only 65 (15%) were academics, and 19 (4%) were …

Authors

John Ioannidis

Journal

bioRxiv

Published Date

2024

Are the Risk of Generalizability Biases Generalizable? A Meta-Epidemiological Study

Background: Preliminary studies (e.g., pilot/feasibility studies) can result in misleading evidence that an intervention is ready to be evaluated in a large-scale trial when it is not. Risk of Generalizability Biases (RGBs, a set of external validity biases) represent study features that influence estimates of effectiveness, often inflating estimates in preliminary studies that are not replicated in larger-scale trials. While RGBs have been empirically established in interventions targeting obesity, the extent to which RGBs generalize to other health areas is unknown. Understanding the relevance of RGBs across health behavior intervention research can inform organized efforts to reduce their prevalence. Purpose: The purpose of our study was to examine whether RGBs generalize outside of obesity-related interventions. Methods: A systematic review identified health behavior interventions across four behaviors unrelated to obesity that follow a similar intervention development framework of preliminary studies informing larger-scale trials (i.e., tobacco use disorder, alcohol use disorder, interpersonal violence, and behaviors related to increased sexually transmitted infections). To be included, published interventions had to be tested in a preliminary study followed by testing in a larger trial (the two studies thus comprising a study pair). We extracted health-related outcomes and coded the presence/absence of RGBs. We used meta-regression models to estimate the impact of RGBs on the change in standardized mean difference (ΔSMD) between the preliminary study and larger trial. Results: We identified sixty-nine study pairs, of which forty-seven were eligible for …

Authors

Lauren von Klinggraeff,Chris D Pfledderer,Sarah Burkart,Kaitlyn Ramey,Michal Smith,Alexander C McLain,Bridget Armstrong,R Glenn Weaver,Anthony Okely,David Lubans,John PA Ioannidis,Russell Jago,Gabrielle Turner-McGrievy,James Thrasher,Xiaoming Li,Michael W Beets

Published Date

2024/2/26

Author Correction: Mortality outcomes with hydroxychloroquine and chloroquine in COVID-19 from an international collaborative meta-analysis of randomized trials

The original version of this article contained an error in Table 1, which misidentified the trial included in the meta-analysis registered as NCT04323527 as CloroCOVID19II instead of CloroCOVID19III. The NCT04323527 registration includes the trials CloroCOVID19I and CloroCOVID19III. CloroCOVID19I was not included in the meta-analysis. In addition, the original version of the Methods section inadvertently omitted details of which formulations of hydroxychloroquine or chloroquine the reported dosages refer to. The following information has been included in the legend for Table 1 and in the corrected Methods section: "In all trials that used hydroxychloroquine, dosages refer to hydroxychloroquine sulfate. In trials that used chloroquine, the dosages for ARCHAIC, ChiCTR2000030054 and ChiCTR2000031204 refer to chloroquine phosphate, while those for CloroCOVID19II and CloroCOVID19III refer to …

Authors

Cathrine Axfors,Andreas M Schmitt,Perrine Janiaud,Janneke van’t Hooft,Sherief Abd-Elsalam,Ehab F Abdo,Benjamin S Abella,Javed Akram,Ravi K Amaravadi,Derek C Angus,Yaseen M Arabi,Shehnoor Azhar,Lindsey R Baden,Arthur W Baker,Leila Belkhir,Thomas Benfield,Marvin AH Berrevoets,Cheng-Pin Chen,Tsung-Chia Chen,Shu-Hsing Cheng,Chien-Yu Cheng,Wei-Sheng Chung,Yehuda Z Cohen,Lisa N Cowan,Olav Dalgard,Fernando F de Almeida E Val,Marcus VG de Lacerda,Gisely C de Melo,Lennie Derde,Vincent Dubee,Anissa Elfakir,Anthony C Gordon,Carmen M Hernandez-Cardenas,Thomas Hills,Andy IM Hoepelman,Yi-Wen Huang,Bruno Igau,Ronghua Jin,Felipe Jurado-Camacho,Khalid S Khan,Peter G Kremsner,Benno Kreuels,Cheng-Yu Kuo,Thuy Le,Yi-Chun Lin,Wu-Pu Lin,Tse-Hung Lin,Magnus Nakrem Lyngbakken,Colin McArthur,Bryan J McVerry,Patricia Meza-Meneses,Wuelton M Monteiro,Susan C Morpeth,Ahmad Mourad,Mark J Mulligan,Srinivas Murthy,Susanna Naggie,Shanti Narayanasamy,Alistair Nichol,Lewis A Novack,Sean M O’Brien,Nwora Lance Okeke,Léna Perez,Rogelio Perez-Padilla,Laurent Perrin,Arantxa Remigio-Luna,Norma E Rivera-Martinez,Frank W Rockhold,Sebastian Rodriguez-Llamazares,Robert Rolfe,Rossana Rosa,Helge Røsjø,Vanderson S Sampaio,Todd B Seto,Muhammad Shahzad,Shaimaa Soliman,Jason E Stout,Ireri Thirion-Romero,Andrea B Troxel,Ting-Yu Tseng,Nicholas A Turner,Robert J Ulrich,Stephen R Walsh,Steve A Webb,Jesper M Weehuizen,Maria Velinova,Hon-Lai Wong,Rebekah Wrenn,Fernando G Zampieri,Wu Zhong,David Moher,Steven N Goodman,John PA Ioannidis,Lars G Hemkens

Journal

Nature communications

Published Date

2024/2/5

Best Practices for Data Management and Sharing in Experimental Biomedical Research

Effective data management is crucial for scientific integrity and reproducibility, a cornerstone of scientific progress. Well-organized and well-documented data enable validation and building upon results. Data management encompasses activities including organization, documentation, storage, sharing, and preservation. Robust data management establishes credibility, fostering trust within the scientific community and benefiting researchers' careers. In experimental biomedicine, comprehensive data management is vital due to the typically intricate protocols, extensive metadata, and large datasets. Low-throughput experiments, in particular, require careful management to address variations and errors in protocols and raw data quality. Transparent and accountable research practices rely on accurate documentation of procedures, data collection, and analysis methods. Proper data management ensures long-term …

Authors

Teresa Cunha-Oliveira,John PA Ioannidis,Paulo J Oliveira

Published Date

2024/3/7

Methods proposed for monitoring the implementation of evidence-based research: a cross-sectional study

Objectives: Evidence-based research (EBR) is the systematic and transparent use of prior research to inform a new study so that it answers questions that matter in a valid, efficient, and accessible manner. This study surveyed experts about existing (e.g., citation analysis) and new methods for monitoring EBR and collected ideas about implementing these methods. Study Design and Setting: We conducted a cross-sectional study via an online survey between November 2022 and March 2023. Participants were experts from the fields of evidence synthesis and research methodology in health research. Open-ended questions were coded by recurring themes; descriptive statistics were used for quantitative questions. Results: Twenty-eight expert participants suggested that citation analysis should be supplemented with content evaluation (not just what is cited but also in which context), content expert involvement, and …

Authors

Livia Puljak,Małgorzata M Bala,Joanna Zając,Tomislav Meštrović,Sandra Buttigieg,Mary Yanakoulia,Matthias Briel,Carole Lunny,Wiktoria Lesniak,Tina Poklepović Peričić,Pablo Alonso-Coello,Mike Clarke,Benjamin Djulbegovic,Gerald Gartlehner,Konstantinos Giannakou,Anne-Marie Glenny,Claire Glenton,Gordon Guyatt,Lars G Hemkens,John PA Ioannidis,Roman Jaeschke,Karsten Juhl Jørgensen,Carolina Castro Martins-Pfeifer,Ana Marušić,Lawrence Mbuagbaw,Jose Francisco Meneses Echavez,David Moher,Barbara Nussbaumer-Streit,Matthew J Page,Giordano Pérez-Gaxiola,Karen A Robinson,Georgia Salanti,Ian J Saldanha,Jelena Savović,James Thomas,Andrea C Tricco,Peter Tugwell,Joost van Hoof,Dawid Pieper

Published Date

2024/4/1

Professor FAQs

What is John P.A. Ioannidis's h-index at Stanford University?

John P.A. Ioannidis's h-index is 254 overall and 178 for publications since 2020.
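For readers unfamiliar with the metric: a scholar's h-index is the largest h such that h of their papers each have at least h citations. A minimal sketch of the calculation, using illustrative citation counts rather than Ioannidis's actual record:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Rank papers by citation count (descending); the h-index is the
    # last rank i at which the i-th paper still has >= i citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Illustrative example: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # 4, since four papers have >= 4 citations
```

The same computation over all of a scholar's papers, restricted to citations received since a cutoff year, yields the "since 2020" variant reported above.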

What are John P.A. Ioannidis's research interests?

The research interests of John P.A. Ioannidis are: meta-research, clinical epidemiology, evidence-based medicine, research methods, and meta-analysis.

What is John P.A. Ioannidis's total number of citations?

John P.A. Ioannidis has 540,658 citations in total.

Who are the co-authors of John P.A. Ioannidis?

Frequent co-authors of John P.A. Ioannidis include Gordon Guyatt, Douglas G Altman, Matthias Egger, Julian Higgins, and David Moher.

Co-Authors

Gordon Guyatt, McMaster University (h-index: 296)

Douglas G Altman, University of Oxford (h-index: 281)

Matthias Egger, Universität Bern (h-index: 183)

Prof Julian Higgins, University of Bristol (h-index: 183)

David Moher, University of Ottawa (h-index: 182)
