Arthur A. Stone

University of Southern California

H-index: 115

North America-United States

Professor Information

University

University of Southern California

Position

___

Citations(all)

65507

Citations(since 2020)

24563

Cited By

53406

hIndex(all)

115

hIndex(since 2020)

66

i10Index(all)

241

i10Index(since 2020)

177

Top articles of Arthur A. Stone

Questions in self‐selection studies used in consumer research for nonprescription drug candidates: Limitations and recommendations

Self‐selection studies assess participants' ability to use label information to decide whether a proposed nonprescription drug is appropriate for their use. The optimal use of information from participants' responses in these trials has not been studied. Data from a self‐selection study for a progestogen‐only contraceptive pill (norgestrel 0.075 mg tablets, Opill) were analyzed. Participants (1772 evaluable) were provided with the proposed drug facts label and asked sequentially: (1) Given what you have read on the label and your own health history, is this product okay or not okay for you to take home today and start to use?; (2) Why or why not?; (3) Would you like to purchase Opill today to take home for your own use?; (4) Why or why not? The Per Guidance Analysis based on the US Food and Drug Administration's Guidance considered only the response to the first question as defining a selector. The Complete …

Authors

Eric P Brass,Russell D Bradford,Arthur Stone

Journal

Clinical and Translational Science

Published Date

2024/2

Cognitive functioning and the quality of survey responses: An individual participant data meta-analysis of 10 epidemiological studies of aging

Objectives Self-reported survey data are essential for monitoring the health and well-being of the population as it ages. For studies of aging to provide precise and unbiased results, it is necessary that the self-reported information meets high psychometric standards. In this study, we examined whether the quality of survey responses in panel studies of aging depends on respondents’ cognitive abilities. Methods Over 17 million survey responses from 157,844 participants aged 50 years and older in 10 epidemiological studies of aging were analyzed. We derived 6 common statistical indicators of response quality from each participant’s data and estimated the correlations with participants’ cognitive test scores at each study wave. Effect sizes (correlations) were synthesized across studies, cognitive tests, and waves using individual participant data meta-analysis methods. Results …

Authors

Stefan Schneider,Pey-Jiuan Lee,Raymond Hernandez,Doerte U Junghaenel,Arthur A Stone,Erik Meijer,Haomiao Jin,Arie Kapteyn,Bart Orriens,Elizabeth M Zelinski

Journal

The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences

Published Date

2024/3/9
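The effect-size synthesis step described in the abstract above can be sketched with conventional fixed-effect pooling of correlations via the Fisher z transform. The study values below are hypothetical, and individual participant data meta-analysis, as used in the paper, pools raw data rather than summary correlations; this is only a minimal illustration of the pooling idea.

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform; Var(z) is approximately 1 / (n - 3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Back-transform a pooled z to the correlation scale."""
    return math.tanh(z)

def pooled_correlation(studies):
    """Fixed-effect inverse-variance pooling of correlations.
    `studies` is a list of (r, n) pairs, one per study."""
    num = den = 0.0
    for r, n in studies:
        w = n - 3  # inverse of the sampling variance of z
        num += w * fisher_z(r)
        den += w
    return inv_fisher_z(num / den)

# Hypothetical per-study correlations between response quality and cognition
studies = [(0.20, 500), (0.15, 1200), (0.30, 300)]
print(round(pooled_correlation(studies), 3))
```

The pooled estimate necessarily lies between the smallest and largest study-level correlations, with the largest study carrying the most weight.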

Response times in Ecological Momentary Assessment (EMA): shedding light on the response process with a drift diffusion model

Mental processes underlying people’s responses to Ecological Momentary Assessments (EMA) have rarely been studied. In cognitive psychology, one of the most popular and successful mental process models is the drift diffusion model. It decomposes response time (RT) data to distinguish how fast information is accessed and processed (“drift rate”), and how much information is accessed and processed (“boundary separation”). We examined whether the drift diffusion model could be successfully applied to people’s RTs for EMA questions and could shed light on between- and within-person variation in the mental process components underlying momentary reports. We analyzed EMA data (up to 6 momentary surveys/day for one week) from 954 participants in the Understanding America Study (29,067 completed measurement occasions). An item-response-theory diffusion model was applied to RTs associated …

Authors

Stefan Schneider,Raymond Hernandez,Doerte U Junghaenel,Bart Orriens,Pey-Jiuan Lee,Arthur A Stone

Journal

Current Psychology

Published Date

2024/2
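The decomposition described in the abstract above can be illustrated with a minimal simulation of a two-boundary diffusion process. This is a toy Euler-scheme sketch with made-up parameter values, not the item-response-theory diffusion model the study applied.

```python
import random

def simulate_trial(drift, boundary, dt=0.001, noise=1.0, rng=random):
    """One two-boundary diffusion trial (Euler scheme); returns (choice, rt)."""
    x = boundary / 2.0  # unbiased starting point midway between the boundaries
    t = 0.0
    while 0.0 < x < boundary:
        # accumulate evidence: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (1 if x >= boundary else 0), t

random.seed(1)
# A higher drift rate means faster information uptake, hence shorter RTs on average
fast = [simulate_trial(drift=3.0, boundary=1.0)[1] for _ in range(200)]
slow = [simulate_trial(drift=0.3, boundary=1.0)[1] for _ in range(200)]
print(sum(fast) / len(fast), sum(slow) / len(slow))
```

Widening the boundary separation instead of lowering the drift rate would also lengthen RTs, which is why the model can distinguish speed of processing from response caution.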

Daily sampling frequency and sampling duration affect reliability of person-level estimates of physical activity outcomes: Optimizing Ecological Momentary Assessment studies of …

Studies on the interrelationship between physical activity (PA) behaviors and EMA-assessed constructs should use measures with high reliability of both the EMA-assessed constructs and the time-matched accelerometry-assessed PA behavior. The aim of this paper is to evaluate how the reliability of accelerometry-assessed PA outcomes is affected by different EMA sampling schemes. Emulating relevant sampling schemes in EMA studies, multiple random samples of real-world accelerometer data (measured via activPAL worn for ∼7 days) were drawn that varied in the number of daily samples (3, 5, and 7 daily samples) and in the duration of each sample (5 min, 60 min, and 120 min), totaling 9 sampling schemes. The reliability of the resulting PA outcomes was estimated by correlating weekly aggregates of the sampled data with the true parameter values (weekly aggregates of all data). A total of 4231 days were …

Authors

Meynard John L Toledo,Stefan Schneider,Arthur A Stone

Journal

Psychology of Sport and Exercise

Published Date

2024/5/1
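The resampling logic described in the abstract above can be sketched with synthetic data. The minute-level activity values below are made up (standing in for the activPAL accelerometer data), and the two schemes shown are the sparsest and densest of the nine the study emulated.

```python
import random

def weekly_mean(minutes):
    """True parameter value: the mean over all observed minutes."""
    return sum(minutes) / len(minutes)

def sampled_estimate(minutes, samples_per_day, window_min,
                     minutes_per_day=1440, rng=random):
    """Draw `samples_per_day` random windows of `window_min` minutes from each
    day and return the mean activity over the sampled minutes only."""
    days = [minutes[d:d + minutes_per_day]
            for d in range(0, len(minutes), minutes_per_day)]
    picked = []
    for day in days:
        for _ in range(samples_per_day):
            start = rng.randrange(0, len(day) - window_min)
            picked.extend(day[start:start + window_min])
    return sum(picked) / len(picked)

random.seed(7)
# Synthetic week: 7 days x 1440 minutes of activity values
week = [random.gauss(100, 30) for _ in range(7 * 1440)]
truth = weekly_mean(week)
sparse = sampled_estimate(week, samples_per_day=3, window_min=5)   # 3 x 5-min windows/day
dense = sampled_estimate(week, samples_per_day=7, window_min=120)  # 7 x 120-min windows/day
# Denser schemes cover more minutes, so their estimates tend to sit closer to the truth
print(truth, sparse, dense)
```

Repeating the draw many times and correlating the sampled aggregates with the true weekly values, as the paper does, turns this coverage difference into a reliability estimate per scheme.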

Just-in-time adaptive ecological momentary assessment (JITA-EMA)

Interest in just-in-time adaptive interventions (JITAI) has rapidly increased in recent years. One core challenge for JITAI is the efficient and precise measurement of tailoring variables that are used to inform the timing of momentary intervention delivery. Ecological momentary assessment (EMA) is often used for this purpose, even though EMA in its traditional form was not designed specifically to facilitate momentary interventions. In this article, we introduce just-in-time adaptive EMA (JITA-EMA) as a strategy to reduce participant response burden and decrease measurement error when EMA is used as a tailoring variable in JITAI. JITA-EMA builds on computerized adaptive testing methods developed for purposes of classification (computerized classification testing, CCT), and applies them to the classification of momentary states within individuals. The goal of JITA-EMA is to administer a small and informative selection …

Authors

Stefan Schneider,Doerte U Junghaenel,Joshua M Smyth,Cheng K Fred Wen,Arthur A Stone

Journal

Behavior research methods

Published Date

2024/2
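The computerized classification testing idea underlying JITA-EMA can be illustrated with Wald's sequential probability ratio test, a classic stopping rule used in CCT. This is a simplified sketch with hypothetical endorsement probabilities, not the authors' item-selection algorithm.

```python
import math

def sprt_classify(responses, p_high, p_low, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on binary item responses.
    Stop administering items as soon as the cumulative log-likelihood ratio
    crosses a decision boundary; returns (classification, items_used)."""
    upper = math.log((1 - beta) / alpha)   # decide the "high" momentary state
    lower = math.log(beta / (1 - alpha))   # decide the "low" momentary state
    llr = 0.0
    for i, x in enumerate(responses, start=1):
        p1 = p_high if x else 1 - p_high   # likelihood under the "high" state
        p0 = p_low if x else 1 - p_low     # likelihood under the "low" state
        llr += math.log(p1 / p0)
        if llr >= upper:
            return "high", i
        if llr <= lower:
            return "low", i
    return "undecided", len(responses)

# Hypothetical momentary-state item: endorsed with p=.8 in the "high" state, .3 otherwise
print(sprt_classify([1, 1, 1, 1, 0, 1, 1], p_high=0.8, p_low=0.3))
```

Because the test stops as soon as the state is classified with the requested error rates, a consistent responder answers only a few items per prompt, which is the burden reduction JITA-EMA aims for.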

Comparisons of Self-Report With Objective Measurements Suggest Faster Responding but Little Change in Response Quality Over Time in Ecological Momentary Assessment Studies

Response times (RTs) to ecological momentary assessment (EMA) items often decrease after repeated EMA administration, but whether this is accompanied by lower response quality requires investigation. We examined the relationship between EMA item RTs and EMA response quality. In one data set, declining response quality was operationalized as decreasing correspondence over time between subjective and objective measures of blood glucose taken at the same time. In a second EMA study data set, declining response quality was operationalized as decreasing correspondence between subjective ratings of memory test performance and objective memory test scores. We assumed that measurement error in the objective measures did not increase across time, meaning that decreasing correspondence across days within a person could be attributed to lower response quality. RTs to EMA items decreased …

Authors

Raymond Hernandez,Stefan Schneider,Amy E Pinkham,Colin A Depp,Robert Ackerman,Elizabeth A Pyatak,Varsha D Badal,Raeanne C Moore,Philip D Harvey,Kensie Funsch,Arthur A Stone

Journal

Assessment

Published Date

2024/4/18

Administering selected subscales of patient-reported outcome questionnaires to reduce patient burden and increase relevance: a position statement on a modular approach

The patient-reported outcome (PRO) questionnaires considered in this paper contain multiple subscales, although not all subscales are equally relevant for administration in all target patient populations. A group of measurement experts, developers, license holders, and other scientific-, regulatory-, payer-, and patient-focused stakeholders participated in a panel to discuss the benefits and challenges of a modular approach, defined here as administering a subset of subscales out of a multi-scaled PRO measure. This paper supports the position that it is acceptable, and sometimes preferable, to take a modular approach when administering PRO questionnaires, provided that certain conditions have been met and a rigorous selection process has been performed. Based on the experiences and perspectives of all stakeholders, using a modular approach can reduce patient burden and increase the relevance of the items …

Authors

Daniel Serrano,David Cella,Don Husereau,Bellinda King-Kallimanis,Tito Mendoza,Tomas Salmonson,Arthur Stone,Alexandra Zaleta,Devender Dhanda,Andriy Moshyk,Fei Liu,Alan L Shields,Fiona Taylor,Sasha Spite,James W Shaw,Julia Braverman

Journal

Quality of Life Research

Published Date

2024/1/24

Can you tell people’s cognitive ability level from their response patterns in questionnaires?

Questionnaires are ever-present in survey research. In this study, we examined whether an indirect indicator of general cognitive ability could be developed based on response patterns in questionnaires. We drew on two established phenomena characterizing connections between cognitive ability and people's performance on basic cognitive tasks, and examined whether they apply to questionnaire responses. (1) The worst performance rule (WPR) states that people's worst performance on multiple sequential tasks is more indicative of their cognitive ability than their average or best performance. (2) The task complexity hypothesis (TCH) suggests that relationships between cognitive ability and performance increase with task complexity. We conceptualized items of a questionnaire as a series of cognitively demanding tasks. A graded response model was used to estimate respondents' performance for each item …

Authors

Stefan Schneider,Raymond Hernandez,Doerte U Junghaenel,Haomiao Jin,Pey-Jiuan Lee,Hongxin Gao,Danny Maupin,Bart Orriens,Erik Meijer,Arthur A Stone

Journal

Behavior Research Methods

Published Date

2024/3/25
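The graded response model mentioned in the abstract above can be sketched in its standard form, where the probability of responding at or above each ordered category follows a logistic curve in the latent trait. The item parameters below are hypothetical; in the paper they are estimated from the questionnaire data.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grm_category_probs(theta, discrimination, thresholds):
    """Graded response model: probabilities of each ordered response category.
    P(X >= k) = sigmoid(a * (theta - b_k)) for increasing thresholds b_k;
    individual category probabilities are differences of adjacent cumulatives."""
    cum = ([1.0]
           + [sigmoid(discrimination * (theta - b)) for b in thresholds]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical 4-category item with thresholds -1, 0, 1 and discrimination 1.5
probs = grm_category_probs(theta=0.5, discrimination=1.5, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])
```

Given a fitted model, a respondent's observed category choices can then be scored against these expected probabilities, which is the per-item performance estimate the abstract refers to.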

Professor FAQs

What is Arthur A. Stone's h-index at University of Southern California?

Arthur A. Stone's h-index is 115 overall and 66 since 2020.

What is Arthur A. Stone's total number of citations?

Arthur A. Stone has 65,507 citations in total.

Who are the co-authors of Arthur A. Stone?

The co-authors of Arthur A. Stone include Daniel Kahneman, Norbert Schwarz, Clemens Kirschbaum, Angus Deaton, Alan Krueger, and Paul Pilkonis.

Co-Authors

H-index: 160
Daniel Kahneman

Princeton University

H-index: 152
Norbert Schwarz

University of Southern California

H-index: 147
Clemens Kirschbaum

Technische Universität Dresden

H-index: 119
Angus Deaton

Princeton University

H-index: 119
Alan Krueger

Princeton University

H-index: 98
Paul Pilkonis

University of Pittsburgh
