Participants shift response deadlines based on list difficulty during reading-aloud megastudies

Michael J. Cortese, Maya M. Khanna, Robert Kopp, Jonathan B. Santo, Kailey S. Preston, Tyler van Zuiden

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words together, etc.), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8% more RT variance (and 2.6% more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8% more variance in RTs (1.2% in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.

Original language: English (US)
Pages (from-to): 1-11
Number of pages: 11
Journal: Memory and Cognition
DOI: 10.3758/s13421-016-0678-8
State: Accepted/In press - Feb 16 2017

All Science Journal Classification (ASJC) codes

  • Neuropsychology and Physiological Psychology
  • Experimental and Cognitive Psychology
  • Arts and Humanities (miscellaneous)

Cite this

Participants shift response deadlines based on list difficulty during reading-aloud megastudies. / Cortese, Michael J.; Khanna, Maya M.; Kopp, Robert; Santo, Jonathan B.; Preston, Kailey S.; van Zuiden, Tyler.

In: Memory and Cognition, 16.02.2017, p. 1-11.

@article{51dc6f66ee504798b8bcc0815b17a14a,
title = "Participants shift response deadlines based on list difficulty during reading-aloud megastudies",
abstract = "We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words together, etc.), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8{\%} more RT variance (and 2.6{\%} more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8{\%} more variance in RTs (1.2{\%} in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.",
author = "Cortese, {Michael J.} and Khanna, {Maya M.} and Robert Kopp and Santo, {Jonathan B.} and Preston, {Kailey S.} and {van Zuiden}, Tyler",
year = "2017",
month = "2",
day = "16",
doi = "10.3758/s13421-016-0678-8",
language = "English (US)",
pages = "1--11",
journal = "Memory and Cognition",
issn = "0090-502X",
publisher = "Springer New York",

}

TY - JOUR

T1 - Participants shift response deadlines based on list difficulty during reading-aloud megastudies

AU - Cortese, Michael J.

AU - Khanna, Maya M.

AU - Kopp, Robert

AU - Santo, Jonathan B.

AU - Preston, Kailey S.

AU - van Zuiden, Tyler

PY - 2017/2/16

Y1 - 2017/2/16

AB - We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words together, etc.), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8% more RT variance (and 2.6% more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8% more variance in RTs (1.2% in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.

UR - http://www.scopus.com/inward/record.url?scp=85013067773&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85013067773&partnerID=8YFLogxK

U2 - 10.3758/s13421-016-0678-8

DO - 10.3758/s13421-016-0678-8

M3 - Article

C2 - 28211025

AN - SCOPUS:85013067773

SP - 1

EP - 11

JO - Memory and Cognition

JF - Memory and Cognition

SN - 0090-502X

ER -