Asymdystopia: The Threat of Small Biases in Evaluations of Education Interventions that Need to be Powered to Detect Small Impacts

Publisher: Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance
Oct 03, 2017
Authors
John Deke, Thomas Wei, and Tim Kautz

Key Findings:

  • For RCTs, evaluators must either achieve much lower rates of missing data than has been typical in past studies or offer a strong justification that the missing data are unlikely to be related to study outcomes.
  • For RDDs, state-of-the-art statistical methods can protect against inaccuracies from incorrect regression models, but this protection comes at a cost: much larger sample sizes are needed to detect small effects with these methods.

Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as “small.” While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller biases. The purpose of this paper is twofold. First, we examine the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon we refer to as asymdystopia. We examine this potential for two of the most rigorous designs commonly used in education research—randomized controlled trials (RCTs) and regression discontinuity designs (RDDs). Second, we recommend strategies researchers can use to avoid or mitigate these biases.
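To make the scale of the power problem concrete, here is a minimal sketch (not from the report) using the standard normal-approximation sample-size formula for a two-arm trial with equal allocation; the helper name `n_per_group` and the chosen effect sizes are illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(mde, alpha=0.05, power=0.80):
    """Approximate sample size per arm needed for a two-sample comparison
    to detect a standardized mean difference of `mde`, using the usual
    normal-approximation formula (illustrative, not the report's method)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_beta = z.inv_cdf(power)           # critical value for target power
    return ceil(2 * (z_alpha + z_beta) ** 2 / mde ** 2)

# Required n grows with the inverse square of the target impact:
print(n_per_group(0.20))  # ~393 per group for Cohen's "small" 0.20 SD
print(n_per_group(0.05))  # ~6,280 per group for a 0.05 SD impact
```

Because the required sample grows with 1/mde², targeting an impact one-fourth as large requires roughly sixteen times as many students, and a fixed bias of, say, 0.02 SD, negligible against a 0.20 SD effect, becomes a substantial fraction of a 0.05 SD effect.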