Assessing the Rothstein Test: Does It Really Show Teacher Value-Added Models Are Biased?

Working Paper 5
Published: Jan 31, 2012
Publisher: Seattle, WA: Center for Education Data & Research, University of Washington
Authors

Dan Goldhaber

In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value-added models (VAMs) imply that students' future teachers have sizable effects on their past achievement, a relationship that obviously cannot be causal. This finding is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM estimates of current teachers' contributions to student learning.
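
As a stylized illustration (the notation is ours and may differ from the exact specification estimated in the paper), the falsification test amounts to replacing the current outcome in a standard VAM with a prior-year outcome and testing whether indicators for students' future teachers enter jointly:

```latex
% Stylized VAM and Rothstein-style falsification regression (illustrative
% notation; not necessarily the exact specification used in the paper).
%
% Standard VAM: current achievement on prior achievement, covariates, and
% current-teacher indicators T_{ijt}; \tau_j is teacher j's estimated effect.
\[
  A_{it} = \beta A_{i,t-1} + X_{it}\gamma + \sum_j \tau_j T_{ijt} + \varepsilon_{it}
\]
% Falsification regression: prior achievement on indicators for the teacher the
% student will have NEXT year, T_{ij,t+1}. Because next year's teacher cannot
% cause last year's learning, the null is H_0: \delta_j = 0 for all j; a joint
% rejection is read as evidence of non-random sorting, and hence bias.
\[
  A_{i,t-1} = \beta A_{i,t-2} + X_{it}\gamma + \sum_j \delta_j T_{ij,t+1} + u_{it}
\]
```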

Rothstein's finding is important because there is considerable interest in using VAM teacher effect estimates for high-stakes teacher personnel policies, and the results of his test cast serious doubt on whether VAMs can be used fairly for that purpose. In this paper, however, we illustrate, both theoretically and through simulations, plausible conditions under which the Rothstein falsification test rejects VAMs even when there is no bias in estimated teacher effects, and even when students are randomly assigned conditional on the covariates in the model. On the whole, our findings show that the Rothstein falsification test is not definitive evidence of bias, which suggests a much more encouraging picture for those wishing to use VAM teacher effect estimates for policy purposes.
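
A minimal simulation sketch of one such condition, dynamic tracking on a prior-year score that is itself a covariate in the VAM: grade-5 teacher effect estimates come out essentially unbiased, yet grade-5 teacher indicators strongly "predict" grade-4 achievement, so a Rothstein-style test rejects. This is our own illustrative example, not the paper's simulations; all variable names and parameter values are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 50, 30
n = n_teachers * class_size

# Grade-3 and grade-4 achievement (grade-4 teachers omitted for simplicity).
a3 = rng.normal(0.0, 1.0, n)
gain4 = rng.normal(0.0, 0.5, n)
a4 = a3 + gain4

# True grade-5 teacher effects.
true_eff = rng.normal(0.0, 0.2, n_teachers)

# Dynamic tracking: students are sorted into grade-5 classrooms by their
# (noisily observed) grade-4 score -- assignment is random *conditional on a4*.
order = np.argsort(a4 + rng.normal(0.0, 0.3, n))
teacher5 = np.empty(n, dtype=int)
teacher5[order] = np.repeat(np.arange(n_teachers), class_size)

gain5 = true_eff[teacher5] + rng.normal(0.0, 0.5, n)
a5 = a4 + gain5

def ols(y, X):
    """OLS coefficients and residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

D5 = np.eye(n_teachers)[teacher5]            # grade-5 teacher dummies

# VAM: a5 on a4 plus grade-5 teacher dummies (the dummies absorb the intercept).
beta_vam, _ = ols(a5, np.column_stack([a4, D5]))
est_eff = beta_vam[1:]
print("corr(true, estimated grade-5 effects):",
      round(np.corrcoef(true_eff, est_eff)[0, 1], 2))

# Rothstein-style test: do grade-5 teacher dummies "explain" grade-4 achievement
# conditional on a3?  Joint F-test via restricted vs. unrestricted regressions.
_, e_r = ols(a4, np.column_stack([np.ones(n), a3]))   # restricted
_, e_u = ols(a4, np.column_stack([a3, D5]))           # unrestricted
q = n_teachers - 1                                     # extra parameters tested
F = ((e_r @ e_r - e_u @ e_u) / q) / ((e_u @ e_u) / (n - 1 - n_teachers))
print("F-stat on future-teacher dummies in the falsification regression:",
      round(F, 1))
```

The point the sketch mirrors is that the sorting variable (the grade-4 score) is already a covariate in the VAM, so the teacher effect estimates are fine even though the falsification regression rejects.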
