
Education Policy Research

Scientifically based methods are the hallmark of our work evaluating education programs and studying education policy issues. Our studies cover early learning experiences as well as education in the K-12 grades and college years, and they have provided important counsel to policymakers seeking ideas for improving American education. We have also played an important role in advancing the state of the science in education research. Read more about our work on specific education topics.


What's New


  • Highlights
  • Latest Work
  • Newsroom
  • Conferences
  • Multimedia
  • Testimony

Report Reveals Promising Practices of High-Impact CMOs

The final report from the National Study of Charter Management Organization (CMO) Effectiveness highlights approaches five successful charter school management organizations use to help improve student achievement. It offers guidance for schools and districts looking to replicate these promising practices.

Op-Ed Highlights Teacher Evaluation Strategies

  • "KIPP Middle Schools: Impacts on Achievement and Other Outcomes." Christina Clark Tuttle, Brian Gill, Philip Gleason, Virginia Knechtel, Ira Nichols-Barrer, and Alexandra Resch, February 2013. This report shows that Knowledge Is Power Program (KIPP) middle schools have significant and substantial positive impacts on student achievement in four core academic subjects: reading, math, science, and social studies. One of the report’s analyses confirms the positive impacts using a rigorous randomized experimental analysis that relies on the schools’ admissions lotteries to identify comparison students, thereby accounting for students’ prior achievement, as well as factors such as student and parent motivation. The latest findings from Mathematica’s multiyear study of KIPP middle schools, the report is the most rigorous large-scale evaluation of KIPP charter schools to date, covering 43 KIPP middle schools in 13 states and the District of Columbia. Student outcomes examined included state test results in reading and math, test scores in science and social studies, results on a nationally normed assessment that includes measures of higher-order thinking, and behaviors reported by students and parents. Executive summary. Fact sheet.
  • "Value-Added Estimates for Phase 1 of the Pennsylvania Teacher and Principal Evaluation Pilot." Stephen Lipscomb, Hanley Chiang, and Brian Gill, April 2012. This report describes the development of value-added models for estimating the contributions of Pennsylvania teachers and principals toward the achievement growth of their students. Estimates were obtained during the first phase of a multiyear pilot to develop new evaluation systems for teachers and principals. The report also examines whether teachers with higher classroom observation scores on specific professional practices among those who participated in the first phase tended to have greater impacts on student achievement, as measured by value-added models. Executive Summary. Technical Report.
  • "Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach." Philip M. Gleason, Alexandra M. Resch, and Jillian A. Berk, April 2012. This report attempts to replicate the estimated impacts from experimental studies of two different education interventions using a regression discontinuity design. In each case, the estimated impact of the intervention based on the regression discontinuity design was not significantly different from the experimental impact estimate, though the magnitude of the differences between the point estimates of impacts from the two designs sometimes was nontrivial.
  • "Impacts of Title I Supplemental Educational Services on Student Achievement." John Deke, Lisa Dragoset, Karen Bogen, and Brian Gill, May 2012. As part of No Child Left Behind, parents of low-income students in low-performing schools are offered Supplemental Educational Services (SES) for their children. These academic supports, such as extra tutoring or group sessions, take place outside the regular school day. This report for the Institute of Education Sciences examine potential achievement benefits. In the six study districts located in Connecticut, Florida, and Ohio, the program was directed to the lowest-achieving students due to oversubscription. However, not all students who were offered access to the program participated. The study found no evidence of impacts from offering SES to students near the cutoff of acceptance into the program. Furthermore, there were no impacts from participating in SES on student achievement in reading or math. Providers offered an average of 21 hours of SES per student for the school year, either one-on-one or in group sessions conducted by local teachers. No observed provider characteristics and practices, including intensity of services, were significantly associated with stronger impacts. The six districts were not nationally representative. Executive summary.
  • "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates." Kenneth Fortson, Natalya Verbitsky-Savitz, Emma Kopa, and Philip Gleason, April 2012. Using data from Mathematica's experimental evaluation of charter schools, this methodological study examines the validity of four different comparison group approaches to test whether these designs can replicate findings from a well-implemented random assignment study.
  • "Value-Added Models for the Pittsburgh Public Schools." Matthew Johnson, Stephen Lipscomb, Brian Gill, Kevin Booker, and Julie Bruch, February 2012. This report describes the value-added models (VAMs) created for the Pittsburgh Public Schools and the Pittsburgh Federation of Teachers. Pittsburgh's VAMs use not only state assessments but also course-specific assessments, student attendance, and course completion rates, aiming to produce estimates of the contributions of teachers and schools that are fair, valid, reliable, and robust.
  • "Moving Teachers: Implementation of Transfer Incentives in Seven Districts." Steven Glazerman, Ali Protik, Bing-ru Teh, Julie Bruch, and Neil Seftor, April 2012. By offering $20,000 per teacher, seven school districts piloting a transfer-incentive strategy, known as the Talent Transfer Initiative (TTI), filled 90 percent of their targeted vacancies in hard-to-staff schools with some of the districts' highest-performing teachers. A new study highlights the implementation experience and intermediate impacts of TTI, which is intended to expand disadvantaged students' access to the most effective teachers. Previous research conducted by Mathematica shows that, on average, low-income middle school students are significantly less likely to have access to the highest-performing teachers. Executive Summary
  • "Learning from Charter School Management Organizations: Strategies for Student Behavior and Teacher Coaching." Robin Lake, Melissa Bowen, Allison Demeritt, Moira McCullough, Joshua Haimson, and Brian Gill, March 2012. A new Mathematica study, conducted with the Center on Reinventing Public Education, highlights approaches five successful charter school management organizations (CMOs) use to help improve student achievement. This report expands on a previous report showing that CMOs with the greatest positive impact on student achievement were most likely to establish consistent schoolwide behavior expectations for students, as well as use an intense approach to monitoring and coaching teachers. The latest report offers guidance for schools and districts looking to replicate these promising practices.
  • "An Evaluation of the Chicago Teacher Advancement Program (Chicago TAP) After Four Years." Steven Glazerman and Allison Seifullah, March 2012. Mathematica's final report on the Chicago Teacher Advancement Program (Chicago TAP) found that the program did not raise student math or reading scores, but it increased teacher retention in some schools. For example, teachers in Chicago TAP schools at the start of the program in fall 2007 were about 20 percent more likely than teachers in comparison schools to be in those same schools three years later (67 percent versus 56 percent retention rate). However, the program did not have an impact on student achievement overall in the four-year rollout period in the Chicago Public Schools (CPS). Although Chicago TAP increased the amount of mentoring, promotion opportunities, and compensation in participating schools relative to non-TAP schools, the program did not fully implement its performance-based pay or value-added components as intended.
  • "Combination Classes and Educational Achievement." Jaime L. Thomas. Economics of Education Review, December 2012 (subscription required). This article examines the relationship between combination class membership in 1st grade and 1st-grade test scores, finding that 1st graders are not harmed by being in a combination class or by their schools offering combination classes. As long as other stakeholders—such as parents, teachers, and students in other grades—are not made worse off, these findings suggest that offering combination classes may be a viable cost-saving option for school administrators.
  • "Effectiveness of Four Supplemental Reading Comprehension Interventions." Susanne James-Burdumy, John Deke, Russell Gersten, Julieta Lugo-Gil, Rebecca Newman-Gonchar, Joseph Dimino, Kelly Haymond, and Albert Yung-Hsu Liu. Journal of Research on Educational Effectiveness, October 2012 (subscription required). This article presents evidence from a large-scale randomized controlled trial of the effects of four supplemental reading comprehension curricula (Project CRISS, ReadAbout, Read for Real, and Reading for Knowledge). The impact analyses in the study's first year revealed a statistically significant negative impact of Reading for Knowledge on students' reading comprehension scores and no other significant impacts. The impact of ReadAbout was positive and significant in the study's second year among teachers with one year of experience using the intervention.
  • "Random Assignment Within Schools: Lessons Learned from the Teach For America Experiment." Steven Glazerman. Education Finance and Policy, April 2012 (subscription required). This article discusses the trade-offs associated with study designs that involve random assignment of students within schools and describes the experience from one such study of Teach For America (TFA). The article concludes that within-school random assignment studies such as the TFA evaluation are challenging but may also be feasible and generate useful evidence.
  • "Examining Charter Student Achievement Effects Across Seven States." Ron Zimmer, Brian Gill, Kevin Booker, Stéphane Lavertu, and John Wittle. Economics of Education Review, April 2012 (subscription required). Previous charter school research has shown mixed results for student achievement, which could be the consequence of different policy environments or methodological approaches with differing assumptions across studies. This analysis discusses these approaches and assumptions and estimates effects using a consistent methodology across seven locations.
  • "Links Between Young Children's Behavior and Achievement: The Role of Social Class and Classroom Composition." Annie Georges, Jeanne Brooks-Gunn, and Lizabeth M. Malone. American Behavioral Scientist, December 2011 (subscription required). This article examines the association between attentive and aggressive behavior (at the child- and class-level) and individual child achievement. Children with low attention, alone or in combination with aggressive behavior, made fewer gains in test scores during kindergarten. Additionally, having more children in the classroom with low attention was linked with lower achievement gains, even for children who did not themselves demonstrate problem behaviors.
  • "The Effectiveness of Mandatory-Random Student Drug Testing: A Cluster Randomized Trial." Susanne James-Burdumy, Brian Goesling, John Deke, and Eric Einspruch. Journal of Adolescent Health, October 2011 (subscription required). This article presents findings from the largest experimental evaluation to date of school-based mandatory-random student drug testing (MRSDT) and its effectiveness in reducing substance use among high school students. The study found that students who were subject to MRSDT reported less substance use in the past 30 days than comparable students in schools without MRSDT.
  • "Supporting Policy and Program Decisions: Recommendations for Conducting High Quality Systematic Evidence Reviews." Susan Zief and Roberto Agodini, October 2012. Systematic reviews are a useful tool for decision makers because they identify relevant studies about the effectiveness of a policy or program of interest, assess the quality of the evidence from the relevant studies, and summarize the valid findings. Mathematica's Center for Improving Research Evidence recently released an issue brief offering recommendations for conducting high quality systematic reviews. The recommendations aim to increase the number of such reviews in order to provide decision makers with a greater number of useful evidence summaries that can inform policy and program decisions.
  • "Findings from a Randomized Experiment of Playworks: Selected Results from Cohort 1." Martha Bleeker, Susanne James-Burdumy, Nicholas Beyler, Allison Hedley Dodd, Rebecca A. London, Lisa Westrich, Katie Stokes-Guinan, and Sebastian Castrechini, April 2012. Findings based on the first cohort of schools included several significant, positive impacts. Playworks had a positive impact on teachers' perceptions of students' safety and feeling more included during recess. Teachers in Playworks schools also reported less bullying and exclusionary behavior during recess, and found transitions from recess to classroom learning were less difficult than teachers in control schools found. Teachers in Playworks schools reported significantly better student behavior at recess and readiness for class than teachers in control schools and were also more likely to report that their students enjoyed adult-organized recess activities. Students in Playworks schools reported better behavior and attention in class after sports, games, and play than students in control schools. Overall, most teachers, students, and principals reported positive perceptions of the Playworks program.
  • "Do Low-Income Students Have Equal Access to the Highest-Performing Teachers?" Steven Glazerman and Jeffrey Max, April 2011. Most research on equal educational opportunity has focused on inputs like teacher experience and degrees. This brief estimated teachers’ value added (contribution to student achievement growth) and measured access to highest-performing teachers in high- and low-poverty schools. Across 10 selected districts in seven states students in the highest-poverty schools had unequal access, on average, to the district’s highest-performing middle school teachers. The pattern for elementary school was less clear. The degree of equal access varied by district. Technical Appendix.
  • "Infusing Academics into Career and Technical Education." Trends in Education Research, Issue Brief #3. Joshua Haimson, James R. Stone, III, and Donna Pearson, December 2008. Integrating academic learning into career and technical education (CTE) classes can be challenging for educators and curriculum developers but can be aided by securing detailed feedback from CTE teachers. Drawing on a recent study, this issue brief identifies challenges developers faced in infusing more math into CTE curricula and notes that incorporating academic learning into CTE requires substantial time, effort, and other resources.
  • "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Bias in Nonexperimental Estimators." Kenneth Fortson, Philip Gleason, Emma Kopa, and Natalya Verbitsky-Savitz, March 2013. Nonexperimental methods, such as regression modeling or statistical matching, produce unbiased estimates if the underlying assumptions hold, but these assumptions are usually not testable. Most studies testing nonexperimental designs find that they fail to produce unbiased estimates, but these studies have examined weaker evaluation designs. This working paper addresses these limitations and finds the use of baseline data that are strongly predictive of the key outcome measures considerably reduces bias, but might not completely eliminate it.
  • "Is School Value-Added Indicative of Principal Quality?" Hanley Chiang, Stephen Lipscomb, and Brian Gill, November 2012. Using data on elementary and middle school math and reading outcomes for Pennsylvania students, this working paper found that school value-added provides little useful information for comparing the general leadership skills of different principals when these comparisons include some principals who are in their first three years at their current positions.
  • "Student Selection, Attrition, and Replacement in KIPP Middle Schools." Ira Nichols-Barrer, Brian P. Gill, Philip Gleason, and Christina Clark Tuttle, September 2012. Using longitudinal, student-level data, this American Educational Research Association conference paper examines the entry and exit of students in KIPP middle schools, comparing KIPP's rates of attrition and replacement with rates in nearby district-run schools.
  • "Statistical Power for Regression Discontinuity Designs in Education: Empirical Estimates of Design Effects Relative to Randomized Controlled Trials." John Deke and Lisa Dragoset, June 2012. Using data from four previously published education studies, this working paper finds that a study using a regression discontinuity design needs between 9 and 17 times as many schools or students as a randomized controlled trial to produce an impact with the same level of statistical precision. The need for a large sample is driven primarily by bandwidth selection, not adjusting for random misspecification error.
  • "Methods for Accounting for Co-Teaching in Value-Added Models." Heinrich Hock and Eric Isenberg, June 2012. This working paper helps to address the issue of isolating the effect of each teacher on student achievement when the student is taught the same subject by more than one teacher. This paper considers and compares three methods—Partial Credit Method, Teacher Team Method, and Full Roster Method—to estimate teacher effects. Based on the analysis, the authors conclude that the latter two methods provide a more stable approach to estimating teacher effects on student achievement. Furthermore, the Full Roster Method offers the most promise for robust, practical implementation.
  • "Assessing the Rothstein Test: Does It Really Show Teacher Value-Added Models Are Biased?" Dan Goldhaber and Duncan Chaplin, February 2012. This working paper illustrates—theoretically and through simulations—that the Rothstein falsification test is not definitive in indicating bias in value-added model estimates of current teacher contributions to student learning.
  • "Do Charter Schools Improve Student Achievement? Evidence from a National Randomized Study." Melissa A. Clark, Philip Gleason, Christina Clark Tuttle, and Marsha K. Silverberg, December 2011. This paper looks at findings from the first national randomized study of the impacts of charter schools on student achievement.
  • "False Performance Gains: A Critique of Successive Cohort Indicators." Steven M. Glazerman and Liz Potamites, December 2011. There are many ways to use student test scores to evaluate schools. This paper defines and examines different estimators including regression-based value-added indicators, average gains, and successive cohort differences in achievement levels. The paper helps assess which methods provide useful information for school accountability and why.
  • "High School Dual Enrollment Programs: Are We Fast-Tracking Students Too Fast?" Cecilia Speroni, December 2011. This study tracked a subset of Florida's 2000–2001 and 2001–2002 high school seniors who took a college algebra placement test. Students who passed the test (with a score very near the cut-off for eligibility) and enrolled in a rigorous dual enrollment college algebra class were 16 percent more likely to go to college and 23 percent more likely to earn a college degree than similar students who did not take the class.
  • "Passing Muster: Evaluating Teacher Evaluation Systems." Steven Glazerman, Dan Goldhaber, Susanna Loeb, Stephen Raudenbush, Douglas O. Staiger, Grover J. Whitehurst, and Michelle Croft, April 2011. This report addresses the comparison of teacher evaluation systems and proposes ways to achieve a uniform standard for dispensing funds to districts to recognize exceptional teachers without imposing a uniform evaluation system on those districts. The report provides practical procedures to determine reliable local teacher evaluation systems. It also demonstrates how the reliability of the evaluation system determines the proportion of teachers who can be identified as exceptional.
  • "Student Selection, Attrition, and Replacement in KIPP Middle Schools: Working Paper Presented at the 2011 Annual Meeting of the American Educational Research Association." Ira Nichols-Barrer, Christina Clark Tuttle, Brian P. Gill, and Philip Gleason, April 2011. Using longitudinal, student-level data, this American Educational Research Association conference paper examines the entry and exit of students in KIPP middle schools, comparing KIPP’s rates of attrition and replacement with rates in nearby district-run schools.
  • What Works Clearinghouse (WWC). Administered by Mathematica for the U.S. Department of Education's Institute of Education Sciences, the WWC produces a variety of reports that assess and summarize education research. WWC reports can help educators make important decisions about which curricula to implement, which products to purchase, and which methods to use in their classrooms and schools.

    See the WWC's latest releases at whatworks.ed.gov and explore available Practice Guides, Intervention Reports, and Quick Reviews, or take a guided tour of the site.
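
The value-added items above (for example, the Pennsylvania and Pittsburgh reports) describe models that estimate teachers' contributions to achievement growth by conditioning on students' prior scores. The sketch below is a minimal, hypothetical Python illustration of that general idea, regressing simulated current scores on prior scores plus teacher indicators; the data, variable names, and specification are assumptions for illustration only, not Mathematica's actual models.

```python
# Minimal value-added sketch (illustrative only; hypothetical data and model).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 50 students for each of three hypothetical teachers, where "score"
# depends on the prior-year score plus an assumed teacher contribution.
rng = np.random.default_rng(0)
n_per_teacher = 50
assumed_effect = {"A": 0.0, "B": 0.2, "C": -0.1}
rows = []
for teacher, effect in assumed_effect.items():
    prior = rng.normal(0, 1, n_per_teacher)
    score = 0.7 * prior + effect + rng.normal(0, 0.5, n_per_teacher)
    rows.append(pd.DataFrame({"score": score, "prior_score": prior, "teacher_id": teacher}))
df = pd.concat(rows, ignore_index=True)

# Regress current scores on prior scores and teacher indicators; the teacher
# coefficients serve as rough value-added estimates relative to teacher A.
model = smf.ols("score ~ prior_score + C(teacher_id)", data=df).fit()
print(model.params.filter(like="teacher_id"))
```

In this kind of specification, each teacher coefficient is read as that teacher's average contribution to score growth relative to the reference teacher; operational models such as those described in the reports above include many more controls and stability adjustments.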
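
The co-teaching item above compares methods for attributing achievement to more than one teacher of the same student. The sketch below is an assumed, simplified rendering of how partial-credit and full-roster style weights might be constructed from student-teacher links; it is not the authors' implementation, and the data and weighting rules shown are hypothetical.

```python
# Simplified co-teaching attribution sketch (hypothetical links and weights).
import pandas as pd

# Hypothetical student-teacher links; student s2 is co-taught by t1 and t2.
links = pd.DataFrame({
    "student_id": ["s1", "s2", "s2", "s3"],
    "teacher_id": ["t1", "t1", "t2", "t2"],
})

# Partial-credit style weighting (assumed form): split each student's weight
# evenly across the teachers who share that student.
partial_credit = links.copy()
shared = partial_credit.groupby("student_id")["teacher_id"].transform("count")
partial_credit["weight"] = 1.0 / shared

# Full-roster style weighting (assumed form): every teacher keeps the full
# record for each student on the roster, so the co-taught student appears
# once per teacher with full weight.
full_roster = links.copy()
full_roster["weight"] = 1.0

print(partial_credit)
print(full_roster)
```

The full-roster construction keeps co-taught students on each teacher's roster with full weight, which mirrors the approach the paper identifies as most promising for practical implementation.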

Annual Conference of the Association for Education Finance and Policy—Education Renewal and Reform…in the Face of Resource Constraints—New Orleans, LA—March 14-16, 2013

Society for Research on Educational Effectiveness—Capitalizing on Contradiction: Learning from Mixed Results—Washington, DC—March 7-9, 2013
John Deke, Chair: Contamination and Implementation Fidelity in RCTs
Peter Schochet: "A Statistical Model for Misreported Binary Outcomes in Clustered RCTs of Education Interventions"
Philip Gleason, Christina Tuttle, Brian Gill, Ira Nichols-Barrer, and Alex Resch: "Impacts of KIPP Schools on Student Outcomes"
Susanne James-Burdumy, Martha Bleeker, Nicholas Beyler, and Others: "Does Playworks Work? Findings from a Randomized Controlled Trial"
Susan Zief, Discussant: Leveraging Local Evaluations to Understand Contradictions

Center for Naval Analyses—Promoting the Resilience of Military Children Through Effective Programs—Washington, DC—November 29-30, 2012
Cay Bradley: "Creating a System for Evaluating Education Programs"
Peter Schochet: "Quasi Experimental Designs"

University of Arkansas—Understanding the Effectiveness of KIPP: Factors Related to Impacts—Fayetteville, AR—November 16, 2012
Christina Clark Tuttle, Lecturer

Frontiers in Education Conference—Soaring to New Heights in Engineering Education—Seattle, WA—October 3-6, 2012
Margaret Sullivan: "Understanding Engineering Transfer Students: Demographic Characteristics and Educational Outcomes"

Society for Research on Educational Effectiveness Fall Conference—Effective Partnerships: Linking Practice and Research—Washington, DC—September 6-8, 2012
Jill Constantine and Neil Seftor: Introduction to the What Works Clearinghouse (workshop)
Jean Knab and Cay Bradley: What Works Clearinghouse Certification (training session)

Administration for Children & Families/Office of Planning, Research, and Evaluation Methodological Advancement Meeting—Innovative Directions in Estimating Impact—Washington, DC—September 6-7, 2012
Sarah Avellar: "Core Analytics and the Context of Impact Estimation: Evaluating Evidence"
Philip Gleason: "Innovations in Identification Strategies: Discontinuity Designs"
Steven Glazerman: "Innovations in Identification StrategiesIntegrated Answers: Best of Both WorldsHybrid Designs"
Peter Schochet: "Addressing Real World Issues with Identification and Inference: Problem of the Late Pretest"

National Center for Postsecondary Research Conference—Strengthening Developmental Education: What Have We Learned, and What's Next?—New York, NY—June 21-22, 2012
Cecilia Speroni: "Results of Florida and California Dual Enrollment Studies"

National Bureau of Economic Research Education Meeting—Cambridge, MA—May 11, 2012
Duncan Chaplin and Others: "Assessing the Rothstein Test: Does It Really Show Teacher Value-Added Models Are Biased?"

Center for Education Policy Research at Harvard University—Webinar—December 3, 2012
Ali Protik: "Moving High-Performing Teachers: Implementation of Transfer Incentives in Seven Districts"

American Youth Policy Forum—Dual Enrollment: A Strategy for Improving College Readiness and Success for All Students—February 10, 2012 (Video)
Cecilia Speroni, Presenter

American Enterprise Institute for Public Policy Research—Teacher Pay Incentives: Lessons from North Carolina's Teacher Bonus Program—Washington, DC—June 28, 2011
Duncan Chaplin, Panelist (Video)

Jill Constantine, vice president, Director of NJ Human Services Research, and area leader for Mathematica's education research, testified before the California Commission on Teacher Credentialing on our evaluation of teachers trained through different routes to certification. Her testimony is available as an audio file or slides.