We examined the practices of teachers who make larger contributions to student achievement growth, reviewed plans for an overall effectiveness measure, described variation in professional practice scores, and identified the practices most strongly correlated with those contributions.
K–12 education policy, including charter schools, high-stakes testing and accountability, educator effectiveness, state and federal education policy implementation, and homework
- School Reform
- Teacher and Principal Effectiveness
- School Choice and Charters
Brian Gill studies K–12 education policy, including charter schools, educator effectiveness, and the implementation and impacts of high-stakes testing and other accountability regimes.
Gill is one of the nation’s leading experts on the effects of charter schools. He served as principal investigator for the first rigorous, nationwide examination of the effectiveness of nonprofit charter-school management organizations and for the first nationwide evaluation of the effects of the KIPP schools. Gill co-directed the first study of the effects of charter high schools on graduation, college enrollment, and earnings in adulthood, as well as the first nationwide study of the operations of online charter schools. He is now leading a pioneering study of the impact of charter schools on civic participation.
Gill is also an expert on accountability regimes in education. He serves as a principal investigator for the federal study of the implementation of Title I of the Elementary and Secondary Education Act, including its provisions related to accountability and efforts to improve low-performing schools. He was lead author on a study of the implications of behavioral science research for accountability in schools, describing the ways that accountability can be broadened beyond high-stakes testing to incorporate professional accountability systems that simultaneously incentivize and support improvement in teaching.
In related work, Gill has conducted extensive research on measures of school and educator performance. He has helped develop and test measures of educator effectiveness based on professional practice, student feedback, and student achievement growth. Gill has played a key role in pioneering studies of the evaluation of school principals as well as teachers, including a study for the U.S. Department of Education assessing the validity of measures of school principals’ contributions to student achievement growth.
Gill works closely with state and local educational leaders on a range of K–12 challenges. He has played a lead role on several projects with the Mid-Atlantic Regional Educational Laboratory, assisting educators and officials with high-priority work. He served as senior adviser on the first study of the predictive validity of new, Common-Core-aligned assessments, which informed Massachusetts’s decision about adopting the Partnership for Assessment of Readiness for College and Careers (PARCC) assessments. He also co-developed a conceptual framework for data-driven decision making, providing guidance to state and local officials.
Gill directed a rigorous evaluation commissioned by the U.S. Department of Education’s Institute of Education Sciences to assess the effectiveness of supplemental educational services provided under the Elementary and Secondary Education Act (ESEA).
Gill’s award-winning research on homework compiles half a century of data on time spent by students and a century of debates among educators and parents. His research examines homework not only as a tool for promoting academic achievement but also as a means of communication from school to parents and a potential flash point for school-family conflict.
Gill was senior adviser for school choice issues on the U.S. Department of Education’s National Longitudinal Study of NCLB and served on the National Working Commission on Choice in K–12 Education at the Brookings Institution. Before joining Mathematica in 2007, he spent a decade at the RAND Corporation. Lead author of Rhetoric vs. Reality: What We Know and What We Need to Know About Vouchers and Charter Schools, he has published in the Journal of Research on Educational Effectiveness, Educational Evaluation and Policy Analysis, Behavioral Science and Policy, Statistics and Public Policy, the Journal of Labor Economics, Economics of Education Review, Education Finance and Policy, American Journal of Education, Teachers College Record, Peabody Journal of Education, Education Next, the Handbook of Research on School Choice, and the Encyclopedia of Education Economics and Finance. He also regularly writes blog posts and op-eds for non-academic audiences. Gill holds a Ph.D. in jurisprudence and social policy and a J.D. from the University of California at Berkeley.
Pennsylvania Teacher and Principal Evaluation Pilot
Assessing Teacher Effectiveness in Pittsburgh Public Schools
In a project with the Pittsburgh Public Schools, we developed value-added statistical models that estimate teachers’ and schools’ contributions to the achievement of their students. Our findings suggest that the value-added model estimates provide meaningful information on teacher and school performance.
KIPP: Preparing Youth for College
Mathematica built on our initial study of KIPP middle schools with this five-year project, designed to address the question of whether KIPP can maintain its effectiveness as the network grows. The study included an impact analysis, an implementation analysis, and a correlational analysis.
Improving America’s Schools: Study of Title I and Title II of the Elementary and Secondary Education Act
This project involves evaluating implementation of Titles I and II of ESEA, relying on surveys at the state, district, school, and teacher levels.
Comparing the Predictive Validity of High-Stakes Standardized Tests
A Mathematica case study and a recent article in Education Next examine first-of-its-kind research measuring how accurately a so-called next-generation high school assessment designed for the Common Core predicts college success, compared with Massachusetts’s existing state assessment.
Online Charter Schools Struggle to Engage Their Students
New research funded by the Walton Family Foundation suggests that students in online charter schools had significantly weaker academic performance in math and reading than their counterparts in conventional schools.
Mathematica Experts Win AEFP Award for Best Academic Paper on School Choice
The Association for Education Finance and Policy (AEFP), in conjunction with the Walton Family Foundation, selected Mathematica’s report “Do KIPP Schools Boost Student Achievement?” as the 2014 Best Academic Paper on School Choice and Reform.
Media Coverage of Mathematica's Study of The Equity Project
Does paying teachers $125,000 a year make a difference?
High Salaries for Teachers and Significant Impacts on Student Achievement
By the end of the 2012–2013 school year, TEP’s impacts on student achievement were consistently positive across subjects and cohorts, with especially large effects in math.
Comparing the Predictive Validity of High-Stakes Standardized Tests
This study, conducted for the state of Massachusetts, examined the predictive validity of the Partnership for Assessment of Readiness for College and Careers (PARCC) exam compared with the state assessment it would replace, the Massachusetts Comprehensive Assessment System (MCAS).
Our Charter School Research: Providing an Objective Voice in the Debate
In 2004, the U.S. Department of Education’s Institute of Education Sciences commissioned Mathematica to conduct the first nationwide, lottery-based study of charter schools. The lottery-based design compared outcomes for applicants admitted to the charter middle school through the lottery to outcomes...
Building the Knowledge Base on Teacher Preparation and Effectiveness
Mathematica designed and conducted three large-scale studies on the relationship between teacher preparation and effectiveness, using the most rigorous approach possible (random assignment of students to teachers from different kinds of programs) and comparing student test scores to gauge teacher effectiveness.