Measuring Principal Performance Is Hard - But We Need to Try

Nov 18, 2016

As any teacher could tell you, principals matter. Our research confirms this: principals have almost as much of an impact on student achievement as teachers do. Indeed, we found in a study of school reform in Washington, D.C., that replacing ineffective principals improves student achievement. But how do we distinguish good principals from bad ones?

Measuring principal performance is even harder than measuring teacher performance. In a series of studies, we’ve learned that this is true for measures based on student achievement growth and for measures based on professional practice.

Measures of teachers’ impact on test scores (often and perhaps regrettably called “value-added”) have received an enormous amount of attention from researchers. Despite some concerns about high-stakes uses of the measures, there is strong evidence that they provide important information about teachers’ effectiveness in promoting student achievement growth. These measures take advantage of two facts that make it possible to isolate teachers’ impacts on their students: students typically change teachers every year, and teachers interact with students directly.

It is much harder to measure principal value-added because students don’t change principals every year, and principals’ effects on students are mostly indirect: principals affect student achievement through teachers. They can improve the school environment in ways that enable better teaching, develop the skills of their teachers, or put better teachers in the classroom. Although a teacher can produce measurable effects on student achievement in a single year of instruction, a principal’s efforts are likely to require several years before their full effect on student outcomes becomes evident.

The fact that a principal’s effect is indirect and can take years to appear means that principal value-added is not the same thing as school value-added. Some states have chosen to include school value-added (or the school’s median student growth percentile, which is similar to value-added) as a component in the evaluation of principals. But consider a chronically low-performing school that is full of relatively ineffective (low value-added) teachers. Even if a district assigns its best principal to take over that school, the school will not instantly become a high-value-added school. A highly effective principal might improve the school’s value-added so that it is more effective than it was in the past, but it is likely to take several years before the combination of staff development, improvements in school environment, and replacement of ineffective teachers can make the school a high-value-added school.1

We need a way to identify a principal’s unique contribution to student achievement. Unfortunately, no one has solved this problem. Nobody knows how to measure principals’ value-added in a way that can be uniformly applied to all principals. We’ve tried. We examined several ways of using student achievement data to evaluate principals, drawing on data from schools across the state of Pennsylvania. We found that simple changes in schoolwide student achievement don’t provide a good indicator of principals’ contributions. We found that school value-added provides minimal information about principals’ contributions. And even a measure that attempts to capture improvements in school value-added gets it wrong more often than not. I’m not optimistic that anyone will come up with a valid and reliable measure of individual principals’ contributions to student achievement anytime soon.
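
To make the three kinds of measures concrete, here is one plausible way to write them down. This is only an illustrative sketch in my own notation; it is not the exact specification used in the Pennsylvania analyses. Roughly: y_{i,t} is student i’s test score in year t, s indexes schools, and the “hat” term is the score predicted from the student’s prior achievement.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative notation only -- not the exact specifications used in the study.
% y_{i,t}: student i's score in year t; \hat{y}_{i,t}: prediction from prior scores;
% N_{s,t}: number of tested students in school s in year t.
\begin{align*}
  \text{Simple change in schoolwide achievement:}\quad
    & \Delta\bar{y}_{s,t} = \bar{y}_{s,t} - \bar{y}_{s,t-1} \\[4pt]
  \text{School value-added:}\quad
    & \mathrm{VA}_{s,t} = \frac{1}{N_{s,t}} \sum_{i \in s} \bigl( y_{i,t} - \hat{y}_{i,t} \bigr) \\[4pt]
  \text{Improvement in school value-added:}\quad
    & \Delta\mathrm{VA}_{s} = \mathrm{VA}_{s,t} - \mathrm{VA}_{s,t_0},
      \quad t_0 = \text{year before the principal arrived}
\end{align*}
\end{document}

Only the third quantity even tries to separate the principal’s arrival from the school’s preexisting effectiveness, and, as noted above, it is still mostly unreliable.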

Other types of principal effectiveness measures are problematic as well. Like teacher ratings, ratings of principals that are not based on statistical analysis of test scores tend to have little differentiation, with a Lake Wobegon effect in which everyone looks good. In the first year of implementation of a new principal evaluation system in New Jersey, we found that 99 percent of principals were rated as effective or highly effective. More encouragingly, in Pennsylvania we found that the state’s measure of principals’ professional practice produced ratings that were correlated with our best estimate of the principals’ contributions to student achievement growth. (This pertained to the subset of principals for whom it was possible to produce a credible measure of their contributions to student achievement growth, because we could observe different principals in the same school in different years.)

It is hardly surprising that principals’ professional performance is difficult to assess, because their jobs are so varied. Unlike classroom teaching, effective school leadership encompasses a wide range of tasks that are not easily observed. It is not obvious what a superintendent should attempt to observe in order to gauge the effectiveness of a principal’s leadership.

Even so, despite the knotty problems associated with measuring principals’ professional practices and their contributions to student achievement, the situation is not hopeless. An ongoing U.S. Department of Education study is examining the implementation and impacts of evaluation and feedback measures for teachers and principals; the principal evaluation measure is the Vanderbilt Assessment of Leadership in Education, or VAL-ED. Borrowing an idea from the “360” evaluations that are often used in the business world, the VAL-ED includes a survey of the school’s teachers. Although I tend to be skeptical of claims that schools should borrow a lot from business, this seems like a no-brainer. Why doesn’t every school use a teacher survey to inform the evaluation of the principal?

The VAL-ED has not yet been validated to show that the teacher survey rating is related to student achievement growth, but I’d bet it provides better information about principal performance than either a rating by a supervisor or any currently existing value-added measure. We know that students can identify effective teachers; surely teachers can identify effective principals. If I were a superintendent and needed to evaluate my principals, the first thing I would do is ask the teachers.

As researchers, we have a long way to go to demonstrate that we can provide useful measures of principal effectiveness, but teacher surveys are surely a good place to start.

1Calculus lovers might think of the difference between school value-added and principal value-added as analogous to the difference between the first and second derivative. Physics lovers might think of school value-added as velocity and principal value-added as acceleration. Schools (and teachers) advance student achievement, whereas principals accelerate student achievement by increasing the effectiveness of teachers and schools.
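
For readers who want the footnote’s analogy in symbols, here is a minimal sketch (the notation is purely illustrative): if A(t) denotes schoolwide student achievement at time t, then the distinction is

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A(t): schoolwide achievement at time t (illustrative notation only).
\[
  \underbrace{\frac{dA}{dt}}_{\substack{\text{school value-added} \\ \text{(``velocity'')}}}
  \qquad \text{versus} \qquad
  \underbrace{\frac{d^{2}A}{dt^{2}}}_{\substack{\text{principal value-added} \\ \text{(``acceleration'')}}}
\]
\end{document}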