- “Does data source X provide useful information?” and
- “Should data source X be used for purpose Y?”
are two very different questions. Unfortunately, education researchers, writers, and advocates conflate them far too often, and the result is bad policy recommendations.
This problem surfaces especially often in debates about value added modeling (VAM), a statistical method aimed at capturing a teacher’s effectiveness in the classroom. Based on a new paper from economists Raj Chetty, John Friedman, and Jonah Rockoff, Andrew Flowers writes, in response to question 1 above, that we’re pretty good at “the science of grading teachers” with VAM results. Flowers weighs in on question 2 as well, arguing that Chetty et al.’s work means that “administrators can legitimately use value-added scores to hire, fire and otherwise evaluate teacher performance.”
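For readers unfamiliar with the mechanics, the core idea of VAM can be sketched in a few lines: predict each student’s score from prior achievement, then attribute the average leftover (the residual) to the teacher. The sketch below is a deliberately bare-bones illustration with hypothetical data; real VAM specifications add demographic controls, multiple prior years of scores, and statistical shrinkage.

```python
# Minimal value-added sketch (illustrative only): regress current scores on
# prior scores, then average each teacher's residuals. Real models are far
# more elaborate; this just shows the basic "predicted vs. actual" logic.
from statistics import mean

def value_added(records):
    """records: list of (teacher_id, prior_score, current_score) tuples.
    Returns {teacher_id: mean residual from an OLS fit of current ~ prior}."""
    priors = [p for _, p, _ in records]
    currents = [c for _, _, c in records]
    mp, mc = mean(priors), mean(currents)
    # Simple OLS slope and intercept for current ~ prior
    slope = (sum((p - mp) * (c - mc) for _, p, c in records)
             / sum((p - mp) ** 2 for p in priors))
    intercept = mc - slope * mp
    residuals = {}
    for t, p, c in records:
        residuals.setdefault(t, []).append(c - (intercept + slope * p))
    # A teacher's "value added" is the average gap between actual and predicted
    return {t: mean(res) for t, res in residuals.items()}
```

With made-up data where teacher A’s students beat their predicted scores by 10 points and teacher B’s fall 10 short, `value_added` returns roughly +10 for A and −10 for B; the policy debate is over what, if anything, those numbers mean.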
In terms of question 1, the idea that VAM research indicates that we’re pretty good at “grading teachers” is itself debatable. Flowers doesn’t conduct an extensive survey of researchers or research, but focuses on six well-known veterans of VAM debates, including several of the more outspoken defenders of the metric (Chetty and Thomas Kane specifically; Friedman, Rockoff, and Douglas Staiger are also longtime VAM supporters). While many respected academics caution about VAM’s limitations and/or have more nuanced positions on its use, Jesse Rothstein is the only one Flowers cites.
In fact, whether VAM estimates are systematically biased (Rothstein’s argument) or not (Chetty et al.’s contention), there are legitimate questions about whether VAM results are valid (whether or not they are really capturing “teacher effectiveness” in the way that most people think about it). VAM estimates correlate surprisingly little with other measures aimed at capturing effective teaching (like experts’ assessments of classroom instruction). They’re also notoriously unstable, meaning that a teacher’s scores bounce around a lot depending on the year and test studied. While other methods of evaluating teacher effectiveness have similar issues and there are certain approaches to VAM (not commonly used) that are more useful than others, it’s perfectly reasonable to argue that we’re still pretty bad at “grading teachers.”
More importantly, however, debates about bias, validity, and stability in VAM actually have much less to do with the answer to question 2 – should we use VAM to evaluate teachers in the way its proponents recommend? – than many people think. To understand why, we need look no further than two of the core purposes of teacher evaluation, purposes on which everyone from teachers unions to education reform organizations generally agrees (at least rhetorically).
1) One core purpose of teacher evaluation is helping teachers improve. Making VAM results a defined percentage of a teacher’s evaluation is not useful for this purpose even if we assume VAM results are unbiased, valid, and stable. Such a policy may actually undermine teacher improvement, and hence the quality of instruction that students receive.
For starters, a VAM score is opaque. Teachers cannot match their VAM results back to specific questions on a test or use them to figure out what their students did or didn’t know. VAM may be able to tell a teacher if her students did well or poorly on a specific test, but not why students did well or poorly. In other words, a VAM score provides no actionable feedback. It does not indicate anything about what a teacher can do to help her students learn.
In addition, VAM results are outcomes over which a teacher has very limited control – research typically finds that teachers account for less than a fifth of the variation in student test scores (the rest is mostly random error and outside-of-school factors). If a teacher’s VAM results look good, that might be because the teacher did something well, but it also might be because the teacher got lucky, or because some other factor contributed to her students’ success. The tendency to view VAM results as indicative of whether or not a teacher did a good job – a common side effect of making VAM results a defined percentage of a teacher’s evaluation – is thus misguided (and a potential recipe for reinforcing unhelpful behaviors). This concern is especially germane because VAM results are often viewed as “grades” by the teachers receiving them – even if they are only a small percentage of a teacher’s evaluation “score” – and thus threaten to overwhelm other, potentially productive elements of an evaluation conversation.
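The size of that limit is easy to see in a toy simulation: even when every teacher effect is real, a modest teacher-level component of scores stays a small share of total variance once outside factors and noise are added. The numbers below (teacher effects with standard deviation 3 against other factors with standard deviation 8) are illustrative assumptions, not estimates from the literature.

```python
# Toy variance decomposition (illustrative parameters, not real estimates):
# scores = teacher effect + everything else. Even with genuine teacher
# effects, the teacher share of total variance comes out small.
import random

random.seed(0)
teacher_effects = {t: random.gauss(0, 3) for t in range(20)}  # 20 teachers

scores, teacher_part = [], []
for effect in teacher_effects.values():
    for _ in range(25):  # 25 students per teacher
        other = random.gauss(0, 8)  # outside-of-school factors, luck, noise
        scores.append(effect + other)
        teacher_part.append(effect)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Roughly 3^2 / (3^2 + 8^2) ~ 0.12: teachers "explain" about a tenth of
# the variation, so any single score is mostly signal about other things.
share = var(teacher_part) / var(scores)
```

Under these made-up parameters, `share` lands near 0.12 – in line with the “less than a fifth” range the research literature reports – which is why a single good or bad VAM result says little on its own about what the teacher did.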
A better evaluation system would focus on actionable feedback about things over which a teacher has direct control. Student performance should absolutely be included in the teacher evaluation process, but instead of making VAM a defined percentage of a teacher’s evaluation (part of a “grade”), evaluators should give teachers feedback on how well they use information about student performance to analyze their teaching practices and adapt their instruction accordingly. This approach, unlike the approach favored by many VAM proponents, would help a teacher improve over time.
2) A second core purpose of teacher evaluation is to help evaluators make personnel decisions. Relative to the evaluation system described above – one that focuses on actions over which a teacher has control – making VAM results a defined percentage of teacher evaluations does not help us with this purpose, either. Suppose a teacher gets a bad VAM result. If that result is consistent with classroom observation data, the quality of assigned work, and various other elements of the teacher’s practice, an evaluator shouldn’t need it to conclude that the teacher is ineffective.
If there is a discrepancy between the VAM result and the other measures, on the other hand, there are a few possibilities. The VAM results might have been unlucky. The teaching practices the teacher employed might not be as useful as the teacher or evaluator thought they would be. Or perhaps VAM isn’t a very good indicator of teacher quality (there’s also a possibility that the various other measures aren’t good indicators of teacher quality, but the measures suggested all have more face validity – meaning that they’re more intuitively likely to reflect relevant information – than do VAM results). Under any of these alternative scenarios, using VAM results as a defined percentage of a teacher’s evaluation makes us more likely both to fire teachers who might actually be good at their jobs and to reward teachers who might not be.
To be fair, question 1 could have some relevance for this purpose of teacher evaluation; if VAM results were an excellent indicator of teaching quality (again, they aren’t, but let’s suspend disbelief for a moment), that would negate one of the concerns above and make us more confident in using VAM for reward and punishment. Yet even in this case the defined-percentage approach would hold little if any advantage over the properly designed evaluation system described above in helping administrators make personnel decisions, and it would be significantly more likely both to feel unfair to teachers and to result in a variety of negative consequences.
I’ve had many conversations with proponents of making VAM a defined percentage of teacher evaluations, and not a single one has been able to explain why their approach to VAM is better than an alternative approach that focuses on aspects of teaching practice – like creating a safe classroom environment, lesson planning, analyzing student data, and delivering high-quality instruction – over which teachers have more control.
So while the answer to question 1 in the case of VAM is that, despite its shortcomings, it may provide useful information, the answer to question 2 – should VAM results be used as a defined percentage of teacher evaluations? – is a resounding “no.” And those who understand the crucial distinction between the two questions know that no amount of papers, articles, or researcher opinions, however interesting or useful for other purposes they may be, is ever going to change that fact.