Several people have sent me articles discussing Mathematica’s recent research study that examined Teach For America (TFA) math teacher effectiveness. This study is significant because, to my knowledge, it is the first large-scale study on TFA to randomly assign students to classrooms. Its experimental design provides fairly convincing support for the idea that TFA teachers’ students perform no worse than the students of non-TFA teachers at TFA placement schools. But this result is consistent with previous research and does not support the assertion that Teach For America teachers can close the achievement gap.
Anyone who continues to argue that TFA teachers yield worse educational outcomes than other teachers (generally citing pretty old research and ignoring the much larger body of research that contradicts that claim) is just plain wrong. While there are some methodological concerns with recent studies, enough evidence exists for me to state confidently that TFA teachers, on average, do not harm student achievement. At the same time, TFA and its proponents must also stop using misleading data and insisting that studies like this one prove more than they actually do. Despite what some articles claim, the new Mathematica research does not suggest that TFA teachers’ students outperform non-TFA teachers’ students in any meaningful way.
The study showed a difference between TFA teachers and all comparison teachers of 7% of one standard deviation. To put that number in context, a difference of 7% of one standard deviation in home runs between two baseball players in 2012 would be a difference of less than one home run over the course of the entire 162-game season. Or, if you aren’t a baseball fan, a difference of 7% of one standard deviation between two students on the math section of the SAT in 2012 was equivalent to a difference of less than one correctly answered question. The authors of the Mathematica study, and just about every article quoting the study, claim that 7% of one standard deviation in this context is equivalent to 2.6 months of learning, using this 2007 research paper as justification, but that number is invalid and rests on an inappropriately applied heuristic. The average student in a non-TFA classroom scored in the 27th percentile on the tests administered, while the average student in a TFA classroom scored in the 30th percentile; moving from the 27th percentile to the 30th percentile does not represent, on average, 2.6 months of learning. Furthermore, 40% of classrooms with TFA teachers scored lower than comparison classrooms taught by non-TFA teachers. The study’s results were statistically significant, sure, but the advantage they show for TFA teachers is remarkably slight at best.
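If you want to see for yourself just how small a 0.07 standard deviation effect is, here is a quick back-of-the-envelope sketch. It is my own illustration, not a calculation from the study: it assumes test scores are roughly normally distributed (real score distributions aren’t exactly normal), and it simply shifts the comparison group’s average percentile up by the reported effect size.

```python
from statistics import NormalDist

# Reported effect: 0.07 standard deviations (7% of one SD).
effect_sd = 0.07

# The average student in a comparison (non-TFA) classroom scored
# at the 27th percentile on the tests administered.
baseline_pct = 0.27

# Convert that percentile to a z-score on a standard normal curve...
z_baseline = NormalDist().inv_cdf(baseline_pct)   # about -0.61

# ...shift it up by the reported effect size...
z_tfa = z_baseline + effect_sd

# ...and convert back to a percentile.
tfa_pct = NormalDist().cdf(z_tfa)                 # about 0.29

print(f"{baseline_pct:.0%} -> {tfa_pct:.0%}")
```

Under that simple normal assumption, 0.07 standard deviations moves the average student from the 27th percentile to roughly the 29th, in the same ballpark as the 30th percentile the study reports. Either way, it is a percentile shift you could miss by blinking, which is the point.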
To me, the most important takeaway from the Mathematica study is that students at TFA placement schools, in general, perform terribly on standardized tests no matter who happens to be teaching them. The reasons for that fact, as I alluded to in my last post, have a lot less to do with teaching and school quality than reformers would have us believe. Most teachers, whether from TFA or any other program, want to help kids learn and are working hard towards that end most of the time. But despite our best efforts, in-school reforms alone do little to impact the achievement gap. The Mathematica study suggests that educators can only succeed if we simultaneously address economic inequality and other outside-of-school factors that disadvantage low-income students. I’d like to see critics and proponents of TFA alike stop quibbling about marginal improvements on standardized tests and start concentrating on the larger-scale advocacy that can really make a difference.