Tag Archives: Rhee

The Problem with Outcome-Oriented Evaluations

Imagine I observe two poker players playing two tournaments each. During their first tournaments, Player A makes $1200 and Player B loses $800. During her second tournament, Player A pockets another $1000. Player B, on the other hand, loses $1100 more during her second tournament. Would it be a good decision for me to sit down at a table and model my play after Player A?

For many people the answer to this question – no – is counterintuitive. I watched Player A and Player B play two tournaments each and their results were very different – haven’t I seen enough to conclude that Player A is the better poker player? Yet poker involves a considerable amount of luck and there are numerous possible short- and longer-term outcomes for skilled and unskilled players. As Nate Silver writes in The Signal and the Noise, I could monitor each player’s winnings during a year of their full-time play and still not know whether either of them was any good at poker. It would be fully plausible for a “very good limit hold ‘em player” to “have lost $35,000” during that time. Instead of focusing on the desired outcome of their play – making money – I should mimic the player who uses strategies that will, over time, increase the likelihood of future winnings. As Silver writes,

When we play poker, we control our decision-making process but not how the cards come down. If you correctly detect an opponent’s bluff, but he gets a lucky card and wins the hand anyway, you should be pleased rather than angry, because you played the hand as well as you could. The irony is that by being less focused on your results, you may achieve better ones.
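
To make the role of variance concrete, here is a toy Monte Carlo sketch in Python. Every number in it is invented for illustration (it is not Silver’s calculation), but it shows how a player with a genuine edge can still lose money over a long stretch of play.

```python
import random

random.seed(1)

# Toy model (all numbers invented for illustration): a skilled player whose
# decisions are worth +$50 per tournament in expectation, with large
# tournament-to-tournament swings around that average.
TRUE_EDGE = 50        # expected profit per tournament ($)
SWING = 2000          # standard deviation of a single tournament's result ($)
TOURNAMENTS = 100     # roughly a year of regular play

def yearly_result():
    """Total profit over one simulated year of tournaments."""
    return sum(random.gauss(TRUE_EDGE, SWING) for _ in range(TOURNAMENTS))

results = [yearly_result() for _ in range(10_000)]
losing_years = sum(r < 0 for r in results) / len(results)
print(f"Simulated 'years' in which this skilled player lost money: {losing_years:.0%}")
```

With these made-up parameters, roughly two out of every five simulated years end in the red even though every decision is profitable in expectation, which is exactly why watching two tournaments (or even a year of them) tells us so little.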

As Silver recommends for poker and Teach For America recommends to corps members, we should always focus on our “locus of control.” For example, I have frequently criticized Barack Obama for his approach to the Affordable Care Act. While I am unhappy that the health care bill did not include a public option, I couldn’t blame Obama if he had actually tried to pass such a bill and failed because of an obstinate Congress. My critique lies instead with the President’s deceptive work against a more progressive bill – while politicians don’t always control policy outcomes, they do control their actions. As another example, college applicants should not judge their success on whether or not colleges accept them. They should evaluate themselves on what they control – the work they put into high school and their applications. Likewise, great football coaches recognize that they should judge their teams not on their won-loss records, but on each player’s successful execution of assigned responsibilities. Smart decisions and strong performance do not always beget good results; the more factors between our actions and the desired outcome, the less the outcome can tell us about the quality of those actions.

Most education reformers and policymakers, unfortunately, still fail to recognize this basic tenet of probabilistic reasoning, a fact underscored in recent conversations between Jack Schneider (a current professor and one of the best high school teachers I’ve ever had) and Michelle Rhee. We implement teacher and school accountability metrics that focus heavily on student outcomes without realizing that this approach is invalid. As the American Statistical Association’s (ASA’s) recent statement on value-added modeling (VAM) clearly states, “teachers account for about 1% to 14% of the variability in [student] test scores” and “[e]ffects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.” Paul Bruno astutely notes that the ASA’s statement is an indictment of the way VAM is used, not the idea of VAM itself, yet little correlation currently exists between VAM results and effective teaching. As I’ve mentioned before, research on both student and teacher incentives suggests that rewards and consequences based on outcomes don’t work. When we use student outcome data to assign credit or blame to educators, we may close good schools, demoralize and dismiss good teachers, and ultimately undermine the likelihood of achieving the student outcomes we want.
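
A quick simulation helps show why that matters. This is a toy sketch, not the ASA’s analysis, and every parameter in it is an assumption I chose for illustration (including a classroom-level shock for factors like class composition that sit outside the teacher’s control): when teachers account for only a small slice of score variance, a single year of outcome data routinely misranks genuinely good teachers.

```python
import random
import statistics

random.seed(2)

# Toy simulation (all parameters are assumptions for illustration, not numbers
# from the ASA statement): teachers explain only a small slice of score
# variance, so a one-year, outcome-based ranking is noisy.
N_TEACHERS = 2000
CLASS_SIZE = 25
TEACHER_VAR = 0.05    # share of score variance attributable to the teacher (within the ASA's 1%-14% range)
CLASS_VAR = 0.10      # assumed classroom-level factors outside the teacher's control
STUDENT_VAR = 0.85    # remaining student-level variation

true_effects, class_means = [], []
for _ in range(N_TEACHERS):
    t = random.gauss(0, TEACHER_VAR ** 0.5)
    shock = random.gauss(0, CLASS_VAR ** 0.5)
    scores = [t + shock + random.gauss(0, STUDENT_VAR ** 0.5) for _ in range(CLASS_SIZE)]
    true_effects.append(t)
    class_means.append(statistics.mean(scores))

# How often does a genuinely above-average teacher land in the bottom quartile
# of the one-year outcome ranking? (Typically on the order of 10-15% here.)
cutoff = sorted(class_means)[N_TEACHERS // 4]
misranked = sum(1 for t, m in zip(true_effects, class_means) if t > 0 and m < cutoff)
above_average = sum(1 for t in true_effects if t > 0)
print(f"Above-average teachers rated bottom-quartile: {misranked / above_average:.0%}")
```

Under these assumptions, a meaningful share of teachers who are genuinely better than average still end up in the bottom quartile of a single-year, outcome-based ranking, which is the kind of misattribution the ASA statement warns about.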

Better policy would focus on school and teacher inputs. For example, we should agree on a set of clear and specific best teaching practices (with the caveat that they’d have to be sufficiently flexible to allow for different teaching styles) on which to base teacher evaluations. Similarly, college counselors should provide college applicants with guidance about the components of good applications. Football coaches should likewise focus on their players’ decision-making and execution of blocking, tackling, route-running, and other techniques.

[Graphic: school and teacher inputs vs. student outcomes]

When we evaluate schools on student outcomes, we reward (and punish) them for factors they don’t directly control.  A more intelligent and fair approach would evaluate the actions schools take in pursuit of better student outcomes, not the outcomes themselves.

Outcomes are incredibly important to monitor and consider when selecting effective inputs, of course. Through a process statisticians call Bayesian analysis, we can use outcomes to continually update our assessments of whether or not our strategies are working. If we observe little correlation between successful implementation of our identified best teaching practices and student growth for five consecutive years, for instance, we may want to revisit our definition of best practices. A college counselor whose top students are consistently rejected from Ivy League schools should begin to reconsider the advice he gives his students on their applications. Relatedly, if a football team suffers through losing season after losing season despite players’ successful completion of their assigned responsibilities, the team should probably overhaul its strategy.
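
Here is a minimal sketch of what that updating looks like in practice. The setup is entirely hypothetical: I assume we track, year by year, how many classrooms that faithfully implemented an identified best practice showed strong student growth, and I use a Beta distribution to represent our belief about the practice’s success rate.

```python
# Minimal Bayesian-updating sketch (all numbers are hypothetical). We treat
# "a classroom that faithfully implements the practice shows strong growth"
# as a success with unknown probability p, and represent our belief about p
# with a Beta(a, b) distribution.

def update(a, b, successes, failures):
    """Beta-Binomial update: add observed successes and failures to the prior counts."""
    return a + successes, b + failures

# Prior: we lean toward believing the practice helps (prior mean 0.6), but only weakly.
a, b = 3, 2

# Hypothetical observations: (classrooms with strong growth, classrooms without),
# one pair per year of monitoring.
yearly_data = [(4, 6), (3, 7), (5, 5), (2, 8), (3, 7)]

for year, (growth, no_growth) in enumerate(yearly_data, start=1):
    a, b = update(a, b, growth, no_growth)
    print(f"After year {year}: posterior mean success rate = {a / (a + b):.2f}")

# The estimate falls from 0.60 to roughly 0.36 over five years of weak results;
# at some point the honest response is to revisit the definition of "best
# practice" rather than keep defending it.
```

The same logic applies to the college counselor and the football team: the inputs stay in focus, but the outcomes keep score on whether those inputs deserve our continued confidence.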

The current use of student outcome data to make high-stakes decisions in education, however, flies in the face of these principles. Until we shift our measures of school and teacher performance from student outputs to school and teacher inputs, we will unfortunately continue to make bad policy decisions that simultaneously alienate educators and undermine the very outcomes we are trying to achieve.

Update: A version of this piece appeared in Valerie Strauss’s column in The Washington Post on Sunday, May 25.


Filed under Education, Philosophy

Approaching Education Data the Nate Silver Way

My girlfriend’s very hospitable and generous family gave me some great gifts for the holidays when I stayed with them in upstate New York.  As I rocked my new Teach For America T-shirt in the Rochester airport on Christmas Eve, I started paging through Nate Silver’s new book, The Signal and the Noise, and that cursory first read inspired me to write this post.

While most people probably know Silver for his election predictions and his designation in 2009 as one of the world’s 100 Most Influential People, he has been my baseball stat guru for considerably longer than he’s been doing political analysis.  In one of my favorite books of all time, Baseball Between the Numbers, Silver penned a brilliant examination of clutch hitting that I still quote at least four or five times a year.  I have generally found Silver’s arguments compelling not just because of his statistical brilliance, but also because of his high standards for data collection and analysis, evident in the following passage from the introduction of his book:

The numbers have no way of speaking for themselves.  We speak for them.  We imbue them with meaning…[W]e may construe them in self-serving ways that are detached from their objective reality…Before we demand more of our data, we need to demand more of ourselves.

In few fields are Silver’s words as relevant as they are in education.  While the phrase “data-driven” has become ubiquitous in discussions of school reform and high-quality instruction, most people discussing education have very little understanding of what the statistics actually say.  As I’ve written before, many studies that reformers reference to push their policy agendas are methodologically unsound, and many more have findings very different from the summaries that make it into the news.

It’s hard to know how many reformers just don’t understand statistics, how many fall victim to confirmation bias, and how many intentionally mislead people.  But no matter the reason for their errors, those of us who care about student outcomes have a responsibility to identify statistical misinterpretation and manipulation and correct it.  Policy changes based on bad data and shoddy analyses won’t help (and will quite possibly harm) low-income students.

Fortunately, I believe one simple practice can help us identify truth in education research: read the full text of education research articles.

Yes, reading the full text of academic research papers can be time-consuming and mind-numbingly dull at times, but it is vitally important if you want to understand research findings.  Sound bites on education studies rarely provide accurate information.  In a Facebook comment following my most recent post about TFA, a former classmate of mine referenced a 2011 study by Raj Chetty to argue that we can’t blame the achievement gap on poverty.  “If you leave a low value-added teacher in your school for 10 years, rather than replacing him with an average teacher, you are hypothetically talking about $2.5 million in lost income,” claims one of the co-authors of the study in a New York Times article.  Sounds impressive.  Look under the hood, however, and we find that, even assuming the study’s methodology is foolproof (it isn’t), the actual evidence can at best show an average difference of $182 in the annual salaries of 28-year-olds.
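
To see how a modest measured difference becomes a multi-million-dollar headline, consider a rough back-of-the-envelope extrapolation. This is not the paper’s actual calculation; every multiplier below is an assumption I picked simply to show the mechanics.

```python
# Purely illustrative back-of-envelope: how a small measured per-student
# difference can be extrapolated into a multi-million-dollar headline.
# Every number below is an assumption for illustration, not a figure taken
# from the study itself.

measured_annual_gap = 182     # observed difference in earnings at age 28, per student ($)
students_per_class = 28       # assumed class size
years_of_classrooms = 10      # "leave a low value-added teacher in place for 10 years"
working_years = 40            # assume the age-28 gap persists for an entire career

headline_total = measured_annual_gap * students_per_class * years_of_classrooms * working_years
per_student_total = measured_annual_gap * working_years

print(f"Extrapolated total across all students: ${headline_total:,.0f}")      # roughly $2,038,400
print(f"Extrapolated lifetime total per student: ${per_student_total:,.0f}")  # roughly $7,280
```

The headline figure depends far more on the stack of multipliers (class size, years of classrooms, a career-long persistence assumption) than on the modest measured effect, which is precisely why the full text matters.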

As I’ve mentioned before, there’s also a poor statistical basis for linking student standardized test scores to teacher evaluation systems.  Otherwise useful research can give readers the wrong impression when its summaries gloss over or omit this fact, a point underscored by a recent article describing an analysis of IMPACT (the D.C. Public Schools teacher evaluation system).  The full text of the study provides strong evidence that the success of D.C.’s system thus far has been achieved despite a lack of variation in standardized test score results among teachers in different effectiveness categories.  Instead, the successes of the D.C. evaluation system are driven by programs teachers unions frequently support, programs like robust and meaningful classroom observations that more accurately measure teacher effectiveness.

Policymakers have misled the public with PISA data as well.  In a recent interview with MSNBC’s Chris Hayes, Michelle Rhee made the oft-repeated claim that U.S. schools are failing because American students, in aggregate, score lower on international tests than their peers in other countries.  Yet, as Hayes pointed out, it is abundantly clear from a more thorough analysis that poverty explains the PISA results much better than school quality, not least because poor US students have been doing better on international tests than poor students elsewhere for several years.

I would, in general, recommend skepticism when reading articles on education, but I’d recommend skepticism in particular when someone offers a statistic suggesting that school-related changes can solve the achievement gap.  Education research’s only clear conclusion right now is that poverty explains the majority of student outcomes.  The full text of Chetty’s most recent study defending value-added models acknowledges that “differences in teacher quality are not the primary reason that high SES students currently do much better than their low SES peers” and that “differences in [kinder through eighth grade] teacher quality account for only…7% of the test score differences” between low- and high-income schools.  In fact, that more recent study performs a hypothetical experiment in which the lowest-performing low-income students receive the “best” teachers and the highest-performing affluent students receive the “worst” teachers from kinder through eighth grade and concludes that the affluent students would still outperform the poor students on average (albeit by a much smaller margin).  Hayes made the same point to Rhee that I made in my last post: because student achievement is influenced significantly more by poverty than by schools, discussions about how to meet our students’ needs must address income inequality in addition to evidence-based school reforms.  We can’t be advocates for poor students and exclude policies that address poverty from our recommendations.

When deciding which school-based recommendations to make, we must remember that writers and policymakers all too often misunderstand education research.  Many reformers selectively highlight decontextualized research that supports their already-formed opinions.  Our students, on the other hand, depend on us to combat misleading claims by doing our due diligence, unveiling erroneous interpretations, and ensuring that sound data and accurate statistical analyses drive decision-making. They rely on us to adopt Nate Silver’s approach to baseball statistics: continuously ask questions, keep an open mind about potential answers, and conduct thorough statistical analyses to better understand reality.  They rely on us to distinguish statistical significance from real-world relevance.  As Silver writes about data in the information age more generally, education research “will produce progress – eventually.  How quickly it does, and whether we regress in the meantime, depends on us.”

Update: Gary Rubinstein and Bruce Baker (thanks for the heads up, Demian Godon) have similar orientations to education research – while we don’t always agree, I appreciate their approach to statistical analysis.

Update 2 (6/8/14): Matthew Di Carlo is an excellent read for anyone interested in thoughtful analysis of educational issues.

Update 3 (7/8/14): The Raj Chetty study linked above seems to have been modified – the pieces I quoted have disappeared.  Not sure when that happened, or why, but I’d love to hear an explanation from the authors and see a link to the original.


Filed under Education

Working Together for Educational Equity: What’s Missing from the TFA Debate

Teach For America (TFA) articles are all the rage right now.  Over the past month and a half, the four articles linked below have received particular attention:

“I Quit Teach for[sic] America” by Olivia Blanchard (The Atlantic, September 23)

“Remember the ‘I Quit Teach for[sic] America’ essay?  Here’s the counterpoint. ‘I stayed.’” by Maureen Downey and Tre Tennyson (The Atlanta Journal-Constitution, October 3)

“Why I Stopped Writing Recommendation Letters for Teach for[sic] America” by Catherine Michna (Slate, October 9)

“I Almost Quit Teach for[sic] America” by Eleanor Barkhorn (The Atlantic, October 14)

Though I generally hesitate to suggest that truth lies somewhere towards the middle of two extremes, the majority of both pro- and anti-TFA articles in this case contain inaccurate claims and arguments that unnecessarily pit people with the same goals against each other.  This post is my attempt to debunk the inaccuracies presented in these articles and identify the true benefits and drawbacks of TFA.  I also hope to identify how TFA and opponents of TFA can find common ground in their work for educational equity.

Before I make those arguments, a little bit about my educational background: I attended a traditional public school in a working-class, mostly white neighborhood in southern New Jersey from first grade through sixth grade.  From seventh grade to twelfth grade, my parents sent me to Moorestown Friends School (MFS), a high-performing private Quaker school twenty-five minutes from my house.  I moved across the country to attend Stanford University for college and joined TFA right afterwards. San Jose Unified School District (SJUSD) paid TFA a few thousand dollars to hire me to teach at San Jose Community Day School (SJCDS), a school for students expelled from other schools for drug, weapon, violence, or other behavioral offenses.  I taught at SJCDS for three years, during the second of which I served as my school’s Site Representative for the San Jose Teachers Association (SJTA), the union that represents around 1,700 professional educators in SJUSD.  In my third year at SJCDS, I was appointed to the SJTA Executive Board.  I still serve as the SJTA Outreach Director in my new role as an instructional coach in SJUSD and also run professional development sessions for first- and second-year TFA corps members in San Jose.  I feel connected to both SJTA and TFA, though I tend to hear more compelling arguments from my colleagues at SJTA than I hear from the TFA staff members I know.  I hope you find this context valuable as I address the claims either directly made or implied in the above articles by answering the questions below:

Are TFA teachers prepared for their teaching assignments?

The short answer to this question is no.  As both Blanchard’s and Barkhorn’s articles note, TFA’s summer Institute, besides being short and often unrelated to a teacher’s upcoming teaching assignment, focuses far too much on theory and vision and far too little on tangible skills.  However, criticisms of TFA along these lines are, as another alum puts it, “a moot point” – nobody does a particularly good job preparing first-year teachers for assignments in low-income neighborhoods.  As I mentioned in an earlier blog post, nearly all the evidence suggests that there is very little, if any, difference, on average, between the standardized test results of students who have had TFA teachers and students who have had teachers with different backgrounds.  One of my friends, fellow 2010 TFA alum Connor O’Steen, summarized the problem with the “lack of preparation” critique in response to that post on Facebook:

…[W]hat does it mean when (at least) two years and 40-50,000 dollars of ed school has you performing ever so slightly worse on average than someone who’s done a six week crash course over a summer? Certainly you’d expect ed school–this long and formal educational experience which usually culminates in a Master’s degree–to add more value, more human capital? I think a lot of the criticism of TFA comes from stakeholders in the traditional ed pipeline who are made genuinely uncomfortable by the fact that all the training and apprenticeships seem to put people solely on par with beginning TFA corps members. Granted, there are more ways to measure achievement than standardized tests, but I don’t think many people would see percentile scores this low and think there’s *not* a problem here.

While I know several teachers (both within and outside of TFA) who believe their training contributed value to their teaching, I know many more, from a variety of preparation programs, who believe their training was practically useless.  Studies suggest corps members and other teachers have similar attitudes about their preparation, and there’s no escaping the fact that time spent in a teacher preparation program has no well-established statistical correlation with teacher effectiveness.  I proposed three possible explanations for this fact in my response to Connor’s post:

1. TFA and traditional teacher education systems are similarly ineffective at preparing teachers for placements in low-income schools.
2. TFA is less effective than traditional teacher education systems at training teachers but recruits better “talent” on average than those programs. One of the more interesting findings from the Mathematica study was the lack of correlation between a teacher’s undergraduate background and student achievement. But Dana Goldstein has an alternate theory (http://www.danagoldstein.net/…) that work ethic, a strong orientation to a mission, and intense focus on data and testing all explain the results.
3. TFA is more effective than traditional teacher education systems at training teachers but traditional teacher education systems recruit better “talent” than TFA. There are few people who make this argument.

Whichever of the above three options is most accurate, it’s hard to indict TFA for putting poorly prepared teachers in schools unless you indict every single teacher preparation program for the same fault.  I actually believe both traditional teacher preparation programs and TFA’s program (which is very similar in content to traditional programs) could improve significantly, but my point is that this critique is not valid when used to compare TFA to other programs.  The one exception may be special education: as one member of the SJTA Board pointed out to me, TFA teachers typically lack the legal knowledge necessary to succeed as special educators.  While two of the best special education teachers I know in SJUSD are TFA alums who have remained in the classroom well after their TFA commitments expired, I think that particular criticism is valid.

Does the relatively short two-year commitment negatively impact students?

Most studies suggest that common sense is correct and teacher turnover is bad for students.  Though TFA placement regions have high turnover rates for first- and second-year teachers in general, attrition rates for TFA corps members are in the same ballpark during those two years and are significantly greater in subsequent years.  I personally believe TFA should not recommend corps members for positions for which there are other qualified candidates more likely to remain in education long-term.  In SJUSD, for example, TFA has placed a number of corps members at schools that are relatively low-poverty and easy to staff, which seems antithetical to the TFA mission.

At the same time, and contrary to Michna’s claims, extremely hard-to-staff positions with high turnover rates exist.  Even in SJUSD, which I believe to be one of the best large urban school districts in the country, we still have several open positions nearly three months into the school year.  TFA focuses primarily on these hard-to-staff positions and explicitly tries to select people unlikely to quit on their commitments (they obviously failed in the case of Blanchard, but I think they’re pretty justified in excoriating her for her decision; TFA asks applicants outright in the final interview if they would quit under any circumstances, and I find it hard to believe she answered this question honestly).  In many places, TFA effectively addresses teacher shortages.

Do TFA teachers, on average, help level the playing field for children in low-income communities (do TFA teachers close the achievement gap)?

The short answer to this question is also no; as I mentioned above and discussed in an earlier post, students of TFA teachers seem to post roughly the same standardized test results as students of all other teachers.  Those results are overwhelmingly poor compared to the results for affluent students.

Tennyson states in his article that, in his first year, “100 percent of [his] students passed the ELA exam and 90 percent were proficient or above in reading.  Down the hall, Donna Jenkins, the third corps member at [his] school, led [her] fifth graders to a 95 percent pass rate in math and 97 percent in science.”  These numbers sound great, but there are several possible explanations for them.  While it’s certainly possible that Tennyson and Jenkins were two of the best teachers in America during their first years of teaching and were able to single-handedly change the academic trajectories of their students in one year, I think it’s more likely that these statistics are misleading.  Perhaps their students weren’t all that disadvantaged before fifth grade.  Perhaps some out-of-school factors were contributing to the success of these students.  Perhaps these teacher-designed assessments don’t tell the whole story of student performance.  I’d bet a fair amount of money that Tennyson was at least a pretty good teacher based on what he wrote, but I’d bet even more money that the results he lists have a lot less to do with excellent teaching than he makes it sound.  We’d have to see his tests and get significantly more context and data about his students and classroom to know for sure, but while I’m sure he genuinely believes he can teach kids out of poverty, nearly all the externally verified data we have suggests that’s highly unlikely (again, check out my previous post here for a summary of research findings).  Even if Tennyson and Jenkins did work miracles with their students, they’d be exceedingly rare within TFA.  There’s no reason to believe their success would be replicable on a large scale because, if it were, TFA would be teaching their best practices to all new teachers and getting better results than it currently does.
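
As a way to see why a raw pass rate says so little on its own, here is a toy simulation with invented numbers. It is not a model of Tennyson’s classroom; it simply shows that where students start can swamp even a large teacher effect in a single statistic like a pass rate.

```python
import random

random.seed(0)

def pass_rate(baseline_mean, teacher_effect, n=30, threshold=0.0):
    """Toy model: a student passes if incoming skill + teacher effect + luck
    clears a fixed proficiency threshold.  All parameters are invented."""
    passes = 0
    for _ in range(n):
        score = random.gauss(baseline_mean, 1.0) + teacher_effect + random.gauss(0, 1.0)
        if score > threshold:
            passes += 1
    return passes / n

# A cohort that starts well above the proficiency bar, with an average teacher:
print(f"{pass_rate(baseline_mean=2.0, teacher_effect=0.0):.0%}")   # typically around 90%

# A cohort that starts well below the bar, with an unusually strong teacher:
print(f"{pass_rate(baseline_mean=-2.0, teacher_effect=0.5):.0%}")  # typically 10-20%
```

Without knowing where the cohorts started (and how hard the tests were), impressive-sounding percentages are compatible with almost any level of teaching quality.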

Again, none of that is to say TFA teachers (and other teachers, for that matter) can’t make a difference and change students’ lives – they definitely can and I know a number of people who were very good teachers as corps members – but many TFA teachers, like many charter networks, have a tendency to overstate their impact.  In the case of individual teachers, I’m inclined to believe the misleading information they present is unintentional, though I am less predisposed to think that misinformation coming from organizational leadership is so innocuous.

How do TFA’s leadership development, political work and alignment, and brand affect low-income students?

TFA’s mission includes developing leaders who work “to ensure that all children can receive an excellent education” outside of the classroom.  Critics sometimes forget this purpose.  The hope is that even people who join TFA solely to build their resumes will, during their two-year stints in the corps, see the obstacles low-income children face and will then advocate for those children long after they have left the teaching profession for their careers in law, medicine, or business.  I believe this goal is admirable.  At the same time, however, TFA’s brand often produces leaders and political outcomes that actively harm students in poverty.

Blanchard formulates a pretty accurate summary of the problem.  Pervading TFA is

…the unspoken logic that current, non-TFA teachers and schools are failing at the task of closing the achievement gap, through some combination of apathy or incompetence. Although TFA seminars and presentations never explicitly accuse educators of either, the implication is strong within the program’s very structure: recruit high-achieving college students, train them over the summer, and send them into America’s lowest-performing schools to make things right. The subtext is clear: Only you can fix what others have screwed up.

Her analysis gels with my TFA experience – most people within TFA are hesitant to explicitly blame the achievement gap on bad teachers and schools, but most also perpetuate a negative narrative about public education at least implicitly.  When Blanchard asked a TFA spokesperson about TFA’s views on traditionally trained teachers, she received the response that “[i]f anything, teachers are victims of more structural problems: inequitable funding; inadequate systems of training and supporting teachers; the absence of strong school and district leadership.”  Notice that this response still implies that teachers aren’t doing a very good job; it just blames the problem on inequitable school funding, poor training, and bad leadership instead of laying the proximate culpability at teachers’ feet.  I really like nearly everyone I know on TFA staff, but I have never gotten a single one of them to admit the well-established fact that in-school factors explain, at most, 33% of the variation in student achievement.

This mindset – that teachers and schools have nearly total control over student outcomes – has two really problematic implications.  The first is that schools serving low-performing students are bad schools: that some combination of the teachers and leadership at those schools is doing a terrible job someone else could do significantly better, and that mass firings and closings are therefore warranted.  The second is that we can focus our political energy away from solving poverty directly; if education can fix poverty, as Teach For America suggests, poor children can succeed without a drastic overhaul of society.  School-based reforms are all we need.  The reality, though, is that education cannot solve society’s problems.  Education can make a difference, but the main reasons low-income students perform poorly compared to their affluent peers have nothing to do with school and everything to do with the gamut of obstacles they face from birth.  When you break down school performance in the US by free and reduced-price lunch rate before comparing it to school performance internationally, “low-performing” US schools with high numbers of poor students have higher test scores than schools in countries with similar concentrations of disadvantage.

The best critique of Teach For America, in my opinion, is based on political affiliations and impact.  The organization produces a large number of influential alumni who support expanding charter schools, changing teacher employment law, and making student standardized test scores increasingly important in teacher and school evaluations.  There is, unfortunately, very little evidence that these reforms help poor students.

Yet a lot of politicians who couldn’t care less about poor kids rally around TFA’s “unspoken logic.”  Chris Christie, the governor of New Jersey, is a prime example.  Christie uses the cover of an education “reform” agenda – he promotes closing schools, opening more charters, eliminating tenure, and introducing “merit pay” based on student test score data – to hide the fact that tax cuts for the rich are a higher priority for him than poor students eating breakfast or lunch (see this link for a more extensive list of Christie’s cuts to education).  TFA obviously doesn’t support cutting school breakfast money, but the concept that educator and school-related changes are most important for poor students enables people like Christie to further disadvantage low-income kids, bust unions, enrich the wealthy even further, and receive credit for supposedly student-oriented ideas at the same time.

How can TFA, teachers unions, and other proponents of opportunities for low-income children work together for educational equity?

In the end, most people within Teach For America and most other people working in education have very similar goals; to use the words of the San Jose Teachers Association, most of us want to “educate, inspire, and change lives through public education.”  As I recently discussed with my older sister, the biggest shame about the TFA debate is that, while people who care about kids are arguing with each other about teacher and school quality, people like Christie are exacerbating poverty and directly destroying the lives of low-income students.

So what should TFA and people like Michna and Blanchard do differently to better support their stated missions?

First, and most importantly, TFA should acknowledge that the achievement gap is caused by poverty, not by bad teachers and schools.  School-related changes alone can address only some of poverty’s symptoms.  TFA should thus publicly advocate for policies that address poverty, policies like single-payer health care, increased taxes on the wealthy, wraparound services for low-income kids, and more environmentally and socially responsible food standards.  This advocacy will cost TFA money – I highly doubt Arthur Rock and many of Teach For America’s “National Corporate Partners, Sponsors, Supporters, and Investors” will continue to support the organization if TFA begins to promote reducing income inequality – but if TFA is really “students first,” it will worry about that funding later and start working now for the change most likely to actually benefit poor students.  Quality teaching matters, but what matters more is the overall environment in which the student grows up and lives.

Second, everyone in education should promote further research on the link between various reform ideas and student outcomes.  Until other reform ideas are supported by strong evidence, however, we should focus on the school-related change everyone agrees about: teacher support.  Though more study and experimentation are needed, research suggests that teachers can benefit greatly from ongoing professional development in the form of one-on-one coaching.  TFA already has a structure for coaching corps members and, when it comes to TFA teachers, believes in development instead of dismissal.  Many traditional school districts, like SJUSD, have similar coaching models for the same purpose.  I believe directing energy and policy focus towards making these systems more effective and aligned with this purpose should be the primary goal of education reform.  Focusing on evidence-based support first and other evidence-based reforms second is both the ethical way to treat the teaching workforce and a way to encourage the development of strong teachers interested in remaining in the profession.

At the same time, educators must consider additional reforms pending future research.  While student test scores, for example, are not yet a valid or reliable indicator of effectiveness, we should continue to study them.  Teachers unions can get behind that idea; unions only oppose linking test scores to teacher evaluations because doing so currently provides an inaccurate picture of a teacher’s effectiveness.  Unions believe in robust evaluation systems that more accurately assess teachers’ contributions.  I also believe that, if empowered by a change in the education narrative and given adequate support, the vast majority of teachers would buy into respectful, evidence-based discussions about revised layoff procedures and expedited dismissal processes for the small fraction of teachers not doing their jobs.  Those discussions present a problem now mainly because reformers like Michelle Rhee continue to promote unproven reforms and focus on teacher blame and dismissal rather than substantive, constructive criticism and support.

In general, critics of TFA should stop harping on illegitimate complaints about TFA teachers’ lack of preparedness.  A lot of TFA teachers turn out to be very good teachers, even in their first years, and targeting well-intentioned, hardworking, and talented individuals for the problems of the larger organization is counterproductive.  Teachers should also remember that we do make a difference – though we can’t close the achievement gap, we can markedly improve our students’ lives.  And TFA should stop sending the sometimes explicit and frequently implicit message to its corps members and the general public that educational changes alone can fix poverty, since they can’t.

All educational stakeholders should be able to agree that we must continuously improve our schools and practices to better serve our students.  But to truly put our low-income kids first, TFA and other stakeholders must simultaneously band together with teachers unions and advocate for social justice policies that address economic inequality.

Note: Thanks to Jack Schneider, this post was updated to include the most recent data on teacher attitudes about their preparation programs.

Update 2 (2/21/14): The second-to-last paragraph of this piece originally referred to critics of TFA as “the anti-reform crowd.”  This reference has been changed because of a thoughtful comment by Serge Vartanov.

Update 3 (3/2/14): The text above originally included a parenthetical aside that referenced a flawed study on teacher preparation programs.  Thank you to Demian Godon for prompting me to reexamine it.

Update 4 (9/26/15): In reading back through this post, I realized that the text originally said the following:

“When you break down school performance in the US by poverty rate before comparing it to school performance internationally, ‘low-performing’ US schools with high poverty rates do better than schools in every other country with similar rates.”

The link, however, does not break down US schools by the official poverty rate, but by the percent of students who receive free and reduced-price lunch.  I have updated the text to more accurately reflect this fact, though it’s worth noting that the official poverty threshold in the US is set very low and that the percent of students receiving free and reduced-price lunch is probably a better proxy for disadvantage.


Filed under Education