
Evaluating Teachers with VAM: Variable Ambiguous Mistake


Anyone who has read this blog before is probably aware of my position on the use of value-added measurement for teacher evaluation.  I have argued many times here, and in Teacher Magazine, that politicians, self-styled education reformers, and members of the general public are ill-informed if they believe that we can use state tests to determine teacher effectiveness.  Accomplished California Teachers (ACT) addressed that issue in detail in our report on teacher evaluation, which also featured our recommendations for how California can improve teacher evaluations.

Imaginary VAM results for seven teachers. Image by author.

This morning, having read Ken Bernstein’s Daily Kos post on the same topic, I have one more opportunity to address the issue, by looking at a new policy report from the Economic Policy Institute (EPI).  The title is “Problems with the Use of Student Test Scores to Evaluate Teachers.” EPI convened ten experts* in the fields of teaching, learning, schools, testing, statistics, economics, and social policy, and their review of the available research yields a powerful consensus:

[T]here is broad agreement among statisticians, psychometricians, and economists that student test scores alone are not sufficiently reliable and valid indicators of teacher effectiveness to be used in high-stakes personnel decisions, even when the most sophisticated statistical applications such as value-added modeling are employed.

Of course, many of the advocates of VAM in teacher evaluations are particularly interested in firing the bad teachers.  They may talk about helping identify the best teachers and helping all teachers improve, but you don’t have to be more than a casual observer of these debates to have noticed how they relish the prospect of getting tough on teachers.  The authors of this report have a response to the notion that VAM will help schools clean house and produce better results:

If new laws or policies specifically require that teachers be fired if their students’ test scores do not rise by a certain amount, then more teachers might well be terminated than is now the case. But there is not strong evidence to indicate either that the departing teachers would actually be the weakest teachers, or that the departing teachers would be replaced by more effective ones. There is also little or no evidence for the claim that teachers will be more motivated to improve student learning if teachers are evaluated or monetarily rewarded for student test score gains.

Everyone who cares about schools and students should be shouting down the proponents of VAM for teacher evaluation until they produce evidence to counter those demonstrating all of the problems with that approach.  For too long, the politicians and the tough-on-teachers, tough-on-unions education reformers have been able to coast on their sound bites.  They tell us we “support the status quo;” they tell us “it’s about the students’ needs, not the adults’.”  They rely on the simplistically appealing but incorrect notion that, of course, student test scores measure teaching effectiveness – a notion disproven on several levels now.  They tell us that schools need to run more like businesses.  The EPI report authors respond:

A second reason to be wary of evaluating teachers by their students’ test scores is that so much of the promotion of such approaches is based on a faulty analogy—the notion that this is how the private sector evaluates professional employees. In truth, although payment for professional employees in the private sector is sometimes related to various aspects of their performance, the measurement of this performance almost never depends on narrow quantitative measures analogous to test scores in education.

There are many reasons that VAM fails, most of which I’ve touched upon before, and anyone wanting a more detailed overview can check Ken’s post, or read the report.  I just want to call attention to one of the more interesting ones.  It turns out that if you analyze the data backwards, VAM can appear to prove that next year’s teacher raised this year’s test scores.  Of course, that can’t be true.  If VAM were valid, you would expect it to isolate the effects of teaching that has already occurred; teaching that hasn’t happened yet should only make the data turn “fuzzy” – you wouldn’t observe any effect looking at the data that way.  If the data can be turned upside down and appear to show that “future effect,” it can only mean that students are not randomly placed with teachers.  The deck is stacked in ways that will bias the results for or against a given teacher.
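To make that concrete, here is a minimal simulation sketch in Python. Everything in it – the sample size, the noise levels, the sorting rule, the variable names – is my own illustrative assumption, not anything taken from the EPI report; it simply mimics the kind of tracking the report describes, where students who do well one year are steered toward a particular teacher the next year.

```python
# A minimal sketch of the "future effect" (all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n = 200

ability = rng.normal(0.0, 1.0, n)             # unobserved student ability
grade3 = ability + rng.normal(0.0, 0.5, n)    # end-of-third-grade score
grade4 = ability + rng.normal(0.0, 0.5, n)    # end-of-fourth-grade score
gain4 = grade4 - grade3                       # the fourth-grade teacher's "value added"

# Non-random placement: students scoring above the median in fourth grade
# are assigned to fifth-grade teacher A, the rest to teacher B.
with_teacher_a = grade4 > np.median(grade4)

# Compare fourth-grade gains by *next year's* teacher. Teacher A has not
# taught these students yet, so any gap is pure sorting, not teaching.
phantom = gain4[with_teacher_a].mean() - gain4[~with_teacher_a].mean()
print(f"Apparent fourth-grade 'effect' of next year's teacher: {phantom:.2f}")

# Under random placement, the same comparison hovers near zero.
random_a = rng.permutation(with_teacher_a)
shuffled = gain4[random_a].mean() - gain4[~random_a].mean()
print(f"Same comparison with random placement: {shuffled:.2f}")
```

In this toy setup the future teacher “raises” this year’s gains by a visible margin, and the gap vanishes only when placement is random – which is exactly the assumption VAM needs and real classrooms rarely satisfy.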

I had some success with a baseball analogy last week, so I’ll try another.  If you wanted to measure the effectiveness of basketball coaches, wouldn’t you need to randomly distribute the players, and account for the variable quality of the rest of the staff, and the facilities?  Of course you would.  So, Phil Jackson should have as good a chance of guiding the Los Angeles Clippers to a championship as he did the Los Angeles Lakers.

Now, if we were to run win-loss records through the VAM data analysis in the same backwards fashion, consider the results.  Do you think next year’s lineup would appear to affect this year’s record?  Of course it would, and the reason is obvious: in most cases, there will be considerable overlap.  You don’t start each year with an entirely new roster.  However, in schools, you do start with a new roster.  The report has this to say about the phantom future effect:

Inasmuch as a student’s later fifth grade teacher cannot possibly have influenced that student’s fourth grade performance, this curious result can only mean that students are systematically grouped into fifth grade classrooms based on their fourth grade performance. For example, students who do well in fourth grade may tend to be assigned to one fifth grade teacher while those who do poorly are assigned to another.  The usefulness of value-added modeling requires the assumption that teachers whose performance is being compared have classrooms with students of similar ability (or that the analyst has been able to control statistically for all the relevant characteristics of students that differ across classrooms).

On this point, it appears to me the EPI report authors could go even further in taking apart VAM.  Consider the impossibility of ever identifying and controlling for “all” of the factors that could affect the data, especially when dealing with samples as small as one class (larger studies can claim to mute the effects of stray variables by working with large samples).  Or, lower the bar from “all” to “enough” – and then tell me how you find “enough” without knowing how many there are in the first place.  And in actuality, the proponents of VAM would need to account not only for the varying characteristics of students, but also for the varying effects of combinations of students, the varying effects of classrooms themselves, and the varying effects of every relevant factor in the school that could affect the teacher, students, or classroom.

Good luck.

Meanwhile, more states are winning Race to the Top grants, and celebrating the opportunity to waste money on this misguided approach, as they ignore more cost-effective and proven ways to improve schools and support teachers and students.  We lead the industrialized world in child poverty and poor health care, but by all means, let’s pour hundreds of millions of dollars into voodoo methods to pick out the bad teachers and reward the good ones.  The report authors conclude their executive summary with this sobering and entirely realistic assessment of the consequences if we continue down this path:

Adopting an invalid teacher evaluation system and tying it to rewards and sanctions is likely to lead to inaccurate personnel decisions and to demoralize teachers, causing talented teachers to avoid high-needs students and schools, or to leave the profession entirely, and discouraging potentially effective teachers from entering it. Legislatures should not mandate a test-based approach to teacher evaluation that is unproven and likely to harm not only teachers, but also the children they instruct.

*Disclosure: note that one of the EPI report authors is Linda Darling-Hammond, whose work at Stanford includes advising Accomplished California Teachers, the group responsible for this blog.

This post was originally published on InterACT, Accomplished California Teachers.

Dice Image: Evidentia


Author: David Cohen

David has been teaching for over 15 years, and is now in his 12th year of teaching in California public high schools. He completed a B.A. in English at U.C. Berkeley (’91), and earned a Master’s degree in Education through the Stanford Teacher Education Program (’95). After achieving National Board Certification in 2004, David served for two years as a support provider for National Board candidates. As one of the founding members of ACT, he helped author the group’s first two policy reports (both due in 2010). David was invited to join Teacher Leaders Network (TLN) in 2007, and has written for the TLN website, along with several articles and a “live blog” for Teacher Magazine. Collaborating with fellow ACT member and 2009 California Teacher of the Year Alex Kajitani, David has contributed op-ed pieces to the Sacramento Bee and the San Diego Union Tribune. In February of 2010, David presented at the annual conference of the California Association of Teachers of English (CATE), and co-presented with ACT director Sandy Dean at the 2010 Summer Conference of the National Staff Development Council (NSDC). In August of 2010, David was a guest on “Forum” – a current affairs program on KQED-FM, a National Public Radio affiliate in San Francisco. David teaches 9th and 10th grade English, serves as an academic advisor, and has particular interests in grading and assessment practices and professional development.