
Who Evaluates Teachers, and Why?

Over at EdWeek, Stephen Sawchuk recently wrote an article about teacher evaluation and assessment, titled “Wanted: Ways to Assess the Majority of Teachers.” The article provides an informative look at various ways that teachers, their unions, administrators, and districts can join together to improve teacher evaluation. Those of us at Accomplished California Teachers (ACT) welcome any focus on developing robust teacher evaluation systems that serve all teachers and involve teachers in their design and implementation. That kind of approach represents some of the core thinking in our teacher evaluation report from last year.

In the comments section on that article, I engaged with another commenter, John Mierzwa of Las Vegas, whose first comment began:

The answer is ridiculously simple: let their bosses (principals and vice-principals) craft and conduct performance assessments, and then hold their employees (teachers) accountable according to a district-wide progressive reward/discipline matrix. High performers will be recognized and rewarded, average teachers will be pressed to improve, and poor teachers will either improve or be fired. The End.

John added that he felt unions are the main obstacle to addressing evaluations in a straightforward manner, as they only look out for their members’ jobs.  As a union member on a negotiating team that I believe is taking a progressive approach to evaluations, I took issue with John laying the whole problem at the feet of the unions, and also pointed out that in Ohio, union teachers on a review panel evaluating their peers removed more teachers than comparable panels without teacher participation.

But whether we have teachers or administrators conducting the evaluation, the more important issue to me is how we might devise an evaluation system that accomplishes more than minimal quality control (which is often the case right now), and even more than a slightly expanded spectrum of effectiveness rating, as I think John envisions, with “high performers,” “average teachers,” and “poor teachers” – whom he assumes will respond to the “progressive reward/discipline matrix.”  What we should be aiming for is an evaluation system that helps every teacher analyze practice, reflect, and improve constantly.  In the ACT report, we emphasized that the evaluator should have grade level and subject matter expertise in order to provide a high quality evaluation that will really help all teachers.

John replied to my comments at EdWeek, and reinforced a view of evaluation that is grounded, I think, in the wrong goal: ensuring basic quality rather than ensuring constant growth.  He wrote:

it is largely irrelevant what subject (if any) the evaluator(s) used to teach.

Why do I make that claim? Because in most meaningful ways, teachers are no different from any other employee in a responsible position. For example, a person in any workforce is expected to impart his/her experience/wisdom/knowledge (i.e. teach) to those around – especially those who are junior to him/her.

… In my experience, a good manager certainly does not need to know every last detail of his/her reports’ daily tasks in order to notice whether or not the person is doing a good job, following company policies, and striving to improve.

(John goes on to imply that teachers do not work in “the real world” – never a good way to win an argument.  It sure feels real to the students, parents, and educators I know).

There are so many reasons I take issue with the analogies. First of all, I do not “impart” experience, wisdom, and knowledge: I educate.  Educere, one Latin root of the word, means “to draw out.”  John’s language suggests to me the empty vessel or tabula rasa view of students.  I would worry about being evaluated by someone with that difference in perspective on what’s happening in the classroom.

Furthermore, it is entirely relevant what the evaluator used to teach.  I’m not suggesting that we need a perfect match – a former teacher of the early elementary grades might be quite well equipped to evaluate teachers in the middle grades (though I’m open to learning more from my colleagues on that matter).  At the secondary level, however, that expertise certainly matters.  Let me offer a few examples.

First, from “the real world” – in Good Boss, Bad Boss, Bob Sutton extols bosses and CEOs from companies like Disney, Microsoft, Oracle, Apple, Pixar, McDonald’s, Xerox, Google, and others, whose success followed from “a deep understanding of the work they led.”  Sutton adds, “In an ideal world, bosses would always manage work they understood deeply.  But it isn’t always feasible.  Every boss can’t have deep knowledge of every follower’s expertise.  When that happens, a boss’s job is to ask good questions, listen, defer to those with greater expertise, and above all, to accept his or her own ignorance.”  In secondary education, lack of subject matter knowledge precludes deep understanding, and I don’t get the impression John was talking about having evaluators who ask questions and defer to others.

A colleague from Mississippi, Renee Moore, once shared this anecdote.  At a conference of some sort, she showed a videotape of her teaching practices to two groups – one group of administrators and one group of teachers.  The administrators saw chaos in the classroom: too many kids talking, not enough management or discipline.  One student sitting on a desk crumpled up the paper he was holding and tossed it on the floor.  Meanwhile, the teachers marveled at the level of student energy and engagement.  And the student who had discarded his paper, it turned out, was a formerly reluctant writer who at that moment had decided his third draft wasn’t good enough and was about to start the fourth.  The practitioners with deeper knowledge of the work saw the situation quite differently, and, I would argue, more productively, if the goal was to evaluate Renee’s teaching.

And finally, here’s part of the response I offered to John in the comments section back at EdWeek:

“I think our apparent disagreement stems from a different expectation for evaluations. I agree that I don’t need to be a calculus teacher in order to recognize order, organization, engagement, etc., and to review results. Yes, from a management perspective, that might be enough – to manage. You also focused on the extremes, which we might agree are easy to spot. I’m more concerned about the center of the bell curve – since that accounts for most teachers and students. Also, how does the ‘manager’ type evaluator deal with the teacher who seems to do everything right but still isn’t getting the best results?

“I want evaluations that serve to help every teacher improve at teaching, rather than simply ensure some baseline quality control. So, if I (an English teacher) become an evaluator someday, I might be able to see what’s going on in a math class, but I might be stuck regarding ideas to help the teacher actually provide better mathematics education. Do the students need clearer explanations, better examples, more practice, less practice, different practice? What are the usual types of misunderstandings that arise for students at this point in geometry or calculus? I wouldn’t have a clue, nor would I recognize any gaps in the teacher’s knowledge of math.

“Similarly, I might find that my English students are really struggling to identify textual details to support literary analysis. I can tell you from experience that the former science teachers who evaluated me just saw an orderly classroom and an appropriate, engaging, well-planned lesson, and they signed off. An experienced English teacher might sit with me to review student writing, and recognize something that it took me years to realize on my own: younger students had a tendency to use only character dialogue when gathering evidence. They confused the idea of using quotations from the text with the portions of the text presented in quotation marks. Once I got that, I became much more effective (in a way that will never show up on a standardized test).”

I encourage readers to go back to the EdWeek article and the comments for a more complete picture of the issues, and to see John Mierzwa’s full comments in the event that anyone feels I’ve unfairly characterized his statements in these excerpts.

This blog post originally appeared at InterACT, a group blog from Accomplished California Teachers.

Image: Joycelyn Wyatt

Author: David Cohen

David has been teaching for over 15 years, and is now in his 12th year of teaching in California public high schools. He completed a B.A. in English at U.C. Berkeley (’91), and earned a Master’s degree in Education through the Stanford Teacher Education Program (’95). After achieving National Board Certification in 2004, David served for two years as a support provider for National Board candidates. As one of the founding members of ACT, he helped author the group’s first two policy reports (both due in 2010). David was invited to join Teacher Leaders Network (TLN) in 2007, and has written for the TLN website, along with several articles and a “live blog” for Teacher Magazine. Collaborating with fellow ACT member and 2009 California Teacher of the Year Alex Kajitani, David has contributed op-ed pieces to the Sacramento Bee and the San Diego Union Tribune. In February of 2010, David presented at the annual conference of the California Association of Teachers of English (CATE), and co-presented with ACT director Sandy Dean at the 2010 Summer Conference of the National Staff Development Council (NSDC). In August of 2010, David was a guest on “Forum” – a current affairs program on KQED-FM, a National Public Radio affiliate in San Francisco. David teaches 9th and 10th grade English, serves as an academic advisor, and has particular interests in grading and assessment practices and professional development.
  • Jason Flom

    "Who Evaluates Teachers, and Why?" by @CohenD | Ecology of Education http://bit.ly/hqsxtm Excellent analysis. #edreform

  • David B. Cohen

    Thanks, Jason! RT @JasonFlom: "Who Evaluates Teachers, & Why?" by @CohenD Ecology of Ed. http://bit.ly/hqsxtm Excellent analysis. #edreform

  • Jason Flom

    Why evaluate teachers? To ensure basic quality or ensure constant growth? The answer matters. http://bit.ly/hqsxtm Nice piece by @CohenD

  • Monte Tatom

    Who Evaluates Teachers, and Why? #fhuedu610 http://tinyurl.com/4p5auka

  • A Psychometrician

    So DC has just fired 206 teachers due to ineffective ratings: http://www.washingtonpost.com/local/education/206-low-performing-dc-teachers-fired/2011/07/15/gIQANEj5GI_story.html

    Let’s look closely at this data. 566 teachers were rated minimally effective in year one; 528 were rated minimally effective in year two; 141 of those teachers were rated minimally effective both years. 60% of the teachers rated minimally effective in year one were rated higher in year two (that would be about 339 teachers). That should leave 89 teachers who were rated minimally effective in year one and either didn’t return or were rated ineffective in year two. And here is the point: that leaves 387 teachers who were newly identified as ineffective in year two. In other words, roughly 73% of the teachers rated minimally effective were either new teachers or had actually been rated effective in previous years.

    This raises two issues that are not well addressed by this system or the article. First, the assumption behind firing teachers is that there is a ready supply of effective teachers waiting to be hired. If new teachers fill the ranks of the minimally effective, that assumption is not supported. Second, if a large number of teachers rated minimally effective were actually rated effective in the past, we are presented with the hard-to-accept conclusion that a past rating is not necessarily a good predictor of a future rating. So either actual teacher quality jumps around a considerable amount, in which case we can’t be sure that the teacher fired as ineffective today would still have been ineffective tomorrow, or there is a large amount of error in these ratings, which would explain the large percentage of teachers moving from ineffective to effective as expected chance rather than actual improvement.

    To demonstrate this another way, notice that there were a total of 3341 teachers in year one, of whom 641 earned ratings of minimally effective or lower – about 20%. If we identified teachers at RANDOM, we would expect about 4% of teachers to be identified two years in a row. 4% of 3341 is … drum roll please … 132. So when we consider the 141 who were rated minimally effective twice, we should remember that we would have expected about 132 to be identified at random.

    http://abathroomscale.blogspot.com/
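
For readers who want to check the “identified at random” arithmetic in the comment above, here is a minimal sketch in Python. The figures (3341 teachers rated in year one, 641 rated minimally effective or lower, 141 flagged in both years) come directly from that comment; the independence assumption is the commenter’s, and the exact expected count shifts slightly depending on whether the year-one rate is rounded to 20% (as the comment does) or left unrounded.

```python
# Rough check of the expected-overlap arithmetic from the comment above.
# All counts come from that comment; "identified at random" means the
# two years' ratings are treated as independent draws.

total_teachers = 3341    # teachers rated in year one
low_rated_year1 = 641    # rated minimally effective or lower in year one
observed_both = 141      # rated minimally effective in both years

p_flagged = low_rated_year1 / total_teachers      # about 19%; the comment rounds to 20%
expected_both = p_flagged ** 2 * total_teachers   # expected overlap if ratings were random

print(f"Year-one flagged rate: {p_flagged:.1%}")
print(f"Expected flagged both years by chance: {expected_both:.0f}")
print(f"Observed flagged both years: {observed_both}")
```

With the unrounded rate this comes out to roughly 123 teachers expected by chance; with the comment’s rounded 20% it is in the low 130s, near the 132 the commenter cites. Either way, the point of the comment stands: the 141 teachers flagged in both years are not far above what chance alone would produce.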