Monday, January 26, 2015

Observation Bias?

At a recent networking event for teacher evaluation leaders, we were presented with a provocative research article about classroom observations as the basis for teacher evaluation. One of the authors’ contentions is that there is an inherent bias in the Charlotte Danielson rubrics against teachers working with underperforming students. The authors write, “…a rating of ‘distinguished’ on questioning and discussion techniques requires the teacher’s questions to consistently provide high cognitive challenge with adequate time for students to respond, and requires that students formulate many questions during discussion. Intuitively, the teacher with a greater share of students who are challenging to teach is going to have a tougher time performing well under this rubric than the teacher in the gifted and talented classroom.”

My initial response to this was one of disagreement. As a former instructional coach and Charlotte Danielson fan, I wanted to argue that good teaching is good teaching. For years we’ve argued that the rubrics apply equally in all settings: PE classrooms, special education classrooms, basic skills classrooms, and honors classrooms. It’s been a foundation of both our Q-Comp program and our teacher evaluation program for over a decade. Despite my initial dismissal of the article, I continued reading. And then the authors shared the data.



The disproportionate ratings gave me pause to reflect on my own teaching experiences. When I returned to teaching after spending three-and-a-half years as an instructional coach, I asked my principal to assign me the students with the greatest needs. It was rewarding. And challenging. And exhausting. How would I have been rated that year? I loved the kids I taught, believed in them, and still they did not engage in the learning in the same way as my students who had historically been successful in school.

The implications of this research are huge. As the stakes associated with teacher evaluation increase, teachers may be less inclined to work with students who need the most support. Currently, there is legislation under discussion to connect layoffs with teacher evaluation. While 35% of teacher evaluation is now, by statute, based on student achievement measures, the remainder is likely based on administrative observation.

An additional implication of this research is its effect on the feedback teachers receive on their classes. In most cases, teachers choose which of their classes will be the subject of the observation. In a growth model, such as the Q-Comp instructional coach model, teachers will frequently invite their colleague into their most challenging class. The second set of eyes provides them with insights that lead them to reflect on their practice and consider alternatives to their current methodologies. If, however, the observations are high-stakes, and if the research is correct, teachers will be less likely to invite their administrators into their most challenging classes, perhaps bypassing the opportunity to get feedback that would ultimately help their students.

If the research is valid, how might evaluators level the playing field for teachers? The authors of the research study suggest a complex statistical analysis of student demographics and performance to create a value-add formula that would adjust for these differences. This may be oversolving the problem, and may actually lead to less transparency and additional issues. A simpler solution may be to raise awareness of this potential bias; that awareness alone will influence administrators’ assessments. This also calls for additional emphasis on the pre-observation and post-observation conferences. Teachers need the opportunity to educate the evaluator on the composition of their classes and to articulate the strategies they are employing to meet the needs of those students.
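For readers curious what the authors’ proposed adjustment might look like in spirit, here is a minimal sketch in Python. The class-composition variables and the numbers are invented for illustration; the idea is simply to predict a rating from class composition and treat the gap between the observed and predicted rating as the “adjusted” score.

```python
import numpy as np

# Hypothetical class-composition data (invented for this sketch).
# Columns: share of students below grade level, share receiving special services.
class_composition = np.array([
    [0.10, 0.05],
    [0.45, 0.20],
    [0.30, 0.10],
    [0.60, 0.25],
])
observed_ratings = np.array([3.6, 2.8, 3.2, 2.5])  # raw rubric scores on a 1-4 scale

# Fit a simple linear model: rating ~ intercept + class composition.
X = np.column_stack([np.ones(len(observed_ratings)), class_composition])
coefficients, *_ = np.linalg.lstsq(X, observed_ratings, rcond=None)

# The residual is the "adjusted" rating: how far each teacher's score sits
# above or below what the model predicts for a class like theirs.
predicted = X @ coefficients
adjusted = observed_ratings - predicted
print(adjusted)
```

Even this toy version hints at the transparency concern: the adjusted number depends entirely on which composition variables the model happens to include.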

To be sure, the research presented in this article wasn’t perfect. The sample size was small, and the authors seem to have some biases of their own. Even with those flaws, it should give pause to all of those involved in high-stakes observations.


Tuesday, January 20, 2015

Personalizing Online Feedback

For the past few weeks, the topic of feedback has come up again and again. How important is feedback? Who should be giving the feedback? Under what circumstances? What should the feedback look like and sound like? How should feedback change when it is delivered electronically?

As teachers, our lives revolve around giving feedback. We give feedback to our students informally when we smile at a response while nodding our heads. We give feedback to students formally when we identify the strengths and areas for growth in an essay. We see the impact of our feedback when it is delivered well: “light bulb” moments, improved writing, persisting in a challenge. We have also seen the impact of our feedback when it was delivered poorly: tears, frustration, giving up. So we got better at giving feedback. We learned how to personalize the message to each student. When we have to give students negative feedback – which we do – we know how to couch it with support. But as more and more of our assessments are given and feedback is received digitally, how do we need to modify our approach?

Research shows that tone and interpretation of online communications are entirely dependent upon the mood of the reader. Yikes! Knowing that we can’t possibly know, predict, or control the mood of the reader, how can we effectively communicate with our students about their learning progress?

Michelle Gill, Vice President of Professional Learning for PLS 3rd Learning, an organization dedicated to helping teachers develop and facilitate online learning, suggests a few things.
  • While brevity is often glorified, particularly in email communications, it can have detrimental effects when giving feedback. The more you can explain your feedback, the better. Another option is to use an online voice recorder like Vocaroo to record your feedback orally. This can also personalize the message, as it literally has your voice.
  • Read the feedback aloud before you send it. Can you read it in an angry tone?
  • Be cautious about using exclamation points; they can easily be misinterpreted. Gill rarely uses them in electronic communications. Instead, use vivid language to express your enthusiasm.
  • Get feedback on your virtual voice. Ask a colleague – or, for the truly brave, your students – to analyze your communications for tone and voice. The goal is for your online communications to come across as friendly, supportive, and open to further conversation.

While these tips may not change the mood of the recipient, they may soften its effect on how the feedback lands. Just as we developed skills in giving feedback to students in person, we’ll develop skills for giving it online.

Monday, January 5, 2015

Survey Says...

In my house, Family Feud was an after-school staple. I loved watching the contestants try to read the minds of the surveyed viewers. Of course, now, as an adult with survey-writing and analysis experience, I have all sorts of questions. How were respondents selected? Was it determined by geography? Age? Was the survey conducted over the phone? Through the mail? Was it anonymous? All of these factors impact the results.

While we’ve been using surveys in education to garner feedback on educational programs for many years, we haven’t surveyed students in any systematic way to learn about their perceptions of teachers. The new Minnesota teacher evaluation law that went into effect in July requires that teachers be evaluated on student engagement. As a result, teachers are receiving feedback from students – often in the form of anonymous surveys. And inevitably, even for teachers with great reputations and strong relationships with students and families, a student – or a few students – gives honest feedback that may be difficult to read. How we respond to this feedback is the key. In my conversations with teachers, I've landed on a few recommendations.

First, when examining the data, look at the trends holistically. What big picture does the aggregated data paint? What are the identified strengths? How can those strengths be leveraged? In a growth model, working from strengths is really powerful.

Then dig in. When disaggregating the data, look at it through different filters: class, gender, ethnic demographics. What trends do you notice within these filters? If unfavorable data surfaces, dig even deeper. Some platforms allow the user to browse individual survey responses, which can provide greater insight. Caution: avoid the temptation to figure out who wrote a particular response; that can lead to dismissing the feedback. Reflect on the feedback and how it aligns with your self-perceptions. If it doesn't align with how you see yourself, reflect on what actions may have led to this specific feedback.
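If your survey platform exports the raw responses to a spreadsheet, even a small script can do this aggregate-then-disaggregate pass. Here is a minimal sketch in Python using pandas; the column names and scores are invented for illustration.

```python
import pandas as pd

# Hypothetical survey export; column names and values are invented for this sketch.
responses = pd.DataFrame({
    "class_period": [1, 1, 2, 2, 3, 3],
    "gender": ["F", "M", "F", "M", "F", "M"],
    "engagement_score": [4, 3, 2, 3, 5, 4],  # e.g., a 1-5 Likert-style item
})

# Big picture first: the overall average across all responses.
print(responses["engagement_score"].mean())

# Then dig in: the same score disaggregated by class period and by gender.
print(responses.groupby("class_period")["engagement_score"].mean())
print(responses.groupby("gender")["engagement_score"].mean())
```

The same grouping idea extends to any demographic filter the platform captures, which keeps the focus on patterns rather than on guessing who wrote which response.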

Finally, choose one growth area. Focus on specific strategies that could improve student engagement. We use Robert Marzano’s book The Highly Engaged Classroom as our foundation for professional growth. The book is full of strategies that are high-impact and teacher-friendly.


When students are re-surveyed later in the year, the survey will likely say that students are engaged at new levels.