Monday, April 13, 2015

Separating Gradebooks and Report Cards

I recently had the opportunity to engage in a two-day workshop with Dr. Tom Guskey of the University of Kentucky. His session, “Leading the Way to More Effective Grading and Reporting for All Students,” urged participants to consider three guiding questions when designing their grading and reporting systems:
  • Why do we use report cards and assign grades to students’ work?
  • Ideally, what purposes should report cards or grades serve?
  • What elements should teachers use in determining students’ grades?

With these questions in mind, Dr. Guskey identified potential purposes and audiences for a comprehensive grading system. This graphic summarizes those elements.



Once upon a time, before technology made grading and communicating about grades more efficient, those three reports were distinct. The gradebook was the teacher’s domain: it was where she recorded progress and anecdotal information. Students and parents did not have access to it, which gave the teacher far more autonomy to use professional judgment when putting grades on the report card. Now that our gradebooks are public, mathematical algorithms override professional judgment to determine the grade on the report card. Efficiency has merged these three different reports. And maybe that’s not a good thing.

Let me be clear here: I am not advocating for eliminating online gradebooks. They definitely serve the purpose of reporting learning progress. The online gradebook holds all three interested parties – students, teachers, and families – accountable. But – here’s a crazy thought – what if the gradebook did not have a current grade? What if it only did what it was supposed to do: show progress?

On several occasions throughout the conference, Dr. Guskey emphasized the importance of informed professional judgment. We all know the perils of averaging grades. Students who take longer to master concepts and skills are penalized by earlier grades. Not to mention, most systems use the antiquated 100-point grading scale, where failing grades are weighted six times as heavily as passing grades. In our current system, teachers are challenged to manipulate the gradebook to make it reflect what they know is true about their students’ learning, and students play the game of school, calculating how many points they need to earn a grade. Moving to rubrics and a 4-point scale, and removing the grade from the online gradebook, may be a step in the right direction. The report card could then be the opportunity for the teacher to share a final professional judgment of the student’s learning during that grading period.
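The arithmetic behind that weighting is easy to demonstrate. The sketch below is illustrative only – the scores are hypothetical, not from Dr. Guskey’s session – and it shows how a single early zero drags down an average far more on the 100-point scale than on a 4-point scale.

```python
# Illustrative sketch: why averaging on a 100-point scale punishes
# early struggles more than a 4-point scale does. All scores are hypothetical.

def average(scores):
    return sum(scores) / len(scores)

# A student who bombs one early assignment (0) but then earns solid B work (85s).
hundred_point = [0, 85, 85, 85]
print(average(hundred_point))  # 63.75 -- a D, despite consistent B work afterward

# The same pattern on a 4-point scale (0 = no evidence, 3 = B-level work).
four_point = [0, 3, 3, 3]
print(average(four_point))  # 2.25 -- much closer to the B the later work shows

# The structural reason: on the 100-point scale the failing band (0-59) spans
# 60 points, while each passing band (A, B, C, D) spans roughly 10, so a
# failing score can pull the average down about six times as hard.
```

The same student, the same pattern of learning, two very different stories in the gradebook.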


It’s a radical idea. It would require a fundamental shift in practice. It would involve a lot of education for all of the stakeholders about its purpose as well as the logistics. And it just might be worth it. 

Monday, February 9, 2015

Can We Afford NOT to Have Mentors?

The first year of teaching. Although it was almost a quarter of a century ago, I remember it clearly. My eighth grade students in a small city in Wisconsin graciously removed my rose-colored glasses in short order. My mentor, Jane Thompson, was my savior. She wiped away my tears of frustration while helping me develop my own teaching style. She loved the energy of middle school kids, even when they were at their hormonally unstable worst, and she passed this love on to me. When I later moved on to other districts, I was once again blessed with amazing mentors, formal and informal. Without their support, I am certain I would not have survived, much less thrived.

Dr. Debra Pitton, Education Department Chair at Gustavus Adolphus College, was the keynote presenter at the Minnesota ASCD New Teacher-Mentor Summit in St. Paul last Thursday. Pitton, author of Mentoring Novice Teachers: Fostering a Dialogue, led the 150 mentors, mentees, and district leaders through a series of exercises and dialogues designed to build rapport and establish trust. She cited the work of James Comer (2004) and Roland Barth (2006), which recognizes the significant impact that relationships among the adults in a school have on student achievement.

Following the keynote, attendees broke into job-alike groups. The new teachers collaborated to discuss their struggles and successes. They learned about their own communication styles and how those styles may affect how they elicit and receive feedback. Mentors reflected with their peers on a similar topic. District leaders had the chance to discuss considerations for developing effective mentor programs. It was surprising how many Minnesota districts currently have no formal new teacher induction or mentoring program!

Mentoring new teachers is a key to retaining them, particularly as a teacher shortage looms.  The National Commission on Teaching and America’s Future (2007) reports that 20% of teachers leave the classroom within three years; in urban districts, close to 50% leave in their first five years of teaching. The National Center for Education Statistics has discovered a correlation between the level of support for new teachers and their likelihood of staying beyond the first year. They recommend districts create mentoring programs for the first two years of teachers’ careers.

Under the leadership of Diane Rundquist, High Potential and Teacher Induction Coordinator, Minnetonka Public Schools has developed a three-year support model for its new teachers. In the first year, teachers new to Minnetonka, regardless of their years of experience, work weekly with a 1:1 mentor. Additionally, they connect with other teachers new to Minnetonka through district-level seminars and in their Schoology group. Second-year teachers have the opportunity to select a content mentor, someone who has experience and expertise in a self-identified area of growth for the new teacher. They may observe their mentor teach and process with them throughout the year. Finally, in the third year, teachers work with National Board Certified teachers who serve as their reflection mentors. This lays a foundation for becoming reflective practitioners and encourages them to consider pursuing National Board Certification themselves.


Supporting new teachers is not just something nice to do if you can find the time and resources. It is a fundamental obligation. As we look toward the future of our profession, the role of mentors, both formal and informal, is critical.

Monday, February 2, 2015

EdCamp: Policy Edition

This past Saturday, Minnesota-ASCD, the state affiliate of ASCD, sponsored EdCamp Policy Edition. I have to admit: when this was first proposed, I was skeptical. Is giving an EdCamp a focus antithetical to the EdCamp philosophy? (If you’re not familiar with the EdCamp philosophy, this short video highlights Minnetonka’s 2014 EdCamp.) And I had other concerns, too. Would people really come to an EdCamp with a policy focus? Would it end up being a day of complaining or a day of problem-solving? As it turns out, all my concerns were for naught.

At an EdCamp, the right people are in the room.  The participants ranged from superintendents and district administrators to classroom teachers to education consultants to higher education administrators. We even had a grandparent of Minnesota students with no other affiliation to the education field!

The topics ranged from teacher preparation, licensure, and tenure to the school calendar to empowering educator voice in decision making. (For a full list of topics, click here.) The challenge of the day was choosing which session to attend! And while there certainly was an opportunity to air grievances, the focus was definitely solution-oriented. In fact, at the end of the day during the Smack Down – the chance for individuals to share their most impactful learning of the day – the conversation turned to next steps. Everyone present was ready to act!


EdCamps have become popular because of the opportunities they offer for self-directed professional growth. This policy-based EdCamp offered two additional benefits. First, it was focused on problem-solving: participants walked away with concrete ideas for how to advance their work. Communications to legislators were drafted collaboratively! The second benefit was the networking. Committed educators from diverse backgrounds connected in ways that would not have been possible in any other format. This model of professional development and connection has great promise for moving education priorities forward.

Monday, January 26, 2015

Observation Bias?

At a recent networking event for teacher evaluation leaders, we were presented with a provocative research article about classroom observations as the basis for teacher evaluation. One of the authors’ contentions is that there is an inherent bias in the Charlotte Danielson rubrics against teachers working with underperforming students. The authors write, “…a rating of ‘distinguished’ on questioning and discussion techniques requires the teacher’s questions to consistently provide high cognitive challenge with adequate time for students to respond, and requires that students formulate many questions during discussion. Intuitively, the teacher with a greater share of students who are challenging to teach is going to have a tougher time performing well under this rubric than the teacher in the gifted and talented classroom.”

My initial response to this was one of disagreement. As a former instructional coach and Charlotte Danielson fan, I wanted to argue that good teaching is good teaching. For years we’ve argued that the rubrics apply equally in all settings: PE classrooms, special education classrooms, basic skills classrooms, and honors classrooms. It’s been a foundation of both our Q-Comp program as well as our teacher evaluation program for over a decade. Despite my initial dismissal of the article, I continued reading. And then the authors shared the data.



The disproportionate ratings gave me pause to reflect on my own teaching experiences. When I returned to teaching after spending three-and-a-half years as an instructional coach, I asked my principal to assign me the students with the greatest needs. It was rewarding. And challenging. And exhausting. How would I have been rated that year? I loved the kids I taught and believed in them, and still they did not engage in the learning in the same way as my students who had historically been successful in school.

The implications of this research are huge. As the stakes associated with teacher evaluation increase, teachers may be less inclined to work with the students who need the most support. Currently, there is legislation under discussion to connect layoffs with teacher evaluation. While 35% of teacher evaluation is now, by statute, based on student achievement measures, the remainder is likely based on administrative observation.

An additional implication of this research is its effect on the feedback teachers receive on their classes. In most cases, teachers choose which of their classes will be the subject of the observation. In a growth model, such as the Q-Comp instructional coach model, teachers will frequently invite their colleague into their most challenging class. The second set of eyes provides them with insights that lead them to reflect on their practice and consider alternatives to their current methodologies. If, however, the observations are high-stakes, and if the research is correct, teachers will be less likely to invite their administrators into their most challenging classes, perhaps bypassing the opportunity to get feedback that would ultimately help their students.

If the research is valid, how might evaluators level the playing field for teachers? The authors of the study suggest a complex statistical analysis of student demographics and performance to create a value-added formula that would adjust for these differences. This may be oversolving the problem, and it may actually lead to less transparency and additional issues. A simpler solution may be to raise awareness of this potential bias; that awareness alone can inform administrators’ assessments. It also calls for additional emphasis on the pre-observation and post-observation conferences. Teachers need the opportunity to educate the evaluator on the composition of their classes and to articulate the strategies they are employing to meet the needs of those students.

To be sure, the research presented in this article wasn’t perfect. The sample size was small, and the authors seem to have some biases of their own. And even with those flaws, it should give pause to all of those involved in high-stakes observations.


Tuesday, January 20, 2015

Personalizing Online Feedback

For the past few weeks, the topic of feedback has come up again and again. How important is feedback? Who should be giving the feedback? Under what circumstances? What should the feedback look like and sound like? How should feedback change when it is delivered electronically?

As teachers, our lives revolve around giving feedback. We give feedback to our students informally when we smile at a response while nodding our heads. We give feedback to students formally when we identify the strengths and areas for growth in an essay. We see the impact of our feedback when it is delivered well: “light bulb” moments, improved writing, persistence through a challenge. We have also seen the impact of our feedback when it was delivered poorly: tears, frustration, giving up. So we got better at giving feedback. We learned how to personalize the message to each student. When we have to give students negative feedback – which we do – we know how to couch it with support. But as more and more of our assessments are given and our feedback is received digitally, how do we have to modify it?

Research shows that tone and interpretation of online communications are entirely dependent upon the mood of the reader. Yikes! Knowing that we can’t possibly know, predict, or control the mood of the reader, how can we effectively communicate with our students about their learning progress?

Michelle Gill, Vice President of Professional Learning for PLS 3rd Learning, an organization dedicated to helping teachers develop and facilitate online learning, suggests a few things.
  • While brevity is often glorified, particularly in email communications, it can have detrimental effects when giving feedback. The more you can explain your feedback, the better. Another option is to use an online voice recorder like Vocaroo to record your feedback orally. This can also personalize the message, as it literally carries your voice.
  • Read the feedback aloud before you send it. Can you read it in an angry tone?
  • Be cautious about exclamation points; they are easily misinterpreted. Gill rarely uses them in electronic communications. Instead, use vivid language to express your enthusiasm.
  • Get feedback on your virtual voice. Ask a colleague – or, for the truly brave, your students – to analyze your communications for tone and voice. Your goal with your online communications is friendly, supportive, and open to further communication.

While these tips may not change the mood of the recipient of the feedback, they may mitigate its effects. Just as we developed skills in giving feedback to students in person, we’ll develop skills in the online platform.

Monday, January 5, 2015

Survey Says...

In my house, Family Feud was an after-school staple. I loved watching the contestants try to read the minds of the surveyed viewers. Of course, now, as an adult with survey-writing and analysis experience, I have all sorts of questions. How were respondents selected? Was it determined by geography? Age? Was the survey conducted over the phone? Through the mail? Was it anonymous? All of these factors impact the results.

While we’ve been using surveys in education to garner feedback on educational programs for many years, we haven’t surveyed students in any systematic way to learn about their perceptions of their teachers. The new Minnesota teacher evaluation law that went into effect in July requires that teachers be evaluated on student engagement. As a result, teachers are receiving feedback from students – often in the form of anonymous surveys. And inevitably, even for teachers with great reputations and relationships with students and families, a student – or a few students – gives honest feedback that may be difficult to read. How we respond to this feedback is the key. In my conversations with teachers, I've landed on a few recommendations.

First, when examining the data, look at the trends holistically. What big picture does the aggregated data paint? What are the identified strengths? How can those strengths be leveraged? In a growth model, working from strengths is really powerful.

Then dig in. When disaggregating the data, look at it through different filters: class, gender, ethnic demographics. What trends do you notice within these filters? If unfavorable data is discovered, dig even deeper. Some platforms allow the user to browse individual survey responses, which can provide greater insight. Caution: avoid the temptation to determine the source of a response; that can lead to dismissing the feedback. Reflect on the feedback and how it aligns with your self-perceptions. If it doesn't align with how you see yourself, reflect on what actions may have led to this specific feedback.
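For teams that export survey results to a spreadsheet or CSV, the holistic-then-filtered reading above can be sketched in a few lines. This is only an illustration with made-up numbers: the class labels and the 1–5 engagement scores are hypothetical, not from any actual survey platform.

```python
# Illustrative only: hypothetical survey responses (class period, 1-5 engagement score).
from collections import defaultdict

responses = [
    ("Period 1", 4), ("Period 1", 5), ("Period 1", 4),
    ("Period 5", 2), ("Period 5", 3), ("Period 5", 2),
]

# Holistic view first: the overall average can hide the story.
overall = sum(score for _, score in responses) / len(responses)
print(round(overall, 2))  # 3.33

# Then dig in: the same data, filtered by class period.
by_class = defaultdict(list)
for period, score in responses:
    by_class[period].append(score)
for period, scores in sorted(by_class.items()):
    print(period, round(sum(scores) / len(scores), 2))
```

A middling overall average can mask one class that is thriving and another that is struggling; the filtered view is where the actionable trends show up.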

Finally, choose one growth area. Focus on specific strategies that could improve student engagement. We use Robert Marzano’s book The Highly Engaged Classroom as our foundation for professional growth. The book is full of strategies that are high-impact and teacher-friendly.


When students are re-surveyed later in the year, the survey will likely say that students are engaged at new levels.