A newsletter designed to provide resources and ideas to the community of educators committed to providing more effective, equitable, coherent, and caring schools and classrooms.
Spring 2016
Good Reads!
"Grading: Why You Should Trust Your Judgment" by T. Guskey and L. A. Jung, published in Educational Leadership (Apr '16) [More problems with grading software]

"How Our Grading Supports Inequity, and What We Can Do About It" by J. Feldman, published in SmartBlog on Education (July '15) [Traditional grading unintentionally hurts our most vulnerable students]

"Seven Reasons for Standards-Based Grading" by P. Scriffiny, published in Educational Leadership (Oct '08) [A teacher's perspective on the benefits of SBG]

"9 out of 10 Parents Think Their Kids Are on Grade Level. They're Probably Wrong" by A. Kamenetz, published on nprED (4.21.16) [Grades don't give accurate information to parents] 
Trusting Teachers' Judgments

"Doubting their own professional judgment, teachers often believe that grades calculated from statistical algorithms are more accurate and more reliable."--Guskey

Teachers, as trained professionals, resist scripted curricula, administrative mandates, and bureaucratic regulations. Yet too often we accept our grading software, which replicates a traditional approach to determining grades: add up or average each student’s earned points or percentages, sometimes within weighted categories ("tests," "participation," "homework," etc.). This outdated approach to describing student performance has been shown to be both inaccurate and unfair. We must stop allowing grading programs to supersede our expertise as professional educators, especially when it comes to something as important as our students’ grades.

There are three problems with relying on grading software: one mathematical, one conceptual, and one professional. 

Mathematical: No one would say that a professional golfer's handicap should be calculated by averaging every score she has ever posted, from her first day of golfing to today. Yet that is exactly what we do when we use software that averages student scores; a student’s early scores pull down any improved performance. We teachers know what mathematicians know--that averaging a student's scores isn't always the most accurate way to describe that student; it essentially penalizes them for where they started (and inequitably rewards students who came to our classes already knowing the material). We could use an alternative calculation--like the mode. Or, if we believe that a student's grade should reflect where they end, not where they started, the grade should represent the student's most recent performance.
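To see the difference in miniature, here is a quick sketch with hypothetical scores (the numbers and the snippet are ours for illustration, not drawn from any particular gradebook):

```python
from statistics import mode

# Hypothetical quiz scores for one student over a term, oldest first.
scores = [55, 60, 70, 85, 90, 90]

# Traditional software default: average everything.
# Early struggles permanently drag the grade down.
average = sum(scores) / len(scores)   # 75.0

# Alternative 1: the mode -- the score the student earned most often.
most_common = mode(scores)            # 90

# Alternative 2: the most recent performance -- where the student ended up.
most_recent = scores[-1]              # 90

print(average, most_common, most_recent)
```

The same student earns a C under the default average and an A under either alternative; the difference is not the student's learning but the algorithm we let the software choose for us.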

Conceptual: Grading software invites us to describe assignments by category and enter points for each, and the software pops out an averaged and weighted score for each student. But when we include in a grade a student’s behaviors (how often she followed directions, came on time, etc.) as well as academic performance (her scores on tests), we warp and confuse what the grade represents. When being on time counts in the grade, then the prompt student with weak content understanding gets the same grade as the student who is tardy but knows the content. Any grade can represent a wide variety of possible percentages and category weight combinations. When the same grade can describe so many different performance profiles of students, then our "hodgepodge" grades may not represent anything. 
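A small, made-up example shows how two very different students can land on the identical grade (the category weights and scores here are hypothetical):

```python
# Hypothetical category weights: 50% tests, 25% homework, 25% participation.
weights = {"tests": 0.50, "homework": 0.25, "participation": 0.25}

# Student A: strong content knowledge, weak compliance behaviors.
student_a = {"tests": 90, "homework": 60, "participation": 60}

# Student B: weak content knowledge, perfect compliance behaviors.
student_b = {"tests": 60, "homework": 90, "participation": 90}

def weighted_grade(scores):
    """Weighted average, as typical grading software computes it."""
    return sum(weights[cat] * scores[cat] for cat in weights)

print(weighted_grade(student_a))  # 75.0
print(weighted_grade(student_b))  # 75.0 -- same grade, very different students
```

Both students receive a 75, yet one knows the content and the other does not; the single number hides exactly the information a grade is supposed to convey.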

Professional: Teachers confess to me that at the end of a grading term, they look at their students' final scores. If the grades seem about right, they leave them; if they seem wrong, teachers manually adjust prior scores and categories until the software generates an appropriate grade. How absurd that the professional educator is relegated to outfoxing the software so the software can validate the professional’s opinion!

Why then, when we know the software generates inaccurate grades, do we continue to use it? Well, none of us wants to go back to the days of paper gradebooks, punching numbers into calculators, and scratch paper. And few of us have the time and savvy to develop technological work-arounds within our school’s or district’s grading software. Besides, most schools and districts wouldn't let us deviate from the pre-sets anyway.

In Crescendo’s partnerships with schools and districts, we help teachers develop more accurate and fair grading practices, and support those decisions by working with the district administration and technology specialists to improve and reconfigure the grading software. In all cases, we have found that the software can adapt to support improved grading practices—after all, teachers are the clients of the software companies, not the other way around.
So teachers--as you complete the final touches on your gradebook this June, don’t simply defer to the grades generated by your software. Grades aren’t objective just because they’re calculated by a computer, they aren’t accurate just because they come from a mathematical formula, and they’re not best for your students just because they’re convenient. You are a professional educator, and you are in the best position to make accurate and fair judgments about your students’ performance. Equipped with accurate and fair grading practices, trust your mind instead of your machine.
For more information and resources, visit us at crescendoedgroup.org.
Copyright © 2016 Crescendo Education Group, All rights reserved.
