Have you ever met a teacher who says they are below average? No? Strange. That would mean half of the teachers in classrooms across the U.S. cannot accurately assess themselves. Follow the math: no matter what system you create to evaluate teachers, once every one of us is assigned a rating of any kind, the resulting data has a median score. Half of teachers perform above that score and half below. Yet not one of us would claim to be below average.
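That arithmetic holds regardless of the rating scale. A minimal sketch, using made-up scores on an arbitrary 0-100 scale (the actual numbers do not matter for the argument):

```python
from statistics import median

# Hypothetical teacher ratings on an arbitrary 0-100 scale.
ratings = [62, 71, 74, 78, 81, 85, 88, 93]

m = median(ratings)  # the midpoint of this sample: 79.5

# Count how many teachers fall on each side of the median.
above = sum(1 for r in ratings if r > m)  # 4
below = sum(1 for r in ratings if r < m)  # 4
print(m, above, below)  # prints "79.5 4 4"
```

Whatever evaluation instrument produced those scores, the split is the same: half above, half below. (Ties at the median can shift the counts slightly, but the point stands.)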
Instead of evaluating teachers’ performance directly, John Hattie’s well-known Visible Learning, a synthesis of meta-analyses drawing on over 50,000 studies and more than 240 million students, examines the many influences on student achievement, not all of which relate directly to teacher or classroom practices. Many do, though, and the results are what you would expect: some influences are bad, some indifferent, some good, and some great. If we believe his research and the resulting list of effect sizes (click the image on the right to see an enlarged view), can we go a step further and examine teacher pedagogy to determine whether we employ the strategies with the greatest effect most often and purposefully avoid those with little or even a negative effect? That is to say, can we claim our above-average teachers use the best strategies and our below-average teachers do not?
Hattie’s list is very important to me. In my own transitional understanding of learning, I often consider the conversations I have with teachers in my building and district, as well as many others throughout Kansas in my role with KATM, and I wonder: with Hattie’s research to point to, why do we not all focus on the influences that most positively affect student achievement? Even in my own district we implement strategies that, based on this body of research, are not particularly efficient and are certainly not the most effective available.
In fact, Visible Learning shows us that the greatest impact on student achievement comes from asking students to predict their performance, or in other words, to assess their own work; however, this is not a strategy we employ at the district or building level. It may occur at the classroom level, and I would argue that a teacher who does so should be considered above average (depending, of course, on the other strategies they use as well).
Hattie includes numerous strategies that do not necessarily affect our students, primarily because they are not actually used in the U.S. or were studied only at the higher-education level. His list also includes some that affect only sub-groups of students (because the original studies examined only a particular sub-group). I tend not to spend much time thinking about these influences, as I usually prefer to consider ways to best support achievement for all students, but I do not throw these ideas out; I can certainly think of instances where they may be valuable as well.
Another of the greatest effects comes from formative assessment. Larry Ainsworth, James Popham, and Margaret Heritage are only a few of the many leaders who drive my thinking about formative assessment. It is a complicated topic, but like Supreme Court Justice Potter Stewart, I know it when I see it. The Colorado Coalition of Standards-Based Education recently published The Standards Based Teaching/Learning Cycle, which provides a nice chart comparing formative, interim, and summative assessments (p. 59). Regardless of whose definition of formative assessment you subscribe to, they all share common characteristics: formative assessments are ongoing and informal, they inform and modify instruction and learning, and they are used by both teachers and students.
I do not believe we are consistent in conducting formative assessments. Most often, we hit some, if not most, of the characteristics I mentioned above; however, where we fall short almost all of the time is in including students in the feedback process. Students are given few opportunities to examine their own work, assess themselves, and determine what they could improve as part of their own transitional understanding, which brings me full circle to the number one influence on student achievement: asking students to self-grade.
Does this mean we are a below-average staff? Absolutely not! No such judgment can be made by looking at the use or nonuse of one particular strategy. Rather, we must consider the whole body of a teacher’s instructional practice, and at Heights, our staff is dedicated to using research-based strategies proven to positively impact student achievement. The two caveats? We are not implementing all of the top effect-size strategies (and thus are implementing some we could consider letting go of), and there is always a median. The remaining question to think about: which 50% are you in?