Teaching’s ‘Holy Grail’

In December, I briefly mentioned Professor John Hattie’s book ‘Visible Learning’, a synthesis of research into student achievement spanning 15 years, so it’s gratifying to read that his study, which draws on data from some 83 million students, has recently been lauded in the UK as teaching’s ‘Holy Grail’.

This is one of the most powerful and challenging texts to be published in the last decade and it deserves our close analysis and attention.

In my view, Hattie’s work gives education systems and governments a yardstick against which to measure the often ill-conceived, short-term methods imposed on schools.

In an interview with the New Zealand Sunday Star Times (4 January 2008), Hattie says parents should worry less about class sizes, curriculum and computers and focus more on the quality of relationships between teachers and students. In particular, he notes the quality of teacher feedback and the level of trust in classrooms.

Hattie’s research confirms a universal truth: the quality of the teacher makes the difference to student learning. Yet this evidence seems to have been buried in populist debate and recent election promises to improve schooling by introducing school report cards, a national curriculum or the latest overseas quick fix.

It’s no surprise that Hattie wants to see more money spent on rewarding teacher excellence rather than experience. While “twenty is plenty” may be a good slogan for garnering populist support, it hardly stands up to rigorous, evidence-based scrutiny.

We all have some hard truths to face as a result of Hattie’s research. I just hope that as more educators, politicians, unions and parents begin to digest the evidence, we see greater consensus on where resources should be spent and less energy arguing over issues at the edges.

13 thoughts on “Teaching’s ‘Holy Grail’”

  1. It’s great to see Professor Hattie’s work getting some much-needed exposure. He has been speaking on these themes at conferences for well over five years, and his sessions are regularly noted among the better presentations at events such as ACER and ACEL, yet his message has prompted little change.
    You make a good point that educators, politicians, unions and parents need to engage more with the things that do make a difference and less with those that make little or no difference. He put this list to the audience at ACEL 2007.
    Reducing disruptive behavior in the classroom .86
    Feedback .72
    Acceleration of gifted students .60
    Reading Recovery .50
    Integrated curriculum programs .40
    Homework .30
    Individualized instruction .20
    Ability grouping .10
    Open vs. traditional classes .00
    Retention (hold back a year) -.16
    Shifting schools -.34
    Many in the audience came up with quite different scores from those the research provided.
    It’s interesting how many of those with an effect size of less than 0.4 are so popular with parents and teachers in particular, e.g. streaming/ability grouping is something in our schools that just won’t die.
    Parents will tell you that if their child is bright, they will be disadvantaged if they are not streamed upwards. Not to be in a “top class” is seen as academic failure.
    Teachers talk of the value of streaming as the strategy to maximise the results of brighter students. Yet we don’t acknowledge the voluminous research telling us that you have to enrich and accelerate to get a significant gain. Consequently streaming survives, and selective schools still work like a magnet, drawing able students from generalist secondary contexts. They then trumpet HSC results as if it is any surprise that bright students do well.
    Hattie’s research rings true but is a terribly difficult sell in the educational marketplace.
    The answer lies first in using the data you have in your school to inform what you do, and secondly in focusing on those things that Hattie tells us do make a difference. Some discussion promoting the facets of just what Hattie defines as quality teaching would help too, as I think there has been a tendency for the “the teacher makes the difference” message to be interpreted as a licence for teachers not to challenge themselves to improve what they do, rather than to focus on notions such as the teacher-student relationship, challenging students, monitoring and evaluation, subject knowledge, direct instruction, and surface and depth instruction.
    We need the kind of professional learning that, as Ken Rowe challenges us, “gets inside the classroom door and improves things”.
    A debate well worth having.

  2. Chris, a good synthesis and unpacking of the problems. I think it is going to take professional maturity on some teachers’ part to accept these findings and to change some of their long-held perceptions. Some teachers still believe in streaming, holding students back and the like as the key ways to improve learning.

    Parents have great trust in teachers, and will take their advice if there is regular communication explaining the data and a commitment to sharing the responsibility for their child’s learning.

    We need to be speaking with coherence and consensus as a profession, taking positions that can be defended by the evidence and not just by personal preference.

    What Hattie has highlighted for me is the fact that we’ve been having the wrong debate for so long. What I personally believe about good teaching is not important. What is important is what the evidence tells us. The focus of our debate should be on how to support teachers in acting on the evidence! This is the sharp end that will deliver improvement in student learning for all.

  3. It is good to see the continual reference back to evidence-based practices, and particularly the emphasis placed on the expertise of teachers.

    I notice that the list provided by Chris Browne is only partial. There are some items left out that might be better included if schools are really going to use the research effectively.
    For example:

    Influence Effect-Size

    Instructional quality 1.00
    Instructional quantity .84
    Direct instruction .82
    Remediation/feedback .65
    Mastery learning .50
    Teacher in-service ed .49

    Having less effect:
    Testing .30
    Audio-visual aids .16
    Individualisation .14
    Finances/money .12
    Behavioural objectives .12
    Team teaching .06
    Traditional/Open classes .00
    Physical attributes of the school -.05
    Retention -.15
    So one could ask why the diocese is putting so much money into open classrooms, with an effect size of zero, but not looking to Direct Instruction, which has a demonstrated .82 effect size; why the emphasis on Team Teaching at .06 compared with Remediation/feedback at .65; or why the focus on the physical attributes of the school at -.05 instead of Mastery Learning at .50.

    The challenge is to look at ALL the results from Hattie’s stunning meta-study and see what they are saying, not just to cherry-pick those items that appeal to one’s own beliefs about what makes good education, just as Greg says.

    “What Hattie has highlighted for me is the fact that we’ve been having the wrong debate for so long. What I personally believe about good teaching is not important. What is important is what the evidence tells us. The focus of our debate should be on how to support teachers in acting on the evidence!”

  4. Great last comment, which should be shouted from the rooftops. I think the issue of opening up classrooms has become a distractor. Teachers are finding that it enhances the way they teach, and they are exploring new spaces to help in this process. You are right to observe that we are not focusing on “direct instruction”. This is not because we are ignoring the effect size but because, as Hattie’s research points out, effective teachers use it as part of their whole pedagogical approach. Of itself, team teaching is not a stand-alone solution to good learning. But what it does allow for is greater sharing and feedback for the teacher, and this “high yield” strategy does improve student learning. Team teaching must also allow for direct instruction when appropriate. This is what we are encouraging in our dialogue with our school communities.

  5. I hope it’s not too late to join this debate. I’m fascinated by Hattie’s work, but confused by some aspects. Different people seem to quote different effect sizes for the same factors: does this mean that they are just getting it wrong, or that Hattie keeps revising his original figures? I also don’t understand some of the definitions, like micro-teaching and direct instruction. And are there any inherent contradictions in his findings, for example between the low rating given to streaming and the high rating given to acceleration (presumably of gifted students)? I’d appreciate any help you can give me.

  6. John, it’s not too late to join this debate.

    I don’t know whether Hattie has revised any of his findings. My understanding of the difference between micro-teaching and direct instruction is that micro-teaching takes place with a small, targeted group and might use an array of pedagogical approaches.

    Direct instruction is a single pedagogical approach that locates the source of knowledge with the teacher who instructs a large group.

    I defer to Hattie to explain the last point you make. He will actually be in Parramatta to present the Ann D Clark Lecture in October. A good opportunity to ask him!

    The school leaders in our system are using Hattie’s work to sharpen the focus on what makes a difference in improving student learning.

    It gives teachers a common language to discuss school improvement and encourages the use of hard data (best evidence) to inform that discussion. It certainly has widened the discussion from ‘I think this works…’ to ‘I can show you why this is working…’. It helps teachers to reflect on their professional practice and to look at ways of improving their teaching.

  7. John, the research on streaming worldwide points to the effectiveness of clustering small groups of bright students in a particular subject together. Note: small groups. Most generalist secondary high schools make the mistake of streaming an arbitrary number above a certain benchmark and assume they are doing a good thing.
    They are not mutually exclusive issues, they are connected.
    The research from Herb Marsh at UWS on self-concept actually indicates such a practice can be harmful. Hattie is a researcher in the self-concept area as well, and he agrees with Marsh’s research.
    If you want to hear someone with clarity of belief, John Hattie’s your man. The great thing is he has such a deep well of research to base his views on.
    I’ve heard him three times and his data is consistent: the numbers haven’t been changing. What others publish I can’t comment on.
    If you google him at Auckland Uni you’ll find papers and ppts that are consistent.

  8. Like Chris, I have followed John Hattie’s research work with interest since 2003. What I find powerful about Hattie’s latest work, Visible Learning, is that he provides in his words, ‘an explanatory story, not a “what works” recipe’ (p.3). His stated purpose is to move beyond a synthesized literature review of scholarship over the last 15 to 20 years or more, to give his interpretation of its meaning in terms of a model of teaching and learning. He provides us with what he describes as a ‘plausible set of claims based on evidence’ (p.237).

    John, I think you raise an insightful question about whether there are some inherent contradictions in some of the research findings presented. I too grapple with the apparent contradictions in the low effect size of streaming for high ability students, but the high effect size of acceleration for high ability students who have had the opportunity to work through a program at a faster rate or with younger peers.

    A possible explanation: Hattie has presented the literature by classifying the variables around major effect dimensions. Ability grouping (streaming or tracking) falls under classroom composition effects (along with, for example, class size, mainstreaming and retention), whilst acceleration is presented under the curricula for gifted students. So we may not be comparing ‘apples with apples’ when comparing acceleration and streaming effects.

    I am also reminded of Hattie’s caution about interpretations of causation when interpreting effect size. For me, the apparent contradiction raises a question about the nature of the curriculum that students experience. There are clues in the discussion around the effect size of school curricula for gifted students, with a key factor being a challenging curriculum that matches the students’ learning needs (pp. 99–100). Surely we would want the experience of a challenging curriculum for all of our children. How do we respond to this challenge in today’s world?

    We can pick up the trail again in the last chapter of the book, where, as the title reveals, the model is brought together in the third of the six ‘signposts’ for excellence, all six of which concern what quality teaching involves.

    ‘Teachers need to be aware of what each and every student is thinking and knowing, to construct meaning and meaningful experiences in light of this knowledge, and have proficient knowledge and understanding of their content to provide meaningful and appropriate feedback such that each student moves progressively through the curriculum levels.’ (p. 238)

    By the way John, micro-teaching in Visible Learning refers to a strategy used as part of pre-service teacher education, and so to pre-service teacher learning (2009, p. 112). Micro-teaching involves student teachers teaching a small group of students, observed by their peers or videotaped for the purpose, followed by a scaffolded discussion. This is also what we know about the way we learn best as teachers: when we de-privatise our teaching, seek and receive constructive feedback from our peers, and self-evaluate in light of the effectiveness of our teaching, learning from and with one another. The story continues.

  9. Well, thank you very much Chris, Greg and Jan for your helpful comments. I’ve now got hold of Visible Learning and am having a great time with it. But here’s my newest problem. When it comes to the issue of school mobility, I would love to be able to quote to our parents authoritative data on why they should not change schools frequently! As a small alternative school, we have a problem with people whom I call “serial school quitters”, so I was quite gratified that Hattie shows this behaviour to be very negative for student achievement. Unfortunately, when I started checking the bibliography, I found the data not as authoritative as I would have liked. Hattie quotes three studies. It turns out that one is about students moving from community colleges to four-year institutions, so not really relevant for primary and secondary schools in Australasia. Another is an unpublished thesis from the University of Georgia, so I guess that has never been subjected to true critical analysis and scrutiny. And the third and last, by Mehana, speaks of sample sizes from 62 to 15,000, whereas, if I’ve read Hattie’s appendix A correctly, the total number of students is shown as 2,889. Furthermore, Mehana mentions variations in effect size according to grade of student, frequency of school change, and socio-economic status. (Less importantly, he also makes the point that there is not a negative result for the children of military personnel.) The apparent mistake in the total number, the failure to mention those very important variables, and the apparent lack of relevant and reliable studies makes me wonder…
    When I google Mehana, school mobility and student achievement, one of the studies that comes up concerns low-income urban black children in the Chicago district about 10 years ago, which may or may not be relevant to our situation.

    Best regards, John

  10. Sorry, just to retract: the Mehana study Hattie quotes is from an unpublished doctoral thesis, whereas the one I was referring to in my posting on this site an hour or so ago was from a later study by the same author. Whoops.

  11. Actually no, I’m still getting it wrong: Hattie quotes two Mehana studies, one from 1997 (the unpublished thesis) and the other from 2004. So I’ll now retract my retraction and say again that I’m puzzled by the seeming error in the sample size and the failure to mention the variables.

    We got back late last night from a holiday in Canberra — is that an acceptable excuse?

  12. Hi, me again. I have now just had it ‘officially confirmed’ that the Mehana number in Visible Learning is wrong: in Appendix A under Mobility, the second study, Mehana, should have a figure of 110,060 as the total sample size, instead of 2,889. Phew, I can now relax and enjoy Easter… 🙂

  13. The definition of Direct Instruction provided by Greg is not at all the definition provided by Hattie. In addition, Hattie mentions the surprise of his students when exposed to the data around DI vs constructivism.
    I refer you to pp. 204–207. I think you will find the discussion unexpectedly rich, as it provides a model of teaching that is far more than a didactic approach, and one that should be seen as exemplary, especially given the effect size.
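A note for readers puzzling over the numbers quoted throughout this thread: an ‘effect size’ of the kind Hattie reports is a standardised mean difference, broadly of the form of Cohen’s d (the gap between two group means divided by a pooled standard deviation), with 0.40 serving as his ‘hinge point’ for a worthwhile effect. As a minimal sketch of the arithmetic only — the scores below are invented for illustration, and Hattie’s figures come from meta-analyses, not from a single comparison like this:

```python
# Toy illustration of an effect size (Cohen's d): the difference between
# two group means divided by the pooled standard deviation.
# All scores below are invented purely for illustration.
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference between two independent groups."""
    n1, n2 = len(group_a), len(group_b)
    mean1, mean2 = statistics.mean(group_a), statistics.mean(group_b)
    var1, var2 = statistics.variance(group_a), statistics.variance(group_b)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (mean1 - mean2) / pooled_sd

# Hypothetical test scores: a class given rich feedback vs. one without
with_feedback = [72, 75, 78, 80, 83, 85]
without_feedback = [68, 70, 72, 74, 76, 78]
print(round(cohens_d(with_feedback, without_feedback), 2))  # prints 1.34
```

On these made-up numbers the gap works out at roughly 1.34 standard deviations, which in Hattie’s terms would sit far above the 0.40 hinge point; real classroom interventions rarely reach that size, which is why his league table tops out around 1.00.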
