In an earlier post we discovered that John Hattie had quietly admitted that half of the statistics in Visible Learning are incorrect. Hattie uses two statistics in the book: the ‘Effect Size’ and the ‘CLE’. Every CLE throughout the book is wrong.

Now, I didn’t really know why they were wrong. Perhaps, I thought, he was using a computer program to calculate them and it had been set up incorrectly. I didn’t know – until I received this comment from Per-Daniel Liljegren. He was preparing a seminar on Visible Learning for some teachers in Sweden and didn’t understand some of what he’d found, so he wrote to Debra Masters, Director of Visible Learning Plus, asking for help.

**“Now, when preparing the very first seminar, I was very puzzled over the CLEs in Hattie’s Visible Learning. It seems to me that most of the CLEs are simply the Effect Size, d, divided by the square root of 2.**

**Should not the CLE be some integral from -infinity to d divided by the square root of 2?”**

And if you grab your copy of Visible Learning and check, he’s right! The CLEs are just the Effect Size divided by the square root of 2.

He never received a reply to his letter.

If we look at the article that tells us how to calculate the CLE – How to calculate the Common Language Effect Size Statistic – we see that dividing by the square root of 2 actually gives the **z value.** This should then have been **converted** into a probability using standard Normal distribution tables – a very basic statistical technique that we teach to Year 12s in S1 A Level.
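To make the two-step calculation concrete, here is a minimal sketch in Python of how the CLE should be computed from an effect size d: first the z value, then the standard Normal CDF, Φ (using the standard library’s `erf` in place of distribution tables). The worked figures in the comments are only approximate.

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard Normal cumulative distribution function, Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def cle(d: float) -> float:
    """Common Language Effect Size for an effect size d.

    Step 1: z = d / sqrt(2)        <- this z value is what the book reports
    Step 2: CLE = Phi(z)           <- the actual probability, the missing step
    """
    z = d / sqrt(2.0)
    return normal_cdf(z)

# For d = 0.29 (Hattie's homework figure):
#   z = 0.29 / sqrt(2) ~ 0.21, which the book presents as the "CLE";
#   the genuine CLE is Phi(0.21) ~ 0.58, i.e. a 58% probability.
```

Note that for any negative d the z value is negative, so reporting it as a “CLE” produces a negative “probability”.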

Throughout the book, Visible Learning, John Hattie has calculated the z values and used them as his CLEs when he should have converted them into probabilities.

Three very worrying things about all this –

1. John Hattie doesn’t know the difference between z values and the probabilities you get from z values. Really, really basic stuff.

2. John Hattie knows about this mistake but has chosen not to publicise it. This could mean that many teachers are still relying on it to inform their teaching.

3. No-one picked up on it for years, despite the fact that the CLE is meant to be a probability. So, throughout the book, he is saying that a probability can be negative or more than 100%. So, who is checking John Hattie’s work? Because the academic educational establishment doesn’t appear to be.

Again we are left with two options to choose from:

1. John Hattie is a genius who is doing things that even Mathematicians don’t understand.

2. John Hattie is a well-meaning man with a Social Sciences degree who has made a mistake, using statistical techniques he didn’t realise were both unknown to Mathematicians and incorrect.

The choice is yours.

Reblogged this on pedagog in the machine and commented:

Oops! I once saw Stephen Gorard in a talk say something like “the use of statistics in education research is a folly that ought to cease forthwith”. I think he may have a point. Because even if z-values and CLE values are basic to a Y12 statistician – and having done statistics at A level, I would question that claim – the point is, very few teachers understand how such metrics are arrived at, and as such don’t understand how to interpret them – or indeed whether to dismiss them! And so it perpetuates a paternalistic mentality where we are “done to”, and our practice is evaluated in esoteric ways by men so clever they don’t admit when they’re wrong… all of which is most unsatisfactory!

Ollie: just a quick note to say that it’s false to say that no one in the “academic educational establishment” spotted these problems with Hattie’s work. The BJES review of the book pointed out this problem in 2011. See page 199: http://dx.doi.org/10.1080/00071005.2011.584660

In response to pedagoginthemachine: you are arguing that because some teachers don’t understand statistics, education researchers shouldn’t abandon quantitative methods? Analogously, do you think that if GPs don’t understand models of cancerous cell growth, researchers should give up working on cancer treatments?

Thanks Mavan. Unfortunately, the article is behind a pay-wall. I would be very interested to read it if anyone has a link to a copy.

Hi Ollie, I’ve put a copy here:

Click to access review.pdf

Let me know when you’ve got it so I can delete the copy.

Btw. you can get almost any paywalled paper using the #icanhazpdf method:

http://neuroconscience.com/2013/01/16/join-papester-collective-1-0-how-to-reply-to-icanhazpdf-in-3-seconds/

Thanks. Got it now. Quite right, Steve Higgins and Adrian Simpson from Durham University spotted this in a review back in 2011.

Hello Mavan, I’ve tried your twitter technique but it did not seem to work. Is it possible for me to get a copy of that paper too?

Reblogged this on Dick van der Wateren's Blog and commented:

In this second post about the statistics in ‘Visible Learning’, the author asks some uncomfortable questions about the self-correcting capacity of the education science community.

For me, two questions remain:

If half of the statistics are wrong, how does that affect the recommendations to teachers based on those statistics, and

How much of the other half is reliable?

Very interesting reading so far. I’d also like to know how amending the statistics (by the method recommended in the article) would or wouldn’t change the recommendations made to teachers. I’m not suggesting that excuses these errors – which are serious and need addressing, along with admitting them publicly – but wondering if there can still be something drawn from this huge body of research.

This blog motivated me to dig a bit deeper: not only are a lot of the calculations wrong, Hattie also misinterprets many of the studies. Take the highest effect size, for example, “self-report grades”, where the effect size is d = 1.44. This represents several years’ acceleration of achievement (if you use Hattie’s example of homework, d = 0.29, being a year’s acceleration).

Now if you think of it, this is a pretty amazing result, that if true would revolutionize learning and achievement.

So do the studies show such miraculous improvement in achievement?

Hattie uses 5 meta-analyses. One of them measured peer assessment – not self-report; another measured students’ memory of their GPA from a year or so in the past; the 3 others measured students’ assessment of their own ability versus their teachers’ assessment. None of these measures whether “self-report” improves achievement – so none of the 5 studies measured what Hattie says they did.

The conclusions drawn by the researchers were similar to this: “A close correspondence between individual teacher and student’s marks suggests that the student has a good sense of his or her absolute level of performance.” Falchikov and Boud (1989) (p426).

Hattie infers the high correlation to mean accelerated performance. This is very poor scholarship.

One of the reasons he has gotten away with it is that it is very difficult or expensive to get copies of critiques of his work or of the meta-studies he’s used.

So what can we use? I don’t think we can use any of his research. While Hattie derides teachers’ experience, I would trust that rather than his research.

Some peer reviews of Hattie’s book are interesting:

Emeritus Professor Ivan Snook et al.: “Any meta-analysis that does not exclude poor or inadequate studies is misleading and potentially damaging.”

“We argue the process by which this number (effect size) has been derived has rendered it practically meaningless.” Higgins and Simpson (2011) (p199).

I’m looking for help to methodically go through the studies for many of Hattie’s influences. Luckily, many of the controversial ones have only 1 or 2 meta studies to read.

Pingback: Half of the Statistics in Visible Learning are wrong (Part 2) | Blogcollectief Onderzoek Onderwijs

So higher z values would give higher CLEs and higher probabilities, so this is just a matter of scaling – the bigger numbers still mean a thing is ‘better’ and we’re still provided with an internally-consistent metric with which to compare things. He’s made an error in the presentation of his values, but what exactly does it change?
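Jerome’s scaling point can be checked directly: the conversion to a probability is monotone, so the ordering of Hattie’s “CLEs” does match the ordering of the true CLEs, but the book’s values are not valid probabilities. A minimal sketch, using d = 0.29 (homework) and d = 1.44 (self-report grades) from this thread plus two illustrative values:

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard Normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# d = 0.29 and d = 1.44 appear in this thread; -0.3 and 0.0 are illustrative.
effect_sizes = [-0.3, 0.0, 0.29, 1.44]

for d in effect_sizes:
    book_cle = d / sqrt(2.0)   # the z value the book reports as its "CLE"
    true_cle = phi(book_cle)   # the genuine probability
    print(f"d = {d:+.2f}   book 'CLE' = {book_cle:+.2f}   true CLE = {true_cle:.2f}")

# The two columns have the same ordering (phi is monotone increasing),
# but for d = -0.3 the book's "CLE" is negative -- impossible for a probability.
```

So the rankings survive, but anyone reading the book’s “CLEs” as actual probabilities – which is the entire point of the measure – is misled.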

This is a very big mistake; what it changes is that his reliability and trustworthiness go down by quite a few points.

Robyn Williams has a funny one-minute skit on what can happen if you get your units wrong: https://youtu.be/SiNWnQYXifA

To some extent Jerome is right that the CLE issue is not the key thing – in the book higher “Hattie-CLEs” do match with higher effect sizes in the sense he wants to use them. Of course, if you’re going to use a named measure whose definition is agreed with the community, you should calculate it in the way agreed (or invent a new name for your new measure and give us a darned good explanation for why we need another new one and what it provides).

But this is far from the key issue with Hattie’s book. There are more statistical issues: e.g. at the start of the book he provides a funnel plot to provide evidence that there is no publication bias, but plots it with the axes reversed and makes claims which can’t be justified if one plots it correctly (and Hattie in other publications has shown that he knows how to plot a funnel diagram). But way more important than his statistical faux pas are the conceptual ones: one just can’t compare many of the correlational and interventional effects; how can one magically combine the effect for sickle cell anemia with the effect for calculator use with the correlation between self-predicted grades and actual outcomes and then come up with a single number against which all effects must be compared. Worse still, how dare one say that studies which show females outperforming males are perforce negative?

The idea of trying to give people who make educational decisions a simple way of deciding between different changes which might be implemented in schools and individual classrooms is honourable. But such simple measures are dangerously seductive supports for myside bias and misinterpretation. Hattie and others (as well as making statistical and conceptual howlers) haven’t thought through the issue that a busy head teacher or headline-seeking politician is unlikely to understand the statistical nuances which lie behind simple effect sizes. It is far more likely that they’ll implicitly be saying to themselves “Ooh, the effect size for homework is 0.3 and the effect size for after-school clubs is 0.2, so we’ll cut after-school clubs and boost homework”.

Pingback: John Hattie admits that half of the Statistics in Visible Learning are wrong | ollieorange2

Of course there is a third option that you don’t give us at the end: that your analysis is flawed and misunderstands the Visible Learning research. The fact that you left that one out is disappointing.

What a powerful argument. “Of course..”

You are, of course, welcome to point out the errors in my analysis in the Comments section.

Pingback: Teacher Quality, Wiggins and Hattie: More Doing the Wrong Things the Right Ways | the becoming radical

Pingback: Is John talking through his Hattie? | Dan Haesler

Pingback: Beware the Technocrats: More on the Reading Wars | the becoming radical

Has anyone recalculated the CLEs for Hattie and published them somewhere then?

In the newer editions of the book, e.g. the German version, the CLEs have been recalculated, but the English version is still being sold with the incorrect numbers.

So what should educators do? Who should they take notice of? The average teacher or principal relies on researchers getting their calculations right and then interpreting these in laymen’s terms. What is the way forward? Scrap Hattie’s findings? Recalculate and publish? Explore other research (and who?).

Pingback: How Blogging can Inspire School Change | internationalheadteacher

Reblogged this on tonycairns.

Pingback: John Hattie admits that half of the Statistics in Visible Learning are wrong (Part 2) | tonycairns

Pingback: Week 8.1: Hattie – Not Such a Hottie? | 3250 Intructional Strategies

Hattie’s methods are flawed and his maths is incorrect, which we all know. But it is still a useful list to orient yourself by, I think. Like any research, it should be used to provoke thought and guide practice, but not followed like the Bible. (Not even the Bible should be followed like a bible!) Just because you can put statistics to something doesn’t mean it is true, correct or objective. But it also doesn’t mean that you need to reject the whole argument.

Pingback: Hattie’s analysis of inquiry-based teaching | inquiry learning & information literacy

Pingback: (1 of 3) Measuring the Impact of Technology on Learning | Evolving Edtech

Pingback: Época: desinformação em três atos – final | AVALIAÇÃO EDUCACIONAL – Blog do Freitas

Pingback: The day I had a curry with John Hattie… – Learning & Teaching Magpie