Schools and totalitarian systems

I was reading this essay – The Power of the Powerless by Václav Havel, the Czechoslovakian dissident. There is lots of good material on this kind of thing by George Orwell and Theodore Dalrymple as well. I was struck by the parallels between working in schools and living under a totalitarian regime.

You can’t say what you think. You have to carefully monitor what you say to other people. Whenever there are discussions on Twitter, the progressives always treat it as a discussion between two roughly equal groups – or, if anything, as if the traditionals are the stronger group in schools. The reality is that it is very dangerous to speak up in a traditional way in schools. You face being hounded out of your job and sacked.

People have been talking this week about the case of Gillian Scott, who was recently struck off. Some have asked “But what were her results like?” They don’t understand. It was never about her results, it was about making her bend her knee to the current orthodoxy.

I can see her manager David Macluskey now, a party man climbing the greasy pole. Whatever the latest fad, he’ll be bullying all the teachers into using it. And as soon as it becomes clear that it doesn’t work – well, we were never at war with Oceania. And on to whatever is in vogue this week.

There is a big disconnect between reality and the propaganda – for example, the propaganda about how well progressive teaching works, compared with the reality. Until quite recent decades, there was very little feedback about how well progressive teaching was actually doing. A bit like the dictator who sits at the top, fat and content, even though millions starve due to his incompetence. Then ways of feeding back and holding people accountable started to arrive: league tables, SATs, more exams. There were screams of outrage every step of the way. They didn’t care if the kids couldn’t read, as long as no-one found out about it.

In the essay, Havel points out that everyone is a victim of the system, but everyone also contributes to its atmosphere. When I was teaching, I realised that there is no ‘system’: it is just made up of millions of moral choices made by normal people every day. You have to decide for yourself how much moral compromise you will make. I tried to take a middle ground of not using methods that damaged my pupils’ chances, while keeping my head down and trying not to get myself into trouble. But I met people who would parrot the latest fad with no problem at all. I always wondered whether they believed what they were saying or were just saying it to get ahead – and which would be worse?


John Hattie admits that half of the Statistics in Visible Learning are wrong (Part 2)

In an earlier post we discovered that John Hattie had quietly admitted that half of the Statistics in Visible Learning were incorrect. Hattie uses two statistics in the book, the ‘Effect Size’ and the ‘CLE’. All of the CLEs are wrong throughout the book.

Now, I didn’t really know why they were wrong. I thought maybe he was using a computer program to calculate them and it had been set up incorrectly. I didn’t know – until I received this comment from Per-Daniel Liljegren. He was giving a seminar on Visible Learning for some teachers in Sweden and didn’t understand some of what he’d found, so he wrote to Debra Masters, Director of Visible Learning Plus, asking for help.

“Now, when preparing the very first seminar, I was very puzzled over the CLEs in Hattie’s Visible Learning. It seems to me that most of the CLEs are simply the Effect Size, d, divided by the square root of 2.

Should not the CLE be some integral from -infinity to d divided by the square root of 2?”

And if you grab your copy of Visible Learning and check, he’s right! The CLEs are just the Effect Size divided by the square root of 2.

He never received a reply to his letter.

If we look at this article that tells us how to calculate the CLE – How to calculate the Common Language Effect Size Statistic – we see that dividing by the square root of 2 actually finds the z value. This should then have been converted into the probability using standard Normal distribution tables, a very basic statistical technique that we teach to Year 12s in S1 A Level.

Throughout the book, Visible Learning, John Hattie has calculated the z values and used them as his CLEs when he should have converted them into probabilities.
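If you want to check this yourself, here is a short sketch in Python (standard library only) of the correct calculation next to what the book actually prints. The d values are illustrations of mine, not figures taken from Visible Learning.

```python
from math import erf, sqrt

def hattie_cle(d):
    """What Visible Learning prints as the CLE: just the z value."""
    return d / sqrt(2)

def correct_cle(d):
    """The actual CLE: convert the z value into a probability using
    the standard Normal cumulative distribution, Phi(z)."""
    z = d / sqrt(2)
    return 0.5 * (1 + erf(z / sqrt(2)))

d = 1.0  # an illustrative Effect Size
print(hattie_cle(d))   # 0.707... -- a z value, mistakenly reported as a probability
print(correct_cle(d))  # 0.760... -- the genuine probability

# With a negative Effect Size the error is obvious:
print(hattie_cle(-1.0))   # -0.707... -- a "negative probability"
print(correct_cle(-1.0))  # 0.239... -- a real probability, between 0 and 1
```

The conversion from z to probability is exactly the standard Normal table look-up mentioned above.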

Three very worrying things about all this –

1.   John Hattie doesn’t know the difference between z values and the probabilities you get from z values. Really, really, basic stuff.

2.   John Hattie knows about this mistake but has chosen not to publicise it. This could mean that many teachers are still relying on it to inform their teaching.

3.   No-one picked up on it for years, despite the fact that the CLE is meant to be a probability – so, throughout the book, he is saying that a probability can be negative or more than 100%. So, who is checking John Hattie’s work? Because the academic educational establishment doesn’t appear to be.

Again we are left with two options to choose from:

1.   John Hattie is a genius who is doing things that even Mathematicians don’t understand.

2.   John Hattie is a well-meaning man with a Social Sciences degree who has made a mistake, using statistical techniques that he didn’t realise were both unknown to Mathematicians and incorrect.

The choice is yours.

 

The ‘Effect Size’ is not a recognised mathematical technique

Three things you should know about the ‘Effect Size’

1.   Mathematicians don’t use it

2.   Mathematics textbooks don’t teach it.

3.   Statistical packages don’t calculate it.

Despite a public challenge in March 2013, none of the advocates of the ‘Effect Size’ have been able to name a Mathematician, Mathematics textbook or Statistical package that uses it. They are welcome to correct this in the comments below.

John Hattie admits that half of the Statistics in Visible Learning are wrong

At the researchED conference in September 2013, Professor Robert Coe, Professor of Education at Durham University, said that John Hattie’s book ‘Visible Learning’ is “riddled with errors”. But what are some of those errors?

The biggest mistake Hattie makes is with the CLE statistic that he uses throughout the book. In ‘Visible Learning’, Hattie only uses two statistics, the ‘Effect Size’ and the CLE (neither of which Mathematicians use).

The CLE is meant to be a probability, yet Hattie has it at values between -49% and 219%. Now a probability can’t be negative or more than 100% as any Year 7 will tell you.

This was first spotted and pointed out to him by Arne Kåre Topphol, an Associate Professor at Volda University College in Norway, and his class, who sent Hattie an email.

In his first reply – here – Hattie completely misses the point about probability being negative and claims he actually used a different version of the CLE from the one he referenced (by McGraw and Wong). This makes his academic referencing, hmm, the word I’m going to use here is ‘interesting’.

In his second reply – here – Hattie reluctantly acknowledges that the CLE has in fact been calculated incorrectly throughout the book, but brushes it off as no big deal that, of the two statistics in the book, he has calculated one incorrectly.

There are several worrying aspects to this –

Firstly, it took three years for the mistake to be noticed, and it’s not as though it’s a subtle statistical error that only a Mathematician would spot – he has probability as negative, for goodness’ sake. Presumably the entire Educational Research community read the book when it came out, and they all completely missed it. So the question must be asked: who is checking John Hattie’s work? As a Bachelor of Arts, is he capable of spotting Mathematical errors himself?

In Mathematics, new or unproven work is handed over to unbiased judges who go through it with a fine-tooth comb before it is considered to have the stamp of approval of the Mathematical community. Who is performing this function for the Educational community?

Secondly, despite the fact that John Hattie has presumably known about this error since last year, there has been no publicity telling people that part of the book is wrong and should not be used. Surely he could have found time, between flying round the world to his many Visible Learning conferences, to squeeze in a quick announcement.

As the stepfather of one of the letter writers, a Professor of Statistics, said:

“People who don’t know that Probability can’t be negative shouldn’t write books on Statistics.”

Sources –

Book review – Visible Learning by @twistedsq

Can we trust educational research? – (“Visible Learning”: Problems with the evidence)

EDIT – Since this post we have also discovered why the CLEs are all wrong and the reason is shocking. Read about it here – John Hattie admits that half of the Statistics in Visible Learning are wrong (Part 2).

The Age effect which means the ‘Effect Size’ is useless

In 2007, four American researchers looked at the data from seven national tests in Reading and six national tests in Maths across an age range from six to seventeen. They were looking for patterns in the Effect Sizes.

Empirical Benchmarks for Interpreting Effect Sizes in Research by Hill, Bloom, Black and Lipsey (2007)

[Image: graph of Effect Sizes by age for the Reading tests, from Hill et al. (2007)]

As we can see there is a clear downward trend and the hinge figure of 0.40 is never achieved again after the age of 10.

[Image: graph of Effect Sizes by age for the Maths tests, from Hill et al. (2007)]

Again there is a downward trend and the figure of 0.40 is never achieved after the age of 11. The authors of the paper also found the same trend when they studied national test results for Social Studies and Science.

This means that Hattie’s hinge figure of 0.40 is spectacularly misleading. Educational research done in Primary schools will usually do better than 0.40, whereas Teachers teaching in Secondary Schools will find that their Effect Size is usually below 0.40 and gets worse the older the children are, no matter how effectively they are teaching.

To get any kind of fair comparison for educational studies, we need to know the age of the children studied, as well as their results. We can then compare fairly with the typical Effect Size for their age range, instead of a headline figure of 0.40.

One possible reason that we are seeing this pattern is that the ‘Effect Size’ is really (inversely) measuring how spread out the pupils are, not how well they are progressing.

In Year 1, there’s not as big a difference between the top and the bottom child, because even the quickest child hasn’t learned that much. This means the standard deviation (how spread out the pupils are) is small. When you divide by something small you get a big number.

In Year 11, the opposite is true, there is a large difference between the top pupils and the bottom pupils. A big spread means a large standard deviation and dividing by a large number gives you a small number.
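To make that concrete, here is a toy illustration – the marks and spreads below are invented numbers of mine, not from any real study:

```python
def effect_size(gain, sd):
    """The Effect Size as Hattie uses it: average gain in marks
    divided by the standard deviation of the pupils' scores."""
    return gain / sd

gain = 10  # the same average progress over the year in both classes

print(effect_size(gain, sd=8))   # Year 1: small spread  -> 1.25, looks spectacular
print(effect_size(gain, sd=25))  # Year 11: big spread   -> 0.40, looks mediocre
```

Identical progress in raw marks, wildly different Effect Sizes, purely because of the spread in the denominator.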

Hat Tip to @dylanwiliam

How did the inventor of the Effect Size use it? (Not the way Hattie does.)

In 1969, the psychologist Jacob Cohen published his book ‘Statistical Power Analysis for the Behavioral Sciences’, in which he introduced the Effect Size for the first time and explained how to use it.

So, how did Jacob Cohen, the inventor of the Effect Size, use it?

[Image: excerpt from Cohen’s book – his reasons for writing it]

Quick translation – I noticed that people in the Behavioral Sciences sometimes did badly designed experiments because they didn’t understand Statistics well enough, so, I decided to help them by making some easy look-up tables.

[Image: excerpt from Cohen’s book – the four types of Power Analysis]

Quick translation – There are four ways to do Power Analysis, but two of them are rarely needed. The two main ways to check your experiment before you do it are, firstly, to check the Statistical Power is high enough, or alternatively to check you have planned to test enough people.

[Images: excerpts from Cohen’s book – what you need in order to use the Statistical Power tables]

Quick translation – To use the Statistical Power tables, you need to know the number of people in your experiment, the Statistical Significance you want and the Effect Size.

And here is a Statistical Power table from Jacob Cohen’s book, notice the Effect Size (d) at the top. There are dozens of pages of these tables in his book.

[Image: a Statistical Power table from Cohen’s book, with the Effect Size (d) along the top]

And here he gives an example of how to use the Statistical Power tables.

[Image: Cohen’s worked example of using the Statistical Power tables]
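For anyone who wants to reproduce the flavour of these tables without the book, here is a rough sketch using the standard Normal approximation. This is my own illustration, not Cohen’s exact method – his tables use the t distribution, so his printed values differ slightly.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate Statistical Power of a two-sample comparison with
    n subjects per group and Effect Size d (Normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * sqrt(n / 2) - z_alpha)

# A 'medium' Effect Size (d = 0.5) with 64 subjects per group,
# at the conventional 5% significance level:
print(power_two_sample(0.5, 64))  # roughly 0.80
```

Small experiments with small Effect Sizes have low Power, which was exactly the badly-designed-experiment problem Cohen was trying to fix.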

The other thing you need to check is the Sample Size.

[Image: excerpt from Cohen’s book – checking the Sample Size]

Quick translation – The other way to check your experiment is with the Sample Size table. To use this you need the Statistical Power, the Statistical Significance and the Effect Size.

And here is a Sample Size table, notice the Effect Size (d) at the top. Again there are dozens of pages of these tables in the book.

[Image: a Sample Size table from Cohen’s book, with the Effect Size (d) along the top]

And he gives an example of how to use the Sample Size table.

[Image: Cohen’s worked example of using the Sample Size table]

Now, every modern user of the Effect Size cites Cohen, and they always quote him on small, medium and large effects. This gives the impression that they are simply continuing his work – yet they are using it in a completely different way from him.

Jacob Cohen, the inventor of the Effect Size, used it to check the Statistical Power and the Sample Size of an experiment before you did the experiment. He did this using look-up tables.
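To see the sort of thing those look-up tables packaged up, here is a sketch of the textbook Normal-approximation formula for the sample size of a two-group experiment. Again, this is my own illustration rather than Cohen’s exact method: his tables use the t distribution, which is why his book prints 64 per group for this example rather than the 63 below.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate subjects needed per group for a two-sample test,
    given the Effect Size d, significance level and desired Power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A 'medium' effect (d = 0.5) at 5% significance and 80% power:
print(sample_size_per_group(0.5))  # 63 per group by this approximation
```

Notice the role the Effect Size plays here: it is a planning input you supply before the experiment, not a result you report afterwards.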

A closer look at Hattie’s top two Effect Sizes

Hattie’s top two Effect Sizes in Visible Learning are

Self-reported grades – 1.44

Piagetian programmes – 1.28

In fact, these are the only two that are above 1.0.

Kristen DiCerbo has had a closer look at Self-reported grades. Hat-tip to @Mrsdaedalus.

Piagetian stages were proposed by Jean Piaget. Basically, as children develop they pass through various stages of development, firstly with motor skills as babies, then thinking skills as young children.

The Piagetian programmes entry cites only one meta-analysis, Jordan and Brownlee (1981). Unfortunately, I can’t find the full paper, only an abstract. The abstract does show two things, though.

Firstly, the original studies weren’t reported as Effect Sizes; they were reported as correlations. Hattie has again converted correlation coefficients into Effect Sizes. The study is basically saying that kids who develop faster as babies (because they are more intelligent) do better on tests a few years later (because they are more intelligent). Hardly earth-shattering stuff. And, as with Self-reported grades, this is not an intervention – there’s nothing you can do about it, it’s just a correlation.
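For reference, the usual textbook conversion from a correlation coefficient r to an Effect Size d is d = 2r / √(1 − r²). I don’t know exactly which conversion Hattie used, but a quick sketch shows how modest correlations come out as impressive-looking Effect Sizes (the r values here are illustrative, not taken from Jordan and Brownlee):

```python
from math import sqrt

def r_to_d(r):
    """Convert a correlation coefficient r into a Cohen-style
    Effect Size: d = 2r / sqrt(1 - r**2)."""
    return 2 * r / sqrt(1 - r ** 2)

# Even modest correlations translate into large-looking Effect Sizes:
print(r_to_d(0.3))  # ~0.63
print(r_to_d(0.5))  # ~1.15
```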

Secondly, the students in the study had an average age of just 7 years old. Hattie has used this to extrapolate to all students aged 5 to 18. We teach pupils not to extrapolate outside the data range at GCSE.

Remember that both of these Effect Sizes were used when Hattie calculated his 0.40 average, so, if they are wrong, then so is the 0.40 hinge point. And we could have included any number of correlations in here and changed them to Effect Sizes. It just shows that his 0.40 ‘hinge point’ is completely arbitrary.

Also, it may be worth pointing out at this point the differences between the correlation coefficient and the Effect Size.

The correlation coefficient – Developed in the 1880s and 1890s by Karl Pearson, building on Francis Galton’s work. Pearson is considered by many to be the Father of Mathematical Statistics and founded the first university Statistics department. An explanation of the reasoning behind it, and a derivation using Algebra, appear in every Statistics textbook. Learnt by Mathematicians either at A Level or in the first year of University.

The Effect Size – Proposed in 1976 by Gene Glass, an Educational Psychologist. No explanation or derivation ever given, even today. Appears in no Maths textbooks. No Mathematician has ever heard of it.