When I was investigating the ‘Effect Size’ I found lots of criticism of significance testing on Social Science websites. Remember, this is, once again, Social Scientists, often but not always Psychologists, criticising the way Mathematicians and Scientists do Statistics.
This is actually a fundamental part of the ‘Effect Size’ story: their failure to understand the significance testing procedure has led directly to the ‘Effect Size’, as they try to solve a ‘problem’ that isn’t really a problem, only a misunderstanding on their part.
It is also vital to recognise that the ‘Effect Size’ isn’t just another statistical method to choose from amongst many; it is the tip of the iceberg of a completely different ethos. The people who advocate using the ‘Effect Size’ think that the whole way Mathematicians and Scientists do Statistics is wrong, so they’ve decided to invent their own version. This has been mistakenly copied by people in Education like John Hattie.
In my next post I’ll be looking at the Maths of significance testing, but what if you don’t know anything about Alpha levels or Type 1 and 2 errors? How could you judge? Well, a good place to start would be the mathematical credentials of the people making the criticism. So let’s have a look at the people who are criticising significance testing.
If we type ‘Criticism of Significance testing’ into Google, the first ten results are –
http://community.dur.ac.uk/r.j.coe/teaching/critsig.htm – Number one on the list, our old friend Robert Coe, Professor of Education at Durham University
http://en.wikipedia.org/wiki/Statistics – A general article on Statistics by Wikipedia
http://www.cem.org/attachments/publications/CEMWeb037%20The%20Case%20Against%20Statistical%20Significance%20Testing.pdf – CEM, Professor Coe’s organisation, publishing an article by Ronald P. Carver, Professor of Education and Psychology at the University of Missouri
http://errorstatistics.com/2012/12/24/13-well-worn-criticisms-of-significance-tests-and-how-to-avoid-them/ – Deborah Mayo, Professor of Philosophy at Virginia Tech
http://www.johndcook.com/blog/2008/11/18/five-criticisms-of-significance-testing/ – John D Cook, Consultant in Applied Mathematics and Computing
http://www.uic.edu/classes/psych/psych548/fraley/ – R. Chris Fraley, Professor of Psychology at the University of Illinois at Chicago
http://www.johnmyleswhite.com/notebook/2012/05/10/criticism-1-of-nhst-good-tools-for-individual-researchers-are-not-good-tools-for-research-communities/ – John Myles White, PhD student in Psychology
http://www.andrews.edu/~rbailey/Chapter%20two/7217331.pdf – Andrews University Education department. Authors: Jeffrey Gliner, retired Professor of Psychology; Associate Professor Nancy Leech, PhD in Philosophy and MA in Counselling; George Morgan, retired Professor of Education
http://lesswrong.com/lw/g13/against_nhst/ – No information
http://www.ncbi.nlm.nih.gov/pubmed/17002771 – Authors: Dr Fiona Fidler, Environmental Science, background in Psychology and Philosophy; Mark Burgman, Environmental Science, background in Zoology; Geoff Cumming, retired Professor of Psychology; Robert Buttrose, background in Philosophy; Neil Thomason, historical and philosophical studies
And so it goes on, page after page of Psychologists, Philosophers and Education Professors criticising the way Mathematicians and Scientists do Statistics.
So, you can judge for yourself the quality of the people criticising the way Mathematicians do Maths, though this time we do seem to have a lot of Philosophers as well as Psychologists.
Now, this is important because their mistakes in significance testing have led to the ‘Effect Size’, which has led to Education research being done incorrectly, which has an impact on real children in real classrooms.
In my next post, I will deal with the more Mathsy side of things. I will show that their criticisms of significance testing are baseless and merely reflect their poor understanding of Statistics.
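To give a small taste of what that will involve, here is a purely illustrative sketch of a significance test in Python. The exam scores, the conventional Alpha level of 0.05 and the use of the scipy library are my own choices for the example and come from none of the sources above:

# Purely illustrative: could two sets of exam scores plausibly come from
# populations with the same mean? scipy.stats.ttest_ind runs a standard
# two-sample t-test and returns the test statistic and the p-value.
from scipy import stats

group_a = [72, 68, 75, 71, 69, 74, 70, 73]   # made-up scores, class A
group_b = [66, 70, 65, 68, 64, 69, 67, 66]   # made-up scores, class B

alpha = 0.05                                  # chosen significance (Alpha) level
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Reject the null hypothesis (no difference in population means)
# only if the p-value falls below the chosen Alpha level.
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: do not reject the null hypothesis")

In that setting, a Type 1 error is rejecting the null hypothesis when it is actually true, and a Type 2 error is failing to reject it when it is false; the Alpha level is simply the Type 1 error rate we are prepared to tolerate. That is the machinery the critics listed above claim is broken, and it is what the next post will unpack properly.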