Posts tagged ‘causality’

August 22, 2013

A brief case study in causal inference: age and falling grades

by James Thorniley

An interesting claim I found in the press: there is some concern because GCSE grades in England this year were lower than last year's. What caused this?

One reason for the weaker than expected results was the higher number of younger students taking GCSE papers. The JCQ figures showed a 39% increase in the number of GCSE exams taken by those aged 15 or younger, for a total of 806,000.
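As a quick back-of-envelope check (my own arithmetic, not from the JCQ report), the 39% figure lets us back out roughly how many such entries there were last year:

```python
# Back-of-envelope check (assumed interpretation of the JCQ figures): if
# 806,000 GCSE exams taken by those aged 15 or younger represents a 39%
# increase on last year, the implied number of such entries last year is:
this_year = 806_000
increase = 0.39
last_year = this_year / (1 + increase)
print(f"{last_year:,.0f}")  # roughly 580,000 implied entries last year
```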

This effect could be seen in English, with a 42% increase in entries from those younger than 16. While results for 16-year-olds remained stable, JCQ said the decline in top grades “can, therefore, be explained by younger students not performing as strongly as 16-year-olds”.

Newspapers seem to worry, whenever educational results come out, that some dreadful societal decline might be going on, and that any change in educational outcomes might be a predictor of the impending collapse of civilisation. This alternative explanation of reduced age is therefore quite interesting, so I thought it would be worth trying to analyse it formally to see whether it stands up.

August 29, 2012

Randomised controlled trials – the “gold standard”?

by James Thorniley

The UK government’s (ever so slightly creepily named) “Behavioural Insights Team” released a report [PDF] (relatively) recently called “Test, Learn, Adapt” (the authors include Ben Goldacre, well known for the book “Bad Science”, and the director of the York Trials Unit, David Torgerson) arguing that more policy decisions should be made on the basis of evidence from randomised controlled trials (RCTs). The report is a really good plain-English explanation of what RCTs are and how they work. It also gives examples of how RCTs can perhaps help to inform policies, by testing whether interventions such as back-to-work schemes or educational programs, um, “work”. According to the report’s blurb:

RCTs are the best way of determining whether a policy or intervention is working.

It’s not hard to find opinion pieces backing up the report’s central idea, and the thesis that RCTs are the best way to “find things out”. Here’s one by Tim Harford, a writer who covers economics; a similar argument is made by Paul Johnson, the director of an economics research group, the Institute for Fiscal Studies; and another by Prateek Buch, a research scientist. A phrase that keeps popping up is “gold standard”. RCTs are “the gold standard in evidence”, says Johnson, or the “gold-standard for showing that medical interventions are effective”, according to Buch. Mark Henderson’s book, “The Geek Manifesto”, says that the RCT is “commonly considered the ‘gold standard’ for medical research because it seeks systematically to minimise potential bias through a series of simple safeguards”. What exactly does all this mean? I think it’s a question worth asking, since not all science involves RCTs. The Higgs boson, for example, was recently “discovered” (if that’s the word) without, as far as I can tell, the need to randomise test subjects. So are RCTs in fact the “gold standard”?

August 16, 2011

The causal Markov confusion

by James Thorniley

Out in the Pacific there are two islands, named Foo and Bar. Two ferries, the good ship Fizz and the good ship Buzz, pass between them once per day each (that is, if Fizz starts the day on Foo it will end the day on Bar). For each ship, then, we can write out the list of islands it ends the day on using the initials F for Foo and B for Bar:

Fizz:    F B F B F B F B F B F B F B F B
Buzz:    B F B F B F B F B F B F B F B F
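The alternation above can be sketched in a few lines (a minimal simulation of my own, not from the original post; the start islands are assumed so as to reproduce the lists above, i.e. Fizz begins day 1 on Bar and Buzz on Foo):

```python
def ferry_positions(start, days):
    """End-of-day islands for a ferry that starts on `start` and crosses once per day."""
    other = {"F": "B", "B": "F"}
    positions = []
    island = start
    for _ in range(days):
        island = other[island]  # each day the ferry ends up on the other island
        positions.append(island)
    return positions

# Assumed start islands, chosen to match the sequences listed above
print("Fizz:", " ".join(ferry_positions("B", 16)))  # F B F B ...
print("Buzz:", " ".join(ferry_positions("F", 16)))  # B F B F ...
```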

Meanwhile, on Foo island, Fry is searching for his friend Bender. But Bender is on Bar. Fry learns of this and resolves to hop aboard the good ship Buzz to be reunited with Bender in the evening. But alas! Bender has similarly reasoned that Fry is on Foo and so set sail with the Fizz. The two ships having passed each other during the day, Fry is now stranded on Bar and Bender on Foo; they will have to wait until tomorrow before they can do anything about it.

July 17, 2011

Reification: just because a thing has a name, doesn’t mean it is a thing

by James Thorniley

Science does not rely on investigators being unbiased “automatons.” Instead, it relies on methods that limit the ability of the investigator’s admittedly inevitable biases to skew the results.

So says a paper by J. E. Lewis et al., in which they claim Stephen Jay Gould was wrong when he said the early 19th-century craniometrist Samuel George Morton “finagled” his data to match his own racist preconceptions. They re-examined the data, actually remeasured some of Morton’s skulls, and claim that Morton’s reported results fit his racial bias less well than a fully accurate study would have.

Depressingly, a number of modern-day internet racists seem to have picked up on the headline message “Gould was wrong” and assumed that it means the paper supports racial theories about intelligence or other differences. The paper supports no such ideas, and that’s not the subject of this post. It’s just worth pointing that out.

What this paper is about is whether scientists’ personal biases influence the results they get. It isn’t about whether Morton was “right” in a scientific sense, because everyone agrees he wasn’t. It’s about whether he drew the right conclusions from the evidence available to him. It’s a historical question – modern anthropology has essentially nothing to do with this.
