Posts tagged ‘statistics’

October 1, 2013

Intuitive example of expectation maximization

by James Thorniley

I’ve been looking at the Expectation-Maximization (EM) algorithm, which is a really interesting tool for estimating parameters in multivariate models with latent (i.e. unobserved) variables. However, I found it quite hard to understand from the formal definitions and explanations on Wikipedia. Even the more “intuitive” examples I found left me scratching my head a bit, mostly because I think they are too complicated to give an intuition of what’s going on. So here I will run through the simplest possible example I can think of.
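
As a rough sketch of the kind of computation involved (my own illustration in Python, not the example from the post), here is EM fitting just the two means of a one-dimensional two-component Gaussian mixture, where the latent variable is which component generated each point:

```python
import numpy as np

# Minimal EM sketch (not the post's example): fit the means of a
# two-component 1D Gaussian mixture, assuming unit variances and
# equal mixing weights. The latent variable is which component
# generated each data point.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

mu = np.array([-1.0, 1.0])  # initial guesses for the two means
for _ in range(50):
    # E-step: responsibility of each component for each data point
    dens = np.exp(-0.5 * (data[:, None] - mu[None, :]) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update each mean as a responsibility-weighted average
    mu = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)

print(mu)  # should end up close to the true means, roughly [-2, 3]
```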

August 22, 2013

A brief case study in causal inference: age and falling grades

by James Thorniley

An interesting claim I found in the press: there is some concern because GCSE grades in England this year were lower than last year. What caused this?

One reason for the weaker than expected results was the higher number of younger students taking GCSE papers. The JCQ figures showed a 39% increase in the number of GCSE exams taken by those aged 15 or younger, for a total of 806,000.

This effect could be seen in English, with a 42% increase in entries from those younger than 16. While results for 16-year-olds remained stable, JCQ said the decline in top grades “can, therefore, be explained by younger students not performing as strongly as 16-year-olds”.

Newspapers seem to get worried whenever educational results come out that there might be some dreadful societal decline going on, and that any change in educational outcomes might be a predictor of the impending collapse of civilisation. This alternative explanation – reduced age – is therefore quite interesting, and I thought it would be worth trying to analyse it formally to see if it stands up.
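
As a toy illustration of the composition effect being claimed (the numbers below are made up, not the real JCQ figures): if each age group’s top-grade rate stays exactly the same but the mix of entrants shifts towards the younger, weaker-performing group, the overall rate still falls.

```python
# Toy composition-effect calculation (hypothetical numbers, not JCQ data):
# each age group's top-grade rate is unchanged, but the overall rate
# drops when more of the entries come from the weaker-performing group.
rate_16 = 0.22        # hypothetical top-grade rate for 16-year-olds
rate_under16 = 0.10   # hypothetical rate for younger entrants

def overall(share_under16):
    return share_under16 * rate_under16 + (1 - share_under16) * rate_16

print(overall(0.10))  # last year: 10% young entrants -> 0.208
print(overall(0.14))  # this year: more young entrants -> 0.2032
```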

July 4, 2013

Every bad model of a system is a bad governor of that system

by Lucas Wilkins

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. – Ada Lovelace

James recently posted about a Guardian article on “big data”. The article outlines the many roles which algorithms play in our lives, and some of the concerns their prevalence raises. James’ contention, as far as I understand, was with the focus on algorithms, rather than the deeper issues of control, freedom, and agency. These latter issues are relevant to all types of automation, from assembly lines to artificial intelligences.

Automation flies in the face of any attempt to give some human activity special merit, whether this is our capability to produce, to create, to make choices, to procreate, to socialise or whatever. It relentlessly challenges our existential foundations: I am not made human by what I make, since I can be replaced by a robot; nor by what I think, since I can be replaced by a computer; and so on. Each new automation requires us to rethink what we are.

July 2, 2013

“Algorithm”… You keep using that word…

by James Thorniley

The Guardian has a feature article entitled How Algorithms Rule the World:

From dating websites and City trading floors, through to online retailing and internet searches (Google’s search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola), algorithms are increasingly determining our collective futures.

The strange thing about this is that the algorithms mentioned are nothing like the algorithms you learn about in computer science. Usually, an algorithm refers to a (generally) deterministic sequence of instructions that allows you to compute a particular mathematical result; a classic example (the first offered by Wikipedia) being Euclid’s algorithm for finding the greatest common divisor (it doesn’t have to strictly involve numbers – anything symbolic will do: one could easily create an algorithm to transliterate this entire post so that it was ALL IN CAPITALS, for example).
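
For concreteness, here is a minimal Python sketch of Euclid’s algorithm (my own illustration, not from the article): a short, deterministic sequence of steps that always terminates with the greatest common divisor.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the last non-zero
    value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```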

By contrast the “algorithms” talked about by the Guardian are all about extracting correlations from data: working out what you are going to buy next, if and when you will commit a crime, and so on. What they are talking about, I think, is statistics or machine learning. If you want a more trendy term, perhaps data science – but as far as I can tell these are all pretty much the same thing.

To say that the world was ruled by statistics would sound a bit twentieth century perhaps, so the hip and happening Guardian has maybe just found a more exciting term for an old phenomenon. But I think there is something more to their use of the word algorithm: I don’t think it is the right word, but there is something else they are trying to capture, as one of their interviewees says:

“… The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It’s also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn’t blame our tools.”

The issue is not the standard use of statistics to find interesting stuff in data. The problem is how the results of this are used in society: applying the results from statistics in an automated way. This automation is the only commonality that I can see with the traditional meaning of an algorithm. In the case of crime detection, insurance calculations or banking systems the problem is not that there is some data with correlations in it, but that decisions are being at least in part automated, producing either a politically disturbing denial of people’s individual agency or simply some dangerous automatic trades that can crash a stock market.

The term algorithm is being used here to describe something that has a “life of its own” – something Euclid’s algorithm clearly does not have. Euclid’s algorithm couldn’t “rule the world” if it tried (and it can’t try, because you have to be a conscious agent to do that). Algorithms are being talked about here as if they have their own agency: they can “identify” patterns (rather than be used by people to identify patterns), they can make trades all by themselves. They are scurrying about behind the scenes doing all sorts of things we don’t know about, being left to their own devices to live (semi) autonomous lives of their own.

I think that’s what scares people. Not algorithms as such but the idea of autonomous computational agents doing stuff without oversight, particularly if that stuff (like stock market trading or making decisions for the police) might later have an impact on real people’s lives.

June 1, 2013

Friston’s Free Energy for Dummies

by Lucas Wilkins

People always want an explanation of Friston’s Free Energy that doesn’t have any maths. This is quite a challenge, but I hope I have managed to produce something comprehensible.

This is basically a summary of Friston’s Entropy paper (available here). A friend of jellymatter was instrumental in its production, and for this reason I am fairly confident that my summary is going in the right direction, even if I have not emphasised exactly the same things as Friston.

I’ve made a point of writing this without any maths, and I have highlighted what I consider to be the main assumptions of the paper and marked them with a P.

September 24, 2012

More wrong interpretations of P values – “repeated sampling”

by James Thorniley

A while ago I wrote a little rant on the (mis)interpretation of P-values. I’d like to return to this subject having investigated a little more. First, in this post, I’m going to point to an interesting little subtlety pointed out by Fisher that I hadn’t thought about before; in the second post, I will argue why P-values aren’t as bad as they are sometimes made out to be.

So, last time, I stressed the point that you can’t interpret a P-value as a probability or frequency of anything unless you say “given that the null hypothesis is true”. Most misinterpretations, e.g. “the probability that you would accept the null hypothesis if you tried the experiment again”, make this error. But there is one common interpretation that is less obviously false: “A P-value is the probability that the data would deviate as or more strongly from the null hypothesis in another experiment than they did in the current experiment, given that the null hypothesis is true”. This might seem like a more careful statement, but the problem is that when we calculate P-values we take into account aspects of the data that are not necessarily related to how strongly they deviate from the prediction of the null hypothesis. This could be misleading, so we’ll build it up more precisely in this post.
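
As a rough illustration of the “repeated sampling” reading (my own sketch, using an arbitrary made-up experiment, not anything from the post): simulate many new experiments with the null hypothesis true and count how often the test statistic comes out at least as extreme as the observed one. For a plain one-sample t-test this fraction matches the reported P-value fairly closely.

```python
import numpy as np
from scipy import stats

# Simulate the "repeated sampling" reading of a P-value for a simple
# one-sample t-test (illustrative only; the data below are made up).
rng = np.random.default_rng(1)

observed = rng.normal(0.3, 1.0, size=20)       # one "real" experiment
t_obs, p_obs = stats.ttest_1samp(observed, 0.0)

# Repeat the experiment many times with the null hypothesis (mean = 0) true
t_null = np.array([
    stats.ttest_1samp(rng.normal(0.0, 1.0, size=20), 0.0).statistic
    for _ in range(20000)
])
frac = np.mean(np.abs(t_null) >= abs(t_obs))

print(p_obs, frac)  # the two numbers should be close
```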

December 13, 2011

Proofs of God in a photon

by Lucas Wilkins

I’ve been reading this article in the Independent: “Proofs of God in a photon”. The article is ultimately about some anthropic principle stuff. But the comments are full of silly things that make me reluctant to call myself a scientist in case I am associated with the authors. So, as therapy, I shall call a number of the commenters on their bullshit. First, a well meaning guy called Dan,

December 8, 2011

Poll discussion: The Monty Hall Controversy

by Lucas Wilkins

The latest Jellymatter poll has been up for a while now, so it’s time to discuss what the correct solution is. As well as sounding like a question from a Voight-Kampff test, it is a “double trick question”, based on the Monty Hall problem. It was a little mean of me to post it with my own agenda in mind.

For me, the interesting thing about the Monty Hall problem is the vehemence of those who argue for the “switch” option. The argument is nearly always unjustified. Whilst arguing this I will talk about how the problem has been stated in the past: its history shows how quickly someone’s brief, informal argument can change into an unintuitive answer to an ill-posed question, and then into a dogmatic belief.

December 6, 2011

Poll: Cups and a pea

by Lucas Wilkins

You’re walking down a back alley and find a man with the archetypal three cups and a pea. You decide to gamble with him in a game of ‘guess where the pea is’; after all, the odds are reasonable and he has assured you that he will demonstrate that the pea is under one of the cups. He places the pea under one of the cups, shuffles them rapidly, and you choose one of the cups. At this point the man overturns one of the cups you did not choose – there is no pea underneath it. He then asks you whether you would like to choose the other upright cup instead…
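
For what it’s worth, here is a quick simulation (my own sketch, not from the post) under the standard Monty Hall assumptions: the hustler knows where the pea is, always overturns an empty cup you didn’t pick, and always offers the switch. Whether those assumptions actually hold in the story above is, of course, part of the trick.

```python
import random

# Simulate the cups-and-pea game under the standard Monty Hall
# assumptions (hypothetical; the story above doesn't guarantee them).
def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        pea = random.randrange(3)
        choice = random.randrange(3)
        # Hustler overturns an empty cup that you did not choose
        revealed = next(c for c in range(3) if c != pea and c != choice)
        if switch:
            # Move to the one cup that is neither your pick nor the revealed one
            choice = next(c for c in range(3) if c != choice and c != revealed)
        wins += (choice == pea)
    return wins / trials

print(play(switch=False))  # roughly 1/3
print(play(switch=True))   # roughly 2/3
```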

December 5, 2011

When should you use a null hypothesis test? Probably never

by James Thorniley

I came across a nice paper called “The Null Ritual” by Gerd Gigerenzer et al recently. It’s an excellent read in my opinion, and it sums up a lot of the things that are wrong with null-hypothesis testing. This process is pervasive in many areas of science, particularly psychology (which is what these authors are mainly talking about), and it’s flawed in too many ways to count. Gigerenzer’s paper is worth reading; I’m going to attempt to summarise it, focussing on the things that really bother me. A good point that this paper makes is that it’s not actually the test itself that is intrinsically “wrong” – it’s more to do with the way that it has permeated scientific culture. This is what they are calling the “Null Ritual”: the process of more or less automatically doing a null hypothesis test, without there necessarily being a good reason to, except perhaps that some journals or reviewers seem to require it. Before reading this, you might want to try filling in the Jellymatter poll on the subject (which is taken from “The Null Ritual”), before I discuss the correct answer below.
