People always want an explanation of Friston’s Free Energy that doesn’t have any maths. This is quite a challenge, but I hope I have managed to produce something comprehensible.
This is basically a summary of Friston’s Entropy paper (available here). A friend of jellymatter was instrumental in its production, and for this reason I am fairly confident that my summary is going in the right direction, even if I have not emphasised exactly the same things as Friston.
I’ve made a point of writing this without any maths, and I have highlighted what I consider to be the main assumptions of the paper and marked them with a P.
I think it is fairly intuitive that the smiley in the title is upside down. But why is this?
Generally, when we look at a face we look at the eyes first. These days it is pretty easy to track where people look: the equipment is cheap and easily available – one simply uses a camera to look at the pupil and then calculates where the subject is looking. A preference for beginning with the eyes is a widely observed phenomenon (‘eyes are special’).
English readers, like readers in most languages, scan left to right when reading. With reading we constantly train ourselves to prefer moving left to right, something which leads to a phenomenon often called a reader’s bias (see this). It is not only during reading that the direction from left to right is preferred.
So, it is not surprising that we should think that smilies with eyes on the left are correct: left to right is preferred for reading, and eyes to mouth is preferred for viewing faces.
Is it okay to post art on a science blog? Well this is kind of science, so I guess it’s kind of okay.
Here is a little piece of computer-generated music that I created yesterday:
As Twitter user @DanieleTatti noted, it sounds like a sort of Scottish raga. But what I wanted to post about was the algorithm used to generate that ever-changing sequence of pitches and warbles. It’s quite a simple idea – simple enough in fact that the whole piece is generated by the following 140 characters of SuperCollider code:
This post is about a classic probability puzzle. It goes something like this: I place two envelopes on the table in front of you. One of them contains a Prize, which is an amount of money in pounds, but you don’t know how much it is. The other one contains a Special Bonus Prize, which is worth exactly twice as much money as the Prize. It’s your lucky day — but you can only choose one envelope. Which do you choose?
“Well,” you say to yourself, “it doesn’t matter, they’re both the same,” so you pick one at random. Let’s say it’s the one on the left. But now I ask you if you want to change your mind.
“Well,” you might say to yourself, “let x be the amount of money in the envelope I’m holding. This envelope has a 50% chance of being the Prize, in which case the other envelope contains 2x. On the other hand, there’s a 50% chance that this is the Special Bonus Prize, in which case the other envelope contains 0.5x. But still, the expected value of the other envelope is 0.5*2x + 0.5*0.5x = 1.25x. So on the balance of probabilities I should definitely switch.” But then I offer to let you switch again, and again, and again, and every time you go through the same reasoning, never managing to settle on a particular envelope because each one seems like it should contain more money than the other. Clearly something is wrong with this reasoning, but what is it?
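Before giving the Bayesian answer, it’s worth convincing yourself empirically that switching can’t help. Here is a quick simulation sketch (my own illustration, not from the original post — I’ve assumed a fixed Prize of £10, which the puzzle itself never specifies):

```python
import random

def simulate(n_trials=100_000, seed=0):
    """Play the two-envelope game many times with a Prize of 10 pounds
    (so the Special Bonus Prize is 20). Return the average winnings from
    keeping the first envelope vs. always switching."""
    rng = random.Random(seed)
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(n_trials):
        prize = 10.0
        envelopes = [prize, 2 * prize]   # Prize and Special Bonus Prize
        rng.shuffle(envelopes)           # you pick one at random
        first, other = envelopes
        keep_total += first
        switch_total += other
    return keep_total / n_trials, switch_total / n_trials

keep, switch = simulate()
# Both averages come out near 15 (= 0.5*10 + 0.5*20): switching gains nothing.
```

The two envelopes always sum to £30, so keeping and switching must average the same — the “1.25x” argument above never survives contact with an actual pair of envelopes.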
In this post, I’ll solve this problem in what I consider to be the proper Bayesian way, pinpointing exactly where the problem is. You might want to think about the question for a bit and come up with your own idea of its solution before reading on.
A while ago I wrote a little rant on the (mis)interpretation of P-values. I’d like to return to this subject having investigated a little more. First, in this post, I’m going to point to an interesting little subtlety pointed out by Fisher that I hadn’t thought about before; in the second post, I will argue that P-values aren’t as bad as they are sometimes made out to be.
So, last time, I stressed the point that you can’t interpret a P-value as a probability or frequency of anything unless you add “given that the null hypothesis is true”. Most misinterpretations, e.g. “the probability that you would accept the null hypothesis if you tried the experiment again”, make this error. But there is one common interpretation that is less obviously false: “A P-value is the probability that the data would deviate as or more strongly from the null hypothesis in another experiment than they did in the current experiment, given that the null hypothesis is true”. This might seem like a more careful statement, but the problem is that when we calculate P-values we in fact take into account aspects of the data not necessarily related to how strongly they deviate from the prediction of the null hypothesis. This can be misleading, so we’ll build the idea up more precisely in this post.
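As a concrete reference point for the standard definition, here is a minimal sketch computing an exact two-sided binomial P-value (a toy example of my own, not Fisher’s; note that even here one has to pick a convention for “as or more extreme”, which already hints at the subtlety):

```python
from math import comb

def binom_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial P-value: the probability, under the null
    hypothesis (a fair coin when p=0.5), of any outcome whose probability
    is no larger than that of the observed k heads in n flips.
    This 'minimum-likelihood' rule is one common convention for
    'as or more extreme' -- not the only one."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed + 1e-12)

# e.g. 8 heads in 10 flips of a putatively fair coin
pval = binom_p_two_sided(8, 10)   # 112/1024 = 0.109375
```

The choice of which outcomes count as “more extreme” is exactly the kind of extra ingredient, beyond the data’s deviation itself, that the careful-sounding interpretation above quietly glosses over.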
The UK government’s (ever so slightly creepily named) “Behavioural Insights Team” released a report [PDF] (relatively) recently called “Test, Learn, Adapt” (the authors include Ben Goldacre, well known for the book “Bad Science”, and the director of the York Trials Unit, David Torgerson) arguing that more policy decisions should be made on the basis of evidence from randomised controlled trials (RCTs). The report is a really good plain-English explanation of what RCTs are and how they work. It also gives examples of how RCTs can perhaps help to inform policies, by testing whether interventions such as back-to-work schemes or educational programs, um, “work”. According to the report’s blurb:
RCTs are the best way of determining whether a policy or intervention is working.
It’s not hard to find opinion pieces backing up the report’s central idea, and the thesis that RCTs are the best way to “find things out”. Here’s one by Tim Harford, a writer who covers economics; a similar argument made by Paul Johnson, who is the director of an economics research group, the Institute for Fiscal Studies; and Prateek Buch, who is a research scientist. A phrase that keeps popping up is “gold standard”. RCTs are “the gold standard in evidence”, says Johnson, or the “gold-standard for showing that medical interventions are effective” according to Buch. Mark Henderson’s book, “The Geek Manifesto”, says that the RCT is “commonly considered the ‘gold standard’ for medical research because it seeks systematically to minimise potential bias through a series of simple safeguards”. What exactly does all this mean? I think it’s a question worth asking, since not all science involves RCTs. The Higgs boson, for example, was recently “discovered” (if that’s the word) without (as far as I can tell) the need to randomise test subjects. So are RCTs in fact the “gold standard”?
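To make the “minimise potential bias” claim concrete, here’s a toy simulation (entirely my own illustration, with made-up numbers — nothing from the report): when subjects self-select into an intervention, the naive comparison of outcomes is biased by who chose to take part; randomisation removes that bias.

```python
import random

def estimate_effect(randomise, n=20_000, true_effect=2.0, seed=1):
    """Toy model of a policy trial. Each subject has a hidden 'confounder'
    (say, prior motivation); outcome = true_effect*treated + confounder
    + noise. Without randomisation, high-confounder subjects opt in,
    so the simple difference in group means overstates the effect."""
    rng = random.Random(seed)
    treated_out, control_out = [], []
    for _ in range(n):
        confounder = rng.gauss(0, 1)
        if randomise:
            treated = rng.random() < 0.5        # coin-flip assignment
        else:
            treated = confounder > 0            # self-selection
        outcome = true_effect * treated + confounder + rng.gauss(0, 1)
        (treated_out if treated else control_out).append(outcome)
    return (sum(treated_out) / len(treated_out)
            - sum(control_out) / len(control_out))

biased = estimate_effect(randomise=False)   # inflated well above 2.0
unbiased = estimate_effect(randomise=True)  # close to the true effect, 2.0
```

The coin flip is the whole trick: it breaks the link between who gets treated and everything else about them, known or unknown.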
In order to understand economics, you must first understand chemistry. That’s my story at least, and I’m sticking to it. I’m neither an economist nor a chemist (not a real one anyway), but I’ve been thinking a lot about how to understand economics in chemical terms.
In a previous post I discussed autocatalysis, the mechanism by which a bunch of different molecules can react with each other in such a way that they end up producing more of themselves, at the cost of using something else up. The ideas in that post don’t only apply to chemistry – you can use them to think about just about any kind of physical process. In this post I’ll talk about how to think about the economy as a whole in autocatalytic terms. But let’s start with something on a smaller scale, the process of baking bread:
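As a toy version of that idea (my own sketch, with invented rate constants), here’s what a single autocatalytic step X + F → 2X looks like in code: X — the yeast, say — copies itself by consuming F, the food, until the food runs out.

```python
def autocatalysis(x0=1.0, f0=100.0, rate=0.01, steps=200, dt=1.0):
    """Simulate the autocatalytic reaction X + F -> 2X with simple
    mass-action kinetics and forward-Euler time steps. A sketch, not
    real chemistry: units and the rate constant are arbitrary."""
    x, f = x0, f0
    history = []
    for _ in range(steps):
        reacted = rate * x * f * dt   # amount of F converted this step
        reacted = min(reacted, f)     # can't consume food that isn't there
        x += reacted                  # each reaction yields an extra X
        f -= reacted
        history.append((x, f))
    return history

traj = autocatalysis()
# X grows (slowly at first, then explosively) until F is exhausted;
# the total x + f stays constant throughout.
```

The characteristic S-shaped growth curve — slow start, rapid middle, plateau when the raw material runs out — is the signature that will reappear when we turn to the economy.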
This post is about an idea I’ve had for a long time, about an experiment to test whether plants can learn. I’m very far from being a plant biologist, so I’m unlikely to ever be in a position to do this experiment, but it’s an interesting thing to think about.
I’ve read a couple of interesting books recently: one was “The End of Science” by John Horgan, and the other was “Radical Embodied Cognitive Science” by Anthony Chemero. Horgan’s theme was the question of whether the fundamentals of science are now so solid that before long nothing genuinely “new” will be left to find, and science will be reduced to either obsolescence, or puzzle-solving type application of existing theories to particular problems. The only other type of science that still exists, according to Horgan, is “ironic” science: a kind of semi-postmodern project to explain or describe what we already know in more “beautiful” or appealing forms, but one which never produces hypotheses that are empirically testable and, for this reason, doesn’t actually advance knowledge. Horgan is distinctly dismissive of this kind of science, as being not “proper” science (he deliberately compares it to postmodern literary criticism, which he seems to have particular contempt for, having once been a student of it himself). Chemero would be, I’m sure, classified by Horgan as an ironic scientist. I don’t think Chemero would be able to deny that in a sense his philosophy is empirically untestable, but he certainly argues that it is pragmatic in the sense of being useful to scientists engaged in solving real-world problems.