January 5, 2014

## More weird properties of chaos: non-mixing

We’ve done “chaos is not randomness” before. Here’s another interesting property, to do with mixing.

Mixing is a property of dynamical systems whereby the state of the system in the distant future becomes statistically independent of its initial state (or of any given state a long way in the past). This is pretty much the same as the kind of mixing you get when you put milk in a cup of tea and swirl it around: obviously when you first put the milk in, it stays roughly where you put it, but over time it spreads out evenly. The even spread of the milk will be the same no matter where you put the milk in originally. More formally, if

$P(x_0)$

is a “distribution” or density function of where the “particles” of milk are when you have just put them in the tea, and

$P ( x_t )$

is the distribution after $t$ seconds, then “mixing” is formally defined as

$\lim_{t \to \infty} P(x_0, x_t) = P(x_0)P(x_t)$

You don’t have to think of these distributions as probability distributions, but I find it easier if you do. For those who know probability, the above obviously says that the distribution of the milk after a long time is probabilistically independent of its distribution at the start.

In cups of tea, this happens (mostly) because of the “random” Brownian motion of the milk (possibly enhanced by someone swirling it with a spoon).
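None of this needs real tea: a toy random walk shows the same thing. Below is a quick Python sketch (my illustration, not from the original post) in which “milk particles” do Brownian motion in a one-dimensional cup with reflecting walls. Wherever you drop the milk, the long-run distribution is the same, so the final position tells you essentially nothing about the start:

```python
import random
random.seed(1)

def brownian_mix(x0, steps=500, width=1.0):
    """A 'milk particle' doing a random walk in a cup with reflecting walls."""
    x = x0
    for _ in range(steps):
        x += random.gauss(0, 0.05)
        x = abs(x)                    # reflect off the wall at 0
        x = width - abs(width - x)    # reflect off the wall at `width`
    return x

# Drop all the milk at x = 0.1, or all of it at x = 0.9: after enough
# mixing the final distribution is the same either way.
n = 2000
mean_a = sum(brownian_mix(0.1) for _ in range(n)) / n
mean_b = sum(brownian_mix(0.9) for _ in range(n)) / n
print(round(mean_a, 2), round(mean_b, 2))  # both close to 0.5
```

The two averages land in the same place because the reflected random walk forgets its starting point, which is exactly the independence the formula above expresses.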

October 1, 2013

## 3D printed bee colour spaces

Think3DPrint3D has generously donated 3D printer time and plastic filament to the unusual task of rendering the usually intangible concept of honeybee colour spaces into real, physical matter!

The Spaces

The first two spaces we printed were chosen by me, partly because I think they are theoretically interesting, and partly because, unlike some other colour spaces, they are finite, 3D objects.

June 24, 2013

## Measure Theoretic Probability for Dummies: Part I

Nothing makes me empathise more with those struggling with probability theory than reading things like this on Wikipedia:

> Let (Ω, F, P) be a measure space with P(Ω) = 1. Then (Ω, F, P) is a probability space, with sample space Ω, event space F and probability measure P.

This is written so that only the people who already know what it is saying can understand it. The only possible value of this sentence would be to someone who managed to study measure theory without being exposed to its most widespread application; in other words: no one! Whilst the attitude that this, and so many other Wikipedia pages, display encourages people to be precise in a way that mathematicians cherish, it also alienates a lot of perfectly capable, intelligent people who simply run out of patience in the face of the relentless influx of oblique statements.

Personally, I think that understanding probability spaces is very important, but for reasons including those above, most people find the measure-theoretic formalisation daunting. Here I have tried to outline the most widely used formalisation, which has turned out to be far more work than I expected…
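As a taster, here is a minimal Python sketch (my illustration, not part of the formal development) of the triple (Ω, F, P) in the easiest possible case: a finite sample space with the full power set as event space, and a uniform measure:

```python
from fractions import Fraction

# Sample space Ω for one roll of a fair die; the event space F is taken
# to be every subset of Ω (the power set), which works for finite Ω.
omega = frozenset({1, 2, 3, 4, 5, 6})

def P(event):
    """The probability measure: uniform on Ω, defined for any event in F."""
    assert event <= omega, "an event must be a subset of the sample space"
    return Fraction(len(event), len(omega))

even = frozenset({2, 4, 6})
print(P(omega))        # 1   -- the condition P(Ω) = 1
print(P(even))         # 1/2
print(P(frozenset()))  # 0
```

The measure-theoretic machinery earns its keep when Ω is uncountable and F can no longer be the whole power set, which is where the real work in the posts below begins.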

June 1, 2013

## Friston’s Free Energy for Dummies

People always want an explanation of Friston’s Free Energy that doesn’t have any maths. This is quite a challenge, but I hope I have managed to produce something comprehensible.

This is basically a summary of Friston’s Entropy paper (available here). A friend of jellymatter was instrumental in its production, and for this reason I am fairly confident that my summary is going in the right direction, even if I have not emphasised exactly the same things as Friston.

I’ve made a point of writing this without any maths, and I have highlighted what I consider to be the main assumptions of the paper, marking each with a P.

April 26, 2013

## Why (: is upside down

I think it is fairly intuitive that the smiley in the title is upside down. But why is this?

Generally, when we look at a face we look at the eyes first. These days it is pretty easy to track where people look: the equipment is cheap and readily available – one simply uses a camera to watch the pupil and then calculates where the subject is looking. A preference for beginning with the eyes is a widely observed phenomenon (‘eyes are special’).

English readers, like readers of most languages, scan from left to right when reading. Through reading we constantly train ourselves to prefer moving left to right, which leads to a phenomenon often called a reader’s bias (see this). And it is not only during reading that the left-to-right direction is preferred.

So, it is not surprising that we should think that smilies with eyes on the left are correct: left to right is preferred for reading, and eyes to mouth is preferred for viewing faces.

September 24, 2012

## More wrong interpretations of P values – “repeated sampling”

A while ago I wrote a little rant on the (mis)interpretation of P-values. I’d like to return to this subject, having investigated a little more. In this post, I’m going to point out an interesting little subtlety noted by Fisher that I hadn’t thought about before; in the second post, I will argue that P-values aren’t as bad as they are sometimes made out to be.

So, last time I stressed that you can’t interpret a P-value as a probability or frequency of anything unless you add “given that the null hypothesis is true”. Most misinterpretations, e.g. “the probability that you would accept the null hypothesis if you tried the experiment again”, make this error. But there is one common interpretation that is less obviously false: “a P-value is the probability that, in another experiment, the data would deviate as or more strongly from the null hypothesis than they did in the current experiment, given that the null hypothesis is true”. This sounds like a more careful statement, but the problem is that when we calculate P-values we take into account aspects of the data that are not necessarily related to how strongly they deviate from the prediction of the null hypothesis. This can be misleading, so we’ll build the idea up more precisely in this post.
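The “repeated sampling” reading can be made concrete with a quick simulation. The sketch below is illustrative only (the coin-flip example and the choice of deviation measure are mine, not Fisher’s): it estimates a two-sided P-value for 16 heads in 20 tosses by literally repeating the experiment many times under the null:

```python
import random
random.seed(0)

n, observed_heads = 20, 16
null_p = 0.5  # the null hypothesis: a fair coin

def deviation(heads):
    """How strongly a result deviates from the null's prediction of n/2."""
    return abs(heads - n * null_p)

# "Repeated sampling": simulate many experiments assuming the null is
# true, and count how often the data deviate at least as strongly as
# the observed data did.
trials = 50000
count = 0
for _ in range(trials):
    heads = sum(random.random() < null_p for _ in range(n))
    if deviation(heads) >= deviation(observed_heads):
        count += 1
p_value = count / trials
print(p_value)  # near the exact two-sided binomial value of ~0.0118
```

Notice that everything hinges on the chosen `deviation` function: a different choice of what counts as “as or more extreme” gives a different P-value from the very same data, which is the subtlety the post goes on to unpack.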

July 1, 2012

## Visualizing the mutual information and an introduction to information geometry

For a while now I have had an interest in information geometry. The maxims that geometry is intuitive maths and information theory is intuitive statistics seem pretty fair to me, so it’s quite surprising to find a lack of easy-to-understand introductions to information geometry. This is my first attempt: the idea is to get a geometric understanding of the mutual information and to introduce a few select concepts from information geometry.
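Before any geometry, it helps to have the mutual information itself in hand. Here is a small Python sketch (mine, not from the post) computing I(X;Y) directly from a joint distribution table:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) p(y)) ),
    where `joint` is a table of joint probabilities p(x,y)."""
    px = [sum(row) for row in joint]             # marginal of X
    py = [sum(col) for col in zip(*joint)]       # marginal of Y
    return sum(
        p * log2(p / (px[i] * py[j]))
        for i, row in enumerate(joint)
        for j, p in enumerate(row)
        if p > 0
    )

# Independent variables carry zero mutual information...
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
# ...while a perfectly correlated pair carries one full bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
```

The quantity being summed is a log-ratio between the joint distribution and the product of its marginals, and it is precisely this “distance from independence” that the information-geometric picture makes literal.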

April 21, 2012

## An interesting relationship between physics and information theory

Lately I’ve been hanging out on Physics Stack Exchange, a question-and-answer site for physicists and people interested in physics. Someone recently asked a question about the relationship between thermodynamics and a quantity from information theory. It led me to quite an interesting result, which I think is new.

January 12, 2012

## An easy synchronisation experiment

Imagine a circuit that causes a little light to flash on and off. Imagine that the frequency of that flashing is itself dependent on the light at a particular sensor. Imagine that such a circuit is placed next to another identical circuit, such that the light from each circuit is directed at the sensor on the other circuit. What do you expect to see? Find out after the break…
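If you’d like to guess before reading on, the setup can be caricatured in a few lines of Python. The sketch below is my simplification, not the actual electronics: each circuit becomes a phase oscillator whose rate is nudged by the other’s light, i.e. a two-oscillator Kuramoto model:

```python
import math

def simulate(coupling, steps=20000, dt=0.001):
    """Two identical 'flashing circuits' as phase oscillators, each one's
    rate nudged by the light of the other (two-oscillator Kuramoto model)."""
    theta1, theta2 = 0.0, 2.0      # start well out of phase
    w = 2 * math.pi                # natural flash rate: one flash per second
    for _ in range(steps):
        d1 = w + coupling * math.sin(theta2 - theta1)
        d2 = w + coupling * math.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
    # final phase difference, wrapped into (-pi, pi]
    return abs((theta1 - theta2 + math.pi) % (2 * math.pi) - math.pi)

print(round(simulate(coupling=0.0), 2))  # 2.0: the offset never changes
print(round(simulate(coupling=5.0), 2))  # 0.0: the pair has synchronised
```

With the light paths blocked (zero coupling) the two flashers keep their initial offset forever; let each one see the other and the phase difference collapses to zero.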

January 4, 2012

## A secret message from another dimension

We’ve touched on the difference between chaos and randomness before. One strange property of chaotic systems is that they can synchronise with each other: in spite of their intrinsic tendency to vary wildly, a chaotic system can (actually quite easily) be persuaded to match the behaviour of another chaotic system. As this post will show, it is possible to use this property for a kind of secret message transmission.
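The standard construction for this trick is Pecora–Carroll synchronisation; whether the post uses exactly this scheme is my assumption, but the sketch below shows the key property: a receiver that only ever sees the transmitter’s x signal locks onto the whole trajectory, despite starting from a completely different state:

```python
def step(state, drive, dt=0.002, s=10.0, r=28.0, b=8.0 / 3.0):
    """One Euler step of a Lorenz system whose y and z equations see the
    external signal `drive` in place of the system's own x (x-drive)."""
    x, y, z = state
    dx = s * (y - x)
    dy = drive * (r - z) - y
    dz = drive * y - b * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

tx = (1.0, 1.0, 1.0)    # transmitter: an ordinary Lorenz system
rx = (9.0, -5.0, 30.0)  # receiver: identical circuit, very different start
for _ in range(50000):
    broadcast = tx[0]         # the only signal the receiver ever sees
    tx = step(tx, tx[0])      # the transmitter drives itself (plain Lorenz)
    rx = step(rx, broadcast)  # the receiver is slaved to the broadcast
sync_error = abs(rx[0] - tx[0])
print(sync_error)  # tiny: the receiver has locked onto the transmitter
```

To turn this into secret messaging, the transmitter broadcasts x(t) plus a tiny message m(t); the receiver still synchronises approximately, and recovers m(t) as the difference between the broadcast and its own x, while an eavesdropper without a matching Lorenz circuit sees only chaos.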