Archive for ‘Potentially Useful Stuff’

January 29, 2014

Voronoi tessellation on the surface of a sphere (python code)

by Lucas Wilkins

Today I needed to perform a Voronoi tessellation. Given a set of points on a surface, this is a way of splitting the surface up into regions, each containing the area closest to one of the points. Like this:

[Figure: a Voronoi tessellation of a set of points]

The blue points are the set of points I started with, and the black lines show the edges of the Voronoi tessellation. Doing a planar tessellation is quite simple, but I wanted to do it on the surface of a sphere. It’s conceptually quite simple, but the algorithm was really annoying to debug. So to save other people the same frustrations, I thought I’d post my python class.
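As an aside (mine, not the post's): recent versions of SciPy ship scipy.spatial.SphericalVoronoi, which does the same job off the shelf. A minimal sketch, assuming SciPy 0.18 or later:

import numpy as np
from scipy.spatial import SphericalVoronoi

# Sketch using SciPy's built-in spherical Voronoi (not the class from this post).
rng = np.random.default_rng(0)
points = rng.normal(size=(20, 3))                         # random directions
points /= np.linalg.norm(points, axis=1, keepdims=True)   # project onto the unit sphere

sv = SphericalVoronoi(points, radius=1.0, center=np.zeros(3))
sv.sort_vertices_of_regions()

# sv.vertices are the Voronoi vertices on the sphere; sv.regions gives, for each
# input point, the indices of the vertices bounding its cell.
for i, region in enumerate(sv.regions[:3]):
    print(f"point {i}: cell has {len(region)} vertices")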

September 25, 2013

Rotation matrix from one vector to another in n-dimensions

by Lucas Wilkins

Sometimes you need to find a rotation matrix that rotates one vector to face in the direction of another. Here’s some code to do it for vectors of arbitrary dimension. The code is at the bottom of the post.
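For a flavour of one standard construction (my sketch, not necessarily the same as the code below): rotate only in the plane spanned by the two vectors and leave the orthogonal complement fixed. Assuming numpy and non-antiparallel inputs:

import numpy as np

def rotation_matrix_from_vectors(a, b):
    # Return an n x n rotation R with R @ (a/|a|) = b/|b|.
    # Construction: identity on the orthogonal complement, a 2D (Givens-style)
    # rotation in the plane spanned by a and b.
    u = a / np.linalg.norm(a)
    v = b / np.linalg.norm(b)
    c = np.dot(u, v)                    # cos(theta)
    w = v - c * u                       # component of v orthogonal to u
    wn = np.linalg.norm(w)
    if wn < 1e-12:
        if c > 0:
            return np.eye(len(u))       # already aligned
        raise ValueError("antiparallel vectors: the rotation is not unique")
    w /= wn
    s = wn                              # sin(theta), since |v - (u.v)u| = sin(theta)
    return (np.eye(len(u))
            + (c - 1.0) * (np.outer(u, u) + np.outer(w, w))
            + s * (np.outer(w, u) - np.outer(u, w)))

# Usage: rotate one random 5-dimensional direction onto another.
rng = np.random.default_rng(1)
a, b = rng.normal(size=5), rng.normal(size=5)
R = rotation_matrix_from_vectors(a, b)
print(np.allclose(R @ (a / np.linalg.norm(a)), b / np.linalg.norm(b)))  # True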

June 24, 2013

Measure Theoretic Probability for Dummies: Part I

by Lucas Wilkins

Nothing makes me empathise more with those struggling with probability theory than reading things like this on Wikipedia:

Let (Ω, F, P) be a measure space with P(Ω)=1. Then (Ω, F, P) is a probability space, with sample space Ω, event space F and probability measure P.

This is written so that only the people who already know what it is saying can understand it. The only possible value of this sentence would be to someone who managed to study measure theory without being exposed to its most widespread application; in other words: no one! Whilst the attitude that this, and soooo many other Wikipedia pages, display encourages people to be precise in a way that mathematicians cherish, it also alienates a lot of perfectly capable, intelligent people who just run out of patience in the face of the relentless influx of oblique statements.

Personally, I think that understanding probability spaces is very important, but, for reasons including those I mention above, most people find the measure theoretic formalisation daunting. Here I have tried to outline the most widely used formalisation, which has turned out to be far more work than I expected…
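To make the quoted definition concrete (my example): a single fair coin flip is already a probability space, with sample space \Omega = \{H, T\}, event space \mathcal{F} = \{\emptyset, \{H\}, \{T\}, \{H, T\}\} and probability measure P given by P(\emptyset) = 0, P(\{H\}) = P(\{T\}) = 1/2 and P(\{H, T\}) = 1, so that P(\Omega) = 1 as the definition requires.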

June 1, 2013

Friston’s Free Energy for Dummies

by Lucas Wilkins

People always want an explanation of Friston’s Free Energy that doesn’t have any maths. This is quite a challenge, but I hope I have managed to produce something comprehensible.

This is basically a summary of Friston’s Entropy paper (available here). A friend of jellymatter was instrumental in its production, and for this reason I am fairly confident that my summary is going in the right direction, even if I have not emphasised exactly the same things as Friston.

I’ve made a point of writing this without any maths, and I have highlighted what I consider to be the main assumptions of the paper and marked them with a P.

July 1, 2012

Visualizing the mutual information and an introduction to information geometry

by Lucas Wilkins

For a while now I have had an interest in information geometry. The maxims that geometry is intuitive maths and information theory is intuitive statistics seem pretty fair to me, so it’s quite surprising to find a lack of easy-to-understand introductions to information geometry. This is my first attempt: the idea is to get a geometric understanding of the mutual information and to introduce a few select concepts from information geometry.
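As a purely numerical warm-up before the geometry (my sketch, not part of the original post), the mutual information of a discrete joint distribution can be computed directly from the joint and its marginals:

import numpy as np

# Sketch: mutual information (in bits) of a discrete joint distribution p(x, y).
def mutual_information(p_xy):
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal p(y)
    independent = p_x @ p_y                    # what the joint would be under independence
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / independent[mask]))

# A correlated pair of binary variables.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(mutual_information(joint))               # about 0.278 bits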

March 13, 2012

Printing .PDF stickynotes in linux

by Lucas Wilkins

Not the usual kind of topic, but this needs to go out into the wide world. Here is how to print the annotations that one sometimes gets on a PDF file:

First install or upgrade Adobe Reader to 9.0:
sudo apt-get install acroread
or
sudo apt-get upgrade acroread

Then back up and open the following file in your home directory: “~/.adobe/Acrobat/9.0/Preferences/reader_prefs”
cd ~/.adobe/Acrobat/9.0/Preferences/
cp reader_prefs reader_prefs.backup
gedit reader_prefs

gedit will complain about the encoding, but ignore it and click “edit anyway” (we have a backup if anything goes badly wrong). Find the bit where it says:
/printCommentPopups [/b false]
and change “false” to “true”, then save the file, so that it looks like:
/printCommentPopups [/b true]

Now you can just open up your file in adobe reader
acroread filename.pdf
and print, making sure to select the “Documents and Markups” option in the “Comments and Forms” combo box in the print dialogue.

January 4, 2012

A secret message from another dimension

by James Thorniley

We’ve touched on the difference between chaos and randomness before.  One strange property of chaotic systems is that they are able to synchronise to each other, so that in spite of their intrinsic tendency to vary wildly, a chaotic system can (actually quite easily) be persuaded to match the behaviour of another chaotic system. As this post will show, it is possible to use this property for a kind of secret message transmission.
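To give a flavour of what “persuaded to match” means (a toy sketch of the classic Pecora–Carroll scheme, assumed here rather than taken from the post): a response copy of the Lorenz system that receives only the drive system’s x-coordinate ends up tracking the drive’s y and z as well.

# Toy sketch (not the post's code): Pecora-Carroll synchronisation of two Lorenz
# systems. The response copy receives only x from the drive, yet its (y, z)
# variables converge to the drive's.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.001, 50000

x, y, z = 1.0, 1.0, 1.0       # drive system
yr, zr = -5.0, 20.0           # response system, started far from the drive

for _ in range(steps):
    dx = sigma * (y - x)             # drive: full Lorenz equations
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    dyr = x * (rho - zr) - yr        # response: same equations, driven by the drive's x
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr

print(abs(y - yr), abs(z - zr))      # both shrink towards zero: synchronised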

December 1, 2011

Faster JSON in Python

by Lucas Wilkins

Retracted due to my massive fuckwittery.

March 31, 2011

Drawing Confidence Ellipses and Ellipsoids

by Lucas Wilkins

I’ve seen some really bad methods for drawing confidence ellipsoids recently; they all seem to make it really complicated and confusing (and specific). So I thought I would show how to calculate points on an ellipse corresponding to a covariance matrix – this method works for any number of dimensions without modification.

For all those that don’t care why, the method to generate the points of an ellipsoid is as follows:

1) make a unit n-sphere (which for 2D is a circle with radius 1), call these points X:

If it is an ellipse you want, make a matrix with columns of \sin(\theta) and \cos(\theta) for some incrementing \theta values between 0 and 2\pi.

2) apply the following linear transformation to get the points of your ellipsoid (Y):

Y = M + kC(\Sigma)X

where M is the vector of the means (the centre of the ellipsoid) and \Sigma is the covariance matrix. C represents the Cholesky decomposition, sort of a matrix square root. k is the number of standard deviations at which one wishes to draw the ellipse.

The Cholesky decomposition can be accessed as “numpy.linalg.cholesky” in Python, “Cholesky” in R (Matrix package), “chol” in MATLAB and “spotrf” (amongst others, I think) in LAPACK.
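In numpy the whole recipe is only a few lines; a sketch for the 2D case (variable names and example values are mine):

import numpy as np

# Sketch of the recipe above, for the 2D case.
M = np.array([1.0, 2.0])                        # vector of means (centre)
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])                  # covariance matrix
k = 2.0                                         # number of standard deviations

theta = np.linspace(0.0, 2.0 * np.pi, 100)
X = np.vstack([np.cos(theta), np.sin(theta)])   # step 1: points on the unit circle
C = np.linalg.cholesky(Sigma)                   # Cholesky factor: C @ C.T == Sigma
Y = M[:, None] + k * C @ X                      # step 2: Y = M + k C(Sigma) X

# Y is 2 x 100; plotting Y[0] against Y[1] draws the k-standard-deviation ellipse.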

For those who care, here is why this works…
