Is it okay to post art on a science blog? Well, this is kind of science, so I guess it’s kind of okay.
Here is a little piece of computer-generated music that I created yesterday:
As Twitter user @DanieleTatti noted, it sounds like a sort of Scottish raga. But what I wanted to post about was the algorithm used to generate that ever-changing sequence of pitches and warbles. It’s quite a simple idea – simple enough, in fact, that the whole piece is generated by the following 140 characters of SuperCollider code:
This post is about a classic probability puzzle. It goes something like this: I place two envelopes on the table in front of you. One of them contains a Prize, which is an amount of money in pounds, but you don’t know how much it is. The other one contains a Special Bonus Prize, which is worth exactly twice as much money as the Prize. It’s your lucky day — but you can only choose one envelope. Which do you choose?
“Well,” you say to yourself, “it doesn’t matter, they’re both the same,” so you pick one at random. Let’s say it’s the one on the left. But now I ask you if you want to change your mind.
“Well,” you might say to yourself, “let x be the amount of money in the envelope I’m holding. This envelope has a 50% chance of being the Prize, in which case the other envelope contains 2x. On the other hand, there’s a 50% chance that this is the Special Bonus Prize, in which case the other envelope contains 0.5x. But still, the expected value of the other envelope is 0.5*2x + 0.5*0.5x = 1.25x. So on the balance of probabilities I should definitely switch.” But then I offer to let you switch again, and again, and again, and every time you go through the same reasoning, never managing to settle on a particular envelope because each one seems like it should contain more money than the other. Clearly something is wrong with this reasoning, but what is it?
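Before digging into the Bayesian resolution, it’s worth noting that a direct simulation shows the 1.25x argument can’t be right: over many repetitions, always switching earns exactly as much as always keeping. This is a minimal sketch (the prize amount of 100 is an illustrative assumption):

```python
import random

def simulate(trials=100_000, prize=100):
    """Compare 'always keep' vs 'always switch' when one envelope
    holds the Prize and the other holds exactly twice as much."""
    keep_total = switch_total = 0
    for _ in range(trials):
        envelopes = [prize, 2 * prize]
        random.shuffle(envelopes)
        chosen, other = envelopes   # you pick one at random
        keep_total += chosen
        switch_total += other
    return keep_total / trials, switch_total / trials

keep, switch = simulate()
# Both averages come out at 1.5 * prize: switching gains you nothing.
```

The two totals are equal on average because whatever you gain by switching from the Prize to the Bonus, you lose by switching from the Bonus to the Prize — which is exactly the symmetry that the 1.25x calculation somehow manages to break.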
In this post, I’ll solve this problem in what I consider to be the proper Bayesian way, pinpointing exactly where the problem is. You might want to think about the question for a bit and come up with your own idea of its solution before reading on.
In order to understand economics, you must first understand chemistry. That’s my story at least, and I’m sticking to it. I’m neither an economist nor a chemist (not a real one anyway), but I’ve been thinking a lot about how to understand economics in chemical terms.
In a previous post I discussed autocatalysis, the mechanism by which a bunch of different molecules can react with each other in such a way that they end up producing more of themselves, at the cost of using something else up. The ideas in that post don’t only apply to chemistry – you can use them to think about just about any kind of physical process. In this post I’ll talk about how to think about the economy as a whole in autocatalytic terms. But let’s start with something on a smaller scale, the process of baking bread:
This post is about an idea I’ve had for a long time, about an experiment to test whether plants can learn. I’m very far from being a plant biologist, so I’m unlikely to ever be in a position to do this experiment, but it’s an interesting thing to think about.
Lately I’ve been hanging out on Physics Stack Exchange, a question-and-answer site for physicists and people interested in physics. Someone asked a question recently about the relationship between thermodynamics and a quantity from information theory. It led me to quite an interesting result, which I think is new.
Here’s an interesting fact: apparently, chemical self-reproduction is easier to achieve in gases than in liquids. This suggests an intriguing idea: maybe the very first steps in the origins of life took place not in the oceans but in the atmosphere. The mechanisms by which molecules can produce more of themselves are fascinating, and in this post I’ll explore a bit about how such molecular reproduction (or, to use the technical term, autocatalysis) works.
While we’re on the subject of Things Whose Existence is Surprising, here’s another one. The reason this particular thing’s existence is surprising is that this machine solves a problem that most people would never have imagined was a problem.
That problem is this: imagine you’re in a cafeteria of some kind, eating some kind of egg salad. This salad contains many slices of delicious hard-boiled egg. But, oh no, these egg slices are inconsistently sized. Worse, some of them don’t even contain any of the yolk. If only there was some way to make every slice perfect, just like the ones that go right through the centre of the egg.
It seems like there should be a word that goes in the bottom-right here:
However, as far as I’m aware no such word exists, so we’ll have to make one up. Does anyone have any good ideas?
To be clear, what I’m after is a general term for any quantity whose units are entropy-units-per-time-unit, i.e. J K⁻¹ s⁻¹ or bits per second. The term “entropy production” is currently in use for the rate at which systems create entropy, but I want a word that can also refer to the rate at which systems extract negative entropy from their surroundings. (You can have a power loss as well as a power gain.)
The only thing I can think of is “empowerment”, which sort-of makes sense but is icky.
I’ve just been reading this New Scientist article about something called “hyperbolic discounting”. It’s an experimental observation about human behaviour that economists don’t like because they think it’s irrational. However, a quick back-of-the-envelope calculation shows that it is the economists who are being irrational in thinking this.
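One standard argument along these lines (a sketch of the general idea, not necessarily the exact calculation in the post): an agent who discounts exponentially at some hazard rate λ, but is *uncertain* about λ, ends up with an effectively hyperbolic discount curve when the exponential factors are averaged over the prior. With λ drawn from an exponential prior of rate a, the average works out to exactly 1/(1 + t/a). The prior and the choice a = 1 below are illustrative assumptions:

```python
import math
import random

def averaged_discount(t, a=1.0, samples=200_000, seed=0):
    """Average the exponential discount factor exp(-lam * t) over an
    uncertain hazard rate lam ~ Exponential(rate a)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        lam = rng.expovariate(a)     # sample an uncertain hazard rate
        total += math.exp(-lam * t)  # exponential discounting given lam
    return total / samples

for t in [0.5, 1.0, 2.0, 4.0]:
    mc = averaged_discount(t)
    hyp = 1 / (1 + t)                # hyperbolic curve for a = 1
    print(f"t={t}: averaged {mc:.3f} vs hyperbolic {hyp:.3f}")
```

So behaviour that looks “irrationally” hyperbolic from the outside can be exactly what a rational exponential discounter should do under uncertainty about how risky the delayed reward really is.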
Humans are the only animal to have these weird little tufts of hair above their eyes. I find the question of why this is so to be surprisingly interesting — interesting enough to be worthy of a long and rambling post about human evolution.