Archive for ‘Opinion’

October 14, 2013

Small, far away

by James Thorniley

Can you tell the difference between small and far away?

It’s funny because it should be obvious, but actually distinguishing between small and far away based on visual information is slightly tricky to explain.

July 4, 2013

Every bad model of a system is a bad governor of that system

by Lucas Wilkins

"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform." – Ada Lovelace

James recently posted about a Guardian article on “big data”. The article outlines the many roles which algorithms play in our lives, and some of the concerns their prevalence raises. James’ contention, as far as I understand, was with the focus on algorithms, rather than the deeper issues of control, freedom, and agency. These latter issues are relevant to all types of automation, from assembly lines to artificial intelligences.

Automation flies in the face of any attempt to give some human activity special merit, whether this is our capability to produce, to create, make choices, procreate, socialise or whatever. It relentlessly challenges our existential foundation: I am not made human by what I make as I can be replaced by a robot, it’s not what I think, I can be replaced by a computer, etc. Each new automation requires us to rethink what we are.

July 2, 2013

“Algorithm”… You keep using that word…

by James Thorniley

The Guardian has a feature article entitled How Algorithms Rule the World:

From dating websites and City trading floors, through to online retailing and internet searches (Google’s search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola), algorithms are increasingly determining our collective futures.

The strange thing about this is that the algorithms mentioned are nothing like the algorithms you learn about in computer science. Usually, an algorithm refers to a (generally) deterministic sequence of instructions that allow you to compute a particular mathematical result; a classic example (the first offered by Wikipedia) being Euclid’s algorithm for finding the greatest common divisor (it doesn’t have to strictly involve numbers – anything symbolic will do: one could easily create an algorithm to transliterate this entire post so that it was ALL IN CAPITALS, for example).
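Euclid’s algorithm is short enough to write down in full, and so is a symbolic “ALL IN CAPITALS” transliterator. A minimal Python sketch (the function names are mine, chosen for illustration):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

def shout(text):
    """A purely symbolic 'algorithm': transliterate text to ALL CAPS."""
    return "".join(c.upper() for c in text)

print(gcd(1071, 462))        # 21
print(shout("for example"))  # FOR EXAMPLE
```

Both are deterministic recipes with a guaranteed result, which is exactly the classical sense of the word.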

By contrast, the “algorithms” talked about by the Guardian are all about extracting correlations from data: working out what you are going to buy next, if and when you will commit a crime, and so on. What they are talking about, I think, is statistics or machine learning. If you want a more trendy term, perhaps data science – but as far as I can tell these are all pretty much the same thing.

To say that the world was ruled by statistics would sound a bit twentieth century perhaps, so the hip and happening Guardian has maybe just found a more exciting term for an old phenomenon. But I think there is something more to their use of the word algorithm: I don’t think it is the right word, but there is something else they are trying to capture, as one of their interviewees says:

“… The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It’s also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn’t blame our tools.”

The issue is not the standard use of statistics to find interesting stuff in data. The problem is how the results are used in society: applying them in an automated way. This automation is the only commonality I can see with the traditional meaning of an algorithm. In the case of crime detection, insurance calculations or banking systems, the problem is not that there is some data with correlations in it, but that decisions are being at least partly automated, producing either a politically disturbing denial of people’s individual agency or simply some dangerous automatic trades that can crash a stock market.

The term algorithm is being used here to describe something that has a “life of its own” – something Euclid’s algorithm clearly does not have. Euclid’s algorithm couldn’t “rule the world” if it tried (and it can’t try, because you have to be a conscious agent to do that). Algorithms are being talked about here as if they have their own agency: they can “identify” patterns (rather than be used by people to identify patterns), and they can make trades all by themselves. They are scurrying about behind the scenes doing all sorts of things we don’t know about, left to their own devices to live (semi-)autonomous lives of their own.

I think that’s what scares people. Not algorithms as such but the idea of autonomous computational agents doing stuff without oversight, particularly if that stuff (like stock market trading or making decisions for the police) might later have an impact on real people’s lives.

June 1, 2013

Friston’s Free Energy for Dummies

by Lucas Wilkins

People always want an explanation of Friston’s Free Energy that doesn’t have any maths. This is quite a challenge, but I hope I have managed to produce something comprehensible.

This is basically a summary of Friston’s Entropy paper (available here). A friend of jellymatter was instrumental in its production, and for this reason I am fairly confident that my summary is going in the right direction, even if I have not emphasised exactly the same things as Friston.

I’ve made a point of writing this without any maths, and I have highlighted what I consider to be the main assumptions of the paper and marked them with a P.

April 26, 2013

Why (: is upside down

by Lucas Wilkins

I think it is fairly intuitive that the smiley in the title is upside down. But why is this?

[image: green smiley]

Generally, when we look at a face we look at the eyes first. These days it is pretty easy to track where people look, the equipment is cheap and easily available – one simply uses a camera to look at the pupil and then calculates where the subject is looking. A preference for beginning with the eyes is a widely observed phenomenon (‘eyes are special’).

English readers, like readers of most languages, scan left to right when reading. With reading we constantly train ourselves to prefer moving left to right, something which leads to a phenomenon often called a reader’s bias (see this). It is not only during reading that the direction from left to right is preferred.

So, it is not surprising that we should think that smilies with eyes on the left are correct: left to right is preferred for reading, and eyes to mouth is preferred for viewing faces.

September 25, 2012

Paradoxes of probability theory: the two envelopes

by Nathaniel Virgo

This post is about a classic probability puzzle. It goes something like this: I place two envelopes on the table in front of you. One of them contains a Prize, which is an amount of money in pounds, but you don’t know how much it is. The other one contains a Special Bonus Prize, which is worth exactly twice as much money as the Prize. It’s your lucky day — but you can only choose one envelope. Which do you choose?

“Well,” you say to yourself, “it doesn’t matter, they’re both the same,” so you pick one at random. Let’s say it’s the one on the left. But now I ask you if you want to change your mind.

“Well,” you might say to yourself, “let x be the amount of money in the envelope I’m holding. This envelope has a 50% chance of being the Prize, in which case the other envelope contains 2x. On the other hand, there’s a 50% chance that this is the Special Bonus Prize, in which case the other envelope contains 0.5x. But still, the expected value of the other envelope is 0.5*2x + 0.5*0.5x = 1.25x. So on the balance of probabilities I should definitely switch.” But then I offer to let you switch again, and again, and again, and every time you go through the same reasoning, never managing to settle on a particular envelope because each one seems like it should contain more money than the other.  Clearly something is wrong with this reasoning, but what is it?
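To see empirically that something is off, here is a minimal simulation sketch (assuming, purely for illustration, that the Prize is drawn uniformly between £1 and £100): always keeping your first pick and always switching pay out the same on average, despite the “1.25x” argument.

```python
import random

rng = random.Random(42)
n_trials = 100_000
total_keep = total_switch = 0.0

for _ in range(n_trials):
    prize = rng.uniform(1, 100)        # the Prize (unknown to the player)
    envelopes = [prize, 2 * prize]     # Prize and Special Bonus Prize
    rng.shuffle(envelopes)
    total_keep += envelopes[0]         # payoff if you keep your first pick
    total_switch += envelopes[1]       # payoff if you always switch

avg_keep = total_keep / n_trials
avg_switch = total_switch / n_trials
# Both averages come out near 1.5 times the mean Prize: switching gains nothing.
print(avg_keep, avg_switch)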

In this post, I’ll solve this problem in what I consider to be the proper Bayesian way, pinpointing exactly where the problem is.  You might want to think about the question for a bit and come up with your own idea of its solution before reading on.

August 29, 2012

Randomised controlled trials – the “gold standard”?

by James Thorniley

The UK government’s (ever so slightly creepily named) “Behavioural Insights Team” released a report [PDF] (relatively) recently called “Test, Learn, Adapt” (the authors include Ben Goldacre, well known for the book “Bad Science”, and the director of the York Trials Unit, David Torgerson) arguing that more policy decisions should be made on the basis of evidence from randomised controlled trials (RCTs). The report is a really good plain-English explanation of what RCTs are and how they work. It also gives examples of how RCTs can perhaps help to inform policies, by testing whether interventions such as back-to-work schemes or educational programs, um, “work”. According to the report’s blurb:

RCTs are the best way of determining whether a policy or intervention is working.

It’s not hard to find opinion pieces backing up the report’s central idea, and the thesis that RCTs are the best way to “find things out”. Here’s one by Tim Harford, a writer who covers economics; a similar argument made by Paul Johnson who is the director of an economics research group, the Institute for Fiscal Studies; and Prateek Buch, who is a research scientist. A phrase that keeps popping up is “gold standard”. RCTs are “the gold standard in evidence”, says Johnson, or the “gold-standard for showing that medical interventions are effective” according to Buch. Mark Henderson’s book, “The Geek Manifesto” says that the RCT is “commonly considered the ‘gold standard’ for medical research because it seeks systematically to minimise potential bias through a series of simple safeguards”. What exactly does all this mean? I think it’s a question worth asking, since not all science involves RCTs. The Higgs boson for example, was recently “discovered” (if that’s the word) without (as far as I can tell) the need to randomise test subjects. So are RCTs in fact the “gold standard”?

August 1, 2012

The Chemistry of Economics

by Nathaniel Virgo

In order to understand economics, you must first understand chemistry.  That’s my story at least, and I’m sticking to it.  I’m neither an economist nor a chemist (not a real one anyway), but I’ve been thinking a lot about how to understand economics in chemical terms.

In a previous post I discussed autocatalysis, the mechanism by which a bunch of different molecules can react with each other in such a way that they end up producing more of themselves, at the cost of using something else up.  The ideas in that post don’t only apply to chemistry – you can use them to think about just about any kind of physical process.  In this post I’ll talk about how to think about the economy as a whole in autocatalytic terms. But let’s start with something on a smaller scale, the process of baking bread:

July 30, 2012

Can plants learn?

by Nathaniel Virgo

This post is about an idea I’ve had for a long time, about an experiment to test whether plants can learn. I’m very far from being a plant biologist, so I’m unlikely to ever be in a position to do this experiment, but it’s an interesting thing to think about.

July 9, 2012

Ironic science, pragmatism, and the “is best viewed as” argument

by James Thorniley

I’ve read a couple of interesting books recently, one was “The End of Science” by John Horgan, and the other was “Radical Embodied Cognitive Science” by Anthony Chemero. Horgan’s theme was the question of whether the fundamentals of science are now so solid that before long nothing genuinely “new” will be left to find, and science will be reduced to either obsolescence, or puzzle-solving type application of existing theories to particular problems. The only other type of science that still exists, according to Horgan, is “ironic” science. A kind of semi-postmodern project to explain or describe what we already know in more “beautiful” or appealing forms, but which never produces hypotheses that are empirically testable, and for this reason, don’t actually advance knowledge. Horgan is distinctly dismissive of this kind of science, as being not “proper” science (he deliberately compares it to postmodern literary criticism, which he seems to have particular contempt for, having once been a student of it himself). Chemero would be, I’m sure, classified by Horgan as an ironic scientist. I don’t think Chemero would be able to deny that in a sense, his philosophy is empirically untestable, but he certainly argues that it is pragmatic in the sense of being useful to scientists engaged in solving real world problems.
