Every bad model of a system is a bad governor of that system

by Lucas Wilkins

"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform." (Ada Lovelace)

James recently posted about a Guardian article on "big data". The article outlines the many roles which algorithms play in our lives, and some of the concerns their prevalence raises. James' contention, as far as I understand it, was with the focus on algorithms rather than the deeper issues of control, freedom, and agency. These latter issues are relevant to all types of automation, from assembly lines to artificial intelligences.

Automation flies in the face of any attempt to give some human activity special merit, whether that is our capacity to produce, to create, to make choices, to procreate, to socialise, or whatever. It relentlessly challenges our existential foundation: I am not made human by what I make, since a robot can make it in my place; nor by what I think, since a computer can think it for me; and so on. Each new automation requires us to rethink what we are.

Algorithms, in the sense of the Guardian article, are the means by which decision making is automated.

Whilst I agree with much of what James said, I have a slightly different perspective. James located the fear in a perception of computer programs as autonomous, with a "life of their own", going around doing unknown things; I think there are other fears. The first I mentioned above: losing one's own value as a human being by becoming out of touch and incompetent. This is the nature of all technology. Although it is something we should genuinely be concerned about, the only way we can respond is either by changing with the times or by throwing spanners into the works. But for automated decisions there seems to be another, very specific, dimension to it.

We all have an intuitive understanding of the kind of stupidity that computers are capable of. They make bad inferences and are poorly informed. For example, Amazon seems to think that my interests lie only in Bill Hicks, trance music, gibbons and the social construction of reality. There is a small seed of truth in this, but it is a pretty poor model of my tastes. Of course, this is just Amazon's marketing, but I'm sure you can think of plenty of other examples of computers not doing things quite right.
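To make that kind of stupidity concrete, here is a minimal sketch of the sort of naive inference a recommender can make. It is not Amazon's actual system; the functions, categories and data here are entirely hypothetical. The point is just how little it takes for a model to mistake a handful of purchases for a whole person.

```python
from collections import Counter

# A deliberately naive recommender: it models a person purely as the
# categories of their recent purchases, then recommends more of the same.
# The "small seed of truth" becomes the whole model of someone's tastes.

def naive_profile(purchases, top_n=4):
    """Reduce a purchase history to its most frequent categories."""
    categories = Counter(item["category"] for item in purchases)
    return [category for category, _ in categories.most_common(top_n)]

def recommend(catalogue, purchases, limit=5):
    """Recommend anything in the catalogue that matches the crude profile."""
    profile = naive_profile(purchases)
    return [item for item in catalogue if item["category"] in profile][:limit]

# Four quirky purchases stand in for an entire person.
history = [
    {"title": "Bill Hicks: Relentless", "category": "comedy"},
    {"title": "A State of Trance", "category": "trance"},
    {"title": "Gibbon Behaviour", "category": "primatology"},
    {"title": "The Social Construction of Reality", "category": "sociology"},
]
catalogue = [
    {"title": "More Bill Hicks", "category": "comedy"},
    {"title": "Yet More Trance", "category": "trance"},
    {"title": "A Novel You Might Actually Enjoy", "category": "fiction"},
]

print(recommend(catalogue, history))
# The novel never surfaces: the model has no room for tastes it hasn't seen.
```

The sketch is crude on purpose: nothing in it is wrong, exactly, but nothing in it is the person either.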

Usually, software does a very bad job of capturing the subtlety of any given situation. How can we trust machines when we have such direct experience of their failings? And when layers and layers of abstractions and approximations are built on top of each other, what happens then? What fidelity does our model have? How can we keep track of it?

The fact is we don't. Yet slowly and silently, we entrust more and more of our thinking to computers – because if we don't, we become as obsolete as a two-year-old iPhone.

Bad models being taken as truth is the most justified of concerns. The spiraling complexity of computational systems means we can even forget that there is something to go wrong in the first place. The model becomes opaque and we are forced to accept its output at face value: thoroughly vetting and understanding it is a practical impossibility. Eventually, we may even forget that there is something there to understand – the bad model is no longer a model, but what we meant all along.

Statistics of markets become markets themselves. Criminality risk is assumed from demographics. Descriptions are reified into objects. The signifier and the signified are conflated.

The fear is of change, but this change is in how we think and what we know. What is more, our experience makes it easy to see how it could be a change for the worse.

So where James sees a fear of software with a life of its own, I see something more: a fear of things that are dumb, unknown, and inescapable.


3 Comments to "Every bad model of a system is a bad governor of that system"

  1. There are perhaps two possible fears – the Skynet/Matrix scenario, which some of the more extreme coverage of the NSA stuff perhaps suggests, isn't really that realistic. I don't think it's likely that some malevolent AI is going to try to take over the world any time soon. What you're talking about is a more realistic "Kafkaesque" scenario — if Amazon (inexplicably) decides you might be interested in reading The Protocols of the Elders of Zion, should you be regarded as a suspected terrorist? How could you possibly prove that you're not, and what justification could you ask from Amazon for their decision — the system was just doing its job.

    So realistically, these algorithms making dumb mistakes is worrying, but it's only worrying if people are then going to allow the algorithms to determine decisions in an opaque way, without anyone having to provide a justification.

    I read an article about deontology somewhere recently*… as I understand it Kant argued that you can’t rely on the outcomes of an action to determine what’s good — it matters what a person intended (or if they had “good will”), something like that. The problem with machines making decisions is that they can seem to have a certain amount of practical autonomy (ability to make “decisions” on their own) without having some other component (“existential foundation” as you call it, perhaps) that would give them *moral* autonomy — the ability to do things for good or bad reasons.

    * Ok it was probably just the wikipedia page https://en.wikipedia.org/wiki/Deontological_ethics

  2. To put it another way, dumbness is scary generally — there are many dumb *people* in the world who make bad decisions (possibly with good intentions, but they just don't get it right). Many of those people might be policemen, politicians, judges etc., and their bad decisions might have bad consequences for the rest of us. But it's much less scary, because no matter how dumb someone is they can at least in theory be held responsible (within reason – thinking of legal defence based on "insanity" for example). A computer could be dumb or clever, but it can't be good or bad, sane or insane — these categories just don't apply; the system is still amoral.

    • Yes, I agree. I had a part about lack of responsibility, but I deleted it as it was a whole new thing. I had an image from Little Britain, and a reference to automatic phone queues – then it seemed to drift off into other territory. Clearly someone should be responsible for machines doing things: if a factory worker gets welded by a car robot then the factory is responsible. If a government uses computers to make decisions, then it is responsible. But what if you forget there is something there for someone to be held accountable for?
