The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. – Ada Lovelace
James recently posted about a Guardian article on “big data”. The article outlines the many roles which algorithms play in our lives, and some of the concerns their prevalence raises. James’ contention, as far as I understand, was with the focus on algorithms, rather than the deeper issues of control, freedom, and agency. These latter issues are relevant to all types of automation, from assembly lines to artificial intelligences.
Automation flies in the face of any attempt to give some human activity special merit, whether that is our capability to produce, to create, to make choices, to procreate, to socialise, or whatever. It relentlessly challenges our existential foundation: I am not made human by what I make, since a robot can make it in my place; nor by what I think, since a computer can think it for me; and so on. Each new automation requires us to rethink what we are.
Algorithms, in the sense of the Guardian article, are the means by which decision making is automated.
Whilst I agree with much of what James said, I have a slightly different perspective. James located the fear in a perception of computer programs as autonomous, with a “life of their own”, going around doing unknown things; I think there are other fears. The first I mentioned above: losing one’s own value as a human being by becoming out of touch and incompetent. This is the nature of all technology. Although it is something we should genuinely be concerned about, the only ways we can respond are by changing with the times or by throwing spanners into the works. But for automated decisions there seems to be another, very specific, dimension to it.
We all have an intuitive understanding of the kind of stupidity that computers are capable of. They make bad inferences and are poorly informed. For example, Amazon seems to think that my interests lie only in Bill Hicks, trance music, gibbons and the social construction of reality. There is a small seed of truth in this, but it is a pretty poor model of my tastes. Of course, this is just Amazon’s marketing, but I’m sure you can think of plenty of other examples of computers not doing things quite right.
Usually, software does a very bad job of capturing the subtlety of any given situation. How can we trust machines when we have such direct experience of their failings? And when layers and layers of abstractions and approximations are built on top of each other, what happens then? What fidelity does our model have? How can we keep track of it?
The fact is we don’t. Yet slowly and silently, we entrust more and more of our thinking to computers – because if we don’t, we become as obsolete as a two-year-old iPhone.
Bad models being taken as truth is the most justified of concerns. The spiralling complexity of computational systems means we can even forget that there is something to go wrong in the first place. The model becomes opaque and we are forced to accept its output at face value: thoroughly vetting and understanding it is a practical impossibility. Eventually, we may even forget that there is something there to understand – the bad model is no longer a model, but what we meant all along.
The fear is of change, but this change is in how we think and what we know. What is more, our experience makes it easy to see how it could be a change for the worse.
So where James sees a fear of software with a life of its own, I see something more: a fear of things that are dumb, unknown, and inescapable.