What isn’t a computer!?!

by Lucas Wilkins

Carrying on the great jellymatter tradition of expressing a similar opinion under a contrary title, here’s my response to James’ post about brains and computers.

Computers

Before I begin, here’s the etymology of the word computer from www.etymonline.com:

1640s, “one who calculates,” agent noun from compute. Meaning “calculating machine” (of any type) is from 1897; in modern use, “programmable digital electronic computer” (1945; theoretical from 1937, as Turing machine). ENIAC (1946) usually is considered the first. Computer literacy is recorded from 1970; an attempt to establish computerate (adj., on model of literate) in this sense in the early 1980s didn’t catch on. Computerese “the jargon of programmers” is from 1960, as are computerize and computerization.

I think it’s useful to make the distinction between ‘computers’ and ‘PCs’, with ‘computer’ being a general term meaning something or someone that computes, and ‘PC’ meaning something that you might find atop your desk or lap. Computers, as described by Alan Turing’s Turing machine, or by cellular automata, Petri nets and so on, are essentially abstract. They’re the same kind of thing as numbers: you don’t find a raw naked 5 sitting in a tree, but you might find five blackbirds, or five of something else. Collections of five things have certain general properties, for example, being partitionable into a collection of two things and a collection of three things; five isn’t a physical thing, but it can be used to describe physical things. Likewise, ‘computer’ is a way of describing something so that we can deduce other properties from it, asking, for example, “for each of a set of inputs, how long will it be until I get an output?”.
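To illustrate the kind of question that abstraction lets us ask, here’s a small sketch (Python; the ‘machine’ below is entirely hypothetical, a toy of my own invention) that simply counts how many steps a well defined process takes to produce its output for each input.

```python
# A toy, hypothetical "machine": repeatedly halve the input until it reaches 1.
# Describing it abstractly as a computer lets us ask, for each input,
# how many steps pass before there is an output.

def steps_until_output(n):
    steps = 0
    while n > 1:
        n //= 2          # one step of the machine
        steps += 1
    return steps

for n in (1, 8, 1024):
    print(n, "->", steps_until_output(n), "steps")   # 0, 3 and 10 steps
```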

So, taking a pragmatist approach, I suggest that
A PC is a computer
is equivalent to saying
A PC can be usefully described as a computer or
A PC can be usefully described as something which computes

in much the same way as
The tree contains five blackbirds
can be thought of as
The contents of the tree may be usefully described as five blackbirds

If you do not agree with this being what is meant by “is” (contains X = the contents are X) then much of what I have to say loses its grounding, but I reckon it’s reasonable.

One observation I want to make is that people confuse the abstract computer with the physical computing device. There are no ones and zeros floating around in a PC – swooping about carving out the trajectories of boolean logic and lambda calculus. What makes up a PC is lumps of doped silicon, wires, switches and motors. These parts are carefully orchestrated: predictably and reliably performing a particular set of tasks.

When we say that it’s a computer, we abstract these tasks, ignoring lots of things the PC does. When we think of it as something which computes, we are ignoring the fact that the PC produces heat, makes whirring noises and sometimes stops working for no apparent reason. We ignore the physicality of the system: looking at it as only something that has an input and a corresponding, well defined output.

A PC can be usefully described as something which computes
becomes
A PC can be usefully described as something which takes an input and produces a well defined output
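To make that abstraction concrete, here’s a minimal sketch in Python. The ‘device’ below is made up, not a description of any real PC; the point is just that a computational description keeps nothing but the mapping from inputs to well defined outputs, with the heat, the whirring and the occasional crash left out.

```python
# A computational description of a device keeps only the input-to-output
# mapping. The heat, the noise and the unexplained crashes of the physical
# machine are deliberately absent from this picture.

def describe_as_computer(inputs):
    """Characterise a (hypothetical) device purely by the output each input yields."""
    return {pair: pair[0] + pair[1] for pair in inputs}   # this 'device' adds

print(describe_as_computer([(2, 3), (10, 5)]))   # {(2, 3): 5, (10, 5): 15}
```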

A brain can be a computer; there is no reason why not. If I can provide it with an input and expect a particular kind of output, then it’s a computer – it computes – if we choose to look at it that way. There’s nothing wrong with doing this, as long as we don’t reify the computer abstraction and say the brain is nothing other than a computer. A person with a brain can be usefully described as a computer.

This kind of definition is consistent with what many people in natural computing think. Dropping an apple can be thought of as a way of computing the forces on a dropped apple. Then, ignoring lots of the properties of the apple (abstraction) allows us to think of it as not just calculating apple properties, but computing more general things, like the acceleration due to gravity.
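As a concrete (and invented) illustration of that last point: if all we record about the dropped apple is how long it takes to fall a known height, then, since h = ½gt², the ‘output’ of the drop lets us read off the acceleration due to gravity. The numbers below are made up for the sake of the sketch.

```python
# Treating a dropped apple as a computer: the "input" is the drop height,
# the "output" is the fall time, and from h = (1/2) * g * t**2 we recover g,
# ignoring every other property of the apple.

height_m = 1.25       # hypothetical drop height in metres
fall_time_s = 0.505   # hypothetical measured fall time in seconds

g = 2 * height_m / fall_time_s ** 2
print(f"acceleration due to gravity ~ {g:.2f} m/s^2")   # ~ 9.80 m/s^2
```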

Anything can be thought of as a computer, as long as it can be seen as doing some well defined thing fairly predictably.

Minds and Machines

This computational way of looking at something is called operationalization, and it is often argued that minds are different because they cannot be operationalized. This is true if you think a mind is the same thing as I do.

Like the computer approach, I’ll start from:
A person has a mind
is the same as
A person can be usefully described as having a mind

But a mind, compared to a computer, is a completely different way of looking at something: a way of looking at things as having intent, beliefs and desires, purpose and all that. So,

A person can be usefully described as having a mind
is the same as
A person can be usefully described in terms of beliefs and desires etc.

In this way the non-operationalization of minds is the assertion that this description is not equivalent to the computational description I gave before.

This leads me to a question: what makes describing things as having inputs and outputs different to describing something as having beliefs and desires? Could we not, for example, describe a PC as desiring to produce the correct output for an input – as a subservient agent that believes anything we tell it and desires nothing more than to do our bidding – a naive mechanical slave – a robot?

The answer is that this misses the entire point of talking about desires: when we say that something has desires, we are saying that it is behaving as if it wanted to do its own thing. We are saying it has its own beliefs and desires – that it has some understanding of the world that we are unaware of, and some goal we haven’t explicitly given it. We are saying that it deviates from being a computer to some extent – it doesn’t do what we want it to. Similarly: if we choose to describe a PC as something which desires to do calculations for us, then we are in effect saying that it has the potential to do something other than instructed if it so wished: an admission of uncertainty about its behavior – and uncertainty about something’s behavior is not part of a computational description.
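One way to picture the contrast is as below (a sketch only, with invented names and a deliberately crude stand-in for ‘having its own goals’): the computational description is a fixed rule from input to output, while the mind-style description grants the thing hidden beliefs and goals of its own, and so admits that the same request might not produce the output we asked for.

```python
import random

# Computational description: the output is a well defined function of the input.
def computer(x):
    return x * 2

# Mind-style description: the thing has its own hidden state and goals,
# so we cannot be certain it will do what we asked.
class Agent:
    def __init__(self):
        self.mood = random.random()   # private state we never gave it

    def respond(self, x):
        if self.mood < 0.1:           # sometimes it pursues its own ends
            return "not today"
        return x * 2

print(computer(21))          # always 42
print(Agent().respond(21))   # usually 42, but we cannot be sure
```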

When we examine others, we see that they have both the qualities of computers and of minds: they are often predictable and reflexive – computers. But also they are not: they behave with intent and express creativity – they have minds. Both descriptions, even though they are polar opposites, describe what we find in other people. Whilst minds and computers are mutually exclusive abstractions, when we look at the things we find in reality, both are useful. The difference, when it comes down to it, is that the computer abstraction ignores certain uncertainties (like the potential for a power cut to turn off my PC) and the mind abstraction relies on them (privileged information). It’s up to us whether we use one or the other at any particular time, and to use them wisely.
