“Algorithm”… You keep using that word…

by James Thorniley

The Guardian has a feature article entitled How Algorithms Rule the World:

From dating websites and City trading floors, through to online retailing and internet searches (Google’s search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola), algorithms are increasingly determining our collective futures.

The strange thing about this is that the algorithms mentioned are nothing like the algorithms you learn about in computer science. Usually, an algorithm refers to a (generally) deterministic sequence of instructions for computing a particular mathematical result; the classic example (and the first offered by Wikipedia) is Euclid’s algorithm for finding the greatest common divisor. It doesn’t have to strictly involve numbers – anything symbolic will do: one could easily create an algorithm to transliterate this entire post so that it was ALL IN CAPITALS, for example.
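For concreteness, here is a minimal sketch of Euclid’s algorithm in Python – a deterministic, terminating sequence of steps, with no data, no training, and no “life of its own”:

```python
def gcd(a, b):
    """Euclid's algorithm: the greatest common divisor of a and b.

    Repeatedly replace the pair (a, b) with (b, a mod b) until
    the remainder is zero; the surviving value is the GCD.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

Run it twice on the same inputs and you get the same answer – which is exactly the sense of “algorithm” that the Guardian piece is *not* talking about.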

By contrast the “algorithms” talked about by the Guardian are all about extracting correlations from data: working out what you are going to buy next, if and when you will commit a crime and so on. What they are talking about, I think, is statistics or machine learning. If you want a more trendy term, perhaps data science – but as far as I can tell these are all pretty much the same thing.

To say that the world was ruled by statistics would sound a bit twentieth century perhaps, so the hip and happening Guardian has maybe just found a more exciting term for an old phenomenon. But I think there is something more to their use of the word algorithm: I don’t think it is the right word, but there is something else they are trying to capture, as one of their interviewees says:

“… The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It’s also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn’t blame our tools.”

The issue is not the standard use of statistics to find interesting stuff in data. The problem is how the results of this are used in society: applying the results from statistics in an automated way. This automation is the only commonality that I can see with the traditional meaning of an algorithm. In the case of crime detection, insurance calculations or banking systems, the problem is not that there is some data with correlations in it, but that decisions are being at least in part automated, producing either a politically disturbing denial of people’s individual agency or simply some dangerous automatic trades that can crash a stock market.

The term algorithm is being used here to describe something that has a “life of its own” – something Euclid’s algorithm clearly does not have. Euclid’s algorithm couldn’t “rule the world” if it tried (and it can’t try, because you have to be a conscious agent to do that). Algorithms are being talked about here as if they have their own agency: they can “identify” patterns (rather than be used by people to identify patterns), they can make trades all by themselves. They are scurrying about behind the scenes doing all sorts of things we don’t know about, being left to their own devices to live (semi) autonomous lives of their own.

I think that’s what scares people. Not algorithms as such but the idea of autonomous computational agents doing stuff without oversight, particularly if that stuff (like stock market trading or making decisions for the police) might later have an impact on real people’s lives.


7 Responses to ““Algorithm”… You keep using that word…”

  1. Is it the case then, that there are 3 types of memes that regulate human behaviour?
    Internalized: individuals behave as per the memes they carry with them and -over which they exert some control (Dennett would deny this)-
    Social: individuals behave as per external memes enacted by other members of society over which a particular individual has no control (I walk through pathways whose design was not up to me but up to other people)
    Cybernetic: individuals behave as per external memes developed and enacted by artificial autonomous agents. Police would be guided by these agents’ instructions, or business executives would decide what to do as a result of the price of their company’s shares being moved by autonomous agents.
    Internalized memes seem to be as old as brains, social memes exist at least from the time of some distant ape ancestors; what’s new in the world is this layer of human behaviour regulation, not designed by religion or politics, but by an artificial intelligence.

    • Personally (maybe I disagree with Dennett here) I think most if not all of the “real” control lies with human beings. The police can’t honestly claim they are doing what they are doing because the algorithm told them to — they (or people they instructed) were the ones that implemented the algorithm in the first place and they didn’t have to do that, nor do they really *really* have to do what the algorithm says if they don’t like it. So they therefore have to have some ethical justification of their own (and that’s not impossible – say you have some information that a crime is going to be committed, would it not be unethical *not* to try and stop it?)

      I agree with your distinction between social and technological as well though – again even if you feel constrained to act towards social norms you have the choice whether or not to go along with it. But having someone else tell you to do something is different to having a machine that you yourself originally programmed telling you to do something. Also, perhaps algorithms of the sort the article discusses could be used as mechanisms for social control as well — for example you could buy Google AdWords terms so that when someone searches for say “protest”, a link to a government website about how dangerous it is to attend a protest pops up (ok perhaps that’s a provocative example, I’m sure there are much more subtle ways it could happen).

      So yes, perhaps what this article is identifying as you say is that the technological aspect of this is new at least in some sense. If you can make an “algorithm” that is clever enough to appear somewhat autonomous, then it feels more like you can let it take responsibility for its own decisions, and perhaps then the cybernetic control seems a lot more like the social control (when really they are different)?

  2. See slide 3 of this presentation by Geoffrey Hinton for an interesting perspective on the difference (in emphasis) between statistics and machine learning: http://www.cs.toronto.edu/~hinton/ucltutorial.pdf

    Given that definition, I think it’s reasonable to say that article’s mostly about machine learning, rather than algorithms in general. (But then it also mentions algorithmic trading – I don’t know for sure but I imagine a lot of algorithms for that are pretty simple, basically just buy or sell when certain conditions are met, but do it *really* quickly.)

    • Yes Hinton’s distinction makes sense, though I’d say it’s heuristic – those properties mentioned are things that are generally the case for one or the other, but there is a lot of overlap too. Generally I agree though that this is much more like machine learning or AI.

      Algorithmic trading is heavily dependent on machine learning as well though – the idea as I understand it is to use historical data from a market, and perhaps lots of other markets, or contextual time series like weather (which might affect crop yields or whatever), to make a prediction. I think what makes it an “algorithm” in the sense that the original article meant, though, is just the automation of actually trading – rather than doing the machine learning to find the structure in the data and then letting people make the decisions about what to do about it.

      • The distinction is definitely a heuristic one.

        I know that there is a lot of algorithmic trading that relies on machine learning, but I’d be surprised if there wasn’t also a lot of it where a trader just programs in “sell this stock if it falls below this price”. My (not to be trusted) intuition tells me that the latter type of “algorithm” probably trades a lot more money than the machine learning type. That kind of simple algorithm could lead to results that are just as mysterious and unpredictable once you have a whole market full of them.
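        To illustrate the kind of simple rule meant here, a hypothetical sketch in Python (the names `check_orders` and `stop_loss_orders` are invented for illustration, not from any real trading system):

```python
def check_orders(prices, stop_loss_orders):
    """Return the tickers to sell: those whose current price
    has fallen below the trader's preset floor price.

    prices: dict mapping ticker -> current price
    stop_loss_orders: dict mapping ticker -> floor price
    """
    return [ticker for ticker, floor in stop_loss_orders.items()
            if prices.get(ticker, floor) < floor]

# "Sell ACME if it falls below 100, GLOBEX if it falls below 50."
orders = {"ACME": 100.0, "GLOBEX": 50.0}
prices = {"ACME": 95.5, "GLOBEX": 52.0}
print(check_orders(prices, orders))  # -> ['ACME']
```

        Each rule is trivial on its own; the mysterious, unpredictable behaviour only appears when a whole market of such rules trigger each other – one sale pushes the price down, which trips the next trader’s floor, and so on.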

  3. Algorithmic trading, as far as I know, relies heavily on machine learning. There are, of course, thousands of people that use trivial algorithms with their own, typically small, funds to play the markets and are likely to lose in the long run. But the heavyweights, who play with everyone else’s money, hire ML experts (as the article points out) to develop state-of-the-art trading software which can be extraordinarily complex and profitable for them.

    Perhaps the main distinction between a computer program that is “just an algorithm” and one that is AI has to do with its origin: the former is of human origin, the latter is not. In the first case a human designs and has control of every detail in the program, as in any other engineering project. The second case is more like a farmer that prepares the ground and looks after environmental conditions to ensure the seed grows into a plant. In AI you create an environment within which a solution may be found, together with an algorithm that will go and find the solution. We have no idea what the solution may look like; it may surprise us and it may be beyond our intuitive comprehension (it is hard to look at the weights of a neural network and say “that’s just what I thought!”). Therefore it is the learning algorithms, or the genetic algorithms, that are the authors of the solution, not the human that unleashed them. The software produced by them, and their intelligence, is truly artificial. Since we don’t really understand the numbers in the solution we can’t tweak them; if there is a bug we would not know which numbers to change; we cannot predict what these programs will do; they are the best predictors of their own behaviour. If they were to encounter circumstances that we did not test for, who knows what they might do? We can make no warranties about their behaviour and, I think I heard Luc saying, they are not allowed in some application domains because of this.

    Yet we are quickly installing this new cybernetic control layer of human behaviour in our society. As an individual you may reject Amazon’s suggestions, but as a crowd, we all use pollution-producing vehicles and buy plastic-wrapped goods. We are trapped; born with the original tag.

