Reification: just because a thing has a name, doesn’t mean it is a thing

by James Thorniley

Science does not rely on investigators being unbiased “automatons.” Instead, it relies on methods that limit the ability of the investigator’s admittedly inevitable biases to skew the results.

So says a paper by J. E. Lewis et al. in which they claim Stephen Jay Gould was wrong when he said the early 19th-century craniometrist Samuel George Morton “finagled” his data to match his own racist preconceptions. They took another look at the data, remeasured some of Morton’s skulls, and claim that Morton’s reported results actually fit his racial bias less well than a fully accurate study would have.

Depressingly, a number of modern-day internet racists seem to have picked up on the headline message “Gould was wrong” and assumed that it means the paper supports racial theories about intelligence or other differences. The paper supports no such ideas, and that isn’t the subject of this post; it’s just worth pointing out.

What this paper is about is whether scientists’ personal biases influence the results they get. This isn’t about whether Morton was “right” in a scientific sense, because everyone agrees he wasn’t. It’s about whether he drew the right conclusions from the evidence available to him. It’s a historical question – modern anthropology has essentially nothing to do with this.

Misconduct and finagling

Gould claimed in his book The Mismeasure of Man and a paper in Science that Morton had “finagled” his data to suit his expectations (i.e. a racial ranking with whites at the top). According to Lewis et al:

Samuel George Morton, in the hands of Stephen Jay Gould, has served for 30 years as a textbook example of scientific misconduct.

But Gould never accused Morton of misconduct. Scientific misconduct would be faking results, lying, that sort of thing. As Gould says in his Science paper:

I suppose that truly deliberate fraud to prevent the exposure of a suspected truth is rare in science. When we do uncover a case, we excommunicate its perpetrator, smugly declare that science purifies itself, and get back to work … [Such cases’] hortatory value in the moralistic tradition permits us to avoid the issue; for we can pose our objective ideal against the transgression and pretend that the vast middle ground does not exist.

If you’re a scientist, you don’t really have to worry about misconduct in your day-to-day dealings. You probably get taught at some stage early in your career that it’s not OK to make up results. From that point on, if you do make up some results, well, when the time comes, you will have to face your creator (or a po-faced Richard Dawkins, if you prefer).

Gould is interested in unconscious bias, and Morton’s “finagling” was his example. In spite of using some confusing language, Lewis et al did actually mostly address the unconscious bias point. They make a strong argument that Morton presented and analysed his data as well as he could have, and on that basis claim unconscious bias wasn’t a problem.

There isn’t space to go into all of their points here, and I wouldn’t be the best judge anyway (I don’t have nearly enough time or the requisite knowledge). But one of the most substantive points Gould makes (which Lewis et al don’t seem to cover) is that Morton appears to have failed to consider the correlation between skull size and overall height. The thing is, small people have small heads, and Morton’s original data sets show this very clearly. So why bother categorizing by race, when stature alone could account for the differences?
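To make the confound concrete, here is a minimal simulation – the numbers are entirely invented and have nothing to do with Morton’s actual measurements. If head size tracks overall stature, then any two groups that happen to differ in average height will also appear to differ in average head size, with no further explanation needed.

```python
# Illustrative simulation (made-up coefficients, not Morton's data):
# head size is generated purely from height, yet two groups that differ
# only in average stature show different average "cranial capacities".
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two arbitrary groups differing only in average height (cm).
heights_a = [random.gauss(175, 7) for _ in range(500)]
heights_b = [random.gauss(165, 7) for _ in range(500)]

def head(h):
    # Head size as a linear function of height plus noise -- the
    # coefficients are hypothetical, chosen only for illustration.
    return 5.0 * h + random.gauss(0, 60)

skulls_a = [head(h) for h in heights_a]
skulls_b = [head(h) for h in heights_b]

all_h = heights_a + heights_b
all_s = skulls_a + skulls_b
print("height-head correlation:", round(pearson(all_h, all_s), 2))
print("group A mean head size:", round(sum(skulls_a) / 500))
print("group B mean head size:", round(sum(skulls_b) / 500))
```

The group difference in head size here is real but entirely secondary: controlling for height would make it vanish, which is exactly the check Gould says Morton skipped.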


The correlation point brings us on (slightly tenuously) to something much more interesting, which was actually what the larger part of The Mismeasure of Man is about (Morton only takes up one chapter, I think). We know that head size is correlated with overall height, and other obvious things like sex, and less obvious things like climate. Potentially there are hundreds of variables that could correlate with head size, and with each other (I’ll leave that to your imagination).

An interesting question might be: which of these variables – head size, climate, etc. – are most strongly correlated with each other? This type of question underlies intelligence metrics. Charles Spearman came up with a formula to produce a value g for “general intelligence” based on the results of a set of tests. Most intelligence tests are pretty similar, so people who score highly on one will score highly on another. If there is a test in the mix whose results aren’t closely related to the others, it’s probably not measuring the same thing. Spearman’s g works by extracting the common factor from the tests that are positively correlated – i.e. most predictive of each other. A value of g thus gives a prediction of a person’s score across many intelligence tests.
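The mechanics can be sketched in a few lines. This is a toy illustration, not Spearman’s original procedure: the test scores are synthetic, generated from one shared factor plus noise, and the “g loadings” are taken as the first principal axis of the score correlation matrix (numpy assumed).

```python
# Toy sketch of the idea behind Spearman's g: extract the first
# principal axis of the correlation matrix of several test scores.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 5

# Each synthetic test = one shared factor + independent noise.
shared = rng.normal(size=n_people)
scores = np.column_stack([
    shared + rng.normal(scale=0.8, size=n_people)
    for _ in range(n_tests)
])

# The "positive manifold": every test correlates with every other.
corr = np.corrcoef(scores, rowvar=False)

# First principal axis of the correlation matrix ~ the g loadings.
eigvals, eigvecs = np.linalg.eigh(corr)        # eigenvalues ascending
loadings = eigvecs[:, -1]
loadings = loadings * np.sign(loadings.sum())  # fix the arbitrary sign

g = scores @ loadings  # each person's single summary score

print("all pairwise correlations positive:", bool((corr > 0).all()))
print("first-factor share of variance:", round(eigvals[-1] / n_tests, 2))
```

Because the tests all predict each other, one number per person (their g score) summarizes most of the variation – which is precisely what makes it tempting to treat that number as a thing.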

What most of Gould’s book is about is the “reification” of g. That is, a sort of category mistake, made initially by Spearman but then by many others, of assuming that because g was a value that could be measured, it was therefore a thing – that there was some kind of “general intelligence” actually in people causing them to score well on the tests or not. But there isn’t. Spearman’s g is a statistical artefact, nothing more.

This isn’t much more than saying that a particular correlation does not prove any particular causal explanation. That is well known, but when there is complicated-sounding maths like “factor analysis” (i.e. Spearman’s g) involved, people get bamboozled. People who cling to racial intelligence theories often try to say: “You don’t understand the maths! We have advanced methods!” The conceit is breathtaking*: statistics are not such an arcane concept. A simple average (e.g. the mean) is a statistic. Spearman’s g is harder to come up with, but it’s exactly the same class of thing.
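A quick way to see the point: a clean, dominant first factor can fall out of the maths even when the data are generated by many small, completely independent causes. In this sketch (synthetic data, numpy assumed), each “test” simply draws on an overlapping subset of independent abilities, and the overlap alone produces a positive manifold and a strong first factor – with no single underlying entity anywhere in the setup.

```python
# A strong first factor without any "general" cause: each test samples
# an overlapping subset of many independent abilities, so the tests
# correlate positively even though no shared entity exists.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_abilities = 2000, 20

# Twenty completely independent abilities -- no common cause at all.
abilities = rng.normal(size=(n_people, n_abilities))

# Three tests, each summing an overlapping window of 12 abilities.
tests = np.column_stack([
    abilities[:, start:start + 12].sum(axis=1) for start in (0, 4, 8)
])

corr = np.corrcoef(tests, rowvar=False)
top_eig = np.linalg.eigvalsh(corr)[-1]

print("pairwise correlations positive:", bool((corr > 0).all()))
print("first-factor share of variance:", round(top_eig / 3, 2))
# A "g"-like factor emerges, yet by construction nothing general exists.
```

The factor is a perfectly good statistic of this data; what it is not, even here where we know the generating process exactly, is evidence of a single causal thing.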

And that’s the nub of what I wanted to get at with this very long-winded post. This isn’t really about Morton; it’s about the more interesting points that Gould makes about reification. In the early 20th century we had factor analysis, but the fancy-sounding statistics that are more popular today (I’m thinking of “information” and “entropy”) are still just that – statistics, with no more intrinsic causal power than the common or garden mean. We have to be careful not to reify them and think that because we can measure “information” there is some mysterious “information” entity out there in the universe, our brains, our genes, or anywhere else.

Information is often a useful concept (as is factor analysis), and I’m not saying it could never measure a “real” thing, just that it could measure a not-real thing as well.
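For instance, Shannon entropy – for all its mystique – is computed from a sample’s frequencies in much the same way a mean is computed from its values. A minimal sketch:

```python
# Shannon entropy is just arithmetic on observed frequencies --
# a summary statistic of a sample, exactly like a mean.
import math
from collections import Counter

def entropy(sample):
    """Shannon entropy (in bits) of a sample's empirical distribution."""
    counts = Counter(sample)
    n = len(sample)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy("aaaa"))      # 0.0 -- a constant sample carries no surprise
print(entropy("abab"))      # 1.0 -- one fair coin's worth
print(entropy("abcdabcd"))  # 2.0 -- two bits for four equally likely symbols
```

Nothing in that function requires an “information” substance in the data; it is the same class of thing as an average, just with logarithms in it.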

Lewis et al are quite disparaging of Gould. There are a lot of dismissive remarks about “science studies” (the scare-quotes are theirs). But this is unfair. They’ve probably got some points about Morton, but they are massively over-egging it to claim they’ve shown that science is objective and unbiased.

Gould finishes his book with the following quote about reification, which I like:

The tendency has always been strong to believe that whatever received a name must be an entity or being, having an independent existence of its own. And if no real entity answering to the name could be found, men did not for that reason suppose that none existed, but imagined that it was something peculiarly abstruse and mysterious.

John Stuart Mill

P.S. Back to the quote I started with: “Science does not rely on investigators being unbiased ‘automatons.’ Instead, it relies on methods…” So what is the difference between a biased human scientist following this unbiased methodology, and an unbiased robot programmed to perform that methodology? Would it be possible to distinguish them in a sort of scientific method Turing test? Perhaps we could replace the human scientists with robots?

* I came across one comment on a blog that actually said in relation to a similar argument: “Sadly, understanding this point requires just enough mathematical ability that it has eluded all but a small number of experts.” Stunning.

7 Responses to “Reification: just because a thing has a name, doesn’t mean it is a thing”

  1. Mr. Thorniley – Could you please e-mail me at I believe your post contains some mischaracterizations/errors, and would like to discuss this with you privately, but have been unable to locate an e-mail address for you.

    I’ll give one example here to show this isn’t a spam comment: you say that the paper is “quite disparaging of Gould.” Really? Because the coverage of this paper in both Nature and the NY Times made special note of its civil tone and the admiration expressed for Gould’s larger body of work. And the leading paleoanthropology blog had this to say of the paper: “The authors wrote in an even tone and lay out the facts in a very straightforward way. As a reader, I can’t see how they managed to keep their cool.”

    • You can contact jellymatter through jellymatter at It is me who gets the messages, but I can forward them.

  2. This is presumably referring to this: – it’s generally positive about the paper but I don’t see where it particularly highlights the tone of the Lewis paper as being civil.

    Well I think whether or not something is disparaging is a judgement call – everyone is entitled to their opinion including the NYT and Nature. But in my opinion it is disparaging, at least by the standards of peer reviewed papers. It twice sarcastically refers to “science studies” (using sarcasm-quotes in the original both times) and suggests Gould is somehow a founder or main pillar of this discipline (which as far as I am aware he’s not).

