The Madness of Crowds

Alex Mason
12 min read · Apr 5, 2021
Front cover of The Madness of Crowds by Douglas Murray

Around 18 months ago I read Douglas Murray’s The Madness of Crowds. I originally intended to review the book, but I’m a very distractible person and instead managed to write a 3000+ word piece on a paragraph Judith Butler once wrote, which he quotes in the book. Oops. So this is my second attempt at writing a review of this book.

I should say up front that while I do not share Douglas Murray’s politics, I do have a level of respect for him as a thinker. I also think identity politics is a topic that should be critiqued at a level slightly above a bearded YouTube man shouting about how feminism is cancer while citing studies he hasn’t read.

Which makes it a shame to say that I can’t in good faith recommend this book. It does have some interesting stuff in it, but it also has things that are just plain shoddy: either highly misleading or outright wrong. Even worse, these errors are then invariably used as evidence for a predetermined picture Douglas has in his head.

I’m going to focus on three examples to illustrate this. One to do with social science, one to do with machine learning, and one small example that also serves as an addendum to my previous piece on this book.

After the end of the first chapter of the book, there’s an interlude entitled “The Marxist Foundations”. In it, Murray argues that universities are rife with Marxists and post-Marxists who write nothing but gobbledegook nonsense with political aims, and whose work can hardly be regarded as science at all.

The gobbledegook claim I covered in the Judith Butler piece, so I’ll focus on the claim about it not being real science. This claim is substantiated by way of highlighting cases of academic fraud. Namely, the “Sokal Squared” scandal, where a team of three academics submitted a series of hoax articles to several journals to see if they would publish them.

Murray writes:

One of the most beautiful things to happen in recent years was ‘The Conceptual Penis as a Social Construct’. This was an academic paper published in 2017 which proposed that:

The penis vis-à-vis maleness is an incoherent construct. We argue that the conceptual penis is better understood not as an anatomical organ but as a gender-performative, highly fluid social construct.

The claim was peer-reviewed and published in an academic journal called Cogent Social Sciences.

Murray makes it sound like Cogent Social Sciences is a reputable outfit in the social science world — it’s peer-reviewed! But in fact, it’s an open-access pay-to-play journal. That is, they’ll publish pretty much anything if you pay the fee. It also wasn’t the hoaxers’ first choice of journal. They had previously attempted to get the paper published in a journal called NORMA: The International Journal of Men’s Studies, but that journal rejected it — a pertinent piece of information that’s weirdly omitted from the book.

Less reputable journals publishing anything is not a phenomenon specific to the social sciences. In computer science, there’s a dedicated tool to generate fake papers called SCIgen. Several papers generated using it have been published in open-access computer science journals, and Chinese scientists started using the tool to boost their publication records, leading to over 100 conference papers being retracted in 2013. The problem got so bad Springer released a tool to detect SCIgen papers in 2015. Mathematics has a similar tool called Mathgen, with at least one journal accepting a paper generated using it for publication.

The first page of an auto-generated science paper, featuring two columns and a diagram
Example of a paper generated by SCIgen

After the “Conceptual Penis as a Social Construct” example, Murray then gives other examples of hoax papers, including “Human Reactions to Rape Culture and Queer Performativity at Urban Dog Parks in Portland, Oregon”, published in Gender, Place & Culture: A Journal of Feminist Geography. He writes:

This paper claimed that dog-humping in Portland parks was further evidence of the ‘rape-culture’ which many academics and students had by then begun to claim was the most perceptive lens through which to see our societies.

I read this paper; you can find it here. What struck me about it is how well written it is. I was expecting an obvious hoax, with sentences that don’t follow on from each other, and references that lead nowhere. But no, someone put time and effort into this. The main giveaway is that jargon is used in an overly broad sense, e.g. “I used a slightly modified inductive grounded theory approach that articulated and generated emerging themes from my recorded observations”. I kept expecting sentences like this to turn out to mean nothing, but every time I checked they seemed reasonable. In this example, grounded theory is a real thing, and its use sounds reasonable if the author actually was collecting data like this. It’s just suspicious how many broad, not-saying-very-much sentences are employed.

But to be honest, the more I read the more I was into this paper. Sure, it’s a ludicrous premise, but are you not slightly interested in knowing the breakdown of victims of dog humping? Are more women humped than men? Do male dogs hump more? Is most dog-on-dog action heterosexual? Do the dog interactions look like they’re mainly consensual? Non-consensual? How do their owners react? Is there a difference between how male dog owners vs female dog owners react?

I’m genuinely interested in the answers to those questions, and so I don’t think the premise is itself disqualifying. The thing that makes the paper a hoax is that the data underlying the author’s claims has been faked. And from that fake data, theories and speculation have been built up. But the speculation itself is only a 6/10 on the “oh no you didn’t” chart for me. The author is clearly having a hoot writing this, but I fear they got slightly too into it and made too many interesting points, which somewhat undermines the hoax aspect. There’s no way for a reviewer to test the claim that someone went and stared at a load of dogs in parks. You sort of have to take it on faith that they did, but then again why would anyone lie about that?

This seems to be a problem for all fields. For example, in 2013 science journalist John Bohannon wrote an article called “Who’s Afraid of Peer Review?”, in which he outlines his investigation of peer review. He submitted a “credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable”, to 304 open-access journals. Over half of them accepted it.

I think we can safely say it’s easier to get an article published in a social science journal, even a peer-reviewed one, than journals for more “rigorous” subjects. But that’s not the claim Murray is making. He paints a picture of a problem specific to the social sciences and the way that they write, but that claim simply does not stand up to scrutiny.

After the second chapter of the book, there’s an interlude entitled “The Impact of Tech”. In that interlude, there’s a subsection entitled “Machine Learning Fairness” (MLF). It’s all about technology and bias in machine learning, and Murray discusses some search terms on Google image search that give counter-intuitive results:

If you search on Google Images for ‘Gay couple’, you will get row after row of photos of happy gay couples. They are handsome, gay people. Search for ‘Straight couple’ by contrast and at least one to two images on each line of five images will be of a lesbian couple or a couple of gay men. Within just a couple of rows of images for ‘Straight couple’ there are actually more photographs of gay couples than there are of straight ones, even though ‘Straight couple’ is what the searcher has asked for.

He then later claims the reason for this is because of human intervention at Google:

What appears to be happening is that something is being layered over a certain amount of MLF: it is MLF plus some human agency. And this human agency seems to have decided to ‘stick it’ to people towards whom the programmers or their company feel angry. This would explain why the searches for black couples or gay couples give you what you want whereas searches for white couples or straight couples are dominated by their opposites.

Assuming intent is always a bit dicey, but it’s especially dicey when you try and do it with political enemies and you have no idea how Google search works. And that’s the key problem here: Douglas Murray does not understand the basics of how Google search works.

Now granted, I’m a computer scientist: obviously I’m going to know more about this than Douglas Murray. But given he’s putting this in a book, you’d have thought he’d have checked with some computer scientists that his claims weren’t completely outlandish. But he did not do that, and it seriously harms the credibility of this work.

The key issue here is how Google knows what is contained in an image. Say I google “cats”; how does Google know to return pictures of cats and not dogs? You may think they use fancy machine learning algorithms to categorise all pictures, and they may even actually do that to some extent. But in terms of top image search results, they rely far more on your search term appearing on the page containing the image. If the picture is placed directly next to a caption that says “cat”, then it’s going to be prioritised more by Google.
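To make the point concrete, here’s a deliberately simplified sketch of that idea — a toy scorer, not Google’s actual algorithm (which is proprietary and vastly more sophisticated). It ranks candidate images purely by how often the query’s terms appear in the text surrounding each image on its host page:

```python
# Toy illustration of text-based image ranking (NOT Google's real
# algorithm): score each candidate image by counting how many times
# the query's terms appear in the text around it on its host page.

def score_image(query: str, page_text: str) -> int:
    """Count occurrences of each query term in the surrounding text."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

def rank_images(query: str, images: list) -> list:
    """Sort images (dicts with 'url' and 'text') by descending score."""
    return sorted(images, key=lambda img: score_image(query, img["text"]),
                  reverse=True)

# Pages about straight couples rarely use the phrase "straight couple",
# whereas LGBT-adjacent pages often do -- so a literal term match
# surfaces the latter for the query "straight couple".
images = [
    {"url": "news/wedding", "text": "couple marries in lovely ceremony"},
    {"url": "lgbt/civil-partnership",
     "text": "straight couple seeks civil partnership like gay couples"},
]
print(rank_images("straight couple", images)[0]["url"])
# -> lgbt/civil-partnership
```

Even this crude term-matching mechanism reproduces the “counter-intuitive” results Murray describes, with no human meddling required: the page that explicitly says “straight couple” wins, and such pages are disproportionately ones where sexuality is the topic.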

I don’t think you need a degree in computer science to work this out; it’s obvious from merely looking at the top results. And given just that piece of information, it should be clear why Murray’s argument is preposterous.

If you search for “gay couple”, you’re going to get gay couples because gay couples are described as “gay couples” in the pages that feature those images. But straight couples aren’t widely described as “straight couples” unless it’s contextually relevant they are straight, or if it’s an LGBTQ+ source. Everyone else just calls them “couples”.

Here are the results I get for “straight couple” as of 4th April 2021:

Top two rows of results for a Google image search for the phrase straight couple
Brought to you by Google’s SJW team

For “straight couple” we see the top result is an image from a BBC article about a straight couple wanting a civil partnership, which makes sense for why it’d call them a “straight couple”. The second result is a 2015 article in an LGBT publication about a married straight couple in Australia who vowed to get divorced in protest if gay marriage was legalised. Gay marriage was later legalised in Australia, and the couple promptly reneged on their vow. The third result is almost the complete opposite of the second, in that it’s a story about a straight couple getting married after six years of refusing to, in protest at gay marriage not being legal in the US. The fourth result is a story about a straight couple in Austria applying for a civil partnership (which was introduced solely for gay couples). Finally, the fifth result is a story in a gay sports publication about a straight couple who met while participating in the D.C. Gay Flag Football League as straight allies.

Of the first 10 results, four are from gay websites that happen to have articles containing the phrase “straight couple” in them. This should not be surprising. Google image search prefers images from news sources because those sources produce a lot of content that is widely linked to.

Here are the results I get for “couple” as of 4th April 2021:

Top two rows of results for a Google image search for the term couple
What a real couple should look like

What a difference! Not a Pride flag in sight, and the only black faces are silhouettes. I kept scrolling to see when the first gay couple would appear, and it was around the 57th image.

If I were using Douglas Murray’s argument, I could say it’s clearly because the staff at Google have a large bias against the gays, and have doctored the algorithm to try and indoctrinate us all. Only, that would be dishonest of me to claim. Heterosexuals are the majority, far more articles and pages are written about them, and it makes sense that the top results for the vague term “couple” are dominated by them. The algorithm would have to have a bias built-in for that not to happen. But it doesn’t, and so that’s not what we see.

Murray also argues the same point regarding “white couple” vs “black couple”, and you’ll never guess what but exactly the same thing applies:

Top two rows of results for a Google image search for the phrase white couple
Something seems off about the content of these white couples’ characters

You get disproportionately non-white-looking couples for the term “white couple”. You even get two images of the same off-white couple, because there was a heavily reported story about Martina Big, who had been injecting melanin in order to identify as black.

The top result is a picture from a Reddit thread explicitly about the disparity when googling “white couple”, and there’s also a result further down (not featured in the above image) of a blog post explicitly talking about this very phenomenon and pointing out that photos of white couples are not usually tagged or identified by race.

Murray gives several other examples, but they all boil down to the same thing: he just doesn’t understand how the algorithm works and so assumes bias and malice. Ironically, he’s right about the bias, just not the source. There is a very large bias of straight white cis people assuming they’re the default, to the point that they never bother to specify the fact they’re straight, white, or cis, and so it skews search results for them.

In my last piece on this book, I methodically went through a quote by Judith Butler that Douglas Murray cited as an example of “deliberately obstructive style ordinarily employed when someone either has nothing to say or needs to conceal the fact that what they are saying is not true”. The quote in question was Butler talking about the book Hegemony and Socialist Strategy, following an email exchange she had with one of the authors.

As an addendum to that piece, while re-reading some of The Madness of Crowds for this piece, I noticed that Murray actually talks about that very book in the same chapter. He says:

It should be said that this is not some obscure work but one that is regularly cited. Indeed, Google Scholar shows it to have been cited more than 16,000 times. In Hegemony and Socialist Strategy as well as other works, including Socialist Strategy: Where Next?, Laclau and Mouffe are perfectly frank about what they think could be achieved and how.

And despite how frank the authors were, less than 10 pages later Murray apparently fails to notice Judith Butler is summarising the content of that very same book he’s definitely read. And this is despite the fact she opens the second paragraph of her article with: “first of all, I think that I was drawn to the work of Laclau and Mouffe when I began to read Hegemony and Socialist Strategy”.

I can think of nothing better to sum up Murray’s approach to analysing the social sciences than the fact that he failed to notice that the clear nefarious Marxist plan that birthed identity politics, and the impenetrable prose that can’t possibly mean anything, are in reality both talking about the exact same thing.

There are lots of other examples of Murray being a shoddy journalist, but I think you get the idea. These are all things he could have very easily looked up, and the fact he didn’t bother to do that seriously damages the credibility of the book. It’s one thing to be biased, it’s another thing to be shamelessly biased using supporting evidence that’s just plain wrong. For me, it brings to mind Michael Crichton’s Gell-Mann Amnesia effect:

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward — reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

Murray in this instance is Murray Gell-Mann, not Douglas. But this kind of error is a regular feature in The Madness of Crowds, and with each one it becomes harder to turn the page and forget what you know.
