
Charles appoints alt-med fan Dr Michael Dixon as Head of the Royal Medical Household

The British Royal Household has its own team of medics, who look after the health of the Royal family. They are led by the holder of a position titled ‘Head of the Royal Medical Household’. Previously this post was held by the eminent Prof Sir Huw Thomas, consultant at King Edward VII’s hospital and St Mary’s hospital in London, as well as professor of gastrointestinal genetics at Imperial College London. However, now that Charles has taken charge, he has appointed a new man: Dr. Michael Dixon.

Dr. Dixon is a retired general practitioner whom I know well. When I started my research at the University of Exeter 30 years ago, we collaborated on several projects. I am not the only one to be familiar with Dr Dixon – because of his notoriously fallacious thinking, the US skeptic Steven Novella once called him a ‘pyromaniac in a field of (integrative) straw men’.

Before the new appointment, Dixon had been a ‘Medical Advisor to The Prince of Wales’ for the last 20 years. What binds the two together is their enthusiasm for so-called alternative medicine. Throughout his adult life, Charles has promoted particularly those alternative health modalities that most overtly fly in the face of science, and it seems to me that Dixon has followed in those footsteps as best he can.

In 1998, for instance, Dixon published a study in the Journal of the Royal Society of Medicine, analysing the effectiveness of spiritual healing, in which he concluded that the laying on of hands:

“may be an effective adjunct for the treatment of chronically ill patients presenting in general practice.”

After he became Charles’ advisor, he was appointed as medical director of ‘The Prince’s Foundation for Integrated Health’. This was the organization which had to close in 2010 amid allegations of fraud and money laundering – allegations which saw its finance director go to prison for three years. Following its demise, the foundation quickly morphed into ‘The College of Medicine and Integrated Health’, of which Charles is a patron and Dixon the chair.

The college is highly active in promoting various types of complementary and alternative medicine, including those that are both implausible and unproven such as ‘Neurolinguistic Programming’, ‘Thought Field Therapy’, homeopathy and Reiki.

Charles once famously said that he is proud to be an enemy of the Enlightenment, but he also insisted that he would stop his lobbying for unproven or disproven health interventions once he had ascended the throne. The UK scientific community had been wondering whether he would be able to keep this promise. The appointment of Michael Dixon as ‘Head of the Royal Medical Household’ was discreet, and went unnoticed by the UK press. It does, however, perhaps go some way towards answering the question as to whether Charles can control his ambitions to advocate for ineffective treatments.

Could it be that, rather than pursuing his anti-science agenda himself, he will now delegate it to those long-term allies whom he has appointed to positions of influence?

Definitions matter – say what you mean, and mean the agreed-upon definitions

The difference between the almost right word and the right word is really a large matter – it’s the difference between the lightning bug and the lightning. – Mark Twain

The word ‘woke’ has been used since the 1940s. In the 1971 play, Garvey Lives!, Barry Beckham wrote, “I been sleeping all my life. And now that Mr Garvey done woke me up, I’m gon’ stay woke. And I’m gon’ help him wake up other black folk.” Its original meaning was being woken up to, or sensitised to, issues of justice. In 2017, this definition was added to the Oxford English Dictionary, which defines it as being ‘aware’ or ‘well-informed’ in a political or cultural sense.

In a poll from Ipsos in 2023, 39% of the respondents believed that woke means “to be overly politically correct and police others’ words.” Bethany Mandel, co-author of Stolen Youth: How Radicals are Erasing Innocence and Indoctrinating a Generation, defined woke in a tweet as, “A radical belief system suggesting that our institutions are built around discrimination, and claiming that all disparity is a result of that discrimination. It seeks a radical redefinition of society in which equality of group result is the endpoint, enforced by an angry mob.”


Imagine Barry Beckham and Bethany Mandel having a lively debate about wokeness, and how confusing the debate would be. The definition of the word is essential to the debate and the two would quickly find themselves talking past each other.

Adam Serwer summed up the complexity of the meaning of woke when he tweeted, “Sometimes when people say “woke” they mean “liberals being self-righteous and vicious about trivial things” and sometimes they mean “integration,” or “civil rights laws” or “black people on television” and it’s convenient not to have to explain what you actually mean.”


It is convenient not to have to explain what you mean, but it is also confusing. When we are having a conversation with someone, we need to agree on what words mean, or else it is like having two separate conversations that give the illusion of meeting but never actually connect.

In a conversation I had recently, we struggled to come to an agreement on the definition of racism. If that sounds preposterous to you, go and ask a few people whether it is possible for a member of a minority to be racist, or if it is possible for a person to be racist against their own race. You will probably get a few different answers. If racism were simple to define, then we should all immediately agree on the application of the word.

By the way, the answer to both questions is yes. A minority can be racist, because the definition is ‘to come to a prejudgment based on the race of an individual’; it is not only used for white people judging non-white people. And, of course, someone can prejudge a person of their own race on this basis.

Related to the confusion over those aspects of racism is the common deflection people use: “I am not racist, some of my closest friends are [race].” Regardless of the race (or gender, or religion, or political affiliation) of your closest friends, when you meet someone, do you make assumptions about that person based on their race (or gender, or religion, or political affiliation)? You can still make assumptions about strangers, even if you have friends who share similarities with that stranger. It is possible for individuals to move beyond their initial prejudices with the people they know, and yet still apply those prejudices when they meet someone new.

When we are speaking with someone in the pub or on the debate stage, we need to agree on the definition of words that might be ambiguous. If accepting a definition that we disagree with is distasteful, then we might need to walk away from the conversation. On the other hand, we can move beyond the disagreeable definition and continue the conversation by saying, “While I don’t agree with that definition, for the sake of the conversation, let’s assume that it is accurate,” and then move on.

Definitions get some people in trouble, and keep other people safe

In April 2022, the head of the US Department of Homeland Security stated that his department had operational control over the southern border. The problem is that the phrase ‘operational control’ has a very specific definition: the prevention of all unlawful entries into the United States, including entries by terrorists, other unlawful aliens, instruments of terrorism, narcotics, and other contraband. By that definition, is it ever possible to have operational control of any national border? This questioning occurred under oath, at a hearing in Congress. I feel sorry for the head of DHS, who was put in the position of either being truthful or appearing competent. He must have realised the problem, because in March of 2023 he corrected his earlier statement by testifying under oath that they did not have operational control of the southern border.

Say what you will about former president Donald Trump, but he has a preternatural skill for saying things that imply illegal instruction without explicitly ordering illegal activity. On the 6th of January, 2021, at the rally he held while the election was being finalised in Congress, he told supporters that they should march on the Capitol, and that they would never take back the country with weakness. He did not tell them to use violence, but he told them that if they were weak, they would not succeed. In his phone call to the Secretary of State of Georgia, he asked Brad Raffensperger to find 11,780 votes. As if the state of Georgia is so disorganised that they often misplace thousands of votes… or perhaps he was telling the man in charge of elections to change a few numbers in a spreadsheet in order to make the election work out in Trump’s favour. You decide. In both situations, he did not tell anyone to act in an illegal manner, but it was clear to everyone what he wanted.

When then-President Clinton denied having sexual relations with intern Monica Lewinsky, he was attempting to thread the needle of a very narrow definition of what sexual relations are. The definition was decided in the same deposition. By the agreed-upon definition, Lewinsky had sexual relations with Clinton, but Clinton did not have sexual relations with Lewinsky. While most people would agree that the definition was too narrow and that he perjured himself, Clinton disagreed, and because the definition had been agreed upon by both sides, he was technically correct.

Definitions matter. Sometimes it feels like the conversation is being derailed in order to debate the meanings of words, but if we cannot agree on what words mean, we will never agree on the things those words describe.

Matt Le Tissier talks to The Skeptic about Covid, vaccines, 15-minute cities and climate change

Yesterday, I wrote about a recent experience at Essex Skeptics in the pub, in which I was heckled by some anti-vaxxers who came along to tell me how wrong I was about everything, and who left telling me what a fantastic time they’d had talking to us about all the various ways we disagreed.

Partly, I wanted to express the importance of talking to people we disagree with, rather than shouting at them – I really do feel that our only way out of this epistemic mess is to continue to engage with people and ensure they’re within reach. But my extended thesis on the value of patience, before it evolved legs and ran away with itself, was actually intended as an introduction to a different conversation I had recently.

Talking to people I disagree with has been a hobby of mine for over a decade now, and it’s the whole purpose of my podcast, Be Reasonable (available everywhere podcasts live, naturally!). As a show, Be Reasonable is something of an acquired taste – the most prominent piece of feedback I get from listeners is “I love the show; I can’t listen to a full episode of it”. That’s because the purpose of the show is to hear from people we disagree with, and to try to understand what makes them tick… which means that, rather than arguing with my guests, or correcting them every time they say something that’s not true, I prioritise building a rapport, letting the conversation flow, and trying to get to the points that I think are most important, avoiding distracting dead ends as much as possible.

Be Reasonable, as a show, grew out of one of the first podcasts in the UK skeptical community, Righteous Indignation, where Hayley Stevens, Trystan Swale and I would interview people we found interesting. Over time, we realised that the community was well served for interviews with prominent skeptics, but there was nobody trying to hear what the other side were saying, and so that’s what we did. When RI ended in 2013, Hayley and I picked up the interview idea with Be Reasonable; after a while, Hayley left to pursue other projects, and I’ve been hosting the show ever since.

For those who are new to the show, there’s a back catalogue of more than 80 episodes, including half a dozen interviews with flat earthers, a number of people claiming to have psychic skills (check out the interview with Vicki Monroe for a live psychic reading that borders on excruciating), conversations with a chap who believes that pterosaurs still live in America and another who believes that humans are the result of cross-species hybridisation. I’ve talked to the inventor of a bleach-based miracle cure that has killed thousands of people, I’ve spoken to more than one person who believes in conversion therapy for gay people, and I interviewed a white supremacist who went on to become the subject of a major documentary. There was the interview with a homeopath who felt that insulting me was the best way to make his point, and the conversation with a hollow earther which takes a hard turn in a completely unexpected direction.

Each of these conversations, I feel, teaches me something – something about how people construct and justify their beliefs, or how their personal circumstances lead them into positions that most people will consider to be obviously false, or how beliefs that seem on the surface harmless can mask far darker implications.

All of which leads me to the most recent episode of the show, with perhaps my most well-known guest to date. Matthew Le Tissier was a footballing idol during a 16-year career at Southampton, and is widely regarded as one of the most creative and talented players ever to grace the English game. Since retiring, he became a household name as a pundit for Sky Sports’ football coverage – a position he would lose in August 2020, over his ongoing social media criticism of the global narrative around COVID-19. Since then, he has built a significant following among Covid conspiracy theorists and anti-vaccine communities, including appearing at in-person events with QAnon promoter Mark Attwood, and with Gareth Icke, son of famed conspiracy theorist (and, by coincidence, another former footballer) David Icke.

I sat down for an hour on Zoom with Matt, to find out what brought him to his current positions on Covid, vaccines, 15-minute cities and climate change. Our conversation was released in Episode 81 of Be Reasonable, for April 2023, and you can watch it in full below:

Patience isn’t just a virtue, it’s a crucial part of showing compassion for conspiracy theorists

People often tell me that I have a lot of patience. It comes up more often than you might imagine. Just last week, for example, I was giving a talk for Essex Skeptics in the Pub in Chelmsford, outlining my research into the White Rose antivaccination movement and its role in radicalising people into extreme views, when I was interrupted by a voice at the back of the room. “This is bullshit, I can’t listen to any more of this”, the lady yelled, “He’s trying to tell us that all these people are racists, when they’re not!”.

She was convinced, we’d go on to learn, that I was building towards a conclusion that everyone who spends their time on Telegram asking questions about COVID-19 and the safety and efficacy of the vaccine must also be antisemitic conspiracy theorists, obsessed with ideas like the Great Replacement. Of course, this wasn’t the argument I was making, nor something I’d even agree with, but she felt so certain in her prediction of my direction that it wasn’t worth hearing me finish my talk, it was important to interrupt and make everyone aware of what she assumed was coming next.

While the heckling was a little off-putting, her motivation was understandable: she felt certain that there was more to the Covid story than we were being told, and she had nothing but distrust for the government that told it to us, so anyone who disagreed with her must have ulterior motives for discrediting the movement she had such sympathy for.

This wasn’t the first time one of my talks has attracted dissent – a previous version of the talk in Glasgow was gate-crashed by more than a dozen members of the local Stand in the Park antivaccination group, who made for a lively Q&A (where they denied knowing each other, yet still stood outside at the end of the night handing out copies of The Light Paper and inviting people along to their group), and my pre-pandemic tour of Skeptics in the Pub groups to talk about the flat earth movement often attracted flat earth proponents, intent on filming themselves arguing with the arrogant round-earther. In those moments, the hardest thing to manage isn’t the pointed questions of those who disagree with me, but the reactions of the rest of the room – as a speaker, the last thing I want is for the frustrations of a roomful of skeptics with the one or two dissenting voices to bubble over into ridicule, or aggression, and so keeping the evening from descending into a fruitless shouting match becomes the primary goal.

I also recall one memorable encounter at Nottingham Skeptics in the Pub, during my talk about our campaign to end NHS provision of homeopathy. Right at the end of the talk, as I was wrapping up, three men wandered in – they’d been stuck in traffic, having driven down from Liverpool just to see me (presumably unaware that I also live in Liverpool), and managed to miss the entire talk they’d come to see… yet they still happily monopolised the Q&A with attempted gotcha questions about the ills of Big Pharma. I answered as many of their questions as felt fair, in an audience of 90+ people who had actually listened to the talk, but promised I’d speak to them afterwards and talk about anything they wanted.

So, once the regular Q&A was over, we sat in the pub’s beer garden while they warned me about fluoride and the dangers of drinking unfiltered tap water, and why we have to be so careful about what we eat because we have no idea what chemicals are in our food these days, and how many of those chemicals are carcinogenic… points that were all punctuated, with zero trace of irony, by long draws on the cigarettes they were all smoking. Perhaps the known carcinogens are less harmful than the unknown ones; or perhaps the hardest thing to spot is our own inconsistencies.

Back in Essex, once the heckling subsided, I was able to finish the talk, highlighting (as was always the intended conclusion) that people who find themselves disaffected and outside of the mainstream are more vulnerable to being manipulated by bad actors – which is precisely the reason why Tommy Robinson’s channel, and other far-right influencers, see anti-vaccination spaces as such a fruitful recruiting pool. That doesn’t mean that everyone who joins the anti-vaccination movement is far right, but it does mean that they’re going to be regularly exposed to material designed to radicalise and recruit them, and a percentage of those people will, over time, find themselves swayed – especially because, having already left mainstream society over their Covid conspiracy views, people find it much harder to then turn their back on their new anti-vax community, even when it seems overly comfortable platforming extremist views. How many communities can you walk away from before there are no more left to turn to?

The Q&A at Essex Skeptics was an enjoyable affair, as I tried to answer to the best of my abilities, as honestly as possible, the questions and concerns of our new anti-vax friends. “Why didn’t the BBC cover the march against lockdown?”, they demanded. I explained that there was a fair bit of coverage of the march, not least when a speaker called for the execution of nurses and doctors, but it probably wasn’t as much coverage as the anti-vax movement had wanted, because we all think the thing we care about most deserves more attention than the rest of society gives it.

“Why have they never isolated the virus?”, they asked. They have, I explained, and if you don’t accept that, it might be because your definition of what counts as isolating a virus isn’t one that any modern virologist would accept. “Why hasn’t anyone won that prize money that will go to the first person who can prove the virus exists?”, they asked. Probably because the criteria they’ve set for winning the prize aren’t reasonable; they’re deliberately too strict for anyone to be able to meet. It’s akin to asking you to prove that Mars exists, but the only proof I’ll accept is you handing me some soil from the surface of Mars that you’ve personally collected. If the bar for what constitutes proof is set unreasonably high, there is little mystery as to why it can’t be met.

As the night progressed, and the Q&A ended, our new friends joined the organisers for a post-event drink, and we continued to talk. “I wouldn’t describe myself as a flat earther”, explained a friendly, fairly shy chap as he put down his drink, “But I just don’t believe that the Sun could be 93 million miles away”. “Why not?”, I asked him, “How do you think things would look if the Sun actually was 93 million miles away?”. He paused. “I just have to go by what I can observe, that’s all we can do really, we have to just trust what we can see for ourselves and prove for ourselves. And it just doesn’t look like it’s that far away”. “So, if you were asked to choose, what shape would you say the world was?” “Well”, said the not-a-flat-earther, “I’d probably lean more towards it being flat”.

The conversation worked its way back to Covid. “Well, I personally don’t believe in germ theory, but there are so many scientists who’ve said that Covid isn’t real, and that the vaccine is dangerous. Why aren’t those people being listened to?”

“Because they’re in the minority”, I explained, “and more than 99.9% of the people in the same field as them completely disagree with them”.

“That’s because all those other ones are being paid off!”, he replied.

“How do you know that?”, I asked. “You only trust the evidence you’ve seen for yourself, you said, so what evidence do you have that makes you say that all the other doctors and scientists are being paid to lie? Other than that they disagree with you?”

He laughed. “Ok, right, you’ve got me there. So, what do you think about 9/11?”

And so the conversation moved on, and we talked about why the BBC announced the collapse of World Trade Center 7 twenty minutes before it actually fell. Was that evidence of a chaotic and confusing day, in which several other buildings called “World Trade Center <number>” had already fallen? Or proof that the BBC was part of an international plot… in which several nation states had decided to fake a terrorist attack to take down two of the most famous buildings in the world, but had decided to first alert the BBC to their plans (because, presumably, the BBC wouldn’t have covered the event correctly if they believed it was a real terrorist attack?), and to pass that information down through producers and news execs, to the person who programmed the teleprompter and on-screen captions, without anyone in that complex chain admitting to their role in over 20 years? You decide…

After a few hours of meandering conversation, taking in everything from the moon landing to Miracle Mineral Supplement to the ways in which alternative medicine clinics exploit vulnerable cancer patients, secure in the knowledge that the dead can’t tell their stories, the venue owners politely informed us it was closing time. “This was brilliant”, said the lady who heckled me. “I’ve really enjoyed this”, said her formerly shy companion, “I’ll definitely be coming again”.

Of course, I didn’t change their minds – as one of them had told me, they had been ‘awake’ for nearly 30 years now, and a single conversation was never going to be enough to overturn a lifetime of doubting the mainstream. But, hopefully, we were at least able to demonstrate that it is possible to disagree with their views, not because we’re paid to sow dissent, but because we have genuine concerns. And we were able to agree that, at the heart of it, neither they nor we were evil, and that we both wanted the best for people – we just disagreed as to what that might look like, and how best to achieve it.

This, I think, is the real value in having these conversations. Not only do we come to understand why people diverge from mainstream consensus positions, and why they come to believe in things that fly in the face of evidence, but we can also demonstrate to the people we disagree with that we are not arrogant, smug know-it-alls, nor paid shills for Big Pharma. If we are going to be able to reach people, and if we are going to be successful in stemming the flow of misinformation, conspiracy and paranoia, we have to be able to be patient and personable, to be genuinely curious about other people and their motivations, and to understand as much as possible what can lead people off the beaten track, and into the wilderness.

Regality in the modern world: can a skeptic be a monarchist?

After the party, some reflection is needed. Charles III’s coronation was impressive. Presumably, many in the skeptic community were thrilled by Lionel Richie’s tunes, and the pageantry and elegance of the event. But a fundamental question ought to remain in the mind of skeptics: in a 21st Century Western nation, is monarchy justified?

Monarchies have long been associated with irrational thinking. Regal selection has often been left to the pronouncements of prophets or the outcome of ordeals. If the Bible story is to be believed, David became king because Samuel – a man claiming to speak for God – chose him in that capacity after hearing God’s instructions. Sometimes, corruption has made its way through the process. Herodotus tells the amusing story of how Darius became the Persian king. There were six candidates, and they decided that the person whose horse neighed first at sunrise would become king. Darius secretly instructed a slave to rub his hands on a mare’s genitals, and then place the hands near the nostrils of Darius’ horse. The horse was naturally excited, it neighed, and the rest is history.

Throughout the ancient world, kings were sacred. They were often attributed with supernatural powers. Pharaohs were thought to become the god Osiris when they died. Roman religion was big on emperor worship. By the Middle Ages, kings were not explicitly worshipped, but they were still believed to hold magical powers, such as the royal touch: kings could allegedly cure diseases merely by touching people, not unlike Jesus and other miracle workers.

It is not surprising that monarchies are imbued with religious concepts: their very power relies not on rational political deliberation, but rather on appeals to mysterious supernatural forces. As modernity and the disenchantment of the world took hold in Western societies as a result of the Enlightenment, monarchies began to disappear. In the modern Western world, some monarchies have managed to survive only to the extent that they have preemptively diminished their own powers. Louis XVI’s head rolled because he stubbornly refused to compromise on his absolutism.

In contrast, the British monarchy is still around, largely because royals – or, more likely, their advisors – realised that a dilution of monarchical power in a parliamentary system could keep revolutionary fervour in check. As long as they are merely ceremonial figures, people in modern societies are more willing to accept monarchs. The late Queen Elizabeth understood this concept, and gracefully remained apolitical. There are doubts about whether King Charles will be as wise. After all, this is the man who wrote the infamous “black spider memos,” using his influence to lobby politicians into favouring his pet agendas.

But even devoid of absolutist powers, monarchies ultimately rely on an irrational principle that skeptics ought to challenge. That is the heredity principle. A society maximises its efficiency and justice by allotting roles on the basis of merits and capacities. The accident of having been born in a particular family cannot be a relevant criterion. Unlike elected officials, kings are not hired for their job via a contract; their birth status determines their role and privilege. We would find it laughable if David Beckham’s son were designated as Manchester United’s star midfielder, solely on the grounds of his father’s glorious antics at Old Trafford. We appreciate the senior Beckham’s amazing feats, but if the lad wants a place in the Premier League, he must earn it with his own merits. Why should it be any different with heads of State?

Admittedly, not every position in society can be contractual. Parents naturally rule over their children. Some monarchists have made the case that a king acts very much as a father, and therefore, needs no contract. This was the argument put forth by 17th Century philosopher Robert Filmer in Patriarcha, a classic defense of royal absolutism. Such arguments may hold some water if applied to tribal chieftains or petty kings, as their domains may resemble family structures. But the intimate world of family decisions is no match for the complexities of modern industrialised nations.

Monarchies may have some advantages. They provide a sense of national unity. But often, this descends into jingoism. To the extent that they are founded on blind allegiance and little deliberation, love of king ultimately resembles religious zealotry, and very few good things can ever come out of that.

Monarchies may also provide political stability. But that stability is similar to imperial peace: conflict is diminished, but ultimately at the expense of fairness and freedom.

It is debatable whether monarchies – and especially the British monarchy – contribute to a nation’s public revenue via tourism. But even if they did, skeptics ought to aim for higher principles. The sale of indulgences also contributed to public revenue, and thanks to that business scheme, we have the marvellous Saint Peter’s Basilica in Rome. Nevertheless, we are justifiably repulsed by Johann Tetzel’s infamous slogan, “As soon as the coin in the coffer rings, the soul from purgatory springs.” In the same manner, perhaps royal symbols help a nation make a buck. But in the long term, wealth production is sustained by efficiency and rationality. This implies that contract must prevail over status, and positions must be allotted on the basis of merit, not birthright. If we seek to apply these meritocratic principles throughout society, what is stopping us from applying them to the very top position a nation can allot?

The fifth horseman: environmental determinism rides again in ‘An Inconvenient Apocalypse’

[Book cover: ‘An Inconvenient Apocalypse: Environmental collapse, climate crisis and the fate of humanity’ by Wes Jackson and Robert Jensen – a black cover with a drawing of a lit match.]

‘An Inconvenient Apocalypse’, by sustainable agriculture pioneer Wes Jackson and journalist and academic Robert Jensen, is a manifesto for acceptance of society’s imminent collapse based on ancient ideas about the fixity of human nature. We’re told we’re on a road to nowhere, having made a wrong turn 10,000 years ago with the adoption of agriculture, and there’s little we can do but brace ourselves.

Ever since humans learned to domesticate plants and animals and take up a sedentary existence, so the argument goes, we have been addicted to dense energy in the form of rich sources of carbon. Adoption of agriculture led to food surpluses, the division of labour, social hierarchies and inequality, and the only cure for this affliction is a radical reduction in the size of the human population. The book’s motto of ‘fewer and less’ – fewer people, less stuff – is underlined by the flaming match on the cover. A soft landing is out of the question; we are about to be cooked.

How cooked? “Hard times are coming for everyone…there are no workable solutions to the most pressing problems of our historical moment. The best we can do is minimize the suffering and destruction” (p 10). “…the human future, even if today’s progressive social movements were to be as successful as possible, will be gritty and grim” (p 11). “We assume that coming decades will present new challenges that require people to move quickly to adapt to the fraying and eventual breakdown of existing social and biophysical systems” (p 58). “…the bad ending will not be contained to specific societies but will be global” (p 63).

How soon? “I work on the assumption that if not in my lifetime, it’s likely coming within the lifetime of my child,” says Jensen (p 75), making it within the next twenty to fifty years, with the qualifier “…it is folly to offer precise predictions” (p 68).

Justification for the extent, severity and timing of the coming collapse relies on the claim that “…people who pay attention to the ecological data – whether or not they acknowledge it to others – are thinking apocalyptically” (p 76).

What follows? The authors’ vision of a sustainable post-apocalyptic future is humanity reorganised into communities of no more than 150 people collectively managing their birth and death rates such that the aggregate global population remains under two billion. A tough ask, to which this warning is added: “Finding a humane and democratic path to that dramatically lower number will not be easy. It may not be possible” (p 54).

If that afterthought triggers memories of the brutal ideological experiments of the twentieth century, we are encouraged to stay strong, as failure to grapple with the hard question of population is “…an indication of moral and intellectual weakness” (p 48).

What makes them so sure? In two words, environmental determinism: the view that the physical environment shapes human culture – in this case, that agriculture was the beginning of the end. This ancient idea, recently revived in popular culture, is not, as the authors suggest, an inescapable aspect of human culture but a contested concept within history, geography, anthropology and philosophy. An alternative view, that human culture is jointly shaped by free will, biology and the physical environment, is not contradictory as the authors claim (p 132) but entirely compatible, and one held by the majority of 7,600 philosophers surveyed in 2020.

The version of environmental determinism present here demonises dense energy as the metaphorical apple in the Garden of Eden, but there is nothing intrinsically wrong with dense energy. The problem is that we are consuming it faster than it is being produced, in ways that liberate more greenhouse gases (GHGs) than are sequestered during its formation. With only 14% of global energy derived from renewables, half of which are biofuels that are typically net contributors to GHGs, we have a mountain to climb. But as the authors acknowledge at the outset, that task is political: “If we can’t align our living arrangements with the laws of physics and chemistry, we are in trouble” (p 3). By chapter two the ‘if’ has gone and trouble is inevitable.

What environmental determinism tends to overlook is that evolution is evolving. Complementing Darwinian natural selection, where the environment acts as selector, animals have learned to participate in their own selection: by learning from each other, by predicting the consequences of their actions, and by using tools – all of which blurs the distinction between human nature and human culture.

What will ultimately determine our fate is not an argument from first principles that we are slaves to our environment but the difference between the rate of growth in the human population (which has been in decline for forty years) and the rate of increase in the use of carbon neutral energy (which is accelerating). And while humanity may not change behaviour rapidly enough to avoid multiple crises for decades to come, the ‘…breakdown of existing social and biophysical systems’ is neither predetermined nor inevitable.

At one point the authors make the perfectly reasonable proposition that our ignorance of the world will always vastly outweigh our knowledge (p 67). Despite this they confidently predict our fate on the basis of a narrative about events occurring 10,000 years ago that are in dispute and constantly being reinterpreted in the light of new discoveries. In the last 30 years evidence has emerged that hierarchical hunter-gatherers predated agriculture and egalitarian farming communities predated the great civilisations of antiquity, implying that the means of subsistence does not adequately explain the origin of social hierarchies, greed and inequality.

A clue to the authors’ conviction comes in chapter three, ‘We Are All Apocalyptic Now’, based on a previous book of Jensen’s. It suggests that a secular reading of the Hebrew prophets, the apocalyptic literature and the Christian concept of grace helps us come to terms with our fate. Strip these texts of their supernatural content and we are left with sin and retribution without the prospect of salvation or redemption. ‘Ecospheric grace’, gratitude for the gift of life, is offered in their place.

For those who believe humanity is shaped by the interplay between genes, environment and free will, a more helpful guide to the future would be a secular reading of Luke 4:23, “Physician, heal thyself”, for as they say, ‘…a predisposition does not condemn us to act out our instincts’ (p 122). The dominant culture might be incapable of change due to the power of vested interests, but that is in our hands. Not the gods, the landscape or the laws of physics.

‘An Inconvenient Apocalypse: Environmental Collapse, Climate Crisis and the Fate of Humanity’ by Wes Jackson and Robert Jensen, University of Notre Dame Press 2022, ISBN 978-0-268-20366-5, £17.47

A double-edged sword: should we be labelling kids?

There is no doubt, at least in my mind, that childhood, and certainly the teenage years, have become progressively harder over the last 30 years. There are many and varied reasons for this. Social media is undoubtedly a mixed blessing, benefitting some youngsters and harming others, but the pressures on children to conform are greater than ever. Being continually measured against your classmates, for the league table that the school’s reputation depends on, is another factor. Rates of anxiety and depression in teenagers have been steadily rising, and this got worse over the pandemic. Along with this increase, more children are being diagnosed as neurodiverse. This is partly because the parameters for the diagnosis have changed, but also because of increased recognition in children who aren’t seriously disabled by their condition.

Before we look at some of the pros and cons of labels, I should mention that there is a substantial literature on labelling theory which I don’t propose to examine here. I will simply look at a few common applications of labels and whether they are helpful, or otherwise, in practice.

Labels can be helpful…

Labels change the attitude of the observer. If a teacher knows that a child is neurodiverse, they may cease to use unhelpful labels such as ‘naughty’, and have ready plans that can be implemented. The label may also help the individual understand some of their own difficulties: “I am autistic, which is why I’m having to work harder to understand what someone is feeling, when my friends just seem to know.”

A mental health label can also help a youngster to feel validated. “I am depressed, I’m not just being difficult, so it’s valid for me to take time out”. The label can also suggest potential ways forward, such as engaging in therapy. A label is often a diagnosis, and NICE (the National Institute for Health and Care Excellence) recommends treatments by diagnosis, looking at the evidence to recommend the best treatments and therapies.

Labels, therefore, can be useful: they can validate the individual, suggest treatments, and act as shorthand for others to begin to understand and help.

…or not so helpful

Labels also have many and various downsides. The harms and benefits depend on the type of label, so it’s worth addressing them separately.

Identifying that a child is neurodivergent is helpful for that child. But what about the borderline between diagnosis and no diagnosis? Although we talk about a spectrum, and on the face of it we acknowledge that there is no hard and fast cut off, in reality the situation is treated as a clear binary: you are either neurodivergent or you are not. Someone may have many traits, but just miss the cut off. So despite having almost the same difficulties as the child who falls just the other side of the line, they will be treated entirely differently.

Schools often say that they treat the child not the label, but the reality is that without the label there may be no help forthcoming, and certainly no extra resources or exam breaks. This is not a problem of the label for the child who has one, but is a problem inherent in any label which has a relatively arbitrary cut off. It would be excellent if we were able to treat all children as individuals with traits along a spectrum, some of whom need more support than others. But the realities of our society and its resource allocation make this impossible.

More serious psychiatric labels can be more contentious, primarily because most psychiatric diagnoses are still based on an understanding of clusters of symptoms rather than underlying pathology. We know now, for example, that schizophrenia is probably the final manifestation of a number of underlying processes which may respond differently to treatment, and take different pathways through life. NICE does a good job of identifying which treatments are likely to be most effective for whom; their recommendations are based on large RCTs, but they are statistical, so don’t necessarily indicate which treatment will be best for any specific individual.

This difficulty can be seen in trials for antidepressants. Trials show them to be somewhat effective, while clinical experience suggests that they vary from being life-saving to being completely ineffective. With no way to differentiate between the groups, a large trial may find such a treatment, on average, moderately effective. Work is being done to understand more, but there are still many unknowns.

Talking therapies, social support, and psychological support are also important, but different people will appreciate different forms of assistance. A label can induce a lazy, “That person has X, I know what to do with X,” kind of approach. Yet one person’s experience, and what they found helpful, may not carry over at all to someone else. It is common to hear that those who have been through something understand it best. But they may simply understand how it was for them, and then misapply that understanding to other people.

There is a risk that others will see the label, and not the person. This can lead to a lack of curiosity about why someone may be struggling, assuming that the label tells you everything you need to know about them. Medically it can lead to ‘diagnostic overshadowing’: new problems being overlooked or not investigated properly as they are assumed to be part of the diagnosis already known.

People, especially teenagers, can do this to themselves. It is fairly common now to see someone describe everything they do as being because of their label. “My ADHD made me do it”, for example, when ‘it’ has nothing to do with ADHD. A harmless example on social media is self-described ‘empaths’, as if empathy wasn’t a normal human trait. This may just help someone feel a bit special. But more pernicious examples include the numerous ‘introvert’ memes, which in their extreme forms can lead to someone avoiding all social contact and attributing it to their introvert status. In reality, these memes often characterise social phobia, and while they may appear to offer comfort, misidentifying a problem which they could get help with, and withdrawing from social contact, is unlikely to be the best way forward.

Identifying with an erroneous self-diagnosis also happens. Self-diagnosis can be helpful as a step towards getting more formal confirmation (where it is possible to do so), and getting help, where it exists. Commonly people make reasonably accurate observations about themselves, but not always. This can happen for teenagers, particularly those with a troubled or chronic trauma background, who are looking for explanations for their dysphoria. They may decide that they are autistic, or that they have bipolar disorder – labels which might make life more bearable, but might impede them receiving the kind of support they need. When a self-diagnosis turns out to be erroneous, some people are relieved to learn that they do not have a specific diagnosis, but others can be left feeling bereft at losing what they thought was the explanation for their dysphoria.

The balance

It may seem that there are more downsides to mental health labels than there are advantages. But the advantages can be so significant that in practice they are not only here to stay, but useful. It is, however, important to bear in mind the downsides in order to mitigate them as much as possible.

Replicating a classic false memory study: Lost in the mall again

One of the most influential and highly cited studies in the history of psychology was that reported by Elizabeth Loftus and Jacqueline Pickrell in 1995: “The formation of false memories”. The study is widely referred to as the “Lost in the mall” study, because it claimed to demonstrate that it was relatively easy to implant full or partial false memories in some adults of a childhood event that never actually happened – specifically, of getting lost in a shopping mall at the age of five. The study is not only described in virtually all introductory psychology textbooks but is often cited by expert witnesses in cases involving allegations of childhood abuse that may potentially be based upon false memories.

In recent years, however, the study has been the target of criticism from commentators who believe that the ease with which false memories can be implanted may have been exaggerated. Attention has been drawn to a number of shortcomings of the original study. The major criticisms of the 1995 study are the small sample size (only 24 participants took part), the lack of clear definitions of what was meant by a “partial” or “full” false memory, the lack of clear descriptions of any coding system used to categorise memory reports, and finally the lack of direct replications by other researchers. Fortunately, the results of a recent replication attempt which address all of these criticisms have just been published, showing that the conclusions of the original study are essentially sound.

Before going any further, it is worth understanding the original study by Loftus and Pickrell. Participants were informed that they were taking part in a study of childhood memories and asked to remember as much as they could about four events that were said to have taken place in childhood. Three of the events really had taken place, according to close family members, but one of them had not. The fictitious event was getting lost in a shopping mall at the age of five, being very upset, and eventually being reunited with parents. A week or two after the initial presentation of the four events, participants were interviewed and asked if they could remember any more details. They were asked to try to remember as much as they could prior to a second interview one or two weeks after the first. It was claimed that six of the participants developed full or partial memories of the target event.

The recent replication attempt, led by Gillian Murphy, was carried out by researchers at University College Cork and University College Dublin. Apart from changes to address the methodological weaknesses of the original study described above, the same methodology was generally followed, and details of the planned data analysis were pre-registered. A much larger sample of participants took part (N = 123) and clear definitions of a “full” and “partial” memory were given. Coders were trained, and followed detailed instructions on how memories should be coded. Overall, 35% of participants reported full (8%) or partial (27%) false memories for the target event based upon the coding system used.

The replication study went further than the original study by presenting statements describing 111 false memories from participants’ interviews to over a thousand respondents in an online “mock jury” study. In general, the mock jurors were very likely to believe the false memory reports.

The original “Lost in the mall” study has been criticised on the grounds that the necessary deception involved is unethical and might upset those taking part once it is revealed. Murphy and her colleagues took the opportunity to actually ask their participants and their familial informants how they felt about the deception once they had been fully debriefed. It turned out that both groups held generally positive attitudes about taking part, indicating that they had enjoyed the experience and had learned something interesting about memory.

Perhaps the results of this replication should not come as a surprise. Although no direct replications of Loftus and Pickrell’s study had been reported prior to that of Murphy and colleagues, Alan Scoboria and colleagues had previously reported the results of a “mega-analysis” of interview transcripts of eight published memory implantation studies (total N = 423) using the same approach as that pioneered by Loftus and Pickrell. Across the studies, a range of different false childhood memories had been implanted including getting into trouble with a teacher, taking a trip in a hot air balloon, and spilling a bowl of punch over the bride’s parents at a wedding. The original studies had reported a wide range of estimates of the rate of successful memory implantation reflecting the use of different coding systems to define full and partial false memories by different investigators. Scoboria and colleagues came up with their own coding system based upon memory science and applied that same standard system to the transcripts from the eight studies. On that basis, some 30.4% of cases were classified as false memories, a result pretty much in line with that of Murphy and colleagues.

The recent recognition of the value of direct replication studies is to be welcomed. In a previous article for The Skeptic, I reported that a large, multi-lab study aimed at replicating a controversial paranormal effect had in fact demonstrated pretty conclusively that the original effect was not real. In the words of the researchers, “the original experiment was likely affected by methodological flaws or it was a chance finding”. The current successful replication of a classic memory study should suffice to silence critics of the original study. The results of both unsuccessful and successful replication attempts are of great value to science.