
The Skeptic Podcast: Episode #021

The Skeptic podcast, bringing you the best of the magazine’s expert analysis of pseudoscience, conspiracy theory and claims of the paranormal since its relaunch as an online news source in September 2020.

On this episode:

Subscribe to the show wherever you get your podcasts, or to support the show, take out a small voluntary donation at patreon.com/theskeptic.

The relentless march of reiki in public universities in Brazil


Brazilian academia has shown itself to be a receptive home for reiki, a pseudoscientific doctrine that presupposes the existence of a universal reservoir of “vital energy” that can be accessed by trained therapists. This energy, transmitted through the laying on of hands, is said to be capable of curing illnesses, reducing stress and increasing well-being. On the weekend of 21 and 22 September, the São Paulo State University (Unesp) not only hosted, but officially sponsored the First Brazilian Reiki Congress.

The conference is just the latest step in an escalation that includes a study on “reiki via cell phone” conducted under the auspices of the Oswaldo Cruz Foundation, and more than a dozen pieces of postgraduate research – including master’s dissertations and doctoral theses – defended at public universities such as USP, Unesp and Unifesp. All of them were completed in this century – the most recent being a doctorate on the effect of the therapy on the anxiety levels of pregnant women, defended in 2024.

The idea of “vital energy” has no scientific basis – in fact, renowned physicists such as Sean Carroll and the late Victor Stenger point out that the existence of a force in nature capable of affecting objects on the scale of the human body, but which has not yet been detected by scientific instruments, is virtually inconceivable.

Despite this, reiki is not only integrated into the Unified Health System (SUS) as one of the 29 integrative and complementary practices authorised by the Ministry of Health, but has also found shelter within public education and research institutions.

The trend, in fact, seems to be accelerating: a search in the USP digital library for theses and dissertations with “reiki” in the synopsis or among the keywords shows one work before 2010, two between 2011 and 2020 and four since 2021.

Interestingly, the infamous 2003 master’s dissertation, which used kitchen gloves as a “placebo” to test the effect of the technique on the immune system of mice, does not appear in this search, because it is careful not to highlight the word “reiki”. The progress, therefore, seems to have been not only quantitative, but also cultural: since 2003, reiki in the university has stopped being ashamed to say its name. One plausible hypothesis is that this normalisation was accelerated by the integration of the practice into the SUS in 2017.

Internal policy

In the 1970s, a group of British sociologists launched what became known as the “strong programme” in the sociology of science. This programme aimed to explain discoveries, advances, and the formation of scientific consensus in strictly social terms – for example, the universal acceptance of the existence of the electron would be better explained as the result of political machinations and power plays within university physics laboratories and departments than as the fruit of rational analysis of experimental results.

The strong programme led to postmodernism, which in turn became a subsidiary line of global warming denialism, and as a result ended up losing much of its charm in academia, although some surfers of the recent “decolonial” wave have been showing signs that they would like to rescue it.

But, more than being politically inconvenient, the programme died because it proved unfeasible: it is impossible to explain the construction of the natural sciences without recognising that, at an essential level, their practitioners are discovering and describing facts that exist independently of the subjectivity and intentions of scientists: solid things that are “out there.”

In a footnote to his book Progress and Its Problems, philosopher Larry Laudan suggests, perhaps with a touch of malice, that sociologists initially accepted the idea that the content of the natural sciences was defined by departmental political pettiness because that is how the content of much of the sociology of science is, in fact, defined. “The general thesis of the sociology of knowledge … was based on the hope that all other forms of knowledge were as subjective as sociology clearly was,” he writes.

Ironies aside, however, in 2011 philosophers Maarten Boudry and Filip Buekens published an article in the journal Theoria showing that the model proposed in the “strong programme” correctly describes at least one type of academic activity: that associated with psychoanalysis. It is not very difficult to generalise the diagnosis to other pseudosciences.

When doctrines without a basis in fact take root in academia, it is not because of scientific merit – because they objectively describe the world “out there” – but because someone skilfully conducted political manoeuvres; and the “knowledge” generated by these disciplines does not come from the world either, it is constructed from the clash of egos – not from the clash between hypothesis and reality.

The advancement of reiki needs to be understood (and confronted) in this light, before it takes root (as homeopathy did over more than a century) or causes greater embarrassment, such as the late Center for the Study of Paranormal Phenomena (NEFP) at UnB, established in 1989 and closed this century after a scandal involving a psychic and a murder. Pseudoscience can only occupy prestigious spaces because it relies on the complacency and complicity – through convenience or omission – of those who are responsible for safeguarding the good name of educational and research institutions.

History

Reiki originated as a form of religious healing in the 1920s in Japan, after master Mikao Usui claimed to have received a revelation that made him feel one with “the energy and consciousness of the Universe.” The enlightenment was said to have been the result of a 21-day fast.  

The manual prepared by Usui states that “any part of a practitioner’s body can radiate light and energy, particularly the eyes, mouth and hands,” and that “toothache, colic, stomach ache, headache, breast tumours, wounds, cuts, burns and other swellings and pains can be quickly relieved and disappear.” The version of reiki that has become popular in the West is a largely commercial practice, established through a franchise system created by Japanese immigrants in Hawaii in the 1970s.

Today, there are many different reiki lineages, some of which incorporate elements of other esoteric doctrines, alternative therapies, and spiritual traditions; some have even become structured businesses with registered trademarks (Mai Reiki, Karuna Reiki, Real Reiki, Holy Fire Reiki). Some of these lineages have established commercial connections with broader sectors of the health and wellness industry, such as manufacturers of vitamin and dietary supplements, as well as selling courses and training.

The intangible and invisible energy of the spirit of the Universe flows from therapist to patient, from teacher to student, but the solid and visible money always goes in the opposite direction.

The Ockham Awards 2025: recognising the best in skepticism, and the worst in pseudoscience

Since 2012, The Skeptic has had the pleasure of awarding the Ockham Awards – our annual awards celebrating the very best work from within the skeptical community. The awards were founded because we wanted to draw attention to those people who work hard to get a great message out. The Ockhams recognise the effort and time that have gone into the community’s favourite campaigns, activism, blogs, podcasts, and outstanding contributors to the skeptical cause.

Nominations for the 2025 Ockham Awards are now open! Simply complete the nomination form to submit your nominations.

The Ockham award logo

Last year’s Ockham winner was Dr Flint Dibble, for his four-hour appearance on the Joe Rogan show debunking the ahistorical theories of  writer and pseudoarchaeologist Graham Hancock. Dr Dibble went into the belly of the beast, and gave a calm, well-reasoned and enormously patient account of himself and of the evidence, showing that skepticism can be brought to even the most theoretically hostile of audiences, if presented thoughtfully and skillfully.

Other past Ockham winners include the Knowledge Fight podcast, BBC’s disinformation unit, Dr Elizabeth Bik, Dr Natália Pasternak, Professor Edzard Ernst, the European Skeptics Podcast, Britt Hermes, and more.

While we recognise the best in skepticism, our awards are also an opportunity to highlight the danger posed by promoters of pseudoscience with our Rusty Razor award. The Rusty Razor is designed to spotlight individuals or organisations who have been prominent promoters of unscientific ideas within the last year.

Last year’s Rusty Razor went to Elon Musk, whose purchase of Twitter saw an explosion of misinformation, scams, inauthentic accounts and hate speech. Meanwhile, Musk personally intervened to restore the accounts of hundreds of conspiracy theorists, including Sandy Hook ‘truther’ Alex Jones, misogynist and alleged sex trafficker Andrew Tate, and far-right leader and anti-vaccine conspiracist Tommy Robinson – the latter of whom Musk cited and retweeted on multiple occasions.

Previous Rusty Razor winners have included Dr Aseem Malhotra for his influential scaremongering about the alleged dangers of the COVID-19 vaccine, the Global Warming Policy Foundation for their promotion of climate change denialism, Dr Mike Yeadon for his anti-vaccination scaremongering, Dr Didier Raoult for his promotion of hydroxychloroquine as a treatment for COVID-19, Andrew Wakefield for his ongoing promotion of anti-vaxx misinformation, and Gwyneth Paltrow for her pseudoscience-peddling wellness empire, Goop.

One of the most important elements of our awards is that the nominations come from you – the skeptical community. It is that time again: we ask you to tell us who you think deserves to receive the Skeptic of the Year award, and who deserves to receive the Rusty Razor.

Submit your nominations now!

Nominations are open now and will close on October 10th. Winners will be chosen by our editorial board, and they will be announced at QED in Manchester on October 25th.

No, placebos probably aren’t getting stronger over time

There is a broadly accepted narrative that posits that the ‘placebo effect’, the apparent change in the condition of a patient following an inert intervention, demonstrates the amazing power the mind has over the body. Through some psycho-biological alchemy, convincing a patient they have taken a drug will cause them to experience the effects of that drug, even if all we really gave them was a fake pill.

Proponents of this narrative are often light on detail for how this might actually work, invoking instead the wishy-washy sounding ‘mind-body healing process’, or something similar. On rare occasions they might invoke endorphins or dopamine but, while these chemicals can be produced in response to psychological changes, they have a limited range of effects.

An alternative interpretation of the same observations says that placebo effects are mostly made up of statistical effects and biased reporting. Convince a patient they have taken a drug, and they tell you they are experiencing the effects of that drug, which is not the same thing as actually experiencing it. What happened and what the patient says happened are obviously related, but patient reports are also influenced by the biases and opinions of the clinician and the patient themselves.

Psychological factors like the Subject Expectancy Effect can mean that patients report what they think should be happening, rather than what is happening. Demand Characteristics can mean that patients report what they think their doctor wants to hear. Even simple politeness can result in misleading answers coming from otherwise well-intentioned patients, who don’t want to upset or disappoint the researchers.

Beyond this, we can also recognise that some fraction of patients will see an improvement anyway, no matter what you do. Many medical conditions will run their course and spontaneously resolve. Other conditions wax and wane, and patients who join a trial when their symptoms are at their worst will naturally improve regardless of the intervention.

One researcher even contacted The Skeptic to highlight cases where doctors have exaggerated the severity of their patients’ condition to ensure they meet the eligibility criteria of a trial – which means those patients appear to make an immediate and miraculous improvement, regardless of whether they get real medicine or a sham control.

An unlabelled white pill bottle with an array of white/coloured pill capsules spilled out beside it on a white wood surface
‘The powerful placebo’ also gives alternative medicines and supplements a convenient justification when evidence for their efficacy is limited. Image by AVAKA photo from Pixabay

If we view the placebo effect as the bucket into which we toss our biases and other contextual effects, there is no need to invoke a mysterious mind-body healing process, or decree that the placebo effect is one of the strongest medical responses there is.

Over the past decade or so, numerous media reports have outlined how the placebo effect is somehow getting more powerful. For example, the gap between the effectiveness of painkillers and placebos in clinical trials has narrowed significantly since the 1990s. In one report from 2015, a research team led by Alexander Tuttle found that the ‘treatment advantage’ (the improvement from the active treatment over and above the placebo group) had diminished from 27% in 1996, to just 9% by 2013. This reduction was driven by an increase in the placebo response; as the mysterious placebo effect grows in strength, drugs are struggling to compete.
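To make the arithmetic concrete, here is a minimal sketch in Python – with invented response rates chosen purely to reproduce the percentages quoted above, not taken from Tuttle’s data – showing how a rising placebo response alone shrinks the ‘treatment advantage’ even when the drug performs exactly as before.

```python
# Illustrative only: the response rates below are invented; the 27% and 9%
# advantages are reproduced by construction, not drawn from any real trial.

def treatment_advantage(active_response, placebo_response):
    """Improvement in the active arm over and above the placebo arm (percentage points)."""
    return active_response - placebo_response

# Hypothetical trial arms: percentage of patients reporting meaningful pain relief.
print(treatment_advantage(active_response=55, placebo_response=28))  # 27, a '1996-style' trial
print(treatment_advantage(active_response=55, placebo_response=46))  # 9, a '2013-style' trial

# The active arm responds identically in both cases; the gap narrows purely
# because the measured response in the placebo (control) arm has grown.
```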

The phenomenon itself is real enough. Tuttle is not the only researcher to have documented that placebo responses in trials appear to be increasing over time, particularly for subjective outcomes. The question is not whether this trend exists, but how we interpret it.

All in the interpretation

Under the standard mind-over-matter narrative, this is a strange and mysterious thing. Why should placebos work better today than they did 20 years ago? Is it because we have greater faith in doctors and medical science? Is it because of television advertising, promoting how powerful and effective drugs are? Somehow, this makes dummy pills more effective painkillers than they used to be?

Perhaps a more parsimonious explanation is this: placebo responses have not increased because of any therapeutic effect, but because trials have become better at isolating treatment effects from noise. As methodological standards improve, non-specific effects that previously leaked into the treatment arm are more accurately contained within the control group.

Or to put it another way, the treatment advantage was always 9%, and when we measured it at 27% in the past we were in error. Our trials at the time were not sufficiently well designed or conducted to offer accurate results.

These competing interpretations are not merely academic, since each generates distinct empirical predictions. If placebo effects were genuinely becoming more powerful, we would expect to see gains not only in control groups but also in groups receiving the active treatment. Since any drug effect adds to the placebo effect, both arms should benefit from a growing placebo response. The rising tide lifts all boats, as it were.

An old wooden rowboat sits atop a large, cracked mud flat, with some of the same cracked pattern inside the boat where mud has flooded in and dried out. The whole scene has a sepia tone but the photo doesn't seem to be false coloured.
“The rising tide lifts all boats” – everyone and every treatment should be affected if the placebo effect were strengthening. Image by George Hodan, via publicdomainpictures.net

However, if placebo responses are increasing due to improved trial methodology, the overall effect would remain unchanged and the gap between treatment and placebo would narrow. Non-specific effects that had previously inflated the apparent treatment effect are now correctly controlled for.
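As a toy illustration of these two competing predictions – again using invented numbers rather than anything from the published analyses – consider how each hypothesis plays out across the two arms of a trial:

```python
# Invented numbers; values are 'reported improvement' (percentage points) per arm.

def trial(drug_specific_effect, non_specific_response):
    """The drug's specific effect sits on top of the non-specific response shared by both arms."""
    active = non_specific_response + drug_specific_effect
    placebo = non_specific_response
    return active, placebo, active - placebo

print(trial(drug_specific_effect=20, non_specific_response=30))
# Older trial: active 50, placebo 30, gap 20.

# Hypothesis 1 - the placebo effect itself has grown stronger (+15 points).
# The boost is shared by both arms, so the gap should be preserved.
print(trial(drug_specific_effect=20, non_specific_response=45))
# Active 65, placebo 45, gap still 20.

# Hypothesis 2 - better methodology reattributes 15 points of non-specific
# improvement that used to inflate the apparent drug effect.
# The active arm stays where it was and the gap narrows.
print(trial(drug_specific_effect=5, non_specific_response=45))
# Active 50, placebo 45, gap 5.
```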

When researchers have examined how placebo and treatment responses have changed over time, the results align better with this second explanation. Tuttle analysed US trials of neuropathic pain and found that, while placebo responses increased, outcomes in active treatment arms did not. A similar pattern emerged in antidepressant trials. These findings are difficult to reconcile with the idea of an increasingly powerful placebo effect.

The geographic distribution of this effect provides further evidence. Tuttle reported that the increased placebo response is most pronounced in trials conducted in the United States, while trials elsewhere have shown no comparable trend. Crucially, these US-based trials also tend to have longer durations and larger sample sizes, features associated with greater methodological rigour.

Proponents of the powerful placebo hypothesis have attributed the US-specific increase to cultural factors, such as direct-to-consumer drug advertising. But this explanation falls apart when we consider that New Zealand (the only other country permitting such advertising) has not reported similar increases in placebo responses.

What’s the harm?

Modern trials are longer, larger, better blinded, and more rigorously monitored than their predecessors. They employ more sophisticated randomisation techniques, standardised outcome measures, and stricter protocols for handling dropouts and protocol violations. These improvements serve to isolate the specific effects of interventions from the many confounding factors that can masquerade as treatment benefits.

When the gap between treatment and control narrows in modern trials, the correct interpretation is not that placebos have grown stronger, but that we have grown better at distinguishing signal from noise.

The misinterpretation of rising placebo responses as evidence of therapeutic potential carries real risks. Some trial sponsors have begun relocating studies to regions where placebo responses tend to be lower. Ostensibly this is for economic reasons, but it also serves to effectively avoid the methodological rigour that higher placebo responses represent. 

This trend should be deeply troubling. We do not obtain better evidence by weakening our controls to maximise the chance of a statistically significant result. Rather than celebrating improved methodology that better isolates true treatment effects, the pharmaceutical industry risks undermining the very advances that make modern trials more reliable than their predecessors.

From the archives: ‘Brainsex’, and the folly of sex-based neuroscience


This article originally appeared in The Skeptic, Volume 5, Issue 5, from 1991.

The book which ruined my reputation forever as a sane tube traveller was recommended to me by an otherwise intelligent, sophisticated, well educated woman. You have to imagine the scene: people sitting quietly on the train, coming home from work. And then there’s me, screaming at this Penguin paperback.

Actually, I suspect the authors of Brainsex, Moir and Jessel, were rather hoping people would be enraged by their book; it would mean they were onto something. In my case, all they’ve come up against is my fury at being categorised. It was exactly the same when people told me that I couldn’t sing songs I liked because I ‘wasn’t the type’. How would I know if I didn’t try?

What bothers me is not the scientific research in Brainsex. If research genuinely shows that there are significant biological differences between the brains of men and women, then we’ll all just have to grit our teeth and accept it. What bothers me is Moir and Jessel’s arguments, which seem to me poor, to say the least.

Moir and Jessel’s central tenet is that we’re different, we might as well accept we’re different, and instead of railing against it accommodate ourselves to it. How are we different? Well, according to them, the male brain is superior at abstract thought, at the single-minded pursuit of a goal that Moir and Jessel have decided is the hallmark of genius, and at spatial relationships. The female brain, on the other hand, is more emotional, more intuitive, blessed with a superior understanding of human relationships. They bolster their theory with quotes from scientific research. Women, they say, are making a mistake and measuring their achievements by the male standard; instead, we should revalue our work (like child-rearing and housekeeping) according to our values, not men’s.

Now, let’s think about this one. I agree that there are happy housewives, and I know from reading their stories that they feel let down by the women’s movement’s assumption that their work is a) valueless and b) unfulfilling. But the dramatic changes in women’s lives we call the women’s movement did not come about because some small hormone-influenced clique decided women ought to be unhappy. They came about because many, many women are and were unhappy and dissatisfied with the limitations of their lives. Women demanded the change.

A woman wearing a white headband and red cleaning gloves, with her hair in a ponytail, moves items in the kitchen. Only her upper body is visible. She's by an extractor fan and there's an open cupboard with a rack of plates inside. The decor is quite dated.
Why wouldn’t people be satisfied with being forced into domestic labour for their whole lives, based on an accident of their birth? Photo by Sebastián Santacruz on Unsplash

One of the questionable items Moir and Jessel call upon to bolster their argument is the fact that girls tend to score lower on IQ tests. No matter how scientists worked to remove the sex bias, they say, boys still scored higher. Their conclusion: it can’t be anything wrong with the tests. Really? This sort of reasoning is very well explored in Stephen Jay Gould’s brilliant The Mismeasure of Man, recommended reading for every skeptic (or indeed non-skeptic), which traces the history of white male science’s attempts to prove that white middle-class men are smarter than everyone else on the face of the planet. We have a word for this: bigotry.

Another questionable theory: men are biologically unsuited to marriage (and school, by the way), so the worldwide success of the institution of marriage is entirely due to women’s brilliant social engineering. But men have a choice, in every culture. They are physically stronger (I admit that). Logically, therefore, if men had an innate unsuitability for marriage, marriage would not exist.

Moir and Jessel love quoting mothers about how their children conform to sexual stereotypes even though they’ve made an effort to raise them in opposite ways. Well, take a couple of kids of my acquaintance, aged 11 and 7. She (11) is a whizz at math; he is struggling with it. He is a brilliant reader; she is now, but she wasn’t at his age. He loves cuddling. She is more distant, and was at his age as well. And so on: completely backward. But this, would say Moir and Jessel, is not significant because it’s just one case.

I maintain that Moir and Jessel’s book would not have been written in the US, not because Americans are less willing to accept challenges to our prejudices, but because American gender roles have changed much faster than those in the UK. As a journalist I have had occasion to track down experts in a number of science and technology fields both here and in the US, and there is one thing that stands out in the US: there are a lot of professional women out there. In fact, one consistent lament among expatriate American professional women is that they miss having a community of other professional women around them. They come to this country to be welcomed by snide comments, hostility, and prejudice among their male colleagues, and they are shocked.

Society has taken millions of years to evolve while women were regularly incapacitated by pregnancy; we have only had control of our fertility for 30 years, a very short time in which to change whole cultures. My prediction, for what it’s worth, is that Moir and Jessel will be proved dramatically wrong in their assumptions about what men and women can and cannot do.

Moir and Jessel would undoubtedly look at me and the way I live and work and conclude that I was doused with male hormones while I was still in my mother’s womb. Anyone got a time machine? Let’s go back and check this out.

What age was actually considered ‘old’ in Medieval Europe?


Picture a stereotypical scene in a medieval village. What do you imagine? Children playing in the dirty unpaved street perhaps, maybe two men on top of a cottage fixing the thatch, perhaps a young woman sweeping the front step, worrying about her elderly 35-year-old mother who is dying in the back room… of old age. 

What’s wrong with this picture?

According to many articles discussing popular misconceptions about history, there’s a pervasive myth that people died of old age in their mid 30s, and that ancient Greeks or Romans “would have been flabbergasted to see anyone above the age of 50 or 60.” 

On her blog, medievalist Dr Eleanor Janega says (emphasis mine): 

“One of the really rampant myths that I deal with on a regular basis is about life expectancy in the medieval period. What gets trotted out, over and over, is the idea that “the average life expectancy in the medieval period was 35, so when you were 32 you were considered […] old”. Friends, this is extremely not true, and this myth is also damaging to us now.”

My search skills must be off, because I’ve really struggled to find this myth anywhere. Though in fairness, Google’s AI kind-of reports it as a genuine belief, as when queried with “peasants used to die in their thirties” Gemini replied: 

“This statement is generally considered true; due to poor sanitation, lack of medical care, harsh working conditions, and frequent outbreaks of disease like the Black Death, the average life expectancy for peasants in the Middle Ages was often around 30 years old, meaning many died before reaching their forties.”

Presumably Dr Janega has spoken with some who believe that, in times gone by, people were considered old in their thirties, but it certainly isn’t a pervasive myth in any written sources I could find, nor even in popular culture. 

To take the most famous movie set in medieval Europe from my own childhood, Robin Hood: Prince of Thieves, the male and female romantic and action leads, Kevin Costner and Mary Elizabeth Mastrantonio, are both in their thirties and certainly not depicted as elderly. The main co-star is played by Morgan Freeman, then in his fifties, and the main villain by Alan Rickman, as a vigorously evil forty-something Sheriff of Nottingham. The main characters depicted as elderly are Mortianna, played by Geraldine McEwan, then nearly 60, and Lord Locksley – Robin’s father – played by a fifty-something Brian Blessed, who may be portrayed as an older man but is clearly still in full bluster, and far from death until he meets his untimely and violent end. 

Or to take an arguably even less historically accurate depiction of that period, Braveheart, in which the titular action lead was played by Mel Gibson, then nearing forty, and the ageing king was played by Patrick McGoohan, then in his late sixties. Neither film matches the idea of someone approaching decrepitude in their mid-thirties, because anyone who thinks about it for even a few seconds knows that people who’d be old enough to collect a pension in 2025 also existed before the modern era. 

It isn’t just Dr Janega who has had this experience, though, and the BBC claims that “it’s common belief that ancient Greeks or Romans would have been flabbergasted to see anyone above the age of 50 or 60”. This is contradicted by no less a source than Bill and Ted’s Excellent Adventure, which features a 60-something Tony Steedman as Socrates, and is backed up by this year’s disappointing Gladiator sequel, which co-stars 70-year-old Denzel Washington as an occasionally-sword fighting Macrinus, and features an elderly senator played by 86-year-old British thespian Sir Derek Jacobi. 

I don’t doubt that the authors of these articles have spoken to at least some bafflingly ill-informed people who have this belief, however, and their error seems to arise from a combination of people being bad at maths, and not understanding that life expectancy and lifespan are two different things. 

As per the Max Planck Institute for Biology of Ageing (emphasis in original): 

“Life expectancy is the amount of time a person is expected to live based on the year they were born, their current age and various demographic factors, including gender. It is always statistically defined as the average number of years of life remaining at a given age. So life expectancy is basically the average lifespan of a population. In contrast, maximum lifespan is the maximum time that one or more members of a population have been observed to survive between birth and death. The oldest woman in the world lived to over 122 years old, so the maximum human lifespan is often given as 120 years.”

Both lifespan and life expectancy have increased over the years, but for someone in their mid-thirties to be elderly, the human lifespan would have to be only a little longer than that, and the lifespan of humans has – as per a famous Bible quote – been 70 or more years since the start of recorded history. Indeed, evidence suggests that it may well have been close to that for as long as early modern humans have existed.

On the other hand, if you have a life expectancy at birth of 35 then of course you can still live to 70 or more; you are just statistically unlikely to reach that age, because of the risk of dying first from illness, starvation, accident or violence.

A white woman in a red dress and blue headscarf arranges (anachronistic) ceramic plates in a wooden bowl at a medieval reenactment site in the woods, with small black steaming/smoking hanging cauldrons or pans behind her, under a canvas shelter. A man sits in the background. There's a pile of chopped wood on the grass and some Christian symbols in the scene, with a second, more elaborate tent in the background.
A medieval festival reenactment, complete with anachronistic plates. Image by Franck Barske from Pixabay

Calculating historical life expectancy and causes of mortality is quite difficult for obvious reasons – written records before the early modern period are somewhere between incomplete and non-existent – which leaves space for research in several overlapping academic fields, from health economics to biological anthropology. While I will not attempt to communicate a full survey here, the majority of academic sources I found suggested that life expectancy at birth was very low indeed by modern standards in the medieval period. 

According to Robb et al (2021) in the International Journal of Paleopathology, “life expectancy at birth for females is 25.0 years, for males, 22.8 years” in the Middle Ages, which is similar to historical demographer LR Poos’ mention of life expectancy at birth of around 25 to 28 for cohorts from medieval Cambridgeshire and Yorkshire in his 1986 paper “Life expectancy and age at first appearance in medieval manorial court rolls” in the journal Local Population Studies. 

Health economist John Yfantopoulos cites a slightly higher figure of 30 to 40 years life expectancy at birth for the early Middle Ages and medieval period in the abstract of his article “Life expectancy from Prehistoric times to the 21st Century” in Deltos:

“During ancient times, several historical sources from Egypt, Greece and Rome estimate life expectancy also at 20 to 35 years. Warfare, infectious diseases, malnutrition, and high rates of infant mortality are recorded as the main factors for this short life span. In the Middle Ages (500–1500 AD), the great killers like the Plagues (Black Death) had a significant impact on the reduction of population. Life expectancy fluctuated around 30 to 40 years.”

Despite his reference to the plague, Yfantopoulos notes later that in the late Middle Ages, “30 percent of infants died within their first year”, and this is indeed the main cause of a lower life expectancy. When a third of the population never makes it far from their crib, the arithmetic mean life expectancy is of course savagely reduced. 

While I’d argue against hand-waving away horrific levels of infant death as an issue for statisticians, what about those who don’t die as children? Some of these myth-busting articles have made claims like the following:

“If a medieval person survived to adulthood, he would likely live into his 60s or 70s.”

On the face of it, this seems far-fetched. Thinking about just your immediate circle, how many people do you know with conditions that would have killed them before modern medicine – perhaps something serious like one of several treatable cancers, or diabetes, or perhaps something apparently more minor like a cut, which could develop into sepsis and kill you? Quite a few, I’d guess. Then consider those who never got ill from the various fatal diseases that have been eradicated due to vaccines, like smallpox or polio. It feels like there must have been a lot of premature deaths in Ye Olde England, but as skeptics we need evidence, not gut-feeling. What do we know?

We can of course start with simple arithmetic, and anyone with a basic head for numbers knows that a 30% infant death rate alone cannot reduce life expectancy from 75 to 35.
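A quick back-of-the-envelope calculation – a sketch only, using the round numbers from the text rather than real demographic data – shows why:

```python
# Round numbers from the text: ~30% infant mortality, a 'threescore years and
# ten' lifespan of roughly 70, and the mythical 'elderly at 35' life expectancy.

def cohort_life_expectancy(infant_mortality, infant_age_at_death, survivor_age_at_death):
    """Mean age at death for a cohort split into infant deaths and survivors."""
    return (infant_mortality * infant_age_at_death
            + (1 - infant_mortality) * survivor_age_at_death)

# If 30% died in their first year (say around six months) and everyone else lived to ~70:
print(cohort_life_expectancy(0.30, 0.5, 70))   # ~49 years - nowhere near 35

# For the cohort average to fall all the way to 35, the survivors would have
# to die, on average, at around:
print((35 - 0.30 * 0.5) / (1 - 0.30))          # ~50 years
```

That second figure – survivors dying at around 50 on average – is, not coincidentally, roughly where the adult mortality evidence discussed below ends up.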

Robb et al note that while infant death is far and away the largest cause of “Years of Life Lost”, their model suggests other very significant causes of death for people in that period, with tuberculosis, fever, viral pulmonary infections, and diarrhoeal and GI infections all following in significant numbers in the second tier, and all likely to have been a major cause of mortality before old age. 

Indeed, the available statistics bear this out. Records of English male landholders from the Middle Ages are used by MA Jonker in the Journal of the Royal Statistical Society to suggest a life expectancy at 25 of a further 23.3 to 25.7 years – so an average age of death around 50.

What about wealthier people? A review of the ages at death of (male) members of the medieval English nobility finds that 50% were dead before 50, with only 11% making it past the age of 70. 

The claim that a person surviving to adulthood would likely live into their 60s or 70s is simply not supported by the evidence. 

The idea advanced by some myth-busting academics that “many people would have lived much longer, into their 70s, 80s, and even older” is likewise not supported by the evidence, unless we think that 11% of even the highest-born adults living until their 70s counts as “many people”. 

In other, perhaps less privileged circumstances, your chance of making old bones was even worse. Victoria Russeva, in her 2003 study using skeletal remains from 8th to 10th Century Bulgarian grave sites, notes that “in some populations, no survivals over 60 years have been ascertained.”

Why does this pedantry matter? As historical fiction author Sarah Woodbury says:

“It isn’t that medieval people somehow were biologically different, but the structure of their lives, their resources, and their healthcare were dramatically different, ensuring that far fewer people lived as long as the average person does now.”

To ignore the advances of modern medicine, pharmaceuticals, sanitation and birthing practices is to risk losing them, as we are already finding out with lowering vaccination rates, increasing antimicrobial resistance and, of course, emerging infection risks.

The Skeptic Podcast: Episode #020

The Skeptic podcast, bringing you the best of the magazine’s expert analysis of pseudoscience, conspiracy theory and claims of the paranormal since its relaunch as an online news source in September 2020.

On this episode:

Subscribe to the show wherever you get your podcasts, or to support the show, take out a small voluntary donation at patreon.com/theskeptic.

AI authoritarianism? We should be wary of outsourcing our thinking to the machine

I have been on this quiet crusade against my own, and others’, use of AI. People have started to call me out on it by saying, “Why do you hate ChatGPT so much?” That question made me reflect on and interrogate what exactly I’m resisting. I noticed a belief I hold that AI has conservatism baked into every aspect of it.

My reluctance isn’t technophobic: I understand the power and potential of artificial intelligence, and to a degree recognise it as an extraordinary human achievement. I just find it ironic that the same humans who helped create this super-intelligent computer are the ones encouraging its use to minimise human involvement.

I believe that AI’s widespread use, especially unquestioningly welcoming it into our lives, reinforces a worldview that prioritises efficiency over nuance. In recent months, especially as I’ve watched more people around me lean on this tool, I can’t help but see its usage promote values that align disturbingly well with conservative populism – namely deference to authority, suspicion of intellectualism and individualism masquerading as empowerment.

When I try to explain this to people, I realise how conspiratorial I sound. As if I’m muttering, “This is what they want you to do,” while clutching a tinfoil hat. The ‘they’ in question is a capitalist system that rewards conformity and thrives on our desire to feel efficient and in control.

Using academia as an example, AI tools can locate articles and summarise complex texts in seconds, and what begins as a time-saving convenience quickly becomes a shortcut that bypasses the intellectual labour of learning.

I’m not proud to admit that when ChatGPT launched during my final year at university, and I was overwhelmed by a challenging essay, I let it write large chunks of that essay. I told myself it was a pragmatic choice to avoid failure. However, despite many claims that AI democratises information, I learned far less from that assignment than from the ones I struggled through myself.

What I did learn is how seductive anti-intellectualist sentiment can be when masked by language of efficiency. I could have asked my professor for help, but instead I chose the easy route, and in doing so inadvertently aligned my thinking with a mindset I believed I opposed.

A male-presenting brown-skinned person in a blue and white small-checked shirt writes in a notepad with a biro, next to his laptop on a desk.
It’s common for people to say they remember/learn better with writing. Image from StockSnap via Pixabay

This is where the political dimension of AI use became obvious to me. The mindset that often accompanies reliance on AI means relying on a single, centralised ‘intelligence’ to simplify complexity, rather than the more complex prospect of wrestling with conflicting viewpoints. With conservatism, and more extremely, authoritarianism, complexity is traded for clarity. The idea of “cutting through the noise” and “telling it like it is” is all over the alt-right sphere – it’s basically page one of the alt-right grifter manual.

I no longer see it as much of a leap from the likes of Joe Rogan appealing to the uncomplicated everyman to the simplification and shrinking of nuance and humanity in AI. When we let AI think for us, even in small ways, we begin to erode our capacity for the messy, hard work of independent thought, which leaves us open to the quick, simple fixes that the alt-right promises.

We are seeing within this right wing a rebranding of people on the right (especially the young) as more down-to-earth, ‘real’ people, speaking the common sense left behind by the rise of progressive politics. This rebrand relies on an attitude that dismisses academics as elitist and celebrates “common sense” over expertise. AI amplifies the cultural drift toward simplicity, suspicion, and submission to authority.

I think we should be cautious of using AI for all the so-called menial thinking tasks it supposedly frees us from. I’ve seen people in my life use it for meal plans, travel itineraries, career and even moral advice. You could argue these are low-stakes uses, but they signal a willingness to outsource not just tasks, but judgement.

The paradox of right-wing rhetoric promoting ‘free thinking’ and ‘doing your own research’ while parroting the same talking points is just as present in AI. It makes you think you’re making life easier, when it is actually eroding your ability to make hard decisions.

We are all exhausted. Late capitalism demands constant self-optimisation, and it’s no wonder people want tools that make life easier. However, this time-saving efficiency mindset mirrors that of the conservative influencers who promote strict discipline and manicured, time-optimised routines aimed at squeezing maximum productivity from every day. They push the idea that output and personal growth are morally virtuous. And AI fits perfectly into that narrative, flattening individuality under the guise of self-improvement.

The irony is that with alt-right neoliberal individualism – which encourages you to see yourself less as a person and more as a project – you think only about yourself, while losing everything about yourself that makes you unique.

I can’t help but feel that every question outsourced to AI that could be thought through for yourself becomes a small step towards a gradual withdrawal from the process of living our lives. We no longer trust ourselves to make small decisions. We’re encouraged to believe that algorithms know us better than we know ourselves.

Most of the AI content pushed my way online features a smug young man in his bedroom, speaking on behalf of a tech company. He tells me my life is a mess, and that I need an AI assistant to make every decision for me. I don’t believe these companies are directly malicious – rather, they are just eager to monetise the human impulse to feel less overwhelmed. However, I do think they do so without considering the long-term consequences of pushing a belief that thinking for yourself is wasteful, that slowing down is a liability, and that your intuition is an obstacle to efficiency.

The end goal, I feel, might be that the ideal human is an effortless productivity machine. The less you think, the more you can do.

AI promises a freedom from burden, but the more we depend on it, the more we become entangled in a system that thrives on disconnection. Conservative capitalism doesn’t want you to think collectively. It wants you atomised, hyper-productive, and too distracted with individualism to notice the structures holding you in place.

Conservatism, at its root, needs you to stop thinking for yourself and simultaneously to only think about yourself. And AI is such a powerful tool in allowing you to do both.

It is our responsibility to use AI as a tool for learning, not a shortcut for doing. The stakes are higher than grades or grocery lists. The more we abandon the difficult, time-consuming work of thinking critically and collaboratively, the more vulnerable we become – to authoritarianism, to populism, and to losing what makes us human in the first place.