
Left-wing conspiracy theories and ‘missing’ votes in the wake of Trump’s election win

My subject of interest is conspiracy theories. I’ve written 20 articles for this site and only a few are about something other than conspiracy theories. I’ve never really encountered a lack of material for the subject; conspiracy theories exist about everything and anything. In the course that I teach on the subject, I initially had to focus on the “greatest hits” of conspiracy theories: Moon landing deniers, JFK assassination theories, 9/11 “truth”, and the then-growing Flat Earth movement. I used conspiracy theories as a way to teach critical thinking – here is why the people shilling for gold are wrong (it’s just as much of a currency as anything else), these are the fallacies you need to accept to believe the 9/11 truth movement, and so on.

Then the world changed. My country elected President Trump on the basis – at least in part – of a series of anti-immigrant conspiracy theories, economic pseudoscience, and a purposeful misunderstanding of climate science. The UK Brexited based on – again, at least in part – anti-immigrant conspiracy theories, misrepresented economic regulations, and the shapes of bananas. Conspiracy theories moved from the fringe sections of conservative parties to their centres. As actual government policies began to shift, we started to understand that those elected officials weren’t just paying lip service to conspiracy theory believers; they were believers themselves.

One of my dissertation advisers wrote that it was not our differences that divide us, it is division. Conspiracy theories serve to widen those gaps. They place one group in a perceived state of oppression that they claim the other is responsible for. This was most obvious during the pandemic, where “they” were trying to get us to take ‘poison’ vaccine shots to prevent a disease that didn’t exist.

How could anyone refuse the Covid-19 vaccine? It didn’t make sense, because the disease had so fundamentally changed the world, and the vaccine was going to change it back. During my dissertation defence, which was over Zoom because of the pandemic, I did something rare for a philosophy dissertation: I made a prediction. I predicted that if a vaccine was created, society wasn’t ready to face the refusal of it – unfortunately, I was right. The fact that the vaccines were going to fix the world did not matter, because the emotional weight of the conspiracy theories was far heavier.

Plenty of writers for this magazine, including myself, have stressed that what attracts people to conspiracy theorising (and I include CAM and pseudoscience in the term “conspiracy theory”) is never the facts. Moon landing denialists aren’t attracted to the theory based on the “c-rock” or the radiation from the Van Allen Belts; it’s their suspicion that the government is lying to us, that they are smarter than the rest of us, or that they have a need to feel unique. Skeptics have spilled lots of ink and burned many pixels making this point, so I won’t retread those points in this article.

A green electrical box in front of a brick wall with white spraypaint graffiti reading "9-11: Lies!"
Conspiracy graffiti in Clifton, UK. Image by Hayley Constantine, via Flickr. CC BY-NC-ND 2.0

The unimportance of facts is so important that we must repeat it to those around us, and then slap our forehead when no one listens. I can present a series of links to academic studies that argue facts do not work and, when people still won’t believe me… well, that just proves the point, actually. What matters, the most important thing in dealing with conspiracy theories, is the motivation.

In recent memory, it’s been kind of easy to deal with conspiracy theories because they were all things that “they” believed. I prefer not to call conspiracy theory believers my “opponents” because that implies a desire for conflict when what I want is unity. I feel for these people, and I share their frustrations in the world around us. It was easy because conspiracy theories about Brexit or Trump were not “our” beliefs. We could feel a sense of righteous justification when we tried to combat the theories because they were not only objectively wrong, but also involved the dehumanisation of groups of people. Skepticism about conspiracy theories is easy, when it’s them. It’s much harder when it’s us. It’s much more difficult to look at our allies and friends to ask, “How do you know that is true?”

As the world now knows, my country re-elected a twice-impeached convicted felon to the office of President. This result was shocking to those of us in the US skeptical community and, I would hazard, to many of the rest of you as well. It was only a few months ago that I wrote about the president-elect’s conspiracy views. When election night rolled around, I was nervous, anxious, but hopeful. Harris had momentum and Trump, I reasoned, could not have picked up new supporters. I assumed that Harris would inherit President Biden’s numbers and, worst-case scenario, Trump would just hold the line. I was not expecting a repeat of 2004, when your press asked how America could be so dumb (it was the Daily Mirror, but still…).

All of the -isms (racism, nationalism, etc), the warnings about Project 2025, the misogyny, the threat to use the military against protestors; none of that mattered. None of it mattered because all of that was subservient to the anger and frustration that his voters felt. Yes, they were motivated by conspiratorial reasoning. Yes, he lost votes, but not enough.

When the dust settled and he was, once again, president-elect, those on the political left began noticing something weird, or “interesting.” It wasn’t writers like Aaron Rabinowitz, writing here about what compels young men in the US to vote for Trump. Nor was it the numerous think pieces on what Harris did wrong, what Biden did wrong, or what the Democrats did wrong (there are way too many to link, and they just keep coming).

It was that 15m people who voted for Biden/Harris in 2020 apparently didn’t vote for Harris/Walz in 2024. This was interesting, because there didn’t seem to be much difference between a Harris administration and a Biden administration, especially given that she was his Vice President. The salient difference is that she’s much younger than him, which ought to have overcome the trepidation that voters had regarding Biden’s re-election campaign. So, they seemingly just didn’t show up.

6 blue 'I VOTED' stickers with the United States of America's flag, stuck lightly to a grey metal surface.
Classic “I voted” sticker – but, clearly, a lot of people didn’t. Photo by KOMUnews, via Flickr. CC BY 2.0

Conspiracy theories from the left began erupting. There must have been interference in the election process. Whether it came from the Chinese or Russian governments, Trump’s own people, or right-wing election terrorists – something must explain why those 15m sat it out. It wasn’t the first time we saw this kind of thinking: a few months prior, left-wing conspiracy theories abounded about the attempted assassination of Trump, claiming that the attempt was faked in order to gain support. The biggest mystery of that shooting, though, is how quickly we moved on from it; but questioning these notions earned the ire of people who were supposed to be skeptical of such theories.

If there is anything that serves as proof of the emotional draw of conspiracy theories, it’s when they come from our own side. Conspiracy theories about the missing 20m, 15m, or 8m votes are just that – conspiracy theories. The idea that the missing votes were stolen is much easier to swallow than the possibility that the number just reflects voter apathy toward the Harris/Walz candidacy.

It’s easier to believe in shenanigans than to accept that so many people were unconcerned about the prospect of the incoming administration: women unconcerned about the direct loss of bodily autonomy that Trump’s last administration caused through its judicial appointments, people of various immigrant statuses who believe that it’s not “them” the administration will target, and people who know that the incoming administration does not understand what a tariff does. The stolen-votes number spares us from asking why a large portion of Biden/Harris voters decided to stay on the bench for Harris/Walz.

Maybe. The ultimate problem with the conspiracy theory is that the 2020 number is a final number, while the 2024 number is a developing one. The election is over, but we must still wait for the final tally: the election will not be officially certified until January, and only then will we get the final number of voters in the 2024 US election.

The conspiracy theory is a knee-jerk reaction to a shocking result. Any conspiracy theory in a developing story is like that. What is important for skeptics is that we understand that while our emotional reaction to events is not something we can control, we can control our ability to recognise it.

Will we soon face AI-related risks? Maybe, but they are probably overestimated


On 22 March 2023, the Future of Life Institute published an open letter titled “Pause Giant AI Experiments: An Open Letter”. In this letter, eminent AI researchers and venture investors such as Yoshua Bengio, Elon Musk, and Steve Wozniak called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. The motivation for this letter was the alleged risk of losing control of AI, and of massive job losses as humans are replaced by AI. Geoffrey Hinton, who recently won the Nobel Prize for neural network research, has expressed similar concerns. In an interview after winning the prize, he said that AI could “get smarter than we are” and “take control”.

These concerns from scientists are in line with a wider trend: we are now facing the rise of “AI anxiety”, the widespread worry that AI could undermine our safety or our livelihoods. Could these fears have a real basis? It is difficult to answer definitively, but let’s try to address the question point by point.

Taking control, or “machine rebellion”

The fear of a “machine rebellion” had been influencing the public consciousness long before the emergence of modern-type AI. A full-fledged scenario of robot takeover can be found in R.U.R., the famous play by Karel Čapek. The same play introduced the term “robot” into our language – so the cultural concept of machine rebellion appears to be as old as (if not older than) the scientific concept of the robot. By the early 2000s, the concept of a machine rebellion – occurring as soon as we have self-learning robots – had even reached children’s literature, for example in the novel Eager by Helen Fox. The wide implementation of neural networks in our lives has simply triggered these pre-existing fears.

Modern neural networks are unlikely to have self-awareness, or even any type of real cognition; yet, when dealing with language models – especially chatbots – it can seem that they do. Microsoft’s AI chatbot confesses love for a user; its counterpart by Google seems outraged by a prompt and asks the user to die. Such behaviour might convey the impression that these AIs can experience emotions and produce speech in a voluntary manner, but they don’t really understand the meaning of a single word they say.

A graphical representation of an artificial neural network
A neural network graphic, via MDPI

The things that we call neural networks are essentially a complicated way of fitting mathematical models connecting input and output information. As David Adger, Professor of Linguistics at Queen Mary University of London, explained to Serious Science, AI doesn’t even grasp any model of grammar like we do in our mind. Instead, AI uses a pre-fitted statistical model describing the probability that any specific word B will follow the word A.

For example, I didn’t write “a pre-fitted statistical modelled” because I know that an adjective usually requires a noun, not a verb in the past tense. And the verb ‘uses’ requires the same. AI doesn’t know that ‘model’ is a kind of entity described by a noun. It just “knows” that the chance of meeting the word ‘model’ after the words ‘uses’ and ‘statistical’ is much larger than the chance of meeting ‘modelled’. And, if an AI wrote this text, it would use ‘model’ just because it is more probable. Not because it makes sense. Modern AIs do not consider any “sense”.
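The idea can be sketched as a toy next-word model. This is a deliberately minimal illustration – the tiny corpus and the function names are invented, and real language models use vastly larger statistical models over longer contexts – but the principle is the same: the model only counts which words have followed which, with no concept of grammar or meaning.

```python
from collections import Counter, defaultdict

# An invented three-sentence "corpus", split into words.
corpus = (
    "ai uses a statistical model . "
    "the model uses a statistical method . "
    "we fitted a statistical model ."
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Pick the most frequent continuation -- no grammar, no meaning."""
    return follows[word].most_common(1)[0][0]

# 'model' has followed 'statistical' twice, 'method' only once,
# so the model picks 'model' -- purely on frequency.
print(most_probable_next("statistical"))  # -> 'model'
```

The model would happily emit nonsense if nonsense were more frequent in its training data; frequency, not sense, drives every choice.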

This leads to the understanding that modern AIs are far from sentient. Serg Masís, an American data scientist and author of the bestselling book Interpretable Machine Learning with Python, explains:

If AI is to supersede or even complement human intelligence in more than a narrow way, it has to improve at generalizing. And right now, AI is powered by deep learning, which is a very brute force resource-intensive approach that lacks the kind of guardrails natural intelligence has, such as a lattice of symbolic, physical, and causal reasoning

Artificial general intelligence may indeed one day be created, but this will probably require new technologies. The neural networks we have now are bio-inspired, but that doesn’t make them sentient. As I explained in my article for The Biochemist, neural networks are not the only type of bio-inspired AI: there are also artificial immune systems, which don’t sound quite as scary. Yet neural networks are no more self-aware than artificial immune systems.

Such types of AI have no awareness, will, or emotions. Without motivation and awareness, any “rebellion” is impossible. “If the concern is about robots taking over, it’s not going to happen anytime soon,” Serg Masís agrees.

But what about job loss?

Job loss, or “they will replace us”

I work as a freelance medical translator, and I often receive orders for post-editing machine translation. Even when the machine translation is loaded into a well-configured computer-assisted translation (CAT) system with all the translation databases, some tasks turn out to be extremely arduous. Sometimes the customer’s in-house neural network hallucinates a table full of numbers, and you have to edit each number manually. Or it translates genetic terms incorrectly, and the painstaking process of hunting down those mistakes awaits me.

Even if a good AI engine is used (for example, DeepL), it cannot cope with all the terminological nuances of the biological text, and editing the text translated with it is much more frustrating than editing the text translated by a human colleague.

People who work in other creative jobs have similar impressions of AI. Diana Masalykina, a freelance illustrator and animator, is skeptical of the possibility that currently existing AIs could outcompete human artists:

There’s still a lot of drawing to do in neural illustrations. In general, neural networks are of little help in illustration so far; they could rather be used as a source of inspiration.

The accounts of people using neural networks as a co-pilot for creative jobs confirm the idea expressed by Filip Vester in his article for The Skeptic: creativity is the sphere where existing AI cannot replace humans. Serg Masís takes the same view: “Narrow AI (the AI that exists today) will slowly take jobs that are cognitively manual and repetitive (honestly jobs nobody wants) and enhance other jobs by automating manual and repetitive portions of those jobs”. So, AI is unlikely to deprive us of the dream of finding a creative job. 

It is still just a tool, not a full-fledged independent creator. But can it be misused in a dangerous way?

AI misuse, or “the root of evil”

For this final question, my answer is probably yes. Unfortunately.

In early 2023, a university student, Erika Schafrick, tearfully told her TikTok audience about a “zero” grade in Philosophy: “Like sorry I didn’t <freaking> cheat and use ChatGPT just like everyone else in the <freaking> course probably did who passed. I actually tried to do it myself and use my own ideas. But that’s what I get right. That’s what I <freaking> get.”

Irrespective of whether her explanation of this grade was true, her emotional speech again revealed the emerging problem of academic cheating with AI. It is probably widespread in universities, and – much worse! – such cases are increasingly identified in scholarly publications.

While Erika Schafrick sparked a discussion on TikTok, one of the world’s most highly cited chemists, Rafael Luque, was caught up in a scandal. His unusually high publishing activity attracted the attention of the scientific community: on average, he published an article every 37 hours! Moreover, he once confessed that he used ChatGPT to “polish” his texts. The University of Córdoba fired him, with his multiple affiliations cited as the formal cause. One month later, the Danish biologist Henrik Enghof found his name repeatedly cited in a scientific preprint, yet couldn’t find any of his actual works among the references: instead, all the citations had been hallucinated by a neural network, referring to papers that never existed.

A flock of Agent Smiths (Hugo Weaving) in The Matrix Revolutions, dressed in their signature black suits and ties with silver tie clip and black sunglasses.
Hugo Weaving plays the oppressive, indefatigable Agent Smith in the Matrix films, one of the most famous machine apocalypse settings. In The Matrix Revolutions, he copies himself many times. Via wired.it

The academic community faces an unprecedented challenge to scientific integrity. Now, any text submitted to a journal could turn out to have been generated by AI. We have some automated methods to identify such misuse: unusual words in the text, or traces of the probabilistic way of generating text (which I mentioned above), can be signs that a text was generated by an AI. But such checks require additional time and give rise to a climate of mistrust in science.

One more problem is the possibility of the illegitimate use of AI to control and track people. The documentary book The Perfect Police State by Geoffrey Cain uses the example of China to show how such control can provide a technological basis for mass reprisals in authoritarian states. These concerns have been reflected in the EU Artificial Intelligence Act, which directly prohibits using AI for social scoring, real-time biometric identification, and assessing the risk of an individual committing a criminal offence. Unfortunately, it is only the first act of its kind, and it applies only in a territory where the risk of such technologies being misused is minimal. But it is a good framework for regulating the use of AI, setting out the key uses that need to be prohibited as potentially dangerous.

Does AI itself pose any threat to the values of scientific integrity and democracy? In my opinion, no – it is humans that pose such a threat. AI is just a tool. We still need to find out how to regulate its use to minimise all related risks. But we must remember that now, like 100 and 200 years ago, all illegitimate actions are committed by humans. Not by AI.

As hundreds of years ago, technology is not evil per se. Only humans do evil, and nothing about that has changed yet.

It’s far too soon to tell patients platelet-rich plasma injections can treat infertility


Platelet-rich plasma (PRP) injections are all the rage these days, for a lot of things. They’re responsible for the “vampire facial”. They’re the key ingredient of the “O” – an injection into the vagina and clitoris to apparently improve orgasm. They’re also key to the penile equivalent, the “P” shot, where PRP is injected directly into the head of the penis.

PRP is produced by taking a patient’s blood into a test tube and spinning it in a centrifuge. The heavy red blood cells are pulled to the bottom of the tube, and the lighter plasma – containing the platelets and white blood cells – separates at the top. This plasma can be transferred to a fresh tube and further concentrated by spinning it once more in a centrifuge, so that the heavier platelets coalesce at the bottom of the tube as a pellet. Then the top two-thirds of the plasma is removed before the platelet pellet is resuspended in the remaining plasma.

There are other methods of doing this, one involving spinning the blood at a higher speed so it separates into three fractions – the red blood cell layer, the plasma layer and, between the two, the “buffy coat”, where the white blood cells and the platelets sit together. Then you can remove the plasma from the top layer and very carefully take the buffy coat out separately. I’ve actually done this before in the lab, but it’s tricky, because the buffy coat is very sticky and likes to take some red blood cells off with it if you’re not careful or skilled.

What’s so great about platelet-rich plasma?

Platelets are one of the many different things floating around in our blood. We have red blood cells – the ones that carry oxygen and carbon dioxide around our body. Then we have white blood cells, of which there are many different types: granulocytes, which include basophils, eosinophils, neutrophils, and mast cells; and agranulocytes, which include lymphocytes (your T and B cells and your natural killer cells) and monocytes, which can become macrophages. The white blood cells are part of our immune system.

But we also have platelets. Platelets aren’t really cells; they’re more like fragments of cells. They’re made by really big cells that hang out in the bone marrow, called megakaryocytes, which grow really fat, package up some of their insides into pre-platelet structures, and then release them into the blood. Once the platelets have done their job, they head to the lungs for the lung immune cells (alveolar macrophages) to eat them.

Platelets are important because they help us plug wounds. When you get a wound, any platelets that are already in the area first grab on to the edges of the wound. Pretty much immediately after this happens, a signalling pathway that activates platelets is triggered. At this point, the platelets start to change shape – they send out protrusions, which are like fibres, to allow all the platelets in the area to grab onto each other and aggregate, forming a plug that fills up the wound and prevents bleeding.

Platelets are also full of growth factors, which help stimulate the pathways needed to heal the wound once it’s been temporarily plugged. The theory is that we can use PRP in any situation where we want to promote healing – it’s been trialled for arthritis, rotator cuff damage, and elbow tendinitis, for example.

However, those aren’t the applications I want to discuss. I want to talk about the application I found while scrolling the webpages of Goop, when I came across the headline “Is Ovarian Rejuvenation an Effective Fertility Treatment?”. Of course my interest was piqued.

The article, published in May this year, says:

“Platelet-rich plasma (PRP) therapy, which uses your own plasma to try to restore the cells and tissues in your body, is widely used for orthopedics, dentistry, hair growth, and skin care. But its use for ovarian rejuvenation—a procedure where a doctor injects PRP directly into the ovary—is a new frontier in the fertility world.”

I am always hesitant with “new frontiers” when it comes to any medical treatment. I think science is incredible, and has made remarkable strides when it comes to improving healthcare. But it is not magic. It can be a slow process, filled with lots of false starts. There are many exciting treatments that have lots of scientific plausibility that just never pan out. And there are many others that do eventually pan out, but take decades or more to develop. It’s not that I’m cynical about “new frontiers”, it’s just that I’m pragmatic. We need to wait it out until we have more information. And while we are waiting, we need to be careful when it comes to giving patients false hope around these novel treatments.

The Goop article continues:

Pink baby shoes on a bench

Photo by 🇸🇮 Janko Ferlič on Unsplash

“It is both promising and potentially too good to be true: Studied effects include increased chances of conception, improved hormone regulation, and no response at all. It is too early to know definitively whether it’s worth your time and your money. But the early research is certainly intriguing.”

This is precisely where I think we need to be careful. Patients exploring fertility treatments are often particularly vulnerable to false hope, and to the weight of expectation and pressure. I don’t think you can add a couple of caveats and consider yourself absolved of any responsibility.

That being said – the Goop article is surprisingly balanced. In an article with fewer than 800 words, it mentions the possibility or likelihood of this treatment not working no less than seven times. It’s just hard to know how much their readers will take those caveats into consideration.

Platelet-rich plasma and infertility

A systematic review was published earlier this year in BMC Pregnancy and Childbirth, looking at 14 studies to find out more about the potential application of platelet-rich plasma for infertility.

It concluded:

“Although there was an improvement of baseline hormones (AMH, FSH and E2) after intraovarian injection of PRP, this improvement failed to reach statistical significance (except the improvement of serum AMH analyzed in quasi-experimental studies).”

In other words, there was no improvement in baseline hormones identified, because failing to reach statistical significance means those differences could just be normal variation.
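The distinction between an apparent improvement and a statistically significant one can be sketched with a quick simulation. The numbers below are invented purely for illustration – they are not data from the review – but they show how a small average “improvement” can be entirely compatible with chance, which a simple permutation test makes visible:

```python
import random

# Hypothetical hormone-like measurements, invented for illustration only.
# The "after" group averages slightly higher than the "before" group.
before = [1.00, 1.20, 0.90, 1.10, 1.05, 0.95]
after  = [1.10, 1.25, 1.00, 1.05, 1.15, 1.00]

def permutation_p_value(a, b, n_permutations=10_000, seed=42):
    """Two-sided permutation test on the difference in group means.

    If group labels were meaningless, how often would random relabelling
    produce a difference at least as large as the one observed?
    """
    rng = random.Random(seed)
    observed = abs(sum(b) / len(b) - sum(a) / len(a))
    pooled = a + b
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_b) / len(perm_b) - sum(perm_a) / len(perm_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

p = permutation_p_value(before, after)
print(f"p = {p:.3f}")  # well above 0.05: the 'improvement' fits within noise
```

With these toy numbers the observed difference in means is real but small relative to the spread, so a large fraction of random relabellings beat it – exactly the situation where “an improvement that failed to reach significance” means we cannot distinguish it from normal variation.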

The strongest positive effect the meta-analysis found was an increase in antral follicle count – this is a measure taken by counting the number of follicles visible on the ovaries using ultrasound and is a reasonable measure of fertility, as fertility measures go – we don’t really have any good measures of fertility. But there are a number of limitations to the data in this study.

Firstly, none of the studies were randomised controlled trials – studies where patients are randomly assigned to either a test group or a control group. There is a high possibility of bias when patient groups aren’t randomised in this way.

Most of the studies didn’t look at “clinically significant outcomes such as pregnancy and live birth rates” – and as I said before, we don’t have great fertility measures. The best way to measure someone’s fertility is to look at the pregnancies or births.

Where studies did report pregnancy and births, they didn’t compare the results to pre-treatment measures, or controls. So, we can’t actually know if there’s been an increase or not.

The 14 studies varied widely in their study design, baseline hormone levels, timing of the PRP injection, timing of the outcome assessment, and reporting of outcomes. This makes it very hard to compare results across them.

For me, the take-home message from this review is that it’s far, far too early to say anything about intra-ovarian PRP injections. What we have is almost as good as no data. It might be enough to encourage further studies, but we shouldn’t be offering patients false hope that this is a potential treatment, or one they might have access to any time soon.

Too often, we’re too quick to share the next big possibility when it comes to helping patients and we don’t question whether doing so is ethical or reasonable. I work in open research – I believe in making research accessible, but that also means we have to work hard to contextualise it, and avoid sensationalising it.

From the archive: Memory, fantasy and past lives – reflections on past life regressions

This article originally appeared in The Skeptic, Volume 4, Issue 6, from 1990.

Melvin Harris’s article ‘Many Happy Returns’ published in The Skeptic Vol 4 No 4 reports a commendable journalistic investigation of claims for evidence of reincarnation derived from hypnotic ‘past life regressions’. Ian Wilson has also written a similar account in his book Mind Out of Time [1]. I should like to add my own thoughts to these discussions although I ought to say that I myself have not undertaken any ‘past life’ regression work, only age regression for therapeutic and demonstration purposes.

Memory as a creative process

My first point is that, while I do not believe that subjects of ‘past life regressions’ actually relive previous incarnations, I feel that the explanations offered by both Harris and Wilson in terms of cryptomnesia are rather weak and do not draw adequately on what is known about the nature of human memory. Contrary to one’s subjective impressions, memory, like perception, is an active, constructive process, whereby the mind creates an image, idea, experience or whatever from a limited set of raw sensations or memory traces.

We can think of a memory as an inference based on fragments of material; memories and fantasies are rapidly synthesised around such data points, both internal and external, so that cues and prompts greatly facilitate the process of remembering, and recognition memory, of course, is considerably superior to spontaneous unaided recall.

So memory is not like a videotape loop. Ian Wilson uses that simple model to account for the vividness and accuracy of detail of some ‘past life regressions’. He refers to the work of the neurosurgeon Wilder Penfield to support the model. Penfield [2] electrically stimulated portions of exposed cortex of patients on whom he was operating for the alleviation of severe refractory epilepsy. Some patients thus stimulated appeared to relive with great vividness seemingly remote events in their lives, as though indeed an internal ‘videotape loop’ had been triggered. However, only a small minority of patients (less than 4%) showed this effect, and there is no reason to suppose that their experiences represented accurate memories.

The nature of fantasy

I believe we underestimate people’s ability to fantasise creatively and with great vividness and detail. The act of imagining one has a previous life may provide a focus around which fantasies and actual memories may crystallise and form a quite elaborate structure. This is not so different from those tasks set by our history teacher such as ‘Imagine you are a sailor in Nelson’s fleet’. Similar creative activities are the improvisation of a role by an actor or actress and the development of a character by a novelist.

The effects of priming and compliance

With hypnotic age regressions, as with any regression, we also need to take into account the possible effects of priming, that is, the subjects’ knowing in advance that they are going to be ‘regressed to a previous life’. Under such circumstances, subjects would be able to rehearse their impending performance. Neither should we underestimate the motivation of the subject to comply with the demands of the ‘past life regressor’, even to the point of denying prior knowledge of material elicited. (Here I find the ideas of Wagstaff [3] very useful and convincing.) I find it hard to accept Melvin Harris’s interpretation of Jane Evans’s numerous past lives in terms of cryptomnesia, because of the sheer weight of material provided, apparently extracted from several novels. A blanket of source amnesia for such a plethora of detail seems intuitively unlikely. Moreover the literature on hypnosis does not indicate that the recall of factual information is enhanced by hypnosis (although there is good anecdotal evidence from clinical practice of a facilitation of retrieval of repressed traumatic memories). I suspect that the extensive ‘cryptomnesia’ evident in some cases is mostly due to the demands made on the subject to produce a large quantity of seemingly factual information and to deny any previous awareness of it. This is essential for the authenticity of the ‘past life’ experience.

Variations on past life regression instructions

Informal experiments [4] have suggested that if you ask subjects to pretend as hard as they can that they are reliving a past life, their enactments are no less convincing than those of subjects who are put through the hypnotic induction and regression procedures. (As the Bloxham tapes testify, most will be mundane and unconvincing [5], but there are some subjects, perhaps those who have a particularly well-developed facility for creative fantasy, who produce vivid and convincing enactments of ‘previous lives’.) From what is known about hypnotic age regression, this finding would not be surprising.

I predict that if the instructions to return to a past life are framed in such a way as to permit the subject to display prior knowledge of the details elicited, then source amnesia will diminish without compromising the realism and intensity of the experience. In fact I would go as far as to say that the hypnotic regression procedure imposes unnecessary restrictions on the capacity of many subjects to experience a ‘past life’ because of the demand for source amnesia. This demand may be removed by, for example, informing subjects that by using all their knowledge of history gleaned from lessons at school, films, television programmes, books, etc, they will find that their imagination will allow them to enact vividly and realistically the role of a person who lived some time before they were born. (Perhaps the period of history could be chosen by the subject beforehand.) A preliminary period of relaxation and contemplation may be useful to help the subject think himself or herself into the role but, as with age regression, a key factor is the behaviour of the experimenter, who must guide and encourage the subject to develop his or her imagery and must adopt a role which is congruent with the role adopted by the subject.

Conclusion

It is understandable that the topic of ‘past life’ regression should be shunned by psychologists because of its obvious occult and unscientific connotations. If however we view this phenomenon in the context of the study of role enactment and the capacity of individuals to have creative fantasies (and, incidentally, how this capacity may be developed) then there is no reason why experimental psychologists should not regard the subject as a valid one for scientific enquiry.

References

  • [1] I. Wilson, Mind out of Time? Reincarnation Investigated, Victor Gollancz, London (1980).
  • [2] W. Penfield and L. Roberts, Speech and Brain Mechanisms, Princeton University Press, Princeton, NJ (1959).
  • [3] G. F. Wagstaff, Hypnosis, Compliance and Belief, Harvester Press, Brighton (1981).
  • [4] M. T. Orne, paper presented at ‘Measurement and experimental control in hypnosis’, symposium of the Metropolitan Branch of the British Society of Experimental and Clinical Hypnosis, University College London (1982).

“Grey content”: how mainstream journalism accidentally fueled Covid vaccine hesitancy


This story was originally written in Portuguese, and published to the website of Revista Questão de Ciência. It appears here with permission.

Factually accurate news stories with headlines or wording suggesting that Covid-19 vaccines could be harmful had nearly 50 times more impact on increasing vaccine hesitancy among Facebook users than outright lies spread by anti-vaxxer groups. This is one of the key findings of a recent study on misinformation and vaccines published in the journal Science.

The authors of the paper suggest in their conclusions that “instead of focusing exclusively on the accuracy of the facts they report, journalists should also consider whether the resulting narratives leave the reader with an accurate view of the world.”

The study, Quantifying the impact of misinformation and vaccine-skeptical content on Facebook, used surveys involving thousands of people and computer simulations to estimate the impact of misleading content about vaccines on Facebook users.

“Impact”, in this case, was defined as the combination of persuasive power (how much the content tends to affect the reader’s opinion) and reach (how many people actually had access to the content). The authors found that fake news is more persuasive, but that “grey content” – an expression adopted to designate material that does not contain lies, but that induces the reader to see exaggerated risks in vaccination – receives much more exposure.

Material categorised as misinformation (“false” and/or “out of context”) by fact-checkers accounted for just 0.3% of views of vaccine content on Facebook in the first few months of 2021. By comparison, a single headline from the Chicago Tribune, categorised as “grey” – “Healthy doctor dies two weeks after receiving COVID vaccine; CDC investigates” – was viewed by 54.9 million Facebook users, or 20% of all registered users in the US. Posts containing this story were viewed 67.8 million times, or six times more than the combined views of all content flagged by fact-checkers.

The authors calculate that “grey content” reduced the intention to get vaccinated against Covid-19 by 2.3 percentage points among Facebook users in the US. They estimate that, with a “conversion rate” between stated intention and actual behaviour of 60% (a figure found in the literature), this represents, in absolute numbers, approximately 3 million people who did not get vaccinated because of biased journalistic material. False or clearly misleading content would have had an impact of only 0.05 percentage points, affecting approximately 65,000 people.
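These estimates are easy to check with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the ~217 million user base is inferred from the article’s own rounded figures, not taken from the paper itself.

```python
# Back-of-the-envelope check of the impact estimates reported above.
# All inputs are rounded figures from the article, not the original paper.

pp_grey = 2.3 / 100    # drop in vaccination intent attributed to "grey" content
pp_false = 0.05 / 100  # drop attributed to flagged misinformation
conversion = 0.60      # stated-intention-to-behaviour conversion rate

# The article implies a US Facebook user base of roughly 217 million,
# since 65,000 people ≈ 0.05 pp × base × 60% conversion:
base = 65_000 / (pp_false * conversion)

grey_effect = pp_grey * base * conversion
print(f"implied user base:   {base / 1e6:.0f} million")
print(f"grey-content effect: {grey_effect / 1e6:.1f} million people")
# → roughly 217 million users and 3.0 million unvaccinated people,
#   consistent with the figures quoted in the article
```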

False bogeyman

These results reinforce the warning, already issued by numerous experts, that misinformation and bad information are relevant factors, but not crucial or predominant ones, in most vaccine hesitancy scenarios; the overall impact observed here was less than five percentage points.

Trust in the vaccine may be irrelevant if the vaccine is not available, or is expensive, or is only administered during business hours, or by unprepared people, or in places that are difficult to access. “Misinformation” ends up being a convenient scapegoat, a bogeyman to whom failures caused by inadequacies in health systems or incompetence of their managers are attributed.

The study does not look at the causes of the huge difference in reach between fake news and irresponsible professional journalism on Facebook, but speculates that it may derive both from the filters created by the platform to eliminate or reduce the visibility of false content, and from the audience base of these sources – reputable and well-established newspapers and magazines tend to have more followers and readers than individual influencers or activist groups.

Army Spc. Angel Laureano holds a vial of the Covid-19 vaccine, Walter Reed National Military Medical Center, Bethesda, Md., 14 December 2020. (DoD photo by Lisa Ferdinando, CC BY 2.0)

Furthermore, it is not surprising that lies have proven, individually, to be more persuasive than grey journalism: in the case highlighted by the researchers, the news story published by the Chicago Tribune, the headline is appealing and leads the reader to infer a (non-existent) cause-and-effect relationship between the vaccine and the death of the “healthy” doctor, but the text is factually correct and points out the uncertainties involved. A lie is constructed to persuade; the journalistic text is not. Even the appealing headline does not seek to convince – it merely insinuates and, in doing so, draws attention.

According to the authors of the study, “the best predictor of negative persuasive influence [on vaccination intention] is the degree to which the narrative implies health risks from the vaccine.” And it doesn’t matter whether that narrative is based on false or true claims.

Grey journalism

The usual discourse from journalists and news organisations about the dangers of disinformation is marked by a corporatism that sometimes sounds naive, sometimes cynical, and always monumental in its self-indulgence. Ultimately, the message conveyed is that professional journalism – defined by the conjunction of career journalists and traditional news organisations – represents, when it comes to quality of information, 100% solution and 0% problem. That assessment is, to say the least, somewhat disconnected from reality.

It is true that these professionals and companies have reputations to maintain, which generates a strong incentive to seek and publish the most accurate information possible – in addition to having a series of standards, practices, processes and protocols that function as filters and quality controls.

But there are problems. One is mere human fallibility, which guarantees that no system of controls and processes will be invulnerable to errors and biases. There is also the fact that it is perfectly possible to follow these protocols “in letter” while violating their raison d’être, disrespecting them “in spirit”.

Finally, even adherence to the norms and rules of good old journalism is at risk today, with the systematic downsizing of newsrooms, the replacement of rigor with speed, and the ever-increasing pressure to gain audiences at any cost, in which ethical concerns are sacrificed on the altar of clickbait. The Chicago Tribune’s shameful headline fits this bill.

World vision

The study’s authors call for journalists to reflect on the accuracy of the “worldview” they are conveying to their readers, and for media companies to consider that the public can react to news in ways that “cause real-world harm” – a call that, at first glance, clashes with industry common sense. The latter holds that both the reader’s “worldview” and what they do with the information published should be treated as irrelevant – the journalist’s duty, after all, is to inform, not to engage in social engineering.

The principle is sound, but it hides several subtleties. The first is that the way in which facts – information – are presented telegraphs ways of seeing the world: the choice of words, the sequence of facts, the framing of events and, returning to the example of the Chicago Tribune, the wording of the headline, all can, consciously or not, privilege in the eyes of the reader a certain interpretation among many possibilities. In this regard, a concern for accuracy would not be out of place.

News organisations have also shown themselves to be sensitive to social science findings that indicate the harmful effects of certain types of news. This is why, almost unanimously, responsible media outlets do not report on suicides or, when they do, they address the issue with caution. There are some scientific findings that indicate a “social contagion” effect between news about suicide and a subsequent increase in the number of suicide cases, mainly affecting adolescents.

It would be great if the recent publication in Science led to an equally sober reaction when it comes to vaccines – and, why not, health issues in general.

Everything you ever wanted to know about astrology – for free


On Saturday, 16 November 2024, I presented a 30-minute talk on “Astrology and Science” at Conway Hall in London as part of “Divination Day”, organised by the London Fortean Society (subtitle: “Astrology, Tarot, Tomorrow and Us”). It was quite a strange experience, as I was the only speaker of the day who actually offered a skeptical perspective on divination. It was clearly not what the audience wanted to hear. Maybe readers of The Skeptic will appreciate it more – so here it is.

In preparing my short talk, I made a lot of use of the volume “Understanding Astrology: A critical review of a thousand empirical studies 1900-2020” by Geoffrey Dean, Arthur Mather, David Nias, and Rudolf Smit, published in 2022 by AinO Publications in Amsterdam. This massive volume (948 pages) is a labour of love by four of the world’s leading experts on empirical tests of astrology that has taken many decades to compile. Although the book can be purchased directly from the publisher, this invaluable resource can also be downloaded absolutely free. This book really does tell you everything you ever wanted to know about astrology.

As for my talk, I started in the traditional manner with a definition. Astrology is defined by the Cambridge online dictionary as follows: “the study of the movements and positions of the sun, moon, planets, and stars in the belief that they affect the character and lives of people”. I pointed out that astrology is also used, amongst other things, in attempts to predict major terrestrial events such as wars, natural disasters, and famines.

By way of preliminary comments, I made a few points that in and of themselves ought to prompt a certain degree of caution in assessing the bold claims of astrologers. For example, although my talk dealt mainly with Western astrology, there are several other versions of astrology, such as Chinese, Hindu, and more – and they all contradict each other. Clearly, they cannot all be valid (and, as it turns out, none of them are).

While no one really knows what the origins of astrology are, one thing we can be absolutely certain of is that, contrary to the claims of many astrologers, it is not based upon careful observation and empirical analysis. For one thing, there are no known records of any such exercise. For another, “there are far too many combinations [of factors] in astrology for our unaided abilities to make sense of” (Dean et al, 2022, p. 70). It is also worth noting that, according to physics, there is no known mechanism whereby astrology could work.

The Cancer entry in a newspaper astrology column by Phillip Alder. Photo by Amayzun, Flickr, CC BY-NC-ND 2.0

Professional astrologers are often very dismissive of star sign (or sun sign) astrology in spite of the fact that many of them are handsomely rewarded for writing star sign columns for newspapers, magazines, and online sites.

They insist that the power of astrology can only be revealed by examination of “the real thing”; that is, the casting of a full horoscope by a professional astrologer based on exact birth details which is then interpreted in a face-to-face consultation between astrologer and client. The truth is that, when properly tested, the “real thing” turns out to have exactly the same level of validity as star sign astrology; that is to say, none whatsoever.

I also wanted to deal up-front with a couple of pseudo-controversies that skeptics often delightedly (and, in my view, misguidedly) point to as allegedly killer blows for astrology. The first of these is the phenomenon of precession. Because the axis of the Earth’s rotation has a slight wobble, the stars overhead do not appear to be in the same position as they were 2,000 years ago, when Western astrology originated. In fact, the constellations that the signs of the zodiac were named after have all moved by roughly one sign. Despite this, Western astrology today is much the same as it was 2,000 years ago. How can this be when, for example, the Sun was actually in Taurus when people who think they are Geminis were born?

Furthermore, it is often claimed that there are, in fact, 13 signs of the zodiac, not 12. The Babylonians knew all about Ophiuchus, the so-called 13th sign, but simply chose to ignore it as they had a bit of a thing about the number 12. The point is that neither precession nor the “13th sign” are recent discoveries.

Astronomers did not suddenly look at the night sky and declare, “Oh my goodness, there’s a constellation that we missed!”. Astrologers have long known about both of these astronomical phenomena but they insist that they are totally irrelevant when it comes to the validity of astrology. For one thing, constellations and astrological signs are not the same thing, despite having the same names. More importantly, astrologers insist, they know astrology is valid – because it works! But does it?

When I was running my anomalistic psychology module at Goldsmiths I used to set the following essay title as a tutorial topic: Does astrology work? The best essays were those which essentially answered, “Yes and no”. In what sense does astrology work? Dean (2016, p. 45) summarises the case for astrology:

Astrology is among the most enduring of human beliefs and has undisputed historical importance. A warm and sympathetic astrologer can provide wisdom and therapy by conversation with great commitment that in today’s society can be hard to find. To many people astrology is a wonderful thing, a complex and beautiful construct that draws their attention to the heavens, making them feel they are an important part of the universe.

But when it comes to the question of whether or not astrology works in the sense of having any scientific validity, the answer is a resounding, “No”! Sometimes astrologers claim that science is simply incapable of testing astrology – and yet you can guarantee they will enthusiastically embrace any empirical findings that appear, at first sight, to offer support for astrology.

This is nicely illustrated by the reaction of astrologers to the publication in 1978 of the results of a study by Mayo, White, and Eysenck. At the time, Hans J. Eysenck was the most influential living British psychologist (albeit that his reputation has taken quite a battering since then). Extraversion scores, assessed using the Eysenck Personality Inventory, were collected from a large sample of adults (n = 2324) and plotted against astrological sign. According to traditional Western astrology, the so-called fire and air signs (Aries, Leo, and Sagittarius; and Gemini, Libra, and Aquarius, respectively) should be more extravert than the earth (Taurus, Virgo, and Capricorn) and water signs (Cancer, Scorpio, and Pisces) – and this is precisely what was found. The results were hailed by astrologers as “possibly the most important development for astrology in this century”.

Their reaction changed somewhat when it was subsequently realised that this pattern of results was due to an artefact known as self-attribution bias. If astrology really worked, it should work whether or not the individuals assessed know anything about their own star sign. In fact, this pattern of results was only found for people familiar with their own star sign. It appears that, whether one believes in astrology or not, most of us know enough about the supposed astrological characteristics of our star sign for it to have enough influence on the way that we complete a personality questionnaire to artefactually produce the reported results.
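The self-attribution artefact is easy to demonstrate with a toy simulation. The sketch below is not a model of the Mayo et al. data: the 0.5-point “stereotype nudge”, the sample size, and the unit-normal extraversion scores are all arbitrary assumptions. It simply shows how a sign–extraversion gap can appear among subjects who know their star sign, even when true extraversion is entirely independent of sign.

```python
import random

random.seed(42)

# Fire and air signs, stereotyped as extravert in Western astrology
FIRE_AIR = {"Aries", "Gemini", "Leo", "Libra", "Sagittarius", "Aquarius"}
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo", "Libra",
         "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def extraversion_gap(knows_sign, n=5000, nudge=0.5):
    """Mean extraversion gap (fire/air minus earth/water) in a simulated sample.

    True extraversion is drawn independently of sign; subjects who know
    their sign shift their self-report toward the stereotype by `nudge`
    points (a deliberate simplification: only fire/air subjects shift up).
    """
    fire_air, earth_water = [], []
    for _ in range(n):
        sign = random.choice(SIGNS)
        score = random.gauss(0, 1)           # no genuine sign effect
        if knows_sign and sign in FIRE_AIR:
            score += nudge                   # self-attribution bias
        (fire_air if sign in FIRE_AIR else earth_water).append(score)
    return sum(fire_air) / len(fire_air) - sum(earth_water) / len(earth_water)

print(f"gap, sign-aware subjects:   {extraversion_gap(True):+.2f}")
print(f"gap, sign-unaware subjects: {extraversion_gap(False):+.2f}")
```

Run it and the sign-aware sample shows a clear fire/air advantage of about half a point, while the sign-unaware sample hovers near zero – the same asymmetry that unmasked the Mayo, White, and Eysenck result.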

Astrology diagrams on paper, by Mira Cosic, Pixabay

As indicated by the subtitle of Dean et al’s volume, the predictions of astrology have now been put to the empirical test (at least) a thousand times. The results overwhelmingly fail to support the validity of astrology. When occasional apparently positive significant results in support of astrology are reported, they are invariably found to be due to such factors as inappropriate statistical analysis, inadequate sample sizes, and a range of other artefacts as meticulously detailed by Dean and colleagues.

By far the longest chapter in Understanding Astrology is Chapter 7, which provides critical analysis of a plethora of individual studies of astrology’s claims. Helpfully, Chapter 8 provides overviews of tests divided up into a number of different categories. With respect to tests of signs, the conclusion is clear: “signs (not just sun signs) [are] the most tested and most disconfirmed idea in astrology. In short, no factor involving signs can have any practical effect beyond that due to expectation and role-playing” (Dean et al, p. 772).

When it comes to tests of astrologers themselves, the picture is equally bleak. For one thing, the degree of agreement between astrologers assessing the same chart is abysmally low. Tests relating to geophysical factors, time twins, predictions, horary astrology, mind-related factors, divination, and wrong charts fare no better. The overall conclusion is inescapable. Despite a huge amount of time, effort, and resources having been directed at testing astrology, there is no evidence whatsoever that it has any validity.

In my talk, I also considered the issue of the scientific status of astrology. Up until fairly recently, astrologers were keen to claim that astrology was a true science as shown by a plethora of quotations on page 86 of Understanding Astrology. Indeed, some went so far as to describe it as “the oldest science in existence” and “the most exact of all the exact sciences”. More recently, many astrologers have adopted a negative attitude towards science, no doubt as a result of the accumulation of consistently negative findings from properly conducted scientific tests of astrology.

When it comes to discriminating between science and pseudoscience, a number of commentators have proposed lists of the features that typically characterise pseudoscience. My favourite such list was that proposed by the late Scott Lilienfeld (2005):

  • A tendency to invoke ad hoc hypotheses, which can be thought of as “escape hatches” or loopholes, as a means of immunising claims from falsification
  • An absence of self-correction and an accompanying intellectual stagnation
  • An emphasis on confirmation rather than refutation
  • A tendency to place the burden of proof on skeptics, not proponents, of claims
  • Excessive reliance on anecdotal and testimonial evidence to substantiate claims
  • Evasion of the scrutiny afforded by peer review
  • Absence of “connectivity”, that is, a failure to build on existing scientific knowledge
  • Use of impressive-sounding jargon whose primary purpose is to lend claims a façade of scientific respectability
  • An absence of boundary conditions, that is, a failure to specify the settings under which claims do not hold.

When astrology is assessed in terms of these features, I think any reasonable person would agree that astrology is indeed the Queen of Pseudosciences.

If astrology has no scientific validity whatsoever, the obvious question raised is why do so many people believe in it? Dean et al do an excellent job of addressing this question in Chapter 9 on Artifacts. I only had time to address this issue briefly (my whole talk was only 30 minutes long after all!) in terms of such factors as the Barnum effect (AKA Forer effect), cold reading (intentional and unintentional), subjective validation, making the chart fit the client, making the client fit the chart, selective memory, self-fulfilling prophecies, and the use of (generally) positive readings.

I finished with my favourite astrology cartoon from Punch. A bemused man is staring at the TV as a newscaster reports that, “The practice of astrology took a major step towards achieving credibility today when, as predicted, everyone born under the sign of Scorpio was run over by an egg lorry”. I got the distinct impression that my audience did not appreciate the cartoon as much as I did.

References

  • Dean, G. A. (2016). Does astrology need to be true? A 30-year update. Skeptical Inquirer, 40(4), 38-45.
  • Dean, G., Mather, A., Nias, D., & Smit, R. (2022). Understanding Astrology: A critical review of a thousand empirical studies 1900-2020. Amsterdam: AinO Publishers.
  • Mayo, J., White, O., & Eysenck, H. J. (1978). An empirical study of the relation between astrological factors and personality. Journal of Social Psychology, 105, 229-236.

First do no harm? Treatments don’t need to be harmless, as long as they do good

According to a widespread belief, the demand to ‘first do no harm’ originates from the Hippocratic oath, which all doctors take when finishing medical school. Few of us appreciate that both of these assumptions are incorrect. Firstly, doctors do not normally take the Hippocratic oath – you only need to read it to understand why; it contains many points that would make little sense today. And secondly, the Hippocratic oath does not actually include the phrase ‘first do no harm’.

Nevertheless, you might assume, doctors are obliged to do no harm. After all, isn’t this an important principle of medical ethics? In fact, this assumption is also not entirely correct.

Strictly speaking, doctors need to do harm all the time. Their injections hurt, their diagnostic procedures can be painful, their medications can cause adverse effects, their surgical interventions are full of risks, and so on. None of this would be remotely acceptable if doctors had to adhere to the principle of first doing no harm.

The ethical imperative of doing no harm has therefore long been changed to the demand of doing more good than harm. Of course, doctors must be allowed to do harm, even quite serious harm, as long as their actions can be expected to generate more good overall.

In medical terms, we speak of the risk/benefit balance of an intervention. If the known risks of a treatment are greater than the expected benefits, we cannot ethically administer or prescribe it; however, if the benefits demonstrably outweigh the risks, we can consider it a reasonable option.

But what about the many treatments where there is uncertainty regarding either the risks or benefits, or both? In such cases of incomplete evidence, we need to look at the best data currently available and, together with the patient, try to make an informed judgement.

Perhaps this is best explained by running through a few exemplary scenarios in which homeopathy (the classic example of a therapy that is promoted as being entirely harmless) is being employed.

Homeopathy: there’s nothing in it

Scenario 1: Patient with a self-limiting condition

Let’s assume our patient has a common cold and consults her physician, who prescribes a homeopathic remedy. One could argue that no harm is done in such a situation. The treatment cannot be expected to help beyond a placebo effect, but the cold will disappear in just a few days, and the patient will not suffer any side effects of the prescription. This attitude might be common, but it disregards the following potential for harm:

  1. The cost of the treatment
  2. The possibility that our patient suffers needlessly for several days with cold symptoms that might easily be treatable with a non-homeopathic therapy
  3. The possibility of our patient getting the erroneous impression that homeopathy is an effective therapy (because, after all, the cold did eventually go away), and therefore opts to use it for future, more serious illnesses.

What if the physician only prescribed homeopathy because the patient asked him to do so? Strictly speaking, the above issues of harm also apply in this situation. The ethical response of the doctor would have been to inform the patient what the best evidence tells us (namely that homeopathy is a placebo therapy), provide assurance about the nature of the condition, and prescribe effective symptomatic treatments as needed.

And what if the physician does all of these things and, in addition, prescribes homeopathy because the patient wants it? In this case, the possibility of harms one and three still apply.

Scenario 2: Patient with a chronic condition

Consider a patient suffering from chronic painful arthritis who consults her physician, who prescribes homeopathic remedies as the sole therapy. In such a situation, the following harms need to be considered:

  1. The cost for the treatment
  2. The possibility that our patient suffers needlessly from symptoms that are treatable. As these symptoms can be serious, this would often amount to medical negligence.

What if the physician only prescribed homeopathy because the patient asked him to do so, and refused any conventional therapies? In such cases, it is the physician’s ethical duty to inform the patient about the best evidence as it pertains to homeopathy as well as effective conventional treatments for their condition. Failure to do so would amount to negligence. The patient is then free to decide, of course. But so is the physician; nobody can force them to prescribe ineffective treatments. If no agreement can be reached, the patient might have to change physician.

And what if the physician does inform the patient adequately, but also prescribes homeopathy because the patient insists on it? In this case, the possibility of the above harms still applies.

Scenario 3: Patient with a life-threatening condition

Consider a young man with testicular cancer (a malignancy with a good prognosis if adequately treated). He consults his doctor, who prescribes homeopathic remedies as the sole therapy. In such a situation, the physician is grossly negligent and could be struck off because of it.

What if the physician prescribed homeopathy because the patient asked him to do so, and refused conventional therapies? Again, in such a case, the physician has an ethical duty to inform the patient about the best evidence as it pertains to homeopathy and to the conventional treatment for his cancer; failure to do so would be negligent. Again, the patient is free to decide what they want to do, as is the physician. If no agreement can be reached, the patient might wish to change his doctor.

And what if the physician does inform the patient adequately, makes sure that he receives effective oncological treatments, but also prescribes homeopathy because the patient insists on it? In this case, there is still the possibility of harm regarding cost, as well as the potential to leave the patient with the impression that homeopathy is effective, which may lead to further harm in the future.

In conclusion

These scenarios are of course theoretical and, in everyday practice, many other factors might need considering. They nevertheless demonstrate why the demand ‘first do no harm’ is today obsolete and has had to be replaced by ‘do more good than harm’.

The latter principle does not support homeopathy (or any other ineffective so-called alternative medicine [SCAM]). In other words, the use of allegedly harmless but ineffective treatments is not ethical. But what if a clinician strongly believes in the effectiveness of homeopathy (or other SCAM)? In this case, they are clearly not acting according to the best available evidence – and that, of course, is also unethical.

The conclusions of all this are, I think, twofold. First, the ethical imperative of ‘first do no harm’ is often misunderstood, particularly in the realm of SCAM. Second, it cannot provide a sound justification for employing therapies that are (allegedly) free of adverse effects.

Pacemakers don’t work when they’re switched off – we should doubt studies that say otherwise


‘Nerdstock’ was the less-than-flattering name that BBC Four gave to Robin Ince’s ‘Nine Lessons and Carols’ when it was broadcast in early 2010. I was unable to attend in person, so I was more than happy to discover that the BBC had recorded it for later broadcast, even if the title wasn’t my cup of tea.

One particularly vivid recollection is of Ben Goldacre, author of the excellent book Bad Science, bounding on stage with that terrific, restless enthusiasm, and giving a whistle-stop tour of the amazing power of the placebo effect and its evil twin, the nocebo.

‘Pacemakers,’ he spilled into the microphone, ‘improve congestive cardiac failure after they’ve been put in, but before they’ve been switched on!’

The comment was clearly crafted and timed to elicit the laugh it earned, but it was also a comment which made me sit up and pay attention. Pacemakers improve congestive cardiac failure after they’ve been put in, but before they’ve been switched on? As Goldacre quickly commented, this is a ‘properly outrageous’ finding. And it’s one which piqued my interest.

The study behind this claim was published in the American Journal of Cardiology in 1999. Linde et al. recruited 81 patients with hypertrophic obstructive cardiomyopathy (HOCM, a thickening of the heart wall) and fitted them with pacemakers. The patients were randomly assigned to one of two pacemaker settings: ‘atrioventricular synchronous pacing’ or ‘atrial inhibited mode’ at 30 beats per minute.

In AV synchronous pacing, the pacemaker coordinates the electrical activity between the atria and the ventricles. It detects the natural electrical signals of the atria and delivers a pacing pulse to the ventricles with an appropriate delay, mimicking the heart’s normal conduction pathway and ensuring proper timing between contractions. In other words, the pacemaker is switched on.

In contrast, the atrial inhibited mode only stimulates the heart if it fails to beat normally. In this study, the inhibited pacemakers were set to 30 beats per minute, meaning the heart would have to pause for two full seconds before a pulse was triggered. Since this won’t typically happen, these pacemakers were effectively placebos. They are fully implanted, but do not stimulate the heart.

Several key metrics were recorded at baseline. The New York Heart Association (NYHA) functional classification is a scale to measure the extent of the patient’s heart failure, ranging from one (you have no symptoms and no limitation on physical activity) to four (severe limitation, symptoms even at rest, usually bedridden).

Another metric was left ventricular outflow tract (LVOT) gradient, a measurement of the pressure difference between the aorta and the left ventricle during blood ejection. In patients with HOCM, higher pressure is expected because the thickened heart muscle forces the same blood volume through a narrower passage.

Linde also recorded the time in minutes each patient could tolerate exercise, their peak oxygen uptake, their peak heart rate during exercise, and systolic anterior motion, an abnormal movement of the mitral valve.

Three months after implantation, the patients were brought back and the measurements were repeated. Linde reported that patients with inactive pacemakers saw a statistically significant decrease in their LVOT gradient.

Pacemakers improve congestive cardiac failure after they’ve been put in, but before they’ve been switched on!

Of course, there is far more nuance to this study than that.

As we have touched upon in previous articles, statistical significance is typically measured using a p-value. This is a number between zero and one which expresses the probability of getting these results, or more extreme results, even if there is no true effect.

By convention, scientists have collectively agreed that the threshold for what is considered ‘significant’ should be 0.05, or 5%. While this is a widely accepted convention, it is ultimately arbitrary. Some researchers advocate for a stricter threshold, moving it from 0.05 (5%) to 0.01 (1%) or even to 0.005 (0.5%).
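What that 5% threshold means in practice can be made concrete with a simulation. The sketch below (using only the Python standard library, and a normal approximation to the t-test; the figures are illustrative, not data from the Linde study) runs thousands of experiments comparing two groups drawn from the same distribution, so there is no true effect, and counts how often the result still comes out ‘significant’ at p < 0.05:

```python
import math
import random

def false_positive_rate(n_experiments=10_000, n_per_group=50,
                        alpha=0.05, seed=1):
    """Fraction of null experiments that reach p < alpha by chance."""
    random.seed(seed)
    hits = 0
    for _ in range(n_experiments):
        # Both groups come from the same distribution: no true effect.
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        mean_a, mean_b = sum(a) / n_per_group, sum(b) / n_per_group
        var_a = sum((x - mean_a) ** 2 for x in a) / (n_per_group - 1)
        var_b = sum((x - mean_b) ** 2 for x in b) / (n_per_group - 1)
        t = (mean_a - mean_b) / math.sqrt(var_a / n_per_group +
                                          var_b / n_per_group)
        # Two-sided p-value via the normal approximation (fine for n=50).
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
        if p < alpha:
            hits += 1
    return hits / n_experiments

print(f"{false_positive_rate():.3f}")  # roughly 0.05
```

Around one experiment in twenty ‘finds’ an effect that isn’t there, which is exactly what a 0.05 threshold permits.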

The LVOT gradient change reached statistical significance with a p-value of 0.04. Although this meets the conventional threshold, it is marginal and would not qualify as significant under stricter criteria. By contrast, the change in LVOT with the active pacemaker had a significance level of < 0.0001 – below one hundredth of one percent!

Furthermore, the other measurements don’t support the claim that this was a real clinical improvement. Exercise tolerance and peak heart rate significantly improved in the active pacemaker group but showed no improvement in the placebo group. There was also no change in the NYHA functional class for the placebo group, meaning these patients did not experience an improvement in their overall heart failure symptoms. In contrast, the active pacemaker group improved by a full class, on average, from 2.6 to 1.7.

In total, of the six objective measures assessed, five showed no significant change for the placebo pacemaker group. The only measure that showed an effect, LVOT gradient, was borderline significant, and is likely spurious.

A p-value is meaningful for an individual outcome measure, but every additional measure you make is another opportunity to record a fluke finding by chance. Measure two things and you’re roughly twice as likely to find something significant; three things, and you’re roughly three times as likely, and so on. If Linde’s figures were correctly adjusted to account for the many different outcomes recorded, the LVOT gradient change would be exposed as random noise.
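The simplest such adjustment, the Bonferroni correction, multiplies each p-value by the number of comparisons made. The reported 0.04 and the six objective outcomes are from the study; applying the correction is our own back-of-the-envelope sketch:

```python
def bonferroni(p, n_tests):
    """Bonferroni-adjusted p-value, capped at 1.0."""
    return min(p * n_tests, 1.0)

lvot_p = 0.04     # reported p-value for the placebo group's LVOT change
n_outcomes = 6    # six objective measures were assessed

adjusted = bonferroni(lvot_p, n_outcomes)
print(f"{adjusted:.2f}")  # 0.24 – far above the 0.05 threshold
```

Once the six comparisons are accounted for, the ‘significant’ LVOT finding is nowhere near significant.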

Alongside the objective measurements, Linde also recorded several subjective measures, such as chest pain, dizziness, and reported palpitations. As we have discussed elsewhere, subjective measurements must be interpreted cautiously because of their potential to be shaped by bias. Subject expectancy effects, answers given out of politeness, and other forms of response bias can lead patients to report changes which don’t reflect any real-world difference. Bearing this in mind, it is notable that of the 14 subjective outcomes measured, only five showed a significant effect in the placebo group, and all but one of these (palpitations) disappear when an adjustment is made for the number of outcomes measured in this study.

Another problematic aspect of this study was that three patients had their placebo pacemakers reconfigured for ‘active’ pacing part way through the study, because they complained to their doctors that the treatment wasn’t working. The paper isn’t clear what happened to the data from these patients. Was it removed from the analysis altogether? It doesn’t appear to have been, as the size of the inactive pacing group is unchanged at the end. Were the patients assessed early and their data included anyway? No such early assessment is mentioned. Unfortunately, either approach risks skewing the data in favour of the placebo effect by deemphasising the patients who failed to respond to placebo.

In short, while Linde’s study does not provide strong evidence for a real therapeutic placebo effect, it is exactly the kind of research that continues to be cited as evidence for the power of placebo. The reported effects are vanishingly small, and their clinical utility is dubious. I remain skeptical.