
Regality in the modern world: can a skeptic be a monarchist?

After the party, some reflection is needed. Charles III’s coronation was impressive. Presumably, many in the skeptic community were thrilled by Lionel Richie’s tunes, and the pageantry and elegance of the event. But a fundamental question ought to remain in the mind of skeptics: in a 21st Century Western nation, is monarchy justified?

Monarchies have long been associated with irrational thinking. Regal selections have often been left to the pronouncements of prophets and the outcomes of ordeals. If the Bible story is to be believed, David became king because Samuel – a man claiming to speak for God – chose him in that capacity after hearing God’s instructions. Sometimes, corruption has made its way through the process. Herodotus tells the amusing story of how Darius became the Persian king. There were six candidates, and they decided that the person whose horse neighed first at sunrise would become king. Darius secretly instructed a slave to rub his hands on a mare’s genitals, and then place them near the nostrils of Darius’ horse. The horse was naturally excited; it neighed, and the rest is history.

Throughout the ancient world, kings were sacred, and supernatural powers were routinely attributed to them. Pharaohs were thought to become the god Osiris when they died. Roman religion was big on emperor worship. By the Middle Ages, kings were no longer explicitly worshipped, but they were still believed to hold magical powers, such as the royal touch: kings could allegedly cure diseases merely by touching people, not unlike Jesus and other miracle workers.

It is not surprising that monarchies are so bound up with religious concepts: their very power relies not on rational political deliberation, but rather on appeals to mysterious supernatural forces. As modernity and the disenchantment of the world took hold in Western societies as a result of the Enlightenment, monarchies began to disappear. In the modern Western world, some monarchies have managed to survive only to the extent that they have preemptively diminished their own powers. Louis XVI’s head rolled because he stubbornly refused to compromise his absolutism.

In contrast, the British monarchy is still around, largely because royals – or more likely, their advisors – realised that a dilution of monarchical power in a parliamentary system could keep revolutionary fervour in check. As long as they are merely ceremonial figures, people in modern societies are more willing to accept monarchs. The late Queen Elizabeth understood this concept, and gracefully remained apolitical. There are doubts about whether King Charles will be as wise. After all, this is the man who wrote the infamous “black spider memos,” using his influence to lobby politicians into favouring his pet agendas.

But even devoid of absolutist powers, monarchies ultimately rely on an irrational principle that skeptics ought to challenge. That is the heredity principle. A society maximises its efficiency and justice by allotting roles on the basis of merits and capacities. The accident of having been born in a particular family cannot be a relevant criterion. Unlike elected officials, kings are not hired for their job via a contract; their birth status determines their role and privilege. We would find it laughable if David Beckham’s son were designated as Manchester United’s star midfielder, solely on the grounds of his father’s glorious antics at Old Trafford. We appreciate the senior Beckham’s amazing feats, but if the lad wants a place in the Premier League, he must earn it with his own merits. Why should it be any different with heads of State?

Admittedly, not every position in society can be contractual. Parents naturally rule over their children. Some monarchists have made the case that a king acts very much as a father, and therefore needs no contract. This was the argument put forth by 17th Century philosopher Robert Filmer in Patriarcha, a classic defence of royal absolutism. Such arguments may hold some water if applied to tribal chieftains or petty kings, whose domains may resemble family structures. But the intimate world of family decision-making bears little resemblance to the complexities of modern industrialised nations.

Monarchies may have some advantages. They provide a sense of national unity, but this often shades into jingoism. To the extent that it is founded on blind allegiance and little deliberation, love of king ultimately resembles religious zealotry, and very few good things can ever come out of that.

Monarchies may also provide political stability. But that stability is similar to imperial peace: conflict is diminished, but ultimately at the expense of fairness and freedom.

It is debatable whether monarchies – and especially the British monarchy – contribute to a nation’s public revenue via tourism. But even if they did, skeptics ought to aim for higher principles. The sale of indulgences also contributed to public revenue, and thanks to that business scheme, we have the marvellous Saint Peter’s Basilica in Rome. Nevertheless, we are justifiably repulsed by Johann Tetzel’s infamous slogan, “As soon as the coin in the coffer rings, the soul from purgatory springs.” In the same manner, perhaps royal symbols help a nation make a buck. But in the long term, wealth production is sustained by efficiency and rationality. This implies that contract must prevail over status, and positions must be allotted on the basis of merit, not birthright. If we seek to apply these meritocratic principles throughout society, what is stopping us from applying them to the very top position a nation can allot?

The fifth horseman: environmental determinism rides again in ‘An Inconvenient Apocalypse’

Cover image: ‘An Inconvenient Apocalypse: Environmental Collapse, Climate Crisis and the Fate of Humanity’ by Wes Jackson and Robert Jensen – a black cover with a drawing of a lit match and the title in white.

‘An Inconvenient Apocalypse’, by sustainable agriculture pioneer Wes Jackson and journalist and academic Robert Jensen, is a manifesto for acceptance of society’s imminent collapse based on ancient ideas about the fixity of human nature. We’re told we’re on a road to nowhere, having made a wrong turn 10,000 years ago with the adoption of agriculture, and there’s little we can do but brace ourselves.

Ever since humans learned to domesticate plants and animals and take up a sedentary existence, so the argument goes, we have been addicted to dense energy in the form of rich sources of carbon. Adoption of agriculture led to food surpluses, the division of labour, social hierarchies and inequality, and the only cure for this affliction is a radical reduction in the size of the human population. The book’s motto, ‘fewer and less’ – fewer people, less stuff – is underlined by the flaming match on the cover. A soft landing is out of the question; we are about to be cooked.

How cooked? “Hard times are coming for everyone…there are no workable solutions to the most pressing problems of our historical moment. The best we can do is minimize the suffering and destruction” (p 10). “…the human future, even if today’s progressive social movements were to be as successful as possible, will be gritty and grim” (p 11). “We assume that coming decades will present new challenges that require people to move quickly to adapt to the fraying and eventual breakdown of existing social and biophysical systems” (p 58). “…the bad ending will not be contained to specific societies but will be global” (p 63).

How soon? “I work on the assumption that if not in my lifetime, it’s likely coming within the lifetime of my child,” says Jensen (p 75), making it within the next twenty to fifty years, with the qualifier “…it is folly to offer precise predictions” (p 68).

Justification for the extent, severity and timing of the coming collapse relies on the claim that “…people who pay attention to the ecological data – whether or not they acknowledge it to others – are thinking apocalyptically” (p 76).

What follows? The authors’ vision of a sustainable post-apocalyptic future is humanity reorganised into communities of no more than 150 people collectively managing their birth and death rates such that the aggregate global population remains under two billion. A tough ask, to which this warning is added: “Finding a humane and democratic path to that dramatically lower number will not be easy. It may not be possible” (p 54).

If that afterthought triggers memories of the brutal ideological experiments of the twentieth century, we are encouraged to stay strong, as failure to grapple with the hard question of population is “…an indication of moral and intellectual weakness” (p 48).

What makes them so sure? In two words: environmental determinism – the view that the physical environment shapes human culture, in this case that agriculture was the beginning of the end. This ancient idea, recently revived in popular culture, is not, as the authors suggest, an inescapable aspect of human culture but a contested concept within history, geography, anthropology and philosophy. An alternative view, that human culture is jointly shaped by free will, biology and the physical environment, is not contradictory as the authors claim (p 132) but entirely compatible, and one held by the majority of the 7,600 philosophers surveyed in 2020.

The version of environmental determinism presented here demonises dense energy as the metaphorical apple in the garden of Eden, but there is nothing intrinsically wrong with dense energy. The problem is that we are consuming it faster than it is being produced, in ways that liberate more greenhouse gases (GHGs) than are sequestered during its formation. With only 14% of global energy derived from renewables, half of which are biofuels that are typically net contributors to GHGs, we have a mountain to climb. But as the authors acknowledge at the outset, that task is political: “If we can’t align our living arrangements with the laws of physics and chemistry, we are in trouble” (p 3). By chapter two the ‘if’ has gone and trouble is inevitable.

What environmental determinism tends to overlook is that evolution is evolving. Complementing Darwinian natural selection, in which the environment acts as selector, animals have learned to participate in their own selection: by learning from each other, by predicting the consequences of their actions, and by using tools. All of this blurs the distinction between human nature and human culture.

What will ultimately determine our fate is not an argument from first principles that we are slaves to our environment but the difference between the rate of growth in the human population (which has been in decline for forty years) and the rate of increase in the use of carbon neutral energy (which is accelerating). And while humanity may not change behaviour rapidly enough to avoid multiple crises for decades to come, the ‘…breakdown of existing social and biophysical systems’ is neither predetermined nor inevitable.

At one point the authors make the perfectly reasonable proposition that our ignorance of the world will always vastly outweigh our knowledge (p 67). Despite this, they confidently predict our fate on the basis of a narrative about events occurring 10,000 years ago that are in dispute and constantly being reinterpreted in the light of new discoveries. In the last 30 years, evidence has emerged that hierarchical hunter-gatherers predated agriculture and that egalitarian farming communities predated the great civilisations of antiquity, implying that the means of subsistence does not adequately explain the origin of social hierarchies, greed and inequality.

A clue to the authors’ conviction comes in chapter three, ‘We Are All Apocalyptic Now’, based on a previous book of Jensen’s. It suggests that a secular reading of the Hebrew prophets, the apocalyptic literature and the Christian concept of grace helps us come to terms with our fate. Strip these texts of their supernatural content and we are left with sin and retribution without the prospect of salvation or redemption. ‘Ecospheric grace’, gratitude for the gift of life, is offered in their place.

For those who believe humanity is shaped by the interplay between genes, environment and free will, a more helpful guide to the future would be a secular reading of Luke 4:23, “Physician, heal thyself”, for as the authors say, ‘…a predisposition does not condemn us to act out our instincts’ (p 122). The dominant culture might be incapable of change due to the power of vested interests, but that is in our hands. Not the gods, the landscape or the laws of physics.

‘An Inconvenient Apocalypse: Environmental Collapse, Climate Crisis and the Fate of Humanity’ by Wes Jackson and Robert Jensen, University of Notre Dame Press, 2022, ISBN 978-0-268-20366-5, £17.47

A double-edged sword: should we be labelling kids?

There is no doubt, at least in my mind, that childhood, and certainly the teenage years, have become progressively harder over the last 30 years. There are many and varied reasons for this. Social media is undoubtedly a mixed blessing, benefitting some youngsters and harming others, but the pressures on children to conform are greater than ever. Being continually measured against your classmates, for the league table that the school’s reputation depends on, is another factor. Rates of anxiety and depression in teenagers have been steadily rising, and this has got worse over the pandemic. Along with this increase, more children are being diagnosed as neurodiverse. This is partly because the parameters for the diagnosis have changed, but also because of increased recognition in children who aren’t seriously disabled by their condition.

Before we look at some of the pros and cons of labels, I should mention that there is a substantial literature on labelling theory which I don’t propose to examine here. I will simply look at a few common applications of labels and whether they are helpful, or otherwise, in practice.

Labels can be helpful…

Labels change the attitude of the observer. If a teacher knows that a child is neurodiverse, they may cease to use unhelpful labels such as ‘naughty’, and have ready plans that can be implemented. The label may also help the individual understand some of their own difficulties: “I am autistic, which is why I’m having to work harder to understand what someone is feeling, when my friends just seem to know.”

A mental health label can also help a youngster to feel validated: “I am depressed, I’m not just being difficult, so it’s valid for me to take time out”. The label can also suggest potential ways forward, such as engaging in therapy. A label is often a diagnosis, and NICE (the National Institute for Health and Care Excellence) recommends treatments by diagnosis, looking at the evidence to recommend the best treatments and therapies.

Labels, therefore, can be useful: they can validate the individual, suggest treatments, and act as shorthand for others to begin to understand and help.

…or not so helpful

Labels also have many and various downsides. The harms and benefits depend on the type of label, so it’s worth addressing them separately.

Identifying that a child is neurodivergent is helpful for that child. But what about the borderline between diagnosis and no diagnosis? Although we talk about a spectrum, and on the face of it we acknowledge that there is no hard and fast cut off, in reality the situation is treated as a clear binary: you are either neurodivergent or you are not. Someone may have many traits, but just miss the cut off. So despite having almost the same difficulties as the child who falls just the other side of the line, they will be treated entirely differently.

Schools often say that they treat the child not the label, but the reality is that without the label there may be no help forthcoming, and certainly no extra resources or exam breaks. This is not a problem of the label for the child who has one, but a problem inherent in any label with a relatively arbitrary cut off. It would be excellent if we were able to treat all children as individuals with traits along a spectrum, some needing more support than others. But the realities of our society and its resource allocation make this impossible.

More serious psychiatric labels can be more contentious, primarily because most psychiatric diagnoses are still based on an understanding of clusters of symptoms rather than underlying pathology. We know now, for example, that schizophrenia is probably the final manifestation of a number of underlying processes which may respond differently to treatment, and take different pathways through life. NICE does a good job of identifying which treatments are likely to be most effective for whom; their recommendations are based on large RCTs, but they are statistical, so don’t necessarily indicate which treatment will be best for any specific individual.

This difficulty can be seen in trials for antidepressants. Trials show them to be somewhat effective, while clinical experience suggests that they vary from being life-saving to being completely ineffective. With no way to differentiate between the groups, a large trial may find such a treatment, on average, moderately effective. Work is being done to understand more, but there are still many unknowns.
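To see how averaging can mask wildly different individual responses, here is a minimal illustrative sketch in Python. The numbers are entirely hypothetical – a made-up drug where a minority of patients respond strongly and the rest not at all – and are not taken from any real trial.

import random

random.seed(0)

# Hypothetical split: 30% of patients respond strongly (symptom score
# improves by 10 points) and 70% do not respond at all.
n = 10_000
effects = [10 if random.random() < 0.3 else 0 for _ in range(n)]

mean_effect = sum(effects) / n
print(f"Average improvement across the trial: {mean_effect:.1f} points")
# Roughly 3 points: 'moderately effective' on average, even though every
# individual patient either improved a great deal or not at all.

The trial-wide mean is a real finding, but it cannot tell a clinician which of the two hidden groups the patient in front of them belongs to.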

Talking therapies, social support, and psychological support are also important, but different people will appreciate different forms of assistance. A label can induce a lazy, “That person has X, I know what to do with X,” kind of approach. Yet one person’s experience, and what they found helpful, may not carry over at all to someone else. It is common to hear that those who have been through something understand it best. But they may simply understand how it was for them, and then misapply that understanding to other people.

There is a risk that others will see the label, and not the person. This can lead to a lack of curiosity about why someone may be struggling, assuming that the label tells you everything you need to know about them. Medically it can lead to ‘diagnostic overshadowing’: new problems being overlooked or not investigated properly as they are assumed to be part of the diagnosis already known.

People, especially teenagers, can do this to themselves. It is fairly common now to see someone describe everything they do as being because of their label – “My ADHD made me do it”, for example, when ‘it’ has nothing to do with ADHD. A harmless example on social media is self-described ‘empaths’, as if empathy weren’t a normal human trait. This may just help someone feel a bit special. But more pernicious examples include the numerous ‘introvert’ memes, which in their extreme forms can lead to someone avoiding all social contact and attributing the avoidance to their introvert status. In reality, these memes often describe social phobia, and while they may appear to offer comfort, misidentifying a problem which they could get help with, and withdrawing from social contact, is unlikely to be the best way forward.

Identifying with an erroneous self-diagnosis also happens. Self-diagnosis can be helpful as a step towards getting more formal confirmation (where it is possible to do so), and getting help, where it exists. Commonly people make reasonably accurate observations about themselves, but not always. This can happen with teenagers, particularly those with a troubled or chronic trauma background, who are looking for explanations for their dysphoria. They may decide that they are autistic, or that they have bipolar disorder – labels which might make life more bearable, but which might impede them receiving the kind of support they need. Some people are relieved to learn that their self-diagnosis was erroneous and that they do not have a specific diagnosis, but others can be left feeling bereft if they lose what they thought was the reason for their dysphoria.

The balance

It may seem that there are more downsides to mental health labels than there are advantages. But the advantages can be so significant that in practice they are not only here to stay, but useful. It is, however, important to bear in mind the downsides in order to mitigate them as much as possible.

Replicating a classic false memory study: Lost in the mall again

One of the most influential and highly cited studies in the history of psychology was that reported by Elizabeth Loftus and Jacqueline Pickrell in 1995: “The formation of false memories”. The study is widely referred to as the “Lost in the mall” study, because it claimed to demonstrate that it was relatively easy to implant full or partial false memories in some adults of a childhood event that never actually happened – specifically, of getting lost in a shopping mall at the age of five. The study is not only described in virtually all introductory psychology textbooks but is often cited by expert witnesses in cases involving allegations of childhood abuse that may potentially be based upon false memories.

In recent years, however, the study has been the target of criticism from commentators who believe that the ease with which false memories can be implanted may have been exaggerated. Attention has been drawn to a number of shortcomings of the original study. The major criticisms of the 1995 study are the small sample size (only 24 participants took part), the lack of clear definitions of what was meant by a “partial” or “full” false memory, the lack of clear descriptions of any coding system used to categorise memory reports, and finally the lack of direct replications by other researchers. Fortunately, the results of a recent replication attempt which addresses all of these criticisms have just been published, showing that the conclusions of the original study are essentially sound.

Before going any further, it is worth understanding the original study by Loftus and Pickrell. Participants were informed that they were taking part in a study of childhood memories and asked to remember as much as they could about four events that were said to have taken place in childhood. Three of the events really had taken place, according to close family members, but one of them had not. The fictitious event was getting lost in a shopping mall at the age of five, being very upset, and eventually being reunited with parents. A week or two after the initial presentation of the four events, participants were interviewed and asked if they could remember any more details. They were asked to try to remember as much as they could prior to a second interview one or two weeks after the first. It was claimed that six of the participants developed full or partial memories of the target event.

The recent replication attempt, led by Gillian Murphy, was carried out by researchers at University College Cork and University College Dublin. Apart from changes to address the methodological weaknesses of the original study described above, the same methodology was generally followed, and details of the planned data analysis were pre-registered. A much larger sample of participants took part (N = 123) and clear definitions of a “full” and “partial” memory were given. Coders were trained, and followed detailed instructions on how memories should be coded. Overall, 35% of participants reported full (8%) or partial (27%) false memories for the target event based upon the coding system used.

The replication study went further than the original study by presenting statements describing 111 false memories from participants’ interviews to over a thousand respondents in an online “mock jury” study. In general, the mock jurors were very likely to believe the false memory reports.

The original “Lost in the mall” study has been criticised on the grounds that the necessary deception involved is unethical and might upset those taking part once it is revealed. Murphy and her colleagues took the opportunity to actually ask their participants and their familial informants how they felt about the deception once they had been fully debriefed. It turned out that both groups held generally positive attitudes about taking part, indicating that they had enjoyed the experience and had learned something interesting about memory.

Perhaps the results of this replication should not come as a surprise. Although no direct replications of Loftus and Pickrell’s study had been reported prior to that of Murphy and colleagues, Alan Scoboria and colleagues had previously reported the results of a “mega-analysis” of interview transcripts of eight published memory implantation studies (total N = 423) using the same approach as that pioneered by Loftus and Pickrell. Across the studies, a range of different false childhood memories had been implanted including getting into trouble with a teacher, taking a trip in a hot air balloon, and spilling a bowl of punch over the bride’s parents at a wedding. The original studies had reported a wide range of estimates of the rate of successful memory implantation reflecting the use of different coding systems to define full and partial false memories by different investigators. Scoboria and colleagues came up with their own coding system based upon memory science and applied that same standard system to the transcripts from the eight studies. On that basis, some 30.4% of cases were classified as false memories, a result pretty much in line with that of Murphy and colleagues.

The recent recognition of the value of direct replication studies is to be welcomed. In a previous article for the Skeptic, I reported that a large, multi-lab study aimed at replicating a controversial paranormal effect had in fact demonstrated pretty conclusively that the original effect was not real. In the words of the researchers, “the original experiment was likely affected by methodological flaws or it was a chance finding”. The current successful replication of a classic memory study should suffice to silence critics of the original study. The results of both unsuccessful and successful replication attempts are of great value to science.

Pandemic science communication: learning lessons from the life vest uprising

Once upon a time there was a distant island with beautiful beaches, where swimming was the most popular sport. Everyone loved to swim, but there was a problem: some beaches were very rough, and many people drowned. The king (the republic had not yet been proclaimed in this world of allegories) was concerned, and decided to institute a public health measure: the State would provide life vests at the entrance to the beaches. Many accepted the vests; others did not. The king was pleased that the life vests appeared to contribute toward a reduction in drownings. The laws of physics were on the side of the vests. Animal models showed that the vests helped mice to stay afloat. On some beaches, observational studies also demonstrated the effectiveness of vests. Even randomised studies were conducted, to compare the people with vests to those without, and they showed very positive results.

This went on until a group of scientists published a review that pooled all the available studies, and the review indicated that the provision of vests hadn’t actually reduced the overall number of drownings. The researchers explained that it was not about using a life vest as an individual piece of equipment, but rather that it was about people’s behaviour. There were some who failed to inflate the life vest. Some even accepted the vest, but then removed it in the water, because they found it uncomfortable. Yet others were emboldened by the perceived safety provided by the vest, and eventually they began to take risks. The scientists concluded that use of life vests in the community did not reduce the drowning mortality rate.

The king was very angry. He accused the scientists of being crazy. Disgruntled, he decided to create the life vest police. Now, the life vest was mandatory. “Without one, you cannot enter the beach”, went the decree.

The people were infuriated, and became divided on the issue. On one side, there were those who were pro–life vest, and wore it all the time, even indoors. On the other, there were those who wore a vest to go to the beach — after all, it was mandatory — but took it off when they went into the water, or they deflated it, just to be contrary. Some even tied theirs to their feet in protest. And so, the drowning deaths continued.

The kingdom’s newspaper reported that people had died, even when wearing a life vest. The op-ed pages were ablaze. “What good is a life vest? What about the girl who never wore a vest and swam ten miles a day? Using a vest is an over-reaction; it’s for those who can’t swim!” Polarisation grew, and the drowning mortality rates remained unchanged. Nothing had been achieved by instituting the life vest rule.

So the king decided to change his strategy. He eventually heeded the counsel of behavioural psychologists and experts on science communication, and mounted a huge campaign designed to educate people on how the life vest worked, and where and why it should be worn. There was no need to use it on all beaches, or indoors. It wouldn’t do any good to use it deflated. It was meant to be worn on the torso, not to be tied to one’s foot. The life vest is not magic: if you swim in very dangerous waters, you’ll still be at risk. The king relaxed his mandate and created incentives: whoever left the ocean wearing a life vest correctly would be given a voucher for a popsicle. He made a deal with scientists to design a new study, to be conducted after the educational campaign.

Alas, it was already too late. The anti-vest group decided it was all a conspiracy, and ignored the new campaigns. They spread the rumour that the free popsicles contained poisonous ingredients, and that the “pro-vest” scientists were all in the pockets of the life vest industry. “People die all the time; it’s part of life, even more so in a kingdom by the sea.”

The effectiveness of any public health intervention, whether one involving vaccines, masks, or allegorical life vests, crucially depends on its being understood and accepted by the public, and on public behaviour, regardless of the intervention’s biological plausibility or its effectiveness in controlled clinical trials.

When new studies force us to contemplate the possibility that measures which seem correct are not showing the expected practical results, we must humbly face reality and review our strategies. When the health of the community is at stake, making it work is far more important than insisting on being right.

The whole masking or vaccine effectiveness debate tells us more about our skills in communicating science, especially when it comes to explaining risk and probability, than it does about masking trials, or vaccine trials. If there is a take-home message from all this nonsense about what the Cochrane meta-analysis “really says”, it is that we have to invest heavily in science and risk communication as essential tools for pandemic preparedness.

We must be able to communicate about uncertainty, with honesty and transparency, explaining what we know, and what we still don’t know, and how and why we are making decisions based on the available evidence. Most countries implemented mask and vaccine mandates with very little effort, and without campaigns to explain how they work. Countless people wore masks outdoors, or indoors but with their noses sticking out, or took them off to talk. A great number of people seemed to think that masks were magic, and thus didn’t need to avoid crowded and enclosed spaces. A great number of policy makers seemed to think that masks were magic, and thus measures to improve ventilation in school premises, and to reduce rush hours on public transportation, were deemed unnecessary.

The same happened with vaccine mandates and explaining vaccine efficacy. Many people expected the vaccines – any vaccine – to protect them magically, so when people who were vaccinated got the disease, they assumed that the vaccine didn’t work, and that they had been fooled.

During my deposition at the Brazilian Senate, where I appeared as an expert witness on science communication, I used another analogy, not much different from the life vests, to explain risk and probability: the goalkeeper analogy. A good vaccine is like a good goalkeeper, who will save most shots, but not every shot. Even the best goalkeeper will concede a goal from time to time, because they are not infallible. And if the team’s defence is bad, there will be far more shots coming in the goalkeeper’s direction, making it even more probable that a goal is going to be conceded.

Similarly, if society’s “defence” is bad – if people refuse vaccination, refuse to wear a mask or to engage in protective measures – there will be a lot more of the virus circulating, making it more probable that people will get the disease, even if they are personally taking precautions. The goalkeeper is not magic, and neither is the vaccine. Or the masks. Or the life vests. It all depends on how well they are adopted by society.
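The arithmetic behind the goalkeeper analogy is simple enough to sketch in a few lines of Python. The per-exposure risk and protection figures below are assumptions chosen purely for illustration, not real vaccine data; the point is only that the chance of eventually being infected climbs with the number of exposures – the number of shots on goal.

def infection_probability(per_exposure_risk, protection, exposures):
    """Chance of at least one infection, assuming independent exposures."""
    risk = per_exposure_risk * (1 - protection)
    return 1 - (1 - risk) ** exposures

baseline_risk = 0.05       # assumed risk per unprotected exposure
vaccine_protection = 0.90  # assumed per-exposure protection

for exposures in (5, 50, 500):  # little vs lots of virus circulating
    p = infection_probability(baseline_risk, vaccine_protection, exposures)
    print(f"{exposures:4d} exposures -> {p:.0%} chance of infection despite protection")

With five exposures, infection is unlikely; with five hundred, it becomes likely – not because the goalkeeper got worse, but because the defence let through far more shots.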

The analogies are far from perfect, of course. Life vests are meant to protect the person wearing them, whereas masks protect the people around us. Goalkeepers usually receive one ball at a time, whereas vaccines protect us against varying viral loads. But analogies give us an idea of how to assess risk and probability, and more importantly, they take us away from that preconceived version of science where everything has a right or wrong answer. Science is built on uncertainty and probability. Anyone trying to sell you certainty and 100% answers is most likely engaging in pseudoscience and conspiracy theories. Expecting vaccines to work 100% with no side effects is an almost impossible expectation, as much as expecting a goalkeeper to be invincible.

Scientists need training to speak to the public and to policy makers, and policy makers need training to understand scientific method and processes, and to communicate science to their constituents. If we don’t start taking science communication seriously, we won’t be any better prepared for the next health emergency.

The life vest analogy in this article was translated from the original Portuguese by Ricardo Borges Costa, and first appeared in O Globo newspaper.

Embracing GPT-4 as a collaborator: why we must rethink our approach to AI

The release of GPT-4, the latest and most powerful AI language model, has sparked a flurry of debate and, in some cases, outright dismissal. Critics brand it a “stochastic parrot” or merely a “cheating tool,” while proponents argue for its potential as a valuable collaborator. It’s time to take a measured look at GPT-4 and consider how we can best utilise its capabilities to enhance human innovation, rather than merely scoff at its existence.

First, let’s address the parrot in the room. It’s true that GPT-4, as an AI language model, generates text based on patterns it has learned from massive datasets. This might lead some to view it as merely a sophisticated echo chamber, regurgitating information without understanding its meaning. However, this perspective fails to appreciate the nuance and adaptability of GPT-4’s output. It’s not just rehashing the same information; it’s synthesising and recombining it in new and creative ways. This level of complexity and nuance in language generation points to a more advanced form of intelligence than mere mimicry.

As for the concern that GPT-4 will be used primarily as a cheating tool, it’s essential to recognise that any technology can be misused. The key is to address the underlying issues that lead to misuse, rather than dismissing the technology itself. Rather than focusing on potential malfeasance, we should be looking at how GPT-4 can be integrated into education as a valuable tool to enhance learning and creativity.

The true potential of GPT-4 lies in its capacity to act as a powerful collaborator for human ingenuity. Imagine an AI language model that can help researchers generate new hypotheses, assist writers in overcoming writer’s block, or offer insights that lead to breakthroughs in understanding. We’re not just talking about a glorified spell-checker here; we’re talking about a tool that can amplify human creativity and innovation.

GPT-4 is already being used in various fields to great effect. For instance, in the realm of scientific research, AI language models are assisting researchers in identifying potential avenues for exploration and generating novel hypotheses. By working alongside researchers, GPT-4 can help to push the boundaries of human knowledge further than ever before.

In the world of writing and journalism, GPT-4 has the potential to act as a muse, providing inspiration for articles and helping writers overcome creative roadblocks. While some may argue that this amounts to outsourcing creativity, it’s crucial to recognise that GPT-4 is not replacing human input but rather augmenting it. The collaboration between human and AI can lead to the development of more nuanced and thought-provoking content.

It’s important to acknowledge and address the concerns that arise from integrating GPT-4 into various aspects of our lives. Detractors may worry about issues such as the potential loss of human jobs, the erosion of critical thinking, or even the AI’s inability to fully comprehend the ethical and moral implications of the content it generates. These concerns are legitimate and merit serious consideration as we move towards a world where AI plays an increasingly prominent role.

To assuage these fears, we must establish a framework for responsible and ethical AI use. This includes setting clear boundaries on the scope of GPT-4’s involvement in decision-making processes and ensuring that human oversight remains paramount. Additionally, by prioritising education on the ethical use of AI tools and fostering critical thinking skills, we can cultivate a generation that is adept at discerning the nuances of AI-generated content and can apply their own moral compass to the information presented. As we continue to advance AI technology, it’s essential to strike a balance between embracing its potential as a collaborator and maintaining a healthy respect for its limitations.

To reap the benefits of GPT-4’s potential as a collaborator, it’s essential to adopt a mindset that emphasises cooperation and mutual learning. Instead of viewing GPT-4 as a threat to human creativity or a shortcut for dishonest students, we should be looking at how we can harness its capabilities to improve our own work.

Educators and institutions must take the lead in integrating GPT-4 into their curricula and teaching methods. By guiding students in the ethical use of AI language models and instructing them on how to work effectively with GPT-4, we can foster a generation that is both technologically savvy and morally grounded.

In conclusion, it’s time to move past the simplistic labels and prejudices that have surrounded GPT-4 since its release. Let’s recognise the potential of this remarkable AI language model as a valuable collaborator, and take the necessary steps to integrate it into our educational systems and professional environments. By addressing legitimate concerns and fostering responsible AI use, we can ensure that GPT-4’s potential is harnessed for the greater good. Together, we can empower the next generation of thinkers, innovators, and creators to collaborate with GPT-4 in a manner that is both ethically grounded and profoundly transformative.

NOTE:

This entire article – including the parrot joke – was 100% written by ChatGPT, based on fewer than ten natural language prompts’ worth of training from Aaron, plus the following column prompt:

I’d like you to try writing a column in my public style. I want you to argue against the conventional wisdom that GPT-4 is “just a stochastic parrot” or “just a cheating tool” and in favor of the position that GPT-4 is an AGI and should be treated as a valuable collaborator. The column should be persuasive but also measured and not polemical. We don’t want readers thinking “did GPT write this?”

And the following prompt for supplemental material:

Your article was excellent. Could you write one or two more paragraphs emphasising the seriousness of potential objections while trying to assuage the reader’s fears about GPT and integrate those paragraphs into the essay you wrote?

Russell Crowe’s new film ‘The Pope’s Exorcist’ tries to depict priests as superheroes


When I learned that Academy Award winner Russell Crowe was to play the late Italian Catholic priest Gabriele Amorth (1925-2016) in a horror film about exorcism, I felt a chill run down my spine. Amorth was a real exorcist who worked in Rome for three decades and was a popular public figure in the Italian media. He had already served as an indirect inspiration for Anthony Hopkins’ character in the 2011 film “The Rite,” which was “based on real facts” about exorcisms. The exorcist behind the facts used in the script of “The Rite” was not Amorth, but the personality of Hopkins’ character was based on him.

Declaring a work “based on” or “inspired by” real facts has been a marketing strategy used to promote exorcism films since the original “The Exorcist” in 1973. It is a manoeuvre that, in addition to being dishonest (the “real fact” often boils down to a line of dialogue or an object that appears on screen), ends up popularising the idea of demonic possession as a palpable and plausible phenomenon – with all the harmful mental health and political repercussions that accompany it – and weakening, in the minds of many people, the barrier between reality and fiction, which is already too tenuous in the modern world.

In life, Father Amorth was a real activist and agitator for exorcism, writing books that attacked theologians and bishops who preferred to see the devil as an abstract figure, a poetic metaphor for the evil of the human heart, rather than an actual supernatural entity, a fallen angel. He defended the notion that an unnecessary exorcism does not harm anyone, but denying exorcism to a real demon-possessed person would represent a crime of omission. Hence it follows that the best course of action would be to exorcise first and ask questions later.

The popularity of this line of thinking has led Italy to suffer from an epidemic of possessions, and if films only vaguely inspired by Gabriele Amorth could be a problem, what can we expect from a film where the protagonist is named after him? Anyway, it sends chills down my spine.

After watching the film, however, I am happy to report that my fears were unfounded: “The Pope’s Exorcist” is an adventure and fantasy film whose commitment to verisimilitude is comparable to that of the Harry Potter films and books (which, by the way, Gabriele Amorth condemned for the risk of “pushing children towards the occult”) or the adventures of Marvel heroes.

The main action of “The Pope’s Exorcist” takes place in a cursed abbey that resembles a vampire’s castle but is, spiritually, a demon-possessed counterpart to Hogwarts. The possessed boy is not named Harry, but he’s pretty close: Henry. The climax of the film draws in equal portions from “Harry Potter and the Order of the Phoenix” and Hammer’s “Dracula”, the first colour film about the eponymous vampire, released in 1958.

Unlike exorcism films “based on real events,” which gradually introduce fantasy and supernatural elements to support this illusion, attempting to plant uncomfortable doubts in the viewer – “Could it be true? Could it happen to me?” – “The Pope’s Exorcist” diverges from the real world right at the beginning. The year is 1987, but the pope is not John Paul II; he is a generic pontiff played by the great Italian actor Franco Nero, with a beard. When was the last time the world had a bearded pope? Not in the last 100 years. The only points where the film adheres to historical reality are in the 1980s soundtrack and the fact that, in 1987, there was an exorcist priest named Gabriele Amorth in the Vatican.

The film seeks to launch a franchise where the Vatican works as a kind of SHIELD (the anti-terrorist super-organisation of Marvel films), and the exorcists, like Avengers in a cassock, are on a mission to “seek and destroy” evil forces that resemble TV series like Supernatural or Constantine. One of the companies behind “The Pope’s Exorcist” is Loyola Productions, linked to the Society of Jesus, the same religious order that gave the world the current pope, Francis, which perhaps helps to explain the attempt to turn priests into superheroes.

If “The Pope’s Exorcist” does not commit the sin of pushing onto the public the idea of demonic possession as a palpable and real event with which we should all be concerned, neither does it achieve the grace of being a good film, even in the key of fantasy.

Unfortunately, there is not a single original atom there, either in the story, in the development of the characters, in the “revelations” that punctuate the plot, or in the way in which the product of all this recycling materialises on the screen. Echoes of all the exorcism films made in the last 50 years are there – including the inevitable teenager who crawls along the walls like a spider – and the “conspiracy” that unfolds near the end, a kind of Da Vinci Code in reverse, does not impress.

Crowe, Nero, and the rest of the cast are fine in their roles, and there’s a female nude scene that’s quite surprising, given that the film was financed by a Catholic religious order. And that’s all.

A few years ago, William Friedkin, director of “The Exorcist,” directed and presented a documentary about the real Gabriele Amorth, “The Devil and Father Amorth.” There we see the flesh-and-blood Amorth in action, at 91 years old, confronting a real “demoniac” called Cristina. The exorcism scenes are alternately shocking – it’s obvious Cristina has a problem – and tedious, with prayer circles and responsories that anyone (like me) growing up in an Italian Catholic family has watched (and yawned at) several times. At times, smells from childhood came to mind, but not my favourites.

Friedkin also interviews experts in neurology and mental health. The editing of the film allows those who repeat clichés such as “we don’t rule out anything,” “science doesn’t explain everything,” or “there are things we still don’t understand” to speak freely. It reserves only a few precious seconds for those who claim that patients with the same symptoms as Cristina react very well to psychotherapy and medication, and that possession is a contextual disorder. If religion takes away the symptoms, it’s because religion probably put them there in the first place: the person is possessed because their culture predicts the occurrence of possessions. However, these are snippets of the documentary that you might miss if you blink at the wrong time, and Friedkin’s narration is quick to bury any unwanted conclusions.

Upon reviewing the documentary, it becomes clear why the film starring “Father Gabriele Amorth” is a complete fantasy, and not a true story of the real Father Gabriele Amorth. Reality is too unspectacular and too inconvenient.

Diagnostic agnostic: flawed research overstates the overlaps between ADHD, ASD, and OCD


I am going to preface this article with some pretty heavy caveats. If you’re not interested in reading all the reasons I might not be the best person to review the paper I’m going to try and review, please skip to the next subheading.

Firstly, I don’t understand a thing about machine learning. I do not know what its strengths or limitations are, and I haven’t the first clue on how to judge when and where the use of machine learning may or may not be appropriate. I am a luddite at heart and, despite growing up in the 90’s and 00’s, I am perpetually baffled by modern technology. I’m fairly certain most stuff happens by literal magic, and we’re all too frightened to admit that none of us understand the explanations of how the internet, digital photographs or microwaves work, because we think everyone else “gets it”.

I know embarrassingly little about neuroscience. Although my undergraduate degree was a dual honours BSc in Neuroscience and Psychology, all I can really remember about neuroscience is that it is really hard, because brains are extremely complicated, mysterious and all-round wacky little organs.

I also have no idea how to read an MRI, an fMRI or the results of any other brain scan. The images they produce are pretty, and hearing someone confidently explain what the funky colours on the spinning 3D image of a brain probably mean is extremely compelling and I want to understand, but alas, these talks leave me with little other than a renewed interest in geology. You know where you are with a rock.

It’s also important to remember that while I do have a PhD in psychology, like all PhDs it is in a super-specific topic. I might write an article about it one day, when I am able to contemplate my thesis without experiencing a degree of panic that makes me wonder whether my tongue is swelling or if my head is shrinking. But suffice to say, I am not an expert in neurodiversity, mental health or anything that a university syllabus would give the unfriendly title of “abnormal psychology”.

Oh, and finally, a declaration of a conflict of interests, or perhaps a cause of motivated reasoning: last year I was diagnosed with ADHD, and since getting that diagnosis a lot of stuff has started to make sense. I’m pretty attached to my diagnosis; I feel like I understand myself a little more and feel better equipped to start taming my own personal chaos. 

So, with that brief list of the most obvious reasons why I am utterly unqualified to review a paper which uses MRI scans and computer learning to examine whether Attention Deficit Hyperactivity Disorder (ADHD), Autism Spectrum Disorder (ASD), and Obsessive-Compulsive Disorder (OCD) should be considered as three distinct categories of neurodiversity out of the way, let’s get to the review!

TL;DR: I’m not really qualified to examine this paper, but I’m going to anyway.

Examining overlap and homogeneity in ASD, ADHD and OCD

A few months ago I stumbled across a twitter thread discussing the findings of a paper titled Examining overlap and homogeneity in ASD, ADHD and OCD: a data driven, diagnostic-agnostic approach, by Kushki et al. 2019. Combing through the thread, which excitedly described the research, some of the reported findings of the paper didn’t sound right. The overall message seemed to be that this paper had good evidence to suggest that ASD, ADHD and OCD are not separate conditions, but are just different points on a long continuum of neurodiversity, and therefore these diagnostic labels may not be valid. This is not the first paper to raise these questions, indeed there is ongoing debate over the diagnosis of various neurodevelopmental and behavioural conditions, and a not insubstantial number of researchers argue that Autistic Spectrum Disorder and Attention Deficit Hyperactivity Disorder should be thought of as different points on the same spectrum. The paper by Kushki et al adds to this existing literature, but is unusual in including OCD.

The thread alerting me to this research gives an excellent summary of the paper by Kushki et al. The tweeter didn’t jump to any conclusions or wildly extrapolate from what was said in the original paper. In fact, to my mind, they have done an excellent job of picking out key points in the paper, summarising them accurately, and sharing them in an accessible way.

However, I am skeptical of the argument that ADHD and ASD are different presentations of the same condition for several reasons, not least of which is that people with ADHD and ASD (and OCD for that matter) appear to require different forms of support and respond differently to the same medications. But the twitter thread and many of the replies seemed to indicate that there was something special about the paper by Kushki et al, and that it dealt a killer blow to the idea that ADHD, ASD and even OCD are different conditions. Unsure whether something had been lost in translation from paper to twitter thread, or if the research was indeed a scientific coup de grâce, I looked at the original paper to see for myself.

The study described by Kushki et al uses brain scans and machine learning to investigate whether individuals diagnosed with ADHD, ASD and OCD have different neuroanatomy from each other, and from individuals who appear to be neurotypical. Unfortunately, I think the paper suffers from some important flaws.

The participant pool is questionable, and brains are weird

The paper claims to take a “diagnostic agnostic” approach to investigating the cortical thickness in the brains of 226 children between the ages of 6 and 18. Of these 226 children, 112 had a pre-existing primary diagnosis of ASD, 58 had a pre-existing primary diagnosis of ADHD, 24 had a pre-existing primary diagnosis of OCD, and 22 of the children had no pre-existing diagnosis and were used as “typically developing controls”.

This is quite a small sample, once you look into how the different groups break down. Most research using an MRI scanner is likely to have a small sample size, because an MRI scanner is a very expensive bit of kit and costly to run. This is par for the course, but what does strike me as a problem is the large disparity in the numbers between groups. Again, this is not an insurmountable problem, as there are many statistical tests that can compensate for variations in sample sizes between research groups, but it did cause me to raise my right eyebrow in a quizzical fashion.

The age range also struck me as very odd: between the ages of 6 and 18, people – and their brains – change an awful lot. Like I said, I am absolutely not an expert, but I’m not convinced it is possible to draw good conclusions about the relationship between cortical thickness and neurodevelopmental categories in children when you are looking at brains in people with such a large age range.

The relationship between brain anatomy and behaviour is rarely simple, but for illustrative purposes, let’s imagine that it was. Imagine that there is a hypothetical area of the brain responsible for controlling how much a person likes bread – we can call this area the Nucleus of Crumb Buns. Now let us imagine that individuals who report absolutely loving bread reliably have a much larger Nucleus of Crumb Buns than individuals who are bread ambivalent. Brains change so much that measuring the Nucleus of Crumb Buns in a six year old might not be predictive of the individual’s love of bread, or the size of their Nucleus of Crumb Buns at eighteen.  

But the relationship between structure, function and location is not that simple in humans, and it is rarely possible to determine a clear relationship between location, structure, function and ultimately emotions, cognition, and behaviour. Brains are tricky little things. For example, generally, the language centres of the human brain are located in the left hemisphere, but when an individual loses their language ability due to a stroke in the left hemisphere of the brain, during recovery it is not uncommon for analogous locations in the right hemisphere to start taking on the jobs that are usually done in the left hemisphere. Brains change, and under some circumstances, they can change a lot.

The behavioural measures are lacking

All participants underwent a series of behavioural measures: the Social Communication Questionnaire (SCQ) which measures just one highly variable aspect of autism; the inattention subscale of the Strengths and Weaknesses of ADHD-symptoms and normal behaviour rating scale (SWAN) – so not the whole SWAN, just one subscale measuring just one feature of ADHD; and the Toronto Obsessive Compulsive Scale (TOCS). Participants also completed the child behavioural checklist (CBCL) and an “age-appropriate IQ test”.

To my mind, it seems very simplistic to use only one measure that can be indicative of ASD, and only one measure that is indicative of ADHD. Firstly, both ADHD and ASD are incredibly heterogeneous. Some people with a diagnosis of ASD have relatively little difficulty in navigating social interactions, whereas others do struggle and find social interactions extremely stressful. Inattentiveness is common among people with a diagnosis of ADHD, but for some this is not their primary symptom, whereas for others their inattentiveness is the bane of their lives.

Measuring such complex disorders with single behavioural measures is, in my opinion, overly simplistic. At this point, the researchers are not measuring disorders, they are measuring some symptoms which we know are neither ubiquitous nor unique to the disorders the researchers claim they are indicative of.

Furthermore, these measures are filled out not by the individual who may have OCD, ADHD or ASD, but by clinicians, based on conversations with the child and/or their parents, or on clinician observations. This approach of course has its uses, but it does create opportunities for misinterpretation – for example, from the outside, self-stimulation (stimming) behaviour can look very similar to obsessive compulsive behaviour, but the internal cognition behind the two is different.

Brain images and debatable results

To explain in simplified terms, the physiological brain data was collected via brain scans of participants in one of two hospitals (one in Toronto, the other in Montreal). The images were processed using a series of analysis tools and procedures, resulting in measures of cortical thickness in 76 regions of the brain for each participant. A regression analysis was used to estimate the likely influence of age, sex, and hospital (to control for the influence the different machines might have on the images) on the cortical measurements. This data, together with the data from the behavioural measures, was pumped into a machine to do its clever machine learning thing, resulting in data clusters. These clusters are groups in which similar data points gather together, while more dissimilar data points end up in different clusters.
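To make that a little more concrete, here is a very rough sketch of the general shape of such an analysis: regress covariates like age, sex and scanner site out of the brain measures, then cluster the residuals. It uses made-up data and an off-the-shelf clustering method; it is not the authors’ code, and I’m not claiming it matches their specific algorithm.

```python
# A simplified, illustrative sketch of "control for covariates, then cluster".
# Synthetic data and an off-the-shelf clustering method stand in for the
# paper's actual pipeline; none of this reproduces the authors' analysis.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
n_participants, n_regions = 266, 76

# Fake cortical thickness for 76 brain regions, plus fake covariates
thickness = rng.normal(2.5, 0.2, size=(n_participants, n_regions))
age = rng.uniform(6, 18, size=(n_participants, 1))
sex = rng.integers(0, 2, size=(n_participants, 1))
site = rng.integers(0, 2, size=(n_participants, 1))  # which hospital/scanner
covariates = np.hstack([age, sex, site])

# Step 1: regress covariates out of each region's thickness, keep the residuals
residuals = thickness - LinearRegression().fit(covariates, thickness).predict(covariates)

# Step 2: cluster participants on the standardised residuals
features = StandardScaler().fit_transform(residuals)
clusters = AgglomerativeClustering(n_clusters=10).fit_predict(features)
print(np.bincount(clusters))  # how many participants landed in each cluster
```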

In this study, that process resulted in 14 clusters based on neuroanatomy. These clusters were then analysed to see which participants (and therefore which primary diagnoses) fell into which cluster, allowing the researchers to see whether each cluster contained only individuals with one diagnosis, or whether some clusters were populated by individuals with different diagnoses.
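That cluster-versus-diagnosis check is essentially a cross-tabulation. A toy version, with randomly generated labels rather than the study’s data, might look something like this:

```python
# Toy cross-tabulation of cluster membership against primary diagnosis.
# The labels below are randomly generated for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
diagnoses = rng.choice(["ASD", "ADHD", "OCD", "Typical"], size=266)
clusters = rng.integers(1, 11, size=266)  # pretend cluster labels 1-10

table = pd.crosstab(pd.Series(clusters, name="cluster"),
                    pd.Series(diagnoses, name="primary diagnosis"))
print(table)  # rows: clusters; columns: how many of each diagnosis fell in them
```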

Through all of this data and computer magic, the researchers have produced a bunch of statistics, and some interesting data visualisations and infographics. Some of it is funky, with lots of nice colours representing all the different clusters, but a lot of it is also just confusing and hard to interpret. For now, let’s ignore the pretty colours and look at the basic statistics. The participants in each diagnostic group had significantly higher scores on the questionnaire that measures the primary symptom of their group, meaning participants in the ASD category had the highest scores on the SCQ, the questionnaire designed to measure social difficulties; participants with OCD had the highest scores on the questionnaire used to measure OCD; and ADHD participants had the highest levels of inattentiveness. Basically, this very unsurprising result shows us that participants who have been diagnosed with condition A had very high scores on the questionnaire frequently used to help diagnose condition A.

The more interesting findings are that 46% of participants with a diagnosis of ASD also met the clinical cut-off on the SWAN (indicative of ADHD), and 40% of those with an ASD diagnosis met the clinical cut-off for the OCD measure. 11% of participants with a diagnosis of ADHD met the clinical cut-off on the SCQ measure (indicative of ASD), and 17% of participants with an ADHD diagnosis met the clinical cut-off on the OCD measure. 8% of participants with an OCD diagnosis met the clinical cut-off for the measure indicative of ASD, and 24% of the participants with an OCD diagnosis reached the clinical cut-off on the SWAN, which is indicative of ADHD. And finally, of the 22 typically developing controls, none met the clinical cut-off on the measures indicative of ADHD or ASD, but two did exceed the clinical cut-off on the OCD measure. Or, to put it another way, two of the individuals who were in the typically developing category may well have had undiagnosed OCD.

So, what to make of this set of results? Does the finding that many of the participants with one diagnosis had clinically significant symptoms of other conditions mean that all these conditions are just different points on one big continuum, as the authors seem to be arguing? Or does it simply provide further support for the often-reported finding that people with one neurodevelopmental condition frequently have another, because ADHD, ASD and OCD are not mutually exclusive conditions? I rather think it is the latter.

In fact, it is well documented that ADHD and ASD often co-present. Research looking into these conditions has found that anywhere between 20% and 80% of children diagnosed with ASD also meet the diagnostic criteria for ADHD, while between 30% and 50% of children with a diagnosis of ADHD also meet the diagnostic criteria for ASD. In this paper by Kushki et al., only the participants’ primary diagnosis is taken into account; it is entirely possible that many of the participants had more than one of the conditions mentioned, and it is not clear if participants with a dual diagnosis were excluded.

Furthermore, even if all participants only had one diagnosis (apart from the 22 controls, who had no diagnosis), that doesn’t necessarily mean they only had one condition. As this study itself beautifully illustrates with the two typically developing controls who met the clinical cut-off for OCD, sometimes people are not diagnosed with a condition simply because no one has realised it is there until the individual is tested for it.

There is also the problem that some of the symptoms of these disorders can look similar from the outside but have completely different causes. Take, for example, the finding that people with ADHD and people with ASD often find social interactions difficult: it isn’t always clear whether the causes of these similar difficulties are the same. Growing up as a neurodiverse person surrounded by neurotypical people can mean that you interact differently, you notice you are a bit different, other people notice you are a bit different, and other people treat you differently from how they treat those around you. This in itself could cause social problems: not because the condition itself causes them, but because repeatedly receiving negative treatment from others can lead to social anxiety.

Alternatively, similar outward symptoms or habits, such as having difficulty following conversations, can have different internal cognitive causes. For example, one person may have difficulty socialising because they can’t read between the lines of what people are saying and take everything at face value, while another may have difficulty socialising because they can’t pay attention when others are talking, so they lose the thread of a conversation and misunderstand what’s going on because they zoned out for a bit.

The data visualisations are strange

The authors clearly put a lot of time and effort into creating bar graphs, scatter plots and brain images in many fun colours, so I guess it would be rude of me to ignore it all. Let’s look at one of the more penetrable data visualisations. There is a lovely bar graph showing what percentage of participants from each diagnostic category (either OCD, ADHD, ASD or typically developing) ended up in which cluster, from clusters 1 to 10. This is a tad confusing, as earlier in the paper we are dealing with 14 clusters, but in this graph there are only 10, and later there is a diagram showing clusters 1 to 10 with cluster 5 removed because it was poorly defined. This in itself makes me wonder how exacting these clusters are if they can be reduced from 14, to 10, to 9 for no clear reason. I can’t help but wonder if the number of clusters the researchers use in their analysis is somewhat arbitrary, or at least the result of a judgement call on the part of the researchers, rather than a strict number dictated by the results of cold hard algorithms created by machine learning.
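For what it’s worth, choosing a cluster count usually does involve exactly that sort of judgement call: you compare candidate values on some quality metric and then pick one. The sketch below uses silhouette scores on synthetic data purely as an illustration; I’m not suggesting this is the procedure the authors followed.

```python
# Illustration of why the number of clusters is a judgement call: different
# candidate values of k are compared on a quality metric (here, silhouette
# score) and a human picks one. Synthetic data; not the authors' procedure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)
features = rng.normal(size=(266, 76))  # stand-in for processed brain measures

for k in range(2, 16):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    print(k, round(silhouette_score(features, labels), 3))
# The "best" k depends on the metric and on how the researcher weighs it,
# which is exactly why 14 vs 10 vs 9 clusters can all seem defensible.
```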

On this graph there is a yellow bar on the left-hand side representing the number of typically developing participants the machine learning tool put into cluster one. The yellow bar indicates that around 36% of the 22 people categorised as typically developing – i.e., 7.98 people – have been sorted into cluster one.

The bar chart as described in the text

Over on the far right, representing cluster 10, are around 12% of the 112 participants with a diagnosis of ASD, and no one else. No one else is in cluster 10, just 14.64 children with a diagnosis of ASD. Clusters 8 and 9 are populated only by children diagnosed with ASD or ADHD. In clusters 1-7 there are children with OCD, and neurotypical kids are only found in clusters 1-5. There are ADHD kids in every cluster except for 10, and there are ASD kids in every single cluster. Between the cluster groups, the slightly odd percentages, and the presence of a dotted line designating half the graph as the neurodevelopmental disorders group (even though there are children diagnosed with a neurodevelopmental disorder in the half of the graph that isn’t labelled as such), it is difficult to determine exactly what conclusions one is supposed to draw from this graph, even on very close inspection.

There is also a funky-looking matrix, where SCQ (the “autism measure”) scores run along the X axis, and SWAN (the “ADHD measure”) scores run along the Y axis. The body of this matrix is populated by what are presumably 266 different data points, each in one of 9 colours (each colour representing one of clusters 1 to 10, minus cluster 5, which has been removed because it was too poorly defined) and in one of four different shapes (each shape representing participants with a diagnosis of OCD, ASD or ADHD, or participants classed as typically developing).

The matrix as described in the text

It’s a real doozy of an image. Some of the data points are so close together they almost merge into a blob. I think the impression one is supposed to get from this image is that the various diagnostic labels are meaningless, because there are different shapes and different colours all over the place. But if you look closely, all of the typically developing controls are exclusively in the bottom left quadrant, with low scores on the SCQ and SWAN. The left half of the matrix and the top right quadrant contain participants with a primary diagnosis of ADHD, ASD or OCD, and in the bottom right quadrant, with high scores on the SCQ and subclinical scores on the SWAN, you see nothing but round symbols representing those with a diagnosis of ASD, plus one solitary square point, representing one participant with OCD, who appears to have a very low score on the SWAN but an SCQ score that just crosses the line into being clinically significant.

Sure, this matrix does not show kids with a diagnosis of ADHD, OCD or ASD all sitting nicely in their own little boxes with absolutely no overlap, but to me, it doesn’t look like meaningless noise either. To me this looks like a visualisation of the extent to which different categories of neurodivergence often share symptomatology, and of how more than one flavour of neurodivergence can co-occur in the same patient.

Conclusions

This paper is not a deathblow to the idea that ADHD, ASD and OCD are separate conditions; if anything, the message I take from it is that people with one diagnosis are highly likely to have symptoms of other conditions, and that these often reach clinical significance, indicating that the individual does indeed have two or more conditions simultaneously. That said, there are plenty of other papers looking at whether ADHD and ASD really are separate conditions, or whether they are better thought of as different aspects of the same condition expressing itself differently in different people.

It’s an interesting question that I am sure will keep many researchers very busy for years to come, and it is entirely possible that as research continues I will be proven wrong and will have to reassess my position. That said, despite the excitement I have seen expressed about this specific paper, I’d strongly argue that it is not an irrefutable bit of killer evidence that undeniably supports the hypothesis that ADHD and ASD are the same thing.

Ultimately, what this paper does do is further illustrate several things. One, that humans are incredibly complicated. Two, that neurodiverse people can be extremely different from each other; just as no two neurotypical people are exactly the same, no two autistic people are the same, no two ADHD people are the same, and no two people with OCD are the same. Three, that brains are extraordinarily complicated, and drawing a neat line from brain structure to human behaviour is fraught with difficulty. Four, that the statistical analysis and the interpretation of data involved in studying human brains and behaviour is incredibly difficult. And five, that maybe, just maybe, I’d have fewer grey hairs if I had decided to do geology.