Once upon a time there was a distant island with beautiful beaches, where swimming was the most popular sport. Everyone loved to swim, but there was a problem: some beaches were very rough, and many people drowned. The king (the republic had not yet been proclaimed in this world of allegories) was concerned, and decided to institute a public health measure: the State would provide life vests at the entrance to the beaches. Many accepted the vests; others did not. The king was pleased that the life vests appeared to contribute toward a reduction in drownings. The laws of physics were on the side of the vests. Animal models showed that the vests helped mice to stay afloat. On some beaches, observational studies also demonstrated the effectiveness of vests. Even randomised studies were conducted, to compare the people with vests to those without, and they showed very positive results.
This went on until a group of scientists published a review that pooled all the available studies, and the review indicated that the provision of vests hadn’t actually reduced the overall number of drownings. The researchers explained that it was not about using a life vest as an individual piece of equipment, but rather that it was about people’s behaviour. There were some who failed to inflate the life vest. Some even accepted the vest, but then removed it in the water, because they found it uncomfortable. Yet others were emboldened by the perceived safety provided by the vest, and eventually they began to take risks. The scientists concluded that use of life vests in the community did not reduce the drowning mortality rate.
The king was very angry. He accused the scientists of being crazy. Disgruntled, he decided to create the life vest police. Now, the life vest was mandatory. “Without one, you cannot enter the beach”, went the decree.
The people were infuriated, and became divided on the issue. On one side, there were those who were pro–life vest, and wore it all the time, even indoors. On the other, there were those who wore a vest to go to the beach — after all, it was mandatory — but took it off when they went into the water, or they deflated it, just to be contrary. Some even tied theirs to their feet in protest. And so, the drowning deaths continued.
The kingdom’s newspaper reported that people had died, even when wearing a life vest. The op-ed pages were ablaze. “What good is a life vest? What about the girl who never wore a vest and swam ten miles a day? Using a vest is an over-reaction; it’s for those who can’t swim!” Polarisation grew, and the drowning mortality rates remained unchanged. Nothing had been achieved by instituting the life vest rule.
So the king decided to change his strategy. He eventually heeded the counsel of behavioural psychologists and experts on science communication, and mounted a huge campaign designed to educate people on how the life vest worked, and where and why it should be worn. There was no need to use it on all beaches, or indoors. It wouldn’t do any good to use it deflated. It was meant to be worn on the torso, not to be tied to one’s foot. The life vest was not magic: if you swam in very dangerous waters, you would still be at risk. The king relaxed his mandate and created incentives: whoever left the ocean wearing a life vest correctly would be given a voucher for a popsicle. He made a deal with scientists to design a new study, to be conducted after the educational campaign.
Alas, it was already too late. The anti-vest group decided it was all a conspiracy, and ignored the new campaigns. They spread the rumour that the free popsicles contained poisonous ingredients, and that the “pro-vest” scientists were all in the pockets of the life vest industry. “People die all the time; it’s part of life, even more so in a kingdom by the sea.”
The effectiveness of any public health intervention, whether one involving vaccines, masks, or allegorical life vests, crucially depends on its being understood and accepted by the public, and on public behaviour, regardless of the intervention’s biological plausibility or its effectiveness in controlled clinical trials.
When new studies force us to contemplate the possibility that measures which seem correct are not showing the expected practical results, we must humbly face reality and review our strategies. When the health of the community is at stake, making it work is far more important than insisting on being right.
The whole masking or vaccine effectiveness debate tells us more about our skills in communicating science, especially when it comes to explaining risk and probability, than it does about masking trials, or vaccine trials. If there is a take-home message from all this nonsense about what the Cochrane meta-analysis “really says”, it is that we have to invest heavily in science and risk communication as essential tools for pandemic preparedness.
We must be able to communicate about uncertainty, with honesty and transparency, explaining what we know, what we still don’t know, and how and why we are making decisions based on the available evidence. Most countries implemented mask and vaccine mandates with very little communication effort, and without campaigns to explain how they work. Countless people wore masks outdoors, or indoors but with their noses sticking out, or took them off to talk. A great number of people seemed to think that masks were magic, and thus didn’t need to avoid crowded and enclosed spaces. A great number of policy makers seemed to think that masks were magic, and thus measures to improve ventilation in school premises, and to reduce rush hours on public transportation, were deemed unnecessary.
The same happened with vaccine mandates and explaining vaccine efficacy. Many people expected the vaccines – any vaccine – to protect them magically, so when people who were vaccinated got the disease, they assumed that the vaccine didn’t work, and that they had been fooled.
During my deposition at the Brazilian Senate, where I testified as an expert witness on science communication, I used another analogy, not much different from the life vests, to explain risk and probability: the goalkeeper analogy. A good vaccine is like a good goalkeeper, who will save most shots, but not every shot. Even the best goalkeeper will concede a goal from time to time, because they are not infallible. And if the team’s defence is bad, there will be far more shots coming in the goalkeeper’s direction, making it even more probable that a goal is going to be conceded.
Similarly, if society’s “defence” is bad – if people refuse vaccination, refuse to wear a mask or to engage in protective measures – there will be a lot more of the virus circulating, making it more probable that people will get the disease, even if they are personally taking precautions. The goalkeeper is not magic, and neither is the vaccine. Or the masks. Or the life vests. It all depends on how well they are adopted by society.
The analogies are far from perfect, of course. Life vests are meant to protect the person wearing them, whereas masks protect people around us. Goalkeepers usually receive one ball at a time, whereas vaccines must protect us against varying amounts of virus, or viral loads. But analogies give us an idea of how to assess risk and probability, and more importantly, they take us away from that preconceived version of science, where everything has a right or wrong answer. Science is built on uncertainty and probability. Anyone trying to sell you certainty and 100% answers is most likely engaging in pseudoscience and conspiracy theories. Expecting vaccines to work 100% with no side effects is an almost impossible expectation, as much as expecting a goalkeeper to be invincible.
Scientists need training to speak to the public and to policy makers, and policy makers need training to understand scientific method and processes, and to communicate science to their constituents. If we don’t start taking science communication seriously, we won’t be any better prepared for the next health emergency.
The life vest analogy in this article was translated from the original Portuguese by Ricardo Borges Costa, and first appeared in O Globo newspaper.
The release of GPT-4, the latest and most powerful AI language model, has sparked a flurry of debate and, in some cases, outright dismissal. Critics brand it a “stochastic parrot” or merely a “cheating tool,” while proponents argue for its potential as a valuable collaborator. It’s time to take a measured look at GPT-4 and consider how we can best utilise its capabilities to enhance human innovation, rather than merely scoff at its existence.
First, let’s address the parrot in the room. It’s true that GPT-4, as an AI language model, generates text based on patterns it has learned from massive datasets. This might lead some to view it as merely a sophisticated echo chamber, regurgitating information without understanding its meaning. However, this perspective fails to appreciate the nuance and adaptability of GPT-4’s output. It’s not just rehashing the same information; it’s synthesising and recombining it in new and creative ways. This level of complexity and nuance in language generation points to a more advanced form of intelligence than mere mimicry.
As for the concern that GPT-4 will be used primarily as a cheating tool, it’s essential to recognise that any technology can be misused. The key is to address the underlying issues that lead to misuse, rather than dismissing the technology itself. Rather than focusing on potential malfeasance, we should be looking at how GPT-4 can be integrated into education as a valuable tool to enhance learning and creativity.
The true potential of GPT-4 lies in its capacity to act as a powerful collaborator for human ingenuity. Imagine an AI language model that can help researchers generate new hypotheses, assist writers in overcoming writer’s block, or offer insights that lead to breakthroughs in understanding. We’re not just talking about a glorified spell-checker here; we’re talking about a tool that can amplify human creativity and innovation.
GPT-4 is already being used in various fields to great effect. For instance, in the realm of scientific research, AI language models are assisting researchers in identifying potential avenues for exploration and generating novel hypotheses. By working alongside researchers, GPT-4 can help to push the boundaries of human knowledge further than ever before.
In the world of writing and journalism, GPT-4 has the potential to act as a muse, providing inspiration for articles and helping writers overcome creative roadblocks. While some may argue that this amounts to outsourcing creativity, it’s crucial to recognise that GPT-4 is not replacing human input but rather augmenting it. The collaboration between human and AI can lead to the development of more nuanced and thought-provoking content.
It’s important to acknowledge and address the concerns that arise from integrating GPT-4 into various aspects of our lives. Detractors may worry about issues such as the potential loss of human jobs, the erosion of critical thinking, or even the AI’s inability to fully comprehend the ethical and moral implications of the content it generates. These concerns are legitimate and merit serious consideration as we move towards a world where AI plays an increasingly prominent role.
To assuage these fears, we must establish a framework for responsible and ethical AI use. This includes setting clear boundaries on the scope of GPT-4’s involvement in decision-making processes and ensuring that human oversight remains paramount. Additionally, by prioritising education on the ethical use of AI tools and fostering critical thinking skills, we can cultivate a generation that is adept at discerning the nuances of AI-generated content and can apply their own moral compass to the information presented. As we continue to advance AI technology, it’s essential to strike a balance between embracing its potential as a collaborator and maintaining a healthy respect for its limitations.
To reap the benefits of GPT-4’s potential as a collaborator, it’s essential to adopt a mindset that emphasises cooperation and mutual learning. Instead of viewing GPT-4 as a threat to human creativity or a shortcut for dishonest students, we should be looking at how we can harness its capabilities to improve our own work.
Educators and institutions must take the lead in integrating GPT-4 into their curricula and teaching methods. By guiding students in the ethical use of AI language models and instructing them on how to work effectively with GPT-4, we can foster a generation that is both technologically savvy and morally grounded.
In conclusion, it’s time to move past the simplistic labels and prejudices that have surrounded GPT-4 since its release. Let’s recognise the potential of this remarkable AI language model as a valuable collaborator, and take the necessary steps to integrate it into our educational systems and professional environments. By addressing legitimate concerns and fostering responsible AI use, we can ensure that GPT-4’s potential is harnessed for the greater good. Together, we can empower the next generation of thinkers, innovators, and creators to collaborate with GPT-4 in a manner that is both ethically grounded and profoundly transformative.
NOTE:
This entire article – including the parrot joke – was 100% written by ChatGPT, based on fewer than 10 natural-language prompts’ worth of training from Aaron, plus the following column prompt:
I’d like you to try writing a column in my public style. I want you to argue against the conventional wisdom that GPT-4 is “just a stochastic parrot” or “just a cheating tool” and in favor of the position that GPT-4 is an AGI and should be treated as a valuable collaborator. The column should be persuasive but also measured and not polemical. We don’t want readers thinking “did GPT write this?”
And the following prompt for supplemental material:
Your article was excellent. Could you write one or two more paragraphs emphasising the seriousness of potential objections while trying to assuage the reader’s fears about GPT and integrate those paragraphs into the essay you wrote?
When I learned that Academy Award winner Russell Crowe was to play the late Italian Catholic priest Gabriele Amorth (1925-2016) in a horror film about exorcism, I felt a chill run down my spine. Amorth was a real exorcist who worked in Rome for three decades and was a popular public figure in the Italian media. He had already served as an indirect inspiration for Anthony Hopkins’ character in the 2011 film “The Rite,” which was “based on real facts” about exorcisms. The exorcist behind the facts used in the script of “The Rite” was not Amorth, but the personality of Hopkins’ character was based on him.
Declaring a work “based on” or “inspired by” real facts is a marketing strategy used to promote exorcism films since the original “The Exorcist” in 1973. It is a manoeuvre that, in addition to being dishonest (the “real fact” often boils down to a line of dialogue or an object that appears on screen), ends up popularising the idea of demonic possession as a palpable and plausible phenomenon – with all the harmful mental health and political repercussions that accompany it – and weakening, in the minds of many people, the barrier between reality and fiction, which is already too tenuous in the modern world.
In life, Father Amorth was a real activist and agitator for exorcism, writing books that attacked theologians and bishops who preferred to see the devil as an abstract figure, a poetic metaphor for the evil of the human heart, rather than an actual supernatural entity, a fallen angel. He defended the notion that an unnecessary exorcism does not harm anyone, but denying exorcism to a real demon-possessed person would represent a crime of omission. Hence it follows that the best course of action would be to exorcise first and ask questions later.
The popularity of this line of thinking has led Italy to suffer from an epidemic of possessions, and if films only vaguely inspired by Gabriele Amorth could be a problem, what can we expect from a film where the protagonist is named after him? Anyway, it sends chills down my spine.
After watching the film, however, I am happy to report that my fears were unfounded: “The Pope’s Exorcist” is an adventure and fantasy film whose commitment to verisimilitude is comparable to that of the Harry Potter films and books (which, by the way, Gabriele Amorth condemned for the risk of “pushing children towards the occult”) or the adventures of Marvel heroes.
The main action of “The Pope’s Exorcist” takes place in a cursed abbey that resembles a vampire’s castle but is, spiritually, a demon-possessed counterpart to Hogwarts. The possessed boy is not named Harry, but he’s pretty close: Henry. The climax of the film draws in equal parts on “Harry Potter and the Order of the Phoenix” and Hammer’s “Dracula”, the first colour film about the eponymous vampire, released in 1958.
Unlike exorcism films “based on real events,” which gradually introduce fantasy and supernatural elements to support this illusion, attempting to plant uncomfortable doubts in the viewer – “Could it be true? Could it happen to me?” – “The Pope’s Exorcist” diverges from the real world right at the beginning. The year is 1987, but the pope is not John Paul II; he is a generic pontiff played by the great Italian actor Franco Nero, with a beard. When was the last time the world had a bearded pope? Not in the last 100 years. The only points where the film adheres to historical reality are in the 1980s soundtrack and the fact that, in 1987, there was an exorcist priest named Gabriele Amorth in the Vatican.
The film seeks to launch a franchise where the Vatican works as a kind of SHIELD (the anti-terrorist super-organisation of Marvel films), and the exorcists, like Avengers in a cassock, are on a mission to “seek and destroy” evil forces that resemble TV series like Supernatural or Constantine. One of the companies behind “The Pope’s Exorcist” is Loyola Productions, linked to the Society of Jesus, the same religious order that gave the world the current pope, Francis, which perhaps helps to explain the attempt to turn priests into superheroes.
If “The Pope’s Exorcist” does not commit the sin of pushing onto the public the idea of demonic possession as a palpable and real event with which we should all be concerned, neither does it achieve the grace of being a good film, even in the key of fantasy.
Unfortunately, there is not a single original atom there, either in the story, in the development of the characters, in the “revelations” that punctuate the plot, or in the way in which the product of all this recycling materialises on the screen. Echoes of all the exorcism films made in the last 50 years are there – including the inevitable teenager who crawls along the walls like a spider – and the “conspiracy” that unfolds near the end, a kind of Da Vinci Code in reverse, does not impress.
Crowe, Nero, and the rest of the cast are fine in their roles, and there’s a female nude scene that’s quite surprising, given that the film was financed by a Catholic religious order. And that’s all.
A few years ago, William Friedkin, director of “The Exorcist,” directed and presented a documentary about the real Gabriele Amorth, “The Devil and Father Amorth.” There we see the flesh-and-blood Amorth in action, at 91 years old, confronting a real “demoniac” called Cristina. The exorcism scenes are alternately shocking – it’s obvious Cristina has a problem – and tedious, with prayer circles and responsories that anyone (like me) growing up in an Italian Catholic family has watched (and yawned at) several times. At times, smells from childhood came to mind, but not my favourites.
Friedkin also interviews experts in neurology and mental health. The editing of the film allows those who repeat clichés such as “we don’t rule out anything,” “science doesn’t explain everything,” or “there are things we still don’t understand” to speak freely. It reserves only a few precious seconds for those who claim that patients with the same symptoms as Cristina react very well to psychotherapy and medication, and that possession is a contextual disorder. If religion takes away the symptoms, it’s because religion probably put them there in the first place: the person is possessed because their culture predicts the occurrence of possessions. However, these are snippets of the documentary that you might miss if you blink at the wrong time, and Friedkin’s narration is quick to bury any unwanted conclusions.
Upon reviewing the documentary, it becomes clear why the film starring “Father Gabriele Amorth” is a complete fantasy, and not a true story of the real Father Gabriele Amorth. Reality is too unspectacular and too inconvenient.
I am going to preface this article with some pretty heavy caveats. If you’re not interested in reading all the reasons I might not be the best person to review the paper I’m going to try and review, please skip to the next subheading.
Firstly, I don’t understand a thing about machine learning. I do not know what its strengths or limitations are, and I haven’t the first clue on how to judge when and where the use of machine learning may or may not be appropriate. I am a luddite at heart and, despite growing up in the ’90s and ’00s, I am perpetually baffled by modern technology. I’m fairly certain most stuff happens by literal magic, and we’re all too frightened to admit that none of us understand the explanations of how the internet, digital photographs or microwaves work, because we think everyone else “gets it”.
I know embarrassingly little about neuroscience. Although my undergraduate degree was a dual honours BSc in Neuroscience and Psychology, all I can really remember about neuroscience is that it is really hard, because brains are extremely complicated, mysterious and all-round wacky little organs.
I also have no idea how to read an MRI, an fMRI or the results of any other brain scan. The images they produce are pretty, and hearing someone confidently explain what the funky colours on the spinning 3D image of a brain probably mean is extremely compelling and I want to understand, but alas, these talks leave me with little other than a renewed interest in geology. You know where you are with a rock.
It’s also important to remember that while I do have a PhD in psychology, like all PhDs it is in a super-specific topic. I might write an article about it one day, when I am able to contemplate my thesis without experiencing a degree of panic that makes me wonder whether my tongue is swelling or if my head is shrinking. But suffice to say, I am not an expert in neurodiversity, mental health or anything that a university syllabus would give the unfriendly title of “abnormal psychology”.
Oh, and finally, a declaration of a conflict of interests, or perhaps a cause of motivated reasoning: last year I was diagnosed with ADHD, and since getting that diagnosis a lot of stuff has started to make sense. I’m pretty attached to my diagnosis; I feel like I understand myself a little more and feel better equipped to start taming my own personal chaos.
So, with that brief list of the most obvious reasons why I am utterly unqualified to review a paper which uses MRI scans and computer learning to examine whether Attention Deficit Hyperactivity Disorder (ADHD), Autism Spectrum Disorder (ASD), and Obsessive-Compulsive Disorder (OCD) should be considered as three distinct categories of neurodiversity out of the way, let’s get to the review!
TLDR; I’m not really qualified to examine this paper, but I’m going to anyway.
Examining overlap and homogeneity in ASD, ADHD and OCD
A few months ago I stumbled across a Twitter thread discussing the findings of a paper titled Examining overlap and homogeneity in ASD, ADHD and OCD: a data driven, diagnostic-agnostic approach, by Kushki et al. 2019. As I combed through the thread, which excitedly described the research, some of the reported findings of the paper didn’t sound right. The overall message seemed to be that this paper had good evidence to suggest that ASD, ADHD and OCD are not separate conditions, but are just different points on a long continuum of neurodiversity, and therefore these diagnostic labels may not be valid. This is not the first paper to raise these questions; indeed there is ongoing debate over the diagnosis of various neurodevelopmental and behavioural conditions, and a not insubstantial number of researchers argue that Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder should be thought of as different points on the same spectrum. The paper by Kushki et al adds to this existing literature, but is unusual in including OCD.
The thread alerting me to this research gives an excellent summary of the paper by Kushki et al. The tweeter didn’t jump to any conclusions or wildly extrapolate from what was said in the original paper. In fact, to my mind, they have done an excellent job of picking out key points in the paper, summarising them accurately, and sharing them in an accessible way.
However, I am skeptical of the argument that ADHD and ASD are different presentations of the same condition for several reasons, not least of which is that people with ADHD and ASD (and OCD for that matter) appear to require different forms of support and respond differently to the same medications. But the Twitter thread and many of the replies seemed to indicate that there was something special about the paper by Kushki et al, and that it dealt a killer blow to the idea that ADHD, ASD and even OCD are different conditions. Unsure whether something had been lost in translation from paper to Twitter thread, or if the research was indeed a scientific coup de grâce, I looked at the original paper to see for myself.
The study described by Kushki et al uses brain scans and machine learning to investigate whether individuals diagnosed with ADHD, ASD and OCD have different neuroanatomy from each other, and from individuals who appear to be neurotypical. Unfortunately, I think the paper suffers from some important flaws.
The participant pool is questionable, and brains are weird
The paper claims to take a “diagnostic agnostic” approach to investigating the cortical thickness in the brains of 226 children between the ages of 6 and 18. Of these 226 children, 112 had a pre-existing primary diagnosis of ASD, 58 had a pre-existing primary diagnosis of ADHD, 24 had a pre-existing primary diagnosis of OCD, and 22 of the children had no pre-existing diagnosis and were used as “typically developing controls”.
This is quite a small sample, once you look into how the different groups break down. Most research using an MRI scanner is likely to have a small sample size, because an MRI scanner is a very expensive bit of kit that is also expensive to run. This is par for the course, but what does strike me as a problem is the large disparity in the numbers between groups. Again, this is not an insurmountable problem, as there are many statistical tests that can compensate for variations in sample sizes between research groups, but it did cause me to raise my right eyebrow in a quizzical fashion.
The age range also struck me as very odd: between the ages of 6 and 18, people – and their brains – change an awful lot. Like I said, I am absolutely not an expert, but I’m not convinced it is possible to draw good conclusions about the relationship between cortical thickness and neurodevelopmental categories in children when you are looking at brains in people with such a large age range.
The relationship between brain anatomy and behaviour is rarely simple, but for illustrative purposes, let’s imagine that it was. Imagine that there is a hypothetical area of the brain responsible for controlling how much a person likes bread – we can call this area the Nucleus of Crumb Buns. Now let us imagine that individuals who report absolutely loving bread reliably have a much larger Nucleus of Crumb Buns than individuals who are bread ambivalent. Brains change so much that measuring the Nucleus of Crumb Buns in a six year old might not be predictive of the individual’s love of bread, or the size of their Nucleus of Crumb Buns at eighteen.
But the relationship between structure, function and location is not that simple in humans, and it is rarely possible to determine a clear relationship between location, structure, function and ultimately emotions, cognition, and behaviour. Brains are tricky little things. For example, generally, the language centres of the human brain are located in the left hemisphere, but when an individual loses their language ability due to a stroke in the left hemisphere of the brain, during recovery it is not uncommon for analogous locations in the right hemisphere to start taking on the jobs that are usually done in the left hemisphere. Brains change, and under some circumstances, they can change a lot.
To my mind, it seems very simplistic to use only one measure that can be indicative of ASD, and only one measure that is indicative of ADHD. Firstly, both ADHD and ASD are incredibly heterogeneous. Some people with a diagnosis of ASD have relatively little difficulty in navigating social interactions, whereas others struggle and find social interactions extremely stressful. Inattentiveness is common among people with a diagnosis of ADHD, but for some this is not their primary symptom, whereas for others, their inattentiveness is the bane of their lives.
Measuring such complex disorders with single behavioural measures is, in my opinion, overly simplistic. At this point, the researchers are not measuring disorders, they are measuring some symptoms which we know are neither ubiquitous nor unique to the disorders the researchers claim they are indicative of.
Furthermore, these measures are filled out not by the individual who may have OCD, ADHD or ASD, but by clinicians, based on conversations with the child and/or their parents, or on clinician observations. This approach of course has its uses, but it does create opportunities for misinterpretation – for example, from the outside, self-stimulation (stimming) behaviour can look very similar to obsessive compulsive behaviour, but the internal cognition behind these two things is different.
Brain images and debatable results
To explain in simplified terms, the physiological brain data was collected via brain scans of participants in one of two hospitals (one in Toronto, the other in Montreal). The images were processed using a series of analysis tools and procedures, resulting in measures of cortical thickness in 76 regions of the brain for each participant. A regression analysis was used to measure the likely influence of age, sex, and hospital (to control for the influence the different machines might have on the images) on the cortical measurements. This data and the data from the behavioural measures was then pumped into a machine to do its clever machine-learning thing, resulting in data clusters: groups in which more similar data points collect together to make one cluster, while more dissimilar data points form different clusters.
In this study, this process resulted in 14 clusters based on neuroanatomy. These clusters were then analysed to see which participants (and therefore which primary diagnoses) fell into which cluster, allowing the researchers to see whether each cluster contained only individuals with one diagnosis, or whether some clusters were populated by individuals with different diagnoses.
Through all of this data and computer magic, the researchers have produced a bunch of statistics, and some interesting data visualisations and infographics. Some of it is funky, with lots of nice colours representing all the different clusters, but a lot of it is also just confusing and hard to interpret. For now, let’s ignore the pretty colours and look at the basic statistics. The participants in each diagnostic group had significantly higher scores on the questionnaires that measure the primary symptom of that group, meaning participants in the ASD category had the highest scores on the SCQ, the questionnaire designed to measure social difficulties. Participants with OCD had the highest scores on the questionnaires used to measure OCD. ADHD participants had the highest levels of inattentiveness. Basically, this very unsurprising result shows us that participants who have been diagnosed with condition A had very high scores on the questionnaire frequently used to help diagnose condition A.
The more interesting findings are that 46% of participants with a diagnosis of ASD also met the clinical cut off on the SWAN (indicative of ADHD), and 40% of those with an ASD diagnosis met the clinical cut off for the OCD measure. 11% of participants with a diagnosis of ADHD met the clinical cut off on the SCQ measure (indicative of ASD), 17% of participants with an ADHD diagnosis met the clinical cut off on the OCD measure. 8% of participants with an OCD diagnosis met the clinical cut off for the measure indicative of ASD, and 24% of the participants with an OCD diagnosis reached the clinical cut off on the SWAN, which is indicative of ADHD. And finally, of the 22 typically developing controls, none of them met the clinical cut off on the measures indicative of ADHD or ASD, but 2 of them did exceed the clinical cut off on the OCD measure. Or, to put it another way, two of the individuals who were in the typically developing category may well have had undiagnosed OCD.
So, what to make of this set of results? Does the finding that many of the participants with one diagnosis had clinically significant symptoms of other conditions mean that all these conditions are just different points on one big continuum, as the authors seem to be arguing? Or does it just provide further support for the often reported finding that people with one neurodevelopmental condition often also have another, because ADHD, ASD and OCD are not mutually exclusive conditions? I rather think it is the latter.
In fact, it is well documented that ADHD and ASD often co-present. Research looking into these conditions has found that anywhere between 20% and 80% of children diagnosed with ASD also meet the diagnostic criteria for ADHD, while between 30% and 50% of children with a diagnosis of ADHD also meet the diagnostic criteria for ASD. In this paper by Kushki et al, only the participants’ primary diagnosis is taken into account; it is entirely possible that many of the participants had more than one of the conditions mentioned, and it is not clear if participants with a dual diagnosis were excluded.
Furthermore, even if all participants only had one diagnosis (apart from the 22 controls who had no diagnosis), it doesn’t necessarily mean they only had one condition, and, as this study itself beautifully illustrates with the two typically developing controls who were found to meet the diagnostic criteria for OCD, sometimes people are not diagnosed with conditions because no one has realised the condition is there until the individual is tested for it.
There is also the problem that some of the symptoms of these disorders can look similar from the outside but have completely different causes. Take for example the finding that people with ADHD and people with ASD often find social interactions difficult: whether the causes of these similar difficulties are the same isn’t always clear. Growing up as a neurodiverse person surrounded by neurotypical people can mean that you interact differently to the people around you; you notice you are a bit different, other people notice you are a bit different, and other people treat you differently to how they treat everyone else. This in itself could cause social problems: not because the condition itself causes them, but because repeatedly receiving negative treatment from others could cause social anxiety.
Alternatively, similar outward symptoms or habits, such as having difficulty following conversations, can have different internal cognitive causes, for example, one person may have difficulty socialising because they can’t read between the lines of what people are saying and take everything at face value, while others may have difficulty socialising because they can’t pay attention when others are talking, so they lose the thread of a conversation and misunderstand what’s going on because they zoned out for a bit.
The data visualisations are strange
The authors clearly went to a lot of time and effort to create bar graphs and scatter plots and brain images in many fun colours, so I guess it would be rude of me to ignore it all. Let’s look at one of the more penetrable data visualisations. There is a lovely bar graph showing what percentage of participants from each diagnostic category (either OCD, ADHD, ASD or typically developing) ended up in which cluster, from clusters 1 to 10. This is a tad confusing, as earlier in the paper we are dealing with 14 clusters, but in this graph there are only 10, and later there is a diagram showing clusters 1 to 10 but with cluster 5 removed because it was a poorly defined cluster. This in itself makes me wonder how exact these clusters are, if they can be reduced from 14, to 10, to 9 for no clear reason. I can’t help but wonder if the number of clusters the researchers use in their analysis is somewhat arbitrary, or at least the result of a judgement call on the part of the researchers, rather than a strict number dictated by the results of cold hard algorithms created by machine learning.
On this graph there is a yellow bar on the left-hand side representing the number of typically developing participants the machine learning tool put into cluster one. The yellow bar indicates around 36% of 22 people – i.e., 7.92 people categorised as typically developing have been sorted into cluster one.
Over on the far right, representing cluster 10, are around 12% of the 112 participants with a diagnosis of ASD, and no one else. No one else is in cluster 10, just 13.44 children with a diagnosis of ASD. Clusters 8 and 9 are populated only by children diagnosed with ASD or ADHD. In clusters 1-7 there are children with OCD, and neurotypical kids are only found in clusters 1-5. There are ADHD kids in every cluster except for 10, and there are ASD kids in every single cluster. Between the cluster groups, the slightly odd percentages, and the presence of a dotted line designating half the graph as the neurodevelopmental disorders group (even though there are children diagnosed with a neurodevelopmental disorder in the half of the graph that isn’t labelled as the developmental disorders group), it is difficult to determine exactly what conclusions one is supposed to draw from this graph, even on very close inspection.
There is also a funky looking matrix, where SCQ (the “autism measure”) scores run along the X axis, and SWAN (the “ADHD measure”) scores run along the Y axis. The body of this matrix is populated by presumably 266 different data points, each in one of 9 colours (one colour per cluster, from clusters 1-10 minus cluster 5, which has been removed because it was too poorly defined) and each one of four different shapes (each shape representing participants with a diagnosis of OCD, ASD or ADHD, or participants classed as typically developing).
It’s a real doozy of an image. Some of the data points are so close together they almost merge into a blob. I think the impression one is supposed to get from this image is that the various diagnostic labels are meaningless, because there are different shapes and different colours all over the place. But if you look closely, all of the typically developing controls are exclusively in the bottom left quadrant, with low scores on the SCQ and SWAN. The left half of the matrix and the top right quadrant have participants with a primary diagnosis of ADHD, ASD or OCD. And in the bottom right quadrant, with high scores on the SCQ and subclinical scores on the SWAN, you see nothing but round symbols representing those with a diagnosis of ASD, plus one solitary square point, representing one participant with OCD, who appears to have a very low score on the SWAN but an SCQ score that just crosses the line into being clinically significant.
Sure, this matrix does not show kids with a diagnosis of ADHD, OCD or ASD all sitting nicely in their own little boxes with absolutely no overlap, but to me, this matrix doesn’t look like meaningless noise either. To me this looks like a visualisation of the extent to which different categories of neurodivergence often share symptomatology, and of the fact that more than one flavour of neurodivergence can co-occur in one patient simultaneously.
Conclusions
This paper is not a deathblow to the idea that ADHD, ASD and OCD are separate conditions; if anything, the message I take from this paper is that people with one diagnosis are highly likely to have symptoms of other conditions, and often this reaches clinical significance, indicating that the individual does indeed have two or more conditions simultaneously. That said, there are plenty of other papers looking at whether ADHD and ASD really are separate conditions, or if they are better thought of as different aspects of the same condition expressing itself differently in different people.
It’s an interesting question that I am sure will keep many researchers very busy for years to come, and it is entirely possible that as research continues I will be proven wrong and will have to reassess my position. That said, despite the excitement I have seen expressed about this specific paper, I’d strongly argue that it is not an irrefutable bit of killer evidence that undeniably supports the hypothesis that ADHD and ASD are the same thing.
Ultimately, what this paper does do is further illustrate several things. One, that humans are incredibly complicated. Two, that neurodiverse people can be extremely different from each other; just as no two neurotypical people are exactly the same, no two autistic people are the same, no two ADHD people are the same, and no two people with OCD are the same. Three, that brains are extraordinarily complicated, and drawing a neat line from brain structure to human behaviour is fraught with difficulty. Four, that the statistical analysis and the interpretation of data involved in studying human brains and behaviour is incredibly difficult. And five, that maybe, just maybe, I’d have fewer grey hairs if I had decided to do geology.
What is the best way to regulate practitioners of so-called alternative medicine (SCAM)? When tackling this thorny issue, we first need to ask: what is the main purpose of regulation in healthcare? Practitioners of SCAM often lobby for regulation because they feel it might give them better recognition (and income). But that is most certainly not what regulation should be about. Any effective regulation must foremost be for protecting the public.
Protecting against what?
I have no doubt that most SCAM practitioners are full of good intentions and only wish the best for their patients. But many are not adequately educated and trained to be medically and ethically competent. And yet, under the self-regulation that currently governs most types of SCAM practitioners in the UK, they happily diagnose, treat, and advise patients, even those with serious conditions. This overt mismatch between professional competence on the one hand and clinical responsibility on the other must inevitably put patients in danger. Therefore, self-regulation cannot possibly be the best way forward.
The other solution would be regulation by statute (as is currently the case for chiropractors and osteopaths in the UK). The statutes would need to ensure that practitioners abide by the fundamental rules of medical ethics and treat their patients according to the best evidence currently available. But this creates two rather awkward problems:
In case this sounds a bit harsh, a simple example might explain. Imagine that a patient suffering from abdominal pain consults an osteopath. The practitioner wants to use spinal manipulations but, in order to comply with informed consent, she would need to tell the patient that:
this treatment has not been shown to be effective for the condition in question,
it is not free of risks, and
it lacks plausibility.
In addition, the osteopath would be obliged to inform the patient that she cannot be sure about the cause of the pain which might even be a cancer, and that a proper doctor would be in a far better position to make a full diagnosis and determine the most effective therapy. Does anyone really think that the average osteopath is going to do all this, and lose their patient and fee in the course of it?
Even such a simple example shows how problematic any truly adequate regulation of SCAM practitioners is. Governments across the globe have struggled with this conundrum and have implemented compromises of various types. Without exception, they have one major disadvantage: they create a double standard in healthcare, where strict rules are applied to conventional practitioners and more liberal ones to SCAM practitioners. But double standards are far from desirable.
So, is there a solution?
If it was up to me, I would insist on one single standard across the board. This means that sound evidence has to come before regulation. If SCAM practitioners can produce convincing evidence that a particular SCAM, say spinal manipulation, does more good than harm for a defined condition, they should be allowed to use it in the management of that specific problem. If, however, the evidence is absent or unconvincing, the regulation must prevent them from using it.
I am fully aware that this would put many SCAM practitioners out of business – but, as mentioned above, regulation must be for protecting the public and not for boosting the ego or the income of practitioners.
Weight loss is just about calories in and calories out, right? We’ve all heard this a lot, and after all it’s simple physics. You take calories into your body, and you use them in the course of daily living and exercise. If there’s a surplus you store it; if there’s a deficit you lose weight. Simple. And trivially, it’s even true. But it’s actually a lot more complicated than that, to the point where such a simplistic maxim can be cruelly misleading.
Calories in
Surely this bit’s easy. There are calorie labels on everything. A calorie is a specific, unchanging amount of energy. The energy in any food can be precisely calculated, so calories in must be cut and dried? Obviously not, or we’d not be discussing it. We can leave aside the inevitable errors made because ingredient composition and portion sizes vary, even in standard meals. Although these variations can be quite high, they are obvious, and a deviation most people are happy to accept. What is more important, and more subtle, is the format of the food.
Many people have noticed that eating a certain number of calories from some foods impacts weight loss differently from the same number of calories from different foods. One reason for this is the bioavailability of what’s in the food, given how it has been prepared. Tree nuts, for example, provide significantly fewer calories when consumed whole as opposed to when ground. This is because grinding disrupts the food matrix, increasing the bioavailability of the energy in the nuts. The same is true of smoothies, compared with their unblitzed ingredients.
There is a similar story for ultra-processed foods. Processing food has a long history. Many methods maintain, or even improve, the nutritional value of the food. The levels of processing vary from simple mashing, canning and freezing, through to smoking, curing and fermenting.
Ultra-processed foods, in contrast, have ingredients (often in the form of various chemicals) added during commercial processing. Skeptics have often scoffed at the idea that if you haven’t heard of it, or can’t pronounce it, then you ought to be wary. Everything is, after all, a chemical, and that the chemical terms for well-known foods may be unpronounceable is also a truism. But foods aren’t known by their chemical names.
One way to think about this is to list the ingredients you may use to bake a cake, then read the ingredients on a commercially produced cake, and ask yourself what the additions are. Their purpose is simple: to improve the taste, colour, mouthfeel, longevity or stability of the cake. But their addition makes the food ultra-processed, and we now know that this ultra-processing can have negative consequences in terms of diet quality, gut flora and calorie density.
So although the calories on the label may be identical to, or even less than, a similar meal you have made yourself, they will have different effects on your short and long term health and weight. Alternatively, you may discover, after the fact, that you just ate a great deal more, in terms of excess energy, than it had appeared.
Calories out
Our calorie expenditure is just as tricksy as our intake. Many of us have watched the calories ticking up on the treadmill, or have read just how far we have to walk to burn off 100 calories. As you may suspect, this is worse than guesswork, and is more akin to wishful thinking, calculated as if we were machines, rather than the complex biological beings we are.
There are two main ways this complexity manifests. Firstly, in our exercise response. When we start a new form of exercise, we may indeed burn a fair amount of energy, but as we become more practiced, we have biological adaptations which kick in. Our muscles become more efficient, requiring fewer calories for the same effort.
Secondly, if we try to lose weight by limiting our intake, we have biological adaptations for that too. Before our current over-abundance of food, in some parts of the world, we were subject to repeated famine. Those most able to survive these famines were the ones who passed on their genes. We have been gifted, in this way, with a complex famine response which has a number of effects. We become hungry, often with a desire to binge on energy-dense foods; we become cold, lose our libido, and fidget less. All with the result that our energy requirements go down, and the same intake no longer leads to weight loss.
Other factors
People’s appetites – not something under a great deal of control – come in for close, often unkind, scrutiny from other people. Resisting food is pretty easy if you feel satiated, but it can feel impossibly difficult if you’re hungry. Some people are simply hungrier than others. People who wouldn’t dream of accusing a heroin addict or an alcoholic of lack of will power often happily level the same accusation at the hungry. The hormones which regulate our appetite are currently a major area of research, but we do know that they interact with, and contribute to, our psychological attitude towards food, and in fact the first appetite hormones available in drug form are just appearing on the market.
Our gut microbiome is another major area of research at the moment. We are discovering that it has a significant effect on many aspects of health, and it is encouraging that we can affect this by what we eat. Keeping it well-fed and diverse looks likely to have a big effect on our weight and general health.
And the much vaunted willpower? In brief, it’s unreliable, ineffective, and we are inordinately good at weaselling our way round it. Decision fatigue, although more complex than we originally thought, is real. What we all know from experience is that it takes just one craving too many, combined with a brief loss of focus, to render hours of willpower irrelevant.
So, in summary, calories in/calories out is a physics-based truism which makes for a simple slogan, and a nice stick to beat unsuccessful dieters with, but reality is far more complicated than that.
Does that mean that healthy weight loss is impossible? Not necessarily, and there are times when someone may have a specific reason for needing to lose weight. Calorie restriction, however, very rarely leads to long term weight loss, and overall health is more important than weight. It is easy, with our current UK diet, both to become unhealthy and to gain weight, so cultivating a good relationship with food is sensible. A good first step might be to keep ultra-processed food to a minimum, feed your gut bacteria, don’t starve, and exercise. And the next time some smug know-it-all tells you that energy balance is a simple matter of physics, you’ll know that they are simply wrong.
Thirty years ago today, on April 19th, 1993, the standoff that David Koresh and the Branch Davidians held against the US government came to an end. Ultimately eighty-six people were killed. Those tragic events in Waco, Texas, are now revisited in Netflix’s Waco: American Apocalypse. The documentary is informative to the extent that it features survivors, and presents footage and soundbites of Koresh’s interactions with the FBI’s negotiators. But it largely leaves apocalyptic beliefs out of the picture, preferring to present Koresh as a psychopathic opportunist who merely used religion to advance his own personal carnal and material goals.
There are strong reasons to believe that Koresh was actually sincere in his apocalyptic endeavor. A much more accurate approach to Koresh and the Branch Davidians is presented by Biblical scholar Bart Ehrman in his recently published book, Armageddon: What the Bible Really Says about the End. Ehrman explains that
as the story unfolded over the next few days, it became clear that the besieged group of Davidians were following what they understood to be divine principles laid out in the book of Revelation.
Conspicuously absent from the Netflix documentary is James Tabor, a scholar who volunteered to engage with Koresh in matters of apocalyptic interpretation, so as to persuade him to lay down arms, using his own eschatological worldview. The FBI dismissed all of that as “Bible babble,” opting to deal with Koresh as if he were a conventional criminal – the same way that they deal with, say, a bank robber holding hostages. It did not end well.
Ehrman’s book is a testament to the oft quoted saying, “ideas have consequences.” The tragic story of Koresh was triggered by his fascination with the book of Revelation. In the late first Century, a mysterious author by the name of John on the island of Patmos – most likely a different person than John the apostle, or the author of the fourth gospel – wrote letters to seven churches and put in writing a series of visions about the end of the world. In the midst of perceived oppression – both real and imagined – John foresaw a time when God would violently set things right, turn the tables against the oppressor, and vindicate the oppressed. When would it happen? Ehrman insists that:
John certainly believes that what he has described in this mysterious narrative is to happen soon.
The opening of the seven seals, the deeds of the whore of Babylon, and the invasion of Gog and Magog, were supposed to be right around the corner.
Of course, nothing of the kind has come to pass. But two thousand years later, readers of this bizarre book are still mystified by John’s announcements and refuse to come to terms with the fact that he was just plain wrong. When so much hope is invested in some apocalyptic event, believers are likely to somehow reinterpret the original failed prophecies as if they had a different meaning, so as to keep believing in them – a process psychologists explain in terms of “cognitive dissonance.” One of the most dangerous of such cognitive dissonances is found amongst current enthusiasts who interpret the book of Revelation, not as a series of failed prophecies and bizarre visions in the context of the 1st Century, but rather, as a blueprint for things that are yet to come in our times.
John is yet another failed apocalyptic prophet, one of the first in the long list of doomsayers that have colored the history of Christianity. It is easy for liberal Christians to distance themselves from such apocalyptic enthusiasts, and Ehrman narrates at some length how in the history of the formation of the New Testament canon, not everyone was convinced that the book of Revelation should have been included.
But liberal Christians cannot simply let go of Jesus, so they transform him into something far removed from the real historical person. Their Jesus is akin to some modern-day hippie who preaches love and tolerance and has nothing to do with apocalyptic firebrands. In this book, Ehrman is not greatly concerned with Jesus or his teachings, but he has made a career debunking the rosy view of Jesus, and instead, has profiled him as a fiery apocalyptic preacher. Not unlike John, Jesus announces terrible things to come — as in Revelation, in some of Jesus’ speeches there is the expectation that God would violently intervene so as to set things right— and insists that “this generation will certainly not pass away until all these things have happened.” As with all apocalyptic preachers of his time, Jesus announced the imminence of such events. He was wrong.
This raises an interesting question: if Jesus was no twentieth-century liberal university campus activist, was he more similar to the likes of David Koresh? Nowadays, it is all too easy for people to say that the Branch Davidians are a “cult,” whereas Christianity is a “religion.” But what exactly is the difference between the two concepts? In my view, it is a very arbitrary distinction; a cult is simply a religion one does not like.
Jesus and Koresh were both charismatic young men who were consumed by apocalyptic fantasies. Admittedly, there are important differences between the two. We now know that Koresh sexually abused children, whereas there is no indication that Jesus ever did such horrible things. While Jesus expected God alone to intervene, he likely did not seek to carry out the violence himself; in contrast, Koresh took a dramatically more active approach to apocalyptic violence.
But both men were driven by grievances, and we know that such circumstances are fertile grounds for apocalyptic enthusiasm. Jesus lived the harsh reality of Roman occupation. In contrast, it is easy to think that Koresh — a white American male in 20th Century America— underwent no oppression. But on a personal level he struggled with many things — including dyslexia and social alienation — and the way he met his end raises questions about how benevolent the US government was. The Netflix documentary aptly presents many of the shortcomings in the way the US government handled the situation, and this eventually played into the narrative of terrorists such as Timothy McVeigh.
Can something good ever come out of apocalyptic frenzies? I doubt it. But in the case of Jesus and the book of Revelation — and to a certain extent Koresh— one can understand that such fantasies convey a message of desperation. Disturbingly, that is not the way current apocalyptic enthusiasm works. While grassroot apocalypticism persists in many places, its most dangerous variant is manufactured from above. Cynical politicians and businesspeople have noticed that they can profit from pandering to regular people’s apocalyptic expectations, and doomsday mongering has become an industry in and of itself. While John wrote from exile in Patmos in adverse circumstances, Left Behind is a multimillion-dollar franchise.
David Koresh is largely abhorred for all the havoc he wreaked. But he was a little fish. We should be more worried about the big fish: the lobbyists who influence the highest echelons of power, so as to carry out what, in their view, are the necessary events to accelerate Jesus’ second coming. They are the ones who are eager to fan the flames of conflict in the Middle East, and whose mouths water every time a catastrophic event happens, all in alleged fulfillment of a script written by a strange visionary mystic on the island of Patmos two millennia ago.
There are a few things these cults have in common – but one of them is that they all are or were based in the US. It’s very easy to get the impression that cults just can’t really happen over here in Britain. Maybe you think we’re just too cynical a bunch to fall for some charismatic leader’s claims of peace and love changing the world? But Catrin Nye would disagree with that impression. Because she’s spent the best part of two years researching a potential cult on our very own shores.
The podcast follows the story of Jeff who, when he joined a reading group online, was picked up by the book club leader, Jai. According to the show, Jeff became Jai’s mentee, and would go on to spend £10,000 for a mentoring course to improve his discipline; Jeff wanted to hike to the South Pole, something that would require a remarkable amount of discipline.
Two years later, Jeff says he had given £131,000 to Lighthouse. He’d sold his house, become a ‘Lighthouse Associate Elect’, and was spending five to six hours a day, from 5am, on long calls where Lighthouse leader Paul Waugh would talk at length about ‘toxic families’ and the ‘four levels’ of person you could be. Only Paul had reached the top level of ascension, level four; other members of Lighthouse were at lowly level one, but he could help them ascend. Each of these calls was recorded, transcribed and stored, to make sure not a single word spoken by Paul was lost.
Jeff still hadn’t made it to the South Pole. In fact, none of the people Catrin spoke to over the course of the show apparently ever really achieved any of their goals, but they did spend day after day on hours and hours of calls, sometimes in houses shared with other Lighthouse members (one property owner believes eight people shared the six-bedroom house she rented to Lighthouse).
What A Very British Cult does remarkably well is to humanise these stories. We can all be guilty of finding stories about cults a little too salacious – we focus on the most horrific stories until we become so desensitised that jokes about “drinking the Kool-Aid” are completely divorced from their gruesome context. But it’s easy to forget that the people caught up in cults can be completely ordinary people, people with dreams and aspirations and goals.
Erin (not her real name), an ex-Lighthouse member Catrin spoke to, had experienced incredible trauma in her childhood, and had recently gone through a divorce, whereas Jeff had a relatively happy upbringing, and was in a relatively happy relationship. Nevertheless, their time in Lighthouse alienated them both from their families and loved ones to some degree, fostering a reliance on the safety and support of Lighthouse itself. Erin wanted to set up a business; Jeff wanted to hike to the South Pole – goals and aspirations that they were told Lighthouse could help with.
Catrin’s skill is in asking the difficult questions that get to the core of how and why each of the ex-members she spoke to believed in Lighthouse enough to give it tens of thousands of pounds they often didn’t have. She deftly exposes the depth of the damage that has been done to them, and the ways in which she believes this ‘life coaching company’ is arguably a cult.
What A Very British Cult exposes is something very real, very raw and very relatable. The podcast shows step by step how people are drawn in, gradually, slowly over time, in a way that any ordinary person might if they were exposed to these encounters. When Catrin explains that Lighthouse might have looked for people who were recently made redundant when a travel agent went out of business, I thought of family members who lost their jobs around the pandemic and how vulnerable they might have been in a similar situation.
When members try to leave Lighthouse, the podcast describes how leader Paul Waugh would switch from compassionate and supportive life coach to aggressive… and back again, several times. We hear audio of Paul telling Erin that she’s “so fucked up”. When Jai thought about leaving, he explains that Paul “loved him through it”, and we hear audio of Paul claiming he “tucked Jai away for six years” in which he “never talked to anyone” outside of Lighthouse. Jai, as the BBC found in their investigation, has CCJs for £20,000 worth of debt; Paul, by contrast, has no debt, and lives in a very large house.
Ultimately, the BBC weren’t the only ones investigating Lighthouse, and the company was wound up by the courts for business failings just last month – in fact, the day before A Very British Cult launched. The details of the court case and a hostile conversation between Catrin and Paul are aired in the final episode of the show.
The podcast is a gripping listen: it delves deep into how Lighthouse works and comprehensively makes the case that this organisation is a cult. It describes the very deep harm its ex-members have suffered, and touches on the experiences of family members of those still inside the group. It also shows the humanity and bravery of the ex-members it highlights, and compassion for those Lighthouse members still inside the group, and it does it all while being engaging and very listenable.