
University of Liverpool’s proposed redundancies raise serious red flags for academia


Working in academia comes with a lot of responsibility. For people working on teaching and research contracts, there are a number of responsibilities that aren't immediately obvious, even if you have some vague appreciation of how academia works.

Firstly, you must bring in money to support a group of researchers – this means applying regularly for grants from a wide range of funding bodies. Once you've secured funding (never a guarantee), you become the Principal Investigator (PI), with all the admin involved in implementing the research those grants fund. That might mean writing job adverts to hire post-doctoral research associates, or writing studentship adverts to bring in PhD students. Then you have to deal with all the admin of filling those positions, and with the responsibility of managing a team of staff and students. It's no small job to manage a team of people who are dependent on you for their salaries, education and general professional development and wellbeing.

You’re also responsible for all of the projects you have funded – you might not have time to do the research yourself, but you do need to make sure your staff or students are supported in completing that research, which includes designing and conducting the experimental work, analysing the data and interpreting the results. It also includes ordering the materials needed to complete the research and deciding how that gets paid for. Even if your staff make the orders, you are responsible for signing off on them. Your staff and students need support in writing manuscripts and reports, presenting their work at conferences and meetings. On top of that, you must work on manuscripts and reports yourself, and present that work at conferences and meetings as well as (outside of Covid) travelling to those conferences and meetings.

That’s the research side – and it’s a full-time job in itself. But you are also responsible for teaching. That can include designing an entire undergraduate module and managing and lecturing on that module, or it might “only” mean you lecture on other people’s modules. Either way, you might well be racking up hours and hours of lecturing time for hundreds of students at a time. You are also likely to be responsible for one or more tutor groups, as their touchpoint throughout their entire degree. Plus, you might design or teach on practical courses. You are also likely to supervise a number of undergraduate or master’s degree dissertation projects each year – helping those students design and implement a ten-week research project in such a way that they get enough data to write and submit a report and present short talks or posters on their work. Each of those students might later ask for your help in applying for their own PhD positions or jobs and you would, of course, be happy to help them by giving them advice, interview practice or writing them references. Teaching is also a full-time job.

Once you add in the demands of doing all of this under Covid conditions, the workload skyrockets – you must convert all your teaching to online teaching and re-design or record your lectures online. You must figure out how to support your students when you can't see them as frequently as you normally would, and you must support all your staff through these challenging times. You might need to apply for extensions from your funding bodies, figure out how to present work online, and work out how to keep working despite a lack of access to labs. Plus you might now be home schooling and sharing your workspace with a partner, or dealing with being entirely alone. You're worried about your family and friends. All the stuff that we've all gone through to greater or lesser extents.

But at least you have some level of job security. Academia is precarious – many are on short (under three-year) fixed-term contracts – but once you're at the level of managing your own group, you have some measure of security. You are dedicated to your university, and you hope they are dedicated to you.

So imagine the distress of 47 members of staff at the University of Liverpool who were told recently that their University was considering making them redundant.

University redundancies

At the end of January the Guardian reported that at least 9 universities were planning redundancies.

They wrote:

“In London, the University and College Union is fighting potential job losses at three institutions: the University of East London; Goldsmiths; and Senate House, University of London. Elsewhere, redundancies are planned at the universities of Liverpool, Leeds, Leicester, Southampton Solent, Brighton, and Dundee.”

What many of these redundancy proposals have in common, as far as I can tell, is a lack of transparency. At many of these institutions, it is not clear why particular staff members have been or will be selected. Some say they are not making any judgement on the performance of their staff, while others, like the University of Liverpool, have given criteria that are perceived as particularly problematic across the academic community.

The University of Liverpool

In June 2019, the University of Liverpool's Faculty of Health and Life Sciences announced new plans for a “strategic realignment” – a restructuring – with the stated goal of “improving health outcomes throughout the Liverpool City Region and beyond”. Professor Louise Kenny, the Pro-Vice-Chancellor for the faculty, said:

“Project SHAPE will enable us to realign our expertise through a new structure and vision, which enables us to work in a more agile way and respond to unmet societal needs locally, nationally and globally.”

And the article said:

“The plans aim to deliver world-class, research-connected teaching and to deliver scientific research excellence with societal impact. This will provide the very best environment for tomorrow’s students, and create a platform for the University to engage in effective partnership working across the region to translate our fundamental research strengths into life-changing benefits, both locally and globally.”

Phase one of this project was to map out a new structure, creating “four new institutes and four supporting directorates”.

And then came phase two.

In a recent statement, the University describes phase two this way:

“The University of Liverpool is currently engaged in the formal process of collective consultation with trade unions about the proposed redundancies in the University’s Faculty of Health and Life Sciences (HLS) as part of Project SHAPE.”

The University were considering making 47 academics redundant. But the important question is: how were those 47 selected?

In their public statement in March the University said:

“Firstly, we used a measure of research income over a five-year period to identify colleagues who may potentially be placed at risk of redundancy (although it should be noted that anyone employed at or below 0.2FTE, or who was appointed or promoted later than 2016, was not in scope).”

“At this point, a range of factors that might remove colleagues from the pool of those potentially at risk were considered, including the contribution of positive citation metrics where appropriate.”

So what does this mean? The first part, hopefully, is self-explanatory – if you’re bringing in plentiful grants, you’re ok. If, for whatever reason, your grant income is reduced then you’re at risk of redundancy. The second part relates to academic publications. You might have heard of the (arguably incredibly flawed) “publish or perish” model of academia – the idea that academics must regularly be publishing their work in academic journals, and that these must be high quality journals with reasonable impact factors. But it goes a step further than that. Your publications must also be valuable to the wider community. One way to measure that value is to look at how many publications cite or “link back” to your publications.

Citation metrics

So what do we know about the citation metrics? Currently, not a great deal: the University haven't said publicly which citation metrics they are using. But two of the people who have anonymously identified themselves as at risk of redundancy have mentioned the SciVal Field Weighted Citation Impact (FWCI), and the same metric is mentioned in an open letter from University of Liverpool academics.

SciVal is an analytics service from Elsevier, an academic publisher. The Field Weighted Citation Impact of a researcher is the ratio between the total citations received by that person's published output and the total citations that would be expected based on the average for the subject field.

So a ratio of 1 means you're receiving the average number of citations for academics in your field; a ratio above 1 means you're receiving more citations than average, and a ratio below 1 means fewer.
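
To make the metric concrete, here is a minimal sketch – in Python, with made-up numbers – of how a field-weighted citation ratio of this kind could be computed. The function name and figures are my own illustrative assumptions; SciVal's actual FWCI calculation normalises each publication by year, document type and subject field over a fixed citation window, but the underlying idea is the same ratio of actual to expected citations.

```python
# Rough, illustrative sketch of a field-weighted citation ratio.
# The figures and the simple summing below are assumptions for illustration only;
# SciVal's real FWCI methodology normalises each paper by publication year,
# document type and subject field over a fixed citation window.

def field_weighted_citation_impact(citations_received, expected_citations):
    """Ratio of citations actually received to the field-expected citations."""
    return sum(citations_received) / sum(expected_citations)

# Hypothetical researcher with five papers:
citations = [12, 3, 0, 7, 5]                 # citations each paper actually received
field_expected = [8.0, 8.0, 6.5, 6.5, 6.5]   # field/year/type averages for comparable papers

fwci = field_weighted_citation_impact(citations, field_expected)
print(f"FWCI = {fwci:.2f}")  # ~0.76 here: fewer citations than the field average
```

Even with this toy example you can see the statistical fragility the Leiden experts describe below: with only a handful of publications, a single heavily cited (or uncited) paper can swing the ratio dramatically.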

This measure was never intended to be used to rank the productivity of individual academics. The San Francisco Declaration on Research Assessment, to which The University is a signatory, recommends that institutions:

“consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.”

Using citation metrics as a shortcut has many problems. In an article for Times Higher Education, Professor Moher of the University of Ottawa, an expert in scientific publishing, said: “Inappropriate use of research metrics incentivises poor science, corner-cutting, and data massaging; while creating insecure, untrustworthy, and low-morale research cultures.” In the same article, experts in research metrics Ismael Rafols, Ludo Waltman, Sarah de Rijcke, and Paul Wouters, from the Centre for Science and Technology Studies at Leiden University, said that the “proposal seriously contravenes the principles of ethical and responsible use of research metrics”.

In an open letter sent to The University of Liverpool by experts in research evaluation and bibliometrics at Leiden University, concerns were expressed that “this proposal seriously contravenes the principles of ethical and responsible use of research metrics as stated in documents such as the San Francisco Declaration on Research Assessment, the Leiden Manifesto or the Metric Tide.” The letter goes on to state: “we are concerned about the statistical robustness of the Scopus Field Weighted Citation Impact score, in particular when applied to relatively low number of publications – for example, below 50-100 publications, which is generally the case for individuals.”

Teaching goals

The University of Liverpool stated their goal of providing “the very best environment for tomorrow's students”. According to the public statements from The University, when considering the redundancies of academics they are focusing on research income and citation metrics, which leads me to the impression that contribution to teaching is not being considered. This seems completely at odds with the goal of improving teaching – in fact, those academics who give the most to enhancing the education of students are likely to be those who have less capacity for bringing in grant income and publishing academic articles.

Equality, diversity and inclusion (EDI)

There are plenty of other reasons an academic might not have the same grant income or citation status as others in their institution. They might work reduced hours due to disability; they might have had periods of parental leave, taken time off for ill health or grief, or be a carer; they might have invested additional working time into initiatives that benefit The University. An open letter to The University highlighted that “the majority of the work to improve EDI (e.g., to decolonise the curriculum, to lead on anti-racism efforts, to obtain funding for EDI posts around antiracism, to mentor and coach early career colleagues, to widen participation, and to make the university a champion of community) is built on gendered and non-white labour.”

The need for transparency

Maybe The University of Liverpool are considering contribution to teaching and contribution to EDI projects in their assessment of these academics; maybe they did make allowances for disability, child rearing, sickness, grieving or caring. Currently, we have no idea, because there is a lack of transparency.

The University attribute this to confidentiality, saying in their statement:

“The University has maintained confidentiality around the proposals being discussed within collective consultation, but given the inaccuracies circulating publicly about these proposals, we have decided to release further information on the key issues being discussed.”

And confidentiality surrounding this is important; however, if staff have been made individually aware of their risk of redundancy, they deserve to know the full criteria. Either they have been made aware of the full criteria and it really is only these two, or they haven't been made aware of the full criteria and are left with uncertainty and anxiety.

The need for transparency goes beyond the individuals directly affected. As uncertainty continues, other academics at the University (who also already invest a lot of time in research responsibilities that go beyond grant income and publications) are left with distress and concern, both for their colleagues at immediate risk of redundancy and for their own situations, and for how they might protect themselves from sudden, unexpected redundancy in future. Are those academics reassured that they are deemed “productive” enough, or do they now fear their time is around the corner and that they cannot protect themselves, given the lack of certainty about what the University considers a priority?

But it's not just people on teaching and research contracts who are worried. One thing the University hasn't said anything about is how these redundancies will be handled. All of these staff will have their own PhD students and post-doctoral researchers to support. We do not know what will happen to those students and postdocs. Will they keep their positions in the University? That is certainly unclear for people in post-doctoral positions who are on fixed-term contracts. And for the PhD students? Will they be transferred to other PIs mid-way through their courses, or will they be expected to move – if their PI manages to find a position at another university?

As for teaching responsibilities? Will they now be split between the remaining academics, who may well feel their commitment to teaching is valued less than their commitment to writing grant applications and manuscripts for publication?

All of this is currently uncertain and leaves the entire faculty reeling from treatment of these staff that – as the open letter puts it – is “callous and utterly unacceptable”.

The conspiracy theorists who believe ‘traditional masculinity’ is under deliberate, strategic attack

In reacting to a photoshoot featuring Harry Styles, the conservative commentator and conspiracy theorist Candace Owens tweeted:

There is no society that can survive without strong men. The East knows this. In the west, the steady feminization of our men at the same time that Marxism is being taught to our children is not a coincidence. It is an outright attack. Bring back manly men.

The tweet from Candace Owens, which quotes a tweet from Vogue Magazine. There are two photos of Harry Styles; in the second he is wearing a pale blue, floor-length ball gown with a ruffled skirt.

Her tweet did not appear in isolation – there is an entire conspiratorial world built around the idea that men are being deliberately and systematically feminised by a secret group.

The key points in this narrative are that men – either across the entire world, or at least in a specific country or society – are being feminised, undermined, weakened or are losing key ‘masculine characteristics’. The claim goes that this has been happening for years, but it is only now possible to see these ‘changes’. Often the claim is accompanied by ‘evidence’, such as a graph showing testosterone levels declining over time, a photo comparison of a masculine stereotype (perhaps an actor in an action movie) with someone deemed to be less than masculine, a repurposing of a transphobic trope, or an image of men in clothing traditionally perceived as feminine. Each piece of ‘evidence’ is presented with the objective to argue that men are becoming more feminine, and therefore (in the mind of the conspiracist) weaker.

So who do these conspiracy theorists claim is responsible for this? As always, there are the classics: George Soros, the Illuminati, New World Order, The Jews – the greatest hits of the conspiracy world. Alongside these trusty scapegoats sit other perpetrators: Marxists, Feminists, Postmodernists, The LGBTQ+ community, progressives, and what some lump together as ‘The Left’. Each of these groups – or perhaps all of them – are said to be working in secrecy with the explicit agenda of destroying masculinity… but with what purpose?

According to Candace Owens and many of those who share her narrative, the objective is to destroy Western society via the destruction of masculinity, by subverting it with femininity. The conspiracy theorists posit that society can only survive with natural and strict gender roles that are biologically (or even Biblically) derived and immutable, and therefore any deviation from those roles will lead to total ruin and societal collapse. Thus all the evil groups need to do is achieve the mass feminisation of men, and the West will fall. That society is rarely claimed to be under threat due to women taking on more stereotypically male characteristics only serves to underline the misogyny inherent in this worldview: society will only crumble if men can be ‘weakened’ by being more feminine; they don’t believe women are ‘weakened’ by acting more classically masculine.

You may be wondering why this coalition of scapegoats is so intent on the destruction of our society. For some believers, the answer is obvious: these evil-doers seek to put a new society in place, allowing them to take over the world, control all the finances, et cetera, et cetera – the usual stuff. However, not all conspiracy theorists go quite so far, and their alternative explanations usually depend on who the conspiracists want to attack. For example, some argue that the feminisation of men runs hand in hand with the (they would claim) false notion of toxic masculinity – arguments that are most often used by the groups most intent on attacking feminism and feminists.

For the anti-feminists, the feminisation conspiracy theory exists to limit and challenge any attempt to consider and reflect on toxic masculinity. It is often driven by a misunderstanding – sometimes accidental, sometimes deliberate – of what the term ‘toxic masculinity' means; these conspiracy theorists interpret it as meaning that all masculinity is toxic, and as such see it as an attack on all men. From that starting point, they assume any attempt to challenge toxic masculinity is actually an effort to replace supposedly male characteristics with female ones. They then claim this enforced change causes all sorts of problems for society, giving licence to reactions like those of the Men's Rights Activist, Alt-Right or far-right movements, who claim they're in the right because they're pushing back against the evil forces trying to change and destroy men.

This conspiracy theory isn't limited to anti-feminist positions. Some believers claim the feminisation of men is just part of the ‘trans agenda', or, prior to this most recent moral panic, the ‘gay agenda'. These agendas in part posit that LGBTQ+ communities want to corrupt children and society, in order to persuade more people that they are gay or trans. The assumption, of course, is that those communities in particular would be happy to force people to live as a sexuality or gender contrary to who they really are – which seems unlikely, given that, in all of society, the LGBTQ+ community understands that particular pain better than anyone.

For decades, LGBTQ+ people have been fighting for rights and recognition, and the movement has made some significant and long-overdue progress in many places. As a result, the visibility of the LGBTQ+ community has increased in the media and in public spaces – especially on the internet, where people feel more comfortable expressing themselves and who they are. Consequently, some people find their worldview challenged, and some of those people find the challenge too much to cope with, leaving them open to narratives that offer an alternative explanation – which is where the forced-feminisation conspiracy theories come in.

Most – if not all – of the adherents to this conspiracy theory have the unshakeable belief that gender is defined by sex, that it is binary, and that those two genders have strict and immutable characteristics. This worldview doesn't allow for anything beyond that limited scope, so when they encounter (in the case of Harry Styles) a man wearing a dress, or someone expressing that their gender isn't binary, or a trans person, that worldview comes under threat. For some, the only way to resolve that tension is to subscribe to the belief that someone must be distorting and subverting the natural order of things, presumably with malicious intent.

For adherents to this conspiracy theory, it's vital for society that men adhere to a very specific stereotype: strong, assertive and silent – men who don't cry or show any feelings. The idea that someone could be removing those characteristics, making men ‘weaker' and ‘more like women', is fear-inducing. This obviously demonstrates the sheer contempt believers in this conspiracy theory must have for women, if society is doomed to crumble should men become feminine.

While proponents of this conspiracy theory will argue their concern is about upsetting the natural order of society, at its heart this belief has more to do with an inability to deal with reality as it is – complex, varied and nuanced. Instead of addressing their own assumptions and biases, believers turn to a fanciful narrative involving shadowy forces, for which they have no factual evidence. As society evolves in ways they may dislike, disapprove of, or even fear, their only explanation is that there is a conspiracy – and one that becomes even more personal, as it posits that someone is coming to steal the believer’s masculinity and identity, and that of everyone else. From that position, anything can be justified: the problems of the world are not because of the believer or their ideas, but because They pushed the believer to it.

In the end, the feminisation conspiracy theory – like many other conspiracy theories – ends up as something to hide the fear of the believer, and to help them to cope with a world that is, finally, starting to leave old prejudices behind.

2019’s ‘The Mandela Effect’ is a stylish, shallow exploration of a rather silly conspiracy theory

Indie science fiction film The Mandela Effect didn’t get much attention on its 2019 release, nor the critical praise or word of mouth that superficially similar films like the superior The Endless (2017) have enjoyed. 

Although there are good things in this flawed film, its lack of popularity is something of a relief, as we don't need any more people believing in outlandish conspiracy theories based on commonly-held false memories.

The so-called Mandela Effect, for the blissfully uninitiated, was coined by “paranormal researcher” Fiona Broome, in specific reference to an apparently widely-held misconception that Nelson Mandela had died while incarcerated in apartheid-era South Africa. Mandela, as most of us will recall, actually died in 2013, after serving a term as the president of South Africa in the 1990s. 

If you are unaffected by this particular misremembering, there’s a very good chance you are party to one of the many others, as this movie reminds us. Have a think for a moment about the following questions: 

  • What is the famous line uttered by Darth Vader in The Empire Strikes Back, regarding Luke’s parentage? 
  • How do you spell the name of the classic Warner Bros. cartoon series featuring Bugs Bunny and Daffy Duck? 
  • How does the Monopoly man accessorise? 

There will be a test later! 

The rational explanation for the original Mandela Effect – Nelson Mandela's supposed death in custody – is actually fairly straightforward. Similar false memories could, given a large enough set of participants (for example, 7 billion people), affect thousands or even millions of us, especially when we are all given the same set of cultural cues and shared media interlocutors, and are prone to the same flaws in our cognitive systems. While Nelson Mandela did not die in prison, another noted anti-apartheid activist, Steve Biko, died in police custody in 1977. People who believed Mandela to have died in prison are confusing two of the most prominent anti-apartheid activists, and I'd further hazard a guess that very few people born after the mid-1970s make that particular mistake.

Are we sure about false memories, though? Absolutely! False memories are demonstrable, common, and can often involve the conflation of two concepts. Some false memories arise spontaneously from the internal working of the brain, while others are induced by suggestion. With particular relevance to the Mandela Effect, memory has been noted in an article by Cara Laney and Elizabeth F Loftus as being: 

susceptible to errors as a result of exposure to post-event information such as leading questions and reports of others

In one study conducted by Professor Loftus and John C Palmer, language was shown to influence memory: 

Two experiments are reported in which subjects viewed films of automobile accidents and then answered questions about events occurring in the films. The question, “About how fast were the cars going when they smashed into each other?” elicited higher estimates of speed than questions which used the verbs collided, bumped, contacted, or hit in place of smashed. On a retest one week later, those subjects who received the verb smashed were more likely to say “yes” to the question, “Did you see any broken glass?”, even though broken glass was not present in the film. These results are consistent with the view that the questions asked subsequent to an event can cause a reconstruction in one’s memory of that event.

This is obviously terrifying for the functioning of our court system, given the widespread belief in the reliability of eyewitness testimony. Psychology professor Stephen L Chew highlights one particularly alarming statistic from the Innocence Project in the USA:

358 people who had been convicted and sentenced to death since 1989 have been exonerated through DNA evidence. Of these, 71% had been convicted through eyewitness misidentification

And if you were just a moment ago wondering whether there’s something else at play when Europeans and Americans conflate two black South African men, you might be onto something, as Chew also points out that: 

Of those false identifications, 41% involved cross-racial misidentifications

The fallibility of memory also casts doubt on at least some recovered memories, as noted by Suzanne Lego in the Archives of Psychiatric Nursing:

False memory occurs when a vulnerable patient with a history of overcompliant or highly suggestible behavior is unwittingly coached by a respected authority figure to create, as if in memory, an experience that never actually occurred.

The results of this can obviously be extremely traumatic for the subject, who then “remembers” appalling events that never actually happened to them. Indeed, false recovered memories may have been responsible for such events as the Satanic Panic of the 1980s: 

There were over 12,000 accusations nationwide of widespread cultic sexual abuses involving satanic ritual, but investigating police were not able to substantiate any allegations of organized cult abuse.

Maybe you think you’re too smart for such a false memory? Let’s come back to those questions from earlier. 

  • Vader says: “No, I am your father” to Luke; endless stand-ups, comedy skits and playground retellings have morphed this into the one-liner we think we remember today: “Luke, I am your father.”
  • The cartoon was always spelled Looney Tunes, not Looney Toons. 
  • The Monopoly Man, aka Rich Uncle Pennybags, has a top hat and a cane, but he doesn't have a monocle. If you remembered a monocle, you're potentially mixing him up with the similarly-styled Mr Peanut.

If asked, I would have said that I thought the Monopoly Man had a monocle, and I’d have wavered on the spelling of the cartoon.

So there is an obvious, well-documented explanation for The Mandela Effect, but an exploration of shared cognitive errors would probably not make for a particularly entertaining film. (In fairness, one of the supporting characters does posit this as an obvious explanation for the protagonist’s experiences, only for it to be batted away in favour of a far more visually striking and intriguing possibility).

A shot from the trailer of the film: a girl with a distorted face.

To summarise the film's premise: The Mandela Effect follows grieving game programmer Brendan (Charlie Hofheimer) and his wife Claire (compellingly portrayed by Aleksa Palladino), as they mourn the sudden death of their young daughter, Sam. Brendan understandably struggles with the loss, and as he goes through Sam's room, he starts to notice that some of the things he used to enjoy doing with her are not as he remembers them, from the spelling of the Berenstain Bears to the (missing) tail of Curious George. Has Brendan simply misremembered? Perhaps parallel universes are intersecting somehow? Or is everything just a simulation, and these apparent changes are glitches in the code?

Throughout the film a variety of well-known scientists and public figures are quoted (often rather out of context) in runtime-padding montages used to help justify the eventual explanation, with the always-excellent Clarke Peters popping up briefly as quantum computing genius Dr Fuchs to drive the plot along. Sadly, his arrival also marks the point at which the realistic portrayal of technology gives way to something rather less plausible.

It all wraps up rather too neatly in just 70 minutes (with ten minutes of credits), and there is a strong sense that just as Brendan feels about Curious George’s tail, something important is missing from the movie. So despite excellent performances conveying grief, frustration and confusion, and decent photography and editing creating a real sense of terror and tension when required, I can only recommend this to the most die-hard indie sci-fi completist.

“Who decides?”: how fair questions can derail meaningful action

Who decides? As an ethicist, I get asked this question at least once a week. Make a case for an ethical principle like “people who promote dangerous conspiracy theories should be banned from social media” and you’ll inevitably get some version of “but who decides?”. Who decides what counts as a dangerous conspiracy theory? Who decides that conspiracy theories are sufficient justification for banning?

“Who decides?” is a reasonable question to consider, but I want to talk about two ways in which it can derail a debate. It's a recurring theme of this column, and of philosophical skepticism more broadly, that good arguments and bad arguments are often difficult to distinguish, and sometimes even good argumentative moves can be used fallaciously. “Who decides?” is a powerful example of that, because it is very much a fair question, but it's also a common last-ditch refuge for unsupportable positions. “Who decides?” is often a hard question with unsatisfying answers, and it's easy to trade on that dissatisfaction when a better objection isn't available.

The problem arises when a person asks “who decides?” because they’ve run out of other objections. In cases where “who decides?” is used as a counterargument, rather than simply trying to understand the decision mechanism, the question tends to shift the discussion away from the key points of disagreement that need to be resolved, and towards an issue that might seem especially challenging but ultimately isn’t particularly illuminating.

Take the example of defunding access to harmful or ineffective forms of “alternative medicine”. Relative to other ethical dilemmas, the question of “who decides?” is not especially salient here. Whether a treatment is effective or harmful is a question best decided by medical experts. The answer to the broader normative question of “who decides that we should restrict people’s freedom to seek alternative treatments?” is the community, likely through their elected representatives who, in theory, attempt to bring about greater wellbeing for their constituents through the advice of experts and thoughtful cost/benefit analysis. None of those answers are especially satisfying, partly because the question itself is not particularly salient in this context, and partly because a further step of “who decides?” remains an open question. Who decides who is a medical expert? Accreditors. Who decides the criteria accreditors use? And so on, ad infinitum. As our good friend Sextus teaches us, any argument can be thrown back infinitely like this. The choice, as with all our other beliefs, is either to adhere to Sextus’ radical skepticism, or to decide you’ve sufficiently guarded against bad inference at some point along the infinite regress.

It can be tricky to get a handle on what is problematic about this question, but I think there are two key ways in which “who decides?” can undercut fruitful discussion. The first way can best be understood with a comparison to “god of the gaps” style arguments used by creationists against naturalist accounts of the universe. The frustrating nature of the god of the gaps argument is best exemplified by the historic debate between Professor Farnsworth and Dr. Banjo in episode nine of season six of Futurama, “A Clockwork Origin”. Dr. Banjo's god of the gaps argument trades on the fact that there is always the potential for an explanatory gap between two states of events, and filling in that gap just creates two more gaps, and so on ad infinitum.

God of the gaps arguments aren't always empirical. One could argue that William Lane Craig's metaethical presuppositionalism is a kind of god of the gaps, as he claims that it's impossible to bridge the gap between our natural world and moral truths. What's more, the early moves of presuppositionalist arguments are often phrased as “who decides what is moral?”, with the inevitable implication that a god is needed to decide, or to provide the grounding for the decision. Whether it's claims about science or morality, the key feature of a god of the gaps argument is that you can simply restate the gap problem on into infinity.

A person stands at three directional arrows drawn with chalk on the floor.

Similarly, it’s easy to generate an infinite regress of “who decides?” such that critics can always claim that you’ve failed to thoroughly address their concern. These feelings of dissatisfaction are frequently amplified by emotional appeals to the looming threat of domination. The harmful result is that, as long as we can’t give a perfectly satisfying account of who decides up front, we feel justified in avoiding implementing any new action or system, lest that system fall into the hands of a tyrant.

These arguments all trade on a similar kind of appeal to ignorance, one that can be addressed given enough time, but in functional discourse can and should be deferred by recognising that further explanation is possible, but not necessary to feel justified in forming beliefs and acting on them. We may never be able to fill in every gap in the history of life, but the fact that every gap we’ve pressed hard on so far has yielded a natural “missing link” gives us good reason to suspect that the remaining gaps can similarly be filled without appeal to a divine creator. Similarly, I may not always be able to give a satisfying answer to “who decides?” at present, but the fact that humans have made some progress developing and acting on our moral knowledge inclines me to think more of that work can be done, even if the path isn’t always clear and the risk of abuse is real.

The second way that “who decides?” can be an argumentative misstep is in cases where someone must inevitably decide, but the question is posed in such a way as to suggest there is a decision-free alternative. I see this problem constantly in discussions about social media moderation. There is a great deal of outcry, some of it understandable, that unaccountable social media corporations have near absolute power in deciding who and what is allowed on social media. Folks across the spectrum find this situation intolerable, but there seems to be a lot of resistance to acknowledging that the power of moderation will have to rest in someone’s hands eventually. If not corporations, then government agencies. If not humans, then algorithms programmed by humans. Even adopting a maximally hands-off approach, which would be a disaster, would still be a decision that someone has to make.

In this context, “who decides?” could be a valuable question if it leads to analysing the costs and benefits of different decision structures and the implementation of the “worst option, besides all the others”. However, when “who decides?” is used as a way to imply that no one should have the power to decide, or that those advocating for any decision structure are in favor of tyranny or are oblivious to the costs of their preferred system, the question undercuts progress.

The answer to “who decides?” is often an unsatisfying “it's complicated”. That's okay, though: right answers aren't always satisfying, and if we learn not to expect them to be, we can avoid the temptation of satisfying oversimplifications like “nobody should decide”.

Are you smarter than an 800-year-old? Our ancestors were more skeptical than you think


For over a century now, historians and archaeologists have been steadily deconstructing the myth of the Dark Ages: the belief that medieval Europe was a time devoid of learning and intellectual inquiry. In truth, the period between the classical and early modern eras – roughly 500 to 1500 CE – saw advancements in mathematics and literacy, ingenious inventions like the windmill and the mechanical clock, and bold experiments in artistic expression.

Despite this, it’s still commonly assumed that people in medieval Europe were deeply superstitious and irrational, especially when compared to their descendants in the 21st century. It’s an understandable assumption: today we can easily cure diseases that were death sentences 800 years ago, when our ancestors were trying to ward off miasmas with herbs and spices. We can instantaneously communicate across distances that would have taken a medieval messenger months – years, even – to cross. Today there are entire fields of knowledge that simply did not exist in the Middle Ages.

But does this really mean we’re smarter or more skeptical than our ancestors?

Don’t worry: I’m not about to dive down the relativist rabbit hole and question whether science and technology have advanced since the Middle Ages. They have, of course, and our lives are much better for it. Rather, I want to ask whether these advancements have left us with a more inquisitive and critical outlook on life.

Take that perennial favourite of medieval mythmaking: the flat Earth. Contrary to popular belief, people didn’t need to wait until 1492 for Christopher Columbus to prove the Earth was round. For one thing, Columbus believed our planet was pear-shaped. For another, every educated European already knew the Earth was ‘but a little round ball,’ as a 15th-century compendium put it, and had done so since the days of St Augustine in the late 4th century. (St Augustine, in turn, was merely citing ancient Greek philosophers, who already knew the Earth was round.) The historian Jeffrey Russell was able to find just two obscure medieval scholars who, on the basis of some very literal interpretations of the Bible, argued that the Earth was flat. And far from being promoted by the Church, both authors were condemned by the clergy for their unorthodox views.

Contrast that state of affairs with today: tens of thousands of people belong to flat Earth communities online, and their various outreach efforts – YouTube videos being a particular favourite – reach millions more. In 2019, surveys in Brazil and the USA suggested that somewhere between seven and a dizzying sixteen percent of adults doubted the sphericity of our planet. Medieval scholars would be shaking their heads in shame and bewilderment.

Astrology provides further surprises. The pseudoscience of star signs is big business in the 21st century, making around $2.2 billion each year. Fuelling this boom is the increasing popularity of astrology among young people. According to The Independent, almost two thirds of 18-24 year-old Americans now believe that star signs and moon phases have an influence on their fate.

A medieval scholar making measurements

Surely this fascination with the zodiac is simply a continuation of long-held beliefs? Yes and no. Astrology is certainly ancient, and many people looked to the night sky for answers throughout the Middle Ages. But it was never as lucrative a belief as it is now. Indeed, it wasn’t even until the 12th century that the form familiar to us today – based on accurate plotting of the movements of the planets – became popular. The historian Peter Dendle suggests that, until this point, astrology had been less commonly practiced in the West than it is today.

This isn’t to say that medieval Europe was populated solely by rational skeptics. Superstitions abounded: many people believed in monsters like sprites and werewolves as well as supernatural beings like ghosts and devils, and protective amulets and charms were commonly worn or hung around the house to protect against these malevolent beings. Ravens and donkeys were omens of bad luck; crossroads were hotspots for demonic activity; eating goose on the last Monday in April could be fatal. It’s easy to laugh.

Before we start feeling too smug, however, we need to remember that a much smaller proportion of people in the Middle Ages received an academic education compared with today. Eight centuries ago, someone living in England could expect to spend a grand total of 0.03 years (that’s eleven days) in school; today that figure is closer to ten years. Despite this 300-fold increase in education, many superstitions and irrational practices remain as popular today as they were in the medieval era, if not more so. A 2019 survey found that a sobering 45 percent of Americans believe in ghosts and demons. Most depressingly, a far greater proportion of educated people today believe that the Earth is flat and that the planets foretell their future than ever did during the Middle Ages.

Once again the medieval literati would be disappointed with their descendants. How can we be sure? Because the educated minority in the Middle Ages was often extremely critical of unsubstantiated beliefs and irrational thinking. Indeed, the fact that we still know about so many old superstitions is thanks to the medieval scholars who recorded them merely to condemn them. An anecdote from the Middle Ages tells of a priest who grew tired of people making the sign of the cross whenever they saw him or his fellow clergymen – a superstition believed to prevent bad luck. One day he was walking down a country road when a woman coming the other way crossed herself. The priest proceeded to push her into a muddy ditch to disprove the efficacy of her practice (and, you might suspect, to vent some spleen). Whether this event actually took place is not known – for the poor woman’s sake I hope not – but it accurately conveys the distaste for superstition among medieval intellectuals.

So are you smarter than an 800 year-old? Maybe you are, maybe you aren’t. The point of this piece isn’t that our medieval ancestors were cleverer than us – neurologically, humans haven’t changed much since the days of Cro-Magnon, 40,000 years ago – but that rationality and skepticism can’t be taken for granted, no matter how impressive our species’ learning and technology becomes. Critical thinking isn’t destined to increase in line with scientific advancement, and what is gained in one generation may be lost in the next.

This is why the continued depiction of the medieval world as the Dark Ages is not only inaccurate but dangerous. By dismissing our ancestors as gormless fools we’re implicitly insisting that we, in the 21st century, are inherently smarter and more skeptical. It’s a complacent attitude that encourages us to overlook the many instances of irrational thinking and dubious practices that continue to flourish today. Skepticism, unfortunately, can never be assumed. It must be constantly fought for and defended, regardless of what century we happen to find ourselves in.

The ‘scientist as lone genius’ is a myth that obscures real stories of scientific discovery

How is a pseudoscience born? There is, for sure, no single answer. Some, like astrology, are based on ancient traditions and ways of thinking; others, like homeopathy, are the brainchildren of self-proclaimed, charismatic “geniuses”. There are also the once-legitimate research programs that were left behind by the facts but still refuse to die, like Neuro-Linguistic Programming (NLP).

There is, however, one source of pseudoscientific ideas and themes that is usually overlooked because it comes wrapped in the best of intentions: the over-eager efforts at science popularisation that, by hyping “sensational” research results, promoting “wacky” hypotheses without putting them in proper context, and simplifying scientific concepts beyond their breaking point, create a public (mis)understanding of science that is fertile ground for pseudoscientific ideation and exploitation.

Perhaps the most visible victim of such a process, nowadays, is quantum physics. Decades of popular writing on the subject have focused on counterintuitive features like the Uncertainty Principle, wave-particle duality, and the so-called “measurement problem”, where some properties of quantum systems only become definite once observed (disingenuous presentations often omit or underplay the fact that, in this context, any inanimate piece of equipment may count as an “observer”). As a result, the public’s broad-but-shallow understanding of quantum physics has given rise to the “quantum consciousness” movement, bringing about a flourishing market for quantum quackery.

For instance, in Brazil you can buy, for 49.90 reals (about £7), a book that will teach you how to use the quantum powers of the mind to rewrite your DNA into a “millionaire's” DNA (we expect to see hordes of Bill Gates clones wandering the streets any time now).

This isn’t, however, a new phenomenon. In his 1995 essay The Turmoil of the Unknown, French Literature scholar Michel Pierssens notes that in 19th century France, there emerged “a ‘popular’ science (and not a popularised science, even though based on the latter), which would alone be able to continue where official science could only stop. The bold, optimistic science of the unknown would stand out against the fearful, skeptical science of the known” (italics ours).


Pierssens was referring to the spiritualist belief in communication with the dead, but such considerations can be easily brought to the present and applied to a great number of subjects, from the search for Atlantis to Ancient Astronauts lore and, of course, all sorts of quantum shenanigans. Pseudosciences tend to build on what the non-specialist knows, or believes to know, about real science. And what non-specialists know is what they remember from school and what science popularisers tell them.

The use of sensationalism and hyperbole in science popularisation efforts has a long tradition, and it is especially dangerous (and prevalent) in issues related to human health.

Another factor that has strongly influenced science popularisation, not always for the best, is the need for storytelling and heroes. Communication and behaviour studies show us that humans respond better to stories than to statistics, so it obviously makes sense to put this valuable tool to use in science communication. However, while it is effective, storytelling can be very misleading if we are not careful, and it can pave the way to presenting progress in a most unscientific way.

Stories like the discovery of penicillin by Alexander Fleming, for instance, can lead people to believe that many scientific discoveries happen by chance. Not only is this wrong, it also allows for a religious or spiritual interpretation, as if great scientists were inspired by a greater force that guided them on the way to their great breakthrough.

This narrative may well be a result of our need to romanticise the past, and our need for solitary heroes and geniuses, but such figures are often more valued than the plain, honest scientific work done by thousands of scientists across the world – the work that generates knowledge, advances technology, and impacts our daily lives.

Take Fleming, for instance. As science historian John Waller tells us in his book Fabulous Science, there was nothing fortuitous about the discovery of penicillin, and had it been left to Fleming alone, it would surely never have turned into the first commercial antibiotic. This version is widely known among scientists, but lay people are usually only familiar with the romanticised tale.

A petri dish with bacterial colonies growing on it.

Years before his “discovery”, Fleming already knew that lysozyme, an enzyme present in our secretions, such as the mucus in our nostrils, could kill bacteria. And he knew this by experimentation, not by accident. We should give him credit as a very competent scientist; unfortunately, the common urge to tell a different, ‘more engaging' story of a chance discovery may inadvertently endorse similar stories that promote bogus science.

Fleming writes that being familiar with lysozyme made it easier for him to spot potential antimicrobial agents, such as the famous mould in the petri dishes that led to the isolation of penicillin. What the story does not tell is that Fleming had trouble reproducing the “chance” experiment. The way it is usually told, we are left with the impression that the penicillin simply killed the bacteria in the petri dish. In fact, its mechanism of action is the disruption of the bacterial cell wall, which stops the bacteria from growing. When Fleming tried to kill established bacterial colonies, he failed. He had to grow the mould first, then sow the bacteria near it. This way, bacteria grown within 3 cm of the mould would die, and the rest, further from the mould, would thrive.

What probably happened (and this likely was by chance) was that when Fleming left his petri dishes in the lab, there was a cold wave which slowed bacterial growth, allowing the mould to grow first – an element he did not include in his publication. For this reason, other bacteriologists were unable to replicate his work.

Then follows the myth that Fleming knew from the start that penicillin was a wonder drug. The truth is that he didn’t know how to produce it in scale, and he didn’t try. He sat on his discovery for 15 years, until Howard Florey and Ernst Chain, from Oxford University, found out about his paper; within just three years, they had achieved purification and mass production of penicillin. Fleming never contributed to this work and contacted the team only after they had published in The Lancet, in 1941.


This habit of twisting history for better and sexier stories comes with a price: it misleads people into thinking that great discoveries are always made by lone geniuses, and that it takes time for society to recognise them. Pseudoscientists take advantage of this to promote quackery and to sell themselves as Galileos: geniuses far ahead of their time.

Science stories don't have to be fairy tales to be interesting. Florey and Chain's work in rediscovering penicillin and working out how to mass-produce it, in answer to a war effort, is fascinating. Their work, with a little help from Fleming's lab work 15 years earlier, helped to win the war. Without them, the world would be a very different place. How awesome is that?

Being true to science and to history guarantees the advancement of the former and the truth of the latter. Both are essential to fight the popularity of pseudoscience.

Rapid Prompting Method: A new form of communicating? Hardly


In the past few years, newspapers in the U.S. featured human interest stories about a “new form of communicating” for individuals with disabilities. The stories claimed that non-speaking individuals with autism developed unexpected literacy skills by typing on a keyboard or other device with support from a communication assistant. These individuals, with a history of little or no functional spoken or written language skills, were, according to these articles, now giving advice on how to cope with the pandemic, taking college prep courses, publishing books, and producing movies.

While it is encouraging to see individuals with autism and developmental delays being recognised for their achievements when they are legitimate, this “new form of communicating,” often referred to as Rapid Prompting Method (RPM), is, in actuality, a variant of Facilitated Communication (FC), a long-discredited technique used on individuals with severe communication difficulties. With FC, an assistant, called a facilitator, holds on to their client's wrist, elbow, shoulder or other body part while the client types on a letter board or other communication device. With RPM, the facilitator holds the letter board or device in the air while the client points a finger in the vicinity of the board, seemingly spelling out letters independently.

The sophisticated, sometimes poetic, written output, however, in both FC and RPM is wholly reliant on facilitator prompting and cuing. Ask the facilitator to stand out of visual and auditory range as their client types alone and the written output deteriorates significantly, often to the point of limited or no intelligibility. In contrast, existing evidence-based Augmentative and Alternative Communication methods and techniques allow individuals to interact with communication devices independently and without the interference of a facilitator.

In 2018, the American Speech-Language-Hearing Association (ASHA), released an update of their 1995 position statement opposing FC. In it, they identified alternate descriptors of the technique currently being used by proponents: assisted typing, Facilitated Communication Training, and Supported Typing. They also released a new position statement opposing Rapid Prompting Method and, again, identified alternate descriptors of the technique: Informative pointing, Letter Boarding, and Spelling to Communicate. There are, most likely, others not on this list. With all these terms meaning the exact same thing, it is no wonder that readers of the feel-good stories published in reputable newspapers may not understand that the individuals are being subjected to FC and RPM, and that it is highly likely (well above chance) that the FC-generated messages are the words of the facilitators and not the individuals being subjected to facilitation.

What bears mentioning is that there is overwhelming evidence that FC is not an evidence-based form of communication, and had these reporters done their due diligence, they would have discovered this for themselves. The latest systematic review, completed in 2018, revealed no new evidence that FC produces independent communication. All preceding systematic reviews, starting in the mid-1990s, revealed exactly the same thing. Most major health, education, and disability advocacy organisations have position statements in place strongly urging their members not to use the technique, citing concerns over facilitator control and potential harms caused by false allegations of abuse. Some, like the International Society for Augmentative and Alternative Communication, consider FC a human rights violation. Other organisations opposing FC include, but are not limited to, the American Psychological Association, the American Academy of Pediatrics, the Association for Behavior Analysis International, the Association for Science in Autism Treatment, the American Association on Intellectual and Developmental Disabilities, and, in the UK, the National Institute for Health and Care Excellence.

A systematic review of RPM in 2019 found no studies that met the evidence-based criteria for the review and no proof that RPM produces communications independent of facilitator control. So, while the efficacy of RPM cannot technically be ruled out as it has been for FC, proponents have yet to produce reliable evidence that messages obtained using RPM are anything other than facilitator controlled. Because RPM shares significant characteristics with FC, namely facilitator influence and prompt dependency, organisations opposing FC are increasingly adding RPM to their opposition statements.

It is baffling, then, why reporters consistently fail to mention (or, if they do mention, fail to emphasise) these facts in their reporting. Except that, despite the overwhelming evidence discrediting FC and casting serious doubt on RPM, otherwise reputable institutions like Syracuse University and the University of Virginia heavily promote their use. Private organisations, like HALO, Growing Kids Therapy Center, and Rapid Prompting Method UK, market RPM and Spelling to Communicate as if they were proven techniques. The University of Virginia’s “Tribe” and the UK’s “Quiet Riot” claim to be advocacy groups and win sympathy with the general public, but are made up of non-speaking individuals subjected to FC or one of its variants. Oberlin College and Whittier College, among others, have allowed students using FC as their primary form of communication to graduate from their institutions (raising the question of who is actually earning the degrees). Reporters rely on institutions such as these to be authorities on reliable, evidence-based communication methods, and may not have the critical thinking skills or the depth of knowledge about individuals with severe communication difficulties to realise that they are being duped.

But the dangers of spreading misinformation about FC and RPM do not stop with feel-good stories about individuals “finding their voice.” These stories add legitimacy to a discredited and harmful practice. Along with the opportunity costs for individuals subjected to facilitation, which prevents them from accessing legitimate forms of communication, innocent people’s lives have been ruined when facilitators have typed out false allegations of abuse against the family members of their clients. Jose Cordero, Thal and Julian Wendrow, Robert and Julia Burns, John Pinnington, and countless others faced jail time and public humiliation as they defended themselves against these wrongful claims. Facilitators Anna Stubblefield and Martina Susanne Schweiger were both convicted of sexual assault after using FC as the sole form of consent. And Gigi Jordan, mother of a child with autism, fed him an overdose of pills because, through FC-generated messages, she had come to believe he had been sexually assaulted and wanted to die.

It is not easy to be a critic of FC, RPM or their variants. People who dare to point out the flaws in these techniques receive the harshest of accusations from proponents: that they are against people with disabilities. Perhaps this is what reporters fear as well. But these are not “new forms of communicating”, and every single controlled study of FC has failed to produce independent communications. Further, these studies have consistently and repeatedly documented facilitator control over the written messages.

Critics and journalists alike need to call out FC in whatever form it takes and urge proponents to produce reliable evidence to back up their claims or stop the practice. Reporting anything else lends credibility to these pseudoscientific practices and does a disservice to people who rely on established, evidence-based methods of communication.

These are the stories that deserve recognition and celebration—those told not by facilitators but by the individuals themselves. Anything less is unacceptable.

Species, Individual, Gender – biology and taxonomy don’t deal in black and white

Physics has fundamental forces at its core. Chemistry has its elements. And in biology we have species. Just as chemists do with elements, once we know the different species we can work out how they interact with each other and how they work. And in so doing we can learn about our natural world. But what actually is a species?

What is a species?

You probably learned the “biological species” definition at school. This definition states that:

species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups (Mayr, 1942)

This is an often-used definition, which is why you learn it in school, but there are problems with it. For one thing, there are species that are parthenogenetic, meaning they don’t breed but instead reproduce asexually, with the egg developing into an embryo without the need for sperm. If you don’t breed you can’t be in a group of interbreeding individuals, so under this definition you cannot be a species. Oops. Bye-bye New Mexico whiptail lizards, bdelloid rotifers and brahminy blind snakes, among many others.

Another problem is that “potentially interbreeding” line. If you have two groups of similar animals separated by, say, a river or a mountain, can you be sure they couldn’t interbreed given the chance? Polar/grizzly bear matings result in fertile offspring, as do American buffalo/domestic cow matings, so what does this mean for them as species? Can we stop worrying about the loss of polar bears because they’re all just grizzlies anyway? (No, in case you’re wondering.)

Even with just that short examination I think it’s fair to say the definition we learned at school isn’t great. And let’s be honest, when we’re trying to identify a species we’re not sitting around waiting for it to make babies. What most of us use in our everyday lives is the morphospecies definition. This definition states:

a species is recognised based on similarity of morphology to other members of that species and on dissimilarity of morphology to members of other species.

You see a bird in the garden. It’s got black feathers and a bright yellow bill. You instantly recognise it as a blackbird. But then you see it hop towards another bird that’s begging it for food. This bird is a similar shape, though slightly smaller, and it’s brown with a speckled breast. What could it be? It’s a juvenile blackbird. Same species, but a different age. Another bird approaches, the same size and shape, also brown like the juvenile, but instead of a speckled breast it’s faintly striped. What on earth could this be? It’s the female. Same species, but three distinct forms (more if you count the chicks).

One of my favourite papers has an extreme example of this – three deep sea fish, so different in appearance that they were classed as different families, were found through genetic analysis to be the male, female, and juvenile of a single species. The lesson from this is that appearance isn’t the precise guide we think it is.

So that’s two commonly used definitions of species, neither of which is great. But it’s ok, there are other definitions. The “legal” definition (at least for animals) is set out in the International Code of Zoological Nomenclature as the taxonomic species, which states:

The fixation of the name-bearing type of a nominal taxon provides the objective standard of reference for the application of the name it bears. 


This is rather legalistic but basically means that there is a specimen, the “type”, against which all other examples are compared. These types are often stored in museums where they can be accessed by researchers. Somewhere in a museum collection there is “The” blackbird, against which all other blackbirds are theoretically compared and judged to be close enough to it to also be called a blackbird. But there are, as you’ve probably guessed, problems with this definition too. You may have even spotted one already – if there’s only one type, what do you do for species where there are different forms? What about the females and juveniles?

Another problem is that when you have a single type specimen, if that specimen is damaged, lost or destroyed then comparing against it becomes difficult or impossible. The type specimen is also often one of the first specimens collected, and as such may not be a particularly good example. This is a common problem with fossils, where specimens are often incomplete and species can be described based on a few bones before other, more complete specimens are found. Another problem, particularly for specimens gathered early in the history of modern science, is that collectors often went for the biggest and most attractive specimens, rather than the most typical. I’ve written elsewhere about the problems this can cause in relation to the Lord Howe Island stick insects.

So that’s three definitions down, and none are ideal. What to do? Well, you do what biologists do and realise that there is actually no one-size-fits-all definition of species; rather, you choose the one that suits your work best. Are you a taxonomist describing a species new to science? If so, you’d better follow the taxonomic species concept if you want your work to be published. Are you out doing a spot of nature-watching? Probably best to use the morphospecies concept. Are you trying to learn more about the evolutionary history of a species? Use either the evolutionary or the phylogenetic species concept. Are you trying to work out how best to conserve plants and animals? You might well be better off abandoning the species concept entirely and working with evolutionarily significant units, which can apply to species, subspecies, races or populations, and basically mean “a group of organisms whose long-term survival we want to ensure”.

So, we can see that there’s no all-encompassing definition for a species. It’s context dependent and complicated (and I haven’t even got into plants, viruses and bacteria!). You really need to know why you want to define a species before you decide how to define it.

But that’s species. Of course they’re a bit complicated, they’re always evolving, always changing. Of course it’s going to be a bit difficult to find the precise boundaries between closely-related ones. But the rest of biology is easier to define, right? 

Let’s try another one.

What is an individual?

You may think this should be easy – we all know what an individual is, surely? After all, we are all individuals. But let’s actually try to define it. Common definitions are basically versions of “a person separate from other people and possessing their own needs or goals, rights and responsibilities”, which is pretty uncontroversial (at least when talking about vertebrates). But let’s break it down. “A person separate from other people” brings us to an immediate sticking point when we consider conjoined twins: two distinct personalities but a single body.

Chimerism seen in a plant – a rose with genes for two distinct colours, red on one half and white on the other (image by Raquel Baranow, CC BY-SA 4.0)

Looking at it from a more biological position, we might define an individual as someone with a unique set of genes. But then we are confronted with genetically identical twins and even triplets. And the rarer case of chimeras, where a person has two sets of DNA in a single body. We could go more metaphysical and say that an individual is someone who possesses a unique consciousness, but what about when you’re unconscious, in a coma for example? Do you stop being an individual then? 

Very interesting, but so what?

From these two examples I hope I’ve shown that just because a concept is one we use a lot, that doesn’t mean it is straightforward. There is no obviously correct definition of a species, or of an individual. There are definitions that are right enough (a bit like Newton’s Laws of Motion, which work great as long as you’re not working at relativistic speeds or at very small scales) but no definition that works all the time under all circumstances. We have what you might call an “operational” understanding of these terms: an understanding that works well enough, often enough, that we rarely consider its limitations. But it’s important to remember that those limitations still exist and are still important.

Having taken the time to look at these basic biological terms, I’d like to look at another: gender. Gender is a topic which has been the subject of extraordinarily intense discussion of late, at least some of which has been based on very faulty understandings of science. A lot of these misunderstandings fall outside my field of expertise – they concern genetics, biochemistry, and psychology – but some of them are precisely in my wheelhouse, as they revolve around how we define biological concepts.

Is gender really a special case?

Some people who are very keen to define what a woman is have popularised the definition “adult human female”. It seems pretty straightforward on the face of it. But to test its usefulness, we can dissect it, just as we can with “species” and “individual” (and any other biological concept).

“Adult”. How do we define that? Biologically, an adult is an organism that has reached sexual maturity and is technically capable of producing offspring. Legally, for humans (at least in the UK since 1970), adults are people aged 18 or older. So “adult” could cover people as young as 8, or only those 18 and over, depending on which definition we are using and who we are applying it to. I think you’ll agree that’s quite vague for something so universal.

We are fortunate that we have no close living relatives, so I don’t have to spend any time on “Human”. If I were writing 50,000 years ago this would be very different.

Finally, female. Biologically, “female” is used to refer to organisms whose gametes are “usually immotile”. These gametes are usually referred to as ova or eggs. Unless you are a fertility doctor, it’s unlikely you will encounter many ova, so we must be using other definitions in everyday life. Another biological definition is that, for humans, men have XY sex chromosomes and women have XX. But again, unless you have reason to analyse someone’s genetic make-up, you’re unlikely to know what combination of sex chromosomes they have (and the XX/XY dichotomy massively oversimplifies the wide range of combinations found in humans).

So, if we’re not examining people’s gametes, and we’re not analysing their genetic composition, how are we telling who is male and who is female? Who is a man and who is a woman? The answer is that we are using what are termed ‘primary and secondary sexual characteristics’. For humans the ones we think of most often are the breasts, vulva and vagina in females, and the penis and testes in males. In most modern societies, these characteristics are rarely visible to other people, except in intimate circumstances. Women may accentuate their breasts using tight-fitting tops and bras, and men may emphasise their penises with tight-fitting trousers or underwear, but in most situations most people at best hint at their presence. So how are we generally fairly good at telling who is male and who is female?

Let’s talk about jizz

In birdwatching there is a term that often raises a chuckle when used around non-birdwatchers: “jizz”. Jizz is “the overall impression or appearance” of a bird. It’s a formal term for all the incredible processing our brain does without us realising, allowing us to recognise something without needing to study it in detail. You may recognise its similarity in practice to the morphospecies concept discussed above. We use “jizz” in many situations without realising: when you see a friend in the distance and can recognise them even though you can’t properly see their face, you’re recognising their jizz – the way they walk, the clothes they wear, the shape of their body, the way they’ve styled their hair. You just know it’s your friend, though if asked to explain how you recognised them you’d probably struggle.

So, when we are identifying men and women, we aren’t looking at their specific sexual characteristics but the gestalt that they produce. The amount and distribution of muscle and fat, the length and distribution of hair, the height, and so on. However, as none of these characteristics are unique to one sex, what we are really looking at is the combination of these characteristics, and from there we unconsciously make an educated guess. Tall, muscular, no breasts, short hair, and a beard? They’re probably a man. Short, thin, long hair, no visible facial hair? They’re likely a woman.

Most of the time this sort of educated guesswork is right. After all, we’ve been doing it all our lives – and our ancestors have been doing it for a really long time, too – and practice makes perfect. But there are times when the jizz is indeterminate. That’s when our innate (or, equally likely, socially-derived) curiosity to know whether someone is male or female kicks in, and we find it a source of great consternation when we can’t immediately tell.

Sex and gender

So how does this relate to trans people? One of the more hurtful accusations levelled at trans people is that they are merely “pretending” to be a different sex. This is not only an upsetting accusation, but it’s also based on a flawed assumption, given that (as I’ve hopefully made clear) it is rare that we actually know someone’s sex. We can assume, and most of the time that assumption will turn out to be correct, but we don’t really know for certain. What we are actually looking at is someone’s gender. Gender refers to the “socially constructed roles, behaviours, expressions and identities”. It is how we present ourselves to the world. And as such it is reliant on understanding and following (or subverting, if you’re in the mood) cultural norms.

It is easy to find people making the claim that trans women are not “real” women, because they do not have a particular set of characteristics ‘definitive’ of being a woman. The problem is that this relies on there being a robust definition of ‘woman’ – which, as we’ve seen, doesn’t exist. For some, the definition involves menstruation, as getting your period is what indicates a transition from a juvenile human female to an adult human female. But not all females have periods. Primary amenorrhea is rare but does happen. And of course, women don’t have to have periods. Secondary amenorrhea stops the menstrual cycle, and women who undergo hysterectomies and menopause also stop having periods. Some hormonal contraceptives prevent periods.

The combined oral contraceptive pill, in its easily recognisable blister pack

If having periods is key to being a woman then what does that say about womanhood when you no longer have periods? Are you less of a woman? I haven’t had a period in over a decade thanks to my hormonal contraceptive implant. Periods, for me, are a vague memory and honestly, most of the time I completely forget they were ever a part of my life. I feel no bond of womanhood because I used to discharge my uterine lining for a few days each month.

What about those other primary and secondary sexual characteristics: breasts, vagina, hairlessness etc? These characteristics are not diagnostic of womanhood – there are many cis women who lament their lack of a cleavage; it is possible to be born without a vagina despite being assigned female at birth; and many women are hairy, some even having pronounced facial hair. Sometimes this is a sign of polycystic ovarian syndrome, but other times it’s just the luck of the genetic draw. I don’t think anyone would argue that those women are not women. Trans women can also have breasts, with just as much a “luck of the draw” as cis women experience in terms of their shape and size.

Looking from the other direction, breasts are not unique to women. Gynaecomastia (“man boobs”) is surprisingly common. Trans men have vaginas if they have not – or not yet – undergone gender reassignment surgery. Hairlessness, even ignoring male pattern baldness, is common in men and has a wide range of causes. Ectodermal dysplasia is a genetic condition that, among other things, prevents hair growth anywhere on the body. It runs in my family. I was embarrassingly old before I realised that hairless chests were often the result of manscaping and not just the way men were.


Put simply, there is no characteristic you can use to define women in a way that excludes trans women without also excluding a lot of cis women in the process, and the same is true for cis and trans men. Similarly, a lot of the characteristics we claim are definitive of womanhood, whether by their presence or their absence, are not as definitive as they first appear. Just as with “species” and “individual”, our everyday understanding of these terms is sufficient in most cases, but if you’re trying to be precise and, dare I say it, scientific, it becomes clear that nature abhors clean divisions. The closer you get to the boundaries the blurrier they become, and the harder it is to decide whether something sits on this side of the line or that.

So, what can we do?

One thing nobody is disputing is that recognising women as a group is important. Women face problems that men do not, and men face problems that women do not. Identifying these problems, identifying their causes, and fixing them is key to making the world a better place.

But we should also bear in mind that women aren’t discriminated against because they have vaginas, or breasts, or even because they have babies. Having babies makes it easier to discriminate against us, but the pay gap still exists for childfree women. It goes back to gender – the “socially constructed roles, behaviours, expressions and identities” that have led women to be less valued than men in society.

Those social constructions may have had biological roots long ago, but that’s no reason to continue perpetuating them unquestioningly. If someone says they are a woman and are seen by society as a woman then they experience the same socially constructed barriers and stigmas that all women experience to varying degrees.

Not all women face all the same barriers. A woman living in a favela in Rio de Janeiro has a very different life to me. A woman who married at 18 and has had 5 kids has a very different life to me. A trans woman who went to an all-boys school has a very different life to me. Our differences are what makes womanhood so rich and diverse. And what binds us is that we all face barriers and stigma as a result of being women. Cis and trans, we are all women.

This article was updated on 22nd March 2021 at 8pm, to correct a typo, where the words “primary and” were missing in one of the references to “primary and secondary sexual characteristics”.