
Are at-home blood tests a medical innovation, or a solution trying to sell consumers a problem?

The more information the better, right? Actually, it depends on what we are talking about. When we talk about health, for instance, this assumption isn’t always true.

Although many of us would like to know everything about our condition, not all information is equally accurate and useful. An investigation by the BMJ revealed that in the UK there are a lot of companies offering blood tests that promise to tell you how many healthy years you might have left, or to measure your cholesterol level, kidney or liver function, thyroid hormones, vitamin deficiencies or sleep problems. All of this comes from just a finger-prick blood sample.

These companies sell kits to use at home: a do-it-yourself service accessible to anyone who can afford it. For less than £200 (the cheapest test can be found for less than £100) you can get an overview of your health condition.

There is a growing market for these products: the blood testing industry as a whole was worth around $80.50 billion in 2021 and is estimated to grow to $128.45 billion by 2028.

In this way the diagnostic process is inverted: instead of going to a GP, having an examination and then carrying out tests to confirm the diagnosis, people access the tests first, on the basis of generic symptoms – without having the tools to interpret the outcomes (usually the results merely feature asterisks next to abnormal numbers). People can also see these tests advertised online or on television, even though medicines can’t be advertised directly to consumers.

Furthermore, as a panel of experts noted in an opinion article for the BMJ, there is a problem with the reliability of these tests: their accuracy is around 95%, which means that for every 100 tests run on perfectly healthy people, around five results will come back abnormal purely by chance. When you test a lot of biomarkers in thousands of people, the number of false positive results soars.
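To put rough numbers on that (a back-of-the-envelope sketch, assuming each of N biomarkers is independent and each is flagged in 5% of healthy people – real biomarkers are correlated, so treat this as purely illustrative):

\[ P(\text{at least one abnormal result}) = 1 - 0.95^{N}, \qquad 1 - 0.95^{20} \approx 0.64 \]

In other words, a perfectly healthy customer buying a 20-biomarker panel has roughly a two-in-three chance of getting at least one alarming asterisk.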

Most importantly, the NHS effectively ends up paying for these tests: when a patient gets an inaccurate result, they take it to their GP, creating a double problem. On the one hand, the doctor didn’t prescribe the tests, and probably won’t be in a position to manage them. On the other, they are likely to order further blood tests to check the results, adding to the load on NHS care.

In 2019, the Royal College of General Practitioners published a position statement about screening, saying that:

many of the private clinics pass back results to the NHS, often via general practice, to be assessed and followed up. Some private companies even recommend that customers routinely discuss their results with their NHS GPs. This can be an inappropriate use of NHS resources and can have a potentially significant negative impact on primary care.

In general, experts note that there is a problem of appropriateness: a person without any symptoms shouldn’t undergo screening unless it is recommended. National guidelines, for instance, offer a prostate cancer risk test to men over 50 who are not experiencing any symptoms, and the screening should be preceded by a discussion with a GP – yet there are plenty of private tests available at any age which don’t need any prescription.

Home tests aren’t new: since 2006, Californian company 23andMe has been promising to identify a customer’s ancestry and genetic predispositions from a saliva sample. In 2008, 23andMe was named Invention of the Year by Time magazine, and in 2017 it received FDA authorisation to tell consumers their risks of developing ten medical conditions, including Parkinson’s disease and late-onset Alzheimer’s disease. Years later, the company also obtained permission – subject to special controls – to offer a test that reports three mutations in the BRCA cancer genes.

The company quickly amassed a huge database, which led to pharmaceutical companies expressing interest in partnerships. In 2018, GSK signed an agreement for personalised drug discovery (it will expire next July).

Less successful is the story of Theranos, a biomedical startup based in California. Launched in 2003, the company claimed to have a new technology that could run blood tests on a single drop of blood. Theranos got into trouble in 2015, when an investigation by The Wall Street Journal found that the so-called technology simply didn’t work: a drop of blood wasn’t enough to detect viruses as the company claimed.

The founder, Elizabeth Holmes, and the former president Sunny Balwani, were charged with fraud. Holmes was just 19 years old when she founded the company. In addition to lying to investors, the two put the lives of thousands of patients at risk by providing them with inaccurate and incorrect results. Holmes said her dream was to “change health care as we know it”, but she failed.

The jury delivered its verdict at the beginning of January 2022: she was found guilty, and was later sentenced to over 11 years’ imprisonment.

In their article for the BMJ, the expert committee concluded:

People have a right to spend their money how they wish, but regulators should protect the public from unfair practices. The NHS needs to explain robustly the criteria for high quality screening and testing and to explain to consumers when they should be sceptical and what they should question.

Who knows whether the health system will have the strength to do so.

Rumford Place: the Liverpool monument still celebrating slavery and the US Confederacy


What if I told you there was a monument to slavery and white supremacy in the heart of an English city? That’s just what you’ll find if you visit Rumford Place, an unassuming street off Liverpool’s Old Hall Street. Each building is named after a place or person from the former Confederate States of America, and there are a couple of plaques commemorating fallen soldiers and a warship.

The site is linked to Liverpool’s long and shameful association with the transatlantic slave trade. From around 1700, cargo ships laden with trade goods would leave Liverpool’s docks, bound for West Africa. There they would trade the goods for enslaved people, who would endure terrible conditions as they were taken to the New World. After selling their “goods” in lands such as Barbados, Jamaica or the thirteen colonies that would go on to become the USA, the traders would load up with stock to sell back in Europe. Sugar, cotton and tobacco all commanded high prices. Each triangular journey was hugely profitable, driving the growth of the port of Liverpool in the 18th century.

In North America, enslaved people were forced to work on farms and plantations. Following the American War of Independence, slavery was picked apart little by little in the North, whereas in the South it continued as a key part of the agricultural economy. Cotton proved particularly lucrative, with vast amounts of it exported to North West England, where it would be received at the port of Liverpool before being processed in the cotton mills of Lancashire.

The issue of slavery drove a wedge between the free states in the North and the slave states in the South. The election of Abraham Lincoln in 1860, who wasn’t even on the ballot in ten of the slave states, was the final straw for many Southerners. Throughout early 1861, Southern states announced that they were seceding from the Union, and soldiers loyal to their states began to seize forts from federal troops. In February, seven of the Southern states formed the Confederate States of America and inaugurated Jefferson Davis as their president.

From the start, the Confederacy made their intentions on slavery clear. Their constitution was largely a copy and paste job of the US one, with several key differences. For example, Article I Section 9(4) states:

No bill of attainder, ex post facto law, or law denying or impairing the right of property in negro slaves shall be passed.

If that wasn’t enough to convince people how dear the institution of slavery was to the Confederates, consider the words of Confederate Vice President Alexander H Stephens on March 21st 1861:

Our new government[‘s]…foundations are laid, its cornerstone rests upon the great truth, that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition.

As well as enshrining slavery, the Confederacy also adopted its first flag, known as the Stars and Bars.

The Stars and Bars flag - three stripes (red, white, red) with a blue square in the top left corner with 7 white stars arranged in a circle.

With the Confederate attack on Fort Sumter, Charleston on April 12th 1861, the American Civil War was underway. Confederate armies fought under their own flags, the most popular being the battle flag of the Army of Northern Virginia.

a red flag with a blue X over it, 13 white stars running down each part of the X

Once the Confederates realised that their Stars and Bars was too similar to the Union Stars and Stripes, they looked for a replacement. They took the popular battle flag and placed it in the canton of a white field to create the Stainless Banner. Why white? To represent the superiority of the white race. In the words of Savannah Daily Morning News editor William Tappan Thompson:

As a people, we are fighting to maintain the heaven ordained supremacy of the white man over the inferior or colored race: a white flag would thus be emblematical of our cause.

The stainless banner: a white flag with a red square in the top left corner. The red square has a blue X over the top with 13 white stars arranged over it.

However, the Stainless Banner proved problematic: soldiers were marching into battle with a flag that was mostly white, giving the impression that they were surrendering. To fix this, the Confederates took the Stainless Banner and shoved a red stripe on the end, creating the Blood Stained Banner.

The blood stained banner: the stainless banner with a red stripe running down the right hand side.

Liverpool and the Confederacy

But what was Liverpool’s role in all this? It’s easy to think of the American Civil War as something that happened “over there” but it had huge ramifications for British industry. The Union navy blockaded Confederate ports, meaning that the cotton that the economy of the North West of England relied on couldn’t get through. The cotton factories ceased to function, causing immense hardship in what was known as the “Cotton Famine”. Although Britain was officially neutral in the conflict, the Confederacy had a lot of sympathy in Liverpool. In 1864 and with the Confederacy struggling in the war, Liverpool held the “Southern Bazaar” in Saint George’s Hall, raising over £20,000 for the cause.

Once the war was over, the historical revisionism started almost immediately. The “Lost Cause” myth was perpetuated, claiming that the South’s cause was noble and that the war wasn’t about slavery. Monuments to confederates sprang up, and over time films and books emerged that played down the evils of slavery and romanticised the Southern lifestyle, most notably 1939’s Gone With The Wind. 1979 saw the release of the TV series The Dukes of Hazzard. The show featured a car called the General Lee, which sported a confederate battle flag on its roof.

All this revisionism has had an effect on the American perception of the war. A 2011 survey carried out by the Pew Research Center asked Americans what the primary cause of the civil war was. 48% said it was mostly about states’ rights, whereas only 38% gave the correct answer, saying it was mostly about slavery.

Although Confederate symbols such as the battle flag can be seen at far-right rallies in the USA, they are seldom seen elsewhere. In Italy, football fans in the south of the country sometimes fly the battle flag, simply because they are in “the South” and have a rivalry with “the North”. Yet a substantial number of plaques, and even buildings, dedicated to the Confederacy can be found at Liverpool’s Rumford Place.

The Rumford Place plaques

Rumford Place isn’t exactly on the well-beaten tourist trail, but it’s a stone’s throw from Moorfields train station and easy to find when you know where to look. The first thing you are greeted by is a sign featuring three portraits.

The sign with three portraits for Rumford Place as described in the main text.

Source: https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Sign_at_Rumford_Place.jpg/1184px-Sign_at_Rumford_Place.jpg

The sign features the flag of South Carolina on the left, the first version of the Stars and Bars on the right, and three portraits. The president is Jefferson Davis, the first and only president of the Confederacy. The agent is James Dunwoody Bulloch, the man the Confederacy sent to Liverpool to procure warships for the Confederate navy. The master is Raphael Semmes, captain of the CSS Alabama, a Confederate commerce raider.

The street also features a couple of memorial plaques, one dedicated to the memory of all who fought on the CSS Shenandoah, another commerce raider.

The plaque in commemoration of the Shenandoah as described in the main text

Not only is it strange to describe men who largely attacked unarmed whaling ships as “brave”; it’s also worth noting the flags on this plaque. Alongside the British Union Flag and the American Stars and Stripes, it features the Confederate Stainless Banner – the one that’s mostly white to represent the superiority of the white race. One thing it fails to mention is that the Shenandoah saw out the last action of the war in Liverpool, on November 6th 1865. Unaware that the war was over, its crew had continued to attack merchant ships; realising that they were now little more than pirates, they decided it was best to stow their guns and surrender.

Each house in Rumford Place is named after something to do with the Confederacy, but the most interesting one is Enrica House.

The Enrica House plaque as described in the main text

Note that it doesn’t feature a Confederate flag, but the flag of the British merchant navy. Over the Mersey on the Wirral, a ship called the Enrica was built at Birkenhead. It sailed under the flag of the British merchant navy to the Azores, where it was refitted as a warship, given Confederate flags, and renamed the CSS Alabama. The plaque therefore celebrates a violation of the laws of neutrality that got the Confederates their hands on another commerce raider.

You may be thinking that these plaques have been around for some time. However, one of the plaques states that it was a gift from the great granddaughter of James Dunwoody Bulloch and unveiled in 2010. The plaque with the Stainless Banner was given for the 150th anniversary of the surrender of the Shenandoah in 2015.

So what can be done? Here is my proposal: a few streets away from Rumford Place is the International Slavery Museum, Liverpool’s attempt to come to terms with its unpleasant past. It’s there for education, so why not move all the plaques from Rumford Place there to create a new exhibit? Then we can rename all the buildings in Rumford Place to honour the real heroes of the American Civil War, people like Harriet Tubman, Robert Smalls and Frederick Douglass, to name but a few.

Monuments to slavery and white supremacy should have no place in our cities, and it’s time we did something about them.

The “purebloods”: how the unvaccinated came to see themselves as superior

As I write this, I realise that my existence is impossible for a section of the world; I have taken all four shots of the vaccine, I am alive and healthy, and so far, no massive blood clots have been found anywhere inside me.

When Covid started, the antivax community found excuses to avoid taking any vaccine: they said that it was experimental, that it was part of the Chinese bio-weapon plan, that it would make you magnetic, that it would be used to track your whereabouts, and so on. Then everything became part of a mega conspiracy, as “evidenced” in the antivax propaganda films Plandemic 1 and 2, which alleged that the head of the National Institute of Allergy and Infectious Diseases, Dr Anthony Fauci, created Covid and developed a vaccine in order to make money, or possibly to kill everybody and cleanse humanity as part of an Illuminati plot.

I don’t want to delve into all the vaccine conspiracies here – other folks have already done that. Snopes, for example, has a pretty good catalogue debunking fake vaccine news. Instead, I want to deal with the aftermath of the vaccines, once they were out: people got their shots, pandemic restrictions eased, and the folks who were able to began moving on with their lives. This is perhaps not the appropriate course of action in a pandemic – just because you are immunised, it doesn’t mean that the whole crisis is immediately over – but it is what happened.

While the vaccinated went back to their lives, what of the antivaxxers? They needed a new way to distinguish themselves from the rest of society – something more positive than the inherently-negative term ‘antivax’. Thus, they embraced the term “Pureblood”.

A cartoon of a family - mother, father and two children - hurrying down a pathway, past the “covid vaccinated”, who are lying on the floor with syringes around them. Shining on the family (“purebloods”) is a ray of light with the word “health”. The father is saying “we tried to warn you but you wouldn’t listen”.

Among antivax circles and conspiracy groups, reports circulated claiming that the “pure” ones who refused the vaccine were all remaining healthy, while those who took the vaccine died. What proof did they have that vaccinated people were dying in great numbers? Viral videos, which showed people collapsing, or dying suddenly. How, asked the “Purebloods”, could such healthy young folk die like that? It must be because of the vaccines.

Except the videos, of course, don’t actually show what is claimed. They don’t establish whether the subject died because of the vaccine, or because of some other condition they had – or whether they even died at all. Many of the case studies of so-called “Vaccine Victims” involve people who went on to make a full recovery, or who (as in the case of nurse Tiffany Dover, from one popular viral video) were never seriously ill in the first place.

Plus, the reality is that, sadly, death is a certainty in life: even if the chances of someone dying as a result of an intervention are extremely low, once you multiply that by eight billion humans, it’s always going to be possible to cherry-pick rare examples to build your antivax, Pureblood conspiracy. Without the context to tell us what actually happened to the people who are the subject of all these claims and videos, it’s impossible for us to know whether they really were injured by the vaccine. And, with over 13 billion doses of the vaccine given so far, I doubt that it really is the cause of widespread injuries and deaths.
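To see how inexhaustible the supply of cherry-pickable material is, try a back-of-the-envelope calculation with a deliberately invented rate: suppose some dramatic event – a faint, a collapse, a sudden death – strikes one person in a million in any given week, for reasons entirely unrelated to vaccines. Across the world’s population, that gives:

\[ 8 \times 10^{9} \times 10^{-6} = 8{,}000 \text{ events per week} \]

Eight thousand genuine, vaccine-unrelated incidents every week is a bottomless well of viral footage, even before anyone starts misrepresenting recoveries as deaths.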

Still, the Pureblood myth is fascinating, built as it is on a combination of extraordinary ideas: the notion that the human body is a temple, and that a vaccine will not only enter but will pierce and penetrate the body. How very Freudian. Plus, according to various claims made by the Purebloods, as it enters and corrupts, it changes your DNA, or turns you into a giant magnet, or (as some extreme claims would have it) even makes you a transmitter for 4G or 5G… but, crucially, they say, it doesn’t actually stop Covid. So the only sensible solution is simply not to take it – and in doing so become, according to them, the possessor of the most prized commodity on the planet.

Two syringes apparently filled with blood. One is labelled "not vaccinated" and is red. The other is labelled "vaccinated" and is black. Next to the black syringe is the label "black and thick, depleted of hemoglobin"

According to some who believe in the Pureblood myth, unvaccinated blood will become incredibly valuable, because it would be full of oxygen and haemoglobin, while the blood of the vaccinated would be sick, clumpy, and full of clots.

The sad thing is, believing this conspiracy theory can have serious consequences, as in the case of a six-month-old baby in New Zealand whose parents initially refused to consent to life-saving heart surgery unless the doctors could guarantee that any blood transfusions came from unvaccinated donors. Thankfully, the surgery eventually went ahead, after intervention from the courts.

What happens when the predicted tsunami of deaths among vaccinated people fails to materialise? Believers in the Pureblood notion will simply double down – in fact, we have already seen the narrative begin to shift, with claims that the “deep state and big pharma” are hiding all news of the billions who have died.

And how to explain that the excess deaths were more pronounced among unvaccinated people? The narrative became that vaccinated people were shedding proteins that made unvaccinated people sick. This story could then be used to explain why people were still dying of Covid, as well as reinforcing the persecution complex among antivaxxers.

What can we take from this pureblood narrative? That the persecution complex of the antivax community has only deepened, and they are now able to see themselves as special for refusing to accept a vaccine, yet they believe they are dying because of the careless decisions of others – pure projection. At the same time, we see the familiar old narratives of the body being corrupted by an outsider, a truly ancient fear.

Critical thinking is trending, so why doesn’t it seem like it’s helping?

At the time this article was written, #criticalthinking was used 100 times on Twitter in the previous 24 hours. It had the potential to reach almost half a million users. So if the concept of critical thinking is so popular, why aren’t we making any progress toward general consensus on controversial subjects?

Part of the answer lies in these two tweets:

Anonymous Twitter user, replying to @seanhannity

If @realDonaldTrump doesn't get back on #Twitter soon, he'll lost the ability to bring Americans from all walks and political persuasions together. #CriticalThinking

Nov 23, 2022
Another anonymous Twitter user

The MAGA crowd has continued to flourish because they don't take the time to educate themselves. Ohhh all the articles I have posted that go unopened. (facepalm emoji). I think they're capable of critical thinking, they just choose to stay ignorant which makes them even bigger dumbasses. #MAGA

Jan 3, 2022

Clearly, both tweets suggest critical thinking is the solution to the problem, but from opposite sides of the argument. How is it that we all claim to be implementing critical thinking, yet we all come to different conclusions?

The answer is that our cognitive biases interfere with our ability to be truly critical in our thinking. This is not a new concept. Without looking too hard, we can find examples of it in modern literature, recent psychological publications and neurological research. So, what do we do? Throw up our hands and accept that we are destined to disagree? Allow every debate to devolve into a fist fight? Swear off holidays with the family?

That would be the easy solution, but if our goal is to attempt to achieve some sort of consensus, then I suggest that we adopt a more challenging strategy. First, we alter our approach to changing people’s minds. That is much easier than the second aspect of the strategy, which is to look honestly at our own perspectives.

When it comes to changing people’s minds, facts are not as helpful as you might think. More important than the content of what you say is how you say it. Megan Phelps-Roper, talking about her time with the hate group Westboro Baptist Church, describes their confrontational approach as based on the idea that if they present their truth, God will soften people’s hearts. To them, it does not matter how offensive or insulting they may be; once the idea is presented, the rest is up to God. I feel that many of us approach conversations the same way: my job is to lay the facts at my opponent’s feet, and then they will see the error of their ways and agree that my perspective is the correct one. Of course, that rarely happens.

When we use aggressive language, when it is obvious that we are not listening to the other person, and when we are dismissive or don’t address their ideas, the other person responds by shutting down or adopting a defensive stance. Either way, they don’t hear our perspective, no matter how well-crafted our argument or how accurate our facts may be.

But what if they are wrong and I am right? It doesn’t matter. No matter how wrong they may be, their mind will not change with a simple presentation of facts. Instead of treating the conversation like a battle to be won, treat it like a conversation. In a conversation we share ideas and there is a back-and-forth. Listen – actually listen – to their perspective. Find points that you can agree with (no matter how difficult it may be). Rephrase and repeat their assertions. Be gracious and polite, as if you are concerned for the other person’s feelings.

Don’t try to change their mind; instead, make it your goal that they walk away with some piece of information they did not previously have, so that they can consider it at their own pace and convenience. Don’t interrupt them; allow them to feel that they are heard. Don’t belittle or demean either them or their talking points. Don’t make the conversation feel like an interrogation.

A meme from The Simpsons - Principal Skinner says “Am I stubborn and close-minded?” then “No, it must be everyone that disagrees with me”.

The second point of attempting to achieve some consensus is to look at our own beliefs. This is significantly more difficult than changing someone else’s mind, because the easiest person to fool is ourselves. The first step is to honestly consider why we hold the beliefs that we do. Is it because we had all the various perspectives laid out before us in an orderly and systematic fashion, and we chose the one that contained the most consistent and rational arguments? Or is the reason we hold a belief something more trivial? Do we hold the belief because it was the first one that was presented to us? Because it came along at a time when we were receptive to new ideas? Because someone we admire holds the same belief? Rarely is it the case that we held one belief, and someone debated us out of our position and into theirs.

Being wrong is emotionally debilitating, and so we are hardwired to avoid that sensation, especially when there is someone to witness our acknowledgement of an erroneous belief. Changes of mind typically occur when our attention is elsewhere. One moment we are adherents of one belief, and the next moment we notice that our opinion has shifted to a different point of view. When that happens, we often congratulate ourselves on our open-mindedness and willingness to adopt new opinions, but in reality the shift happened beyond our conscious mind, and so it could be argued that we deserve none of the credit.

Consider one of the perspectives you hold on a topic that is contentious and not yet settled. How well do you actually understand the issue? Why do you hold that position? What would it take to change your mind? The last question is particularly revealing of how tightly we hold onto our beliefs.

Take, for example, the origins of COVID-19, and ask those three previous questions of yourself. Would it take an admission of guilt from a high-ranking health official, or an official acknowledgement of involvement from the Chinese government? Would you be willing to adjust your opinion if there were significant information that contradicted the official account? Perform the same exercise with the question of fraud in the 2020 US Presidential election. Consider another issue that is particularly contentious in your life and, as I said previously, pay particular attention to the third question: what would it take to change your mind?

If an acquaintance casually mentions an opinion that contradicts your own, do you immediately dismiss it, or do you ask them to elaborate so that you can consider the validity of the opinion? When a report comes to light that questions the integrity of your trusted source, do you immediately dismiss it or look for confirmation?

The human species is wired to create an “us” and a “them.” This strategy has ensured our survival, yet it is also the cause of nearly all of our conflicts. In times of peace, we are unable to turn off the instinct towards tribalism, and so we must develop internal mechanisms that will override that instinct. Without the overrides, everyone who agrees with us is our friend, and everyone else is our enemy, and we will never achieve any sort of consensus.

In Turkey, conspiracy theories about the Peace Treaty of Lausanne run riot

The year 2023 began with a series of jokes in Turkey about people eagerly awaiting the start of the new year to begin digging in their gardens, hoping to uncover valuable resources, including gummy bears and rivers of crude oil. The running theme of these jokes is to poke fun at a conspiracy theory surrounding the Lausanne Peace Treaty, considered the foundational document of modern Turkey.

An internet meme circulating on social media as the secret appendix to the Lausanne Treaty. “Turkey cannot do anything until 2023.”

The plot asserts that there is a secret 21-article appendix, signed in the cellar of the Lausanne Palace Hotel, and that the treaty will expire on its 100th anniversary. The further claims fall into two categories. First, some believe that with the expiration of the alleged secret appendix, the current government of Turkey will be able to access valuable boron minerals and oil reserves and collect tolls from the straits. The second and less prevalent belief is that the treaty’s expiration will cause Turkey to lose all of its accomplishments overnight.

These theories, while nonsensical to critical thinkers, are quite prevalent. A 2018 survey by Konda Research found that 48% of the sample population agreed with the statement, “The Treaty of Lausanne will end in 2023”. The belief in this conspiracy theory doesn’t appear to be limited to any particular ideology. The results are likely due to a broader narrative unfolding over 150 years of modern Turkish history.

Survey results showing that almost 50% of people in Turkey believe the Treaty of Lausanne will end in 2023. 

Similar levels of belief are seen across demographics, including those with below-high-school or high-school education (slightly lower, but still 43%, among those with university education) and those of religious or traditional conservative politics. The lowest level of belief (40%) is found among those with "modern" political views.
Konda Barometresi – Populist Behavior, Negative Identification and Conspiracism, November 2018

Duel of Treaties: From Sèvres to Lausanne

Before delving further into this topic, it is helpful to review the history of the interwar period to understand the context better. In the aftermath of World War I, the Allied Powers and the Ottoman Empire signed the Treaty of Sèvres in 1920, intended to implement the partitioning of the empire’s territory that had been decided through secret agreements among western nations. Although the Treaty of Sèvres was never ratified, some historians consider it a contributor to the “Sèvres Syndrome“, a historical trauma for the Turkish nation. The negotiated map depicting the slicing of the nation by the “enemy” has been etched into Turkish people’s memories through history lessons taught in primary schools (Tziarras, 2022).

A typical “Sèvres Map” depicting the country sliced into influence territories (including a Greek Zone, Italian Zone, British Zone, Armenian Zone, French Zone, International Zone of the Straits and Remaining Turkish Territory). Source: TRT World, public news broadcaster.

As discussions around Sèvres continued, a spark of resistance emerged away from the capital. The Turkish National Movement, led by Mustafa Kemal (Atatürk), separated from the Ottoman Empire and established a new parliament in Ankara. The movement took as its foundation the National Pact (Misak-ı Milli), which rejected the Treaty of Sèvres and defined the borders of Turkey as they stood at the point of the Ottoman surrender. The movement eventually waged a War of Independence against the occupying forces, which ended with a Turkish victory in the autumn of 1922. This conflict set the stage for negotiations that ultimately led to the signing of the Treaty of Lausanne in July 1923, which formally recognised Turkey’s sovereignty and set its borders.

The original treaty, on exhibition in France in 2018. Source

The negotiation process was a lengthy and challenging ordeal that required compromises for the founding leaders of Turkey. Mustafa Kemal Atatürk, the first president of Turkey, appointed his right-hand man and Chief of General Staff, Ismet Pasha (İnönü), as the head of the Turkish delegation.

One of the major issues was the city of Mosul, which was included in the National Pact; the Turkish delegation tried, and failed, to convince Lord Curzon, the British Foreign Secretary, that Mosul should be part of Turkey. Turkey also did not obtain complete control of the Bosphorus and Dardanelles, and had to give up additional territories claimed in the National Pact.

Some of these issues were to be resolved in the 1930s, but the Lausanne delegation faced harsh criticism from Mustafa Kemal’s political opponents, known as the Second Group, during the conference. The arguments were so intense that a prominent member of the Second Group, Ali Şükrü, was assassinated by a former commander of Mustafa Kemal’s special Bodyguard Regiment. The opposition group was eliminated before the treaty was signed.

The cartoon of the Turkish delegation at Lausanne, by Alois Derso and Emery Kelèn. Shared by The Lausanne Project under CC BY-NC-ND license.

Pro-Lausanne vs Counter-Lausanne

As a significant symbol in the history of modern Turkey, the Lausanne Treaty represents a moment of victory after two centuries of decline, and an opportunity to start anew after the Ottoman Empire’s darkest times. The Empire had surrendered its resources and independence to its enemies, yet Turkey was the only defeated country of World War I to avoid the Allies’ harsh terms. The treaty marked a turning point towards a new future for independent Turkey.

Before the emergence of conspiracy theories, a contrarian narrative started in right-wing circles based on the “Lausanne as a victory or defeat” discourse. This counter-Lausanne sentiment was represented mainly by ideologies such as Islamism and Neo-Ottomanism, and heavily influenced by the opposition of the Second Group to the withdrawal from the National Pact. However, there were other factors: the nostalgia and aspiration for the Ottoman Empire’s former glory and magnificence, the abolition of the sultanate and caliphate, and the strict (French-style) secularism implemented against the Muslim lifestyle and Islamic traditions have all contributed to the counter-narrative (Tziarras, 2022).

Jewish plot

At this point, those familiar with conspiracy theories may wonder where the Jewish plot comes into play. As Aaron Rabinowitz has often pointed out, there seems to be an “inverse Godwin’s Law” in which conspiracy theories tend to involve an antisemitic angle if they persist long enough.

Enter the Jewish plot into the conspiracy. One member of the Turkish delegation present in Lausanne was Chaim (Haim) Nahum, the Grand Rabbi of the Ottoman Empire. Nahum was known for his close relations with the British and the French. The Ankara government included him in the delegation to leverage his connections, and to signal their intention to reconcile with non-Muslim communities. However, this pragmatic decision has since been subject to conspiratorial interpretations. For example, another member of the delegation, Rıza Nur, wrote in his memoirs about a strange encounter with Chaim Nahum during the negotiations. He implied that Nahum, whom he referred to as a “seasoned Jew”, approached İsmet Pasha with his “typical Jewish pushiness”. Nur claimed that he saved İsmet Pasha from the Rabbi’s influence, but this incident added to the conspiratorial interpretations of Nahum’s presence at Lausanne (Gürpınar, 2020).

The rumours of a Jewish conspiracy gained traction in the counter-Lausanne narrative with the emergence of a series of articles titled “Expose” and “Ismet Pasha and the inside story of Lausanne” in 1949-50, written by an author using the pseudonym “Detective X One” in the Büyük Doğu (Great East) magazine. The true identity of “Detective X One” was Necip Fazıl Kısakürek (NFK), an infamous Islamist ideologue who used a pen name to shield himself from political persecution at the time.

The second series of articles written by Necip Fazıl Kısakürek, published by Büyük Doğu (Great East) in 1950, is the most comprehensive portrayal of the conspiracy. Source

NFK was already deeply entrenched in antisemitic conspiracy theories, having translated “The Protocols of the Elders of Zion”, a notorious antisemitic text. He was also a prominent supporter of replacing the secular nationalist republic with an Islamist one. Therefore, he framed the Nahum story following his ideology. According to his perspective, Nahum convinced Lord Curzon to agree to the Turkish delegation’s demands. In return, the Republic of Turkey would abolish the caliphate and distance itself from Islam.

According to journalist Yıldıray Oğur, another potential source of the conspiracy theory may have been İbrahim Arvas, nephew of Abdulhakim Arvasi – the 33rd sheikh of the Naqshbandi order and also NFK’s teacher in the order. Arvas was an MP during the Lausanne negotiations. In his memoir, he listed some of the secret decisions supposedly made during the negotiations, including abolishing Islam and establishing Christianity in Turkey. NFK knew him very well, but the memoir was written after NFK’s articles, so it is fair to ask which was the cart and which the horse.

AKP and Erdoğan Rhetoric

While the lesser-known Islamist books and magazines were writing about all these conspiracies and ideas, Islamism began to rise in Turkish politics. The first wave was Erbakan’s Milli Görüş Movement, which joined a right-wing coalition in the 1990s. However, with strong opposition from the establishment, they were removed through a “postmodern coup“, which became a significant milestone for the Islamist movement and fuelled their desire for revenge.

The next phase saw the emergence of Erdoğan’s AKP (Adalet ve Kalkınma Partisi, Justice and Development Party), which appeared more compatible with the secular system. The leaders behind this movement were from the reformist wing of their predecessor party. The AKP did not adopt a Euro-sceptic stance and, from the outset, supported a liberal market economy while rejecting fundamentalism. After the financial crash of 2001, AKP rose to power in the 2002 elections. Over time, AKP’s initially liberal and “mildly Islamist” politics evolved into a personality cult centred around Erdoğan, with an increasingly anti-Western and nationalist discourse coupled with growing authoritarianism.

Modern Islamists in Turkey, including Erdoğan, have long viewed NFK as an ideological icon, so one might expect the Islamist Erdoğan to take a hard contrarian stance against the Treaty of Lausanne. However, the reality is more nuanced. Although his government frequently adopted a counter-narrative against the treaty, as a populist leader he also acknowledges its significance as a nationalist element in his discourse. The AKP has never officially endorsed the Lausanne conspiracy; in April 2022, the Presidential Communication Center (CİMER) responded to a direct question by stating, “There are no secret articles in the Treaty of Lausanne, and there is no article that prevents us from mining.”

Erdoğan adopted a more practical strategy, applauding the treaty’s historical achievements on its anniversaries but promoting the counter-Lausanne narrative on other occasions. His revisionist tactics serve a dual purpose: for domestic politics, he can seek revenge against the old regime; for international politics, he can leverage the anti-Lausanne narrative to advance his Neo-Ottoman agenda.

In several instances, conspiracy theories were directly promoted by close circles of the AKP. For example, Yeni Şafak, a daily newspaper known for its hard-line support of the Erdoğan government and infamous for its fabricated “milk port” interview with Noam Chomsky in the past, claimed that Erdoğan personally instructed the Foreign Ministry to publish the secret articles of the Treaty of Lausanne. Many AKP supporters linked their hopes for a new and improved Turkey with the treaty’s expiration, in parallel with AKP’s Vision 2023 plan and the “New Turkey” sentiment. Conspiracy-minded secularists were not comfortable with the idea of the Treaty’s expiration and worried about Erdoğan’s possible secret intentions to replace Ataturk’s legacy.

With social media, the conspiracy theory was transformed from its original form. Rather than focusing on the “Lausanne as a victory or defeat” narrative, the theory began incorporating urban legends, such as foreign powers preventing Turkey from extracting its boron or oil reserves. These urban legends may have diluted the conspiracy theories that once accused the founding leaders; one speculation is that the rise of ultra-nationalism on the left and within Islamist circles contributed to this change.

Conclusion

We aimed to trace the origins of the conspiracy theories surrounding the Treaty of Lausanne and uncover the political agenda behind them. These theories exemplify a tendency to interpret historical events imperfectly and with motivated reasoning rather than critically assessing historical sources. For some, an alternative explanation was necessary to sustain the anti-Lausanne narrative. For others, it was to conform to their ideological stance or predisposition towards victimhood. Eventually, everyone fabricated their own stories and provided a plausible rationale suited to themselves. As Rob Brotherton pointed out, “Conspiracism is just one potential product of a much broader worldview” (Brotherton, 2015).

While these narratives may seem amusing or irrational, they can have real-world implications and shape public opinion. Understanding the origins and evolution of these conspiracy theories is crucial to uncover the underlying social and political forces driving their popularity. This knowledge can offer insights to combat the spread of misinformation and promote a more informed and nuanced discourse.

Further reading

  • Tziarras, Zenonas (2022). Turkish Foreign Policy. SpringerBriefs in International Relations
  • Gürpınar, Doğan (2020). Conspiracy Theories in Turkey. Taylor and Francis
  • Brotherton, Rob (2015). Suspicious Minds: Why We Believe Conspiracy Theories. New York, NY: Bloomsbury Sigma

The myth of the well-filled slate: we shouldn’t discount the influence of society on our lives


Twenty years ago, Steven Pinker published the bestseller “The Blank Slate” (2002) in which he argued that important sectors of the social sciences, psychology and political philosophy were making the mistake of insisting on a false and outdated model of human nature. According to this blank slate model, each human individual would be like a clean blackboard at birth, where the chalk of education, society, and culture could write anything — an infinitely malleable mass, whose only significant genetic inheritance would be that of belonging to the Homo sapiens species.

Perhaps the clearest articulation of the blank slate model came from the American psychologist John B. Watson (1878-1958) in 1924. Watson, the founder of behaviourism, wrote the following (fittingly, in a book entitled “Behaviorism”):

Give me a dozen well-formed, healthy infants, and my own specified world to bring them up in, and I’ll guarantee to take any one at random and train him to become any type of specialist I might select — doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations…

By combining Watson’s behaviourist radicalism with the hypothesis developed by anthropologist Franz Boas (1858-1942) — that the differences between human groups can be explained by their historical background and cultural habits, and not by some kind of ethnic “essence” — one arrives at the formulation that culture and education would be virtually omnipotent in the construction of human phenomena. By transforming education, the individual would be transformed. By transforming culture, humanity would be transformed, and the possibilities would be endless.

This is the blank slate model, which Pinker denounces as a kind of politically correct orthodoxy in the humanities, a well-meaning obscurantism that leads the “humanities guys” to ignore the theory of evolution, and to react with irrational horror to the mere mention of a biological hypothesis to account for a psychological or social phenomenon. Evidently, this idea — that human beings and their behavior are infinitely malleable, and that the only biological impositions we must submit to at birth are those that, for example, prevent us from flying or breathing underwater — is wrong, and has already caused great social and individual harm.

Pinker cites the case of a boy who suffered penile mutilation as a baby, and whose parents were instructed by their pediatrician to raise him as a girl. For many years, this story was presented as proof that gender is a product of socialisation – until a scientific article published in 1997 showed that the girl “Joan” had not only spent a good part of her life loathing the female identity imposed on her – to the point of rejecting the hormone treatment recommended at the onset of puberty – but had completely reverted to his original masculine identity of “John” at the age of 16.

Models

One of the truest things about science is that all scientific models are wrong, but some are useful. Furthermore, their usefulness is only valid for very well-defined applications. A map depicting the Earth’s surface as flat is wrong, but it can be useful when traveling from São Paulo to Campinas.

The blank slate model, particularly as it was synthesised in the 1920s-30s, emerged at a historical moment when the alternative model was one of absolute biological determinism. In his masterful book “The Myth of Race,” anthropologist Robert W. Sussman (1941-2016) explains that the rediscovery of Gregor Mendel’s (1822-1884) work on heredity, and the experimental finding that there was no genetic inheritance of acquired characteristics (e.g. children of muscular parents were not born stronger), led many scientists and intellectuals, mainly in the US, England, and Germany, to postulate that all behavioural and personality traits were hereditary and transmitted by simple Mendelian inheritance.

Culture would be a fixed characteristic of ethnicity, a collective phenotype, and education would be powerless in converting “savages” into “civilized people.” In other words, everything — even something like “thalassophilia” (a knack for maritime life), which presumably made good Navy officers — would be passed down from parent to child, according to the binary rules of dominant and recessive genes, those same rules that today’s students are required to know for their biology tests.

Compared to this alternative, the blank slate is an excellent model. Boas’s historical and cultural relativism, in particular, has proven to be much more successful empirically than Mendelian genetic determinism, in that it describes and explains reality far better. If we are to rely on oversimplification, the “everything is culture” idea provides a much more sophisticated and reliable map of the territory being explored than the “everything is genes” idea.

Levels

Even if a map is better than the alternatives available at a given historical moment, it can be corrected and improved later on. Pinker argues that the orthodoxy of the humanities has refused to update its map of social and psychological phenomena with the scientific facts that attest to the influence wrought by genetics and evolution on these phenomena, for eminently political reasons (namely, a commitment to the ideal of equality).

Undeniably, there’s still some resistance to incorporating biological considerations into the humanities, particularly in the areas more connected to public policy-making (e.g. policies for combating racial and gender discrimination), but it would be a caricature to attribute all of this resistance to mere political prejudice or unyielding leftism. There are a couple of good rational reasons for this. The first is the problem of the explanatory level. The second is the effect size.

The explanatory level refers to how far down the architecture of science one needs to go to find an adequate explanation for a given phenomenon. Societies are made up of human beings, in other words, biological entities who function based on chemical reactions governed by the laws of physics. From that perspective, it should be possible to use quantum physics to explain the start of World War I, but exactly what purpose would this explanation serve? Who would be capable of comprehending it? What would be its relevance?

Clearly, a “higher” science in the edifice of explanatory levels could never contradict a more fundamental result: the roof cannot stand without the foundations, and a sociological hypothesis that contradicted the Theory of Relativity would be stillborn. However, whether one should cite the foundation — rather than imply it — to describe the ceiling, only the particularities of each case can say, and this case-by-case judgement will depend on the effect size. In other words, we would have to ask: can a phenomenon be explained based on considerations from a more fundamental level? And, if so, would the explanation be at all helpful? When it comes to psychological or social differences, the answers seem to be ‘it depends,’ and ‘very little,’ respectively.

Most of the time, keeping biology as a silent backdrop when attempting to explain complex social phenomena is as reasonable as not considering subatomic interactions when trying to explain the outcome of the World Cup, and, for that matter, is just as devoid of any “obscurantism” or “ideological motivation.”

Well-filled slate

On publication, “The Blank Slate” prompted a series of criticisms of anti-discriminatory public policies as counterproductive because they would be unnatural — the sociopolitical equivalent of Watson’s crass behaviourism. Read carefully, Pinker’s book is more nuanced and subtle than that. However, a rushed reading of it has become a banner for conservatives who consider that Western society is the finish line of civilizing development, and that not only is there nothing more to improve, but it is time to go back a few centuries. This is the well-filled slate crowd.

For instance, by stating that the professional preferences of men and women differ because there are innate differences in personality, and that this is reason enough to justify the disparities between the sexes in access to certain careers and income levels — a better explanation, even, than the discrimination hypothesis — Pinker bites off more than he can chew, and provides ammunition to those advocating the well-filled slate.

The misconception is not as gross as the one implied in saying that “being a ship’s captain is a Mendelian characteristic,” but it comes close. The fact that average differences in personality exist does not necessarily mean that they are innate — and hence not produced culturally — or that they are relevant to all cases where discrimination is suspected, or even that their effects in today’s society are consistently greater than those of discrimination.

Pinker gives special emphasis to opinion polls on work and career choices, where, in his words, “men and women say what they want.” However, he apparently fails to take into account the psychological issue of social desirability, i.e. the tendency of interviewees to respond to surveys according to what they imagine the interviewer expects or would like to hear — an issue that particularly affects surveys involving stereotypes and social roles. “Men, on average, are more willing to face physical discomfort and danger,” he writes. Are they really? Or do they just say they are, knowing that this is what is expected of a “real” man?

Then there’s the problem of generalisation, which hides behind the expression “on average.” The book “Brain Gender,” by neuroscientist Melissa Hines, discusses data from studies investigating the average differences between men and women in terms of standard deviations, as a means of assessing how much one population mean is different from another (technically, the author talks about a statistic called “Cohen’s d”, but we’ll skip the details).
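For the record, the statistic being skipped is simple enough to state (this is the standard textbook definition, not anything specific to Hines’s data): Cohen’s d divides the gap between two group means by their pooled standard deviation, so a d of 1 means the two averages sit a full standard deviation apart.

\[ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}} \]

On this scale, the six-standard-deviation gap in sexual orientation mentioned below is enormous, while a d of 0.07 is barely distinguishable from zero.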

The sexual orientation of men and women differs by six standard deviations: a very substantial majority of men say they prefer having sex with women over men, and vice versa. Height differs by two standard deviations: most men are noticeably taller than most women. In contrast, two cognitive and behavioural traits — mathematical prowess and physical aggressiveness — differ by less than half a standard deviation.

In an interview given to journalist Angela Saini (published in the book “Inferior”), Hines further adds that the difference between the sexes in the ability to empathise (which, according to the well-filled slate model, would explain why there are so many more female nurses than female engineers) is also about half a standard deviation. In an article published in 2005, psychologist Janet Shibley Hyde summarised the results of more than 40 studies on the differences in personality between the sexes, most of which were well below one standard deviation; a large number of these differences were in the second decimal place (0.07), including those regarding negotiation competitiveness.

Compared to these modest effects, the historical impact of discrimination takes on colossal proportions. In Brazil, there was only one woman trained in medicine in the entire country in the 1830s. In comparison, the first medical degree awarded to a woman in the US occurred in 1847. In 1910, female doctors accounted for 22% of medical professionals in Brazil. A hundred years later, this figure rose to 40%, and, in 2020, to 47%. It seems unlikely that the jump from a single female doctor in 1834 to nearly 223,000 in 2020 was caused by a change in hereditary, innate preferences, or in the effects of oestrogen on the central nervous system of the fetus.

The Polytechnic School of São Paulo, today part of the University of São Paulo (USP), was founded in 1893, and had only two female audit students (not formally enrolled, that is) between its opening and the 1920s. Only in 1928 would the first female engineer graduate there. Currently, 56 (13%) of the institution’s 415 professors are women. Of the 1,588 graduate students (master’s and doctoral), 417 (26%) are women.

Recognising that there are biological factors linked to personality, talents, and preferences does not mean that we have to accept these factors as the predominant cause of our current social arrangement, or that any deliberate sociopolitical effort to reduce glaring inequalities is a mistake or the result of ideological inflexibility. The evidence to this effect is far more precarious than what the well-filled slate upholders like to go around trumpeting about.

In view of historical experience, well-filled slate advocates make two seemingly extraordinary claims. The first is that social asymmetries — which suggest prima facie that there is unfair discrimination or cultural bias in place — reflect some sort of biological “point of equilibrium.” The second is that the precise nature of human differences observed in Western societies has not only been unravelled, but is predominantly innate. As we all know, extraordinary claims require extraordinary evidence. And theirs is flimsy at best.

This article was translated from the original Portuguese by Ricardo Borges Costa.

As a teacher, I wish I’d handled the conspiracy theorist in my class differently

With conspiracy theories proliferating and gaining popularity throughout the world, it is inevitable that educators will encounter them in the classroom. Here I recount one such encounter I had as a graduate student, explain how I handled it badly, and offer some thoughts on how we can all respond better.

In 2012 I was in my last year of graduate work at the University of Missouri’s department of philosophy. Like most students completing their dissertations, I also worked for the university as a graduate instructor, often teaching large sections of introduction to philosophy. During the fall semester of that year, I had a student who stood out in the lecture. He would come to every session wearing a jacket that was an obvious imitation of the one that Ryan Gosling’s character wore in the film Drive from the previous year. He also differed from his peers in his interest and willingness to engage in the discussion.

As anyone who has spent time in front of an early morning college lecture hall can attest, students rarely want to say anything at all at 8:00 in the morning. But not this student. He eagerly followed along with everything I said, raising his hand with clarifying questions and comments. Normally a professor would be thrilled to have such a student. However, this one presented a unique challenge in the way his contributions derailed the discussion.

I used to open my courses with a discussion of St. Anselm’s ontological argument. When I made the distinction between things that exist in reality and things that exist merely in the imagination, he raised his hand to say that he understood the difference because he could imagine a government that told the truth, but none exists in reality. Though it was a jarring interruption, I dismissed it as an attempt at humour that fell flat, and we moved on. Subsequent discussion, however, showed this to be more than a one-off idiosyncrasy. When we discussed how Descartes required an epistemic purgative to achieve the metaphysical doubt of his First Meditation, the student compared it to the time he learned falsehoods about the 9/11 attacks and decided to doubt all the other “official stories” he believed. Many other such comments followed.

His fellow students quickly grew frustrated with his claims. It was not uncommon to hear one of them signal their exasperation when he raised his hand. They would often respond combatively, aggressively challenging the things he was saying and ridiculing him. I sometimes heard an anonymous “shut up” come from the back of the room when he spoke. No professor should allow any of their students to be attacked in class, so I would respond to such abuse when I heard it. But, truth be told, I understood their frustrations and shared them. We had a purpose in class, and his interruptions frustrated it. What is more, these conspiracy theories didn’t exist in a vacuum: the worldview he adhered to was dark, malevolent, and very clearly false.

It would have been easy to take the same tack as the students; however, the responses I chose were no more productive than theirs. When the student began listing what he perceived as 9/11 anomalies, I told him (truthfully, but without trying to present myself as an expert on the subject) that all his claims had been debunked in materials that were widely available. When he cited Alex Jones as a credible source, I pushed back on Jones’s credibility.

A few weeks into the course he stopped showing up for lectures. He also stopped completing the online components of the course, without formally withdrawing. There’s nothing atypical about a handful of students doing this in a large lecture course, yet I think about him more often than I do any other student who went this route. I clearly hadn’t responded to him in an adequate way. Several years later I learned that he had risen to a somewhat prominent position within the conspiracy community, and had multiple arrests that resulted from this. I often wonder what else I could have done to stop him from falling further down the rabbit hole.

At the time there was little research on how to break through to a conspiracy theorist. The topic has taken on new gravity in recent years, however, and strategies have begun to emerge. We know, for instance, that attempts to rebut a conspiracy theorist’s claims with facts will sometimes cause the believer to entrench themselves further in their belief. Even if this backfire effect is overblown, as some have suggested, it seems to be what happened here. The other students in the class did not provide a welcoming environment, and, feeling alone, he seems to have given up any chance of change.

So, what could I have done differently? First of all, without knowing why, and how deeply, a student is invested in their belief, it’s difficult to give a one-size-fits-all answer. Still, it should be clear that attempts to make the believer feel bad or wrong for holding their beliefs are bound to fail. Hence, it is paramount to provide a classroom experience that limits judgment insofar as possible.

Additionally, I would recommend that we treat them as we would any other student in distress. Every professor has experience of dealing with students in crisis. We often meet with them during or outside office hours and have long, emotionally open and welcoming conversations in which we identify the problems facing the student and develop clear plans of action for addressing what’s holding them back. I’ve had such discussions with students living with illnesses both physical and mental, dealing with loss, on the edge of homelessness, and more.

Given the negative life path and impact on others that conspiracism can have, I suggest that we see a student’s descent down the rabbit hole as a crisis event. Unfortunately, conspiracy theory believers see themselves as the ones in the right, not as people in need of help. Professors should have private, judgment-free conversations in which they first build a rapport with the believer. Then, once that bond of mutual respect has been established, the professor can start asking why they believe the things they do. If the student is willing to supply answers, the professor can begin to ask contextually salient questions that lead them to the contradictions of their belief system.

It is vitally important that you do not come across as antagonistic or as having an agenda. Instead, the conversation must be conducted in the same open and honest spirit of pedagogy that we owe all of our students. None of this is guaranteed to work, but it likely gives the believer a better chance of a positive outcome than my student had.

Three years later I began a new job at the University of Texas at El Paso. On my first day of teaching I saw the Infowars logo spray-painted through a stencil at multiple locations around campus, and knew that this was a challenge I would have to meet throughout my time in front of the classroom.

Fictional impressions: the shaky foundations of many common forensic science techniques


My father was a cop. Specifically, he was an arson investigator, which meant that after a fire was put out by the fire department, he was called to determine whether the fire was arson (meaning it was deliberately set) or accidental. Sometimes, a very young me would get to come along for this dirty business. “Dirty” because it’s messy: everything is wet and covered in soot.

Part of the investigation is finding the point of ignition. The method is very simple: look for the room that is most destroyed, and then look for the “V”. The “V” is where the fire started (if there is more than one, it was much more likely arson), because fire burns up and out in its nearly sentient search for fuel. You can trace it right down to the single place where the fire started, and then begin looking for other clues: whether there is a gasoline smell in the room, whether the copper wiring has melted (copper melts at about 1,085°C, meaning it was a pretty hot fire), or whether the fire travelled in an odd manner.

I describe this because this information is objective. Sure, “odd” is an impression that one might come to, but if you understand that fire burns upwards and the burn pattern instead has it snaking across the floor, you will say that it is odd.

In some cases (literally), chemical analysis is needed. Fire is a chemical reaction, and those reactions leave traces. Those traces can tell an analyst what burned and what, if anything, was used to “help” the fire along. Lab analysis in these cases is also objective, though I never witnessed any of it firsthand.

With every step in the crime scene investigation of this kind of crime, an objective technique is used to reconstruct what happened. After that story is written (or, more accurately, re-written), the determination of arson or accident is made. There is very little room for subjective interpretation or bias to enter the case. The data is present—it just needs to be collected and arranged.

That whole story serves as a preface: we have all been taught, mostly through fictional portrayals, that this is how all criminal investigations work. We’ve seen the various techniques in so many versions, by so many different people, that it’s hard to pin down one particularly jarring or incorrect show. The portrayal of forensic techniques in popular media is more like science fiction than reality; CSI’s green lasers are better at getting the writers out of the corners they wrote themselves into than the Doctor’s sonic screwdriver. Years of film and television have reinforced the idea that committing a crime will get you caught, leaving the impression of a meticulous and scientific examination in which the smallest particle of evidence is enough to find the perpetrator.

That impression has generated something known as the “CSI Effect,” of which there are two versions. The first is the one you may be most familiar with: that prosecutors are often unable to get convictions because real forensic science isn’t as flashy as it is on the television show CSI, or because the evidence against the accused is sometimes just a confession and an eyewitness, with no complex scientific testimony or CGI involved.

Studies into this effect, though, have turned up empty. There was no finding that suspects were having their cases dismissed because a leggy blonde scientist wasn’t there to give a walk-through of blood spatter analysis. This version of the CSI effect was no more than anecdotal complaints by frustrated prosecutors.

The second version is also anecdotal, but more humorous: the impression these shows give leads perpetrators to try to cover up their crimes… but in doing so they provide more evidence to investigators.

Both descriptions should leave us heavily skeptical as to whether the effect is real, though the latter should give us more pause: we should never underestimate the effect pop culture has on our perception of reality. Contrary to one persistent pop-culture myth, for instance, a police officer does not have to tell you they are a police officer.

The list of forensic techniques at the disposal of the police includes, but is not limited to: fingerprint analysis, bullet analysis, object impressions, bite mark impressions, hair analysis, polygraph testing, and DNA analysis. In this article, let’s imagine that we have a gunshot murder (I do live in America, after all), and that we are going to use each of these techniques to identify a suspect.

Let us begin our murder investigation with the discovery of a blood sample containing viable DNA; for our fictional investigators, this is great news. DNA currently sits as the gold standard in forensic investigations: it is the bar by which all the other standards are set. Most of my information is drawn from a report submitted to the US Department of Justice by the National Academy of Sciences, which surveyed studies of the forensic techniques used by law enforcement in the United States. I will note that, despite the report’s American focus, the problems described are not only American problems; they cross the ocean to the UK as well.

The report holds up DNA analysis as the epitome of a scientific forensic technique for a few reasons. To quote directly:

(1) There are biological explanations for individual-specific findings; (2) the 13 STR loci used to compare DNA samples were selected so that the chance of two different people matching on all of them would be extremely small; (3) the probabilities of false positives have been explored and quantified in some settings (even if only approximately); (4) the laboratory procedures are well specified and subject to validation and proficiency testing; and (5) there are clear and repeatable standards for analysis, interpretation, and reporting.

Page 133

One of the primary reasons that DNA analysis can be relied upon is that the scientific principles behind it were established first, and its forensic application followed; it is not a practical method that found post hoc justification later. What makes this feature notable is that it is not how the other techniques were developed. For example, there was never any science behind the polygraph machine (or lie detector): it was adopted only because people felt that it could work. And with some of the techniques I discuss below, such as bullet striations, there is an impression that the technique can work, but the conclusions drawn from it make much stronger claims than the method can support.

DNA does not have this problem. Your DNA, barring some very odd cases, is your DNA: there is a connection between the sample and the person, and finding your DNA at a crime scene usually means that you were there. This connection brings us to points (2) and (3) in the quote above: the methodology was developed specifically to reduce false identification to a rare and quantified probability.
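To see what a “rare and quantified probability” looks like in practice, here is a minimal toy sketch of a random match probability calculation. Every genotype frequency below is invented for illustration; real forensic calculations use measured population frequencies and corrections this sketch omits.

```python
# Toy illustration (all genotype frequencies invented): matching on many
# independent STR loci makes a coincidental DNA match vanishingly rare.

genotype_frequencies = [
    0.10, 0.08, 0.12, 0.05, 0.09, 0.11, 0.07,
    0.10, 0.06, 0.08, 0.09, 0.10, 0.07,
]  # one made-up genotype frequency per locus (13 loci)

random_match_probability = 1.0
for freq in genotype_frequencies:
    random_match_probability *= freq  # independence across loci assumed

print(f"Chance an unrelated person matches on all 13 loci: "
      f"{random_match_probability:.1e}")
# prints roughly 1.0e-14, orders of magnitude rarer than one in
# the world's population
```

Even though each individual genotype in this toy example is fairly common, the product across thirteen loci collapses to a figure far smaller than one in the world’s population, which is why point (2) in the quote matters so much.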

Importantly, the last two factors generalise the science of DNA analysis to the larger forensic world. For example, (4) means that, in order to operate as a DNA laboratory, a facility must meet certain specifications, and the person who conducts the tests must have a specific education and certification to be able to perform them.

Finally, the fifth feature states what we assume is the case for all these tests: that the testing methods meet the scientific criterion of repeatability. Two different people performing a DNA analysis on the same samples should reach the same conclusion. The problem, and what separates DNA testing from the following examples, is that this repeatability does not hold for the other techniques.

With DNA at the scene, our murder suspect is going to have difficulty explaining how it got there. The report details several other techniques that are portrayed as being as reliable as DNA, but which have severe problems meeting that threshold. Let’s roll a “chung chung” noise and return to our murder scene.

Our two detectives, one old and a recovering alcoholic (because this is the cliché) and his younger, impetuous partner, do not find DNA. They do have the bullet slug in the victim and a gun of matching caliber in the suspect’s flat. What follows is a scene so ubiquitous in police procedurals that it appears in everything from Law and Order to Law and Order UK. The pathologist has fired the gun into a tank full of ballistic gel and examined the slug. She seems reluctant to say it’s a match, but the older detective explains that unless they get a match from the suspect’s gun, they can’t make an arrest. The pathologist then offers some technobabble about striations and twist angles, which the detectives use to get their arrest warrant. Then another “chung chung”, and the crown is prepping its case.

The scene is as full of problems as it is clichéd. I’ll start with the science: “The fundamental problem with toolmark and firearms analysis is the lack of a precisely defined process” (page 155). A “match” involves the examiner concluding that there is significant agreement between the slug pulled from the victim and the one from the recovered firearm. Yet the term “agreement” is ambiguous and varies between examiners; there is no set definition of how much agreement makes a match. Further, even if there were a 100% match, that would only prove that this type of firearm was used in the crime, not the token firearm held in official custody.

Guns aren’t made in a bespoke manner; they are mass produced. The barrel of the weapon, the part which causes all the marks on the slug, is produced as a long tube and then cut down for individual weapons. The slug can tell us the caliber of the weapon, which narrows the field; it can perhaps tell us the type of weapon, if the number of grooves is particular to a type of gun. Generally, though, if we find out that the slug came from a Glock 17 (the sidearm of the British Army), then there are hundreds of thousands of possible matches, barring some deformity in the barrel. What our detectives have is a type of gun, not a specific one. While ownership of the weapon is a problem for the suspect, it doesn’t prove they are the murderer.

Let us say that our detectives never found a firearm. Instead, they have a hair that does not match the victim but has no follicle on it, so there is no DNA. Hair can tell us something: it can rule out a blonde suspect if the hair is black. Shows like CSI give us the impression that an examiner can determine many more things about a hair, but none of that is borne out by the science. The report claims:

No scientifically accepted statistics exist about the frequency with which particular characteristics of hair are distributed in the population. There appear to be no uniform standards on the number of features on which hairs must agree before an examiner may declare a match

Page 160

Instead of a hair, our detectives find a clothing fiber. Not only do the same problems as before exist, but there are further factors which can affect a fiber: sunlight exposure, cleaning, and wear. The only things an examiner can determine are whether the fiber is artificial or natural, and sometimes what kind of fiber it is. Again, the portrayal in popular media is that a fiber match might as well be a video of the person committing the crime. The report is even more damning of fiber analysis:

Fiber examiners agree, however, that none of these characteristics is suitable for individualizing fibers (associating a fiber from a crime scene with one, and only one, source) and that fiber evidence can be used only to associate a given fiber with a class of fibers.

Page 161

Let’s adjust our detective scenario one more time. Instead of submitting a hair or fiber, the detectives submit a fingerprint lifted from the scene and want a comparison with one taken from the suspect. You’re probably thinking that I’m some ACAB Antifa anarchist who wants to discredit literally anything the police do; surely fingerprint analysis is legitimate. After all, we use fingerprints as the very metaphor for uniqueness.

Fingerprint analysis is properly called “friction ridge analysis.” I want to get one thing out of the way immediately: I am not saying that fingerprints are not unique to the individual. As the report admits,

some scientific evidence supports the presumption that friction ridge patterns are unique to each person and persist unchanged throughout a lifetime.

Pages 143-144

That much is true, and we can still use their uniqueness for analogies, but

Uniqueness and persistence are necessary conditions for friction ridge identification to be feasible, but those conditions do not imply that anyone can reliably discern whether two friction ridge impressions were made by the same person.

Page 144

At the risk of being repetitive, the issue is that no standardisation of the methods of comparison exists. It is up to the examiner whether they measure the swoops, the loops, or the number of ridges, and between examiners there is no agreement on standards. Scotland Yard may have a different standard than the FBI, but there is no good reason why the standards should differ; if friction ridge analysis is a science in the same way that DNA analysis is a science, there should be no differences in the analysis. Matches are determined through comparison of points: if a fingerprint has an “eight-point” similarity with another, the two share eight similar features, such as swoops or loops. France requires 12 points while the UK requires 16, and this difference is nonsensical. It means the UK police have a higher threshold for matching than France, which in turn has a higher one than the US (where the standard is left to the individual jurisdiction). Lowering the threshold only means a higher likelihood of a false match, which will lead to false arrests and accusations, as was seen in the infamous arrest of Brandon Mayfield after the 2004 Madrid train bombings, when the FBI declared itself 100% certain that Mayfield’s print matched one recovered from evidence at the scene.
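Why lower point thresholds mechanically produce more coincidental matches can be illustrated with a minimal toy simulation, sketched below. Every number in it (the feature space, the points recorded per print, the trial count) is an invented assumption for illustration; it is not a model of real friction ridge examination.

```python
import random

# Toy model, all parameters invented: each "print" is a random set of
# feature positions; a "match" means two prints share >= threshold points.
random.seed(0)
FEATURE_SPACE = 100    # possible feature positions (assumption)
POINTS_PER_PRINT = 30  # features recorded per print (assumption)
TRIALS = 20_000

def random_print():
    return set(random.sample(range(FEATURE_SPACE), POINTS_PER_PRINT))

for threshold in (8, 12, 16):
    false_matches = sum(
        len(random_print() & random_print()) >= threshold
        for _ in range(TRIALS)
    )
    print(f"{threshold}-point threshold: "
          f"{false_matches / TRIALS:.2%} of unrelated pairs 'match'")
```

In this toy setup, two unrelated prints share about nine points on average, so the eight-point threshold “matches” most random pairs while the sixteen-point threshold almost never does. The absolute numbers are meaningless, but the direction of the effect is the point: every point shaved off the threshold buys more coincidental matches.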

Consider the scenario: the fingerprint at the scene is on a drinking glass, and it is being compared with a print taken by the police. The police method of taking a fingerprint from a person is carried out in ideal conditions by someone trained to make the impression. They are comparing an ideal sample with a natural one. If you’ve ever fought with the fingerprint sensor on your phone or computer, you understand the problem: you enrolled the sensor with a dead-on impression, but when you unlock your device your finger doesn’t land as flat, and sometimes you have to try a few times.

Imagine that our older detective sees a print on a highball glass and calls for an examiner. There are several factors that can change the shape of a print: the material of the surface, the shape of the surface (i.e., whether it is curved or not), the humidity, the temperature, etc. You can test this at home: make a print on a piece of glass, then make another one pressing a little more firmly. They will not be significantly different, but they will be different. Now imagine that the print at the scene is a smudged partial on the rim of the glass, and that partial has an eight-point match with the suspect. We don’t know whether the match is an artefact of the smudging, or a match regardless of the smudge. If we consider friction ridge analysis to be a science, then the standards should be universal: the threshold should be the same across all jurisdictions.

The physical evidence aside, the other problem with that scene is the influence of the detectives on the examiner: an obvious form of bias. From a scientific perspective, there is no reason for the detectives to be discussing the case with the examiner at all. The only purpose that serves is to bias the examiner into agreeing with what the detectives, i.e., their co-workers, want. We would not tolerate this kind of influence, or even the appearance of it, in any other kind of scientific inquiry. It would be the same kind of interference as the CEO of a pharmaceutical company discussing the results of a clinical trial with the people conducting the study.

For an honest examination, the evidence should be submitted with only the absolutely necessary information: in this case, two bullet slugs and a chain-of-custody report. The examiner should not even know which case the samples come from. They shouldn’t know whether it was the famous murder or the beer bottle someone threw at a referee. None of that information is relevant to whether the two samples match.

We as skeptics claim that what we do is important: that science, logic, and reason matter; that skepticism is more than explaining how a well-known psychic appears to communicate with the dead, showing that facilitated communication has no scientific basis, or that homeopathy is literally nothing. Our skepticism is most important when people’s lives are on the line. In democratic societies like ours, miscarriages of justice become the responsibility of the electorate. It is important to remember that when the crown, or the state, prosecutes a person suspected of a crime, it does so on our behalf, and we bear a responsibility when innocent people go to jail.