Moderating social media: hate is the name of the business

Author

Carlos Orsi (https://www.iqc.org.br/)

Carlos Orsi is a journalist, editor-in-chief of Revista Questão de Ciência, author of "O Livro dos Milagres" (Editora da Unesp), "O Livro da Astrologia" (KDP), "Negacionismo" (Editora de Cultura), and co-author of "Pura Picaretagem" (Leya), "Ciência no Cotidiano" (Editora Contexto), for which he was awarded the Jabuti Prize, and "Contra a Realidade" (Papirus 7 Mares).

The Federal Supreme Court (STF) of Brazil has decided to allow digital platforms to be held legally liable for the content they publish when such content, even if generated by users, constitutes “crime or unlawful acts” or falls into categories such as “incitement to discrimination based on race, colour, ethnicity, religion, national origin, sexuality or gender identity” or “anti-democratic conduct and acts”.

Given that it is virtually impossible to successfully practice medical quackery without an Instagram account these days, it will be interesting to see how the “crimes or unlawful acts” section impacts the market for fake cancer cures and miracle diets.

But this isn’t the aspect of the decision that has drawn the most attention in the public debate. Instead, the conversation in the public sphere seems split into two camps, each with a well-defined focus and each ignoring the other’s arguments. On one side, the mainstream media and widely circulated opinion pieces warn of the threat to freedom of expression. On the other, academics point out that social media platforms are not neutral conduits for the content they broadcast, but rather partners that profit from the engagement generated by fraud, lies, and hate speech.

Both sides have their reasons. Given the Brazilian judiciary’s penchant for turning censorship into a lottery, the existence of subjective criteria such as “anti-democratic conduct and acts” – is lamenting an election result “anti-democratic conduct”? – is concerning and creates uncertainty. But also concerning is the insistence of opinion makers on pretending that the editorial control platforms exercise over the content they publish (by deciding, for example, which posts will be most widely circulated and viewed) doesn’t imply some level of complicity and responsibility for the potential repercussions of this material. The irresponsible exercise of power is a problem both when it comes from the state and from private actors.

Villainy

The history of the tobacco industry is full of moments of pure villainy. There’s the notorious 1969 memo, authored by an executive who remains anonymous to this day, outlining for the first time a clear strategy of lies and scientific denial. “Doubt is our product,” reads the document’s most infamous line. And there’s abundant evidence that, at least since the early 1960s, the industry knew that people smoked not because cigarettes are pleasurable, but because nicotine is an addictive drug.

Perhaps internal documents and research from companies like Meta and X (formerly Twitter) will be released someday. If so, it’s certain that confessions of pure malevolence, like those found in the tobacco industry’s papers, will emerge (we already got a taste from the leaked Facebook documents a few years ago). If sowing false doubt about the effects of smoking on human health is the business of tobacco companies (then and still today, now with new products), fanning the flames of hatred is the bread and butter of social media.

It’s not that Facebook or Instagram were created to make people hate each other, or to destroy polite debate. It’s just that, once they were up and running, it became clear to their administrators that hate was the perfect lubricant, the easiest way to keep people engaged. And engagement is the secret to success. Everything else – including the deliberate ineffectiveness of content moderation efforts – stems from this simple fact.

In his book on misinformation, Foolproof, psychologist Sander van der Linden has this to say about Facebook:

“Indeed, in a 2016 presentation, Monica Lee, a leading data scientist and machine learning expert at Facebook, mentioned that internal research had determined that ’64 percent of all ‘extremist group joins’ are due to our recommendation tools,’ including the ‘Groups You Should Join’ and ‘Discover’ algorithms.”

What might you end up talking about when you log into your social accounts today? Image via Foundry Co on Pixabay

And, on the next page:

“Mark Zuckerberg himself published the following chart in a public post, noting that engagement with content on Facebook increases dramatically as it approaches ‘our policy threshold’ of what is allowed content. Specifically, he said, ‘Our research suggests that no matter where we draw the line for what is allowed, as content approaches that threshold, people will engage with it more, on average, even when they later tell us they didn’t like it.’”

More recent research tends not only to confirm this, but also to show that disinformation spreaders already know that outrage is key to engagement, and are exploiting it. From a recent paper published in Science:

“Compared to reliable news sources, posts from misinformation sources evoked more anger and outrage than happy or sad feelings. Users are motivated to reshare content that evokes outrage, and they share it without first reading it to assess its accuracy.”

Extremism

There’s also data showing that political extremists dominate online political discourse, which can distort public perception of the true level of social polarisation. As a 2024 article summarises:

In fact, 97% of Twitter/X’s political posts come from just the 10% most active users on social media, meaning that about 90% of the population’s political views are being represented by less than 3% of online tweets (…)

This article in Nature shows that most links shared on Facebook are “shared without clicks” (in other words, people just pass the content along without bothering to read or verify it) and that most of the “shared without clicks” material is “extremist and user-aligned political content.”

Let’s not forget that the lifeblood of social media is content – text, comments, videos, images – created for free by users. If the most productive and popular of these unpaid creators are trolls and extremists (or, for that matter, charlatans and scammers), making their lives incredibly difficult won’t maximise shareholder returns.

Nothing personal

None of the above facts answers questions like: do social media platforms actively create hatred and division? Do they merely exacerbate the hatred and division that already exist? Or do they simply amplify these social ills?

Promoting hate is the internet’s business because human nature is simply the way it is, and unfortunately, people seem to have an obsessive-compulsive relationship with disgusting content and false promises.

The networks embraced the human passion for tribalism and ignored their responsibility for the negative impacts – just as tobacco companies embraced the addictiveness of their product and ignored lung cancer, emphysema, and painful death.

Fundamental flaws in human character, coupled with the incentive structure as it currently exists, conspire to create the current state of permanent conflagration in political discourse and the industrial-scale promotion of fraud and ignorance in science and health discourse.

This incentive structure therefore needs to change. The Supreme Court’s current decision may well be, as its critics say, a misstep – but it is nonetheless an attempt to respond to a concrete need. Human history is, among other things, a long chain of inadequate or counterproductive solutions to very real problems.

Perhaps it would be more productive if those who debate in both bubbles – those who minimise the harmful bias of the networks and those who disregard the voluntarist and authoritarian bias of the Justice system – were to step out of their echo chambers and give due credit to the arguments and concerns of the other side.

This story was originally published by Revista Questão de Ciência in Brazil. It is translated and reprinted here with permission.

The Skeptic is made possible thanks to support from our readers. If you enjoyed this article, please consider taking out a voluntary monthly subscription on Patreon.

