Jimmy Wales, the co-founder of Wikipedia, was interviewed in the November 24th issue of TIME – and as I read the piece I kept having the same thought: thank god this weird, fragile space still works at all.
Wales describes himself as a “pathological optimist”, which is a way of saying that he looks at our current bonfire of nonsense and still believes people can build trustworthy things together. In the interview, he talks about what Wikipedia has managed to do over the last two decades – create a giant, mostly reliable, constantly updated reference work that anyone can edit, in a time when trust in institutions, media, and each other has been steadily falling off a cliff.
It’s easy to forget how strange that is. Social media has shown us what happens when you let anyone just say anything at all – a tidal wave of conspiracy theories, harassment, propaganda, and weaponised nonsense. By comparison, the idea that you can also let anyone edit the world’s encyclopedia sounds like a recipe for chaos. And yet, Wikipedia is consistently one of the most trusted sites on the internet.
Reading Wales talk about trust, neutrality, and the pressures Wikipedia is under – from governments, activists, billionaires, and now AI – brought a lot of things into focus for me. Most days it feels like we’re all wading through an endless tide of half-truths and noise, with a quiet little voice in the back of my head whispering that the misinformation machine is too big now, that maybe we’ve already lost. But the interview also reminded me why defending facts still matters, how skepticism can be an act of compassion, and why I’ve been quietly sending a little bit of money to the Wikimedia Foundation each month ever since Elon Musk started loudly insisting he could do it all better.
Skepticism as care, not combat
I’m a paranormal researcher who doesn’t believe in the paranormal. I spend a lot of time poking at ghost stories, hauntings, monster sightings, and local legends – not to prove people wrong, but to understand what’s really going on and how those stories function in people’s lives and through wider society.
The popular caricature of skeptics is that we’re miserable killjoys who swoop in to shout “well, actually” at grieving people. And yes, there are folks in the skeptic world who seem to treat debunking as a blood sport, but that has never been my thing. For me, skepticism – scientific skepticism in particular – is about harm reduction and standards of accepted fact. It’s about noticing who gets hurt when false things are allowed to spread unchallenged: the grieving person exploited by a medium, the sick person diverted from evidence-based medicine, the community targeted by conspiracy theories.
If you’ve spent any time investigating paranormal claims, you quickly learn that just being right is rarely the most important thing in the room. Being kind is. Facts matter because people matter – but kindness doesn’t mean endorsing a claim just because it feels comforting. Over the years, I’ve come to think the default position when you encounter a claim – especially online – should be skepticism. Not sneering or knee-jerk dismissal, but a calm approach that asks “How do we know this? What’s the evidence? Who benefits from me believing it?”
Critical thinking isn’t something you acquire once and then tick off the list. It’s more like a tool that goes blunt if you don’t keep sharpening it. Decades of investigating paranormal mysteries have helped me hone those tools, but the modern world keeps throwing new kinds of nonsense at us – and the bullshit out there is the whetstone.
The new misinformation landscape
When Wikipedia launched in 2001, the misinformation landscape looked very different. We had tabloids, chain emails, talk radio, and late-night paranormal TV shows. Now we have algorithmic feeds that serve us uncritical podcasts listened to by millions (Joe Rogan, I’m looking at you), deepfakes that can fabricate video evidence, AI systems that will confidently generate plausible-sounding lies at scale – our very own personalised reality, on demand.
Wales points out that Wikipedia is now under pressure from many directions. On one side, there’s coordinated political and ideological pressure – governments, lobbyists, and activists trying to bend articles to their preferred narratives. On the other, there’s a new wave of AI-driven competitors, like Elon Musk’s Grokipedia, that claim to transcend “human bias” by letting a chatbot generate the entries instead.

Grokipedia markets itself as the bold, unbiased alternative to “woke” Wikipedia. In practice, early analyses have found that Grokipedia leans heavily on low-credibility sources – including conspiracy sites and even the neo-Nazi forum Stormfront – and that many of its entries frame right-wing talking points as neutral fact. Some of its entries have copied Wikipedia content, stripped out nuance, and then layered biases and AI hallucinations on top.
This isn’t just an internet slap-fight between tech guys. It’s part of a broader struggle over who gets to define “the truth”, and how. Musk and others frame their attack on Wikipedia as a blow against elite censorship and left-wing propaganda. But what they’re really offering is a version of reality shaped by an opaque algorithm, hosted on a platform owned by one very rich man, with no meaningful community control.
In response, Wikipedia has been quietly reminding people that it is “created by people, not by machines”, and – crucially – not controlled by a billionaire or a for-profit company. In an era where so much of what we see online is designed to serve someone’s commercial or political interests, that really matters. Wales has also been vocal about the fact that Big Tech companies are scraping Wikipedia to train AI models, then serving AI-generated answers that often bypass Wikipedia entirely. He’s argued that if AI companies are going to rely on human-created commons like Wikipedia for training data, they should be contributing back – financially and otherwise – to sustain those projects.
All of this reinforces something skeptics have been saying for years: tools are never neutral. The real question should always be, “Who built this, for what purpose, and who is accountable when it goes wrong?”
Wikipedia: warts and all

None of this means Wikipedia is perfect. Far from it.
Wales himself revisits some of the platform’s most notorious failures, like the time a troll falsely linked journalist John Seigenthaler to the Kennedy assassinations. He acknowledges that governments, lobby groups, and ideologues have all tried to manipulate entries to push their own worldviews.
Beyond obvious vandalism, there are subtler problems: systemic bias in whose biographies get written and how, edit wars on contentious topics, and content that reflects the demographics of the most active editors (still disproportionately male, Western, and tech-adjacent). There’s also the question of organised editing. In skeptical circles, one of the most talked-about examples is the Guerrilla Skeptics on Wikipedia – a project founded by Susan Gerbic, in which volunteers coordinate to improve pages on paranormal and fringe science topics with better sourcing and clearer explanations.
I know people involved in that project and understand their aims. I also understand why some folks outside of it feel uneasy. There are Facebook groups and forums dedicated to “exposing” Guerrilla Skeptics, accusing them of being a shadowy cabal of “atheist materialists” rewriting pages from a hidden headquarters. Of course, supporters of fringe and pseudoscientific topics – from paranormal claims to homeopathy – also coordinate their messaging and sometimes try to massage Wikipedia in their favour. That’s not unique to skeptics. From the outside, though, any organised effort to shape knowledge can look sinister – especially if you already suspect the people doing the organising are out to silence you.
Personally, I think it’s good that people care enough about accuracy to spend their free time fixing citations and untangling bad sources. I also think Wikipedia editors – skeptical or otherwise – need to be extra transparent when they’re part of organised projects. The power of Wikipedia comes from its openness – anyone can inspect the talk pages, check the edit history, and join the discussion. If editing starts to look like a closed shop, trust erodes.
The answer to coordinated editing isn’t to give up and let AI hallucinate history for us. It’s to double down on what Wikipedia does well when it’s working: verifiability, transparency, and a culture where disagreement has to be hashed out in public, with sources on the table.
Fact-checking as a habit, not a hobby
One of the most useful habits I’ve picked up from both paranormal investigation and my BSc studies is this: never stop at the first source.

Wikipedia is a decent starting point for a huge range of topics, but it should be a launchpad, not the final destination. The real magic happens when you start clicking through to the references at the bottom, following citations back to original research, historical documents, or high-quality reporting. If you’re trying to work out what’s going on with a controversial claim – a health scare, a supposed breakthrough, a political scandal – that’s where you need to start. Not in the AI-generated summary box, or the viral thread, but in the sources those summaries should be based on.
For me, that often means exploring the citations or links in an article to see if they say what the piece I’m reading claims, plugging key terms into Google Scholar to see what the peer-reviewed literature looks like, looking for explainers by people who work in the relevant field (not just commentators with strong vibes), and, importantly, paying attention to dates, methods, and funding sources – not just headlines.
A lot of those skills were hammered into me – in a good way – by The Open University. Learning how to read journal articles without wanting to cry, how to tell a nice-looking graph from actually robust data, how to spot the difference between a cautious conclusion and wild over-claiming – that comes from training. It’s learnable.
As an OU alum – and now a slightly sleep-deprived MSc Psychology student – I wholeheartedly recommend digging into their free OpenLearn resources if you’d like to sharpen your own bullshit detector. For example, ‘How to be a critical reader’ covers things like distinguishing fact from opinion, recognising a writer’s agenda, and understanding how argument texts are structured, and ‘Reading evidence’ walks you through how qualitative and quantitative evidence is presented and how to read numbers and text actively.
The same skills apply whether you’re examining a ghost video, staring at a sensational screenshot from Grokipedia, or weighing a claim in a viral thread about vaccines, elections, or whatever the panic of the week happens to be. You don’t have to know the answer immediately; you just need to know how to start checking.
Compassionate skepticism in a polarised world
If you’ve ever tried to talk a loved one out of a conspiracy rabbit hole, you’ll know that “here’s a long list of fact-checks” rarely works on its own. People don’t cling to false beliefs just because they’re stupid or ignorant, but because those beliefs make emotional sense, or because they feel like the only people who take them seriously are the ones feeding them the misinformation.
Defending facts in this context looks less like a courtroom and more like a long, patient conversation. It means asking questions, listening for the emotional undercurrents, and gently introducing alternative sources. It also means modelling the kind of epistemic humility we want to see: being willing to say “I don’t know”, “I got that wrong”, or “the evidence has changed”.
That’s one of the things Wikipedia gets right at its best – the acknowledgement that knowledge is provisional. Articles change, evolve, get corrected. There’s a “View History” tab precisely because no entry is carved into stone tablets. That doesn’t mean anything goes, but it does mean the standard is “verifiable and well-sourced right now”, not revealed truth forever.
At the end of the TIME piece, Wales gives a bit of advice that sounds almost too simple: if you find yourself spending too much time on platforms that constantly feed you information you don’t trust, stop using them so much. Audit your feeds and delete apps that make you feel worse and less informed. It’s obvious, but it’s also radical because for all our talk about defending facts, a lot of us are still voluntarily wading into streams of content designed to keep us angry, confused, and scrolling. We’re treating our attention like it’s infinite and our trust like it’s irrelevant. Neither is true.
The truth is powerful precisely because it’s fragile, and the reason people try so hard to control it is that it actually matters what we believe. Beliefs shape policy, health decisions, elections, families, entire lives. In a world where AI encyclopedias can rewrite reality to flatter their owners, there’s something quietly revolutionary about messy, collaborative, human truth-seeking. That’s what Jimmy Wales reminded me of, reading that interview – we’re not just defending abstract facts. We’re defending the possibility that ordinary people, working together in good faith, can still build things worth trusting – and that, to me, is worth fighting for.