The digital doppelgänger: how algorithms decide who we become

I love animals. But I don’t own a pet – not even a goldfish. So when I started getting advertisements for luxury pet coffins that emphasised “velvet-lined mahogany boxes, tiny urns with angel wings” or “premium resting solutions for your furry loved ones”, at first I couldn’t help but laugh.

But then I stopped. Somewhere out in the “digitalverse”, an algorithm had decided that I was grieving. It had connected digital dots that I didn’t even know existed. Perhaps it was from a post that I lingered on for a moment, or a keyword in a message, or a photo I once liked. Whatever it was, there was now a digital version of me that wasn’t me at all. The algorithm had created someone who was sentimental, heartbroken, ready to spend money on eternal peace for a pet dog who never existed.

That’s when I realised: the internet doesn’t sell to who we are, but to who it thinks we are. Somewhere in that invisible process, it quietly builds a mirror self – an algorithmic reflection that begins to shape what we see, what we think and, eventually, who we become. If my ‘for you’ page could mistake me for a grieving pet owner, what else could it decide about me? And for me?

I started paying attention. Instagram, YouTube and even Spotify seemed to know which version of me to feed. Some days I was the overachiever who needed productivity hacks for my studies. Other days, I was the comfort seeker who wanted to listen to lo-fi beats and watch cat videos.

It wasn’t magic; it was maths. These algorithms, I later learned, don’t understand people the way we think they do. They track things like hesitation, repetition and emotion, and translate them into patterns, probabilities and predictions. Every scroll becomes a data point. Every pause, a confession.

Professor Mohammed Hammoud, who teaches computer science at Carnegie Mellon University in Qatar, described it to me like this:

“Algorithms don’t see humans – they see behaviour. They connect dots between your actions and everyone else’s, to predict what you’ll do next. But when those predictions start defining what you see, that’s when they start defining who you are.”

This is the part that stuck with me the most: if the machine’s guesses are shaping the lens I look through, then even my own thoughts might become reflections of someone else’s pattern.

Once I started seeing the algorithm’s invisible hand in my own feed, I could see it in other people’s lives too. Some friends’ Instagram stories became anxiously political, while others’ YouTube home screens looped self-improvement mantras, and still others’ TikTok ‘for you’ pages were trapped in conspiracy spirals. Even though we all live in the same city, our digital realities were completely different.

Why? Because our feeds weren’t random reflections of our interests; they were carefully constructed reflections of whatever would keep us there. Studies show this isn’t a coincidence. Algorithmic curation “may reduce exposure to diverse content or different viewpoints. This in turn can create echo-chambers, polarisation of ideas and, in the worst case, massive circulation of fake news”, according to Lorena Blasco, a professor at ESCP Business School. The personalisation process can become a ‘filter bubble’ – a system that “limits exposure to diverse viewpoints” across major platforms.

At the core, every click, pause or skip becomes a data point. Algorithms don’t care whether you’re a grieving pet owner (or not). They care whether you linger long enough, engage enough, scroll slowly enough. Once that behaviour is logged, a ‘you-profile’ forms: a version of you optimised for engagement. That profile decides what you see next on your feed.
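To make that loop concrete, here is a deliberately simplified sketch in Python of how an engagement-driven ‘you-profile’ might work. Nothing in it comes from any real platform – the signal weights, the decay factor and every name are invented for illustration.

```python
# Toy sketch of an engagement-driven profile. All weights, names and the
# scoring scheme are invented; real recommender systems are vastly more
# complex (and proprietary).
from collections import defaultdict

# How strongly each logged behaviour counts as "interest" (invented values).
SIGNAL_WEIGHTS = {"click": 1.0, "like": 2.0, "pause": 0.5, "skip": -1.0}
DECAY = 0.9  # older behaviour fades; recent behaviour dominates


class YouProfile:
    """A version of you optimised for engagement, not accuracy."""

    def __init__(self):
        self.interests = defaultdict(float)  # topic -> inferred interest score

    def log(self, topic: str, signal: str) -> None:
        """Every click, pause or skip becomes a data point."""
        for t in self.interests:          # let old interests fade a little...
            self.interests[t] *= DECAY
        self.interests[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)

    def rank(self, posts: list) -> list:
        """The profile decides what you see next: highest inferred interest first."""
        return sorted(posts, key=lambda p: self.interests.get(p["topic"], 0.0),
                      reverse=True)


profile = YouProfile()
profile.log("pets", "pause")      # lingering on one post is enough...
profile.log("pets", "click")
profile.log("politics", "skip")

feed = profile.rank([
    {"topic": "politics", "title": "Election explainer"},
    {"topic": "pets", "title": "Luxury pet coffins"},   # ...to put this on top
])
print([p["title"] for p in feed])  # ['Luxury pet coffins', 'Election explainer']
```

Even in this toy version, one lingered-on post is enough to push the pet coffin ad to the top. The profile never asks whether the interest is real, only whether it keeps you scrolling.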

It’s chilling, because what’s being shaped is not just our tastes, but our identities. The studies are clear on this. When personalisation filters out diversity of viewpoints, it narrows the world we think we live in. Researchers found that increasing use of recommended content on social media correlates with more segmented online communities and greater polarisation of information.

This effect shows up in politics. In one analysis of Facebook’s News Feed, liberal users were shown 8% fewer conservative viewpoints, and conservative users 5% fewer liberal ones. These may not sound like huge numbers, but scaled across millions of users and interactions, the skew becomes hugely significant.

Pieces, reflected back. By Savannah B., via Unsplash

Whispered certainties

We tend to think of manipulation as a loud voice shouting lies. But what if the manipulation were gentle? A soft suggestion, a comforting curation, an unbalanced feed. It doesn’t need to scream propaganda at us if it can whisper certainties so gently that we begin to believe them. A trap disguised as convenience.

What’s worse, it may be a trap that not everyone is aware of. A study spanning four countries identified an ‘algorithmic knowledge gap’: people’s ability to understand these algorithmic systems is unevenly distributed. Younger users and more frequent social media users had better algorithmic knowledge, but gaps persist across education, gender, age and nationality. If you don’t know the ‘mirror’ is there, you can’t question what it shows you; when you live inside a reflection, you rarely see the frame.

It’s rare that these platforms set out to radicalise, censor or brainwash. Their content is optimised for engagement, but when content is optimised at scale it can still reshape society – not by pushing you into a radical ideology, but by gradually narrowing your views until your thinking feels inevitable.

It is too easy to cast the algorithms as the villains. Yet they also make our lives bearable. Without personalisation on social media, the internet would be chaos: millions of videos, articles and opinions thrown at you all at once. Algorithms filter that noise into something more manageable for one individual. They guess what you might like next, and sometimes they’re right. I spoke to a media researcher who admitted that ‘a neutral feed is a myth’ – there is simply too much content for neutrality to exist. Platforms like Spotify recommend songs that become personal soundtracks. Netflix learns that I prefer psychological dramas to comedies. Convenience is addictive, and that’s a trade-off we have implicitly agreed to. The issue is not the existence of algorithms, but rather the transparency of their design.

The issue isn’t that machines are shaping us, but that we as users rarely get a chance to shape them back. Transparency, not total rejection, might be the skeptic’s answer. For all their flaws, algorithms can still show us something about ourselves – our habits, our biases, our cravings for certainty. They may distort the reflection, but they didn’t build the mirror alone.

Reshaping the algorithm

An awareness of these algorithms can materially affect how you engage with them. Every time I open my phone, I see the ghosts of myself hiding between the posts: the version that shops at midnight; the one that reads opinion pieces about burnout; the one that always stops at animal videos. Each is built from tiny choices I didn’t even know I was making.

When I started writing this article, I thought I was studying algorithms. But they were also studying me. Over the weeks of research, my feed began to shift. The more I searched for studies on filter bubbles and personalisation, the more I was shown videos about ‘digital detoxing’ and content about ‘critical thinking’, ‘AI literacy’ and ‘mindfulness technology’. I was even served ads for online privacy tools. At first it felt ironic; then it began to feel invasive. I am still inside the machine – it just decorates my cage with whatever awareness it has of me.

Whenever I scroll these days, I pause. Not to resist what’s inevitable, but to at least notice it. To remind myself that the reflection is not the whole picture. This is just a version of me the machine has chosen to remember.

I keep thinking about that first pet coffin ad, and how confidently it spoke to a version of me that never existed, built from clicks and pauses. I realised that what unsettled me most wasn’t that the algorithm guessed wrong, but that it guessed at all. The line between influence and identity gets thin online, and somewhere between all the scrolling and searching, I started to wonder how many parts of myself had been shaped this way. How many beliefs I assumed were mine were products of what I’d been selected to see? How many of my preferences were things I never chose, but simply absorbed because they kept showing up online?

Who decided who I am?
