The Online Safety Act doesn’t make our children any safer

Author

Mike Hall (https://mikehall314.bsky.social/)
Mike Hall is a software engineer and Doctor Who fan, not in that order. He is the producer and host of the long-running podcast Skeptics with a K, part of the organising committee for the award-winning skeptical conference QED, and on the board of the Merseyside Skeptics Society.


Last month, after many years of delay, the UK finally brought key provisions of the Online Safety Act into force. These new rules create a ‘duty of care’ for online platforms, requiring providers to take action to remove illegal content (such as depictions of sexual abuse), as well as legal content that may be harmful to children.

If you’ve been following UK politics for a while, this may all feel very familiar. Under David Cameron, the government attempted to implement similar blocks at the Internet Service Provider level, requiring broadband and mobile providers to block explicit content unless users specifically opted in. Critics at the time pointed out that the plan was based on an emotionally loaded and thought-terminating appeal to ‘think of the children!’. In debates about online harms, this phrase is a rhetorical nuke. Once detonated, it permits no further discussion beyond ‘so you want to hurt children, then?’

Later, Theresa May, first as Home Secretary and then as Prime Minister, introduced the Investigatory Powers Act, also known as the ‘Snoopers’ Charter’. Among other things, this introduced a requirement for operators of encrypted online services, such as WhatsApp, to add a ‘backdoor’ to allow authorities to inspect people’s private communications. This too was done in the name of protecting children.

The latest regulations, which became law in the final days of the Rishi Sunak government, take things a step further. In practice, the ‘harmful content’ referred to largely (though not exclusively) means pornography. That’s why, as of July 25th 2025, UK users of many websites are being met with mandatory age verification checks. This includes the obvious targets like Pornhub, RedTube, and XVideos, but also social networks such as Reddit, Discord, and Bluesky, where explicit material can live alongside more general content.

There are a variety of ways these verification mechanisms can be implemented. Take Bluesky’s system, for example. Bluesky validates through a service called KWS, which is provided by Epic Games, the creators of Fortnite. Users can either submit a credit card number or have an AI guess their age based on a facial scan. Both options have significant problems.

In 2024, the US National Institute of Standards and Technology (NIST) evaluated facial age estimation software by testing six systems on eleven million images of people with known ages. The report is very large, but we’re interested in three key metrics, which reveal the limitations of this sort of scan.

First is the mean absolute error, the average gap between the estimated age and the true age. This ranged from 2.3 to 5.1 years, depending on the software used, how it is configured, and the quality of photographs. That’s negligible if you’re 56 and mistaken for 51, problematic if you’re 23 and blocked for ‘looking’ 17, but potentially dangerous if you’re 13 and waved through as appearing 18.

Second was the binary false negative rate, the percentage of adults who were incorrectly flagged as underage. This occurred in 4% to 30% of cases, again depending on the system, setup, and photo quality. This would mean a significant number of adults attempting to access legal content would be incorrectly blocked.

Finally we have the binary false positive rate, the percentage of 13-17 year olds who were incorrectly estimated as adults. This reached as high as 45% in some cases, again depending on various factors. While this represents an extreme, and typical results were better, it still means a significant fraction of younger teens could potentially be granted access to adult services.
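To make these three metrics concrete, here is a minimal sketch of how they could be computed from a batch of estimated and true ages. The numbers below are made up purely for illustration; they are not drawn from the NIST evaluation.

```python
# Illustrative sketch of the three NIST-style metrics, computed over made-up data.
# The ages below are invented for demonstration; they are not from the NIST report.

true_ages      = [13, 15, 16, 17, 19, 22, 25, 31, 45, 56]   # ground-truth ages
estimated_ages = [18, 14, 19, 16, 17, 20, 23, 29, 41, 51]   # what the scanner guessed

ADULT_THRESHOLD = 18  # the age gate the system is enforcing

# Mean absolute error: average gap between estimated and true age.
mae = sum(abs(est - true) for est, true in zip(estimated_ages, true_ages)) / len(true_ages)

# Binary false negative rate: adults (18+) wrongly estimated as under 18.
adults = [(est, true) for est, true in zip(estimated_ages, true_ages) if true >= ADULT_THRESHOLD]
fnr = sum(est < ADULT_THRESHOLD for est, _ in adults) / len(adults)

# Binary false positive rate: 13-17 year olds wrongly estimated as 18 or over.
minors = [(est, true) for est, true in zip(estimated_ages, true_ages) if 13 <= true < ADULT_THRESHOLD]
fpr = sum(est >= ADULT_THRESHOLD for est, _ in minors) / len(minors)

print(f"MAE: {mae:.1f} years, FNR: {fnr:.0%}, FPR: {fpr:.0%}")
```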

NIST also notes that these systems are prone to demographic biases. For Asian and African faces, the error rates were much higher; some age estimates for African people had an error as large as 40 years. Additionally, while NIST does not cover this specifically, these systems will also likely break down when estimating the age of people with certain types of disability.

But never fear! If you fail the face scan, or you just don’t fancy it, you can instead prove your age with a credit card. In theory this works, because UK credit cards can only be issued to over-18s; in practice it excludes huge swathes of adults. According to the website Merchant Savvy, 35% of UK adults have no credit card at all. This figure jumps to 71% when considering just 18-24 year olds, the group most likely to have failed the face age scan.

At a rough estimate, if 30% of the UK’s roughly 55 million adults fail the face scan, that’s 16.5 million people unable to access adult services they are legally permitted to access. If 35% of those also have no credit card, that is over five million adults who are left frozen out.
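For anyone who wants to check the arithmetic, the same back-of-the-envelope estimate can be written as a short calculation. The 55 million figure for the UK adult population is an approximation, and the percentages are simply the upper-end figures quoted above.

```python
# Back-of-the-envelope version of the estimate above (assumed figures, not official data).
uk_adults = 55_000_000          # approximate UK adult population

face_scan_failure_rate = 0.30   # upper end of NIST's binary false negative range
no_credit_card_rate = 0.35      # Merchant Savvy figure for adults with no credit card

failed_scan = uk_adults * face_scan_failure_rate   # roughly 16.5 million
locked_out  = failed_scan * no_credit_card_rate    # roughly 5.8 million

print(f"Failed face scan: {failed_scan:,.0f}")
print(f"No credit card fallback either: {locked_out:,.0f}")
```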

Moreover, the credit card check does not even reliably block children. Teenagers can and do borrow – or steal – cards from parents. Even well-meaning parents may unwittingly hand over a card for what they think is a harmless account check, not realising it grants access to explicit content.

Leaving aside the verification step itself, the protections put in place to prevent unverified users from accessing adult services are often more of a minor inconvenience than a genuine barrier. Some sites display verification overlays that can be removed using the web browser’s built-in developer tooling, or by using ad-blocking technology. These are not hypothetical weaknesses: both methods currently work on some major adult sites, including from a mobile phone. Our elaborate age verification process can therefore be circumvented by anyone with basic digital literacy, including the teenagers the system was designed to protect.

Virtual Private Networks, or VPNs, are an even more obvious route. Because these checks are applied only to UK users, routing your traffic via another country bypasses them entirely. Many VPN service providers specifically advertise their product as being useful to bypass geographic blocks, and have been doing so for years. In the immediate aftermath of the age verification coming into force, one VPN provider told the BBC they had seen an 1,800% spike in downloads.

Research from Stanford University, published in pre-print in March, found that similar laws in the United States also led to a significant rise in VPN searches. Perhaps more concerning, the same research indicated a 48% increase in searches for porn sites known to refuse to implement the age gate. These are often the least regulated sites, with the fewest safeguards, and therefore the places most likely to carry harmful or even illegal content. This creates a perverse incentive where the law pushes users (including teenagers) toward the very sites most likely to host the content the legislation was meant to address.

Even if the technology worked flawlessly, the privacy implications are serious. Age verification is usually tied to an account, meaning you must now sign up and log in to access adult content. These records become a liability in the event the website suffers a data breach, potentially exposing the personal information and viewing history of that service’s users and opening them up to significant social stigma or even blackmail.

Beyond porn, the Act applies to all ‘user-to-user’ services. This means services where some users create content and other users consume it, for example Facebook, Discord, Reddit, forums, chatrooms, and so on. This places heavy moderation requirements on these platforms. Large companies can absorb the compliance costs, but smaller, niche communities cannot. Volunteer-run forums like furry.energy (an LGBTQ-friendly Mastodon server), Dads with Kids (a forum for single fathers), and Green Living (a sustainable lifestyle forum) have already shut down rather than risk fines.

The impact is not evenly distributed: marginalised groups often lose their spaces first, while discussion consolidates on large platforms that survive by monetising user data.

The Act also risks locking out teens from non-salacious but nevertheless ‘mature’ discussions. Reddit has already added age verification to forums for sexual assault survivors, queer and trans discussion forums, and eating disorder support groups – spaces teenagers may urgently need.

Even Wikipedia is threatened by this law; since users both write and consume its content, it technically qualifies under the law and could be forced to put some articles behind logins and age gates. The Wikimedia Foundation attempted to pre-emptively challenge the applicability of the new regulations to Wikipedia via a judicial review, but in a ruling on August 11, Mr Justice Johnson dismissed the foundation’s claims. It is now within the power of Ofcom, the Secretary of State for Science, Innovation and Technology, or any future holder of that post, to determine whether Wikipedia will be subjected to the Act’s most stringent requirements, including user identity verification. This classification can be made effectively arbitrarily, as the Act grants the Secretary of State broad powers to redefine Ofcom’s codes of practice without additional legislation and with only minimal justification.

The Act also places a burden on encrypted messaging services like WhatsApp, Signal, and Apple’s iMessage. Despite these powers already existing in some form (thanks to the Snoopers’ Charter), the Act again demands that authorities have ‘backdoor’ access to private conversations, just in case they carry harmful content. Experts point out there is no such thing as a ‘good guys only’ backdoor: once it exists, it can be abused by anyone, including hostile foreign governments, who could use the same mechanisms to obtain access to the private communications of British citizens and officials.


Following the receipt of a ‘technical capability notice’ under the Snoopers’ Charter earlier this year, Apple has already withdrawn some privacy features from the UK market rather than comply. Other platforms, including Signal and WhatsApp, have indicated they would exit the UK entirely before weakening their encryption. This creates a precedent where UK citizens will find themselves cut off from the privacy protections available elsewhere in the world.

In response to this pushback, the government has pinky-promised they will not enforce this part of the Act until such time as it is technically possible to do so. This, however, remains little more than a policy promise, which could be discarded on a whim tomorrow. The promise has no legislative support, the legal powers remain on the books regardless of current enforcement intentions, and future governments could activate them without new legislation.

None of this is to deny there’s an issue to solve. Today’s teenagers have instant access to extreme forms of pornography that were rare or inaccessible even 20 years ago. Material that would once have been considered niche BDSM content is now mainstream. Practices which were once the preserve of the fetish community (such as degradation, choking, and hair-pulling), and were enacted within a framework of negotiation, enthusiastic consent, and safeguards for all involved, are now presented without that context, as if that is what sex is usually like.

This can and does shape expectations, particularly affecting the kinds of acts young women are expected to perform during their first, tentative sexual experiences. This is not an imaginary problem, and ready access to pornography does seem to be a proximate cause, but the blunt instrument of the Online Safety Act is poorly suited to addressing it.

How else might this problem be effectively addressed? Children and teenagers are perfectly capable of understanding the difference between fantasy and reality when given the appropriate tools. No-one is watching RoboCop and thinking that is how policing works. Teenagers need to understand that porn also presents a fantasy world; it is not reflective of real sex, or typical sexual relationships.

A 2020 pilot study of a porn literacy programme found that after a single workshop, the percentage of teens who wanted to emulate porn dropped from 44% to 29%, and the number who thought porn was a realistic depiction of sex fell from 26% to zero. As part of the workshop, the teens were taught that they needed to ask for consent for each new sexual act during a sexual encounter, and that certain acts (e.g. anal sex, spanking, degradation) may not be as widely enjoyed by women as they might presume after watching porn. Importantly, there was no increase in porn consumption after the workshop, and no reported distress from discussing the topic openly.

While small in scale (just 31 participants, no control group), these results are consistent with wider research: education reduces harmful misconceptions without intrusive surveillance or censorship.

The Online Safety Act is tackling the wrong end of the problem. Age verification tools are inaccurate, intrusive, and easily bypassed. Compliance demands drive out smaller communities and centralise control in large corporate platforms. It risks pushing curious teenagers towards less safe spaces while simultaneously denying them access to beneficial ones.

If the goal is to protect children, the answer is not to slam the door, but to give them the critical skills to navigate what’s behind it. That is to say, comprehensive, non-judgemental sex and media literacy education.

The Online Safety Act may be well-intentioned, but it is a fundamentally flawed attempt to legislate away a complex social issue.

The Skeptic is made possible thanks to support from our readers. If you enjoyed this article, please consider taking out a voluntary monthly subscription on Patreon.
