A list of “serious” subjects might include the bio-sciences, nanotechnology, artificial intelligence, and cybersecurity. These demand rigour, honesty, skepticism, far-sightedness, and ethical, honourable practice. Cybersecurity belongs on that list because it requires care, without which grave harms can occur.
Today the world rests on reliable computing. Our food and material supply chains, water, sanitation, energy security, financial and medical systems are one cyber-attack away from collapse. That’s partly our own fault for wiring up everything from children’s dolls to electric pencil sharpeners, but that horse has long since left the stable.

Cybersecurity, however, is very far from being “scientific”. Some might say it isn’t even supposed to be; the humanities and politics play a big part too. Yet it is presented to the public in a way that I feel is dishonest: it’s cast in a light similar to engineering disciplines, or medicine, and it is common to hear “experts” cited during arguments.
Having spoken to many great thinkers through three decades of work in computer science and cybersecurity, I’d say that, to a good approximation, there are zero cybersecurity experts, myself included. That isn’t to disparage my learned colleagues, but to speak to the extent and difficulty of the field. I hope the most honest, accomplished and humble among them would passionately agree. Let’s go through some qualities that make cyber unique, and some it has in common with other fields that generate smoke.
First off, the stakes are high, yet the subject has poor legibility to the average person, and it is considered dull. That combination makes it emotive. People naturally bracket the subject out of their purview. A common response to questions of cybersecurity is “Shouldn’t the government fix that?” It evokes a sense of helplessness.
People call intelligence operatives “spooks”. That alone should raise a red flag for any skeptic: the whole subject is deliberately shrouded in mystique. In an attempt to attract talent, the field is popularly conflated with policing or spy-craft and given a veneer of naughty excitement. “Hacking for the good guys” sounds a bit sexy and cool, but mentioning “cybersecurity” is a sure conversation killer.
On an individual level we’re extraordinary professionals. The skillset is closer to that of a scientist, detective or doctor: constant research, great caution, care, and respect for evidence are required, and a genuine interest in the welfare of potential victims is still considered a virtue. Stress from the burden of responsibility and constant crisis response is overwhelming; burnout and mental health issues are commonplace. It is chess-boxing, a kind of academic soldiering, only without the boots and fieldwork. But that’s where the honour and dignity end.
The distance between the values of the cyber professional and those of the “industry” as a whole is remarkable. “Lions led by donkeys” hardly covers it. As an industry it is a circus of chancers, fashionistas, hocus pocus, wishful thinking, extraordinary claims, power politics, and outright deception or corruption.
Security often attracts cranks precisely because it is emotive. The notable case of the ADE 651, a fraudulent bomb detector, illustrates just how thoroughly large and powerful organisations like the military can be hoodwinked, because challenging the rigour or scientific basis of security arrangements is discouraged for many reasons. The device was nothing more than a coat-hanger or “divining rod”, alleged to work by “electrostatic magnetic ion attraction”. Millions were spent on technology that independent auditors finally deemed “absolutely useless”, prompting a criminal investigation. Lives were almost certainly lost as a result.
Why are critical thought and evidence-testing discouraged in the security field? Partly because one aspect of security practice is deterrence, or security theatre. Airport queues would take too long if full security were exercised, so there’s a lot of make-believe and ritual to attend to: metal detectors might be simply switched off, or set to thresholds so extreme you’d need to be carrying a tank in your pocket. There are still good deterrence and assurance arguments for security theatre, so nobody wants the Wizard of Oz curtain pulled back.
One way to win a battle is to convince the enemy you have superior force or knowledge. This is the psychological aspect of warfare. Fake CCTV cameras are a common example: perhaps one in three cameras is inactive or fake, yet their visible presence is effective. Though physiological stress measurements are possible with a variety of transducers, the field of “polygraphy” is bunk. Even so, a “lie detector” is an effective intimidation tool if the subject believes in it, and results are better still if the operators believe in it too. This presents a most interesting yet dangerous situation: security practice often utilises deception, but also requires its own self-deception in order to work. Very seldom is it subjected to the scrutiny and skepticism it deserves.
Security is both a reality and a feeling. Ideally these are matched, as both are important, but this duality makes security hard to measure. For example, modern air travel has extraordinarily safe engineering, maintenance schedules, checklists and pilot training. Yet people still fear flying. They feel unsafe, even when they are in fact quite safe. Here, the mismatch between reality and feeling is skewed toward caution; passengers on the “unsinkable” Titanic had the opposite problem. Overconfident claims by the White Star Line led directly to deaths. People felt safe, even while they were in great peril.

Without real metrics, cybersecurity is an area where vendors make exaggerated yet unverifiable claims to sell assurance. Cybersecurity products only need to make people feel secure to win markets. Furthermore, products are not the same as actual security, nor is security the same as “protection”, which is invariably a racket.
The harms accrued are many steps removed from the seat of failure. There is, in the language of legal philosopher Andrew Kimbrell, a “cold evil” at play, where diffusion of responsibility meets great complexity, creating cover for the unscrupulous to hide. If your surgery is disrupted, was it the management software? The firewall? The operating system update? The network operator? Who knows? The system, taken in concert, is unfathomable. Somewhere, someone made a buck. Somewhere else, on the far side of the planet, a preventable death occurred.
Since the markets are massively anti-competitive and run by monopolists, it makes little odds whether things work or not; the penalties for failure are almost nonexistent. CrowdStrike, Amazon, and Microsoft are all still in business, despite racking up losses of hundreds of millions of dollars and troves of customers’ data. Their chiefs and failed security officers still circulate in the revolving door of upper management. Governments bail out companies like Jaguar Land Rover. Militaries hide damning reports from security auditors out of sheer embarrassment.
Complexity plus scarcity of real expertise makes for a lethal brew. Even the very best experts admit to being overwhelmed and unable to keep up. There are perhaps a few dozen computer scientists in the world who have the education, age and experience to speak authoritatively about the full spectrum of computer security. They’re getting old, and they are not being replaced.
Education is extremely poor. The pipeline for creating top-tier cyber personnel dried up long ago. Universities dropped the ball on induction, on quality, and on understanding the changing landscape. Industry demand for competence is so high that education cannot afford to keep professors, and universities drove out the brightest and best with low wages and disrespectful working conditions.
Courses are dominated by vendors like Microsoft, Cisco and Google, who ply proprietary wares in the hope that graduates will fall into their corporate orbit. Instead of critical thinking and deep understanding, a regime of shallow compliance has prevailed, forced down by governments desperate to tackle a cyber-crime wave. The result is “security by numbers”.
As well as authorities discouraging rational examination, there is an intrinsic fear of examining our security situation. Confronting reality is difficult. Free cancer screenings and other health tests won’t always get takers, because we can be irrationally but understandably “afraid to know”. It’s easy to be skeptical of things we’ve no serious stake in: if astrology or homeopathy prove bunk, little is lost to most of us. However, our lives rely on the phones, tablets, and online service accounts we use daily. To lose belief in those systems, and in the apparent authority behind them, is terrifying. Fear of iconoclasm is so intense it sometimes feels like Stockholm syndrome. Cybersecurity has evolved into a sacred totem, such that people within the profession who criticise it risk their careers.
Science concerns what is evident, testable, and reproducible. Outside theoretical computer science, very little in practical computing is positively provable, and even less so in computer security. It is the problem of “proving a negative”, in the sense of Russell’s teapot: we get proof that a system is insecure only when it fails, often catastrophically, in deployment. Despite incident response, root-cause analysis, lessons learned, and the updating and tweaking of systems, improvements are rare and immeasurable. A week later another breach occurs, in an endless game of whack-a-mole. In software engineering, quality and proofs of correctness remain hot and messy issues.
A central security maxim is “Trust but verify”. When you cannot verify, all bets are off. Most of what should be transparent is the opaque, proprietary code of monolithic Big Tech firms, who hide behind trade secrets, software patents and digital rights laws that prevent examination. No serious security is possible, even in principle, beyond the call to “Just trust us”. All of this is massively exacerbated by “AI”. A powerful, though incomplete, solution is to mandate Free and Open Source Software in all critical systems.
Trust, on the other hand, should never be given lightly. Many security companies have been caught making backroom deals with governments to install weaknesses (known as backdoors), or even colluding with enemy nation states. “Experts” are unsure whether cybersecurity is a ten-trillion-dollar “industry” or a ten-trillion-dollar “criminal liability”. Why? Because some of the companies play both sides. There is a massive “legal” insecurity industry selling phone-hacking tools, spyware, stalkerware, and malware to criminals and governments.
Duplicity of purpose blights digital security. Many systems are touted as being “for security” and placed as gates to desirable or necessary goods and services. When “security” is used as a bare noun, people do not ask the essential questions: whose security, yours or mine? Security from whom, or from what harm? Security to what end?
In fact many systems offer little or no security to the user; they exist to secure private profit from the customer. Face-recognition cameras in supermarkets are not there to make you safer. Indeed, “security” is used as a cover to gather unnecessary but valuable data, and to track and snoop on people. Organisations then sell the data collected under that pretext.
Influence also plays a big part in distorting perceptions of digital security. Tech firms are, of course, among the wealthiest entities on the planet. Giant vendors have obscene budgets to spend on every imaginable form of marketing: decades-long international propaganda campaigns, lobbying, regulatory capture, outright bribery, and threats. For the skeptical scientist who technically questions the security of a mainstream system, or even for the seasoned journalist who has collected mountains of damning evidence, it’s a tooth-and-nail struggle to be heard even at the crankiest margins. It took Alan Bates over twenty years to expose and challenge the corruption at the heart of the British Post Office scandal. A spiral of silence, general apathy, complacency and fear keeps the public oblivious to the code that runs, and frequently ruins, their lives.
A recent trajectory is simply to insure against failure. This reduces cybersecurity to mere “risk management” and eviscerates the real skill base. Even with eye-watering fines for failure, nobody higher up is interested in the hard, painstaking “egghead” work of securing a system when failure can simply be budgeted for. One upshot is inequality and “security poverty”: money now buys protection, and only the biggest, richest companies can afford to meet government regulations.
Rational skepticism need not stop at the corporate level. There are few cyber skills within government not tied up in intelligence and national security work, so public agencies turn to third-party providers in “public-private partnerships”. GCHQ famously outsources (sensitive?) data storage to Amazon. There is scant public scrutiny of government IT contracts. Remember “test and trace”? Although a sensible idea given the threat of COVID and the need to restart the economy, it was deemed a failure because it was implemented with almost total contempt for privacy, security, diversity, openness, financial accountability and correctness.

I believe that the science is sound, and such a tool could be crucial for future disease control. But when it comes to technology, governments seem unable to resist the temptation of corrupt dealings with costly consultants who have their own data-collection agendas, overbearing authoritarianism, and disregard for individual rights. Ironically, by backing off the “technofascism” and the proximity to shady security companies, we could all be made a lot more secure in reality.
Now, for example, some governments wish to use companies like Palantir, Microsoft or other US service vendors for things like “health record analysis”, “digital ID cards”, or “age verification”. Whether you consider these projects useful and necessary or authoritarian over-reach, a bigger problem looms: the cybersecurity is, in the confession of top US government cyber experts, “a pile of shit”.
Governments may glibly give the public “the strongest assurances that these systems are secure”, but those claims have no basis in fact, nor are they verifiable in principle. The appalling track record of all such IT systems speaks loudly and clearly for itself.
I’ve only sketched a few of the conditions that make cybersecurity an area where any curious skeptic should dig deeper into claims made in the media. It’s an area where you are not supposed to look, and few people will thank you for doing so. Next time some bank, face-scanning company, your kid’s school system, private blood-testing clinic or whoever tells you “Don’t worry, our systems are completely secure”, ask them, as a skeptic, “Can you prove that?”.