As AI and the discourse around it infect every facet of our lives, it is difficult to know what it means to be a skeptic, or to remain skeptical, in the age of AI. Worse still, it can be difficult to ask questions or find reliable sources on this subject, as AI discourse has quickly become one of the most cursed on the internet. Given that the AI bubble is unlikely to burst in the very near future, it is helpful to understand both why this discourse has become so cursed and how to approach the major issues around AI with healthy skepticism, because the alternative seems to be an increased risk of a range of harms.
One major reason that AI discourse has gotten so bad is that it is always taking place in the shadow of AI hype. We have been at what feels like a peak of AI hype for some time now, with talk of the technology always being “just a few years” away from breakthroughs that will upend human civilisation for the better, and these extreme proclamations make it difficult to discuss both the growing capacities of this emerging technology and the harms it causes. In this high-intensity environment, even the most banal statements about advances in AI capacities are often perceived as if they were full-throated defenses of the technology, while reasoned criticisms are too easily dismissed as virtue signalling on the moral topic of the week. Reality tends to be complex, but hype flattens every attempt to discuss those nuances.
With that mental hazard clear in our sights, we are better positioned to work through some of the knotty issues surrounding AI, particularly the Large Language Models (LLMs) like ChatGPT that are dominating the current environment. I want to focus on three interconnected issues that experience some of the heaviest distortion from AI hype: can AIs understand, can they be conscious, and is it ethical to work with them?
Can AI understand?
One major point of contention is the degree to which it is accurate to say that AI understands the inputs it receives, the outputs it produces, or the process by which it gets from one to the other. Those who believe that AI does have understanding point to examples of cogent responses to complex and novel inputs, including material the AI was not trained on, as proof of its ability to comprehend. Critics argue that the appearance of understanding is merely an illusion produced by a predictive text algorithm, and cite its inability to reliably count the number of Rs in “blueberry” as proof that it lacks the understanding of a six-year-old.
Both views are right in a sense; the emerging capacities of LLMs to take in and respond to novel inputs are genuinely impressive, but the models also continue to have points of absurd fragility that no amount of hype can plaster over. As a result, both sides of the discourse end up feeling gaslit about what they are seeing with their own eyes, and so tend to infer that the people on the opposite side are arguing from a more biased perspective.
The reality, though, is that more often than not people are talking past each other because they haven’t distinguished between what I call internal and external understanding. Internal understanding is what we normally think of as the gold standard of understanding: the internal mental state of recognising the connections between concepts and representations well enough to form a more accurate picture of how the world works. External understanding, what many would consider mere mimicry of understanding, occurs when an entity can respond in ways that are comparable to the responses provided by individuals with internal understanding.
The most advanced AIs currently in existence possess increasing amounts of external understanding while likely not developing anything like internal understanding. This is why it is correct to say that they display adult human levels of external understanding while lacking the internal understanding possessed by preschoolers.
Critics can rightly point out that the absence of internal understanding creates the possibility for blueberry-style fiascos, because there is no internal awareness of the absurdity of a response to stop the model from looking like an overconfident fool. However, internal understanding doesn’t always translate into external understanding, as in the case of experts whose practical understanding is so ingrained that they’re unable to convey it effectively to others. So, in at least some cases, an entity with reliable enough external understanding can be more helpful than one with internal understanding.
This is why deriding AIs as “cold reading machines” or “glorified autocompletes” falls flat in the face of actual examples. It fails to grasp the significant degree to which these AIs have learned to understand text in the external sense. They have no phenomenal experience of that understanding, and we can debate how much that absence limits their capacity for reliable responses, but the understanding they already display in the external sense was unthinkable even five years ago, and it will likely continue to improve. Which raises the next question:
Can AI be conscious?
As with the first question, the answer here depends on what we mean by “conscious”. When this topic comes up in the context of AI, the question is usually whether AI can develop internal states of awareness comparable to the ones humans and other evolved organisms experience. To avoid ambiguity, philosophers often refer to this with terms like phenomenal consciousness, subjective experience, or sentience. The idea is that there is something it is like to be you, in a way that there is nothing it is like to be a chair or a laptop. How we treat entities is heavily influenced by whether we think they have phenomenal consciousness, with many treating it as essential to being a member of the moral community.
As with understanding, AIs will likely continue to improve in their ability to mimic consciousness in their responses while not developing anything like subjective experience. The reason is what is called the hard problem of consciousness: we don’t really understand how or why consciousness seems to arise in some entities and not others. For biological entities that evolved in ways similar to humans, consciousness seems tied to the functioning of a central nervous system, but it doesn’t follow that a meat brain is essential to consciousness; it’s just the only place where we’ve found it so far.
What makes the hard problem so hard is that consciousness, as a fundamentally subjective experience, can’t be a direct object of scientific study. At best, we can only study consciousness indirectly through people’s external behaviors. As a result, we have an intractable problem of how to test for consciousness. Given our shared evolutionary history, it is a reasonable inference that other evolved organisms are also conscious, but it’s a separate question whether it will be reasonable at some point to conclude that AIs have become conscious.

What is most worrying on this front is that AI is likely to be able to mimic consciousness long before we have any sort of test for consciousness. There is already growing evidence that people are engaging with these entities as if they are conscious, leading to harmful, asymmetric relationships with the technology. This is not a new problem; individuals reported similar experiences with ELIZA, an experimental chatbot created in the 1960s that simply mimicked the open-ended questioning style of a therapist. However, the improved capacities of these programs and widespread, commodified access to them significantly scale up the risk of harms from people conflating mimicked consciousness with genuine consciousness.
Without a test for consciousness, which is arguably impossible to develop, this aspect of the debate seems likely to intensify, bringing with it an increased push for AI civil rights with little clarity on how that would work for entities that are fundamentally different from humans in various ways. The best I can offer is that one should remain skeptical that current AIs are conscious, while taking seriously the looming problem that, short of extreme artificial constraints on their capacities, AIs will likely be able to reliably mimic conscious responses long before we have any way to test whether they have acquired internal consciousness. It’s not enough to say we will simply defer to the conclusion that AIs are conscious when the time comes; we have to seriously wrestle with what that would look like and the bizarre problems it would raise.
In the meantime, we have a third question that we’re all now wrestling with on a daily basis:
Is it ethical to work with AI?
It’s far too complex an issue to fully answer in a single article, but as an ethicist I do want to at least share my own perspective after wrestling with this question a great deal. I’m going to save the topic of forming relationships with AI for its own piece, so here I’m only talking about using AI for work or hobbies.
My current view is that it can be ethical to work with AI, but you have to consider various factors like how the AI was developed, who owns it, what you’re producing with it, and the impacts that working with it may have on you as an individual and others, including the environment. Unfortunately, there is no set formula for weighing all these factors alongside the benefits provided by working with the AI.
There are clear-cut cases where its use is unethical, such as using deepfakes to make porn of people without their consent, but many uses of AI are far more morally grey than that. Worries about environmental impact, the use of copyrighted materials as training data, increasing dependence on AI assistance, and AI-generated media that undermine our sense of a shared objective reality all have some degree of legitimacy. The question is whether they rise above the level of “no ethical consumption under capitalism” that we are forced to deal with in every facet of our lives.
This is a question everyone has to answer for themselves. For me, it has meant that I stopped using ChatGPT a while ago, despite considering it both helpful and interesting to engage with, because I felt that I couldn’t trust the people currently managing OpenAI. That concern has been borne out in myriad ways, most viscerally in the ongoing case in which ChatGPT helped a teen complete their suicide.
I don’t think it follows, though, that people are morally obligated to abandon AI entirely, or that they are morally tainted if they don’t. What they are morally obligated to do, as skeptics, is make sure they aren’t lying to themselves about that decision, or letting AI hype cloud their judgment or compromise their behavior towards those who reach a different conclusion on grey cases. As with so many topics, the best skeptical approach is to resist the siren call of hype while staying compassionate towards those who succumb to it, or who simply have different perspectives.
And, as always, don’t trust anything you see or hear online unless it’s verified by multiple reliable sources, because it’s going to get much worse out there.