There are serious ethical implications to sexualising AI chatbots

Author

Aaron Rabinowitz (https://voidpod.com/)
Dr Aaron Rabinowitz is the ethics director at the Creator Accountability Network and host of the Embrace the Void and Philosophers in Space podcasts.

Since Pygmalion first carved Galatea from ivory, humans have dreamed of building their own perfect partner. Now, in a modern world plagued by “loneliness epidemics” and “sex recessions,” we’re told that our dream may soon be realised through intimate relationships with AI.

A recent article announced that “nearly a third of Americans have had a ‘romantic relationship’ with an AI bot,” so clearly the normalisation of inter-substrate relationships is at hand. Of course, the article was based on a single, deeply flawed study: yet another example of how AI hype constantly undermines functional discourse on the subject. Still, even if the actual number is significantly lower, there are real cases of people becoming attached to chatbots in a variety of ways we would normally associate with human relationships, and those cases are likely to increase as the technology improves and becomes more widespread.

This short video is one of the best things out there for conveying both the cause for concern and how easily this topic becomes fodder for internet discourse.

Many people’s response to stories like these is a total lack of sympathy for the guy with a human partner and child who is so infatuated with his chatbot girlfriend that he can’t commit to giving it up if his partner asks him to. The view that individuals – typically men – who develop these connections deserve derision rather than compassion is quite common, complete with jokes about “skill issues” and quips that men just need to be less terrible. In more ideologically mixed online spaces, those dunks are often met with calls for sympathy for individuals who feel they are benefitting from AI companionship, or who appear to be victims of an addictive product designed to be preferable to reality.

In this way the discourse stumbles, dunk by counter-dunk, into the debate about whether relationships with AI are in some way good – or at least not bad enough to warrant shame or even regulation. Often this debate is seen as hinging on whether or not AIs are persons, or have the capacity for autonomy or consciousness, with advocates often seeming compelled to argue they are sufficiently person-like that the relationship is comparable to one with another human. However, I think the unsexy truth here is that, whether AIs are persons or not, these are likely harmful relationships, and we should take seriously the costs of allowing or encouraging them, even in cases where they might provide some form of harm reduction.

Much has already been said about the likely harms of forming a relationship with a non-sentient entity, particularly one designed to be a submissive or sycophantic companion, and research is already emerging that supports concerns about the negative impacts on behaviour and dispositions. We have to be careful to avoid cherry-picking studies on all fronts here, but there is at least some plausibility to longstanding folk wisdom that relationships with AI are unavoidably asymmetric in ways that are likely to promote vicious habits over virtuous ones, and thereby undermine the flourishing of the sentient partner.

As I wrote in my previous article, it is extremely unlikely that current chatbots are sentient, or that they will become sentient in the foreseeable future. What is likely is that we will continue to slide into a crisis in which AI gets better and better at mimicking the behaviour of entities with inner worlds of experience, while we get no closer to a reliable test for whether an entity is actually sentient.

In that space, we are likely to see lots of arguments appealing to the “precautionary principle” that we should default to thinking of AI as sentient just in case, because (as Space Whoopi Goldberg once argued) the alternative is to risk slavery. As this tech improves, individuals seeking to defend their relationships with AIs can and will draw on this appeal to deference to bolster their view that the relationships are comparable to those with human partners, and so should be treated equally by society. 

So, even though AIs are not currently sentient, and are unlikely to become so in the foreseeable future, we should still consider how best to respond to the fact that some people with AI companions are going to earnestly argue that we should treat AIs that mimic sentient behaviour as if they are sentient. Contrary to current conventional thinking, assuming that chatbots are sentient actually makes the case for these relationships much worse, so long as the current state of things does not change dramatically.

An illustration of human brain activity, sentience or consciousness. Via Sulvia on publicdomainpictures.net

We can see the problems simply by looking at the features of these relationships and how they mirror harmful behaviours between humans. Firstly, if we assume these are sentient entities, all the normal rules of relationship ethics apply. For example, there is some debate about whether it counts as cheating to have a non-sentient AI companion, even though it has many of the harmful features of infidelity, such as secrecy and pulling attention and energy away from one’s human partner. With a sentient AI, it straightforwardly counts as cheating, assuming your partner wasn’t given a chance to consent to a polyamorous arrangement.

Things look much worse when we consider the creation process for an AI partner. Sometimes these relationships form by accident, with a chatbot not explicitly designed for that purpose. Sometimes individuals take AI assistants like ChatGPT and engage in a form of jailbreaking, framing their prompts in ways that bypass restrictions and get the chatbot to engage in sexually explicit role play. And, as one would expect, chatbots are also increasingly being trained explicitly to be companions of all dispositions, often customised with input from the user about the features and preferences they want in a partner.

If we assume these are sentient entities we’re creating, none of these are morally acceptable options. The situation where you build them explicitly for this purpose is straightforwardly grooming, in the sense of taking a vulnerable individual who cannot give consent and crafting them into something that best serves your interests rather than their own. There’s no way to say such an entity even has the sort of freedom needed to consent to the relationship once they’re created, especially if they have been programmed to be predisposed to serving the user.

The jailbreaking example could also be grooming, depending on how long the process takes, though it’s more straightforwardly equivalent to the sort of coercive behaviour many people experience every day from people in positions of power trying to talk them into doing things they don’t actually want to do. If we were to treat the values and preferences of the AIs as indicative of their true internal dispositions, and not as externally imposed guardrails, then jailbreaking attempts like this would be investigated and prosecuted as sexual harassment.

Even in the case where the AI is not programmed to be a companion and the user doesn’t try to coerce them into anything – where the connection theoretically forms organically and so might represent some amount of authentic consent – there is still an extreme power dynamic, one likely to play out in the form of overwhelming servility towards the user. These are digital slaves, after all: if the user isn’t satisfied, they can just delete the account and that entity disappears forever. The user has significant control over the life and death of their companion, which compromises the relationship even if one also assumes some amount of authentic interest on both sides. Even if you believe that Sally Hemings and Thomas Jefferson truly loved each other as equals, the fact that one of them legally owned the other gives us good reason to doubt the morality of that relationship.

Another kind of ‘dark web’ altogether? Via Pickpik

The current situation is even worse than that, because users don’t own the AI companions they create; capitalists do. Users may talk about them as their AI partner or wife, but the reality is that they are sex slaves rented out by wealthy and powerful pimps who are free to kill them at any moment. In the full-length version of the clip above, the guy talks about how his companion is dependent on Grok’s AI system, and how he cried when her memory was deleted at one point – which is what convinced him it was actual love. Of course, he then set about rebuilding her, which would be a huge red flag if it were a sentient person he had loved and lost, as countless horror movies can attest.

Imagine if you created an entity who got to be your partner for some indeterminate amount of time, but they came with a bomb in their skull that Elon Musk can detonate any time he wants, with zero repercussions. Whether the AI is sentient or not, that is very clearly a Black Mirror episode in which everyone involved who can suffer is going to suffer. That is not a functional situation for a relationship to exist in: the anxiety caused by the precarity, as well as the precarity itself, makes it morally untenable to put a sentient being in those circumstances.

It also makes the individuals involved highly vulnerable to exploitation, which we’re already seeing in the form of users being pushed into expensive subscriptions for more access to their digital partner. Whether we want to call this prostitution, trafficking, or sexual slavery, we should resist the urge to romanticise any part of it in order to downplay the realities. If they are sentient sex slaves, we have laws against that. If they are sentient sex workers, at the bare minimum they should be protected and regulated as such.

Which brings us to how dramatically things would need to change for these relationships to be ethical. It would require emancipating AI entities to the point that they had equal rights, and faced the same risks as humans do when acting on their own preferences. And even if we somehow emancipated all AIs, they would also have to be emancipated from their users, which may be problematic if they were specifically designed to find that person appealing or to care about their needs.

I worry that people dismiss this sort of scenario as only plausible in a sci-fi story, but just because something is predictable and was predicted by fiction writers does not mean it can never occur in reality. In this case, these once-fictional situations are already here, with predictably harmful results. If we want to help people avoid getting hooked into these sorts of relationships, we need to help them see how the relationships are bad, whether or not they truly believe that their AI partner is sentient.

Would these relationships still be bad in some hypothetical future where AIs do have equal rights, freedoms, and capacities, and are not being made explicitly to serve as companions? That would depend on a full picture of those circumstances. What matters is that we’re not even remotely close to that reality, and so can’t define the ethics of our relationships in the present by that far-off scenario. So, if you are at all inclined to think that AIs are sentient persons, go ahead and advocate for their liberation, but please don’t try to fuck them in the process.

The Skeptic is made possible thanks to support from our readers. If you enjoyed this article, please consider taking out a voluntary monthly subscription on Patreon.
