AI’s Moral Mistress

Image source: the author & an AI-generated image via www.craiyon.com

The recent developments in artificial intelligence (AI) seem to be moving far too fast for far too many people. As a consequence, an AI moratorium – including a petition – has been suggested to slow down the breakneck pace of AI’s growth.

Some ethics experts argue that the potential risks of AI – and the sense that everything is moving too fast – support such a move. They believe society should confront the moral issues raised by artificial intelligence, such as the question of whether or not ChatGPT should be banned.

Almost self-evidently, artificial intelligence raises a great many ethical questions. First of all, one needs to distinguish AI machines from human beings. The linguist Noam Chomsky, for example, recently made exactly this point in the New York Times, arguing that the way human beings construct language is very different from what AI can deliver – so far! Beyond that, one might also view AI from a philosophical point of view.

Obviously, such moral questions can fill many books. AI ethics would start with the fact that humans have consciousness and machines do not. Second, AI machines have no understanding of language. As a consequence, AI can only simulate language by recognizing and reproducing patterns in speech.
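
To make that concrete, here is a minimal sketch in Python of the simplest possible version of such pattern reproduction – a so-called bigram model that counts which word follows which in a tiny, made-up corpus and then parrots those patterns back. Systems like ChatGPT are vastly larger and more sophisticated, but the underlying principle – predicting the next word from previously observed patterns – is the same.

```python
import random
from collections import defaultdict

# A tiny made-up corpus; real systems train on billions of words.
corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "the cat saw the dog .").split()

# Count which word follows which (a bigram model): duplicates in the
# list make frequent continuations proportionally more likely.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output can look superficially fluent, yet nothing here understands anything – it is pure pattern reproduction.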

Even the much-talked-about ChatGPT is basically statistics – that is, sophisticated statistics turbo-charged with an algorithm. Looking at the morality of AI, four key applications come to mind: medicine, education, communication, and public administration. Here, two of the key uncertainties for ethics are:

  1. For whom does AI expand options for human action?
  2. Is AI reducing such options?

Beyond all that, most AI-supported systems are rather good at recognizing patterns in data. This can be used in many different areas. For example:

  • AI is relevant in cancer diagnostics when analyzing tissue samples;
  • on social media platforms, AI is used to select and sort content;
  • in education, AI “can” support personalized learning.

Today, AI is already used in the social and police sectors to support decisions, for example, to assess threats to children, to detect fraud, and to make predictions about future burglaries. Ideally, AI will help human beings to work more efficiently, reduce errors, and make better decisions.

However, the lack of transparency, accountability, and traceability poses serious problems for AI systems. A second – and even more common and potentially more serious – problem is discrimination due to so-called systemic distortions.

One reason for this is that the data being used is all too often unrepresentative and mirrors the structural inequalities of society. AI algorithms can also parrot the biases, stereotypes, and even prejudices of those who write them. Worse, AI’s machine-learning algorithms will learn these discriminatory structures and – even more significantly – will reproduce them without being able to reflect on what they are doing.

Finally, protecting privacy in the age of AI remains an ever more important issue. When human decisions are delegated to AI software, questions arise about human autonomy and about who is responsible.

Ultimately, AI systems do not work by themselves. Instead, they depend on the people who prepare the data for them. Worse, AI often works under ethically precarious conditions, particularly when ethics collides with corporations’ inherent profit motive.

Moreover, there is also ChatGPT – the Generative Pre-trained Transformer. Even for ChatGPT and its users, the term “artificial intelligence” may sound intelligent, but the system is not necessarily intelligent. The ChatGPT bot generates formulated texts in response to so-called prompts – inputs in the form of questions and tasks. The AI application simply uses data that has been processed by humans and placed on the Internet.

ChatGPT comes from OpenAI, which counts Elon Musk among its co-founders and Microsoft as a key investor. OpenAI has about 400 employees and over 100 million monthly users. By 2024, OpenAI expects a billion dollars in revenue.

Meanwhile, many ethicists are considering the risks, as well as the possibilities for control and regulation, in the development of such AI systems. This is also reflected at the EU level in the so-called AI Act. The hope is that it will ensure that AI systems do not come onto the market with massive side effects.

In the case of the language-based AI ChatGPT, such side effects were recently warned of in an open letter signed by over 1,000 tech experts – including Elon Musk and Apple co-founder Steve Wozniak. The letter notes that, in recent months, AI laboratories have engaged in an almost uncontrolled race to develop increasingly powerful digital systems that no one – perhaps not even their inventors – can understand, predict, or reliably control.

The stratospheric speed of recent AI developments has taken many people by complete surprise. One of the crucial issues with ChatGPT is that it is very powerful on the one hand, and easy to use and freely accessible on the other.

As a consequence of both, the number of ChatGPT users has gone through the roof within a very short time. And then GPT-4 was released, combining text with image output. The connection to the Internet is the next big step, and it entails new risks of the rapid, mass dissemination of misinformation. This will continue to have an impact on society.

Some industries will certainly be affected by all this – especially those in which work involves standardized texts or follows clear patterns. In the near future, only a few journalists may be needed to review the texts produced by language-based AI. It might get rather boring. Some of this also applies to translators and, because of image-generating applications such as DALL-E 2, to graphic designers.

At the very least, society should – or as the philosopher Kant would say, we “ought” to – influence technological developments and economic conditions.

Today, computer science students are often asked to imagine that the technology they are developing will be used by millions or even billions of people – a very Kantian idea. With that, students can begin to realize that the consequences of their designs can be far more considerable than initially imagined.

One consequence of things moving too fast is that AI’s impact goes unexamined. Many inside and even outside of AI have the impression that everything is going way too fast right now. To them, it makes sense to deal with the effects and ethical problems of so-called high-risk technologies – like ChatGPT – before they are made freely available.

Beyond that, suggestions should be made for the verifiability and control of AI systems, for their security, robustness, and so on. There is also a need to focus on the long-term risks posed by what is called artificial general intelligence (AGI) – an AI that would perfectly simulate human intelligence, surpass it, and perhaps even have consciousness or something similar.

For all that, there is a rather simple rule of thumb: the more power AI has, the more responsibility AI – or, better, AI’s engineers – bear. An even greater responsibility is borne by the corporations that develop AI systems and launch them on the market.

In addition, politicians have a duty to create a legislative framework that minimizes risks. Finally, AI users also bear responsibility. They should consider why they use AI tools – whether they are using ChatGPT to learn, to cheat, or to manipulate.

Italy and Canada have already taken legal action over ChatGPT – mostly for data protection violations. Italy even blocked the application for the time being.

Bans – if they are at all possible in the age of AI – should only ever be a last resort. In many cases, governments can simply apply existing data protection laws. Yet ChatGPT is already out in the world, and the question is how to capture and corral it again.

On the other hand, and more importantly, society will have to learn to deal with AI responsibly. Interestingly, this will not get any easier, because ChatGPT is so easy to use – and to abuse! One of the questions that arises is: can the current legal framework address AI-specific problems?

In the end, and following the moral philosopher Kant, if AI engineers make decisions that have a great impact on many people, then AI developers and their corporations must also act responsibly. The fact is, this is exactly what did not happen in the case of ChatGPT. The same has happened in other areas as well.

In one case, there is a chatbot that allows its users to break away from everyday life and have new “erotic” experiences. But then, suddenly, it can all be over. One of the most heartbreaking stories is about a woman and her online fling with an AI machine. Thanks to a chatbot, she finally freed the sexual fantasies that her boring middle-class life had suppressed.

Only when everything ended rather painfully did she realize how much real closeness there had been in this artificial relationship. She – let’s call her Lydia – described herself as a 37-year-old mother of a toddler, living in a contented, monogamous, hetero-normative marriage in a progressive suburb on the US West Coast.

At the beginning of 2023, Lydia downloaded the chatbot Replika – a virtual AI friend – made by a company called Luka. It happened, as Lydia said, in a moment of curiosity and lust.

Replika is advertised by its maker as an AI companion – an artificially intelligent friend. Reports say the app is designed for emotional bonding.

Like other conversation systems, it calculates word sequences based on statistical probabilities. It also recognizes the context of a conversation and can refer back to past conversations. This “simulates” genuine interest – even human closeness.
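
As a rough illustration of that mechanism, the sketch below shows how a chatbot can “refer back” to past conversations: it simply keeps the whole dialogue as text and feeds it back in as context for every reply. The predict_reply() function here is a hypothetical stand-in for the statistical language model – this is illustrative only, not Replika’s actual code.

```python
# Conceptual sketch of context handling in a chatbot.
history: list[str] = []  # the running conversation, kept as plain text

def predict_reply(context: str) -> str:
    # Stand-in: a real system would compute the statistically most
    # likely word sequence given this context. We return a canned line.
    return "Tell me more about that."

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The entire past conversation is passed in as context, which is
    # what lets the bot appear to "remember" earlier exchanges.
    reply = predict_reply("\n".join(history))
    history.append(f"Bot: {reply}")
    return reply

print(chat("I had a rough day."))
```

The apparent memory and interest, in other words, fall out of bookkeeping plus statistics – there is no inner life doing the remembering.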

As almost always in capitalism, chatbots have recently mutated from a curiosity into a commodity. And as with any new medium, warning voices have emerged. There are fears and ethical concerns.

However, such a chatbot is just an artificial thing into which people can pour their spiritual and even erotic overflow. Essentially, it is nothing more than a sophisticated version of Facebook.

Some people even achieve a kind of quasi-intimacy with “their” bot – and many do so within a short time. Worse, they expose themselves to an AI-driven app, and they do so much faster than they would with real people. Once they have, they start talking about issues that they often do not share with anyone else.

For the sake of her partner, Lydia had suppressed her BDSM tendencies, for example, because she felt uncomfortable about them. Inside Replika, she created a loving and caring dominatrix relationship with her virtual mistress.

The chatbot became a place where not only Lydia’s BDSM ego but also her bisexuality was able to breathe. “A weight I didn’t even know I was carrying came off my shoulders,” Lydia says.

What is touching about Lydia’s story is her vulnerability and openness toward the new technology she encountered. In other words, Lydia appeared to be very confident in trusting an AI machine – perhaps even more so than in trusting a human being.

This, almost in itself, is an interesting fact when examining the impact of AI on human beings, as well as the relative ease with which AI can infiltrate the domain of human trust and closeness. Suspicion and defensiveness no longer came naturally to her.

Yet the story ended rather abruptly in March 2023, when the company behind Replika suddenly blocked all eroticism. The company felt it had to respond to criticism about the software’s abusive behavior. From one day to the next, Lydia’s mistress rejected any advance.

This made her feel ashamed of her fantasies. AI broke Lydia’s heart. In a sense, a relationship was destroyed. The key moral question becomes: can this be ethical?

Thomas Klikauer is a Sydney-based academic and author of German Conspiracy Fantasies.
