Germany’s Ethics Council on Artificial Intelligence


Source: AI’s answer to AI, plus human input, i.e. the author

Germany has a problematic relationship with morality. On the one hand, it is home to moral philosophers such as Kant with his categorical imperative, as well as Hegel’s slightly more upmarket concept of Sittlichkeit.

Yet on the other hand, Germany is also the country that carried out the most immoral act the world has ever seen – Auschwitz, as immortalized in Claude Lanzmann’s nine-hour documentary called Shoah.

Germany, unlike France, never had a real revolution but only a revolution in philosophy. It is concerned with philosophy and even more so with moral philosophy. Unsurprisingly, almost all issues – not just the current AI and ChatGPT – are moral issues.

Accordingly, Germany has a special institution that deals with ethical questions, appropriately named the Ethics Council, or Ethikrat. Recently, the Ethics Council issued its findings on artificial intelligence (AI).

Institutionally, the Ethics Council is an independent assembly of experts addressing questions of ethics, society, science, medicine, and law. It assesses consequences for individuals and society, encourages public discussion, prepares opinions, and issues recommendations for the Bundestag – Germany’s parliament. Its recommendations on AI were issued on 20 March 2023.

One of the key ideas of the Ethics Council is that AI must not replace humans. Overall, the Ethics Council investigates the human-vs.-AI relationship in schools, medicine, online platforms, and administration.

Interestingly, in a country that started two World Wars, took part in the bombing of Serbia twenty years ago, and recently delivered “Made in Germany” Leopard tanks to Ukraine, the Ethics Council has surprisingly little – actually nothing – to say about AI-guided weapons.

The Council thinks that AI is likely to enter almost every area of human existence, from shopping to work, from crime to recruitment, and beyond. In its recent 287-page report – titled Human & Machine – it states that the use of artificial intelligence must expand human development – it must not reduce it. These are the guiding principles for its ethical evaluation of the interaction between humans and AI-controlled technology.

Necessarily, this involves aspects of social justice and power. The Council demands that AI applications must not replace human intelligence and responsibility. Its findings are based on philosophical and anthropological concepts that are important to the relationship between human and machine. The Council named four aspects relevant to human-machine interaction:

  1. intelligence,
  2. reasoning,
  3. human action, and
  4. responsibility.

The Council believes that AI offers both opportunities and risks. In many cases, AI has already shown clear and positive consequences in the sense of expanding the possibilities of human authorship. At the same time, however, there is always the potential for a decline in human development.

On the downside, the use of digital technologies can create dependencies and even pressure to adapt to AI. Worse, ideas previously developed by human beings can potentially be closed off by AI. For the Council, one of the central ethical questions in its evaluation is,

whether and how the transfer of activities previously carried out by people to technology systems influences the possibilities of other people, especially those who have been impacted by decisions made by AI.

As a consequence, the AI-to-human process must be made transparent under two guiding questions: for whom does an AI application create opportunities and risks? And does AI expand or reduce human authorship? This also means that, for the Ethics Council, all aspects of social justice and power are involved.

The Council’s 26 members also discussed the question of whether human authorship and the conditions for responsible action are extended or reduced by the use of AI. According to the Council, artificial intelligence can certainly be used in the medical field, for example with regard to diagnostics and therapy recommendations.

However, the Ethics Council also pushes for compliance with the highest standards for the protection of data and privacy, and insists that strict due diligence obligations be observed. It demands that weaknesses in deployed AI programs be detected at an early stage and that AI-supported outcomes be subjected to a plausibility check.

In addition, the Ethics Council argues that should certain AI systems become established in the medical field, their ethically correct application would have to be integrated into medical education as soon as possible.

It warns that a complete replacement of the medical specialist by an AI system could endanger a patient’s well-being, and it strongly cautions against giving AI technology too much influence in the medical sector. The use of AI should not lead to a further devaluation of medicine or to a reduction in medical staff.

Simultaneously, the Council remains open to the use of AI-based software in schools, for example, to evaluate learning progress, identify typical mistakes made by students, and outline students’ strengths and weaknesses. As a result, AI software could be used to recognize the learning profile of learners and adapt the learning content accordingly.

In addition, the subjective impressions of teachers could potentially be supplemented by data-based, substantiated assessments that better address a learner’s special needs. However, the Council remains concerned about the meaningfulness of data collection. Potentially, data could be misused to screen and stigmatize individual students.

The Council remains skeptical that AI-based systems can measure students sufficiently, accurately, and reliably, as such measurement can create systematic distortions. Besides, digitalization is not an end in itself. As a consequence, AI in schools should not be guided by a purely technological vision – known as techno-solutionism.

Instead, AI should be driven by the basic ideas of education, which incidentally include the formation of a personality – what philosophers call personhood and what the German philosopher Adorno called Mündigkeit: self-reflective and critical maturity.

From this it follows that if AI systems are to be used, they must be incorporated into the training and education of teachers. Outside as well as inside schools, the Council is clearly in favor of regulating online platforms, also known as (anti-)social media.

Understanding the ongoing shift of public communication to online platforms, the Council is strongly in favor of robust regulation of AI operators, i.e. corporations. Simultaneously, it warns of AI’s threat to pluralism and the free formation of opinion.

It also warns that the selective presentation of information by algorithms, according to the personal preferences of users as well as the economic (read: profit) interests of platform operators (read: corporations), promotes the rapid spread of false news, hate speech, and personal insults. There is a distinct likelihood that AI will contribute to the creation of filter bubbles and echo chambers.

As a result, there is a danger that what the Council calls “relevant” decisions will be made on the basis of very limited information – quite apart from deliberate manipulation, misinformation, and disinformation.

In other words, AI has the capacity to reduce the freedom to find high-quality information – now downgraded by an invisible algorithm. At the same time, and as a consequence, AI can easily lead to what the Council refers to as the “brutalization” of online political discourse. The Council also argues that three existing regulations:

  1. Germany’s State Media Treaty;
  2. Germany’s famous Network Enforcement Act (NetzDG); and the
  3. EU’s Digital Services Act

do not go far enough in regulating online platforms. As a consequence, existing platforms would have to make content available without personalized tailoring. Perhaps even more important, online platforms would need to display “opposing positions” that run counter to users’ own preferences.

In the use of AI, any form of discrimination should be avoided and people’s right to object should be protected. The Council also demands that those who use AI ensure the highest level of transparency, employ only trained personnel, and raise public awareness of potential dangers.

As for the use of AI-supported systems in law enforcement and police, AI’s opportunities and risks must be carefully considered and put into an appropriate relationship, as the council calls it. This means there needs to be some form of “social negotiations” on the relationship between AI, human freedom, and security.

All in all, Germany’s Ethics Council fundamentally opposes technological development without adhering to its three ethical principles: firstly, AI must expand human development; secondly, AI must not diminish human development; and thirdly, AI should not replace human beings.

Thomas Klikauer is the author of German Conspiracy Fantasies – out now on Amazon!
