FCAS Working Group on Technical Responsibility: Minutes of the Meeting

31 March 2023, Berlin

Programme Update

Welcome by Mike Schoellhorn, highlighting the importance of AI and algorithms for modern warfare and for a programme like FCAS.

Status update on the programme and on the work of the Working Group on Technical Responsibility, in particular the results of the workshop “Applied Ethics, AI & FCAS” held on 13 January 2023 at the Evangelical Church Office in Berlin.

The discussion of the current situation made clear how much the geostrategic situation has changed with the Russian attack on Ukraine.

In this context, the European dimension of Franco-German-Spanish cooperation was once again underlined.

Military Threat Scenarios 2040

Ben Hodges gave an overview of military threat scenarios based on his book “Future War and the Defence of Europe”, which also took into account the current course of the war in Ukraine.

Four theses:

  1. Soldiers fight better when they follow ethically based guidelines.

  2. Wars are won by those who can adapt most quickly to a dynamically changing situation.

  3. Precision is (in the long run) better than mass.

  4. (Simple) new technologies combined with new concepts are a threat to sophisticated (old) technologies and concepts.

Discussion:

Do we know more about the Russian army now than we did a year ago?

The two biggest surprises were the poor quality of the Russian air force and navy. The flagship Moskva, for example, was distracted by drones and sunk by missiles; the crew abandoned the ship rather than trying to save it, although it could have been saved. It remains to be seen how much ammunition and how many soldiers Russia still has at its disposal.

Does acting ethically make soldiers perform better than those who act unethically?

The decisive factor is cohesion among the soldiers. Trust in the army as such, and especially in one’s immediate superiors, is also important. Where there is no trust between those involved, fighting strength diminishes.

“Humans in the loop” can also make mistakes. Who is responsible?

AI-based systems will have to overcome as-yet-unknown challenges; discussions at the level of ethics, rules of engagement and concepts of operations must contribute to meeting them. If technology or humans fail, there will be investigations, and failures can occur at any point in the chain of command. The necessary exercises and procedures will be developed further. In particular, doctrines must be developed for AI-based systems under which human control is maintained when the systems are used in an application-specific manner.

As long as people are properly oriented and there is the will to endure unavoidable errors, responsible systems are possible. Politicians have the responsibility to provide soldiers with the best possible equipment; the Bundeswehr has the responsibility to make use of it. The confidence of decision-makers has to be won by involving them in the process. Accountability is important and must be upheld.

“Ethics-by-Design”: AI in FCAS – Status of the Demonstrator


Autonomous systems and so-called general AI were distinguished from automated systems of systems such as FCAS. Thomas Grohs and Wolfgang Koch looked at specific phases of the targeting cycle as characteristic AI applications in the military context, which can be analysed with the Ethical AI Demonstrator developed within the framework of the working group. They explained its use for deriving ethical requirements, with the aim of making the ethical aspects of AI tangible in concrete military deployment scenarios.

The focus was in particular on the problem of object recognition. In the discussion, the following politically relevant questions were raised and discussed: Should the AI deliver a legally compliant result? Does the AI have to understand a rule, or is an implementation that complies with the rule sufficient? How does conformity to rules relate to conformity to the law? One thesis was that it should be defined politically and ethically which decisions an AI is permitted to make at all.
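
For illustration only, a minimal sketch of such an externally defined decision gate (in Python; all names, rules and thresholds are hypothetical and are not taken from the Ethical AI Demonstrator):

```python
# Illustrative sketch only: politically/ethically defined rules decide which
# recommendations an AI component may issue at all. Hypothetical names and values.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    RECOMMEND_TO_OPERATOR = "recommend_to_operator"  # human decides
    WITHHOLD = "withhold"                            # AI may not recommend

@dataclass
class Detection:
    label: str          # e.g. "vehicle", "person"
    confidence: float   # classifier confidence in [0, 1]
    protected: bool     # flagged as protected under the applicable rules

def decision_gate(d: Detection, min_confidence: float = 0.95) -> Decision:
    """Apply externally defined rules before any recommendation reaches the
    operator. The rules are a political/ethical input, not a property of
    the classifier."""
    if d.protected:
        return Decision.WITHHOLD           # hard rule: never recommend protected objects
    if d.confidence < min_confidence:
        return Decision.WITHHOLD           # uncertain results stay with the human analyst
    return Decision.RECOMMEND_TO_OPERATOR  # even then, only a recommendation

print(decision_gate(Detection("vehicle", 0.97, protected=False)))
```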

In the discussion that followed this presentation, it became clear that interfaces have to be created to ensure multi-domain capability: data must be in the right place at the right time. To achieve this, a multitude of technical challenges must be mastered, above all the question of what happens in the event of a communication failure. Development goals must be defined as early as possible, and legal and operational requirements must be reconciled.
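
A hedged illustration of the communication-failure question; the link states and the fallback policy below are hypothetical and not drawn from any FCAS design, the point being only that loss of the human-machine link should never widen the machine’s authority:

```python
# Hypothetical fallback policy on communication failure: loss of the link
# narrows the machine's authority, never widens it.
from enum import Enum, auto

class LinkState(Enum):
    UP = auto()
    DEGRADED = auto()
    LOST = auto()

def fallback_mode(link: LinkState) -> str:
    """Map the state of the human-machine link to a conservative behaviour."""
    if link is LinkState.UP:
        return "normal operation under human control"
    if link is LinkState.DEGRADED:
        return "hold current task, no new engagements"
    return "abort mission and return to base"

for state in LinkState:
    print(state.name, "->", fallback_mode(state))
```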

The Patriot system was cited as an example: it has been operating with highly automated functions for 30 years without raising any significant questions, and ethically and legally critical functions there are performed by humans. For FCAS, the questions are: What should humans take over? Where do we need their control? What is the role of the human being? The answers depend sensitively on the operational context. “Guard rails” must be defined within which the machine is allowed to operate.
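
A purely illustrative sketch of such guard rails, assuming (hypothetically) that they can be expressed as a machine-readable envelope of permitted area and tasks; none of the names or values come from the programme:

```python
# Hypothetical guard rails: the machine may act only inside an explicitly
# defined envelope; everything else is referred back to the human operator.
from dataclasses import dataclass

@dataclass
class GuardRails:
    area: tuple               # bounding box: (lat_min, lat_max, lon_min, lon_max)
    allowed_tasks: frozenset  # tasks the machine may perform on its own

def within_guard_rails(lat: float, lon: float, task: str, g: GuardRails) -> bool:
    """True only while the machine stays inside the defined guard rails."""
    lat_min, lat_max, lon_min, lon_max = g.area
    inside = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    return inside and task in g.allowed_tasks

g = GuardRails(area=(52.0, 53.0, 13.0, 14.0),
               allowed_tasks=frozenset({"track", "observe"}))
print(within_guard_rails(52.5, 13.4, "track", g))   # True: inside the rails
print(within_guard_rails(52.5, 13.4, "engage", g))  # False: human decision
```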

The IEEE 7000 standard and its application potential in FCAS

Presentation: “Value-Based Engineering and Standardisation of AI in Defence”

Within the framework of the “Seven Pillars of Artificially Intelligent Systems”, human-centred design was highlighted as central. An interdisciplinary approach, thinking mathematics, physics and engineering together with psychology, cognitive science and occupational science, raises systems engineering research questions for FCAS. The “IEEE Standard Model Process for Addressing Ethical Concerns during System Design” (IEEE 7000) was presented, a standard initially conceived for non-military applications. Reasons were given as to why this standard might not be fully applicable to FCAS.

Anthology: “Bundeswehr of the Future: Responsibility and AI”

The session concluded with the presentation of the anthology “Bundeswehr of the Future: Responsibility and Artificial Intelligence”, published by the Konrad Adenauer Foundation, to which numerous participants in the FCAS Working Group on Technical Responsibility contributed as authors.