FCAS AG Technical Responsibility, Minutes of the Meeting

Friday, 30 April 2021 09:00 to 12:00

1. Welcome & status

Welcome & review of the group's activities since the last meeting on 01/02 October 2020 in Berlin (approval of the minutes, guest article in “Behördenspiegel”, publication of contributions by Bishop Overbeck & BND President Kahl, interviews for “Wehrtechnik” with Ulrike Franke & General Funke as well as Bishop Overbeck & General Rieks).

Agreed objectives for 2021: a) working towards an “ethics by design” and b) expansion of the group to include French & Spanish stakeholders. Point a) is under way and will be deepened as an outline of ideas in today's meeting. Point b) was discussed internally, with the result that Spanish & French participants should be involved for the first time at the autumn meeting on 1 October 2021, rather than in a virtual meeting.

A short introduction of the participants attending the working group for the first time.

Transition to the next agenda item.

2. Presentation & discussion of the Airbus engineers' paper “The responsible use of AI in a FCAS”


Presentation “The responsible use of AI in a FCAS”

The objective is to put the theoretical considerations into practice, i.e. to work through the theoretical problems in a practical setting and to show solutions that come from an engineering perspective but are at the same time flanked by ethical & legal aspects. In essence, it is about creating a holistic design for an FCAS that incorporates all the facets and components mentioned. A possible reference framework is the IEEE 7000 standard, which we have examined more closely, together with its developers, in the course of working out initial considerations.

A step in this direction is the detailed paper “The responsible use of AI in a FCAS” prepared by AI engineers, in which numerous technologies relevant to an FCAS are examined in more detail and discussed with regard to their ethical and legal components.

AI has a lot to do with delegating tasks that were previously performed by humans to machines. It is obvious that this also gives rise to fears. A central question here is: who bears the responsibility for actions carried out by AI? Questions of regulation and standardization are also being discussed in detail for the military sector. The work of the engineering group inevitably has a technical focus, but it is meant to be embedded in a multi-stakeholder approach that shapes the debate comprehensively. The paper mentioned above is a first result of these discussions; it is work in progress and will be further adapted and developed as the debates progress. Ultimately, the paper is also about providing technical transparency. The ALTAI catalogue of the EU High-Level Expert Group on AI was also referred to in the drafting process. However, the focus is on the “FCAS AI use cases”, which are intended to make the application of AI in an FCAS concrete, especially with regard to the so-called targeting cycle, i.e. finding, localizing, classifying, identifying and attacking targets.

In general, these use cases are not too far removed from applications that are being discussed, for example, in robotics or autonomous driving. In the military context, however, particular specifics and sensitivities have to be taken into account.

And it is precisely in this context, in the targeting cycle, that the question is how and where AI can and should be used sensibly and legitimately. In other words: How can decisions be delegated to a technical system in an accelerated decision-making cycle, with the proviso that the applicable military rules of engagement are observed and that this can be reconciled with an ethical value base?

For the FCAS AI use cases, reference was made to the catalogue of criteria of the EU High-Level Expert Group on AI, with its three components of legality, ethics and robustness. This resulted in the ALTAI assessment list, comprising 131 questions in 7 categories. An attempt was made to answer these questions for a selection of the previously defined FCAS AI use cases, in the expectation that answering the ALTAI catalogue would also provide guidance on critical design aspects of a future FCAS.
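As an illustration only (not part of the presented paper), the following Python sketch shows one hypothetical way such per-use-case answers and their derived design implications could be recorded; all class names, question identifiers and example texts are placeholders.

```python
# Minimal sketch (illustrative only): recording ALTAI-style answers per use case.
# Category names, question IDs and texts are placeholders, not the actual ALTAI wording.
from dataclasses import dataclass, field

@dataclass
class AltaiAnswer:
    category: str                 # one of the 7 ALTAI categories
    question_id: str              # placeholder numbering, e.g. "Q-1"
    answer: str                   # free-text or yes/no answer
    design_implication: str = ""  # derived requirement for the FCAS design, if any

@dataclass
class UseCaseAssessment:
    use_case: str                                   # e.g. "target identification in the targeting cycle"
    answers: list[AltaiAnswer] = field(default_factory=list)

    def open_design_issues(self) -> list[AltaiAnswer]:
        """Answers that flagged a concrete design implication."""
        return [a for a in self.answers if a.design_implication]

# Example usage with placeholder content
assessment = UseCaseAssessment(use_case="automatic target recognition")
assessment.answers.append(
    AltaiAnswer(
        category="Human agency and oversight",
        question_id="Q-1",
        answer="Operator confirms every engagement decision",
        design_implication="UI must present classification confidence and alternatives",
    )
)
print(len(assessment.open_design_issues()))  # -> 1
```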

Discussion

A discussion follows on the work assignment for the FCAS Forum; the suggestion is made that, for example, a catalogue of questions specifically relevant to FCAS could be created. The idea is expressed that the heterogeneity of the group can make an important contribution to this, as can the IEEE standard for the corresponding implementation.

Note that the ALTAI questionnaire is very generic and general; it applies just as much to, for example, Facebook & comparable social media platforms as it does to autonomous driving. No “ethics by design” can be derived from it alone. Suggestion: examine the “use cases” very closely in detail and approach them with a multi-stakeholder approach in order to work out their respective specifics, which then have to be taken into account in the design. The ALTAI list can be helpful here, but it will be crucial that the right stakeholders are involved, e.g. also the operators, the pilots. This analysis must result in a technological and organizational design for the future system. That means we have to clearly define the ethical design requirements and hand them to the engineers as standards in a kind of “ethics roadmap”. We have to define, in principle, what role humans should play in such a system, and how the responsible use of such technologies must be designed in general.
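As a purely hypothetical illustration of the “ethics roadmap” idea, the sketch below records ethical design requirements as structured, verifiable design inputs for the engineers; the identifiers, field names and example content are invented for illustration, not taken from the meeting.

```python
# Minimal sketch (hypothetical structure): ethical design requirements derived
# from a use-case analysis, recorded so they can be handed to engineers as
# verifiable design inputs ("ethics roadmap"). All content is placeholder text.
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicalRequirement:
    req_id: str
    use_case: str
    statement: str                  # the requirement itself
    stakeholders: tuple[str, ...]   # who was involved in deriving it
    verification: str               # how engineers can show it is met

ROADMAP = [
    EthicalRequirement(
        req_id="ETH-001",
        use_case="target identification",
        statement="The operator must be able to override any machine classification.",
        stakeholders=("pilots/operators", "legal", "ethics", "engineering"),
        verification="demonstrate the override path in the system test campaign",
    ),
]
```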

There are already projects that deal with this topic; ALTAI is certainly a good orientation, but there are projects in the academic debate that look at this in more detail and depth at the scenario level, for example IPRAW, the International Panel on the Regulation of Autonomous Weapons.

The paper takes up important aspects that the German government has also put forward in the Geneva negotiations. Moreover, it is important not to think statically, i.e. not only to look at what would be applied technologically in such a system today, but to anticipate future developments and trends and integrate them into our reflections (both normatively and operationally).

Details of the technology and the overall system are important; so are the military framework conditions, i.e. the operation itself, its environment, its preparation, etc. What is the situation on the ground, do we have to reckon with civilians, and so on?

We will experience at least two more major technological leaps before the planned commissioning of an FCAS. And artificial intelligence will not be concentrated centrally in the system; distribution is the principle of the system-of-systems approach: one part will be in the aircraft, another in the unmanned components, yet another in the combat cloud. “Ethics by design” must take this into account, as part of a multi-stage analysis and development process. That is why it is good that we have started the discussion on this so early.

The need to discuss the topic not in broad terms, but very specifically along the lines of the technologies at stake in an FCAS - results-oriented!

FCAS as a project is not only a technological challenge, but also a social and political one. We do not yet have a method for meeting this challenge comprehensively as a society. We must embed the FCAS debate in the overarching debates, for example in the EU context, where proposals for the responsible use of AI are also being developed.

The debate should not be reduced to AI - as important as the topic is. But: There are also other sensitive areas in the context of FCAS that need to be addressed, e.g. the currently increasingly discussed militarization of space.

3. Ethics by design for an FCAS: Outline of ideas for operationalisation along selected fields of application (“use cases”).


Presentation “FCAS Ethical AI Demonstrator”

A presentation of a first outline of ideas, a concrete “case study”, which should ideally be made available to the forum members directly via the web as a demo application, in order to make interaction with a real AI in a military context personally tangible. Concrete use case: reconnaissance, detection and identification of enemy air defence forces with the help of AI-based automatic target acquisition. Operational background: SEAD/DEAD (Suppression/Destruction of Enemy Air Defences). Here: the search for the combat vehicles of the enemy air defence system “SA-22 Greyhound”; these are heavy four-axle trucks with 12 mounted missiles, radar technology etc., mobile and ready for action within 5 minutes, with a time from target acquisition to firing of 4 to 6 seconds. Challenges: the AI has to be trained on the “right” characteristics, otherwise the vehicles may be confused with heavy four-axle construction vehicles; the configuration of the superstructure differs between transport (possibly planned) and combat readiness, and targeted camouflage is also possible.

This is decidedly an initial sketch of ideas to be jointly designed and further developed.

What the machine can do well: recognising a multitude of targets, sorting them, assigning them to categories and presenting the results to humans; humans would need much longer for this. What the human must do: he or she must resolve the ambiguity in the images or information provided. In other words, in human-machine interaction it is important to resolve intelligently: what can the human do well, what can the machine do well?
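To illustrate this division of labour, the following sketch (an assumption, not the demonstrator's actual logic) shows how detections could be triaged by classification confidence, with ambiguous cases referred to the human operator; the thresholds and class structure are placeholders.

```python
# Minimal sketch (assumed logic): the machine pre-sorts detections and refers
# ambiguous cases to the human operator instead of deciding itself.
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: str
    label: str         # e.g. "SA-22 launcher" or "construction vehicle"
    confidence: float  # 0.0 .. 1.0 from a hypothetical classifier

AUTO_SORT_THRESHOLD = 0.9  # above this: machine may pre-sort confidently
REVIEW_THRESHOLD = 0.5     # between the thresholds: explicit human review

def triage(detections: list[Detection]) -> dict[str, list[Detection]]:
    """Split detections into machine-sorted, human-review and low-confidence bins."""
    bins: dict[str, list[Detection]] = {
        "machine_sorted": [], "human_review": [], "low_confidence": []
    }
    for d in detections:
        if d.confidence >= AUTO_SORT_THRESHOLD:
            bins["machine_sorted"].append(d)
        elif d.confidence >= REVIEW_THRESHOLD:
            bins["human_review"].append(d)  # ambiguity is resolved by the human
        else:
            bins["low_confidence"].append(d)
    return bins
```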

The question is to what extent it makes sense to rely exclusively on optical signals when the opponent can, for example, simply pull a coloured tarpaulin over the truck and thus distort the image accordingly. The next generation of such vehicles will presumably have integrated counter-AI measures, such as optical distortion. Response: data fusion with SIGINT is being considered.
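As a rough illustration of the kind of fusion mentioned (the meeting does not specify a method), the sketch below combines an optical classification score with independent SIGINT evidence, such as an active fire-control radar, using a naive independent-evidence (log-odds) update; all values are placeholders.

```python
# Minimal sketch of one possible fusion scheme (purely illustrative): an optical
# classification score and SIGINT evidence of radar emissions are combined via a
# naive independent-evidence update in log-odds space.
import math

def log_odds(p: float) -> float:
    return math.log(p / (1.0 - p))

def fuse(p_optical: float, p_sigint: float, prior: float = 0.1) -> float:
    """Combine two independent sources of evidence into one probability."""
    combined = (
        log_odds(prior)
        + (log_odds(p_optical) - log_odds(prior))
        + (log_odds(p_sigint) - log_odds(prior))
    )
    return 1.0 / (1.0 + math.exp(-combined))

# A camouflaged vehicle may yield a weak optical score, but radar emissions
# picked up by SIGINT can still push the fused assessment upward.
print(round(fuse(p_optical=0.4, p_sigint=0.95), 2))
```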

The goal is to make the AI robust - of course the opponent will react accordingly, and one then has to respond to that in turn.

The aim of the debate is to see whether we are thinking in the right direction; or whether we need to readjust or possibly completely rethink.

In real operations, the pilot has to consider many more parameters than in the outlined scenario, and these are all dynamic, i.e. they change in seconds or fractions of a second. It would therefore make sense to design the parameters in the scenario dynamically as well, and then consider which parameters the operator decides on and how; one could, for example, include three gradations. The timeline might also need some adjustment: it is very unrealistic to have 30 seconds to make a decision in an aircraft; the time windows for decisions are usually much shorter, around 3 to 5 seconds.

The task is to give the operator more time through the design of the system. This means that information from corresponding technical capabilities, for example satellite-based reconnaissance, must be brought in accordingly. Through the early generation and processing of data sets, “meaningful human control” can then be made possible, since the operator has the necessary lead time and all the information for his/her decision.
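The following sketch illustrates, under assumed and simplified timings, how early data generation translates into lead time for a considered human decision; the 30-second threshold and the example numbers are placeholders, not figures from the meeting.

```python
# Minimal sketch (assumed, simplified timings): does the remaining window still
# allow a considered human decision, i.e. "meaningful human control"?
def meaningful_control_possible(now_s: float, must_decide_by_s: float,
                                min_human_decision_s: float = 30.0) -> bool:
    """True if the remaining window is at least the assumed human decision time."""
    return (must_decide_by_s - now_s) >= min_human_decision_s

# If reconnaissance data is generated and processed early, 'now_s' is small
# relative to 'must_decide_by_s' and the operator keeps enough lead time;
# a late detection leaves only seconds and fails the check.
print(meaningful_control_possible(now_s=30.0, must_decide_by_s=120.0))   # True
print(meaningful_control_possible(now_s=115.0, must_decide_by_s=120.0))  # False
```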

This always depends on the mission. Does the operator face a situation on the ground that he/she has to observe with the human eye for 5 minutes in order to form a picture? At the same time, there is also reconnaissance taking place elsewhere, for example via an AWACS or the control station on the ground. But there are different dynamics: sometimes a convoy has to be observed in real time, sometimes a processed data set that is made available is enough. From an operator's perspective, a scenario in which a pilot can give 5 minutes of undivided attention to a development on the ground is rather unlikely.

Expectation that with an FCAS, actually flying the platform will no longer be the main task of the operator, so that there is capacity to focus on other things, and that the time window for human decisions can be increased through appropriate technologies. The underlying question is: what can a human/pilot do better, and what can the machine do better? Tasks must be assigned accordingly, and this is where AI can make an important contribution. AI can certainly recognise the full range of enemy weapon systems better. But when it comes to the nuances, to certain conceivable deviations, to removing the ambiguities from the picture, humans are required.

It is not about quantitatively packing more and more technology into the system; it is about clarifying who or what delivers the best result in certain situations. The specific levels of automation need to be designed accordingly.
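To make the idea of designed levels of automation concrete, the sketch below enumerates a few possible levels and the machine authority attached to each; the levels and names are illustrative assumptions, not a design discussed in the meeting.

```python
# Minimal sketch of how "levels of automation" could be made explicit in software
# (an illustrative enumeration, not a design from the meeting).
from enum import Enum, auto

class AutomationLevel(Enum):
    MANUAL = auto()            # human performs the task, machine only displays data
    MACHINE_SUGGESTS = auto()  # machine proposes options, human selects and confirms
    HUMAN_CONFIRMS = auto()    # machine selects, human must approve before any action
    HUMAN_SUPERVISES = auto()  # machine acts, human can veto within a time window

def may_act_without_confirmation(level: AutomationLevel) -> bool:
    """Only the most automated level lets the machine act without prior approval."""
    return level is AutomationLevel.HUMAN_SUPERVISES

# Which level applies would depend on the situation and on who or what delivers
# the better result for that task, as discussed above.
print(may_act_without_confirmation(AutomationLevel.HUMAN_CONFIRMS))  # False
```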