Protocol FCAS Forum

October 1st, 2021, Berlin

TOP 1: Welcome & Introduction

Wolfgang Koch and Florian Keisinger open the round and welcome the participants, especially the newcomers. One of the goals set in the Forum's last discussions was to make the exchange more European; it was therefore very welcome to have a first Spanish participant present, Colonel Luis Vilar.

TOP 2: Introduction by Michael Schoellhorn (CEO Airbus Defence & Space)

Michael Schoellhorn expresses his appreciation of and support for the Forum. He finds the depth of the discussion impressive, as the ethical aspects and the responsible use of technology are an extremely complex topic.

TOP 3: Update on the FCAS program

General Leitges refers to the results of the previous (webinar) discussion on April 30th, which he appreciates.

Participants underline the importance of having in-depth discussions on the responsible use of AI in a system like FCAS at such an early programme stage. Furthermore, the discussion should not take place only between like-minded people (e.g. only MoDs); it is important to open it up towards other European partners, but also towards wider society. It is stated that France has its own procedures and formats when it comes to this topic; the hope is expressed, however, that the discussion streams can converge in the future.

FCAS status: The German parliament approved the budget for the next FCAS phase (technology maturation) in June 2021; in summer 2021 the so-called “Implementation Agreement” (IA3) between the states was signed. The industry contracts are still being discussed and all partners hope for a quick convergence. The German parliament will be updated once the industry contracts are finalized. It is important to look at all the details, e.g. the IP rights defined. First common operational requirements have been approved by the Air Chiefs of the three states. This discussion is to be continued in close alignment with the industrial engineering capacities.

Reference is made to the “Eckpunkte für die Bundeswehr der Zukunft” (key points for the Bundeswehr of the future) and the multi-domain requirements as well as the A2AD capabilities outlined there – which will be important components of a FCAS. Furthermore, FCAS has a strategic dimension for German and European defence policy which goes beyond pure programme logic. Therefore, strategic discussions and also the ethical aspects of the use of AI are of importance.

FCAS started with three national studies; then the “Joint Concept Study” (JCS) was kicked off, which should have ended in 2019 but took slightly longer due to Covid-19. Most of the results, however, are available by now, so the national assessments can be done. The JCS encompasses several outputs, from R&D roadmaps to the overall structure of the system. The task was to narrow down from 10 system architectures to 5, and then to 2 or 3.

The Demonstration Phase 1B/2 (2021-2027) contains further technology maturation, incl. some new technologies; however, overall the programme is still at an early stage.

A key success factor is the tri-national programme set-up, e.g. a Combined Project Team and a joint governance that covers the interfaces to the contractor.

Where national positions diverge, compromises between the nations need to be achieved.

In Spain, similar discussions on the responsible use of AI in defence, and in particular in a FCAS, take place; however, they have not been formalized yet. Spain is very interested in developing, e.g., ethics in target recognition. Such discussions should ideally take place in a trans-national or European context. Spain intends to continue and increase its share in the FCAS Forum discussion.

As FCAS is a truly European programme, it is important to also get the other partner nations on board. Spain, as a full programme partner, is committed to that – not just with government involvement, but also with societal representatives.

AI and a “Combat Cloud” will be the “glue” of a NGWS/FCAS. They will ensure the interconnectivity of assets and therefore be a key driver of a FCAS. Existing assets such as an EF or a Rafale will be integrated into this overall conception. A joint European approach is essential, e.g. when it comes to A2AD, which will be part of an international setup. The need for information management in the air domain is probably higher than in other domains; this is why FCAS is so important and could also become a catalyst for other domains. Integration beyond the air domain is one key ambition of a FCAS.

In a broader view, looking not only at technology but also at ethical implications, there is an interlinkage with this multi-domain approach. The ethical aspects, just like the technical requirements, need to be discussed and defined; this applies to all areas and domains where AI is applied in defence. We are aware that potential enemies outside Europe do not pursue the same standards as we do here; however, this cannot be a reason for us to give up our values and procedures. Nevertheless, we need to be aware of what is going on world-wide and what technologies are being applied. Furthermore, we need to train our soldiers and staff on the responsible use of AI; we need to raise awareness on the topic, define rules and standards, teach them and, of course, apply them.

Status of the international discussion at the United Nations (UN) – Convention on Certain Conventional Weapons (CCW)

Developments since the latest update in the FCAS Forum in 2019:

  • The CCW GGE (Group of Governmental Experts) is a treaty-based forum mandated by the 125 “High Contracting Parties” (HCP, i.e. States Parties) to the CCW; they do not represent the entire UN, but the relevant military players.

  • 11 guiding principles were endorsed by consensus in the CCW at the Meeting of the HCP at the end of 2019, covering the topics of international humanitarian law, human responsibility for decisions on the use of weapons systems, human-machine interaction etc.

  • Mandate for the CCW GGE’s work in 2020/21: clarification and development of aspects of a normative and operational framework for LAWS, with recommendations planned for adoption by the 6th Review Conference of the CCW in Dec 2021. There have been some delays in the GGE’s work due to Covid-19; in 2020 only a hybrid meeting could take place, which was contested by Russia.

  • At the end of April 2021, the French Defence Ethics Committee published its opinion on LAWS, advocating renouncing the use of fully autonomous lethal weapons systems and subjecting the use of partially autonomous lethal weapons systems to a set of conditions.

  • In June 2021 Germany and France submitted a joint input paper to the CCW GGE (cf. attachment) proposing that States clearly commit not to develop, produce, acquire, deploy or use fully autonomous lethal weapons systems, i.e. weapons systems operating completely outside a human chain of command and control; and that they commit to making sure that lethal weapons systems featuring autonomy are only developed, produced, acquired, modified, deployed and used in accordance with a number of provisions, such as compliance with international law, preservation of human responsibility and accountability at all times, retention of appropriate/sufficient human control, and adoption and implementation of tailored risk mitigation measures and appropriate safeguards regarding safety and security.

  • In August 2021 the GGE discussions in Geneva resumed, with an additional session at the end of September and a final session in early December 2021. In the GGE discussions there are still some differing interpretations of the details; however, there is a lot of convergence around such a 2-tier approach. In general, there are delegations on one side of the spectrum that wish for no (further) regulation of LAWS (referring only to the basic consensus already reached in the past, without being interested in further developing that consensus), and on the other side there is a very ambitious group of countries that wants a legally binding ban on LAWS → obviously not easy for the chair (Belgium), as the CCW operates under the consensus rule.

  • Trying to regulate technologies that have not yet been developed is more difficult than regulating technologies whose capabilities, effects and characteristics are already known → a similar situation applies for FCAS within the tri-national context.

It is encouraging that there is consensus that in the use of autonomous weapons systems, a human has to be part of the chain to make the call on applying lethal force. Therefore, the expectation is that each user needs to sufficiently understand how the technology and the system function.

From an international legal point of view: accountability cannot be transferred to a system, as States are responsible for compliance with international law and individuals (not machines) need to be held accountable for violations.

Comparison with automated driving: when certifying the algorithms, developers need to understand them – but what about the operators? Ultimately, decisions to engage need to be taken in a responsible chain of human command and control. Nevertheless, this underlines the importance of the overall design of such future systems, as operators cannot be expected to fully oversee all AI- and data-driven processes during a mission.

What is the framework for this? Direct human-machine interaction is not required in every action. What type of responsibility falls on the manufacturer and what on the operators? These are key questions to be answered.

TOP 4: Ethics-by-design for a FCAS – Scenario presentation

The demonstrator initiative (a “find, fix, track” application with AI for Automated Target Recognition) was introduced and presented as a practical example. The ambition is to provide a first “hands-on” step towards an “ethics-by-design” methodology which can then be integrated into an overall FCAS design process; the designing engineers would receive not only operational or technical requirements, but also normative and ethical ones to be implemented into the design of a system.
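
As a purely illustrative, minimal sketch of what it could mean to hand engineers normative requirements alongside operational and technical ones – all identifiers and requirement texts below are assumptions for illustration, not programme artefacts – such requirements could be captured side by side (Python):

    # Illustrative only: ethical requirements treated as first-class design
    # requirements next to operational/technical ones, with a verification hint.
    from dataclasses import dataclass
    from typing import Literal

    @dataclass
    class Requirement:
        req_id: str
        kind: Literal["operational", "technical", "ethical"]
        text: str
        verification: str  # how the requirement is to be checked during design/test

    requirements = [
        Requirement("OP-001", "operational",
                    "Detect and classify ground-based air defence assets.",
                    "field trial"),
        Requirement("ETH-001", "ethical",
                    "Engagement decisions remain with a responsible human operator.",
                    "design review + operator-in-the-loop test"),
        Requirement("ETH-002", "ethical",
                    "Classification results are presented with an explanation the operator can understand.",
                    "usability evaluation"),
    ]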

Ambition for the scenario:

  • Demo objective: have a concrete example from which we can extract feedback for the concrete technical requirements and the normative discussion

  • How can we get a meaningful interaction, and provide assistance to the human operator?

  • Ambition for a non-biased decision and a meaningful interaction, with the role of the operator being to confirm or reject the target, or to ask for more information.

  • Relationship with certification: How to make the system predictable? What does the user need for the decision? New ways of certifying AI (explainable AI)

FCAS Ethical AI Demonstrator (presentation)

Questions:

  • Where can it work, where not? (e.g. weather conditions, environmental restrictions etc.)

  • When is AI better qualified to take a decision than a human?

  • Automation can fail; how do we ensure full human backup / human oversight?

Most answers depend on the mission situation (planning), i.e. on factors that are known before the mission; furthermore, the training of the staff – in the air and on the ground – will be essential; multi-sensor technologies are to be further improved (data fusion; see the sketch below). “Rules of engagement” need to be defined along these lines.
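
As a minimal illustration of the data-fusion point – assuming two independent sensor classifiers and hypothetical class labels, i.e. not a description of any actual FCAS component – a naive fusion of per-class confidences could look as follows (Python):

    # Illustrative only: fuse per-class probabilities from two independent sensors
    # for the same track; the fused value still only advises the operator.
    from typing import Dict

    def fuse(p_sensor_a: Dict[str, float], p_sensor_b: Dict[str, float]) -> Dict[str, float]:
        """Combine two per-class probability estimates under an independence assumption."""
        classes = p_sensor_a.keys() & p_sensor_b.keys()
        unnormalized = {c: p_sensor_a[c] * p_sensor_b[c] for c in classes}
        total = sum(unnormalized.values())
        return {c: v / total for c, v in unnormalized.items()}

    # Example: an optical sensor and a radar classifier report different confidences;
    # fusion sharpens the estimate (SA22 ≈ 0.78) without taking the decision.
    print(fuse({"SA22": 0.7, "civilian truck": 0.3},
               {"SA22": 0.6, "civilian truck": 0.4}))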

Comments in the discussion:

It seems highly relevant to concentrate on Explainable AI and then on further steps towards a truly interactive system that clarifies confusion in a timely manner:

  • How quickly can we learn from errors? Neural networks remain complex structures internally, but one can learn at a higher level, on an input/output basis, which should be sufficient.

  • How does the engine work? There is no interest in the internal details; you just need to make sure you can predict its behaviour.

  • One opens the system to dangers (malicious input / cyber-attack) – but one can record errors and learn from them for the future (see the logging sketch below).
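
A minimal sketch of what such recording could look like – the record fields, file name and decision labels are assumptions for illustration, not part of the demonstrator (Python):

    # Illustrative only: record each AI classification together with the operator's
    # decision, so errors can be analysed after the mission and fed back into
    # training, certification and the lessons-learned process.
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class DecisionRecord:
        track_id: str           # hypothetical identifier of the sensor track
        ai_class: str           # class proposed by the AI (e.g. "SA22")
        ai_confidence: float    # reported confidence in [0, 1]
        operator_decision: str  # "confirm", "reject" or "request_more_data"
        timestamp: float

    def log_decision(record: DecisionRecord, path: str = "mission_log.jsonl") -> None:
        """Append one record per decision for post-mission review."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_decision(DecisionRecord("track-017", "SA22", 0.82, "request_more_data", time.time()))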

A question is raised on the context and type of scenario (identification of a target) and on how to further embed this into a sequence of actions (e.g. counterattacks). Response: The scenario was a very first step – in a real-life situation, of course, we would need more here (e.g. SIGINT / the electro-magnetic spectrum, etc.).

Overlap of detection and discrimination (also a normative issue): the evaluation of a target / the ROE when it comes to discriminating between civil and military targets. The reasons for the discrimination have to be touched upon to really understand it and to understand why the discrimination is important for the targeting cycle.

Response: AI in this case helps with classification only; it is true that discrimination is more difficult – knowing what we do not know is a next step. A further question could be: How can AI assist discrimination?

For this we would need clearly defined identification criteria. E.g. is a person carrying a weapon or not? Can we see whether the person takes part in the hostile action or not?

Definition ICAC: You are not allowed to attack someone if he/she does not carry a weapon

→ Task for the group: enlighten and further specify the issue of discrimination!

The gap between quantity and quality of information: the percentages given in the scenario are still complicated for target confirmation; the ROE might decide this (what does a percentage mean? Engage or not engage?).

Legal aspect – evidence vs. probability: the ROE will never set you a clear percentage and tell you when you can confirm the target or not → you have to be sure (you need to decide whether this is 70 or 80 %). Important: the person acting is legally responsible.

From an operator's view: even without AI assistance, the choice is always based on probability (e.g. under difficult conditions such as stress, a very quick reaction needed, or fog, you can never be sure anyway).

It is highlighted that this is only a first demonstration to trigger the discussion; it is maybe still far away from a real technical solution, for which we would need to take the input of percentages, refit it and enlarge the data base. We could then replace evidence with probability (e.g. “90% of the users say XX”).

Also to keep in mind: human behaviour is to assume at 90% probability that it is the target; a lot of factors make it difficult to know for sure (context, environment, behaviour, …).
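
As a purely illustrative sketch of the point that a percentage alone cannot carry the engage decision – the threshold values below are assumptions, not ROE – the AI output can be reduced to a recommendation that always requires the operator's decision (Python):

    # Illustrative only: map an AI confidence to an advisory recommendation;
    # the engage decision always remains with the responsible human operator.
    def recommend(confidence: float, roe_threshold: float = 0.9) -> str:
        """Return an advisory string; thresholds are assumed for illustration."""
        if confidence >= roe_threshold:
            return "recommend CONFIRM classification - operator decision required"
        if confidence >= 0.5:
            return "uncertain - request additional sensor data or a second opinion"
        return "recommend REJECT classification - operator decision required"

    for c in (0.95, 0.7, 0.3):
        print(c, "->", recommend(c))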

Questions:

A) In the scenario, regarding the percentage that it is an SA22: there is no explanation of why the AI thinks this. What are the elements? Will this be included?

B) XAI results are based on limited data (an issue in the military domain): Can the “X” (explainability) help to solve that issue or help to evaluate its correctness?

On A: Automated target recognition has already been developed by Airbus; our take is to make it explainable. This is a mock-up of what we can expect; there you see some explainable data.

On B: It is important to make clear the confidence level of the AI and its consistency with reality.

On A: The expectation for the operator is to be very well trained in the solution, i.e. on what the percentage means. In-depth training beforehand on the criteria / levels of confidence etc. is needed to understand the outcome / advice that the AI provides → where is one involved as an operator? Where do I need clear guidance or a decision? Educate operators on how to use it in the mission.
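
As an illustration of what an operator-facing, explainable classification result could contain – all field names and feature descriptions below are hypothetical – a minimal sketch (Python):

    # Illustrative only: a classification result that carries the "why" behind the
    # percentage, so that training and in-mission use can refer to concrete elements.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Explanation:
        feature: str          # e.g. "turret/radar silhouette match" (hypothetical)
        contribution: float   # signed contribution to the confidence

    @dataclass
    class ClassificationResult:
        proposed_class: str
        confidence: float
        explanations: List[Explanation] = field(default_factory=list)

    result = ClassificationResult(
        proposed_class="SA22",
        confidence=0.82,
        explanations=[
            Explanation("turret/radar silhouette match", +0.35),
            Explanation("wheeled chassis length", +0.20),
            Explanation("low image resolution", -0.08),
        ],
    )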

Question: What was the process; how did this demo develop?

The complexity of the scenarios will need to be taken into account. For the one simplified scenario here it is fully understandable – however, if you think of distributed sensor systems and influenced data sets that come together with other operators, more sensors and assets (as planned for a FCAS), this is going to be more complex. We need to further enhance this demonstration and, of course, add complexity step by step.

It was chosen as a concrete example and discussed with the German Air Force, which confirmed that without AI assistance it is hopeless to recognize the target here. However, of course, it only shows the principle.

Recommendations:

  • Be careful with the communication of percentages; it will cause discussions and irritations. For some observers 80 % is high, for others low …

  • Utility of the demo: This is best judged by an operator. Will such a tool improve my work? Is the error rate getting smaller? Are we creating new errors? Etc.

  • It will be important to explain better to the public why we can/should rely on these technologies. It is all about trust.

  • Our findings should be spread on a broader scale. Right now, car engineers are fascinated by the specifics of engineering, not by how to use a car safely (norms). Different ways of thought, incl. philosophy, are needed to put the notions in the right order; computer scientists should not be by themselves, we need to have other voices, and we must find broad consensus on how to deal with AI in defence. We build cognitive machines; certain parts of mental processing are taken over (“glasses in your brain”). In history, for example, the telescope was such a “new thing” that there were requests for a framework on how to use it safely.

TOP 5: Way ahead

Suggestions:

  • In 6 months, Germany will have a new government and new political actors – bring in some new people from the Berlin scene? An update on the technology? Bring in international law? Computational assistance? Further work towards the “Europeanization” of the group.

Three points on which to be more precise:

  • We need to be more precise on what operational capability we want to show and how it fits into this context. Further: Why is AI the tool of choice? What would the operational capability look like without AI? Could you still do it? In what quality?

  • Ethical challenges in the scenario: what precisely is the ethical challenge?

  • What process was used for this problem statement? What methodology is behind it?

Regarding the specific ethical problem in AI, what we could do:

  • We need common definitions of what ethics is, of ethical judgements in this specific context, etc.

  • We could invite legal advisors from the armed forces with concrete operational experience, e.g. on the issue of legal expertise.

Further comments:

  • The discussion should not drift into the question of whether we will have AI, yes or no; AI is all over already (any sophisticated technology will incorporate AI, already now and surely tomorrow), but the need is to explain it (call it rather a “support system”, if necessary).

  • AI is a supporter in decision-taking.

  • First you need to know what the decision points are; then think about where the AI comes in: divide between decision points and controlling processes, then choose where to put what AI.

  • We will need to translate this rather technical debate into understandable language, as it is in large parts a societal debate with a lot of different opinions.