Protocol of the working-group on the responsible use of technologies in a “Future Combat Air System” (FCAS)

Friday, 2 October 2020 09:00 to 16:00 (Berlin)

Agenda item 1: Welcome (Florian Keisinger, Wolfgang Koch)

Florian Keisinger opens the meeting, welcomes the participants and presents the agenda and notes on the proceedings.

Agenda item 2: Where do we stand with FCAS? Status update on the technology development (Thomas Grohs)

Thomas Grohs
Chief Architect FCAS, Airbus Defence and Space

Thomas Grohs gives a status update on the technical situation in FCAS. A discussion will follow after the next presentation (agenda item 3).

What is it about? The Future Combat Air System (FCAS) follows the concept of the Next Generation Weapon System (NGWS), i.e. it pursues a cross-platform approach consisting of Remote Carriers, Combat Cloud and Next Generation Fighter, whereby the complementarity of the individual capabilities of these platforms must be ensured. It should also be able to respond to new types of threats in a holistic manner. The ambitious project is embedded in a European and international environment, surrounded by existing and new systems and platforms. Especially in the NATO alliance, interoperability is a must. FCAS is also embedded in various dimensions, including land, sea and space, as well as cyberspace. This means that effective protective measures against cyber-attacks must be developed, so the question of self-protection is very complex and the networking of various resources is therefore becoming increasingly important.

How will FCAS be implemented? FCAS is anchored in a timeline following the incremental approach. We do not want to wait until the system is fully developed before it is implemented, but rather intend to gain practical experience from which we can learn. From 2025 onwards, the aim is to achieve basic connectivity that can be shared by every core FCAS nation (i.e. France, Germany and Spain). In addition, FCAS, as an NGWS facing a highly dynamic threat situation, should offer a holistic solution that digitally relieves and supports the operator in generating the situation picture and in the decision-making process based on it.

Status of the study and demonstration phase: Since February 2018, a tri-national “Joint Concept Study” (JCS) has been carried out and demonstrators are to be used to validate the technologies that are on the critical path. Ensuring the commissioning is a key objective. Here, the French approach of an early practical implementation of developed concepts and the theoretical developments and simulations initially pursued on the German side must be brought together in a joint coherent approach. The demonstrators comprise various pillars: the Combat Cloud, the Next Generation Fighter and the Remote Carriers. The remote carriers are unmanned assets (assistance drones), which are intended to provide additional capabilities in line with the fighter in an agile and demand-oriented manner. The aim is to find out how the assets should be managed, i.e. how the situation assessment can contribute to the successful implementation of the mission and which general conditions play a role. Here, a basic trust of the end user in the decision proposals for the mission is essential, i.e. a basis of trust must be created on which a pilot can base a decision. The issue of increased speed of decision making is extremely important, but it must be understood that the system is under the ultimate control and responsibility of the human operator (the so-called human in the loop), who is only supported by digital applications. This enables the operator to make faster and more informed decisions.

The OODA Loop: The OODA strategy should also be used as a guideline for the flow of information in FCAS. The concept consists of the following four aspects:

  • Observe: First, the sensors record where the operator is and in what situation. This enables a digital picture of the situation to be produced, whereby uncertainties must be taken into account, especially when predicting future events. This gives the operator an improved perceptive faculty.

  • Orient: Here the focus is on how the mission can be implemented or has to be adapted in the context of the current situation. Digital support brings a lot of efficiency here and can also support redundancies, e.g. to compensate for the failure of a supporting asset.

  • Decide: It must be considered very carefully to what extent the system can or may take decisions from the operator. However, there is no ethical guideline for this so far. Perhaps it is even possible to deliberately include interruption points to actively challenge decisions. Legal and ethical areas also play a role; the technical side should provide what is needed.

  • Act: Actions are based on the decisions made, but the situation must be constantly monitored. Speed is a key skill here, which is why the so-called “teaming intelligence” is important as a support. The system should only take decisions at the lowest possible level (“Lowest possible decision taking”), but more far-reaching decisions may have to be taken at a higher level, which brings legal and ethical aspects into play.
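The four phases described above can be illustrated with a minimal, purely schematic decision-loop sketch. All function names, data shapes and thresholds here are hypothetical illustrations, not part of any FCAS design; the point is the "human in the loop" principle discussed above: routine decisions are automated at the lowest possible level, while significant decisions are escalated to the operator for confirmation.

```python
# Schematic OODA loop with a human-in-the-loop gate.
# All names and thresholds are illustrative only.

def observe(sensors):
    """Fuse raw sensor readings into a simple situation picture."""
    return {"threat_level": max(sensors), "readings": sensors}

def orient(picture, mission):
    """Relate the situation picture to the current mission."""
    return {"picture": picture, "mission": mission,
            "needs_adaptation": picture["threat_level"] > mission["tolerance"]}

def decide(assessment, confirm):
    """Automate only low-significance decisions ("lowest possible
    decision taking"); escalate the rest to the human operator."""
    if not assessment["needs_adaptation"]:
        return "continue"                       # routine: no escalation
    proposal = "adapt_mission"
    return proposal if confirm(proposal) else "hold"

def act(decision, log):
    """Execute the decision; monitoring continues in the next cycle."""
    log.append(decision)
    return decision

def ooda_step(sensors, mission, confirm, log):
    return act(decide(orient(observe(sensors), mission), confirm), log)

log = []
# Routine situation: automated "continue", operator not consulted.
ooda_step([1, 2], {"tolerance": 5}, confirm=lambda p: True, log=log)
# High threat: proposal escalated; here the operator declines it.
ooda_step([9, 2], {"tolerance": 5}, confirm=lambda p: False, log=log)
print(log)  # ['continue', 'hold']
```

The `confirm` callback stands in for the operator; in the routine case it is never invoked, which mirrors the idea that the system decides autonomously only at the lowest level.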

Summary: The industry side tends to fall back quickly on technical discussions, although political and strategic aspects must also be examined. It is also very important and interesting to consider technical and legal arguments together. Airbus colleagues have started, on their own initiative, to find possible solutions for the topic of technical responsibility and to examine them analytically. They intend to develop a white paper by the end of the year and present it to this group. It will cover topics such as digital transparency and confidence building. A process of habituation through trial and error must take place, both personally for each individual and for the nation, with “Rules of Engagement” for implementation.

Agenda item 3: A strategic threat analysis for 2040 (Dr Bruno Kahl)

Dr Bruno Kahl
President, Federal Intelligence Service (BND)

While we can make long-term forecasts for specific world regions and individual subject areas, it is not possible to predict with absolute certainty which of these developments will ultimately be most significant in geopolitical terms. We can nevertheless identify a number of megatrends that will have an impact on German security policy, requiring changes to our security architecture and consequently to the demands placed on our armed forces.

One fundamental megatrend that has been apparent for some time is that the world will not become a more orderly and organised place over the next 20 years, but rather much more diverse and, as a result, even less safe. Further megatrends include continued technological progress; a shift in economic strength between Europe, China, India and the USA; demographic development and change; climate change and a scarcity of resources; migration; and, despite the number of isolationist and nationalistic movements around the world, an advance in globalisation.

Each of these megatrends will have specific ramifications for military operations and the nature of military conflicts in the future. The protagonists of the 21st century will be faster moving, more sophisticated and more volatile. A large mix of state, non-state and quasi non-state actors will influence the battlefield. This will make it more difficult to distinguish between friend and foe.

The future will present new and different forms of conflicts. This applies to a broad range of hybrid threats and conflicts triggered by scarce resources or climate change. We are experiencing a ‘socialisation’ of war: military operations will increasingly take place in urban settings, affecting large segments of populations as a result.

The cyber and information domain will continue to gain significance – especially when conflict dynamics and threat situations are complex or unclear. Social media and fake news will become increasingly prevalent, triggering wars of narratives. Digital asymmetries between the various parties involved, as well as an overall increase in mobility, will end up facilitating the proliferation of powerful and effective technologies to dangerous levels.

Digitalisation, automation and technological innovation will give rise to a ‘glass battlefield’ that – in contrast to the complex and opaque ‘fog of war’ of earlier eras – will shed light on the current situation and deliver information that is highly beneficial to tactical decision-making. Overall, the conflicts of the future will be more hybrid and interconnected affairs without geographical borders or temporal limits.

We need to know the future development strategies of other armed forces so that we are adequately equipped and prepared. Here, we will take Russia and China as examples.

From the Russian point of view, the major powers in today’s multipolar world are not engaging in direct military combat but rather in a rising number of local and regional conflicts. At present, these kinds of conflicts are more likely to escalate militarily than a large-scale war between major powers is to break out.

Russia is investing in new weapons, technologies and tactics, and is even testing them in the field – for example in Syria – to serve as a more credible deterrent. Air and space are becoming more important for the Russian armed forces in the process.

The hybrid and mostly covert methods of influence exerted by Russia are gaining ever-greater significance, while moral, ethical and legal considerations are taking a back seat behind tough realpolitik using any method available. At the same time, the Russian system retains a consistent structure, regardless of Putin: the security authorities and military still form the centre of state power. In view of this continuity, the conflict with the West continues to be a defining factor in the system.

Even the multipolar world order can work to Russia’s advantage. The USA’s geopolitical focus on Asia once again gives Moscow the opportunity to exert more influence on Europe. At the same time, however, Russia is still keeping a very close eye on the rise of China, its Eastern neighbour. From a defence perspective, Russian weapon systems are expected to have a technological edge over China until 2040.

In turn, Beijing’s security policy and military focus are likely to remain regional until 2040. Conflicts over sovereignty and territory on China’s direct periphery, as well as its rivalry with the USA in the Asia-Pacific region, will remain major influential factors in this regard.

The further rise of China under the leadership of the Communist Party will continue to play a decisive role in the world order up to the mid-21st century. President and party leader Xi Jinping has already declared China’s intentions to become a great power, on a par with the USA, by the 100th anniversary of the People’s Republic in 2049. Washington will therefore remain the benchmark for Beijing and its own foreign policy ambitions and security policy.

The goal of China’s military doctrine is to contain the USA’s position as the dominant military power in the Asia-Pacific region in the medium term and to displace it in the long term. In this context, the People’s Liberation Army is expected to be gradually reformed so that, by 2049 at the latest, it is capable of winning high-intensity regional wars in the information age.

Due to weapon exports, it is conceivable that by 2040, complex Chinese weapon systems such as combat aircraft and ground-based air-defence systems may be found in countries neighbouring NATO member states.

In terms of security and defence policy, it is therefore in the interests of Germany and our European and transatlantic allies to know how Russian and Chinese defence exports in our neighbouring European or African regions could contribute to the international threat situation.

Overall, from a German point of view in general, and from the perspective of the Bundeswehr and the German Federal Intelligence Service (BND) in particular, an important question will be how we can better protect our soldiers against enemy threats during missions abroad.

Based on the strategic threat analysis for the decades ahead, drawn largely from BND intelligence findings, policymakers will not only have to make foreign and security policy decisions, but also develop European industrial solutions with adequate technology and equipment.

Discussion on agenda items 2 & 3:

Dr Ellen Ueberschär

It is discussed whether digitalisation now enables the “little man” to build weapons and whether this is not highly dangerous in the context of international terrorism. At the same time, however, it is stressed that the exploitation of digitalisation by individuals is not the central problem; the capabilities of state actors such as China are to be classified as more dangerous for the Western world in this respect.

Cyber-threat and virtualisation are also mentioned as potential threats. Possible threats are posed by the manipulation of public opinion, technology espionage and cyber-attacks on critical infrastructures, with many of these activities going unnoticed. Companies are also affected and are therefore more alert to the trustworthiness of their staff.

The external impact of FCAS is also discussed, as well as whether there are efforts to achieve similar technological developments in China and Russia. In general, FCAS activities are being followed with great vigilance. There are efforts to develop independent systems and to anticipate responses to Western technologies. The question of ethical considerations on the Chinese and Russian sides is answered in the negative; the priority there is “survival”.

With regard to the ruling of the Federal Constitutional Court of May 2020, there is a discussion about the significance of a stronger integration of Articles 5 (freedom of opinion, art and press) and 10 (secrecy of letters, post and telecommunications) of the Basic Law for the activities of the BND, and also for the FCAS project. It is pointed out that the ruling is of particular importance for the BND's activities abroad and also for cooperation with other intelligence services.

Furthermore, Russian efforts to destabilise the Western systems are discussed, and whether there is a unified spectrum of actors in this field. It is argued that the Russian system is centred on the policies of Putin and the Kremlin. Putin is pursuing two strategies to render the Western system of freedom and prosperity harmless: strengthening his own system on the one hand, and minimising Western attractiveness on the other. It is also commented that Putin strives to increase Russia's power through potential threats, i.e. through rearmament, and to exert controlled influence on neighbouring states in local conflicts.

Another inquiry relates to the possible effects of global climate change on conflicts, and what this means for military technologies. It is generally stated that the scarcity of resources caused by climate change holds enormous potential for conflict. For example, the struggle for water in Africa results in asymmetric conflicts, with the parties often not having the appropriate means to conduct such disputes.

The described tendencies to shift future conflicts into society and cyberspace motivate the question of how an FCAS system could make valuable contributions here on the one hand, but could also itself become the target of attacks on the other. It is explained that FCAS does indeed address cyberspace and urban space and that individual technology modules could be used there, so a link to a multi-domain combat cloud is conceivable. Of course, protection against cyber-attacks is indispensable.

In the context of a Multi-Domain Combat Cloud, it is noted that certification will play a role, especially with regard to the difficult and highly dynamic field of artificial intelligence. It is replied that certification schemes have to be defined for this purpose beyond the security aspect, especially on the international level.

From the perspective of the Bundeswehr (German armed forces), a central objective of an FCAS system is to cover a broad spectrum of threat types beyond cyberspace, so national cooperation must also be sought for this. Networking and cross-domain solutions in a system of systems would be a central benefit of FCAS in this respect.

Another question is raised as to the risk of future dependencies, particularly with regard to digital infrastructures. Europe is integrated into the US system, while China is digitally separated. It is assumed that globalisation can no longer be stopped and that global networking is being sought, but the self-isolation of totalitarian powers is both realistic and problematic.

Furthermore, the issue of redundancy in FCAS is addressed: aircraft certification is subject to strict requirements on the German side, which provide for quadruple redundancy. At present, there are no concrete specifications on the subject of command and control or monitoring. However, FCAS is only at the conceptual stage at the moment.

At this point, a question is raised about the Observe aspect of the OODA loop and why it does not appear in Manned-Unmanned Teaming. It is argued that teaming intelligence is strongly based on observation, but that in the other cycle elements the algorithms concentrate on the tactical area.

Finally, the international exchange on ethics between the FCAS partner states is discussed. From an engineering point of view, there is no official exchange so far, neither with Spain nor with France. Culturally and historically, France and Germany have very different approaches. Defence and security are much more central political issues in France than in Germany, while defence-related issues are much more controversial in Germany. However, there is a great effort to build bridges as soon as possible.

Agenda item 4: Constructive conflict culture (Military Bishop Franz-Josef Overbeck)

Bishop Franz-Josef Overbeck presents his thoughts on responsibility in a conflict culture.
(Note: The full text is also published on the website; this is just a summary.)

What about the responsibility perspective on the aspects mentioned so far? We find ourselves in a world of the scientification and mechanisation of the military, and this raises the question of who bears responsibility for action. The more lethal the technologies are, the more we need to know what we are doing. We have to be aware of the purpose of the technology and insist on an ethos of technical responsibility. In the Christian sense, this is a responsibility before God and humanity that should serve peace.

The structural potential for conflict has increased. Here the destructiveness of the classical conflict must be avoided, since man also has responsibility for technical systems and here too is called upon to do good and avoid evil. Especially in armed military conflicts, it is relevant to work in an ethically constructive way and to de-escalate conflicts. Soldiers must serve a universally conceived peace. The concept of conflict must be grasped in its complexity socially and politically. Conflicts can have a productive force by making mistakes and grievances visible, thereby enabling joint solutions. Change and the preservation of what exists are not mutually exclusive; familiar orders must be challenged so that processes can be rethought and change initiated.

One finds a long tradition of Catholic and Protestant ethics, which, similarly to the Enlightenment, calls for reflection and autonomy. What does this mean for the individual or the person in charge? The decisive aspect is a constructive conflict culture, i.e. one should not only recognise the negative aspects of a conflict but also its productive potential. It is also important to reflect on one's own conscience because inner attitudes must be reflected upon in order to be able to assume co-responsibility. To this end, it would be a good idea to return to the concept of classical virtues as a guardrail for ethics. Especially the discussion about controversial topics and the search for connecting links requires wisdom, courage and bravery. Humans have the ability and the will to do good, and therefore cannot and must not avoid the question of right or wrong. However, one must first find one’s own moral identity. In the religious sense, identity means the encounter between God and humankind, but one can also speak here of conscience or autonomy in the sense of conscience.

An ethical judgement should not only be derived from general norms, but also be reflected upon when applied to norms. However, the recognition of norms alone does not replace the application of conscience since the moral existence of the judge is central here. If one has to violate one's own ethics, one thereby betrays one's moral self-determination. Therefore, one must be able to refer back to one's conscience under unclear ethical conditions and reflect the burden of judgement on oneself. Respecting the right to freedom means that one does not have to stand against oneself.

The aspect of autonomy of weapon systems raises questions of ethics and also of international law. The human decision maker could easily disappear behind an autonomous system, and the emotional disconnection of autonomous systems exacerbates the ethical conflict. We must not forget that we have a duty of care towards both combatants and civilian victims in every mission, especially if we do not know exactly who we are actually harming. If full autonomy were ever possible, there would be no one who felt responsible because then there would be no clear assignment of responsibility. Killing a human being is never legitimate, especially if nobody can take responsibility for it. Therefore, we have to ask ourselves where the human being should be in the hierarchical chain of decisions; this is not only a systemic but also a personal question.

The connection between human beings and robotics is parallel to the connection between conscience and the doctrine of virtue, i.e. the ability to act on one's own responsibility is central to identifying individuals who can be held responsible for harming or liquidating people. Dehumanising war without a sense of responsibility runs counter to the endeavour of peacekeeping.

Discussion on agenda item 4:

It is discussed how military chaplaincy supports the sharpening of the soldier's conscience and how the ethical aspects mentioned above can be operationalised. It is explained that soldiers are taught responsibility at an intellectual level through life lessons. Conscience is always at the centre of this, although some soldiers first have to develop their own ethos, i.e. to keep an eye on their fellow human beings and to avoid creating victims of violence. The ethical valence of Christianity is also understandable for non-believers.

Furthermore, the question is asked whether a globally defined ethical system for algorithms in the sense of Virtues of Algorithms is conceivable, parallel to the concept of human Virtues of Mind. It is pointed out that virtues in the Anglo-Saxon tradition are defined differently from the German concept of virtue (Tugend). However, there must be a difference in quality between human beings and algorithms; virtues cannot be attributed to the latter. As long as a human operator intervenes, one can apply one’s own ethics and become aware of the God-human relationship and the finiteness of existence; it is not obvious how this capacity could be transferred to algorithms.

An assessment of the facts discussed follows, from the perspective of the Bundeswehr. Many soldiers are reached by the ethics debate; here the active experience of missions contributes considerably to the discussion. With regard to the question of autonomous weapon systems, it is emphasised that a distinction must be made between autonomy and automation. We are already surrounded by a high degree of automation because current systems can no longer be controlled simply by hand. We therefore need an ethically based technology that includes the human in the loop, and we need to ask ourselves how automation rules can be implemented. Such criteria would have to be developed in advance; the technology would then be evaluated against them and the desired result ethically supported. The task of this discussion group is to work out, in the context of FCAS, the framework in which this is possible.

Agenda item 5: Interim conclusion by Nora Bossong / discussion

Nora Bossong

What is free will and what is it to be human? The automated decision stands in opposition to the human decision. What remains of humanity when automated decisions become rampant? Here we have both a legal and an ethical framework, but it is also important to discuss these questions in society.

War or peace, what should I do? As an author, I have to plead for peace, but the global dynamics have shifted. We must ask ourselves what non-intervention would cause. The purely pacifist view of the 1970s and 1980s seems no longer tenable in the present day. The German position of “never again” is very specific due to its history, but the consequence remains open to interpretation, because one can also argue: never again, and precisely because of this. Looking at Africa, the example of Rwanda in 1994 shows that the non-intervention of the United Nations led to another huge genocide, the most terrible of the post-war period – if one applies the term post-war period to the time since the Second World War.

What can I know? One of the main tasks of the broad debate is to counteract the spread of fake news and to contrast it with a culture of discussion in order to avoid slipping into radicalised, polarised camps. A healthy and well-fortified civil society can make its contribution here.

What about the global challenges? Often, only Russia and China are cited as opponents of the liberal Western countries; here the Western world likes to cast itself as a spotless “jack of hearts”. But when it comes to Africa, where resource exploitation and destabilised regions are at stake, the West's white waistcoat is at least eggshell-coloured. Military means in African countries are quite rudimentary and far from FCAS, but they can be very powerful. Here, the realities of warfare are very different, especially when we think of the use of child soldiers. Frequently, such crisis regions are deliberately kept unstable in order to gain access to needed resources. One has to think carefully about how to deal with such countries and regions, as we are not dealing with good democracies in the Western sense.

Agenda item 6: Value-based Engineering for Ethics by Design (Sarah Spiekermann, University of Vienna)

Prof Dr Sarah Spiekermann
University of Vienna, Head of the Institute for Information Systems and Society

Ms Spiekermann presents her methodological concept of value-based engineering and embeds it in the context of a development such as FCAS. Her thesis is that value-based engineering must also include organisational engineering, a process of identification with values that accompanies the development of functionality.

The term “ethics” can be very implicit and personal; a nonprofessional would possibly equate it with morality. For Ms Spiekermann, the concept of ethics goes further: it is the theory of what is good and right - in everyday life, in the role of the soldier, the engineer, the general - which is why, in her opinion, a military project like FCAS cannot do without this question.

FCAS cannot be seen in isolation from the IT problems of the past 10-20 years:

a. Data quality is a major problem for the reliability of AI systems, as is software quality (the ability to “clean up” is limited, as the example of the roughly 2 billion “fake accounts” on Facebook shows, which could only be removed in one sweep after six months).

b. Hardware quality: how do I integrate hardware and software?

c. Business model: when looking at suppliers in a complex system of systems who are under financial and time pressure, the question arises of how many of the requirements are actually implemented (a corporate culture of responsibility vs. money).

d. Sustainability in the supply chain: resource scarcity is an issue, so FCAS should not make itself dependent on supplies that do not come from Europe. Such projects often involve a battle over materials. China has already invested heavily in Africa for these purposes, but do we want to be dependent on a Chinese supplier in 20 years? In wartime, such a supply chain is not viable. With some 10,000 companies in South Africa, China accounts for 90% of the rare earths.

The industry is overwhelmed by these problems, which is why companies are desperately seeking principles that they can sell to the press. But do we serve ethics with these “industry” lists? We have to manage to think along with the fundamental systemic problems so that the involved actors are able to act according to their conscience in the medium and long term. The machine must obey what the actor wants to carry out in an ethically responsible manner.

The concept presented thinks through fundamental points and tackles them at the root, in order then to shed light on the system environment and enable the actors to act according to their conscience. The concept recommends that values be developed together with the working level and then prioritised by the management (core values). One aspect of this is to keep the objective in mind in order to find the right path. With FCAS the question arises: is it about protection or is it about power? Sometimes attack is not the best defence; this remains a dilemma for Ms Spiekermann.

Our environment (e.g. a pacifist Europe) can change and is affected by different stakeholders, which is why the so-called core values (e.g. trust) must be shared among the partners and be anchored in the system. When we break into a biosphere, we create harmonisation / standardisation, including in the military sector. Data protection, for example, is only one solution that responds to a self-created problem. Instead, we should work systemically and start at the root or causes, which is why a system design should be properly prepared from the very beginning. Value-based engineering should create a concept similar to ISO standards in a company, which everyone knows today. In this process, different considerations are made:

  1. Definition: What is at stake? What is the System of Interest? How far does it go and what are its direct and indirect stakeholders?

  2. Environment: How can we choose partners over whom we still have control tomorrow? Do we have hostile structures as neighbours? Who has access to other systems?

  3. Resources: Will we still have enough resources tomorrow to build these systems? FCAS is not a programme to be disposed of and sold off “just like that” through a venture capitalist. Such a project is a long-term endeavour; context and value analyses have to be developed and adapted over time. Therefore, it is good to start this dialogue for FCAS today.

  4. Involvement: How can we involve (also indirect) stakeholders right from the development phase, for example nature, society, tribes, citizens? Experts who will later be users and stakeholders of the product should already be involved in the engineering process, even if companies are sometimes reluctant, preferring to carry out their planned innovation stringently without considering counter-arguments.

  5. Situation analysis: Chinese thinking is characterised by the principles of preparation and analysis, whereas we are always very much focused on a goal, to which we then find a certain way with certain means. We assume that this will work. In this context, we are dependent on what IT can achieve.

  6. Context independence: Values are independent of context. Some principles of “being meant to be” are given a priori; this gives meaning to life. It is a striving for attractiveness that is human. Categories of values can be divided into system values, utilitarian values and virtues. Virtues are values that are translated into character and without which, for example, disputes cannot be fought.

Ms Spiekermann pleads for formulating personal values and for developing and prioritising the system's values on the basis of the categorical imperative: as a human being, and in association throughout history, everyone has the right to stand for what they do, which is why prioritising values is of great importance. We can not only feel values but also rationalise them. To do so, we must break down value qualities and give them names. This is a biblical task, which of course is not easy.

A critical view of the system in the innovation process is empirically grounded; such a critical view should not be confused with classical product roadmapping, in which the technology itself is often glossed over. This view makes it possible to see ten times more things that can go wrong. If we anticipate them before we build, we are much more resilient. At the same time, this process leads to a better quality of ideas.

“We live in a world steeped in values” - this is what these processes allow us to see. A mapping often brings together up to 200 values, which need to be prioritised and then brought together with all stakeholders. Ethical requirements can be translated into system requirements - in three ways: directly, via Risk Based Technical Design, or with an interim solution, the Iterative Software Development Approach. In order to build the system, we only have to bring together technical and ethical aspects.

Discussion on agenda item 6:

It is discussed whether this process model could achieve a global consensus. Even if it is easy to “work through” and the process itself has no cultural particularities, the results (virtues/values) would of course differ widely.

Moreover, this process should be iterative, because it does not work to commit to fixed ideas at the beginning and never deviate from them. The long-term view is considered somewhat difficult, as a long-term system analysis would overlook many new developments (so-called black swans). A remedy could be an Ethical Value Register that documents decisions. If expectations are not confirmed, new priorities for new aspects could be introduced (e.g. “privacy” for Facebook) and then managed in the system.
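Such an Ethical Value Register could, purely as an illustration, take the shape of an append-only log in which superseded priorities remain documented rather than deleted. The following sketch is hypothetical; all names and fields are invented and not part of the FCAS programme:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ValueDecision:
    """One documented design decision and the value it serves."""
    value: str             # e.g. "privacy"
    decision: str          # what was decided
    rationale: str         # why, at the time of the decision
    decided_on: date
    superseded_by: "ValueDecision | None" = None  # set when priorities shift


class EthicalValueRegister:
    """Append-only log of value-related design decisions."""

    def __init__(self) -> None:
        self._entries: list[ValueDecision] = []

    def record(self, entry: ValueDecision) -> None:
        self._entries.append(entry)

    def revise(self, old: ValueDecision, new: ValueDecision) -> None:
        """Introduce a new priority without deleting the old record."""
        old.superseded_by = new
        self._entries.append(new)

    def current(self) -> list[ValueDecision]:
        """Only the decisions that have not been superseded."""
        return [e for e in self._entries if e.superseded_by is None]
```

The point of the append-only design is exactly the one raised in the discussion: when expectations are not confirmed, a new priority is introduced and managed in the system, while the history of why the old decision was taken remains auditable.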

It is asked how the constraints of time and money could be reconciled in FCAS, as they definitely play a role. A parallel is drawn to utility analysis, in which design decisions are likewise based on prioritisation and a weighting of sub-goals together with indicators is established. It is argued that the business framework is a real challenge in such considerations: today's financial markets and economy are not made for it, because a cost strategy is pursued instead of a quality strategy. As a counter-argument, it is noted that it is sometimes sufficient to record decisions in writing, to take them top-down and to sign them jointly. A simple ranking of the values, or the integration of existing organisational values into the product, could also help. Systems can never be created perfectly, so an ethically perfect product is not achievable either. The goal should be to do a good job that we can stand behind.
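The utility analysis mentioned above, in which sub-goals are weighted and scored via indicators, can be sketched in a few lines. The weights, sub-goals and design options below are invented for illustration; they are not figures from the meeting:

```python
def utility_score(weights: dict[str, float], indicators: dict[str, float]) -> float:
    """Weighted sum of sub-goal indicators, normalised by the total weight.

    Each indicator is assumed to be scaled to the range 0..1 beforehand.
    """
    total = sum(weights.values())
    return sum(weights[goal] * indicators[goal] for goal in weights) / total


# Hypothetical design options scored against prioritised sub-goals:
weights = {"safety": 5.0, "cost": 2.0, "time_to_field": 3.0}
option_a = {"safety": 0.9, "cost": 0.4, "time_to_field": 0.6}
option_b = {"safety": 0.6, "cost": 0.9, "time_to_field": 0.8}

print(round(utility_score(weights, option_a), 3))  # 0.71
print(round(utility_score(weights, option_b), 3))  # 0.72
```

The instructive part is the weighting step: whether "safety" outweighs "cost" by a factor of 2.5 or 25 is exactly the kind of prioritisation decision that, as argued in the discussion, should be recorded in writing and signed jointly.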

On the question of whether FCAS means protection or power, it is noted that both are needed, because defensive as well as offensive capabilities are always part of it.

There is also some discussion on the definition of values. It is argued that the described method would ultimately only produce a so-called “industry list”, though perhaps based on bundled personal values, and that values are very volatile; it would be easier to stick to virtues. Together, the participants work out that one has to keep an eye on the user in this socio-technical system. The design of a system cannot be based on ethics alone; it must be considered in the requirements, the process and the applications. For the analysis, corporate principles must also be examined in terms of their starting point and context: if data protection is a value for a video surveillance product, it should be evaluated differently depending on whether it is a surveillance camera in a hospital, at a train station or in one's own home. In this respect, the System of Interest is backed by different intentions.

Agenda item 7: Man in the loop? Technical design for responsible use of an FCAS (Wolfgang Koch)

Photo of Professor Wolfgang Koch
Professor Wolfgang Koch
Chief Scientist, Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE

Wolfgang Koch would like to take up the thoughts of Ms Spiekermann and present his view on FCAS and possible steps to operationalise the ethical considerations made so far.

Man in the Loop. The key question of this lecture is the Man in the Loop; it should prepare the ground for good engineering. John F. Kennedy's moon speech of 1962 is also highly relevant to today's AI debate: we can easily replace the term “space science” with “artificial intelligence”:

“For space science [or artificial intelligence], like nuclear science and all technology, has no conscience of its own. Whether it will become a force for good or ill depends on man [...].”

Technology is ethically agnostic; we would have to teach it this first. The question of good and evil therefore depends on us.

Perception and action. Artificial intelligence, sensor data fusion, technical automation and resource management are tools to support human perception and action. Perception answers the question of what: detection, classification, object interrelations and decision relevance. Action answers the question of why: what goals one has and how one achieves them, according to what standards and by what means. This principle goes back to Aristotle.

Responsibility. The Bundeswehr is already taking the digital transformation into account and deals, among other things, with the targeted use of automated technologies under the premise of personal responsibility and free will. As General Wolf Graf von Baudissin already observed, scientification leads to an acceleration of military action; therefore, everything must be done to call for human responsibility again and again. This is a core idea of the Bundeswehr, which we want to master with FCAS.

Aspects of an FCAS. Dominance over the electromagnetic spectrum is crucial for FCAS. The world of algorithms is very large, more than just machine learning and deep learning - we have to master this world in FCAS. Missions must also be mentally and emotionally realisable by responsible people. Hard sciences such as mathematics, physics, and computer science play a major role here, but also “soft” aspects such as human ecology, as well as cognitive and work sciences, which make humans the object of knowledge. How do we create an anthropocentric worldview in the world of computing? One example of a “miniature FCAS” is the project Urban Close Air Support by Coordinated UAVs conducted at Fraunhofer FKIE. The aim here was to enable a cluster of drones comprising reconnaissance and combat drones. For legal protection, so-called Rules of Engagement were established which were implemented by means of Compliance by Design. The gained experience is also to be applied to FCAS, even if the issue is much more complex.
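Compliance by Design, as applied to the Rules of Engagement in the FKIE drone project, means encoding such rules so that every action request is checked against them before execution. A deliberately minimal sketch follows; the rule names, thresholds and request fields are invented for illustration, and real Rules of Engagement are far more complex:

```python
# Hypothetical, deliberately simplified rule set for illustration only.
RULES_OF_ENGAGEMENT = [
    ("operator_confirmation", lambda req: req["confirmed_by_operator"]),
    ("outside_protected_zone", lambda req: req["zone"] not in {"hospital", "school"}),
    ("positive_identification", lambda req: req["target_confidence"] >= 0.95),
]


def engagement_permitted(request: dict) -> tuple[bool, list[str]]:
    """Check a request against every rule; return the verdict and any violated rules."""
    violated = [name for name, rule in RULES_OF_ENGAGEMENT if not rule(request)]
    return (not violated, violated)


request = {"confirmed_by_operator": True, "zone": "open_terrain", "target_confidence": 0.97}
ok, violated = engagement_permitted(request)
print(ok, violated)  # True []
```

The design choice worth noting is that the check returns which rules were violated, not just a yes/no: this keeps the human in the loop informed and makes every refusal auditable, in line with the legal-protection motivation described above.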

The seven pillars of digitisation are:

  • The zoo of algorithms - this includes much more than just machine learning.

  • Data - the resource par excellence that is needed for training and testing.

  • Art of Programming - here we focus on the quality of our personnel rather than quantity.

  • Computing Power - in the future, quantum computers will play an increasing role.

  • Anthropocentrism - the human being is in focus.

  • Push, Pull, & Realize - the game between supply and demand.

  • Joint and Combined - learning from each other in a multinational context.

Man-machine dichotomy: In the Bundeswehr, the aim is not to choose between human and AI, but to ensure the best possible performance of tasks through a clever combination. This means that mind and will must be supported by the provided technology. We are talking about the fundamental dichotomy of “something for somebody”. Armed forces become a danger when things are placed above people.

Deep Learning vs. Bayesian Reasoning: The deep-learning paradigm answers only the question of what, not the question of why, so such an AI cannot simply be certified or used for target selection and engagement. The Bayesian reasoning paradigm, on the other hand, can use contextual knowledge such as a building plan to make estimates more accurate; ethical contextual knowledge could be included just as well. Dealing with artificial intelligence certainly requires human intelligence, because AI coupled with stupidity is a great danger. It is therefore important that the deep learner is not left alone; it always remains only the second-best solution. So-called poisonous noise can easily mislead an AI, so we must address counter-AI to avert attacks on our decision-making processes.
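The contrast drawn here, between a sensor output taken on its own and Bayesian reasoning that folds in contextual knowledge, can be made concrete with a toy example. The hypotheses and all numbers below are invented; real fusion systems are far richer:

```python
def bayes_update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """Posterior is proportional to prior times likelihood, normalised over the hypotheses."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalised.values())
    return {h: p / z for h, p in unnormalised.items()}


# The sensor alone is ambiguous between two hypotheses:
likelihood = {"vehicle": 0.6, "pedestrian": 0.4}

# Without context, a flat prior leaves the estimate uncertain:
flat = bayes_update({"vehicle": 0.5, "pedestrian": 0.5}, likelihood)

# Contextual knowledge (e.g. a map: the detection lies on a motorway)
# sharpens the same measurement considerably:
context = bayes_update({"vehicle": 0.9, "pedestrian": 0.1}, likelihood)

print(round(flat["vehicle"], 2), round(context["vehicle"], 2))  # 0.6 0.93
```

The same mechanism is what the lecture alludes to when it suggests that ethical contextual knowledge could be included: anything that can be expressed as a prior over hypotheses can enter the estimate in this auditable, explainable way, which is precisely what a pure deep learner does not offer.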

Certification of responsibility: Responsibility must be implemented as a design principle, because only free beings can bear responsibility and know the limits of knowledge. Ensuring the human in the loop is not unproblematic, because there is often no time to react manually to highly agile threats such as Lethal Autonomous Weapon Systems (LAWS). Therefore, we have to make high automation justifiable and ask ourselves which tasks can or cannot be delegated to the machine (autonomous weapons against machines vs. against humans). The IEEE P7000 standard (Model Process for Addressing Ethical Concerns during System Design) should definitely be mapped to FCAS.

Discussion on agenda item 7:

Photo of General Ansgar Rieks
General Ansgar Rieks
Deputy Chief of the GE Air Forces

Reference is made to the need for a new digital ethic that is legally designed and based on established legal principles. It is replied that virtues are also values and that the concept of values goes beyond the legal concept. It is pointed out that laws can, in a certain way, be seen as congealed values and that a society can ensure these values are lived by providing a solid legal basis. At the very least, something like “hard” values is needed.

On the subject of anti-LAWS systems, the question is raised whether LAWS are, or should be, part of FCAS planning, since from a security-policy point of view an arms race cannot be stopped. It is explained that a drone-safe airspace is a major topic for Germany; some kind of drone traffic police would have to be developed here, and when it comes down to it, only a hard kill would help. These thoughts can easily be generalised to other systems. In order to control threats, they need to be understood, so it would be obstructive to ban technologies across the board. In support of this view, it is argued that buzzwords should not restrict the discussion; mines, for example, are also autonomous weapons, and it depends on the application what ethical action means.

Two further remarks aim at the definition of terms. Discussions so far have shown that a bridge must be built between values and legislation through norm building. Furthermore, requirements should be defined in the form of guiding principles in order to provide the operator with a system that on the one hand functions predictably and on the other hand enables people to act in accordance with legal, mission-relevant and individual ethical requirements.

Concerning Anti-LAWS-AWS the question arises whether full autonomy can still be ethical and how experience would come into play as with programmed Rules of Engagement. It is argued that an assistance system always has to integrate contextual information and that it can never function detached from the context. Ethical rules could be mapped on the computer in the form of legally compliant rules of engagement, similar to room plans. The human being should only be supported by an assistance system in order to understand the situation faster and better.

Concerns are expressed about the concept of a rights-based definition of values, because international standards are difficult and there are no clear legislators. According to Georg Jellinek, law is an ethical minimum. It is replied that one has to ask oneself how a consensus on good and true values can be found. It is also noted that there is a global consensus on values but that the power and beauty of cultures lies in living them out differently. Here it would be advisable to work on objective criteria on the basis of which values can be prioritised.

As a comment on the subject of machine ethics, the history of philosophy before Kant is referenced where the concepts of intelligence and ratio were still distinguished. Intelligence is the capability to deal with things holistically (i.e. in their overall context), whereas ratio is a purely rational matter. Only the ratio exists in artificial form, whereas true intelligence is only available to humans. Here, one must be careful not to neglect human intelligence due to a confusion of concepts.

It is suggested that, based on the process model presented by Ms Spiekermann, the above considerations should be applied to FCAS in order to make the operationalisation more concrete.

Agenda item 8: The European dimension of the dialogue - opportunities and challenges (John Reyels)

Aspects of the European dialogue. FCAS is a Franco-German-Spanish project with a European dimension. If FCAS aims for practical relevance, it is important to bring the European partners on board. It would be a disaster if the aspects addressed in this group were not implemented. But what is the right time? One should weigh up the extent to which all partners first deliberate at a national level before entering into dialogue. Certainly, international exchange also adds complexity to this already complex issue but we must face it nevertheless. Airbus and the German Federal Ministry of Defence (BMVg) are already holding many talks, perhaps there is already some experience here. Franco-German exchange in particular lies at the heart of FCAS. An agreement between France and Germany always has a high influence on pan-European decisions as both countries cover a wide range of opinions due to their different attitudes.

What makes us different from France?

  • A different basic attitude towards technology. In contrast to France, Germany is very critical of technology, as the drone debate has shown. Furthermore, France pursues a very assertive AI communication policy: President Macron presented the French strategy “AI for Humanity” in 2018, with the aim of becoming one of the top five AI nations worldwide. A comparably clear positioning is missing on the German side.

  • A very different military culture. The French military continues to enjoy unbroken prestige; France is also a nuclear power and a permanent member of the UN Security Council (one of the so-called “P5”). The French show a willingness to intervene militarily worldwide and even see it as their obligation. AI is also intended to ensure the operational superiority of the French armed forces, for which they intend to provide €100 million between 2019 and 2025. At the same time, an ethics committee is being established, made up of military, economic and governmental representatives.

  • A different focus for civil society. The nuclear debate of the 1980s had a great influence in Germany, while in France nuclear weapons were seen as a protective shield during the Cold War. This leads to different approaches in the respective security policies.

What connects us with France?

  • A common tradition of values. France's strict separation of church and state makes the establishment of values in the defence context a “non-topic”. Nevertheless, French politics is value-based and an ethics committee with equal representation has already begun its work. Human control is one of the basic principles of the French armed forces; the Terminator will not parade on 14 July, according to Florence Parly.

  • A common legal tradition. Both countries have a tradition of codification, unlike case law in the Anglo-Saxon countries. Furthermore, they see themselves as a community of destiny in Europe, so there is an iron will to compromise. Many painful negotiations in the past have enormously increased the ability to compromise, also with regard to considerable differences of opinion.

Arguments for cooperation. You have to find good arguments to make the French partners understand the necessity and added value of a catalogue of values, ethics and law. A jointly developed Code of Conduct would give added security because it would contain destabilisation and reduce the potential for escalation. Furthermore, Compliance by Design could also have a global market value, which would increase export opportunities for France and Germany.

Communication. One should consider what kind of product to propose to France; guidelines would perhaps be more readily accepted than legally binding rules. Under no circumstances should Germany argue from its national problems and shift its public debate onto its partners; instead, the Franco-German similarities should be brought to the fore. However, the Franco-German engine is viewed with some suspicion in Europe, so Spain must not be run over with a finished product. It might even be advisable to seek trinational dialogue directly in order to ease the dynamics of discussions with France.

The global context. It would not be good for Europe to limit itself to itself. Perhaps it would be useful to establish a dialogue with other system developers, e.g. with the USA or the United Kingdom (regarding BAE Systems Tempest). However, this requires a sufficient global system of values; perhaps one could build on international law and transfer basic ideas to autonomous weapon systems. Global standards must be the goal here.

Discussion on agenda item 8:

All speakers in the following discussion clearly advocate the inclusion of France and Spain. Many different points are mentioned which are considered important in international cooperation.

  • At Airbus, a working group has been set up on its own initiative, in which a discussion has been taking place for some time at employee level in a Franco-German context. There is already a lot of interest in a discussion.

  • It is also noted that France is the leader in FCAS, so the French partners should be involved as soon as possible. However, Spain also wants a discussion at eye level and should therefore be integrated at the same time.

  • A central argument for an international discussion on ethics is the difference between France and Germany, which produces strong positive cross-fertilisation. It is stressed once again that France takes the ethics issue very seriously and feels excluded if it is included too late. As France thinks geopolitically, a European exchange is inevitable for them, and Ethical AI is a selling point for France.

  • In this respect, it is noted that coordinated action is needed to ensure compatibility with the French setup as France wants to be an ethical leader on a global scale. Here, however, one should insist on the heterogeneity of the German ethics discussion and not limit oneself to ministries only. Defining global standards is regarded as a completely different matter since cooperation with NATO alone is a major hurdle due to disagreements with the USA, not to mention the attitude of Russia or China.

  • It is conceded that Germany may display technical scepticism and modesty, but it is in no way inferior to France in terms of AI strategy. German AI providers in software architecture and research would certainly have an influence here, and Germany claims technological leadership in Industry 4.0, even worldwide. Reflection on a digital strategy is taking place in projects such as Gaia-X, which are already conceived at a European level and which France only joined later. It is also particularly important for Germany to counteract hyperscalers such as Google or Microsoft and to remain independent, for example in cloud infrastructure.

  • It should be remembered that the fronts between France and Germany in Geneva are quite hardened and regulatory intervention is needed here. There are military incentives for Control by Design and the exchange with France certainly provides positive input. Perhaps an operational definition of ethics would be more conceivable than a general one.

  • First, one would have to clarify what exactly should be integrated into an FCAS. A copy of human behaviour would be difficult to imagine because although humans make mistakes, they are able to make decisions based on emotions without background knowledge, which remains denied to the machine. In this respect, confidence-building measures between nations or population groups would be beneficial.

  • In order to find a consensus on values, one would have to sound out the extent of the intersection between German and French values. Furthermore, about 80% of the French ethics committee is made up of members of the Ministry of Defence and has so far been quite invisible to the outside world, while the Working Group on Responsibility for Technology also includes foundations and the wider public in the debate; it would be interesting to see how France reacts to this.

  • The question arises whether and how NATO's AI efforts should be taken into account in FCAS. Mr Koch intends to look into this matter.

  • It is noted that international agreement on ethical standards in FCAS will not be an easy exercise for France: although it is “only” a matter of rules for one system, such decisions are also material for a multilateral framework and therefore highly political. Ethics is a big issue in France, but the national and the international level pose two completely different questions. It is therefore necessary to discuss in advance what should be operationalised and how it should be packaged for France. Cooperation with NATO is also important because FCAS sends the signal that something is being done for the transatlantic alliance. The NATO framework also provides scope for common standards for autonomous weapon systems.

  • In conclusion, it is argued that it is perhaps too early for a global solution to the ethics debate; it would be better to take a critical component out of FCAS first and use it to draw up a list of values across countries. This would offer a simplification in the FCAS framework, which would be brought to the political level in a more mature form afterwards.

Agenda item 9: Summary and next steps (Florian Keisinger & Wolfgang Koch)

Photo of Florian Keisinger
Florian Keisinger
Airbus, Campaign FCAS
  • Minutes of the meeting will be distributed to all participants for feedback/approval.

  • The newspaper “Behördenspiegel” has offered to publish an 8,000-character article in its November issue. For this purpose, a joint text is to be distilled in the name of the group (without naming individuals) by the end of October.

  • Another action is defined as formalising the discussion on standards and finding ideas for the way ahead. These are to be presented in the next rounds and discussed iteratively in the group.

  • A concept for international exchange with France and Spain will be considered.

  • The next meeting is scheduled for spring 2021, and proposals for dates will be made soon.

As feedback on the event, it is proposed to give more room for discussion at the next meeting so that a better exchange of views can take place. It might be helpful to organise the meetings more monothematically and to focus on specific points and on securing results. It is replied that this meeting was a constitutive one, but that the next meetings should each be held under a central theme. A discussion paper shall be prepared for this purpose.