Following the establishment of the Airbus/Fraunhofer FCAS Forum, a group of Airbus Defence and Space engineers working on AI self-organised and developed a joint paper based on their views and experiences. The following text is a shortened version of a much longer paper, which the FCAS Forum group will use for further in-depth analysis and discussion, as well as for the development of an “ethics by design” methodology to be applied to FCAS.

White Paper: The Responsible Use of Artificial Intelligence in FCAS – An Initial Assessment

A group of Airbus Defence and Space engineers: Massimo Azzano, Sebastien Boria, Stephan Brunessaux, Bruno Carron, Alexis De Cacqueray, Sandra Gloeden, Florian Keisinger, Bernhard Krach, Soeren Mohrdieck

Executive summary

1 Setting the Scene and Context

1.1 What is FCAS, and why are we concerned with AI and autonomy?

[Figure: Overview of FCAS elements: Air, Cyber, Space, Maritime, Land]

1.2 Multi-Stakeholder Ethical Debate

[Figure: World map showing national positions on Lethal Autonomous Weapon Systems]

1.3 Legal Concerns

2 Focus on Ethics: The ALTAI Methodology

[Figure: Elements of trustworthy AI: Human Oversight, Technical Robustness, Data Privacy, Transparency, Diversity, Sustainability, Accountability]

3 Technical/Operational Domain

3.1 AI, Automation & Autonomy

3.2 The Military Perspective

3.3 AI Applications in FCAS

4 Use Cases in the FCAS Context

5 Example Application of the ALTAI Methodology to a Use Case

6 Next Steps