

Research unit
AIS
Project number
2020EDA01
Project title
AI-supported Decision Making in Military Affairs: Ethics, Technology, Law & Policy


Recorded texts


Keywords
(English)
artificial intelligence
Military Affairs
Legal, Ethical and Policy Challenges
IHL
Short description
(English)

 

  1. Research Agenda

 

Since 2018, the Directorate for International Law (DIL) has been working on the link between international law and AI. It has developed background research within the framework of the "Ressortforschung" project entitled "IDAG Künstliche Intelligenz: Datenverfügbarkeit und Datennutzung". The DIL intends to build on this foundation in collaboration with the competent services of the FDFA.

 

On 13 December 2019, on the basis of the report of the interdepartmental working group "Artificial Intelligence", the Federal Council mandated the FDFA (DIL), together with the departments concerned, to submit a report on "Artificial Intelligence and International Law" by the end of 2020. This report is intended to show how international rules arise in the field of AI, how these rules should be categorized and to what extent they create international law, and, if necessary, to propose measures concerning Switzerland's position. The report will contain a section on the law of armed conflict (jus in bello), also known as international humanitarian law (IHL). This creates a need to clarify questions relating to AI-supported decision-making in times of armed conflict and within the framework of IHL.

 

Advancements in the fields of autonomy, artificial intelligence (AI) and robotics will change the way wars are fought. Applications of these new and rapidly developing technologies have primarily been explored and discussed in relation to lethal autonomous weapons systems (LAWS). LAWS, however, only mark the tip of the iceberg of the transformative impact these technologies will have on armed conflicts. AI-supported and AI-augmented decision-making in military operations already exists today and is rapidly increasing across all military domains and theatres of conflict (land, water, air, outer space, cyberspace). Indeed, it is only with the help of AI that vast amounts of big data can be analysed and effectively filtered into real-time military decision-making. Detention centres, hospitals and other facilities relevant under IHL will increasingly rely on AI. This development carries great promise for strategic and operational decision-making, but it also carries significant risks.

 

In many instances, AI enables better-informed, more precise and targeted decision-making, as well as accelerated decision-making processes. At the same time, AI developers and programmers make decisions and choose trade-offs that will affect outcomes. Through programming, they embed ethical and legal choices within the technology that will be replicated manifold at the implementation stage. In this context, and with regard to military decision-making in particular, it is essential to ensure that AI-supported military decision-making processes are at all times in conformity with IHL and ethically sound.
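
To make this point concrete, here is a minimal sketch (in Python, with entirely invented data; it is illustrative only and not part of the project) of how a single parameter choice, the classification threshold of a hypothetical target-recognition model, embeds a trade-off between false positives and false negatives, two error types with very different legal and ethical weight under IHL:

```python
# Illustrative sketch only: all scores, labels and numbers are invented.
# A developer's choice of threshold fixes the balance between
# false positives (misidentifications) and false negatives (misses).

def confusion_rates(scores, labels, threshold):
    """Return (false_positive_rate, false_negative_rate) at a threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / max(negatives, 1), fn / max(positives, 1)

scores = [0.95, 0.80, 0.65, 0.40, 0.30, 0.10]  # model confidence values
labels = [1, 1, 0, 1, 0, 0]                    # ground truth (1 = positive)

for threshold in (0.5, 0.7, 0.9):
    fpr, fnr = confusion_rates(scores, labels, threshold)
    print(f"threshold={threshold:.1f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

Raising the threshold lowers the false-positive rate at the cost of more misses; where to set it is exactly the kind of embedded legal and ethical choice described above.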

 

While some of the general questions concerning AI-supported decision-making have already been identified (see e.g. the ICRC's Human-Centred Approach and DIL's work under the project "IDAG Künstliche Intelligenz: Datenverfügbarkeit und Datennutzung"), many of the risks and challenges regarding AI-supported decision-making in times of armed conflict and with respect to IHL-based decisions remain underexplored and in need of further research and discussion. Against this backdrop, a research project aiming to map the entire range of legal, ethical and policy challenges raised by AI-supported decision-making in relation to warfare is particularly timely and opportune.

 

The proposed research project will address the following questions, with a particular focus on IHL:

  1. How to ensure lawful and ethically sound human-machine interaction in relation to AI-supported decision-making, with regard to different decision-making situations (e.g. selection and identification of targets) with varying degrees of risk? What lessons can be learned from civilian applications of AI, e.g. in the health and automobile sectors? What particular challenges arise with regard to IHL when AI-supported decision-making is relied upon in the conduct of hostilities or with regard to detention issues?
  2. How to detect hidden biases, and how to prevent and avoid biased decision-making? (See the first sketch after this list.)
  3. How to prevent and avoid the escalation of tensions or conflict situations when relying on AI-supported decision-making? Which safeguards can be put in place in this regard?
  4. How to ensure transparency, reliability and accountability with regard to AI-supported decision-making? (See the second sketch after this list.)
  5. How will future developments in the realm of machine-learning affect the answers to these questions?
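
To illustrate what detecting hidden biases (question 2 above) can mean in practice, the following minimal sketch computes one simple disparity metric, the gap in positive-decision rates between two groups of cases. The data, group labels and metric are assumptions made for illustration; real audits would rely on richer, domain-specific measures:

```python
# Illustrative bias check: compare positive-decision rates across two
# groups of cases. All data and group labels are invented.
from collections import defaultdict

decisions = [  # (group, automated decision: 1 = flagged, 0 = not flagged)
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, flagged = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    flagged[group] += decision

rates = {g: flagged[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                        # {'A': 0.75, 'B': 0.25}
print(f"disparity gap: {gap:.2f}")  # a large gap warrants investigation
```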
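
Similarly, the transparency and accountability concerns in question 4 presuppose, at a minimum, that every AI-supported recommendation is recorded in an auditable form. The sketch below shows one hypothetical shape such a decision log could take; the field names are assumptions, not an existing standard:

```python
# Hypothetical audit record for an AI-supported recommendation.
# Field names are illustrative; no real standard or system is implied.
import hashlib
import json
from datetime import datetime, timezone

def log_recommendation(model_version, inputs, recommendation, confidence,
                       human_decision):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the evidence behind a decision can later be
        # verified without storing sensitive raw data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "recommendation": recommendation,
        "confidence": confidence,
        "human_decision": human_decision,  # records any human override
    }
    return json.dumps(record)

print(log_recommendation("recognizer-1.3", {"sensor_feed": "example"},
                         "flag", 0.87, "overridden"))
```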

 

The proposed research project aims to explore and answer these questions through a workshop and a report. The idea is to vest policy-makers and relevant stakeholders with a better knowledge base regarding AI-supported decision-making and to enhance their understanding of the various complexities this development entails. Concretely, the project will map the various applications of AI-supported decision-making in military affairs and, on this basis, explore the legal, ethical and policy challenges this brings about. This will feed reflections on how to apply and interpret IHL in this context.
Project objectives
(English)

A workshop will be held in Geneva in summer or fall 2020. Case studies will be distributed to participants in advance of the workshop to facilitate, coordinate and direct discussions and to ensure high-quality results.

 

A report/study (approx. 20 pages) on the implications and the legal, ethical and policy challenges of AI-supported decision-making in military affairs will be written on the basis of the workshop outcomes and published online in late 2020 / early 2021. It will be launched at an event at the Geneva Academy in late 2020 / early 2021.