AI and autonomous weapons systems: the time for action is now
15 May 2024

Last month over a thousand participants from 140 countries joined the 2024 Vienna Conference, ‘Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation’, hosted by the Austrian Federal Ministry for European and International Affairs. With global attention turning towards the growing use (and misuse) of artificial intelligence (AI) in military technologies, Saferworld’s Charlie Linney and Xiaomin Tang take a look at the challenges lying ahead for the international community as it moves towards regulation.
While we may be some way off from seeing ‘killer robots’ on the battlefield – lethal and fully autonomous weapons systems (AWS) with the capacity to identify, select and target humans – there is increasing autonomy in military equipment and operations. From claims of drones equipped with AI targeting systems in Ukraine to the Israel Defense Forces developing the ‘Lavender’ and ‘Gospel’ AI systems to identify targets in Gaza, the use of AI in conflict is already a reality. Without procedures, norms or legally binding rules to regulate new technologies, there is nothing to constrain the development, proliferation and use of more advanced systems such as AWS. We will continue to see AI used in conflicts around the world and, given the rapid pace of technological development, we are likely to start seeing ever-more advanced and potentially more lethal weapons systems.
An ‘Oppenheimer’ moment?
Alexander Schallenberg, Austrian Minister for Foreign Affairs, opened the conference with a keynote speech in which he described the increasing risks of AI as “this generation’s Oppenheimer moment”. The idea that the risks of AI in the military domain are on a par with the development and subsequent use of the atomic bomb in the 1940s – which resulted in the deaths of between 129,000 and 226,000 people, mostly civilians, and remains the only use of nuclear weapons in armed conflict – is a stark reminder of the potential dangers of such systems. Technological developments in AWS are progressing rapidly, and this technology is used in conflicts around the world – yet at present no multilateral regulation exists to control their use or transfer, or to protect civilians.
Last year the UN General Assembly (UNGA) adopted the first-ever Resolution on Lethal Autonomous Weapons Systems (A/RES/78/241), with the majority of UN Member States agreeing on the urgent need to address the new challenges these weapons pose to humanity. In the New Agenda for Peace, the UN Secretary General put forward a deadline of 2026 for the regulation of fully autonomous weapons systems, including a prohibition on lethal AWS. Such commitments at the highest level are essential, but now it is for states and their populations to harness the momentum from the UNGA Resolution and push for a regulatory framework.
Towards new regulation?
The call for international regulation of AWS and the use of AI on the battlefield is not new: industry, academics, think tanks, civil society and some states have been sounding the alarm bell for years. However, recent technological developments and the use of such systems in conflicts over the past year show the need to act – and to act now.
Over the past year, states have gathered at various regional conferences to build momentum and discuss concerns about the development and use of AWS. These included the Caribbean Community (CARICOM) regional conference in September 2023, where Caribbean states adopted the CARICOM Declaration on Autonomous Weapons Systems, a regional meeting bringing together Indo-Pacific voices in the Philippines in December 2023, and an Economic Community of West African States (ECOWAS) conference convened by Sierra Leone in April 2024.
This tells us that states recognise the risks of AWS and are willing to make political commitments to control their development and use. The challenge now is to give these high-level commitments legal force – and agree on global regulation. This really will be a challenge: one key takeaway from the Vienna conference was that despite widespread agreement on the risks and the need for regulation, there are radically divergent views on what form this might take in practice.
Some points of contention include:
- The definition of AWS: AI and some degree of autonomy are already present in many of the world’s militaries – whether through AI-integrated intelligence and surveillance systems or through semi-autonomous armed drones. For any regulation to be adopted, states would need to agree a threshold beyond which autonomy becomes unacceptable. We often hear calls for ‘meaningful human control’ or ‘context appropriate human involvement’ in new weapons systems, but with increasingly complex platforms, and unimagined future possibilities, this distinction is not always evident. The growing technical complexity also complicates efforts to even understand how weapons systems work, their application and potential outcomes, and raises challenges in ensuring human responsibility and accountability.
- The form that new regulation could take: An agreement on how regulation would work in practice is still a way off. Some states advocate for rules on how such systems should be used and transferred, some argue for strict prohibitions on certain types of AWS, while many want a ‘two-tier’ approach that would combine both prohibition and regulation.
- The need for a new global instrument: There is broad consensus among the United Nations, the ICRC, and almost all states that have made a political commitment to regulate AWS that international humanitarian law and international human rights law apply to the development and use of AWS. However, whether existing international law is sufficient or new technology-specific rules are necessary to effectively control AWS is still up for debate.
- Which forum to use: To date, debates between states on AWS have been concentrated in the Convention on Certain Conventional Weapons (CCW), where governments first convened a meeting on the topic of fully AWS in 2013. In 2017, a Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) was established under the CCW framework, with a mandate to advance common understanding on the issue running until at least 2025. However, not all UN member states are party to the CCW and its consensus-based decision-making procedure has proved a major obstacle to progress. At the November 2023 Meeting of High Contracting Parties to the CCW, participating states agreed to ‘further consider’ measures to address AWS but fell short of discussing legal or regulatory options, thereby failing to make any substantive progress towards the UN Secretary General’s call for international regulation. Recent regional conferences on AWS have called for states to consider pursuing discussions on regulation through other forums, such as the UNGA. The idea that international treaties and agreements can be reached with broad, but not unanimous, support is something that has enabled progress in other areas – such as the adoption of the Arms Trade Treaty.
The way forward
A growing number of countries and regions have developed regimes to regulate the design, deployment and use of AI in the civilian domain. The UN is also working to establish a strong global AI governance framework. It is crucial that discussions to address the challenges of AWS do not lag behind.
Although there are many tough questions for states to consider over the coming months, the UN Secretary General’s target of agreeing international regulation on AWS by 2026 gives us a deadline to aim for. All states, from those at the forefront of developing these technologies to those most at risk from their possible deployment, must together confront these contentious issues head on through inclusive, participatory and practical discussions on the humanitarian, legal, ethical, security and technological dimensions of AWS.
The UNGA Resolution on Lethal Autonomous Weapons Systems tasks the Secretary General with producing a report by the end of this year to reflect the full range of views from states, international and regional organisations, the ICRC, civil society, the scientific community and industry. Key to future discussions will be meaningful participation by non-Western states and civil society. At global debates such as the 2024 Vienna Conference, we have already heard how the use of military AI and AWS is likely to be disproportionately concentrated in the Global South, particularly in conflict-affected contexts, and that biased datasets, algorithms and systems distinguishing certain groups of people from others could disproportionately affect already marginalised groups and lead to grievous error – yet the voices of those most affected are relatively unheard in international settings. This has to change.
Civil society organisations have already played an important role in driving international discussions on the regulation of military AI and AWS, by communicating the concerns of citizens around the world and by developing a deep understanding of the technological developments and the risks they might pose to humanity as a whole. It is crucial that such organisations continue to feature in high-level debates.
As participants at the Vienna conference highlighted, AI is already transforming the way militaries operate. But if we want to protect civilians from harm, we need legally binding regulations – and we need them fast. If we don’t act now, the consequences could be catastrophic.
Image reproduced courtesy of NATO.