Autonomous Weapon Systems and the Changing Face of International Humanitarian Law

Autonomous Weapon Systems based on Artificial Intelligence

As humankind continues to make unprecedented strides in technology in the 21st century, and with the growing proliferation of Artificial Intelligence (AI) in armed conflict situations, International Humanitarian Law (IHL) faces an urgent need to grapple with the applicability of the rules of warfare in relation to the use of Autonomous Weapon Systems (AWS) based on AI.

Before delving further, some definitional aspects need to be clarified. AI refers to machines designed with specific algorithms that give them the capability to collect information, analyse it and make decisions without requiring human intervention. In a way, AI seeks to mimic human intelligence through complex processes that include machine learning and deep learning.

There are various definitions of AWS, and as yet no globally agreed consensus. In simple terms, AWS are machines that, once deployed, can operate on their own through the use of AI without further human supervision, and can thereafter collect data, analyse it, choose targets and apply force, lethal or otherwise.

Landmines and defensive missile systems also operate with some degree of autonomy. However, they should not be labelled as AWS: they require very specific and exacting trigger situations within which they must operate, they do not have absolute autonomy in choosing a target, and they are not based on AI. Machines, including drones, that only use AI to collect and relay information but cannot deploy weapons on their own should likewise not be labelled as AWS.

Multiple terms are used globally for AWS in various contexts, such as ‘killer robots’, ‘fully autonomous weapons’, ‘lethal autonomous weapon systems’ and ‘lethal autonomous robotics’. Arguably, streamlined and consistent terminology would be more helpful for the global community to relate to, as it helps create momentum in international advocacy. It is hoped that a consensus can be reached in that regard amongst academics and relevant stakeholders.

Issues with using AWS in armed conflicts

The deployment of AWS is deeply concerning, not least because there is a distinct possibility that the decision-making of algorithm-based machines may simply be incorrect: a particular situation may involve nuances that a human is better placed to take into account than a machine.

For example: a drone equipped with explosive munitions is deployed in a conflict zone to destroy an enemy military installation situated directly adjacent to a residential building occupied by civilians. Dropping a bomb on the installation thus risks killing civilians. If the drone is operated by AI, the concern is how the AI will decide whether the bomb should be dropped. Can the algorithm be designed to strike the delicate balance that the principles of proportionality and military necessity demand in relation to the military advantage sought? Will there be a risk percentage within which the AWS can operate? Or, for incidental loss of life to be considered excessive, will there be a mathematical threshold based on rudimentary consequentialist moral reasoning?
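
To make the concern concrete, consider a minimal, purely hypothetical sketch of what such a ‘mathematical threshold’ might look like in code. Every name, score and number below is an assumption invented for illustration; no real targeting system is claimed to work this way.

```python
# Hypothetical sketch of a crude, consequentialist proportionality check.
# All names and numbers here are illustrative assumptions, not a real system.

from dataclasses import dataclass

@dataclass
class StrikeAssessment:
    expected_civilian_casualties: float  # estimated incidental loss of life
    military_advantage_score: float      # some scalar "value" of the target

def naive_proportionality_check(a: StrikeAssessment,
                                threshold: float = 0.5) -> bool:
    """Return True if the strike is 'permitted' under a crude ratio test.

    Reducing proportionality to a single ratio is exactly the kind of
    rudimentary consequentialist reasoning questioned above: it cannot
    capture context, good faith, or the nuances a human commander weighs.
    """
    if a.military_advantage_score <= 0:
        return False  # no military advantage, so the attack cannot be justified
    ratio = a.expected_civilian_casualties / a.military_advantage_score
    return ratio <= threshold

# Example: the machine "approves" a strike purely on arithmetic.
assessment = StrikeAssessment(expected_civilian_casualties=2.0,
                              military_advantage_score=10.0)
print(naive_proportionality_check(assessment))  # True -- yet is it lawful?
```

The sketch shows the problem rather than a solution: whatever threshold is chosen, the legality of an attack is reduced to arithmetic over contestable estimates.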

Furthermore, how far can AWS be relied upon even to distinguish civilians from combatants? Combatants may disguise themselves in civilian clothes, and children growing up in conflict zones are sometimes seen playing with toy guns. A machine may fail to uphold the principle of distinction in such scenarios.
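
The same point can be illustrated with a toy sketch of a confidence-based ‘distinction’ classifier. The inputs, labels and confidence scores are entirely hypothetical and deliberately simplistic; the point is that any such scheme reduces distinction to a threshold over uncertain signals.

```python
# Hypothetical sketch of why a confidence-thresholded classifier is a poor
# proxy for the IHL principle of distinction. Labels, scores and logic are
# invented for illustration only.

def classify_person(carrying_weapon_like_object: bool,
                    wearing_uniform: bool) -> tuple[str, float]:
    """Toy 'distinction' model returning (label, confidence)."""
    if carrying_weapon_like_object and wearing_uniform:
        return "combatant", 0.95
    if carrying_weapon_like_object:
        return "combatant", 0.70  # could be a child with a toy gun
    if wearing_uniform:
        return "combatant", 0.60  # uniforms can be stolen or counterfeit
    return "civilian", 0.80       # could be a combatant in civilian clothes

# A child playing with a toy gun is classified as a combatant with 0.70
# confidence; whether any threshold is "safe enough" is exactly the problem.
label, confidence = classify_person(carrying_weapon_like_object=True,
                                    wearing_uniform=False)
print(label, confidence)  # combatant 0.7
```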

In an article at Just Security it was acknowledged that “the mere existence of the guidance, processes, and technologies to avoid civilian harm […] does not, in itself, solve the tragedy of collateral damage. They have to be employed in good faith […].” It is doubtful that there can be a mathematical algorithm for good faith.

Ultimately, if an AWS makes a ‘mistake’ and there was no possibility for a human operator to intervene and suspend the attack, the machine itself cannot be held accountable through individual criminal responsibility under International Criminal Law.

These are some of the many pressing issues surrounding AWS that are yet to be resolved.

The inevitability of the deployment of AWS

In its June 2019 report on AI the ICRC stated that it is “not opposed to new technologies of warfare per se” provided that as a minimum requirement it “must be used, and must be capable of being used, in compliance with existing rules” of IHL and that it is “essential to preserve human control over tasks and human judgement in decisions that may have serious consequences for people’s lives in armed conflict”.

In relation to AWS, however, the ICRC has expressed serious concerns. In addition, in March 2019 the UN Secretary-General stated that AWS are “politically unacceptable, morally repugnant and should be prohibited by international law”.

Regardless of such concerns and opposition, arguments in favour of AI and AWS are also being proffered. Some claim that AWS may lend greater precision to attacks, minimise collateral damage, increase efficiency and save costs. In September 2019 the Director of the US Joint Artificial Intelligence Center stated that AI would give the US battlefield advantages and that the technology would begin rolling out in warzones. Militaries of other countries, such as Israel, Russia and China, are also developing their AI and AWS capabilities.

In the 2019 report on AI, the ICRC stated that “military applications of new and emerging technologies are not inevitable”, but rather “choices made by States”. It is submitted, however, that advanced military States will continue to develop the technology at an even greater pace in the near future and begin testing the use of AI and AWS in conflict situations, and that the deployment of AWS is unlikely to be stopped.

The immediate need for regulating the use of AWS

Advanced militaries across the world are rapidly developing their AI and AWS capabilities without internationally agreed safeguards. Meanwhile, although research is being done on how IHL might be updated and new rules promulgated to reflect the challenges posed by AWS, progress has been relatively slow. We do not yet know exactly how AWS may affect the on-ground situation in armed conflicts, given the paucity of empirical evidence at hand. The UN reported in 2021, citing a confidential source, that the Turkish-manufactured STM Kargu-2, which the manufacturer itself considers a form of AWS, was deployed in Libya, but the report unfortunately lacks detailed information. It has also been reported that Azerbaijan used AWS in its conflict with Armenia in the Nagorno-Karabakh region, but factual details are very scarce. Under the circumstances, the international community will not benefit from a reactive approach of waiting for evidence of the implications of AWS before seeking to regulate them further. A more proactive approach is required.

It should be noted that militaries across the world do not expressly deny that the use of AWS must conform to IHL. However, despite their best efforts, and even best intentions, AWS that are able to learn by themselves and change over time may act on the battlefield in ways that violate existing rules of IHL. Furthermore, there is a massive risk that AWS may end up in the hands of non-state actors. Absent adequate safeguards, the risks are far too great.

Therefore it is prudent that concerned organisations, including the UN and global civil society, step up immediately and with greater fervour to ensure that, whilst AI and AWS are still at this relatively nascent stage of development, the international community agrees to a set of rules within which AWS can be developed and deployed. Otherwise the technology will jump ahead whilst the law lags behind, and once a dangerously uncontrollable weapon is developed, the situation may well be irreversible. When nuclear weapons were first developed, the world failed to stop their proliferation. We should not make the same mistake again.

Enforcing human control in the deployment of AWS

The final decision as to whether or not a human should be attacked in an armed conflict should lie with another human, and should not be delegated to a machine. Indeed, it is an affront to human dignity for a human being to be killed by the decision of a machine, which “raises fundamental ethical concerns”, as the ICRC expressed in its May 2021 position paper on AWS. The ICRC also asserts that “human control” is essential for AWS to comply with IHL. The US Department of Defense’s Directive on AWS likewise states that AWS shall be designed to ensure “appropriate levels of human judgment over the use of force”. Apart from the ethical dimension, human control is also required to ensure accountability for potential violations of IHL. It is therefore imperative that an AWS should never be fully autonomous and that human control is ensured.

Notably, there are strong arguments that AI cannot necessarily be designed with an algorithm capable of taking into account cultural sensitivities and on-the-ground realities, making context-based decisions, and reliably identifying who is a civilian and who is not. Consequently, Article 51(4) of Additional Protocol I can be interpreted to suggest that, since an AWS without human control has the potential to strike military objectives and civilians or civilian objects without distinction, it is an inherently indiscriminate weapon. Arguably, such a means or method of combat can thus be prohibited under IHL.

Furthermore, the Convention on Certain Conventional Weapons already restricts the use of certain weapons. Considering the growing use of AWS, it could be updated, or an additional Protocol negotiated and adopted, to contain express provisions that AWS always require human supervision before they can attack another human.

A challenge to humanity and the need for global consensus

At this juncture of human civilisation, human-made machines may soon reach a level of autonomy at which they can no longer be controlled by humans. AI is a very powerful tool, and the technology is already at a very advanced stage: AI can process enormous amounts of data and teach itself to perform extremely complex analyses in milliseconds to a very high level of accuracy. A quick look at OpenAI’s ChatGPT and DALL·E 2 will show the reader how advanced, and potentially even frightening, current AI technology is. Therefore, as explained above, it is imperative that globally agreed standards are put in place for the development and use of AWS.

Whilst a complete ban on AWS is perhaps ideal, it must be acknowledged that, realistically, they are here to stay. In developing AWS we must therefore ensure, at the very least, that human supervision and control supersede the autonomy of such machines. Consequently, any AWS should have only limited autonomy, and a fully autonomous weapon that can operate without human control should be declared illegal under international law. If the term ‘AWS’ then seems impractical, because the presence of human control means the system is no longer truly autonomous, it is suggested that it be called a ‘Controlled-AWS’ (CAWS). In practice, a CAWS could identify targets and make suggestions, with humans making the final decision whether or not to launch an attack, as sketched below.
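
As a purely illustrative sketch of the CAWS idea, the control flow below keeps the final attack decision with a human operator. The function names and structure are assumptions invented for this article, not a description of any real system.

```python
# Minimal sketch of the human-in-the-loop control flow proposed for a
# 'Controlled-AWS' (CAWS): the machine may identify and suggest targets,
# but only an explicit human decision can authorise force. All names here
# are hypothetical.

from dataclasses import dataclass

@dataclass
class TargetSuggestion:
    target_id: str
    rationale: str  # machine-generated explanation for the human reviewer

def identify_targets(sensor_data: list[str]) -> list[TargetSuggestion]:
    """Autonomous part: analyse data and *suggest* targets only."""
    return [TargetSuggestion(target_id=d, rationale="matched target profile")
            for d in sensor_data]

def human_authorises(suggestion: TargetSuggestion) -> bool:
    """The human decision point. The machine can never bypass this step."""
    answer = input(f"Engage {suggestion.target_id}? "
                   f"({suggestion.rationale}) [y/N] ")
    return answer.strip().lower() == "y"

def caws_engagement_loop(sensor_data: list[str]) -> None:
    for suggestion in identify_targets(sensor_data):
        if human_authorises(suggestion):
            print(f"Engagement of {suggestion.target_id} authorised by human.")
        else:
            print(f"Engagement of {suggestion.target_id} withheld.")

# caws_engagement_loop(["installation-7"])  # example invocation
```

The design choice the sketch embodies is simply that no code path from target identification to engagement exists without passing through the human decision point.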

In the end, it is hoped that a fine balance between dogmatism and pragmatism can be reached by the global community, and concrete measures are put in place so that humans remain at the helm of civilisation, and not machines.
