Source: Venturebeat.com

Fareeha Shamim 

Considered by many a revolution in warfare, the emergence of new technologies and states' incorporation of artificial intelligence (AI) into their militaries are transforming the dynamics and the discourse surrounding the future of warfare. Advances in AI and its application in the military domain have led to the development of autonomous weapons that can survey their surroundings, independently identify potential enemy targets, and choose to attack those targets on the basis of algorithms, without human involvement. While fully autonomous weapons do not yet exist, military and robotics experts warn that the pace at which AI is being integrated into the military signals that Lethal Autonomous Weapon Systems (LAWS) could be developed within the next 20 to 30 years, with machines replacing humans on the battlefield. The research, or the lack thereof, on new technologies that will erode human involvement in war raises serious concerns about the future of warfare, the moral implications of increasing autonomy in weapon systems, and the ethical questions surrounding violations of international humanitarian law (IHL) that the weaponisation of autonomous technology may entail. The development of fully autonomous weapons, or "killer robots", also challenges our conventional understanding of the principles of jus in bello, the law that governs the conduct of warfare.

These rapid advances in the field of AI have set off a new arms race, with countries such as China, Russia, and the U.S. taking the lead in incorporating AI into their militaries, and others following. While countries evaluate the development and deployment of autonomous weapons through the prism of their security needs, ethical concerns are seldom a point of consideration. Despite the dangerous moral and legal implications of using autonomous weapons, these concerns have often been overlooked in the lure of national security.

As far as the ethics of warfare are concerned, the notions of responsibility, legitimacy, and accountability are of key significance. During a war, the taking of human life is considered an act of high moral significance: one that not only requires valid justification but also holds the agent responsible and accountable for their actions. The laws of war prohibit certain actions in armed conflict, and as long as humans remain in control of weapons, they can be held accountable under IHL if they commit violations. The development and deployment of LAWS, however, poses a serious challenge: when a machine is empowered to make life-and-death decisions, questions arise about the legitimacy of its actions, accountability for any violations, and the chain of responsibility. These questions are particularly important with regard to two pillars of IHL, the principles of distinction and proportionality in the use of force. The principle of distinction requires militaries to discriminate between military personnel and civilians in war and to spare civilians from harm, while the principle of proportionality requires warring parties to apply no more force than is needed to achieve the intended objective, sparing civilians and civilian property from unwarranted collateral damage.

The complexity of war places a soldier in a constant moral struggle, confronting him with situations that demand quick decisions and put his morality to the test. A soldier is held responsible if he fails to adhere to the laws of war, but these principles pose a particular challenge where fully autonomous weapon systems are used, because LAWS cannot grasp the nuances of war and therefore risk failing to adhere to the protocols of IHL. While opinions differ over whether autonomous weapon systems can be equipped with algorithms capable of differentiating between targets so as to fulfil the laws of war, it is imperative to consider that human beings possess an innate ability to understand the nuances of the varied situations that arise in war, a quality that robots and autonomous weapons lack. Since the principles of jus in bello are highly context-dependent, even if it is assumed that an autonomous weapon could be given the ability to discriminate between combatants and non-combatants through an 'ethical governor' embedded in the weapon itself, the deeper challenge of understanding the intricacies of context-dependent wartime situations would remain. Furthermore, responsibility for deaths caused during a war is an important element of the ethics of war, and the problem with fully autonomous weapons is the ambiguity over assigning responsibility and accountability for deaths caused by robots. Additionally, according to Yale University Professor Paul Kahn, the use of such autonomous weaponry is ethically impermissible where it leaves less technologically advanced opponents no legitimate recourse; he argues that when the asymmetry between opposing forces is so large that one party faces little or no risk, war becomes immoral.

According to international law, only humans possess the moral capacity to justify taking another human's life, and machines can never be vested with that power. The Hague Convention (IV) of 1907 requires combatants "to be commanded by a person", while the Martens Clause, a longstanding rule of IHL first set out in the Hague Convention of 1899 and also inscribed in Additional Protocol I of the Geneva Conventions, specifically demands the application of "the principle of humanity" in armed conflict: even when not covered by other laws and treaties, civilians and combatants "remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of human conscience." In the case of LAWS, then, removing humans from life-and-death decision-making contradicts the principles of humanity.

Since autonomous weapons are capable of committing fatal errors even when programmed to adhere to the laws of war, taking humans out of the loop also risks taking humanity out of the loop. While proponents of autonomous weapons argue that humans will always retain some level of supervision over decisions to use lethal force, the current revolution in weaponry nurtures the fear that autonomous weapons will eventually be able to make such choices in war on their own, limiting the role of humans on the battlefield both physically and cognitively.

According to a U.S. Defense Department directive of November 2012, "autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." The speed and precision of these weapons, however, create temptations for countries to remove soldiers from the decision-making process. A compelling motivation for countries to develop such weapons is the reduction in human cost and vulnerability, but these considerations are not the sole catalyst: ambitions to incorporate such weapons into the military also stem from the perceived determination of competitors, leading to an AI arms race. The goal of gaining a technological advantage over rival countries often leads to disproportionate and destabilising arms buildups. In the case of autonomous weapons, this could have catastrophic consequences as countries invest heavily in machines with ever greater intelligence and decision-making authority.

While the regional implications of such advances lie some way in the future, since neither India nor Pakistan yet possesses fully autonomous weapons, it is important to consider the impact that the development and deployment of autonomous weapons would have on Pakistan-India relations, because recent actions and statements from India indicate a plan to deploy remote-controlled robot weapons in Kashmir that it claims will "target terrorists." As part of its strategy to deepen its commitment to AI, India has also set up a multi-stakeholder task force on the strategic implementation of AI, which includes members from the government. The possibility of India developing fully autonomous weapons therefore cannot be ruled out; such a move, however, would lower the threshold of conflict between India and Pakistan over Kashmir, because a country that possesses autonomous weapons can deploy them without risking any human cost, making war inevitable. An Indian plan to develop and deploy autonomous weapon systems in Indian-occupied Kashmir (IOK) would not only increase the probability of escalation and the outbreak of war but also pose a serious challenge for Pakistan in balancing its stance on LAWS. In the regional context, the foreseeable development of such weapons by the Indian military could trigger a regional arms race, disrupting strategic stability.

Since it first became evident that AI could lead to the deployment of increasingly autonomous weapon systems, a group of governmental experts has been convened by the states parties to the 1980 Convention on Certain Conventional Weapons (CCW), a treaty that prohibits the use of particular types of weapons deemed to cause unnecessary suffering to combatants or to harm civilians indiscriminately, to assess the dangers posed by fully autonomous weapon systems and to consider possible control mechanisms. More emphasis, however, is required on the adoption of a legally binding international ban, in the form of a new CCW protocol, on the development and use of fully autonomous weapon systems. The only way to avoid otherwise inevitable violations of IHL is a total ban on fully autonomous weapons: even if human control over weapons of war were upheld through a non-binding code of conduct requiring human responsibility for fully autonomous weapon systems, it would still fail to halt the arms race in such systems. Although very few autonomous robotic weapons are in combat use today, many countries are developing machines with high degrees of autonomy. The international community must call for a comprehensive evaluation of the use of such weapons through the lens of international ethics before fully autonomous weapon systems become widely deployed and trigger catastrophic escalation. There needs to be a concentrated focus on the ethical dimensions of developing and deploying fully autonomous weapon systems, to ensure that morality and humanity retain their significance in the ethics of warfare.

Fareeha Shamim is a Research Associate at the Strategic Studies Institute (SSII), Islamabad.