Muhammad Sharreh Qazi
“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.” Elon Musk
The end of World War II was an immaculate showcase of how far a state is willing to go to prove its technological superiority within the liminal space of a grand war. Both World Wars were a gruesome display of how wars showcase not only warfighting capabilities but also a state's ability to sustain mass production of a diverse array of equipment. Throughout the Cold War, a similar trajectory was followed, and both blocs dedicated almost all of their scientific potential to proving their ability to industrially mass-produce conflict. Come the Age of Information, the game begins to change: states no longer require millions of soldiers accompanied by thousands of tanks and artillery pieces and billions of bullets; a handful of experts and the World Wide Web suffice. The schematics and semantics of warfare begin to change, and modern warfare quickly outdates traditional canons of combat. Cyberspace turns maneuvers like data manipulation, identity theft, hacking, and what can be termed 'invisible war' into a reality; science fiction outwits the science of war. Some argue that cyberspace cannot be termed a new dimension of warfare, while others postulate it to be a new front. In this intellectual standoff, however, one question emerges: if machines can outwit strategists and data can outflank weapons, what would be the outlook of future wars? This question plunges strategic thinking into a limbo of arguments and debates that have yet to arrive at a conclusive answer.
The fog of war has always remained a pertinent feature of all military dispositions, and warfare has progressed without retiring this concept. Artificial Intelligence and its associated architecture upgrade the 'fog' to sheer invisibility, and this upsets military-industrial complexes around the globe. The ability of a state to showcase and parade both the quality and quantity of its military potential attracts consumers, which, in turn, keeps the armed forces suitably augmented; the larger one's military-industrial complex, the more 'hard power' it can churn out and the more 'soft power' it can hope to generate as a byproduct. America's lengthy Afghanistan campaign and its aggressive Iraq campaign, Russia's flash entry into Syria, and the continuity of conflict in South Asia and the Middle East using classical tactics and warfighting maneuvers are all examples of how war on an industrial scale has not yet breathed its last.
Satellite constellations, the militarization and weaponization of space, missiles and missile defense systems, aircraft carriers, and precision guidance are some of the achievements of this thinking, but it all amounts to naught when a bug, a virus, or the manipulation of leaked conversations throws a spanner in the works. UAVs, UGVs, and cyberspace make battle formations irrelevant just as tanks and blitzkrieg outclassed trench warfare. The time and stretch of war, and its sustainability, can all grind to a halt under mere hours' or even minutes' worth of cyber interference, rendering the whole exercise redundant. A preview of this dynamic has already played out.
AI and its associated framework are not sustainable under government control alone and require public-private partnerships. The sustainability of this infrastructure is evidenced by the fact that many states have started revisiting how they define national security and the role of private entities in assisting government agencies on that front. Slowly but surely, the private sector is outpacing the public sector both in incubation and in investment, and imagining war waged with such apparatus seems a somewhat 'privatized' endeavor. Not only does this severely dent military-industrial complexes in terms of their financial consumption, but it also makes their operational failures more pronounced. The impact of this is what the above-quoted statement posits: when conventional warfighting carries with it elements of ascertainable liability, can a similar liability be affixed to autonomous unmanned systems controlled by code? Even as AI transcends both the laws of war and warfare itself, it also commercializes conflict and invalidates, to some extent, the national interest of a state pursuing such a course.
The privatization of conflict is not a new agenda under scrutiny: a tinge of it was witnessed during colonialism, specifically in the case of the East India Company, but states then maintained greater centralization of authority. Industrialized war, or war on an industrial scale, can be regulated and fine-tuned to maintain an equilibrium that suits adversaries by preserving the status quo. This is done by regulating the flow of information to one another without tipping the scales, but for AI, such actions would be counterintuitive and thus violative of its directives. AI would aim for absolute victory without stopgaps and would readjust only to its designer's choices which, if privately administered, might even hurt national interest.
Between an industrial-scale war that spreads mass turbulence and a commercial-scale war that spreads mass confusion, the entire concept of jus ad bellum stands revisited. Choosing one over the other places decision-making in stasis. Prolonged wars with continued meltdowns or privatized wars with uncontrollable systems are both unstable equations for national interest, the canonized idea of jus bellum justum notwithstanding. The future of war is a tragedy of decision-making where one choice of mindset not only corrupts the other but aims to work against it. AI checkmates conventional military-industrial complexes and aims to abolish the human factor. Military-industrial complexes opt for regulated use of AI, which defeats its very purpose of minimizing the elasticity and maximizing the pace of a war. Future wars thus become caricatures of confusion and misapplication of both, causing decision-makers to opt for indecision. Iran's inability to react to Stuxnet and America's fear of a possible cyber-reprisal are evidence enough that such maneuvers have yet to be defined. War, whether commercialized or industrialized, is a classical paradox, as it aims for regulation post-war and unbridled execution mid-war. The future of war and warfare, in such a hypothesis, will be determined not by national interest but by a quid pro quo of technological competition among rationally unreasonable contestants. Have we been at such crossroads before? Think of the Cold War and decide what, and who, aims to lead future wars.
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Stephen Hawking
Muhammad Sharreh Qazi is a defense, security, and foreign policy analyst, and currently lectures at the Department of Political Science, University of the Punjab, Lahore.