Abstract
This paper analyzes the excessive epistemic narrowing of debate about lethal autonomous weapon systems (LAWS), and specifically the concept of meaningful human control, which has emerged as central to regulatory debates in both the scholarly literature and policy fora. Reviewing work drawing on international relations, security studies, international law and ethics, and technology policy, I argue that all share a common epistemological position. This position draws on a philosophical and analytical tradition that is Western and modernist, and places at the center of debates over controlling LAWS a “meaningful human” who reflects archetypes associated with the Western, rational, white male. This epistemological location, I argue, excludes perspectives relevant to the communities most likely to experience LAWS, both because they live in areas where deployment is most likely and because they have the greatest experience of the effects of key LAWS precursors, such as unmanned aerial vehicles. Drawing on insights from decolonial approaches, I establish a research agenda that challenges this epistemological closure and seeks to relocate debates about meaningful human control over LAWS in research that makes space for far more diverse perspectives on a crucial issue that may shape humankind's common future.
Introduction
This paper argues that present debate over meaningful human control (MHC) over lethal autonomous weapon systems (LAWS) occurs within a remarkably narrow epistemological space.1 By considering key accounts from across international relations (IR) and security studies, international law and ethics, and engineering and technology policy, I set out how unquestioned epistemological assumptions produce important exclusions from the debate about MHC and LAWS, and look to set an agenda for future research that can challenge this closure.
Specifically, I use a concept of “meaningful human” to critique current uses of MHC, showing that the “meaningful human” in MHC as currently understood reflects a specific account of humanity rooted in a Western tradition of enquiry and analysis that places an idealized, rational, rights-holding, masculine individual at its core. Forms of knowledge outside this tradition, and modes of analysis that do not accord with this idealization, risk epistemological marginalization, dismissed as “invalid knowledge” in debates about LAWS and MHC. This epistemological closure is present in academic and policy analyses, and shared by LAWS advocates and opponents. Forgetfulness about academic disciplines’ historic role in racist and colonial practice, a role with powerful contemporary conceptual and theoretical legacies, also plays a part, contributing to assumptions about the analytical and normative validity of concepts in the MHC debate that ought to be questioned.
Rendering alternative epistemological perspectives inadmissible and inapplicable means the voices of groups and communities inhabiting areas where LAWS are most likely to be deployed, and who have the greatest experience of the effects of key LAWS precursors, such as unmanned aerial vehicles (or “drones”), are likely lost. As Santos (2017, 118–35) argues, a key component of 500 years of Western intellectual dominance lies in placing some non-Western knowledge forms beyond “the abyssal line,” whereby such knowledge is not just downplayed or set aside, but ceases to exist epistemologically. Those adhering to such epistemologies consequently risk becoming “meaningless humans,” as they are unable to express their humanity through knowledge that has traction within the world on the “right” side of the abyssal line (e.g., Çapan 2017).
By demonstrating current epistemological closure and suggesting how alternative epistemologies may provide openings to new insights, I aim to open space for revising MHC debates. It is impossible to develop in detail the insights epistemologically distinctive positions will produce, as those will be diverse and necessitate extensive empirical research. A core claim of decolonial research is that knowledge production, distribution, and exchange have, for too long, been conducted to impose epistemological standards and forms on marginalized peoples and communities, aiming to make their world “legible” to others through standardizing the permissible forms that knowledge can take and through which discourse can occur to produce “valid” conclusions (on “legibility,” see, e.g., Manchanda 2017). Santos’ (2018) concept of the “rearguard intellectual” is an apt framing of the need for “expert” analysts to follow, not lead, in learning from marginalized peoples’ experiences and epistemologies. Training and accreditation in a specific tradition make this challenging, but that is not a reason not to try, even if it is a reason to adopt a perhaps unfamiliar intellectual modesty about who has insights, why, and how they are expressed. Praxis-based research techniques embedded within marginalized groups to collaboratively generate knowledge through partnerships are ways forward (e.g., Mignolo and Walsh 2018, 33–104; Santos 2018, 107–208).
This means the fourfold research agenda I propose includes the forms research needs to take and some indicative lines of enquiry—but not more. However, these are sufficient, I argue, to demonstrate that the framing of an “X human Y” debate is emblematic of insufficient consideration of the literally and metaphorically central component—the human. What constitutes “meaningful control” cannot be separated from who is regarded as a “meaningful human.” Logically, in fact, the question of the “meaningful human” is prior. Without assumptions about what it is to be a human and to live a specifically human life, the purpose of control and the basis for assessing its meaningfulness are absent. Claiming that current MHC discussions in forums such as the UN Convention on Certain Conventional Weapons (CCW) take place without any prior assumptions about the content of concepts such as “human” or “humanity” is indicative of the epistemological closure I critique.2
As discussed in section “Grounding MHC”, many major human philosophical traditions reject the possibility of an asocial or neutral account of what it is to be human. Consequently, my argument is distinct from a claim that LAWS, and debates about their regulation including MHC, are instances of neocolonialism.3 I take no stance on that issue here, as it is a different question. Neocolonial outcomes may follow from LAWS’ development and deployment: their control may take forms that discriminate against the interests of non-Western actors to embed forms of indirect domination of former colonial states by former imperial power centers. MHC could contribute to such outcomes. However, assessing issues of that sort is separable from my epistemological enquiry and its focus on what knowledge forms presently constitute the “meaningful human,” why those forms are exclusionary, and what may be gained from a debate that acknowledges and engages with different epistemologies and the “meaningful humans” they constitute.
This research agenda matters because epistemological exclusion and the resultant neglect of the nature of the “meaningful human” damage current MHC debate. The perspective, experience, and epistemology of populations where LAWS are most likely to be deployed, and who will experience most directly any violence they inflict—the non-white inhabitants of some of the poorest, most marginal, and least secure places on Earth—are epistemologically excluded. They are not “meaningful humans” in MHC debates. Consequently, they will exercise no control over the processes, policies, and doctrines through which LAWS may develop. They will have no say on where LAWS will be deployed, how they operate, the standards of accountability, or other key features of the future LAWS may create. The marginalization, occlusion, or co-option of efforts to refound or reinvigorate key principles, norms, or rules based on the experience of both peoples and leaders from what we now call the Global South in political practice and academic analysis is commonplace. Ideas of sovereignty, nonintervention, economic redistribution, and self-determination, for example, suffered these fates during the Cold War (e.g., Acharya 2018, 68–96; Getachew 2019, 142–75). As LAWS become linked to renewed ideas of Great Power competition and arms racing, similar dynamics of denying the relevance, even validity, of views and voices from outside the Great Power club may well repeat. However, I hope to establish a basis for alternative analysis and to pose important questions for existing claims about MHC. This builds on existing critical work, for example, challenging gender assumptions in LAWS debates (Roff 2016b; Jones 2018), and showing how the technocratic and managerialist politics of LAWS stifles engaged politics (Schwarz 2018). Focusing on the concept of MHC and the “meaningful human,” I add novel and analytically powerful insights from a decolonial perspective to this developing work (e.g., Blaney and Tickner 2017; Santos 2017, 2018; Mignolo and Walsh 2018).
The paper comes in four main sections. The first section locates MHC and its position in the academic and policy debates about LAWS. This introduces the restricted epistemological space the debate inhabits. The second, third, and fourth sections detail how this epistemological space locates LAWS within debates in, respectively, the IR and security studies, international law and ethics, and engineering policy literatures. The epistemological legacy of disciplinary intellectual history, and its neglect of issues such as race and colonialism in the making of these disciplines, is raised in all three. These three sections identify agenda items for future research, with the third section expanding on how one element of the research agenda may develop.
Locating LAWS and the MHC Debate
The confluence of increasingly sophisticated artificial intelligence (AI), more effective robotics, and machine learning through “big data” based on massive digitization of multiple data-gathering inputs from sensors emplaced in a growing range of locations and systems is at the core of the potential for LAWS (e.g., Haas and Fischer 2017, 283). That confluence is not confined to military sectors. In fact, the ubiquity of these developments and the benefits they offer means that, for some, LAWS appear near-inevitable. Illustratively, RAND notes (Morgan et al. 2020, iii): “The field of artificial intelligence (AI) has advanced at an ever-increasing pace over the last two decades. ... It should be no wonder then that AI also offers great promise for national defense.” Amandeep Singh Gill (2019, 175), former Indian Ambassador to the UN Conference on Disarmament and Chair of the Group of Governmental Experts (GGE) on LAWS at the CCW, stresses similar dynamics.
The economic, political and security drivers for mainstreaming this suite of technologies [AI] into security functions are simply too powerful to be rolled back. There will be plenty of persuasive national security applications—minimizing casualties and collateral damage …, defeating terrorist threats, saving on defense spending, and protecting soldiers and their bases—to provide counterarguments against concerns about runaway robots or accidental wars caused by machine error.
Alongside policy practitioners, academic research can assume the potentiality of LAWS, necessitating regulatory responses. For example, Carrie McDougall (2019, 58) notes, “As robotic and artificial intelligence (‘AI’) technologies continue to develop apace, discussions in relation to the practical, policy, legal and ethical implications ... are gaining in intensity.” Denise Garcia (2018, 334–35) reaffirms this: “The development of artificial intelligence is already resulting in major social and economic changes ... [with] the capacity to impart considerable benefits and dangers on the future, and, as such, there is an urgent need for innovative global governance in these areas.” LAWS are regarded by some as existentially significant for humanity (e.g., Human Rights Watch 2012, 1–2; Future of Life Institute 2015; Asaro 2019).
MHC has been promoted as a regulatory principle since 2013, when the NGO Article 36 introduced the term (Article 36 2013). Since then, MHC has been central to Human Rights Watch's sustained campaigning for a preemptive ban on LAWS and to the position taken by numerous states at the ongoing United Nations CCW talks (e.g., Human Rights Watch 2020a, b). Similarly, advocacy of temporary moratoria (Heyns 2013, 21–22) or of regulation (e.g., Anderson and Waxman 2013; Garcia 2018) also emphasizes that human beings must exercise meaningful control over LAWS. Typically, MHC is discussed in relation to humans being “in the loop”—taking each decision prior to LAWS engaging a human target—or “on the loop”—continuously monitoring LAWS in combat situations to ensure appropriate legal and ethical standards, and able to disengage LAWS if they appear to be malfunctioning (e.g., Human Rights Watch 2012, 2; Bode and Watts 2021, 16–20). The focus is on controlling when a human being is identified as a legitimate target and engaged with violence, reflecting core International Humanitarian Law (IHL) principles of discrimination and proportionality (e.g., Human Rights Watch 2016, 4–10).
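The structural difference between the two modes can be stated precisely. What follows is a minimal illustrative sketch only, in Python, using hypothetical names (`ControlMode`, `Engagement`, `may_engage`) that model no actual deployed system:

```python
from dataclasses import dataclass
from enum import Enum, auto


class ControlMode(Enum):
    IN_THE_LOOP = auto()  # a human must approve each engagement before it occurs
    ON_THE_LOOP = auto()  # the system proceeds unless a monitoring human intervenes


@dataclass
class Engagement:
    target_id: str
    human_approved: bool = False  # set by an operator under IN_THE_LOOP control
    human_vetoed: bool = False    # set by a supervisor under ON_THE_LOOP control


def may_engage(mode: ControlMode, engagement: Engagement) -> bool:
    """Return True only if engaging is permitted under the given control mode."""
    if mode is ControlMode.IN_THE_LOOP:
        # Default deny: positive human authorization is required for every target.
        return engagement.human_approved
    # Default allow: engagement proceeds unless a human has intervened in time.
    return not engagement.human_vetoed
```

The asymmetry of defaults is what matters for the debate: “in the loop” control denies engagement absent positive human authorization, whereas “on the loop” control permits it unless a human intervenes quickly enough—which is why critics such as Bode and Watts question whether the latter remains “meaningful” under combat time pressure.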
This is not unchallenged; Cummings (2019, 20–21, 24), for example, doubts this form of MHC is “meaningful” where time-critical, high-pressure situations around target engagement induce decision biases. She suggests “meaningful human certification,” where weapons development processes provide better opportunities for ensuring LAWS will not get out of control. Bode and Watts (2021) offer an extensive critique of MHC by drawing on experience with current weapons systems, which, when operating in certain modes and environments, “once activated, ... select and engage targets without further intervention by a human operator”—one of the standard definitions of “autonomy” in weapons systems.4 Bode and Watts describe substantial obstacles to meeting a threefold requirement for MHC: setting a weapon system's parameters, defining its operational environment, and providing appropriate supervision, including understanding the system's characteristics well enough to identify malfunction. Reviewing case studies of close-in weapons systems, area defense systems, and ballistic missile defense systems, they suggest all of these requirements are compromised. Future systems, deploying increasingly sophisticated AI capable of machine learning, will exacerbate the problems.
However, because of their narrow epistemic framing, mainstream LAWS debates both undermine the ostensible goal of maintaining MHC and reveal the “meaningful human” whose putative control needs maintaining. Analysis and advocacy draw on international law, especially IHL and International Human Rights Law (IHRL) (e.g., Heyns 2017), moral philosophy (e.g., Sparrow 2016), security and strategic studies (e.g., Altmann and Sauer 2017), IR (e.g., Bode and Huelss 2018), and engineering and computer science (e.g., Cummings 2019). This interdisciplinarity recognizes that LAWS represent multifaceted processes and dynamics, challenges, and opportunities (e.g., Gill 2019). Interlocking assumptions extend across empirical, conceptual, and theoretical domains to establish a distinctive epistemology that identifies the key challenges creating the need for MHC, specifies the experiences which should guide assessment of where meaningful control is most urgent and difficult, and constructs who is likely best placed to exercise such meaningful control. These assumptions reflect and entrench an epistemology arising from Western experience and philosophy since the late eighteenth century and, in particular, since World War II. Detailing how this operates establishes how a restricted notion of a “meaningful human” has come about, why it matters to producing a specific and exclusionary account of MHC, and what can be done to open new epistemological space. I therefore now move on to look at the key analytical frameworks.
IR and Security Studies
Western intellectual traditions, historical experience, and strategic interests characterize typical LAWS framing in IR and security studies, generating an account of MHC typifying the reified rationality, utility maximization, and account of expertise associated with the white, Western, male epistemic archetype. Well-established debates between structural realism (e.g., Waltz 1979) and neoliberal institutionalism (e.g., Keohane 1984; Ikenberry 2001) describe structural dynamics driving interstate security maximization and strategic competition under conditions of anarchy on the one hand and the potential of institutions to mitigate those dynamics through reducing uncertainty, enhancing confidence through improved communication, and creating compliance incentives, on the other hand. Within this consensus, war's nature is Clausewitzian: clashing state wills furthering policy ambitions. War in IR is an unavoidable and occasionally necessary tool in the policy arsenal, despite its grave risks and uncertainties. This security–war nexus posits immutable structural dynamics shaping state behavior. Managing strategic destabilization means addressing two key risks: “... [firstly] proliferation of arms and the emergence of arms races, [and secondly] crisis instability and escalation, either across the threshold from peace to war, or, when war has already broken out, to a higher level of violence” (Altmann and Sauer 2017, 120–21).
The security dilemma is prominent: military innovations and advances by one side are perceived as creating insecurity by the other, necessitating improvements in military capability that are, in turn, seen as destabilizing by the first side. Morgan et al. (2020, xvi) argue the United States has no choice but to “... stay at the forefront of military AI capability. ... [N]ot to compete in an area where adversaries are developing dangerous capabilities is to cede the field. That would be unacceptable.” Things likely look the same from Moscow and Beijing. Fear of LAWS arms races, and associated heightened risk of war, is widespread (e.g., Haner and Garcia 2019; Morgan et al. 2020). As the Future of Life Institute (2015) puts it, “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major power pushes ahead with AI weapon development, an arms race is virtually inevitable.” Haner and Garcia (2019, 335) suggest that race is underway. Roff (2016a, 123) stresses the likely ubiquity of LAWS if they proliferate, undermining the possibility of managing interstate security dilemmas: “... [A]utonomous weapons ... will eventually find a home in every domain.... They will hunt in packs. They will be networked in systems of unmanned weapons systems. They will patrol computer networks. They will be everywhere.”
Turning to the second strategic logic, incorporating AI systems into Command, Control, Communication, and Intelligence (C3I) systems risks destabilization. AI-enhanced systems may allow decisive C3I infrastructure attacks. This could render commanders deaf, blind, and mute: unable to receive information from their own forces, to see what is happening on the battlefield through intelligence assets, or to issue orders to organize defense (e.g., Altmann and Sauer 2017; Horowitz 2019). Cold War concerns about “decapitation strikes” saw states decentralize decision-making, employ “launch on warning” protocols, and take other steps to ensure the temptations of a decapitation strike were reduced, even at the risk of losing control over using military force and more rapid escalation.
Further from current reality are advanced AI systems interacting with one another and altering their behavior, and even their coding, as machine-learning capabilities deepen and the embedding of LAWS in decision-making processes increases. LAWS offer critical speed advantages, creating incentives to reduce human involvement in decision-making processes (e.g., Maas 2019, 140–41). Altmann and Sauer (2017, 124) note, “operational speed will reign supreme,” something that prompted US Deputy Secretary of Defense Bob Work to comment in 2016 that the United States’s self-denying ordinance against delegating lethal decision-making to machines may be tested in conflict with adversaries not exercising such restraint (Altmann and Sauer 2017, 124). This again reflects Clausewitzian accounts where war's changing character grants substantial advantages to those who can think, act, and strike most quickly, with the greatest concentration of force, and with the best knowledge about the enemy (e.g., Jensen, Whyte, and Cuomo 2020, 534). Marginal gains over strategic adversaries may prove decisive, and even temporary shifts in the military AI balance could provoke pre-emptive strike logics (e.g., Maas 2019, 141–43).
These concerns undermine potential regulatory structures reflecting Cold War experience, such as arms control treaties. Gill (2019, 173–74) summarizes: “From an arms control perspective, the central questions are whether AI in weapons will lower the threshold for the use of force in international relations, whether it would accentuate strategic instability, and whether it would trigger new arms races and empower shadowy nonstate actors.” Definitional challenges with dual-use technologies such as AI, verification challenges in relation to software, and the ease of concealment and cheating make LAWS arms control—whether through new protocols and treaties or via adapting existing mechanisms—exceptionally difficult (e.g., Maas 2019, 143–44), contributing to pressure for outright bans as regulation may prove unworkable (for discussion, see, e.g., Anderson and Waxman 2013). The security maximizing dynamics of the anarchical international system and the resultant deep-rooted problem of the security dilemma are seemingly both baked-in and exacerbated by AI developments leading to LAWS.
MHC as a key component of managing LAWS risks therefore privileges humans best able to navigate this complex, dangerous, and multifaceted strategic environment. They will have to be astute strategists; skilled diplomats; fully versed in military doctrine, operations, and tactics; and calm calculators of utility maximization able to balance dilemmas, even trilemmas (Himmelreich 2019), in pressured situations. This reflects an approach to MHC that sees it primarily as a complex technical challenge, balancing sometimes conflicting political, legal, and military factors within a strategic logic assumed to be heavily conditioned by inescapable structural factors shaping options and defining optimal outcomes. One set of concerns is therefore whether those who might acquire LAWS are fully “rational” in the Western strategic sense. Developments by Russia and China, and by terrorist organizations, consequently pose the gravest dangers, as these actors may not commit to the ethical and legal restrictions essential to strategic stability (e.g., Harari 2018; Maas 2019, 147–48; Morgan et al. 2020, xiv, xv, xvii, 27).
The epistemic paradigm of MHC emerging from Cold War experience sets the strategic agenda for analyzing LAWS. However, that experience is not characteristic of Western uses of force in the past 250 years. War has colonized non-European political communities, enabled slavery, eliminated indigenous peoples, imposed forms of economic interaction, and suppressed political opposition to established authorities most conducive to European governments, their settler colonies, or trading corporations. Predominant forms and experiences of military violence are marginalized. IR's disciplinary intellectual history bears significant responsibility because it ignores non-Western traditions of thought (Blaney and Tickner 2017), non-Western histories (Buzan and Lawson 2015; Acharya and Buzan 2019), non-Western experience (Phillips 2014, 2016), and occludes its indebtedness to discriminatory and racist concepts and theories (Hobson 2004, 2012; Vitalis 2015). While non-Western states raise issues of strategic stability (e.g., Bolivarian Republic of Venezuela on Behalf of the Non-Aligned Movement and Other Parties to the Convention on Certain Conventional Weapons 2018, para. 4(f)), arms races, and discriminatory export controls (Brazil 2019a) in the CCW GGE, these remain within the epistemic space defined by particular and selective Euro-Atlantic-centric accounts. Engaging richer, and more honest, military and strategic history is the first research agenda item and one also applicable to international law and ethics, and engineering and technology policy as shown below.
The ideal-type “meaningful human” to control LAWS carries these legacies unnoticed and unquestioned. They are a “meaningful human” privileged by centuries of violently oppressing non-Western and non-white people, and legitimized by an ostensible strategic stability that rested on decades of Cold War violence fought out at the cost of millions of lives in proxy wars, coups, insurgencies, and dictatorships air-brushed from this debate. The ostensible neutrality of “human” and “humanity” in MHC instantiates this intellectual tunnel vision about security. Those categories cannot escape this epistemological legacy. To be a “meaningful human” able to exercise “meaningful control” assumes acceptance of this framing. “Meaningful” exists within this paradigm, which portrays the absence of general, system-wide war since 1945 as the product of astute great power management, led by the West. “Control” must therefore contribute to reducing the dangers of destabilizing this delicately poised, but supposedly universally beneficial, management system. This is not to dismiss the potential severity of the strategic destabilization or increased escalatory risks dominating the IR and security studies aspects of MHC debates. It is, though, to call for critical engagement with the selective history that is called upon as evidence of these risks and how to manage them, and with the epistemological assumptions about “rationality” that create the “meaningful human” who is both the reference point for and the idealized exerciser of “meaningful control.”
The post-9/11 security debate reinforces this epistemological privileging, portraying non-Western space and those who exploit it as the source of insecurity. This ties in to the development of a key precursor technology for LAWS, armed drones, as critical to exercising control over “dangerous” space. For example, the United States portrays “un-governed,” “under-governed,” or “ill-governed” space as a key threat. This is where transnational terrorists will inevitably gather, exploiting opportunities to train, organize, and plan attacks against the West. This narrative, present in every US National Security Strategy since 2001 (United States 2002, 10–11, 2006, 7, 8, 9, 12, 15, 2010, 8, 19, 20, 2015, 9, 10–11, 26, 2017, 10, 11, 48), ostensibly necessitates penetrating these spaces for intelligence, surveillance, and reconnaissance purposes, in preparation for potential strike operations to “disrupt,” “dismantle,” “degrade,” “destroy,” or “defeat” transnational terrorist organizations (on the multiple threefold permutations of these “D”s, see Page 2016). Presenting such organizations as “cancers,” or through other dehumanizing tropes, is widespread, reinforcing these threats as inescapable and as necessitating action (e.g., Obama 2014; Cameron 2015; Price 2019).
Drones, and their complex supporting infrastructure, have become central to US practice in places including Pakistan, Afghanistan, Somalia, Libya, Iraq, Syria, and Yemen. A dangerous “non-West,” often invoked through colonialist cultural tropes (Neocleous 2013; Satia 2014), is the source of threats, destabilization, and terror. What the “West” did to them in the past, which may help explain hostility and distrust, is forgotten. Management to protect Western interests through military force is essential. The histories, experiences, perspectives, and epistemologies of people subjected to such violence are fundamentally irrelevant. They may be “objects” of violence, including the violence inflicted on them by terrorist and insurgent groups, but in thinking about the lessons of drone use for MHC, there is no sign such experiences and perspectives have anything significant to say about what “meaningful” control might entail from the position of those most likely to experience counterinsurgency (COIN) and counterterrorism (CT) uses of LAWS. These people are not “meaningful humans.”
Summarizing Cold War and post-9/11 influences on LAWS debates shows the epistemological privileging of Western IR and security expertise is ubiquitous. Defense and security “experts” are the “meaningful humans” able to exercise appropriate control. Expertise arises from, and is judged against, this framework and through the exercise of technical competence in understanding the framework's components. For example, authoritative interpretations of key reference points are claimed for US military authorities: Pentagon doctrine captures “the perceptions of practitioners across the globe,” and former senior US officers are definitive interpreters of Clausewitz as the canonical theorist of war (Kirkpatrick 2016; Jensen, Whyte, and Cuomo 2020, 527). The relevant expert constituency comprises political and military leaders making decisions about which systems to develop, how to integrate them within existing armed forces structures, when to deploy them to the field, and how to evaluate their contribution. Current efforts to quantify the effectiveness of “decapitation” drone strikes against terrorist and insurgent groups (e.g., Johnston 2012; Price 2012; Johnston and Sarbahi 2016; Rigterink 2020) focus on whether and to what extent LAWS reduce risks to “us.” When Roff and Danks (2018, 2) offer a neglected view, it remains a Western one: “The debate about autonomous weapons systems ... overlook[s] ... how such systems ... will affect those individuals they are supposed to help: the warfighters.” These are exclusively the warfighters of armed forces that match the blueprint of advanced industrial democracies, and key official reference points come from the US Air Force and the US Department of Defense.
Whether managing strategic rivalry, coping with arms races, devising regulatory mechanisms, countering terrorists and insurgents, or addressing threats from failed state “safe havens,” non-elite and non-Western perspectives are marginalized at best. I can find no interest in how populations who have directly experienced key precursor technologies, such as drones, think about MHC. What standards for regulation would they set? Which operational tasks might they think suitable for LAWS? What trade-offs do they think are appropriate between control and military advantage? How do their reference points for assessing the purpose of war, the nature of strategy, or the role of combatants cast light on LAWS? This sets research agenda item two: the need for appropriate empirical research with those who will be on the receiving end of LAWS and/or live where they will be deployed. This is, of necessity, an empirical question that requires careful research and cannot be prejudged on the basis of theoretical assumptions about what such people and peoples can be expected to think, as considered further in the next section. What this discussion emphasizes is how the current account of MHC, drawing on IR and strategic studies reference points, creates a technical standard for MHC rooted in assumptions privileging one set of experiences of and perspectives on military technology. The control to be exercised is corralled within this epistemic space and accessible only to those meeting these epistemic standards. Those outside that space, judged as lacking necessary and appropriate expertise and understanding of the political and strategic contexts, cannot gain access to these debates about what it means to control LAWS.
Currently, there is no explanation of why only one perspective matters. There might be an unstated empirical bias here: these are the perspectives of those who, as a matter of brute fact, set the agenda and make the decisions affecting the future of LAWS. Such a bias ought not, we might assume, to be present when turning to expressly ethical debates and their manifestation in law, but here, too, only one perspective dominates. And, again, there is no explanation as to why.
International Law and Ethics
As with IR and security studies, I cannot fully survey legal and ethical debates here, focusing instead on how they also embed a specific “meaningful human.” Protecting human rights as far as possible under circumstances where humanity is often in short supply is central to the IHL and just war traditions, especially in the face of short-term incentives to commit war crimes and crimes against humanity. A Western tradition of political and legal thought rooted in post-Enlightenment social contractarianism, defining rights-holding human individuals as irreducible political subjects, is entrenched in legal and ethical debate about war, largely replacing the Catholic natural law tradition that drove just war theory from Augustine to Grotius (O'Driscoll 2008; McMahan 2013; Rodin 2014). Many of the thirty states currently supporting an outright ban on LAWS (Human Rights Watch 2020b, 4) appeal to a mixture of ethical and legal objections, with some also identifying the strategic risks already discussed.
This legal and philosophical heritage creates two principal analytical axes in current efforts to define, first, how to ground MHC in basic ethical claims, such as dignity, and, second, how MHC can be maintained. I start with its maintenance, showing how current debates around compliance and accountability, principally framed via legal reference points, adopt a specific concept of the “meaningful human” from this Western tradition. This helps explain the focus on control through comparing potential LAWS performance with that of humans, and on making control meaningful through holding humans accountable. The second part of this section draws in the question of how MHC might be grounded, by considering objections to LAWS on the basis of “dignity.” This opens the path to a more sustained critique of taken-for-granted assumptions about “meaningful humans” in legal and ethical analyses, including starting to elaborate on insights arising from non-Western epistemic traditions, forming item three of my research agenda.
Maintaining MHC
Protecting inherent and inalienable human rights prompts concerns about LAWS’ potential noncompliance with IHL and IHRL. MHC is important to protecting human rights through legal compliance, modeled on human functions and standards defined via rights. Liu (2019, 104) notes: “Implicit [my emphasis] within the [L]AWS debate to date is the treatment of an autonomous weapons system as a substitute for the human combatant.” “Implicit” reinforces my argument that IHL is not and cannot be neutral about the “human” in MHC because it assumes a specific subjectivity that underpins an account of agency that drives ideal types of compliance and accountability. Philosophical comparisons with humans are sometimes explicit. Robillard (2018, 715) bluntly states, “Either an [L]AWS is an agent or it is not. It is not both and it is not neither.” His preference is that LAWS are not agents and cannot be so. As discussed in “Grounding MHC”, however, that clarity is specific to a particular, Western, approach to agency, one not shared by other powerful accounts. Robillard's argument that a binary distinction between humans and LAWS is the only philosophically and ethically sound way to ask this question is indicative of the epistemological closure around MHC.
The consequence of this assumption is that compliance debates about LAWS rest on comparisons to the ideal-type human combatant: the IHL-compliant just warrior. Consequently, and irrespective of the legal ideal of zero breaches of IHL and IHRL, reliably achieving fewer breaches than current human combatants is a widely touted potential advantage of LAWS, acknowledged, even by critics, as a standard that could make LAWS acceptable (Heyns 2017, 175; Garcia 2018; Gill 2019). Potential noncompliance with IHL and IHRL is commonly emphasized in CCW GGE meetings by representatives of non-Western states and groups of states opposing LAWS development and advocating moratoria or bans. Illustratively, the General Principles on LAWS submitted by Venezuela on behalf of the Non-Aligned Movement emphasize implementing IHL and IHRL, retaining effective accountability for legal breaches, and embedding these legal reference points in a future legally binding instrument on LAWS (Bolivarian Republic of Venezuela on Behalf of the Non-Aligned Movement and Other Parties to the Convention on Certain Conventional Weapons 2018, para. 3, 4(a), 4(b), 9). Sri Lanka identifies IHL compatibility as “central” to GGE deliberations (Sri Lanka 2017), Brazil stresses “convergence” around full compliance with international law and IHL “in particular” (Brazil 2019b), Cuba argues that non-IHL-compliant LAWS must be banned (Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System 2021, 36), and South Africa emphasizes the risks of indiscriminate systems, referencing a core element of IHL (Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System 2021, 79). China's position defines LAWS as inherently indiscriminate weapons systems, which, once activated, are incapable of being terminated; yet it also states that IHL must be applicable (People's Republic of China 2018). IHL compliance is prominent in the current GGE Chairperson's Summary of possible elements for consensus recommendations (Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System 2021, para. 15).
Whether, and, if so, by how much, LAWS may outperform human combatants in IHL and IHRL compliance is impossible to define in advance and in the abstract via rights-based legal methods. For example, subjectivity is inherent to proportionality judgments unavoidably made in context. Additionally, LAWS’ characteristics mean they will undertake missions human combatants would not and could not. IHL and IHRL may be inapplicable or inappropriate to nonhuman missions. Furthermore, quantifying IHL compliance plays to posited strengths of LAWS: they will not get tired, become angry, be vengeful, or succumb to other emotions commonplace in the commission of war crimes (e.g., Blount 2011, 36; Schmitt 2013, 13; Horowitz 2016a, 29). Unemotional calculating killing machines may, if appropriately programmed and sufficiently artificially intelligent, offer superior IHL compliance. Liu (2019, 98–99) summarizes:
IHL principles of discrimination and proportionality are … technical performance criteria … and not true legal challenges. … Insofar as the legality of [L]AWS is collapsed into the question of IHL compliance, [L]AWS will be declared as lawful prematurely and based upon overly narrow grounds. Instead, IHL compliance … should be treated as a sign of caution, precisely because high-performing [L]AWS will be rendered invisible and unobjectionable in IHL terms.
More profoundly, LAWS, like all complex technological systems, will suffer “normal accidents”: systems unavoidably, if only occasionally, fail. How failures manifest is not predictable, and they cannot be designed out. Seeing IHL breaches, if that is the form such normal accidents take, as systemic outcomes, as opposed to ascribing them to specific agents (whether human or technological), “... would alter the very meaning of what a human right is, and what it means to breach its protections” (Liu 2019, 107).
This suggests IHL debates open onto deeper ethical issues raised by LAWS about the nature of the “meaningful human.” If the “meaningful human” is the classic sovereign, rights-holding, liberal individual that is the bedrock of contemporary legal and ethical debates about war, then LAWS may present a fundamental challenge, as the inevitability of “normal accidents” calls into question what such rights actually are and what breaching them means. The question of the “meaningful human” is live, but, at present, it is masked by a focus on accountability for LAWS.
A great deal of legal literature addresses accountability (e.g., Sparrow 2007; Crootof 2016; Nyholm 2018; Robillard 2018). For control to be “meaningful,” humans must be accountable through institutionalized processes drawing a clear, bright line between the human and the nonhuman. As Heyns (2017, 57) puts it, “If ... accountability is premised on some level of control, it is hard to see how there can be accountability without meaningful human control.” Accountability and justice are “strictly human affair[s]” (Heyns 2017, 57, 59). Robillard's assertion that LAWS are not and cannot be agents is confirmed, because they cannot be accountable and thus cannot exercise meaningful control.
Achieving accountability and justice, therefore, reflects assumptions about the “meaningful human” as a consciously acting agent, able to take responsibility for their actions on the basis of (non)compliance with rules and norms of behavior. Individuals may be situated within complex social settings, which allocate levels of authority and responsibility, such as a “chain of command,” but accountability must come back to specific humans on this account, which is why “normal accidents” present a fundamental challenge. Individual human accountability grants meaning to control. Using the “chain of command” to ensure those authorizing LAWS deployments are accountable is the obvious way of addressing this difficulty (e.g., Robillard 2018) and is prominent in policy recommendations for future governance regimes (e.g., The Canberra Working Group 2020). However, three illustrative problems suggest accountability gaps. First, it is unlikely commanders will fully understand the algorithms behind LAWS’ decision-making, and so they may not appreciate the risks raised by certain circumstances. Second, increasing LAWS sophistication toward levels of deep learning closer to general AI may generate unpredictable behavior. Third, the speed of LAWS decision-making and action may make human control via “on the loop” monitoring and “in the loop” decision-making impractical, unachievable, or significantly detrimental to military effectiveness. All three problems already manifest in MHC over currently deployed systems, such as Close-In Weapons Systems (CIWS) (Bode and Watts 2021). Accelerating technological innovation exacerbates these risks.
Other means of establishing individual accountability could include removing legal indemnities from executives of LAWS-producing firms that isolate them from accountability for the battlefield use of their systems (e.g., Cummings 2019, 25). “Strict liability” could ensure relevant humans cannot escape responsibility for failures of the LAWS they authorize to act, but seems unjust to some because it penalizes people for things that are not, on balance, their fault (e.g., Heyns 2013, para. 79; McDougall 2019, 81–82). Drawing on civil legal standards via “war torts” might further establish accountability for damages caused by LAWS (Crootof 2016). Research into maintaining accountability is therefore already lively and effective. Many of these arguments are well represented in regulatory proposals, including calls for bans at least in part on the basis of the scale of these challenges. This “grounding” in an individual human, however, needs unpacking.
Grounding MHC
Arguing that LAWS may outperform humans in IHL compliance, as a basis for their possible permissibility, and consequently that accountability must be ensured for instances where they do not, risks creating a utilitarian calculating competition that humans may well lose. Legal and ethical objections also invoke more foundational claims about ethical impermissibility. Rosert and Sauer (2019) and Heyns (2017), for example, stress that allowing algorithms to make decisions to kill affronts human dignity, something legally enshrined. This argument, expressed in various ways, is powerfully present in appeals for outright bans, with many of the states proposing banning LAWS stating fundamental ethical objections. These, however, are often couched within the epistemic space of IHL via invoking the Martens Clause and its test of public conscience (Sri Lanka 2017; Brazil 2019b; Human Rights Watch 2020b).
Constituting the “meaningful human” via a right to dignity begins to fill the gap left by the idea that neutrality is possible. “Dignity,” and in particular the right to a dignified life (and death), raises the question of how dignity manifests itself. That would seem to open space to consider how different philosophical, ethical, religious, and cultural traditions answer that question. Notably, in the only piece in this survey consciously to invoke a non-Western perspective, Heyns (2017, 62–63) sees “Ubuntu” merely as a contributor to legal formulations of the right to dignity, not as a basis for asking far-reaching questions about whether and to what extent the rights-based legal order can cope with the challenge of LAWS. Grounding MHC also demands consideration of how the dominant ethico-legal framework for locating LAWS achieved its privileged position. Here, as with the denial of disciplinary history and epistemological closure in IR and security studies, it is worthwhile pausing to consider the baggage that IHL and just war frameworks carry.
The ostensible universality and cultural neutrality of the “human” that enables a universal right to dignity reflect a specific epistemological and historical context connected to compliance with a Western “standard of civilization” used to assess which legal statuses, protections, and privileges might be granted to non-white peoples and polities (e.g., Gong 1984; Keene 2002, 2014; O'Hagan 2017). While losing its formal international legal institutionalization early in the twentieth century, the standard of civilization concept remains implicit in many spheres, including the use of violence (e.g., Keal 2017; O'Hagan 2020). The sovereign, liberal, rights-holding subject of contemporary IHL is a geographically, intellectually, and historically specific account of subjectivity. It grounds a tradition of international law that, in the quite recent past, has been complicit in the racist exclusion of non-white and non-Western peoples, creating them as “meaningless humans” in many ways.
As with the description of terrorists or insurgents as “cancers” noted previously, descriptions of them as barbarous, savage, monstrous, and via other dehumanizing terms are too numerous to mention. The prospect of setting LAWS loose on the nonhumans or subhumans who may be terrorists or insurgents, or complicit in their activities, or just unfortunately nearby, raises accountability issues in relation to how “civilized,” “meaningful humans” will hold LAWS to account. Accountability, via the law, is to the law written, administered, and enforced by those most likely to deploy LAWS. Accountability to those civilians in places where LAWS will most likely operate is vicarious at best—via these processes. Whether that leads to anything recognized by them as “justice” in their terms is a nonissue in the mainstream legal–ethical debates about LAWS.
Widening the debate about how to “ground” MHC, as the third item for a renewed MHC research agenda, means engaging non-Western epistemologies, such as indigenous cosmologies or Daoist relationality. These develop the possibilities of an alternative “meaningful human” to exercise control over LAWS. Agency exists in and through human relationships to other humans and, potentially, nonhumans, as in Andean indigenous cultures that locate life forces running through and between people and other living things in their environment. Concepts such as Pachamama or sumac kawsay illustrate how humans’ agency and moral standing are located within complex relational systems (e.g., Santos 2018, 9–12, 238–43). These extend to other dimensions of the natural world, raising profound questions about technologies such as LAWS. Unconnected to any kind of spirituality, and created within a techno-political-legal world justified via the abstract rather than the lived, modeled on the universal rather than the particular, LAWS do not just threaten physically; they threaten existentially. Existing in relation with others is an epistemological perspective completely absent from LAWS debate. Yet it is a fundamental perspective on the nature and meaning of human (and other) life with deep and wide roots in the human experience, even if post-Renaissance Western political and economic thought and action has sought to obliterate it, including through the mass extermination of people (e.g., Mignolo and Walsh 2018, 153–210).
Liu's (2019) argument for seeing LAWS as “networks” and “systems” suggests a possible opening to something approximating these epistemologies. Moving away from LAWS as substitutes for human agents, and instead locating them as components of more complex forms that include human beings, means thinking about and analyzing them differently depending on the functions performed. That hints at a potentially relational epistemology, although one with very limited potential, because it retains IHL as the key reference point, accepts the strategic rationales behind LAWS, and consequently closes the debate about MHC to those outside these milieus.
So why not actively seek, and take seriously, the views of people most likely to be most directly affected by LAWS to gain different perspectives on the “meaningful human” exercising control, including consideration of what “control” means? Rather than decidedly paternalistic (Barnett 2017) MHC notions within a highly restricted epistemological framework, “meaningful human” control should reach outside this framing. Relational methodologies in IR, often linked to non-Western epistemologies, including Daoism and Confucianism, highlight how meaning and value exist through interconnections relating one to others, not within the “thing” itself (e.g., Ling 2014; Qin 2018; Ling and Nordin 2019; Nordin et al. 2019). A person's moral value exists in relationships granting meaning and creating ties. Robillard's certainty about the singularity of agency is, in fact, widely contested, as is the idea of a neutral pre-social conceptualization of “the human” or “humanity.”
Relationality's centrality to systems of thought as extensive, influential, and durable as Daoism, Confucianism, Andean cultures, and Indian philosophical traditions linked to “Advaita” (Shahi and Ascione 2016) means these perspectives are not a footnote to thinking about social, political, and ethical relationships between human beings and the past, present, and future of the world. “Meaningful humans” are meaningful because of interrelationships with, through, and to other humans, or other features of the natural world and the “spirits” of those who have lived and who will live. Human subjectivity is fundamentally social and cannot be defined outside of relations, because it does not exist outside of relations. Neither, consequently, is it static, because subjectivity varies depending on the intensity and nature of the relationships that exist. This renders abstract hypothesizing of MHC and of the proper way to establish control and accountability highly problematic, and the idea of a universal or neutral stance on “the human” or “humanity” untenable.
Utilizing these powerful and widely held relational perspectives to challenge Western-centric legal and ethical assumptions in MHC points to how the empirical research agenda should be taken forward. That research goes beyond theoretical work exploring traditions as rich as Daoism and indigenous cultures and the methods they use to conceptualize distinctive forms of relationality, something I have outlined very provisionally here. It demands careful empirical work led by and with those developing relational perspectives on and accounts of MHC and LAWS, imbued with the spirit of Santos’ “rearguard intellectuals” and committed to research as praxis, as stressed by decolonial theorists such as Mignolo and Walsh (2018). The references for this paper show a debate dominated by Western-based and Western-trained academics and policy professionals, with non-Western governmental representatives and experts engaged in forums such as the CCW GGE framing their positions, objections, interventions, and commentaries within the same epistemic space. Developing new research projects into non-Western perspectives, centered on those most likely to experience LAWS in the future and on those with direct experience of precursors such as drones, is essential. Such research needs to avoid the methodological shortcomings identified in research into experiences of living under drones in Pakistan, for example, and the parsing of such experiences through IHL and IHRL as the way to make them “meaningful” (Williams 2015c, 168; Shah 2018, 52).
Technology and Engineering Policy
Epistemologically privileged claims are present within the engineering and technology literature on LAWS. Some are highly personal: “As a former fighter pilot for the US Navy, but also as a professor of robotics ...” writes Cummings, going on to decry “debates filled with emotional rhetoric, often made worse by media and activist organizations” (2019, 20). This common rhetorical device—discrediting opponents as emotional and thus not fully rational—and the claiming of personal epistemological privilege via credentials go much deeper, though. Technological and engineering writing reinforces LAWS’ ostensible virtues in unemotionally calculating legal compliance, creating a “meaningful human” that is, ironically, so hyper-Western as to be more machine-like than human. This section focuses on how this technological trope works to close down debates about “meaningful humans” and then points toward how efforts to establish empirical ethical bases for assessing MHC ought not to proceed. That leads to a fourth and final research agenda item—decentering Western technological history and ideas of progress, and the gendered and racialized assumptions they contain.
Rationalistic ideals ascribed to LAWS are already critiqued for their gender stereotypes: rational, utilitarian, detached, logical (read “male”) LAWS are superior to emotional, illogical, and impulsive (read “female”) humans. MHC deploys a masculine stereotype of the “fully” human person that, in the face of technological advances, no human can now fulfill: “all humans become subordinated as weak, incapable and emotional; that is, feminized” (Roff 2016b, 11). “Technology is not going to free us from gendered practices and hierarchies ... but will instead reify those very practices and power relationships. ... [C]reating gendered autonomous machines ... will solidify a version of hegemonic masculinity, and further factionalize and subordinate all other masculinities and femininities” (Roff 2016b, 12). The tragic irony of ascribing to LAWS the Western masculine characteristics used for centuries to oppress women and non-Western men, thereby potentially sidelining all humans in decision-making about whether, when, and how to kill humans, is seemingly lost.
Jones (2018) develops xenofeminist critiques of LAWS (and other technologies), blurring the categories of human and technology, and “hacking” AI development, which rests on unspoken paradigms of white, Western, heteronormative masculinity, to reorient technology toward feminist outcomes. This provides analytical and normative purchase on AI. Challenging the epistemological location of the dominant LAWS debate through the two-step move of posthumanism and xenofeminism establishes that apparently taken-for-granted assumptions in LAWS debates should be destabilized. Taking seriously what it means to be human, and what it means to look at technology from a marginalized perspective, is invaluable.
Contra Cummings (2019, 21), MHC cannot be reduced to “a discussion about role allocation between humans and autonomous systems” where good decision-making is defined algorithmically (e.g., Arkin 2009) as the application of a specific set of rules in a context where the immediate utilitarian maximization of desirable outcomes and the minimization of undesirable harms is the standard of judgment. Killing the “right” people or destroying the permitted material objects lends itself to quantification and a data-processing epistemology commonplace in the engineering and technology approach. AI can process more data from a wider array of inputs faster than any human or team of humans, so, the argument runs, let AI make the decisions, especially in high-pressure, time-critical environments (Cummings 2019).
MHC may even become unnecessary, especially at the point of target engagement, if operational simulations validate carefully designed and tested systems. This is Cummings’ (2019) notion of “meaningful human certification,” reflecting technological determinism and algorithmic approaches to decision-making found elsewhere in engineering and technology discussions of autonomous processes in increasingly technologically mediated military functions (e.g., Li, Huai, and Wang 2017). If ethics and law are programmable problem sets solvable through algorithmic processing of large-scale data and machine learning, and in need of insulation from “emotion” or similar pollutants, then the case for LAWS makes itself (Arkin 2010).
The benefits of autonomous systems that “help prevent injuries or death, as is the case with advanced driver assistance systems increasingly found in automobiles,” are apparent to “most reasonable people” (Kirkpatrick 2016, 27). This analogy with cars (see also Nyholm 2018) reinforces the idea that “reasonableness” derives from a particular epistemology. Reductions of harms such as death or injury through technological innovation, arising from a capitalist mode of production, speak to a utilitarianism that is universalizing in its assumptions and hostile to accounts of ethical relationships to other humans, to the natural world, or to ideas of spirituality and the divine. These are not calculable and so, by inference, not “reasonable” in thinking about LAWS. The “meaningful human” in this paradigm is abstracted from specific social, political, economic, cultural, and historical circumstances. Yet, in reality, the idealized conditions of post-enlightenment Western modernity are inextricable from the account of reasonableness: the rational, sovereign, liberal, rights-holding subject within a modern project. Particularly pernicious here, however, is that this subjectivity is trivialized through the analogy with civilian technologies, such as autonomous cars, negating the moral distinctions between peace and war, with their distinct ethical codes and systems and the virtues they demand (e.g., Walzer 2006). Technologically rendering violence against non-Western peoples “ordinary,” a part of the everyday and the everywhere, and imposing onto non-Western peoples a subjectivity that confers permission to kill has been a trend in drone warfare (e.g., Gregory 2011a; Shaw and Akhter 2012; Niva 2013; Williams 2015b; Agius 2017; Gregory 2017; Hurd 2017). This technology prefigures many of the potential uses of LAWS and displays a track record whereby control over drones has been moved beyond public scrutiny or engagement, even by citizens of Western democracies, and certainly by the citizens and governments of the places where drone use is most extensive (e.g., Niva 2013).
MHC via the engineering and technology processes and principles that drive AI-based autonomy has no room for people who see their relationship to others and their place in the world differently. Disciplinary forgetfulness about racial and colonial legacies manifests here, too. The role of technological and engineering sophistication in constructing “barbarians” and “savages” in the “standard of civilization” is set aside. Bourke (2017) reveals how the language of overt civilizational hierarchy that characterized technological discussions of weapons development in the past has been sublimated and replaced by euphemism. Outcomes, though, are similar: “their” “irrationality” and “unreasonableness” are reasons why LAWS should be developed and deployed to protect “us” from violence stemming from their failure to adopt “our” subjectivity.
Technology's centrality to the history of colonial and post-colonial violence, from the civilian bombing strategies of World War II and the conduct of the Vietnam War to collusion in contemporary conflicts such as Yemen, to pick from an immense list, is glossed over. The human victims of increasingly technologically mediated, amplified, and justified violence count for little; their perspective as “meaningful humans” counts for less. As with IR and security studies, and law and ethics, destabilizing the historical mythmaking of Western civilizational superiority in this literature is an important agenda item.
This dehumanization of non-Western subjectivities in the engineering and technology literature, and the trivializing of LAWS via analogy to other autonomous technologies, miss another key aspect of how LAWS will develop. As Jones effectively highlights, and as is also discussed in the critical literature on drones (e.g., Williams 2011; Gregory 2011b), the human–AI interface and the development of technologically augmented human combatants make issues of human control and human–machine interconnection increasingly complex. The point in the design, development, deployment, and operational use of LAWS at which “control” occurs is complex, and is noted in GGE discussions as a point of concern (e.g., Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System 2021, para. 7–12). However, technological augmentation and the increasing use of AI are not confined to human–technological assemblages of drones and other weapons platforms. The direct technological augmentation of human beings is a near-future prospect, as systems such as exoskeletons to augment human strength, speed, and endurance move from laboratories to operational deployment. As Nyholm (2018) discusses, albeit in a very different way and rooted in the problematic analogy with driverless cars, LAWS will most likely operate as parts of systems where human agents are present and acting. While Nyholm addresses the ostensible “accountability gap” LAWS create, following Jones, I show how this opens the door to alternative epistemological perspectives.
Jones (2018, 2019) shows how posthumanism asks questions about the nature of (technologically augmented) human agency. Is the MHC concept applicable to a weapon system that includes both human and machine components? Inclusion could be via direct technological augmentation of human decision-making capacities: for example, information gathering, processing, and analysis, through to partial decision-making. This echoes worries about “on the loop” MHC falling prey to well-established human tendencies to defer to normally accurate and reliable technological systems, even when those systems recommend actions that other information challenges (e.g., Roff and Danks 2018, 9). However, the argument goes further: are technologically augmented humans still “humans”? Is their control “human” control? The “X human Y” debate again fails to grapple with the contestable “human” at its center. Robillard's certainty about the binary nature of agency looks doubtful when human and machine blend. The assumption of LAWS as substitutes for humans falls when they are inextricably entangled elements of a single system. Picking up the discussion of relationality from the section “Grounding MHC,” posthumanism points to the possibility that relationality will encompass technology or, at least, technologically enhanced humans. There has been, up to now, a Western-centrism in key applications of posthumanism, such as xenofeminism (Jones 2019, 131). The agenda here, too, is therefore one that can benefit from non-Western engagement, decentering Western historical experience and intellectual assumptions to enrich debate about “meaningful humans.”
These epistemological debates are, though, absent from the engineering and technology policy literature considered here. Instead, epistemological questions are ducked through appeals to resolution via public opinion research and (invariably Western) “experts.” Why these are definitive sources of valid knowledge about LAWS is never, in the sources surveyed, properly explained. Verdiesen, Santoni de Sio, and Dignum (2019) exemplify how empirical evidence of public opinion is used to ostensibly validate ethical ideas about LAWS. “We have identified several values that people associate with Autonomous Weapons Systems ... derived from both validated ethical theories [all Western in origin] and from experts ... in the debate on Autonomous Weapons Systems or [who] work in the military domain” (Verdiesen, Santoni de Sio, and Dignum 2019, 42). While they recognize that exploring non-Western perspectives on LAWS is an issue for further research (2019, 42–43), we can, it seems, get a good-enough sense from surveying Western public opinion and military personnel, and thus empirically determine “correct” ethical positions. Horowitz (2016b) does much the same to challenge claims that public opinion polling shows opposition to LAWS (Human Rights Watch 2016, 16–17), but without the caveats about needing to ask non-Westerners (or, even, non-Americans) in the future. Seemingly, only US public opinion matters, as if the US public's perspective can stand as universal. Morgan et al. (2020, 100–117) also focus public opinion research on the United States, showing that “in principle” objections to LAWS decline when contextualized in relation to the United States securing victory against LAWS-using enemies. In the absence of public opinion data, technologists appeal to nonengineering expertise on the potential military utility of LAWS, found in current or retired Western military personnel, most commonly from the United States, or in think tanks closely connected to the military (e.g., Kirkpatrick 2016, 28–29). The epistemological closure achieved by privileging Western public and “expert” opinion, and specific accounts of Western technological development and progress typically sanitized of their colonial and racist episodes, goes unnoticed.
Conclusion: Toward “Meaningful Human” Control
The MHC debate is hampered by an excessively narrow epistemic consensus. This consensus does more than ignore the epistemologies of the people most likely to experience LAWS; it actively negates them. The MHC concept constructs a “meaningful human” who is a near-caricature archetype of the Western, male, white, rational, utility-maximizing, rights-holding subject. The work on MHC stemming from the current epistemological location of LAWS, and the debates it supports, remain significant. My argument has not been that they should be replaced, or that the control they seek to establish should be handed to others. It has, however, shown the deeply limiting effects that the epistemological assumptions, and consequent closures, of the current debate establish; explained key causes and consequences of this closure; and pointed toward ways to begin to overcome these limitations.
I have suggested a fourfold research agenda. First, a much richer and more honest account of war's functions in the West's engagements with the rest of the world, and of academic disciplinary development and its continuing legacies as they manifest in locating LAWS. Second, empirical research with the communities most likely to be affected by LAWS deployments, to understand those peoples’ perspectives on these technologies and to expand our evidential bases to include a far greater range of opinion in assessing MHC. This research should draw on methods and techniques identified in the decolonial literature that emphasize leadership by marginalized groups through praxis-based methods and a “rearguard intellectual” positionality. Third, utilizing non-Western philosophical traditions to challenge the basic ontological claims that establish current ideas of subjectivity, agency, and accountability in restrictive terms. This taps into some of the oldest, richest, and most sophisticated ways of thinking about what it is to be a “meaningful human” and about humans’ relations to one another and to the world as a whole. Given the scope and potential seriousness of the challenge LAWS present, and the political imperatives many people and governments see in establishing effective regulation of LAWS, or even their outright prohibition, ignoring these resources seems bizarre. Fourth, destabilizing myths of Western technological and developmental superiority. I cannot fully address any of these items, let alone all of them, here; some require significant empirical fieldwork, for example. I have outlined how the third advances discussion of MHC. The more perspectives on LAWS we have, the more human will be the meaningful control we need. The present debate works, if unwittingly, against exactly this outcome. Heyns (2017, 68) writes, “[W]e are now at the point where we need to decide whether we want to retain meaningful control over ... war itself, now and in the centuries to come.” The “we” Heyns appeals to is humanity, yet the “meaningful human” at the heart of MHC instantiates an impoverished conception of the “human.” The boundaries of that conception need breaking down and a plethora of “meaningful humans” welcomed to the debate. There is much to learn, if we choose to listen.
Acknowledgments
I am grateful to Ingvild Bode, Christian Braun, Chris Finlay, Emily Jones, Robert Schuett, and two anonymous reviewers for feedback that has greatly improved this article. Shortcomings are exclusively my responsibility.
Footnotes
The MHC formulation is contested. Morgan et al. (2020, 43) usefully summarize the “X human Y” debate and its alternative formulations, and McDougall (2019, 62–63) reproduces the summary tabulation from the Group of Governmental Experts discussions at the UN Convention on Certain Conventional Weapons (CCW). However, MHC is the most widely used variant, and all variants occupy consistent epistemic space, meaning that distinctions between the various formulations do not affect my core argument.
I am grateful to an anonymous reviewer, a self-described long-standing CCW participant, for setting out the argument critiqued here and urging clarification of how I see the conceptual relationship between “meaningful human” and “meaningful control.”
I am grateful to a second anonymous reviewer for asking me to clarify this distinction, and for suggesting my argument may be misinterpreted if read as claiming current LAWS and MHC debates are “neo-colonial.”
This definition originates in the US Department of Defense (2012, 13). There is substantial debate over the nature and meaning of autonomy in weapons systems. For a summary, see Williams (2015a, 180–82) and Bode and Watts (2021, 10–15).