
Artificial Intelligence, Weapons Systems and Human Control

This is an excerpt from Remote Warfare: Interdisciplinary Perspectives. Get your free download from E-International Relations.

The use of force exercised by the militarily most advanced states in the last two decades has been dominated by ‘remote warfare’, which, at its simplest, is a ‘strategy of countering threats at a distance, without the deployment of large military forces’ (Oxford Research Group cited in Biegon and Watts 2019, 1). Although remote warfare comprises very different practices, academic research and the wider public pay much attention to drone warfare as a highly visible form of this ‘new’ interventionism. In this regard, research has produced important insights into the various effects of drone warfare in ethical, legal, political, but also social and economic contexts (Cavallaro, Sonnenberg and Knuckey 2012; Sauer and Schörnig 2012; Casey-Maslen 2012; Gregory 2015; Hall and Coyne 2013; Schwarz 2016; Warren and Bode 2015; Gusterson 2016; Restrepo 2019; Walsh and Schulzke 2018). But current technological developments suggest an increasing, game-changing role of artificial intelligence (AI) in weapons systems, represented by the debate on emerging autonomous weapons systems (AWS). This development poses a new set of important questions for international relations, which pertain to the impact that increasingly autonomous features in weapons systems will have on human decision-making in warfare – leading to highly problematic ethical and legal consequences.

In contrast to remote-controlled platforms such as drones, this development refers to weapons systems that are AI-driven in their critical functions. That is, weapons that process data from on-board sensors and algorithms to ‘select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralise, damage or destroy) targets without human intervention’ (ICRC 2016). AI-driven features in weapons systems can take many different forms but clearly depart from what would be conventionally understood as ‘killer robots’ (Sparrow 2007). We argue that including AI in weapons systems matters not because we seek to highlight the looming emergence of fully autonomous machines making life and death decisions without any human intervention, but because human control is increasingly becoming compromised in human-machine interactions.
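
To make this definition concrete, here is a minimal Python sketch of the two ‘critical functions’ the ICRC names – select and attack. Every name, threshold and data structure in it is a hypothetical illustration, not a description of any real system; the point is simply where human authorisation sits in the loop, and that full autonomy consists of removing it.

```python
# Hypothetical sketch of the ICRC's two 'critical functions': select and attack.
# Nothing here describes a real system; all names and thresholds are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Track:
    track_id: int
    signature: str     # sensor signature used for identification
    confidence: float  # machine-generated confidence that this is a valid target

def select_targets(tracks: List[Track], threshold: float) -> List[Track]:
    # 'Select': search for, detect, identify, track, select -- reduced here to
    # thresholding a machine-generated score, with no human judgement involved.
    return [t for t in tracks if t.confidence >= threshold]

def operator_approves(track: Track) -> bool:
    # The human authorisation step; in a fully autonomous system this call
    # disappears, which is exactly the development discussed in the text.
    return input(f"Engage track {track.track_id}? [y/N] ").strip().lower() == "y"

def engagement_loop(tracks: List[Track], human_in_loop: bool) -> None:
    for target in select_targets(tracks, threshold=0.9):
        if human_in_loop and not operator_approves(target):
            continue
        # 'Attack': use force against the selected target.
        print(f"Engaging track {target.track_id}")

engagement_loop([Track(1, "radar", 0.95), Track(2, "radar", 0.42)], human_in_loop=True)
```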

AI-driven autonomy has already become a new reality of warfare. We find it, for example, in aerial combat vehicles such as the British Taranis, in stationary sentries such as the South Korean SGR-A1, in aerial loitering munitions such as the Israeli Harop/Harpy, and in ground vehicles such as the Russian Uran-9 (see Boulanin and Verbruggen 2017). These diverse systems are captured by the (somewhat problematic) catch-all category of autonomous weapons, a term we use as a springboard to draw attention to existing forms of human-machine relations and the role of AI in weapons systems short of full autonomy.

The increasing sophistication of weapons systems arguably exacerbates tendencies of technologically mediated forms of remote warfare that have been around for some decades. The decisive question is how new technological innovations in warfare affect human-machine interactions and increasingly compromise human control. The aim of our contribution is to investigate the significance of AWS in the context of remote warfare by discussing, first, their specific characteristics, particularly with regard to the important aspect of distance and, second, their implications for ‘meaningful human control’ (MHC), a concept that has gained increasing significance in the political debate on AWS. We will consider MHC in more detail further below.

We argue that AWS increase fundamental asymmetries in warfare and that they represent an extreme version of remote warfare in realising the potential absence of immediate human decision-making on lethal force. Furthermore, we examine the issue of MHC that has emerged as a core concern for states and other actors seeking to regulate AI-driven weapons systems. Here, we also contextualise the current debate with state practices of remote warfare relating to systems that have already set precedents in terms of ceding meaningful human control. We will argue that these incremental practices are likely to change use of force norms, which we loosely define as standards of appropriate action (see Bode and Huelss 2018). Our argument is therefore less about highlighting the novelty of autonomy, and more about how practices of warfare that compromise human control become accepted.

Autonomous Weapons Systems and Asymmetries in Warfare

AWS increase fundamental asymmetries in warfare by creating physical, emotional and cognitive distancing. First, AWS increase asymmetry by creating physical distance, completely shielding their commanders/operators from physical threats or from being on the receiving end of any defensive attempts. We do not argue that the physical distancing of combatants started with AI-driven weapons systems. This desire has historically been a common feature of warfare – and every military force has a duty to protect its forces from harm as much as possible, which some also present as an argument for remotely-controlled weapons (see Strawser 2010). Creating an asymmetrical situation where the enemy combatant is at risk of injury while your own forces remain safe is, after all, a basic desire and objective of warfare.

But the technological asymmetry associated with AI-driven weapon systems completely disturbs the ‘moral symmetry of mortal hazard’ (Fleischman 2015, 300) in combat and therefore the internal morality of warfare. In this kind of ‘riskless warfare, […] the pursuit of asymmetry undermines reciprocity’ (Kahn 2002, 2). Following Kahn (2002, 4), the internal morality of warfare largely rests on ‘self-defence within conditions of reciprocal imposition of risk.’ Combatants are allowed to injure and kill each other ‘just as long as they stand in a relationship of mutual risk’ (Kahn 2002, 3). If the morality of the battlefield relies on these logics of self-defence, it is deeply challenged by various forms of technologically mediated asymmetrical warfare. This has been voiced as a significant concern in particular since NATO’s Kosovo campaign (Der Derian 2009) and has since grown more pronounced through the use of drones and, in particular, AI-driven weapons systems that decrease the influence of humans on the immediate decision-making of using force.

Second, AWS increase asymmetry by creating an emotional distance from the brutal reality of wars for those who employ them. While the intense surveillance of targets and close-range experience of target engagement through live images can create intimacy between operator and target, this experience is different from living through combat. At the same time, the practice of killing from a distance triggers a sense of deep injustice and helplessness among those populations affected by the increasingly autonomous use of force who are ‘living under drones’ (Cavallaro, Sonnenberg and Knuckey 2012). Scholars have convincingly argued that ‘the asymmetrical capacities of Western – and particularly US forces – themselves create the conditions for increasing use of terrorism’ (Kahn 2002, 6), thus ‘protracting the conflict rather than bringing it to a swifter and less bloody end’ (Sauer and Schörnig 2012, 373; see also Kilcullen and McDonald Exum 2009; Oudes and Zwijnenburg 2011).

This distancing from the brutal reality of warfare makes AWS appealing to casualty-averse, technologically advanced states such as the USA, but potentially alters the character of warfare. It also connects well with other ‘risk transfer paths’ (Sauer and Schörnig 2012, 369) associated with practices of remote warfare that may be chosen to avert casualties, such as the use of private military security companies or operating through airpower and local allies on the ground (Biegon and Watts 2017). Casualty aversion has been largely associated with a democratic, largely Western, ‘post-heroic’ way of warfare that depends on public opinion and the acceptance of using force (Scheipers and Greiner 2014; Kaempf 2018). But reports about the Russian aerial support campaign in Syria, for example, speak of similar tendencies of not seeking to put one’s own soldiers at risk (The Associated Press 2018). Mandel (2004) has analysed this casualty-aversion trend in security strategy as the ‘quest for bloodless war’ but, at the same time, noted that warfare still and always includes the loss of lives – and that the availability of new and ever more advanced technologies should not cloud thinking about this stark reality.

Some states are acutely aware of this reality, as the ongoing debate on the issue of AWS at the UN Convention on Certain Conventional Weapons (UN-CCW) demonstrates. It is worth noting that most countries in favour of banning autonomous weapons are developing countries, which are typically less likely to attend international disarmament talks (Bode 2019). The fact that they are willing to speak out strongly against AWS makes their doing so all the more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of those in favour of AWS) also reminds us that they are most at risk from this technology.

Third, AWS increase cognitive distance by compromising the human ability to ‘doubt algorithms’ (see Amoore 2019) in terms of the data outputs at the heart of the targeting process. As humans using AI-driven systems encounter a lack of alternative information allowing them to substantively contest data outputs, it is increasingly difficult for human operators to doubt what ‘black box’ machines tell them. Their superior data-processing capacity is exactly why target identification via pattern recognition in vast amounts of data is ‘delegated’ to AI-driven machines, using, for example, machine-learning algorithms at different stages of the targeting process and in surveillance more broadly.

But the more target acquisition and potential attacks are based on AI-driven systems as technology advances, the less we seem to know about how these decisions are made. To identify potential targets, countries such as the USA (e.g. the SKYNET programme) already rely on metadata generated by machine-learning solutions focusing on pattern-of-life recognition (The Intercept 2015; see also Aradau and Blanke 2018). However, the lacking ability of humans to retrace how algorithms make decisions poses a serious ethical, legal and political problem. The inexplicability of algorithms makes it harder for any human operator, even when provided with a ‘veto’ or the ability to intervene ‘on the loop’ of the weapons system, to question metadata as the basis of targeting and engagement decisions. Notwithstanding these issues, as former Assistant Secretary for Homeland Security Policy Stewart Baker put it, ‘metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content’, while General Michael Hayden, former director of the NSA and the CIA, emphasises that ‘[w]e kill people based on metadata’ (both quoted in Cole 2014).
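
A short, purely illustrative sketch of this kind of pipeline follows – synthetic data and an off-the-shelf classifier, not the SKYNET programme or any real tooling. What it shows is the shape of the problem: pattern-of-life metadata goes in, a single unexplained score comes out, and the operator has little with which to contest it.

```python
# Illustrative only: a generic classifier over synthetic 'pattern-of-life'
# metadata, to show why such outputs are hard for an operator to doubt.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical metadata features per person: calls/day, distinct contacts,
# SIM swaps, trips taken. All values and labels here are synthetic.
X = rng.random((1000, 4))
y = (X[:, 2] > 0.8).astype(int)  # invented label; real 'ground truth' is far murkier

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

suspect = rng.random((1, 4))
score = model.predict_proba(suspect)[0, 1]
# The operator sees only this number, not the hundreds of decision trees
# behind it -- which is what makes 'doubting the algorithm' so difficult.
print(f"'Suspicion' score: {score:.2f}")
```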

The desire to find (quick) technological fixes or solutions for the ‘problem of warfare’ has long been at the heart of debates on AWS. We have increasingly seen this at the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE) meetings at the UN-CCW in Geneva, where countries already developing such weapons highlight their supposed benefits. Those in favour of AWS (including the USA, Australia and South Korea) have become more vocal than ever. The USA has claimed that such weapons could actually make it easier to comply with international humanitarian law by making military action more precise (United States 2018). But this is a purely speculative argument at present, especially in complex, fast-changing contexts such as urban warfare. Key principles of international humanitarian law require deliberate human judgements that machines are incapable of (Asaro 2018; Sharkey 2008). For example, the legal definition of who is a civilian and who is a combatant is not written in a way that could easily be programmed into AI, and machines lack the situational awareness and ability to infer the things necessary to make this decision (Sharkey 2010).

Yet, some states seem to pretend that these intricate and complex issues are easily solvable by programming AI-driven weapons systems in just the right way. This feeds the narrative of technological ‘solutionism’ (Morozov 2014), which does not appear to accept that some problems have no technological solutions because they are inherently political in nature. So, quite apart from whether it is technologically possible, do we want, normatively, to take out deliberate human decision-making in this way?

This brings us to our second set of arguments, concerned with the fundamental questions that introducing AWS into practices of remote warfare poses to human-machine interaction.

The Problem of Meaningful Human Control

AI-driven systems signal the potential absence of immediate human decision-making on lethal force and the increasing loss of so-called meaningful human control (MHC). The concept of MHC has become a central focus of the ongoing transnational debate at the UN-CCW. Originally coined by the non-governmental organisation (NGO) Article 36 (Article 36 2013, 36; see Roff and Moyes 2016), there are different understandings of what meaningful human control implies (Ekelhof 2019). It promises to resolve the difficulties encountered when attempting to define precisely what autonomy in weapons systems is, but meets somewhat similar problems in the definition of its own key concepts. Roff and Moyes (2016, 2–3) suggest several elements that can enhance human control over technology: technology is supposed to be predictable, reliable and transparent; users should have accurate information; there should be timely human action and a capacity for timely intervention, as well as human accountability. These elements underline the complex demands that would be important for sustaining MHC, but how they are linked, and what degree of predictability or reliability, for example, is necessary to make human control meaningful, remains unclear; the elements are underdefined.
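
Writing these elements down as a checklist makes the under-definition visible. In the hypothetical Python sketch below, the pass/fail encoding and the aggregation rule (that all elements must hold) are our assumptions for illustration; the framework itself specifies neither thresholds nor how the elements combine.

```python
# Roff and Moyes' (2016) elements of human control, rendered as a checklist.
# The boolean encoding and the 'all elements must hold' rule are assumptions
# made for illustration; the framework leaves both unspecified.
from dataclasses import dataclass

@dataclass
class ControlAssessment:
    predictable: bool            # is the technology's behaviour predictable?
    reliable: bool               # does it perform as specified?
    transparent: bool            # can users understand how it works?
    accurate_information: bool   # do users have accurate information?
    timely_intervention: bool    # can a human act or abort in time?
    accountable: bool            # is a human accountable for outcomes?

    def is_meaningful(self) -> bool:
        return all(vars(self).values())

# A system that is reliable but opaque, with no window for intervention:
print(ControlAssessment(True, True, False, True, False, True).is_meaningful())  # False
```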

In this regard, many states consider the application of violent force without any human control to be unacceptable and morally reprehensible. But there is less agreement about the various complex forms of human-machine interaction and at what point(s) human control ceases to be meaningful. Should humans always be involved in authorising actions, or is monitoring such actions with the option to veto and abort sufficient? Is meaningful human control realised by engineering weapons systems and AI in certain ways? Or, more fundamentally, is human control meaningful when it consists of merely executing decisions based on indications from a computer that are not accessible to human reasoning due to the ‘black-boxed’ nature of algorithmic processing? The noteworthy point about MHC as a norm in the context of AWS is also that it has long been compromised in various battlefield contexts. Complex human-machine interactions are not a recent phenomenon – even the extent to which human control in a fighter jet is meaningful is questionable (Ekelhof 2019).

However, attempts to establish MHC as an emerging norm meant to govern AWS have proven difficult. Indeed, over the past four years of debate in the UN-CCW, some states, supported by civil society organisations, have advocated introducing new legal norms to ban fully autonomous weapons systems, while other states leave the field open in order to increase their room for manoeuvre. As discussions drag on with little substantial progress, the operational trend towards developing AI-enabled weapons systems continues and is on track to becoming established as ‘the new normal’ in warfare (P. W. Singer 2010). For example, in its Unmanned Systems Integrated Roadmap 2013–2038, the US Department of Defense sets out a concrete plan to develop and deploy weapons with ever-increasing autonomous features in the air, on land, and at sea over the next 20 years (US Department of Defense 2013).

While the US strategy on autonomy is the most advanced, a majority of the top ten arms exporters, including China and Russia, are developing or planning to develop some form of AI-driven weapon systems. Media reports have repeatedly pointed to the successful inclusion of machine-learning techniques in weapons systems developed by Russian arms maker Kalashnikov, coming alongside President Putin’s much-publicised quote that ‘whoever leads in AI will rule the world’ (Busby 2018; Vincent 2017). China has reportedly made advances in developing autonomous ground vehicles (Lin and Singer 2014) and, in 2017, published an ambitiously worded government-led plan on AI with decisively increased financial expenditure (Metz 2018; Kania 2018).

The intention to regulate the practice of using force by setting norms has stalled at the UN-CCW, but we highlight the importance of a reverse and likely scenario: practices shaping norms. These dynamics point to a potentially influential trajectory AWS may take towards altering what is acceptable when it comes to using force, thereby also transforming the international norms governing the use of violent force.

We have already seen how the availability of drones has led to changes in how states consider using force. Here, access to drone technology appears to have made targeted killing seem an acceptable use of force for some states, thereby deviating significantly from previous understandings (Haas and Fischer 2017; Bode 2017; Warren and Bode 2014). In their usage of drone technology, states have therefore explicitly or implicitly pushed novel interpretations of key standards of international law governing the use of force, such as attribution and imminence. These practices cannot be captured with the conventional conceptual language of customary international law if they are not openly discussed or simply do not meet its tight requirements, such as becoming ‘uniform and wide-spread’ in state practice or manifesting in a consistently stated belief in the applicability of a particular rule. But these practices are significant, as they have arguably led to the emergence of a series of grey areas in international law in terms of shared understandings of the international law governing the use of force (Bhuta et al. 2016). The resulting lack of clarity leads to a more permissive environment for using force: justifications for its use can more ‘easily’ be found within these increasingly elastic areas of international law.

We therefore argue that we can study how international norms on using AI-driven weapons systems emerge and change from the bottom up, via deliberative and non-deliberative practices. Deliberative practices, as ways of doing things, can be the outcome of reflection, consideration or negotiation. Non-deliberative practices, in contrast, refer to operational and often non-verbalised practices undertaken in the process of developing, testing and deploying autonomous technologies.

We are currently witnessing, as described above, an effort to potentially make new norms on AI-driven weapons technologies at the UN-CCW via deliberative practices. But at the same time, non-deliberative and non-verbalised practices are constantly undertaken as well, and simultaneously shape new understandings of appropriateness. These non-deliberative practices may stand in contrast to the deliberative practices centred on attempting to formulate a (consensus) norm of meaningful human control.

This has repercussions not only for systems currently in various stages of development and testing, but also for systems with limited AI-driven capabilities that have been in use for the past two to three decades, such as cruise missiles and air defence systems. Most air defence systems already have significant autonomy in the targeting process, and military aircraft feature highly automated functions (Boulanin and Verbruggen 2017). Arguably, non-deliberative practices surrounding these systems have already created an understanding of what meaningful human control is. There is, then, already a norm, in the sense of an emerging understanding of appropriateness, emanating from these practices that has not been verbally enacted or reflected on. This makes it harder to deliberatively create a new meaningful human control norm.

Friendly fire incidents involving the US Patriot system can serve as an example here. In 2003, a Patriot battery stationed in Iraq downed a British Royal Air Force Tornado that had been mistakenly identified as an Iraqi anti-radiation missile. Notably, ‘[t]he Patriot system is nearly autonomous, with only the final launch decision requiring human interaction’ (Missile Defense Project 2018). The 2003 incident demonstrates the extent to which even a comparatively simple weapons system – comprising elements such as radar and various automated functions meant to assist human operators – deeply compromises an understanding of MHC in which a human operator has all the information required to make an independent, informed decision that may contradict technologically generated data.

While humans were clearly ‘in the loop’ of the Patriot system, they lacked the information required to competently doubt the system’s information and were therefore misled: ‘[a]ccording to a summary of a report issued by a Pentagon advisory panel, Patriot missile systems used during battle in Iraq were given too much autonomy, which likely played a role in the accidental downings of friendly aircraft’ (Singer 2005). This example should be seen in the context of other well-known incidents, such as the 1988 downing of Iran Air flight 655 due to a fatal failure of the human-machine interaction of the Aegis system on board the USS Vincennes, or the critical intervention of Stanislav Petrov, who rightly doubted information provided by the Soviet missile defence system reporting a nuclear weapons attack (Aksenov 2013). A 2016 incident in Nagorno-Karabakh provides another example of a system with an autonomous anti-radar mode used in combat: Azerbaijan reportedly used an Israeli-made Harop ‘suicide drone’ to attack a bus of allegedly Armenian military volunteers, killing seven (Gibbons-Neff 2016). The Harop is a loitering munition able to launch autonomous attacks.

Overall, these examples point to the importance of targeting for considering autonomy in weapons systems. There are currently at least 154 weapons systems in use in which the targeting process, comprising ‘identification, tracking, prioritisation and selection of targets to, in some cases, target engagement’, is supported by autonomous features (Boulanin and Verbruggen 2017, 23). The problem we emphasise here does not relate to the completion of the targeting cycle without any human intervention, but emerges already in the support functionality of autonomous features. Historical and more recent examples show that, here, human control is often already far from what we would consider meaningful. It is noted, for example, that ‘[t]he S-400 Triumf, a Russian-made air defence system, can reportedly track more than 300 targets and engage with more than 36 targets simultaneously’ (Boulanin and Verbruggen 2017, 37). Is it possible for a human operator to meaningfully supervise the operation of such systems?
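
A back-of-envelope calculation suggests an answer. Using the reported S-400 figures and an assumed decision window (the 60-second window is our assumption, purely for illustration), the attention available per object shrinks to fractions of a second:

```python
# Rough arithmetic on the supervision question, using the reported S-400
# figures. The 60-second decision window is an assumed, illustrative value.
tracked_targets = 300
simultaneous_engagements = 36
decision_window_s = 60

print(f"{decision_window_s / tracked_targets:.1f}s of attention per tracked object")       # 0.2s
print(f"{decision_window_s / simultaneous_engagements:.1f}s per simultaneous engagement")  # 1.7s
```

Fractions of a second per object leave little room for the deliberate human judgement that would make such supervision meaningful.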

Yet, this apparent lack, or compromised form, of human control is seemingly considered acceptable: the use of the Patriot system has not been questioned in relation to lethal incidents, nor is the S-400 contested for featuring an ‘unacceptable’ form of compromised human control. In this sense, the widespread usage of such air defence systems over decades has already led to new understandings of ‘acceptable’ MHC and human-machine interaction, triggering the emergence of new norms.

However, questions about the nature and quality of human control raised by these existing systems are not part of the ongoing discussion on AWS among states at the UN-CCW. In fact, states using automated weapons continue to actively exclude them from the debate by referring to them as ‘semi-autonomous’ or so-called ‘legacy systems.’ This omission prevents the international community from taking a closer look at whether practices of using these systems are fundamentally acceptable.

Conclusion

To conclude, we want to come back to the key question inspiring our contribution: to what extent will AI-driven weapons systems shape and transform the international norms governing the use of (violent) force?

In addressing this question, we should also keep in mind who has agency in this process. Governments can (and should) decide how they want to guide this process, rather than presenting a particular trajectory as inevitable or framing technological progress of a certain kind as inevitable. This requires an explicit conversation about the values, ethics, principles and choices that should limit and guide the development, operation and the prohibition of certain types of AI-driven security technologies in light of standards for acceptable human-machine interaction.

Technologies have always shaped and altered warfare, and therefore how force is used and perceived (Ben-Yehuda 2013; Farrell 2005). Yet, the role that technology plays should not be conceived in deterministic terms. Rather, technology is ambivalent, making how it is used in international relations and in warfare a political question. We want to highlight here the ‘Collingridge dilemma of control’ (see Genus and Stirling 2018), which speaks of a typical trade-off between knowing the impact of a given technology and the ease of influencing its social, political, and innovation trajectories. Collingridge (1980, 19) stated the following:

Attempting to control a technology is difficult […] because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow.

This aptly describes the situation we find ourselves in regarding AI-driven weapon technologies. We are still at an initial, developmental stage of these technologies. Not many systems with significant AI capacities are in operation. This makes it potentially harder to assess what the precise consequences of their use in remote warfare will be. The multi-billion investments made in various military applications of AI by, for example, the USA do suggest the increasing significance and crucial future role of AI. In this context, human control is decreasing, and the next generation of drones at the core of remote warfare as the practice of distance combat will incorporate more autonomous features. If technological developments continue at this pace and the international community fails to ban or even regulate autonomy in weapons systems, AWS are likely to play a significant role in the remote warfare of the near future.

At the same time, we are still very much in the stage of technological development where steering is possible, cheaper, simpler, and less time-consuming – which is precisely why it is so important to have these wider, critical conversations about the consequences of AI for warfare now.

References

Aksenov, Pavel. 2013. ‘Stanislav Petrov: The Man Who May Have Saved the World.’ BBC Russian. September.

Amoore, Louise. 2019. ‘Doubtful Algorithms: Of Machine Learning Truths and Partial Accounts.’ Theory, Culture and Society, 36(6): 147–169.

Aradau, Claudia, and Tobias Blanke. 2018. ‘Governing Others: Anomaly and the Algorithmic Subject of Security.’ European Journal of International Security, 3(1): 1–21. https://doi.org/10.1017/eis.2017.14

Article 36. 2013. ‘Killer Robots: UK Government Policy on Fully Autonomous Weapons.’ http://www.article36.org/weapons-review/killer-robots-uk-government-policy-on-fully-autonomous-weapons-2/

Asaro, Peter. 2018. ‘Why the World Needs to Regulate Autonomous Weapons, and Soon.’ Bulletin of the Atomic Scientists (blog). 27 April. https://thebulletin.org/landing_article/why-the-world-needs-to-regulate-autonomous-weapons-and-soon/

Ben-Yehuda, Nachman. 2013. Atrocity, Deviance, and Submarine Warfare. Ann Arbor, MI: University of Michigan Press. https://doi.org/10.3998/mpub.5131732

Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge: Cambridge University Press.

Biegon, Rubrick, and Tom Watts. 2017. ‘Defining Remote Warfare: Security Cooperation.’ Oxford Research Group.

———. 2019. ‘Conceptualising Remote Warfare: Security Cooperation.’ Oxford Research Group.

Bode, Ingvild. 2017. ‘”Manifestly Failing” and “Unable or Unwilling” as Intervention Formulas: A Critical Analysis.’ In Rethinking Humanitarian Intervention in the 21st Century, edited by Aiden Warren and Damian Grenfell, 164–91. Edinburgh: Edinburgh University Press.

———. 2019. ‘Norm-Making and the Global South: Attempts to Regulate Lethal Autonomous Weapons Systems.’ Global Policy, 10(3): 359–364.

Bode, Ingvild, and Hendrik Huelss. 2018. ‘Autonomous Weapons Systems and Changing Norms in International Relations.’ Review of International Studies, 44(3): 393–413.

Boulanin, Vincent, and Maaike Verbruggen. 2017. ‘Mapping the Development of Autonomy in Weapons Systems.’ Stockholm: Stockholm International Peace Research Institute. https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf

Busby, Mattha. 2018. ‘Killer Robots: Pressure Builds for Ban as Governments Meet.’ The Guardian, 9 April, sec. Technology. https://www.theguardian.com/technology/2018/apr/09/killer-robots-pressure-builds-for-ban-as-governments-meet

Casey-Maslen, Stuart. 2012. ‘Pandora’s Box? Drone Strikes under Jus ad Bellum, Jus in Bello, and International Human Rights Law.’ International Review of the Red Cross, 94(886): 597–625.

Cavallaro, James, Stephan Sonnenberg, and Sarah Knuckey. 2012. ‘Living Under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan.’ International Human Rights and Conflict Resolution Clinic, Stanford Law School/NYU School of Law, Global Justice Clinic. https://law.stanford.edu/publications/living-under-drones-death-injury-and-trauma-to-civilians-from-us-drone-practices-in-pakistan/

Cole, David. 2014. ‘We Kill People Based on Metadata.’ The New York Review of Books (blog). 10 May. https://www.nybooks.com/daily/2014/05/10/we-kill-people-based-metadata/

Collingridge, David. 1980. The Social Control of Technology. London: Frances Pinter.

Der Derian, James. 2009. Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network. 2nd ed. New York: Routledge.

Ekelhof, Merel. 2019. ‘Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation.’ Global Policy, 10(3): 343–348. https://doi.org/10.1111/1758-5899.12665

Farrell, Theo. 2005. The Norms of War: Cultural Beliefs and Modern Conflict. Boulder: Lynne Rienner Publishers.

Fleischman, William M. 2015. ‘Just Say “No!” To Lethal Autonomous Robotic Weapons.’ Journal of Information, Communication and Ethics in Society, 13(3/4): 299–313.

Genus, Audley, and Andy Stirling. 2018. ‘Collingridge and the Dilemma of Control: Towards Responsible and Accountable Innovation.’ Research Policy, 47(1): 61–69.

Gibbons-Neff, Thomas. 2016. ‘Israeli-Made Kamikaze Drone Spotted in Nagorno-Karabakh Conflict.’ The Washington Post. 5 April. https://www.washingtonpost.com/news/checkpoint/wp/2016/04/05/israeli-made-kamikaze-drone-spotted-in-nagorno-karabakh-conflict/?utm_term=.6acc4522477c

Gregory, Thomas. 2015. ‘Drones, Targeted Killings, and the Limitations of International Law.’ International Political Sociology, 9(3): 197–212.

Gusterson, Hugh. 2016. Drone: Remote Control Warfare. Cambridge, MA/London: MIT Press.

Haas, Michael Carl, and Sophie-Charlotte Fischer. 2017. ‘The Evolution of Targeted Killing Practices: Autonomous Weapons, Future Conflict, and the International Order.’ Contemporary Security Policy, 38(2): 281–306.

Hall, Abigail R., and Christopher J. Coyne. 2013. ‘The Political Economy of Drones.’ Defence and Peace Economics, 25(5): 445–60.

ICRC. 2016. ‘Views of the International Committee of the Red Cross (ICRC) on Autonomous Weapon Systems.’ https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system

Kaempf, Sebastian. 2018. Saving Soldiers or Civilians? Casualty-Aversion versus Civilian Protection in Asymmetric Conflicts. Cambridge: Cambridge University Press.

Kahn, Paul W. 2002. ‘The Paradox of Riskless Warfare.’ Philosophy and Public Policy Quarterly, 22(3): 2–8.

Kania, Elsa. 2018. ‘China’s AI Agenda Advances.’ The Diplomat. 14 February. https://thediplomat.com/2018/02/chinas-ai-agenda-advances/

Kilcullen, David, and Andrew McDonald Exum. 2009. ‘Death From Above, Outrage Down Below.’ The New York Times. 17 May.

Lin, Jeffrey, and Peter W. Singer. 2014. ‘Chinese Autonomous Tanks: Driving Themselves to a Battlefield Near You?’ Popular Science. 7 October. https://www.popsci.com/blog-network/eastern-arsenal/chinese-autonomous-tanks-driving-themselves-battlefield-near-you

Mandel, Robert. 2004. Security, Strategy, and the Quest for Bloodless War. Boulder, CO: Lynne Rienner Publishers.

Metz, Cade. 2018. ‘As China Marches Forward on A.I., the White House Is Silent.’ The New York Times. 12 February. sec. Technology. https://www.nytimes.com/2018/02/12/technology/china-trump-artificial-intelligence.html

Missile Defense Project. 2018. ‘Patriot.’ Missile Threat. https://missilethreat.csis.org/system/patriot/

Morozov, Evgeny. 2014. To Save Everything, Click Here: Technology, Solutionism and the Urge to Fix Problems That Don’t Exist. London: Penguin Books.

Oudes, Cor, and Wim Zwijnenburg. 2011. ‘Does Unmanned Make Unacceptable? Exploring the Debate on Using Drones and Robots in Warfare.’ IKV Pax Christi.

Restrepo, Daniel. 2019. ‘Naked Soldiers, Naked Terrorists, and the Justifiability of Drone Warfare.’ Social Theory and Practice, 45(1): 103–26.

Roff, Heather M., and Richard Moyes. 2016. ‘Meaningful Human Control, Artificial Intelligence and Autonomous Weapons. Briefing Paper Prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems. UN Convention on Certain Conventional Weapons.’

Sauer, Frank, and Niklas Schörnig. 2012. ‘Killer Drones: The ‘Silver Bullet’ of Democratic Warfare?’ Security Dialogue, 43(4): 363–80.

Scheipers, Sibylle, and Bernd Greiner, eds. 2014. Heroism and the Changing Character of War: Toward Post-Heroic Warfare? Houndmills: Palgrave Macmillan.

Schwarz, Elke. 2016. ‘Prescription Drones: On the Techno-Biopolitical Regimes of Contemporary “Ethical Killing”.’ Security Dialogue, 47(1): 59–75.

Sharkey, Noel. 2008. ‘The Ethical Frontiers of Robotics.’ Science, 322(5909): 1800–1801.

Sharkey, Noel. 2010. ‘Saying ‘No!’ To Lethal Autonomous Targeting.’ Journal of Military Ethics, 9(4): 369–83.

Singer, Jeremy. 2005. ‘Report Cites Patriot Autonomy as a Factor in Friendly Fire Incidents.’ SpaceNews.com. 14 March. https://spacenews.com/report-cites-patriot-autonomy-factor-friendly-fire-incidents/

Singer, Peter W. 2010. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York: Penguin.

Sparrow, Robert. 2007. ‘Killer Robots.’ Journal of Applied Philosophy, 24(1): 62–77.

Strawser, Bradley Jay. 2010. ‘Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.’ Journal of Military Ethics, 9(4): 342–68.

The Associated Press. 2018. ‘Tens of Thousands of Russian Troops Have Fought in Syria since 2015.’ Haaretz. 22 August. https://www.haaretz.com/middle-east-news/syria/tens-of-thousands-of-russian-troops-have-fought-in-syria-since-2015-1.6409649

The Intercept. 2015. ‘SKYNET: Courier Detection via Machine Learning.’ https://theintercept.com/document/2015/05/08/skynet-courier/

United States. 2018. ‘Human-Machine Interaction in the Development, Deployment and Use of Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. UN Document CCW/GGE.2/2018/WP.4.’ https://www.unog.ch/80256EDD006B8954/(httpAssets)/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf

US Department of Defense. 2013. ‘Unmanned Systems Integrated Roadmap: FY2013-2038.’ https://info.publicintelligence.net/DoD-UnmannedRoadmap-2013.pdf

Vincent, James. 2017. ‘Putin Says the Nation That Leads in AI ‘Will Be the Ruler of the World.’’ The Verge. 4 September. https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world

Walsh, James Igoe, and Marcus Schulzke. 2018. Drones and Support for the Use of Force. Ann Arbor: University of Michigan Press.

Warren, Aiden, and Ingvild Bode. 2014. Governing the Use-of-Force in International Relations. The Post-9/11 US Challenge on International Law. Basingstoke: Palgrave Macmillan.

———. 2015. ‘Altering the Playing Field: The US Redefinition of the Use-of-Force.’ Contemporary Security Policy, 36 (2): 174–99. https://doi.org/10.1080/13523260.2015.1061768
