Exploring the Ethics of Weaponized Unmanned Aerial Vehicles

The essay below appears in the current (Fall 2013) issue of the Naval Helicopter Association’s quarterly magazine, Rotor Review. I was asked to prepare and submit it by the magazine’s editor, one of my all-star former students at USD – Allison Fletcher, herself a helo pilot. Though the topic won’t interest many of you, it may interest some of you. If you’re interested in helicopters at all, you can view the entire magazine HERE; Allison has assembled a number of outstanding articles in this issue – apart (of course!) from my essay. Here is my contribution:

[Image: Predator firing a missile]

———————————————

Exploring the Ethics of Weaponized Unmanned Aerial Vehicles

We have recently seen discussions in the press regarding the morality of firing Hellfire missiles and other lethal munitions from Unmanned Aerial Vehicles (UAVs). Those “pulling the trigger” to launch the missiles are at no risk, and those targeted have no defenses against them, especially when we can place UAVs undetected and overhead in much of the world. Additionally, we read about UAV operators suffering stress and guilt after spending days, weeks, or months launching lethal strikes from their safe compounds in the United States, dealing death and destruction to their targets, and going home at 5 PM to have dinner with their families. Many UAV operators feel that there is something unethical about launching lethal strikes from a position of relative safety to kill, maim, and destroy on remote battlefields. The enemy not only has no idea what or who hit him but has no means to respond in kind. It doesn’t seem fair.

And it isn’t fair, within the context of justice as we normally experience it in civilized life.  In our normal lives, ethical people don’t blindside their opponents.  Even in some military contexts, enemies confront each other in combat, and the best trained, best prepared win.  But of course, warfare isn’t that simple.  It isn’t even that simple in our normal lives.

But let’s stick with warfare. The idea that battle should be fair – mano a mano – became obsolete with the competition for technological advantage through standoff weapons. In earlier days, it was considered unmanly to resort to archers in battle, when chivalry called for gentlemen to fight each other on relatively equal terms. Eventually the chivalry of knights made room for the archer, then the longbow and crossbow gave way to the musket, and the musket to the automatic rifle. The reach of artillery extended from yards to miles; a century ago, aircraft were introduced as weapons of war; and World War II saw the introduction of the long-range missile. In each case, the archer, rifleman, artilleryman, bomber pilot, and missile launcher deal death to their targets from positions where the specific individuals targeted usually cannot respond or retaliate.

The reality is that war is not individuals confronting each other in a test of strength and skill. Though it is individuals who die in war, war is between militaries, countries, or cultures, and technological advantage is increasingly a key measure of strength and proficiency. The weaponized UAV is simply another development in standoff weapons capabilities, and the UAV operator is like a sniper – s/he finds, stalks, and kills specific targets from a position where the targeted individuals cannot see their assassin. Unlike the bomber pilot, artilleryman, or missile launcher, UAV operators and snipers actually see the results of their handiwork. The radio-controlled IED, similar to the Claymore mine used by U.S. soldiers in Vietnam, has also been an effort by our enemies to seek a technological advantage with a standoff weapon.

Military ethics is driven by two fundamental principles – discrimination and proportionality. Discrimination demands that we intentionally target only combatants, and proportionality demands that unintended or collateral damage, especially to civilian infrastructure and non-combatants, be proportional to the military value of the target. The issues and nuances associated with applying these two principles in combat are complex and keep legions of military ethicists and attorneys busy. Simply stated, the ethical issues in applying lethal force from UAVs are no different than for applying lethal force from other standoff weapons – whether that be a cruise missile launched from a ship, submarine, or aircraft, or a JDAM dropped from an F/A-18.

Autonomous lethal UAVs pose a much more interesting ethical challenge. UAVs have been developed that can be programmed to deliver precision-guided lethal munitions as soon as their sensors determine that criteria have been met within a shoot/don’t shoot decision matrix. Such UAVs fall within the rubric of what are now referred to as Lethal Autonomous Robots (LARs). Many of us may be morally repelled by the idea of a computerized machine making the “decision” to kill, but the debate about the ethics of LARs is not as simple as it might seem. Whether we like it or not, pre-programmable smart bombs and cruise missiles, while improving the precision of air strikes, reducing collateral damage (remember carpet and fire-bombing?), and reducing risk to our own forces, have moved us in the direction of such weapons.

The most compelling argument against LARs is that when we take the human out of the loop, we lose the extra attention that comes with direct accountability – attention that minimizes mistakes leading to unnecessary, unintended, or disproportionate non-combatant deaths. With LARs, there is less opportunity for human intuition to sense that, though the criteria may all line up to justify the decision to shoot or kill, something just doesn’t look or feel right. In such situations, the experienced (and morally sensitive and accountable) human may hesitate and take that extra second or two or three to confirm targeting criteria and ensure that “the bread van we’re targeting is not actually an ambulance,” or that the Toyota Land Cruiser that looks just like the one intelligence said would be carrying terrorists at this place and time is not carrying a hapless group of refugees tragically unaware that they are moving through a designated kill zone. Take the human out of the loop, and the LAR will dispassionately do its duty as programmed, fire as directed, feel no remorse or guilt, and stand by for its next assignment.

We (should) feel that the decision to kill human beings and impose the tragic impact such killing has on families and communities demands not only legal but also moral accountability from another human being.

And yet….

Programming mistakes and computer glitches similarly cause other “smart” weapons systems to go astray and kill the wrong people.  We make no arguments that, because of these occasional shortcomings, we should dispense with computerized cruise missiles or other programmable “smart” standoff weapons. The alternative to such precision-guided weapons is increased collateral damage, or putting our own warriors closer to the violence and thereby at greater risk.

Probably the prime argument made by those who support LARs is that taking human emotion and frailty out of the equation will result in fewer mistakes and actually reduce unintended killing in war.  They argue that cases of tragic collateral damage will be reduced when the fear, fatigue, rage and anger, emotional frailty, and misjudgments of human beings are taken out of the shoot/don’t shoot decision process.  Indeed, those who argue in favor of LARs pose the interesting question from a consequentialist perspective: Would you oppose LARs if there were overwhelming evidence that human error and misjudgments were the cause of a great percentage of weaponeering mistakes, and that a purely rational, dispassionate decision process would indeed reduce these unintended killings dramatically?  It is an interesting and not entirely theoretical question.  It pits the principle of wanting human engagement and accountability in delivering lethal force against the desired consequence of reducing unintended death and injury due to human frailty.

Part of what makes this issue interesting is that the military profession seeks, through training and continued professionalization, to insulate the warrior and the warrior’s emotional life from the act of killing. We don’t want military people to want or like to kill, and we want them to return to their communities not only physically, but psychologically whole. Our ideal warriors kill out of a sense of duty – dispassionate duty – to their country and their profession, not because they hate the enemy or love to kill. (Many civilians, and I dare say many in the military, don’t get this, and our civilian and even military leaders will often fan the flames of hate to motivate their citizen-soldiers to kill and die for their country.) Our ideal warrior is compassionate with buddies, shipmates, family, and communities, but, when confronting the enemy in planning or in battle, is expected to become like a chess master – coldly rational, efficient, dispassionate, and professional – the Stoic warrior who sets his/her emotions aside to do what needs to be done, whatever that is, in accordance with Rules of Engagement that s/he may not understand or accept but that others have promulgated. If we succeed, our warriors are then expected to become human, emotional, and compassionate again, and to relax and enjoy the companionship of friends, family, and community.

Are we not asking too much of our warriors, that they follow the cold logic of ROE, kill, maim, and destroy out of duty, not passion, and then flip a switch and become human again? The impact of our recent wars on the young men and women in our military has forced this question upon us. We have seen too many otherwise good warriors commit atrocities or other violations of the Laws of Armed Conflict (there have been many, though not many have been made public), and we hear the often tragic stories of warriors, their families, and communities struggling with PTSD, building to an epidemic in active-duty and veteran suicides.

If our ideal warriors are fearless, dispassionate, and very efficient followers of orders, expected to risk their own death and injury in order to deliver death and injury to others, and the costs of trying to make our young men and women into this professional ideal are so high, WHY NOT get a computerized robot, a Lethal Autonomous Robot, to do the killing? LARs could significantly reduce death and injury to our own warriors – while increasing it for our enemies – and reduce the simmering tragedy of PTSD and other life- and family-destroying psychological injuries to our warriors during and after service. Isn’t war, as Patton reportedly once said, simply trying to make the other poor, dumb bastard die for his country?

While I wonder whether Lethal Autonomous Robots might be just one more step in the millennia-long effort to gain technological advantage in war, I struggle on principle with the idea of autonomous robots doing our killing. I have written several case studies in which military leaders, based on gut instinct and moral intuition, have injected themselves into the cold logic of scheduled fires and PGM targeting, cancelled planned strikes, and thereby averted tragedy. Their leadership, intuition, and commitment to minimize harm to noncombatants have saved countless innocent lives. But I do accept that there are compelling arguments in favor of LARs. Computers don’t feel fear, sadness, anger, envy, regret, hope, joy, love, or moral accountability – all things that make us human and that have made war not only such a tragedy, but also such an incredible laboratory for human excellence. And yet, it is our humanity that frequently gets in the way of rational and ethical decision making on the battlefield, and has caused so much tragedy and evil, as well as courage and selfless sacrifice, in war. The computer chip in the Lethal Autonomous Robot, once programmed, changes the nature of “risk” in warfare and demands that we reconsider our concepts of “courage” – both physical and moral – in war, and indeed our concept of what it is to be a “warrior.”

Stay tuned – this discussion is ongoing, and will continue as long as technological advantage is a primary means for the U.S. to achieve military and political victory over our adversaries, while reducing risk to our own warriors.

2 thoughts on “Exploring the Ethics of Weaponized Unmanned Aerial Vehicles”

  1. Skipper, thank you for the great insight and for leading us into this discussion.

    As you indicated above, this ethical struggle with technological advantage has been an issue for centuries. UAVs and LARs are only the latest evolution of technological advantage – in this case, killing with extraordinary advantage worldwide. I have long been fascinated with the increasing array of human augmentation technologies, whether it’s a helicopter, an iPhone, or an X-ray machine.

    Technology continues to extend our actions, intent, and influence across both space and time, so I align with the perspective that there is always a human in the loop…somewhere and at some time along the way a human made a decision to light a fuse, pull a trigger, write code, bury a mine, set a timer, deploy a UAV, or design, build, and field a LAR. In a sense, technology kind of stretches human actions and intent across time and space. But does the elasticity of our actions make them less personal? The ethics less relevant?

    As you alluded to in the essay, intercontinental ballistic missiles are not unlike a UAV…simply a difference in scale. ICBMs, however, seem a good bit more impersonal. Today UAV optics and sensors afford us a more personal Slim-Whitman-ride-along to the target. Do ethical concerns emerge in direct proportion to how personal it all feels? What about the elasticity of our ethics? Will our ethics stretch to accommodate the increasing elasticity of human activity?

    Whether mano a mano or joystickundvideo, killing in war will remain an indelible human experience. And the ethics and the discussion about the ethics are critical as we adopt increasingly capable technologies while trying to keep the future on human terms.

    Thanks again
    Gus

  2. Pingback: I Robot, by Isaac Asimov | Bob's Books
