SHOULD AUTONOMOUS WEAPONS BE BANNED?
Well, this is an article, not a book, so let us take the secondary aspects of the question off the table.
Autonomous non-lethal weapons that inflict no permanent damage can, of course, be used: they can contain violence until a human officer takes charge of the situation.
We must keep in mind that malfunctions can occur, and that is why I emphasize that such weapons must not be capable of inflicting any permanent injury.
So, the autonomous weapons under debate here are the lethal ones.
In technical terms, we are talking about what specialists call "LAWS" – Lethal Autonomous Weapons Systems: systems in which no human makes the final decision for a machine to take a potentially lethal action.
Where would such a lethal weapon first be used, or most probably be used? In war, of course.
So let’s take a look at war.
War is not a sport. It is not about "fair play". When you are, for example, defending your country against a foreign invasion, there is too much at stake. What matters is not to fight a "fair fight", but to win! And strategy is about attacking when the enemy is weak and retreating when he is strong, about deceiving and trapping… As the saying goes, "it's not about dying for your country, it's about making the enemy die". And, in my view, about doing this until all enemies are exterminated or surrender unconditionally.
Even in these raw terms, as long as war is confined to soldiers fighting soldiers, I do not find it so hideous. And the more efficient we are in war, the less likely non-combatants are to be harmed.
I have read Erich Maria Remarque's descriptions of World War I (All Quiet on the Western Front). The trenches. The artillery barrages. Did an artilleryman know where his shells were landing? Not at all! So, in this sense, autonomous systems are just one step further along a path that began with the longbow.
Notice that even though Remarque describes war as useless and insane in All Quiet on the Western Front, his main character in Arch of Triumph kills a war criminal without any remorse. Remarque denounced the evils of war, but by no means advocated that evil be allowed to act freely!
From what I have said above, you can deduce that I do not merely think it can be morally acceptable to kill an opponent in combat. I think it can actually become a moral duty, if you are defending your freedom, your country, your family. And I do believe that, in these circumstances, it does not really matter whether you use your hands, a sword, or just push a button. But it must be you who pushes the button. Your decision, your responsibility.
In brief:
Weapons systems? Yes, definitely. They are the ultimate weapons. They can save the lives of human soldiers.
Autonomous? No, not at all!
Specialists in international law say that these lethal autonomous systems would create a "legal void": people could be killed and nobody would be accountable. After all, the systems are autonomous, aren't they? So no human being made the decision to use lethal force. A machine did. Will we try a machine as a war criminal? With autonomous systems there will be victims, but there will be no criminal!
I am an old-timer, so let me present an old-fashioned argument: if we allow a machine to kill on our behalf, where will the honor of the warrior be?
I know, many people do not believe it exists at all. Or so they claim. They say that violence is never morally admissible, and that there is no honor in fighting. They just forget that they can afford to be pacifists because warriors protected their freedom and peace in the past, and still protect them today!
But the warrior must be a human being, able to look into his opponent's eyes if need be. Yes, war is more and more based on sophisticated weapons, and physical strength is no longer what decides battles. But I do believe that, to have the moral right to press a button and unleash a lethal weapon, we must be willing to use our bare hands if necessary!
Primarily and above all: we cannot renounce our moral duty to make the decision of going into battle and taking another human being's life!
If we delegate not only the action but also the decision to a machine, we will be stripping our enemies of their human status. And, furthermore, we will no longer be warriors, but butchers.
That's how I see the question.
Please allow me to finish with some quotes:
“Any decision to kill needs to be made by a human!”
Jody Williams, Nobel Peace Prize laureate
“If we do not put an end to this trend for
automating warfare now, we could face a very bleak future where machines are
delegated with the decision to kill humans. This is perhaps the ultimate human
indignity and crosses a fundamental moral line which needs to be considered and
addressed.”
Professor Noel Sharkey, Department of Computer Science, University of Sheffield
Further Readings:
· The Pros and Cons of Killer Robots - http://www.thedailybeast.com/articles/2013/05/30/the-pros-and-cons-of-killer-robots.html
· Robotics expert helps global leaders decide 'killer robots' policies - http://www.sheffield.ac.uk/news/nr/robotics-expert-killer-robots-un-debate-1.373321
· Here's The World's First Robotics Company To Pledge Not To Make 'Killer Robots' - http://www.businessinsider.com/clearpath-robotics-joins-campaign-to-stop-killer-robots-2014-8
· Does the World Want Lethal Autonomous Robots? - http://www.techthefuture.com/future/does-the-world-want-lethal-autonomous-robots/
Further Information:
Losing Humanity - The Case against Killer Robots - http://www.hrw.org/reports/2012/11/19/losing-humanity-0


