New technology could lead humans to relinquish control over decisions to use lethal force.
As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching.
Fully autonomous weapons, also known as "killer robots," are quickly moving from the realm of science fiction toward reality.
These weapons, which could operate on land, in the air or at sea, threaten to revolutionize armed conflict and law enforcement in alarming ways.
Proponents say these killer robots are necessary because modern combat moves so quickly, and because having robots do the fighting would keep soldiers and police officers out of harm's way.
But the threats to humanity would outweigh any military or law enforcement benefits.
Removing humans from the targeting decision would create a dangerous world.
Machines would make life-and-death determinations outside of human control, increasing the risk of disproportionate harm or the erroneous targeting of civilians. And no person could be held responsible.
Given the moral, legal and accountability risks of fully autonomous weapons, preempting their development, production and use cannot wait.
The best way to handle this threat is an international, legally binding ban on weapons that lack meaningful human control.
At least 20 countries have stated in U.N. meetings that humans should dictate the selection and engagement of targets.
Many of them have echoed arguments laid out in a new report, of which I was the lead author.
The report was released in April by Human Rights Watch and the Harvard Law School International Human Rights Clinic, two organizations that have been campaigning for a ban on fully autonomous weapons.
Retaining human control over weapons is a moral imperative.
Because they possess empathy, people can feel the emotional weight of harming another individual.
Their respect for human dignity can – and should – serve as a check on killing.
Robots, by contrast, lack real emotions, including compassion.
In addition, inanimate machines could not truly understand the value of any human life they chose to take.
Allowing them to determine when to use force would undermine human dignity.
Human control also promotes compliance with international law, which is designed to protect civilians and soldiers alike.
For example, the laws of war prohibit disproportionate attacks in which expected civilian harm outweighs anticipated military advantage.
Humans can apply their judgment, based on past experience and moral considerations, and make case-by-case determinations about proportionality.
It would be almost impossible, however, to replicate that judgment in fully autonomous weapons, and they could not be preprogrammed to handle all scenarios.
As a result, these weapons would be unable to act as "reasonable commanders," the traditional legal standard for handling complex and unforeseeable situations.
In addition, the loss of human control would threaten a target's right not to be arbitrarily deprived of life.