According to a recent UN report, an autonomous drone hunted down and attacked human targets without being instructed to do so by human commanders. The attack, which took place in Libya in March 2020, was the first recorded attack of its kind carried out by artificial intelligence (AI) on humans. It is unclear whether the drone killed anyone during the attack.
The report to the UN Security Council states that on March 27, 2020, Libyan Prime Minister Fayez al-Sarraj ordered “Operation PEACE STORM”, under which unmanned combat aerial vehicles (UCAV) were used against Haftar Affiliated Forces.
Drones have been used in combat for years, but this attack was different: the drones operated without human input.
“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions,” the report states.
“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” it continues.
The KARGU is a rotary-wing attack drone designed for asymmetric warfare and anti-terrorist operations. According to its manufacturer, STM, it “can be effectively used against static or moving targets through its indigenous and real-time image processing capabilities and machine learning algorithms embedded on the platform.”
A video showcasing the drone shows it targeting mannequins in a field before diving at them and detonating an explosive charge.
Against human targets, the drones proved effective.
“Units were neither trained nor motivated to defend against the effective use of this new technology and usually retreated in disarray,” the report says. “Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems, which were proving to be a highly effective combination.”
The report did not specify whether there were casualties or deaths connected with the attack. But it did note that the drones were “highly effective” in helping to inflict “significant casualties” on enemy Pantsir S-1 surface-to-air missile systems. It is therefore entirely possible that the first human has been attacked, or even killed, by a drone controlled by a machine-learning algorithm.
The attack, whether it produced casualties or not, will not be welcomed by campaigners against the use of “killer robots.”
“There are serious doubts that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity,” Human Rights Watch says. “Human Rights Watch calls for a preemptive ban on the development, production, and use of fully autonomous weapons.”
Another concern is that the AI algorithms used by these robots may not be robust enough. As well as being open to errors (such as a Tesla being tricked into swerving off the road), there are countless examples of bias within machine-learning tech: facial recognition that doesn’t recognize non-white skin tones, cameras that tell Asian people to stop blinking, soap dispensers that won’t dispense soap if you’re Black, and self-driving cars that are more likely to run you over if you are not white.
Now, it appears we could be trusting life-and-death decisions to tech that may be open to similar problems.