Killer robots: pressure builds for ban as governments meet
Countries spending billions on ‘third revolution in warfare’ as UN debates regulation of AI-powered weapons
The US X-47B unmanned autonomous aircraft. Photograph: Rex Features
They will be “weapons of terror, used by terrorists and rogue states against civilian populations. Unlike human soldiers, they will follow any orders however evil,” says Toby Walsh, professor of artificial intelligence at the University of New South Wales, Australia.
“These will be weapons of mass destruction. One programmer and a 3D printer can do what previously took an army of people. They will industrialise war, changing the speed and duration of how we can fight. They will be able to kill 24-7 and they will kill faster than humans can act to defend themselves.”
Governments are meeting at the UN in Geneva on Monday for the fifth time to discuss whether and how to regulate lethal autonomous weapons systems (Laws). Also known as killer robots, these AI-powered ships, tanks, planes and guns could fight the wars of the future without any human intervention.
The US launched an autonomous ship, Sea Hunter, on 7 April 2016. Photograph: Steve Dipaola/Reuters
(...)
Supporters of a ban say fully autonomous weapons are unlikely to be able to comply with the complex and subjective rules of international humanitarian and human rights law, which require human understanding and judgment as well as compassion.
Pointing to the 1997 ban on landmines, now one of the most widely accepted treaties in international law, and the ban on cluster munitions, which has 120 signatories, Wareham says: “History shows how responsible governments have found it necessary in the past to supplement the limits already provided in the international legal framework due to the significant threat posed to civilians.”
Russia’s Armata T-14 battle tank can autonomously fire on targets and is expected to be fully autonomous in the near future. Photograph: Grigory Dukor/Reuters
Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, who first wrote about the reality of robot war in 2007, believes the weaponisation of artificial intelligence could bring the world closer to apocalypse than ever before. “Imagine swarms of autonomous tanks and jet fighters meeting on a border and one of them fires in error or because it has been hacked,” he says.
“This could automatically invoke a battle that no human could understand or untangle. It is not even possible for us to know how the systems would interact in conflict. It could all be over in minutes with mass devastation and loss of life.”