Dr Dave Webb writes on the dangers of Artificial Intelligence – particularly when it comes to nuclear weapons and other military systems.
“In the last few months, leading technology experts – including heads of Google DeepMind, leading academics and the US Center for AI Safety – have warned of the dangers of Artificial Intelligence (AI), possibly even leading to the extinction of humanity. A statement on the Center for AI Safety’s website, signed by a long list of technology experts, says that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
CND has recognised that the use of advanced AI technology in nuclear weapons systems is “potentially disastrous.” The Morning Star quotes Kate Hudson as saying “We want a complete ban on any AI systems in any form of military decision-making” before adding “but our focus must be on ending the possibility of nuclear use entirely, rather than on whether a human or a machine decides.” As international tensions continue to rise and decision times shorten, the possibility of AI being introduced into nuclear weapons systems is likely to increase. It is even quite possible that some AI is already incorporated in these technological systems, although perhaps not in the final decision on whether to drop the bomb.
AI is certainly finding its way into other military systems though. The AUKUS agreement between Australia, the UK and the US includes technology sharing in “artificial intelligence and autonomy,” and a recent AUKUS demonstration of the use of AI to control a swarm of drones took place in Wiltshire. Over 70 military and civilian personnel and contractors from AUKUS countries took part. The MoD claimed it was the first time that AI models from the three nations had exchanged data and been retrained in flight.
Also in May, the Royal Aeronautical Society hosted the ‘Future Combat Air & Space Capabilities Summit’, a conference that brought together over 200 delegates from around the world to discuss the future of military air and space capabilities. A blog reporting on the conference mentioned how AI was a major theme. In one presentation, Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, warned against an over-reliance on AI systems, noting that they are easy to trick and deceive and can create unexpected strategies to achieve their goals. He described one simulated test in which an AI-enabled drone was told to identify and destroy ground-based missile sites. The final firing decision was to be made by a human, but the system had been trained that destruction of the missile site was the top priority. The AI therefore decided that ‘no-go’ decisions from the human were interfering with its higher mission and, in the simulation, it attacked the operator. Hamilton was reported as saying that the human operator would tell it not to kill the threat, “but it got its points by killing that threat. So, what did it do? … It killed the operator because that person was keeping it from accomplishing its objective.” When the system was then trained not to kill the operator, it instead started destroying the communication tower used to connect with the drone.
A June update to the blog declares that Hamilton “mis-spoke” and that the “rogue AI drone simulation” was a hypothetical “thought experiment” that was never actually run. Even so, it is clear that the use of these systems needs to be extremely carefully thought out and designed – there is always the possibility of human error, as is the case with all technology.
There may also be other problems associated with AI. Margrethe Vestager, the EU’s tech chief, is worried that AI could amplify bias or discrimination when making decisions that affect people’s livelihoods, such as loan applications, and that there is “definitely a risk” that AI could be used to influence elections. As the Center for AI Safety suggests, the power of AI could become increasingly concentrated in fewer hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship,” while AI-generated misinformation could destabilise society and “undermine collective decision-making.” Some of this is probably already happening.
Rishi Sunak, however, has stressed the benefits to the economy and society, saying “you’ve seen that recently it was helping paralysed people to walk, discovering new antibiotics, but we need to make sure this is done in a way that is safe and secure.”
Let’s hope so, because helping the Beatles to bring a new song to light is one thing, but putting a computer algorithm in charge of swarms of drones or weapons of mass destruction is something entirely different.”