AI and autonomy in defence. The advent of risk-free warfare?
Published 11th November 2020
BY ANDY THOMIS, COHORT CHIEF EXECUTIVE
Machine autonomy is now commonplace in everyday life – cars that park themselves, fridges that automatically reorder when stocks run low. Applications of autonomy and wider AI in defence are already emerging, and they bring great potential benefits in both risk reduction and military effectiveness. But defence is also, inescapably, about machines that can cause damage and harm. Do we want to add autonomous decision-making into that mix? One doesn’t have to be Stanley Kubrick to see the possible downside.
A force for good
A good place to start is: what can AI and autonomy add to military capability? The first and most obvious benefit is removing the need for human input. Trained manpower is a scarce and expensive resource, and the time when it could be viewed as expendable has long since passed. Removing or reducing the human element from dangerous or repetitive work is the lowest-hanging fruit for AI. Tasks like logistic resupply, forward observation, and border security are already supplemented by technology and there is surely more that can be done. Every person saved from these tasks is one less person at risk, and one more person who can perform a role where human judgement and decision-making are at a premium.
Then there are the roles where human limitations are a constraint on performance. Aircraft lateral and vertical acceleration limits, for example, can be greatly relaxed if the need to avoid injury to a human pilot is removed. Machines can be faster, stronger and more accurate at mechanical movement, and they are not subject to fatigue or distraction.
Not least, the use of AI and autonomy offers the prospect of enhancing human performance. For instance, Boeing’s Loyal Wingman concept allows a single pilot to manage multiple airframes simultaneously, using unmanned aircraft for dangerous tasks such as scouting or absorbing hostile fire. Managing that workload would be impossible with conventional technology; only with machine autonomy is this vast potential force multiplier possible.
The emerging ethical debate
Not surprisingly, given these benefits, defensive and human-supervised automated weapon systems are already being deployed across all domains [see examples from across the Cohort Group]. With each step forward, the sophistication, intelligence and operational impact of the systems increase. But where do we draw the line? If a machine can acquire a target, evaluate it, and formulate a fire control solution far faster than a human, should we slow it down by insisting on a human decision-maker in the kill chain? The legal and ethical consequences of not doing so are considerable. But our opponents may not share our qualms, and if they prevail then we may not be around to weigh the moral consequences. This is not an unfamiliar dilemma – for instance, it parallels the nuclear debate of the 1940s and 50s – but it is one we must navigate carefully once again.
As usual, though, the facts on the ground are moving faster than the ethical debate. The technology is already with us, is being used by our adversaries and has increasing momentum. Militaries that adopt AI will be able to field forces with greater range, persistence, coordination, and speed of response. Importantly, effective autonomous systems require high levels of systems engineering, computer science, materials science and mathematics, to name just a few of the disciplines involved. For some time at least, such systems will be more easily available to the armed forces of technically advanced societies, providing a valuable asymmetric tool to use against technically unsophisticated but clever, violent and ideologically motivated opponents. In a peer-to-peer situation, of course, we are talking about a wholly different, and more subtle, game of chess.
The changing future of conflict
It is clear that AI and autonomy will change the way that future conflicts are fought. They have the potential to bring a large increase in effectiveness, while reducing both the numbers of humans needed and their exposure to risk. In these respects, the technologies are especially well-suited to societies where risk-aversion is increasing, and armed forces are finding it harder to recruit.
Those benefits can’t be ignored and the facts on the ground are already changing. But in developing future AI-based defence systems we must have regard to the ethical considerations. If we don’t, in the worst case, rampaging killing machines, out of human control, may no longer be restricted to science fiction. Even more insidious is the potential corrosion of societal values that could result from the ability to inflict death and destruction without risk to ourselves. If we can find the right balance, though, we can provide more effective defence against current and developing threats, while bringing future military operations more closely into line with society’s tolerance of risk to its young men and women. That is a worthy objective to have in mind on Armistice Day.
How Cohort has been exploring the potential for AI
The technology of today
Today, unmanned platforms generally operate in one of three ways: with a human in the loop, with a human occasionally in the loop for monitoring purposes, or left to run on their own. Platforms perform tasks in dangerous or extreme environments that humans prefer to avoid, or cannot perform as well in due to limits in endurance or reaction speed. They are also ideal for highly repetitive roles, those where there is a risk of injury or contamination, and hazardous missions and deniable activities such as covert operations.
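As a loose illustration of those three modes – a minimal sketch, not based on any Cohort or MoD system, with all function and callback names invented for the purpose – the distinction can be expressed as a simple control policy:

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()   # operator must approve every action
    HUMAN_ON_THE_LOOP = auto()   # platform acts; operator monitors and may veto
    FULLY_AUTONOMOUS = auto()    # platform runs on its own

def gate_action(action, mode, approve=None, veto=None):
    """Return the action if it may proceed under the given mode, else None.

    `approve` and `veto` stand in for an operator interface; they are
    hypothetical placeholders for illustration only.
    """
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return action if approve(action) else None
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return None if veto(action) else action
    return action  # fully autonomous: no human gate

# Example: a resupply move that the operator waves through
print(gate_action("move_to_waypoint", ControlMode.HUMAN_IN_THE_LOOP,
                  approve=lambda a: True))
```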
Across our Group of defence technology businesses, Cohort has been exploring the potential for AI in several areas. For example, we’re currently focused on incorporating autonomy to lessen the logistical load for the dismounted soldier, and to enhance the performance and range of anti-submarine warfare capabilities.
MCL has recently supplied four Mission Adaptable Platform System (MAPS) Unmanned Ground Vehicles (UGVs) to the Ministry of Defence (MoD)’s Remote Patrol Vehicle (RPV) Experimentation Programme. The programme is evaluating the use of a modular unmanned platform that can resupply platoon soldiers, reduce the load burden on them, evacuate casualties and provide exportable power. This presents clear benefits for ground forces, particularly when operating in hostile and dangerous combat environments.
In the maritime domain, SEA has integrated its innovative anti-submarine warfare (ASW) capability, KraitArray™, with iXblue’s unmanned sea vessel, DriX, to create SEADriX. This extends the range of ASW without the need for additional vessels and crew: a group of ten SEADriX vessels, each with KraitArray installed, can cover the same range and area as a nuclear submarine.
Another example is the automatic target classification system developed by Chess Technologies. Initially used in counter-UAS systems, it removes the need for the constant, full attention of a highly trained operator. It uses machine learning techniques and can be taught to recognise a wide range of potential targets from their appearance and behaviour.
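The blog does not describe Chess’s implementation, but as a hedged sketch of the general technique – a classifier trained on appearance and behaviour features – something along these lines could be assembled with off-the-shelf tools. All features, values and labels below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-track features: [apparent size (m), aspect ratio,
# speed (m/s), altitude (m), turn rate (rad/s)], mixing appearance
# and behaviour as the blog describes. Data is invented.
X_train = np.array([
    [0.3, 1.2, 18.0, 120.0, 0.5],   # small, slow, agile   -> "uas"
    [0.2, 1.0, 12.0,  80.0, 0.9],   # small, slow, agile   -> "uas"
    [0.1, 0.8,  9.0,  60.0, 0.2],   # tiny, very slow      -> "bird"
    [2.5, 4.0, 90.0, 900.0, 0.1],   # large, fast, high    -> "aircraft"
])
y_train = ["uas", "uas", "bird", "aircraft"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify a new track instead of asking an operator to watch it constantly.
new_track = np.array([[0.25, 1.1, 15.0, 100.0, 0.7]])
print(clf.predict(new_track))        # predicted class, e.g. ['uas']
print(clf.predict_proba(new_track))  # per-class confidence
```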
To speak to us about this blog or give us feedback, please contact us.