As artificial intelligence changes our battlefields, how can we maintain restraint and humanity in our military campaigns?

Artificial intelligence (AI) is changing our lives — whether it be ChatGPT and the way we write, or the advertising algorithms personalising our social media feeds.

It’s also changing society in less mundane ways. Mention AI in the arena of war and it conjures up apocalyptic images of killer robots taking over like something out of The Matrix. The use of AI in war, however, is not so futuristic.

AI is already changing the character of war, according to Dr Bianca Baggiarini, from the ANU Strategic and Defence Studies Centre, who researches the social and political effects of emerging military technologies.

“When states do not want to or cannot — for logistical or political reasons — send soldiers into battle, they rely more on technology to achieve military goals,” she says.

Recently, Israel made headlines for its AI system Habsora, Hebrew for ‘the gospel’, which is reportedly used to produce 100 bombing targets a day. The United States (US) has also used machine learning algorithms to analyse big data sets and make recommendations for drone strike targets in Yemen. Closer to home, Australia has implemented Loyal Wingman, an AI-incorporated unmanned combat air vehicle that ‘teams’ with manned aircraft.

In situations like these, the use of AI has been justified with claims it can maximise soldier performance, improve decision-making, enhance efficiency, generate mass and, notably, preserve troops’ lives.

But Baggiarini says while these technologies raise huge ethical questions, they aren’t necessarily new.

“The most interesting questions about AI and war, to me, are not really about AI at all,” she says.

“What do we think war is? What do we think war should be? AI is part of a wider disruption in how we frame and think about these questions.”

Blaming the tools

Much of the commentary around AI in war focuses on some future point when systems are making life-and-death decisions, without human control and intervention.

But Professor Toni Erskine, from the ANU Coral Bell School of Asia Pacific Affairs, argues that even without complete autonomy, AI systems can risk undermining adherence to international norms — including restraint — through the way humans interact with these systems.

In her John Gee Memorial Lecture, Erskine highlighted the risks that emerge in what may seem to be more innocuous scenarios, especially when AI-driven decision-support systems and weapons make recommendations and initiate actions that ultimately see human beings make life-and-death decisions.

“Our tools become our moral proxies, our moral guides and compasses, and our scapegoats. Moral responsibility is thereby misplaced, and we are diminished,” Erskine said.

“The problem is not the unavoidable human-machine teaming that the prevalence of AI brings to the practice of war, but rather the avoidable abdication of responsibility that threatens to accompany it.”

Reining in the war machine

Erskine argued that a tendency to “relinquish responsibility” when using AI in war points to a need for “supplementary responsibilities of restraint” for those designing, operating, overseeing and deploying these systems.

“It will be important to make the decision-making functions of machine intelligence — along with its limitations — more transparent, to refrain from anthropomorphising AI, and to actively discourage the myth of moral agency,” she said.

Erskine pointed to the South Korean robotic sentry, Super aEgis II, as an example of an AI-enabled weapon with design features that reassert human responsibility, even if unintentionally.

When the robot identifies a potential target, the human operator must enter a password to unlock its firing ability. At the same time, a speaker broadcasts the message: “Stop or we’ll shoot”.

“This declaration serves not only to warn any trespassers. The plural subject also serves to remind the human operator of her direct role in the decision to fire the weapon and kill,” Erskine said. “It thereby minimises any imagined moral buffer.”
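To make the design pattern Erskine describes concrete, here is a minimal sketch in Python of a human-in-the-loop firing gate. It is purely illustrative: the function names, password check and messages are hypothetical stand-ins, not the Super aEgis II’s actual software. The point it demonstrates is that the system can detect and recommend, but cannot fire without an authenticated decision by a named human operator.

```python
# Illustrative sketch only: hypothetical names, not the Super aEgis II's real software.
# It shows the design pattern described above: the system may recommend,
# but firing requires an authenticated decision by a specific human operator.

import hashlib
from getpass import getpass

# Hypothetical stored credential for the operator on duty (example value only).
OPERATOR_PASSWORD_HASH = hashlib.sha256(b"example-only").hexdigest()


def broadcast_warning() -> None:
    # Stands in for the loudspeaker announcement described in the article.
    print("Broadcast: 'Stop or we'll shoot.'")


def operator_authorises_fire(operator_name: str) -> bool:
    """Return True only if the named operator deliberately authorises firing."""
    broadcast_warning()
    entered = getpass(f"{operator_name}, enter password to unlock firing (or press Enter to abort): ")
    if not entered:
        return False
    return hashlib.sha256(entered.encode()).hexdigest() == OPERATOR_PASSWORD_HASH


def engage_target(target_id: str, operator_name: str) -> str:
    # The system identifies a potential target, but it cannot act alone.
    if operator_authorises_fire(operator_name):
        return f"Firing on {target_id}, authorised by {operator_name}."
    return f"Engagement of {target_id} aborted: no human authorisation."
```

In this sketch, the authorisation step names the operator and the outcome records who authorised the action, mirroring Erskine’s point that design choices can keep moral responsibility attached to a human being rather than the machine.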

Apocalypse now?

One world-destroying application of AI is its use in nuclear weapons decision-making. This includes removing humans from intelligence gathering and analysis, which could in turn shape a human’s ultimate decision to use nuclear weapons.

Dr Benjamin Zala, a Fellow in the ANU Coral Bell School’s International Relations Department with a research focus on the management of nuclear weapons, thinks the impact this could have on human judgement is reason to worry.

“One of the benefits of using AI in intelligence gathering and analysis is that it speeds up the process,” he says. “But when it comes to something as important as the use of nuclear weapons, what you want to do is slow the decision-making process down.”

Zala points out that every Cold War-era nuclear near-miss was defused by a human interrupting decision-making or intelligence-gathering processes to exercise extra caution and prudence. Unfortunately, advances in AI coincide with a contemporary unravelling of nuclear arms control.

“We’ve had this trend in the past decade or so of states withdrawing from arms control treaties or letting arms control treaties expire, and they’re not being replaced,” he says.

While there are glimmers of hope — the US has stated it will not use AI automation in its nuclear launch decision-making procedures — the overall picture remains bleak.

Ultimately, we need to keep questioning the impact of AI on war: its tendency to distance ‘our side’ from the horrors of war, its propensity to create moral buffers, and its potential to erode caution around the use of nuclear weapons.

“It’s much easier to control the spread and deployment of a technology when it’s in its infancy than when it’s already well-established in military structures and practices,” Zala says.

Let’s hope we haven’t already jumped the robot-controlled gun.

This article references research by Professor Toni Erskine recently published in Review of International Studies’ 50th anniversary issue.

Professor Toni Erskine, Dr Bianca Baggiarini, and Dr Benjamin Zala are contributing to a research project, ‘Anticipating the Future War’, led by Erskine and funded by an Australian Department of Defence Strategic Policy Grant.

Top image: sibsky2016/shutterstock.com
