Russia’s invasion of Ukraine points to an even more frightening future possibility: autonomous weapons


The Russian delegate replied a moment later: “There is discrimination suffered by my country because of the restrictive measures against us.”

Ukraine was berating Russia not over the ongoing invasion of the country, but over a more abstract topic: autonomous weapons. The comments came at the Convention on Certain Conventional Weapons, a UN gathering at which global delegates are supposed to be working toward a treaty on lethal autonomous weapons systems, the charged field that both military experts and peace activists say is the future of war.

But citing visa restrictions that limited his team’s participation, the Russian delegate called for the meeting to be dissolved, prompting denunciations from Ukraine and many others. The skirmish was playing out in a kind of parallel to the war in Ukraine – more genteel surroundings, but equally high stakes.

Autonomous weapons – the catch-all description for algorithms that help decide where and when a weapon should fire – are among the most fraught areas of modern warfare, making the human-commandeered drone strikes of recent decades look as quaint as a bayonet.

Proponents claim they are nothing short of a godsend, improving accuracy and removing human error and even the fog of war itself.

Critics of the weapons – and there are many – see them as a disaster. They see a dehumanization that opens battles up to all sorts of machine errors, which ruthless digital efficiency then makes all the more apocalyptic. While there are no signs that such “slaughterbots” have been deployed in Ukraine, critics say events unfolding there hint at darker battlefields to come.

“Recent events bring this to the fore – they make us realize that the technology we are developing can be deployed and exposed to people with devastating consequences,” said Jonathan Kewley, co-director of the Tech Group at London law firm Clifford Chance, emphasizing that this was a global problem and not one centered on Russia.

Although they differ in their specifics, all fully autonomous weapons share one idea: that artificial intelligence can dictate firing decisions better than people can. Trained on thousands of battles and then tuned to a specific conflict, the AI can be fitted onto a traditional weapon, where it then seeks out enemy combatants and surgically drops bombs, fires guns or otherwise decimates enemies without any human intervention.

The 39-year-old CCW meets every five years to update its agreement to cover new threats, such as landmines. But AI weapons have proved to be its Waterloo. Delegates have been flummoxed by the unknowable dimensions of intelligent fighting machines and hampered by the slow-walking of military powers, like Russia, eager to run out the clock while the technology advances. In December, the five-year meeting failed to produce “consensus” (which the CCW requires for any update), forcing the group back to the drawing board at another meeting this month.

“We are not holding this meeting on the basis of resounding success,” the Irish delegate noted dryly this week.

Activists fear that all these delays will come at a cost. The technology is now so advanced, they say, that armies around the world could deploy it in their next conflict.

“I believe it’s just politics at this point, not technology,” Daan Kayser, who leads the autonomous weapons project for Dutch group Pax for Peace, told The Post from Geneva. “Any of a number of countries could have computers that kill without a single human nearby. And that should scare everyone.”

Russian machine-gun maker Kalashnikov Group announced four years ago that it was working on a weapon with a neural network. The country is also believed to have the potential to deploy the Lancet and the Kub – two “loitering drones” that can stay near a target for hours and activate only when needed – with various autonomous capabilities.

Critics fear that as Russia shows it is apparently willing to use other controversial weapons in Ukraine, such as cluster bombs, fully autonomous weapons will not be far behind. (Russia – and, for that matter, the United States and Ukraine – has not signed the 2008 cluster-bomb treaty that more than 100 other countries have agreed to.)

But they also say it would be a mistake to lay all the threats at Russia’s doorstep. The US military has embarked on its own race toward autonomy, contracting with Microsoft and Amazon for AI services. It has created an AI-focused training program for the 18th Airborne Corps at Fort Bragg – soldiers designing systems so machines can fight wars – and built a forward-looking technology hub at Army Futures Command in Austin.

The Air Force Research Lab, for its part, has spent years developing something called Agile Condor, a highly efficient computer with deep AI capabilities that can be attached to traditional weapons; in the fall, it was tested aboard a remotely piloted aircraft known as the MQ-9 Reaper. The US also has a stockpile of its own loitering munitions, like the Mini Harpy, which it can equip with autonomous capabilities.

China is pushing ahead, too. A Brookings Institution report in 2020 said the nation’s defense industry was “pursuing significant investment in robotics, swarming, and other applications of artificial intelligence and machine learning.”

A Pax study found that between 2005 and 2015, the United States held 26% of all new AI patents granted in the military field, and China held 25%. In the years that followed, China eclipsed America. China is believed to have made particular progress in military-grade facial recognition, pumping billions into the effort; with such technology, a machine identifies an enemy, often from miles away, without any human confirmation.

The dangers of AI weapons were flagged last year when a UN Security Council report said a Turkish drone, the Kargu-2, appeared to have fired fully autonomously in the long-running Libyan civil war – potentially marking the first time on this planet a human being died entirely because a machine thought it should.

All this has made some non-governmental organizations very nervous. “Are we really ready to let machines decide to kill people?” asked Isabelle Jones, campaign manager for an AI-critical umbrella group called Stop Killer Robots. “Are we ready for what that means?”

Formed in 2012, Stop Killer Robots has a playful name but a hellish mission. The group includes some 180 NGOs and combines a spiritual argument for a human-centered world (“Less autonomy. More humanity”) with a brass-tacks argument about reducing casualties.

Jones cited a popular advocacy goal: “meaningful human control.” (Whether that should mean a ban is partly what has stymied the UN group.)

Military insiders say such goals are misguided.

“Any effort to ban these things is futile – they have too many advantages for states to accept,” said C. Anthony Pfaff, a retired Army colonel, former State Department military adviser and now a professor at the US Army War College.

Instead, he said, the right rules for AI weapons would allay concerns while paying dividends.

“There is a powerful reason to explore these technologies,” he added. “The potential is there; nothing about them is necessarily bad. We just have to make sure that we use them in a way that gets the best outcome.”

Like other proponents, Pfaff notes that it is an abundance of rage and vengefulness that leads to war crimes. Machines lack all such emotions.

But critics say it is exactly that emotion which governments should seek to protect. Even when peering through the fog of war, they say, eyes are attached to human beings, with all their capacity to react flexibly.

Military strategists describe a battle scenario in which an American autonomous weapon busts down a door in far-off urban warfare to find a compact, charged group of men coming at it with knives. Processing an obvious threat, it takes aim.

What it does not know is that the war is in Indonesia, where males of all ages wear knives around their necks; that these are not small men but 10-year-old boys; that their emotion is not anger but laughter and play. No matter how fast its microprocessor, an AI cannot infer intent.

There can also be a more macro effect.

“The rationale for going to war is important, and it happens because of the consequences for individuals,” said Nancy Sherman, a Georgetown professor who has written numerous books on ethics and the military. “When you reduce the consequences for individuals, you make the decision to go to war too easy.”

This could lead to more wars – and, given that the other side would not have AI weapons, highly asymmetric ones.

If by chance both sides had autonomous weapons, it could produce the science-fiction scenario of two robot armies destroying each other. No one can say whether that would keep conflict away from civilians or push it closer to them.

It is unknowns like these that seem to be holding up negotiators. Last year, the CCW got bogged down when a group of 10 countries, including many from South America, wanted the treaty updated to include a full ban on AI weapons, while others wanted a more dynamic approach. Delegates debated how much human awareness is sufficient and at what point in the decision chain it should be applied.

And three military giants avoided the debate altogether: the United States, Russia and India wanted no AI update of the deal, arguing that existing humanitarian law was sufficient.

This week in Geneva brought little more progress. After several days of infighting sparked by Russia’s protest tactics, the chair shifted proceedings to “informal” mode, putting hopes of a treaty even further out of reach.

Some attempts at regulation have been made at the level of individual nations. The US Department of Defense released a list of AI guidelines, while the European Union recently passed comprehensive new AI legislation.

But Kewley, the lawyer, pointed out that the law provides an exception for military uses.

“We worry about the impact of AI in so many departments and areas of our lives, but where it can have the most extreme impact – in the context of war – we leave that to the military,” he said.

He added: “If we don’t design laws, the whole world will follow – if we design a robot that can kill people and doesn’t have a sense of right and wrong built in – it will be a very, very high-risk journey we are following.”
