US Military Moves Forward With The Development Of AI Tanks

The development of artificial-intelligence weapons will continue to be a major theme in the years leading up to the next major global conflict. As such, the US military has recently announced the development of a new tank guided by artificial intelligence, according to a report:

A new initiative by the US Army suggests “another significant step towards lethal autonomous weapons,” warns a leading artificial-intelligence researcher who has called for a ban on so-called “killer robots.”

The Army Contracting Command has called on potential vendors in industry and academia to submit ideas to help build its Advanced Targeting and Lethality Automated System (ATLAS), which a Defense Department solicitation says will use artificial intelligence and machine learning to give ground-combat vehicles autonomous targeting capabilities. This will allow weapons to “acquire, identify, and engage targets at least 3X faster than the current manual process,” according to the notice.

Stuart Russell, a professor of computer science at UC Berkeley and a highly regarded AI expert, tells Quartz he is deeply concerned about the idea of tanks and other land-based fighting vehicles eventually having the capability to fire on their own.

“It looks very much as if we are heading into an arms race where the current ban on full lethal autonomy”—a section of US military law that mandates some level of human interaction when actually making the decision to fire—“will be dropped as soon as it’s politically convenient to do so,” says Russell.

The Defense Department contracting officer overseeing the solicitation did not immediately respond to a request for further details on ATLAS. An Army public affairs officer said he would get back to Quartz with a comment about the program; this article will be updated when it is received.

In 2017, Russell appeared in a video that described a dystopian future brought on by autonomous military weaponry that activists say would “decide who lives and dies, without further human intervention, which crosses a moral threshold.”

The Campaign to Stop Killer Robots, a coalition of non-governmental organizations working to ban autonomous weapons and maintain “meaningful human control over the use of force,” cautions that letting machines select and attack targets could lead the world into “a destabilizing robotic arms race.”

ATLAS will use an algorithm to detect and identify targets, and “parts of the fire control process” will be automated, explains the Army’s call for white papers. This means a person will always make the actual decision to fire, as required by law, says Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security, a bipartisan think tank in Washington, DC.
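To make that division of labor concrete, here is a minimal sketch of a human-in-the-loop gate of the kind Scharre describes: automation detects and proposes targets, but nothing fires without explicit operator approval. All names, thresholds, and data structures here are hypothetical; the Army has not published ATLAS’s internals.

```python
# Hypothetical human-in-the-loop targeting gate. Illustrative only;
# none of these names come from the ATLAS solicitation.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Target:
    track_id: int
    classification: str   # e.g. "tank", "truck", "unknown"
    confidence: float     # classifier confidence, 0.0 to 1.0


def detect_and_identify(sensor_frame) -> list[Target]:
    """Stand-in for the automated detection/identification stage."""
    # A real system would run perception models here; this sketch
    # returns a fixed example so the control flow is runnable.
    return [Target(track_id=1, classification="tank", confidence=0.92)]


def operator_approves(target: Target) -> bool:
    """The human decision point: nothing fires without explicit approval."""
    answer = input(f"Engage track {target.track_id} "
                   f"({target.classification}, {target.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def engagement_loop(sensor_frame) -> None:
    for target in detect_and_identify(sensor_frame):
        if target.confidence < 0.8:
            continue  # automation filters out low-confidence tracks
        if operator_approves(target):  # a person stays in the loop
            print(f"Firing solution released for track {target.track_id}")
        else:
            print(f"Track {target.track_id} held; no engagement")


if __name__ == "__main__":
    engagement_loop(sensor_frame=None)
```

The point of the sketch is the structure, not the details: the speed gains reported for ATLAS would come from the automated stages before and after `operator_approves`, while the approval step itself remains a human choice.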

There are hundreds of autonomous and semi-autonomous missile-defense systems now in use, according to researchers, but ATLAS would be the first use of such weaponry by ground-combat vehicles, says Scharre.

The system would ideally “maximize the amount of time for human response and allow the human operator to make a decision,” Scharre says. “And then once the human makes a decision, to fire accurately.”

This could reduce the possibility of civilian casualties, fratricide, and other unintended consequences, and it would also keep US soldiers safer on the battlefield, Scharre says.

“Anytime you can shave off even fractions of a second, that’s valuable,” says Scharre. “A lot of engagement decisions in warfare are very compressed in time. If you’re in a tank and you see the enemy’s tank, they probably can also see you. And if you’re in range to hit them, they’re probably in range to hit you.”

The fear of autonomous killer machines
Last fall, UN secretary-general António Guterres said the “prospect of machines with the discretion and power to take human life is morally repugnant.” More than 25 countries have called for a ban on autonomous weapons, a measure that would explicitly require human control over the use of lethal force. However, the US, South Korea, Russia, Israel, and Australia have pushed back strongly, and defense contractors including Boeing, Lockheed Martin, BAE Systems, and Raytheon continue to invest heavily in unmanned weapons development.

Scharre says the current crop of autonomous weaponry, such as ATLAS, is akin to blind-spot monitors on cars, in which lights in the side-view mirrors blink to warn a driver not to change lanes. “It would ideally reduce the chances of missing targets, which is sort of good all around,” says Scharre.

Still, opponents (who include Elon Musk) fear the lack of concrete, universally accepted guidelines surrounding autonomous weapons and say only a total ban will prevent eventual catastrophe.

As Article 36, a UK-based NGO that works to “prevent the unintended, unnecessary or unacceptable harm caused by certain weapons,” pleads on its website: “Action by states is needed now to develop an understanding over what is unacceptable when it comes to the use of autonomous weapons systems, and to prohibit these activities and development through an international treaty.”
