Recently, the Pentagon declared that the military must "accelerate" the process of "injecting" artificial intelligence technologies into battlefield equipment, according to a report:
The Pentagon made public for the first time on Feb. 12 the outlines of its master plan for speeding the injection of artificial intelligence (AI) into military equipment, including advanced technologies destined for the battlefield.
By declassifying key elements of a strategy it had adopted last summer, the Defense Department appeared to be trying to address disparate criticism: that it was either not being heedful enough of the risks of using AI in its weaponry, or not being aggressive enough in the face of rival nations' efforts to embrace AI.
The 17-page strategy summary said that AI — a shorthand term for machine-driven learning and decision-making — held out great promise for military applications, and that it “is expected to impact every corner of the Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others.”
It depicted AI’s embrace in solely positive terms, asserting that “with the application of AI to defense, we have an opportunity to improve support for and protection of U.S. service members, safeguard our citizens, defend our allies and partners, and improve the affordability and speed of our operations.”
Stepping back from AI in the face of aggressive AI research efforts by potential rivals would have dire — even apocalyptic — consequences, it further warned. It would “result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”
The publication of the Pentagon strategy's core concepts comes eight months after a Silicon Valley revolt against the military's premier AI research program. After thousands of Google employees signed a petition protesting the company's involvement in an effort known as Project Maven, meant to speed up the analysis of videos taken by drones so that military personnel could more readily identify potential targets, Google announced on June 1 that it would withdraw from the project.
But the release of the strategy makes clear that the Trump administration isn’t having second thoughts about the utility of AI. It says the focus of the Defense Department’s Joint Artificial Intelligence Center (JAIC), created last June, will be on “near-term execution and AI adoption.” And in a section describing image analysis, the document suggests there are some things machines can do better than humans can. It says that “AI can generate and help commanders explore new options so that they can select courses of action that best achieve mission outcomes, minimizing risks to both deployed forces and civilians.”
The JAIC is still adding staff, and its new director, Lt. Gen. Jack Shanahan, was confirmed by the Senate only two months ago. Shanahan's last posting before taking over the JAIC was running Project Maven. While the center's 2019 budget was only $90 million, it is responsible for overseeing hundreds of AI programs, each costing more than $15 million, and total Defense Department spending on AI over the next five years has been projected at $1.7 billion.
The summary repeatedly states that the military has an ethical obligation to use AI conscientiously, by publicly discussing guidelines for its use and by ensuring that it is employed only when safe. But what counts as safe is not precisely defined in the unclassified summary, which instead reiterates an earlier, vague policy that the department will require "appropriate levels of human judgment over the use of force" by machines.
The strategy does call for the development of new defense "principles" to guide how the military will use AI, mirroring what companies like Google have done in announcing ethics guidelines for the use of their own technology. The Pentagon has said it will develop these principles through the Defense Innovation Board, an advisory group of outside technology experts, including some top Silicon Valley executives, which will hold meetings across the country as part of its outreach. The board is due to give the secretary of defense its recommended principles this summer.
During his two years in office, former Secretary of Defense James Mattis repeatedly said that his main goal was to make the military "more lethal," including through the use of AI. But groups like the Campaign to Stop Killer Robots have been promoting an arms-control ban on autonomous weapons technologies and working to build public support for it. The group sponsored a poll, released in January, that found 52 percent of Americans opposed to armed weapons systems that could choose to kill.
Although the strategy summary describes other countries, particularly Russia and China, as investing heavily in AI and "eroding" the U.S. technical advantage, others warn that the U.S. is already behind. "I think that both Russia and China are in a better position than we are. I think they're ahead of us," Senate Armed Services Committee Chairman James Inhofe, R-Okla., told reporters Tuesday morning before the release of the strategy.
China's State Council released a report in 2017 calling for the country to become the global leader in AI by 2030. The plan calls for broad applications of AI and the development of a domestic AI industry targeted to be worth $150 billion.
Despite his concern, Inhofe, who shapes defense spending through Congress’s annual defense policy bill, said that AI wasn’t his top priority. “There are other things that need to be done first,” he said.
The summary was released a day after President Trump announced the American AI Initiative, which focuses on broader commercial interest in artificial intelligence. Neither of the two documents outlined any new proposed funding.