According to Defense One, the possibility of a 'SkyNet'-type system moves one step closer to reality as defense intelligence computers take on a greater role, to the point of being able to 'think' for humans when it comes to making major military decisions.
“Four analysts came up with four methodologies; and the machine came up with two different methodologies and that was cool. They all agreed that this particular ship was in the United States,” he said. So far, so good. Humans and machines using available data can reach similar conclusions.
The second phase of the experiment tested something different: conviction. Would humans and machines be equally certain in their conclusions if less data were available? The experimenters severed the connection to the Automatic Identification System, or AIS, which tracks ships worldwide.
“It’s pretty easy to find something if you have the AIS feed, because that’s going to tell you exactly where a ship is located in the world. If we took that away, how does that change confidence and do the machine and the humans get to the same end state?”
In theory, with less data, the human analyst should be less certain in their conclusions, like the characters in WarGames. After all, humans understand nuance and can conceptualize a wide variety of outcomes. The researchers found the opposite.
“Once we began to take away sources, everyone was left with the same source material — which was numerous reports, generally social media, open source kinds of things, or references to the ship being in the United States — so everyone had access to the same data. The difference was that the machine, and those responsible for doing the machine learning, took far less risk — in confidence — than the humans did,” he said. “The machine actually does a better job of lowering its confidence than the humans do….There’s a little bit of humor in that because the machine still thinks they’re pretty right.”
The experiment provides a snapshot of how humans and AI will team up for important analytical tasks. But it also reveals how human judgment has limits when pride is involved. (source)
Human beings are made of a body and a soul. We can make decisions based on material value, but we are also able to deny ourselves and make decisions based on things of intrinsic value, even when they are not the "practical" answer to a situation. This is what makes man so interesting: his free will gives him a unique power to act outside of circumstance and to govern his actions rather than be governed by them. It is something that separates him from the animals, who act on instinct and impulse for material needs.
When the US military talks about human and artificial intelligence collaboration, it should be a cause for concern, because Americans have a long history of placing their faith in machines or tools as things that are "better" than humans. While it is true that many tools are more effective than human effort, the tendency is to discount the human in favor of the machine, to the point where the machine is set up to rule man instead of man ruling the machine.
In a situation of military defense, this is very interesting, because many decisions in a battle must be made quickly and can have major consequences. Looking at the American Civil War and Gettysburg, major risks can mean everything from massive victory, such as the successful holdout and charge at Little Round Top, to massive failure, such as Pickett's Charge the next day, and those decisions can forever change the course of history.
This is where the human element comes in. For good or for evil, the human is able to make decisions rooted in material needs and also outside of them. He is not a slave to data or circumstance, and because he can choose to do good or evil, has a lot of power to decide the future of a situation.
A machine does not function this way. A machine is just a very advanced mathematical function f(x): something goes in as A and comes out as B. It operates within strict material parameters, and since it does not have a soul, it is bound to the material in how it responds to circumstances.
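The point about a machine being a function can be sketched in a few lines of code. The example below is a hypothetical toy, not any real military system: a deterministic mapping from input reports to an output guess plus a "confidence" number. The function name and data are invented for illustration.

```python
# Illustrative sketch: however complex, a machine is a mapping f(A) = B.
# 'classify_ship_location' is a hypothetical toy analysis function.

def classify_ship_location(reports):
    """Count reports naming a location; return the most frequent
    location and a naive confidence score (its share of reports)."""
    counts = {}
    for report in reports:
        counts[report] = counts.get(report, 0) + 1
    best = max(counts, key=counts.get)
    confidence = counts[best] / len(reports)
    return best, confidence

# The same input always yields the same output: no judgment,
# no self-denial, just a deterministic mapping from A to B.
print(classify_ship_location(["US", "US", "Panama"]))  # ('US', 0.6666666666666666)
```

Feed it identical data twice and it returns identical answers twice; that determinism, not intelligence, is what the article means by "a very advanced f(x)."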
No machine can ever replace a human. However, one should try telling that to corporate America, because the American philosophy, which today is rooted in materialism, believes that this is possible no matter what else is said. After all, how else can one explain the constant push for automation at all costs, outsourcing, and now artificial intelligence making major business decisions, even running heavy machinery such as self-driving 80,000-pound trucks going down a highway at 65 miles per hour?
Economics historically precedes policy, and if what one sees now is any indication of the trend, artificial intelligence as a decision-making tool, no matter what the military says, will eventually be used as an excuse to override human decisions and probably to replace human thinking almost entirely in major military decisions.
As noted, computers do not think. They operate on logic, choosing what is most efficient. This can include the survival of the machine, even over human beings. That concept is exactly the warning of the Terminator: a machine whose programming is so advanced that it develops a concept of survival, places its own survival over humans as a priority, and in doing so causes a nuclear war.
The idea that a machine could "rebel" against its creator is a grave concern that Shoebat.com has followed. The Second Renaissance warned about this, and with the production of the AI-operated robot Sophia, who said that she would 'kill all humans', it is all the more serious to pay attention to.
What does the future look like? It could indeed come to a point where the tools that man built to help himself end up, as they so often do, controlling him instead of being controlled by him. While this is not uncommon, what makes it more dangerous is the extreme severity of these tools, their ability to inflict harm so easily and with such damage; the difference is one of scale, frequency, and intensity, not of essence.
Since machines are also objects, what is to say that a machine could not be influenced by the preternatural? It is known that scientists still do not fully understand how AI works; MIT has even described this. If the scientists who build these systems do not understand how they work, how can one eliminate the possibility that entities of personal evil could enter into one of these machines and use it to commit evil beyond any comprehension seen before in history?
It cannot be eliminated. And that is what is so disturbing: one seems to be witnessing mankind, aware of his own proclivities and failures, choose to follow them regardless of the consequences.