What Do All Of These Faces Have In Common?

Below is a series of photos. Look at them carefully.

[Photo series: human faces]

Tell me, what do they all have in common?

Were you able to guess?

The answer is simple, and disturbing.

None of these people exist. They were all generated by an artificial intelligence program.

The most impressive magic trick AI’s learned in the modern era is the one where it conjures people out of thin air. And there’s no better machine learning-powered wizardry than Nvidia’s.

Nvidia is a company most lauded for its impressive graphics cards, but it is also one of the most ingenious companies working in deep learning today. A couple of years back, TNW reported on a new generative adversarial network (GAN) the company developed. At the time, it was an amazing example of how powerful deep learning had become.

This was cutting-edge technology barely a year ago. Today, you can use it on your phone. Just point your web browser to “thispersondoesnotexist.com” and voila: the next time your grandmother asks when you’re going to settle down with someone nice, you can conjure up a picture to show her.

A GAN is a neural network that works by splitting an AI’s workload into separate parts. One set of algorithms (a generator) tries to create something – in this case a human face – while another set (a judge, formally called a discriminator) tries to determine whether the image is real or fake. If the judge figures out it’s fake, we have two more weeks of AI winter.

That’s not true. Just making sure you were still with me. Actually, if the judge figures out the image is a fake, the generator starts over. Once the judge is fooled, an AI developer checks out the results and determines if the algorithms need tweaking. Nvidia didn’t invent the GAN — the GANfather, Google’s Ian Goodfellow, did that. But the company sure seems to be perfecting it.
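For the curious, the tug-of-war described above fits in a few dozen lines of code. Here’s a minimal sketch of a GAN training loop in PyTorch – to be clear, the tiny fully-connected networks, the sizes, and the random stand-in “real” images below are invented for illustration and bear no relation to Nvidia’s actual model:

```python
# Minimal GAN training-loop sketch (illustrative only).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g. a flattened 28x28 image

# Generator: maps random noise to a fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())

# Discriminator (the "judge"): outputs the probability the input is real.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, image_dim)   # stand-in for a batch of real photos
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # 1) Train the judge to tell real from fake.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to fool the judge: it "wins" when
    #    the judge labels its fakes as real.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_G.step()
```

The generator only ever improves by failing: every time the judge catches a fake, the resulting gradient nudges the generator toward fakes that are harder to catch.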

Nvidia’s recent effort – you can read the paper here – isn’t just the same old neural network with a fancy web interface. Its layers have been upgraded, tweaked, and given robot therapy to increase their self-esteem (the last one’s another lie). According to the research team:

Motivated by style transfer literature, we re-design the generator architecture in a way that exposes novel ways to control the image synthesis process. Our generator starts from a learned constant input and adjusts the “style” of the image at each convolution layer based on the latent code, therefore directly controlling the strength of image features at different scales.

What that means is: you can press refresh all you want and it’s going to keep spitting out the eerily convincing faces of people who do not exist. Damn that’s creepy.
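Loosely translated into code, the mechanism the researchers describe looks something like this: the generator starts from a learned constant tensor, and the latent code sets a per-channel scale and shift (a “style”) at every convolution layer. The sketch below is a toy illustration of that idea in PyTorch, not Nvidia’s architecture – every class name and dimension in it is invented:

```python
# Toy sketch of a style-based generator (StyleGAN-like idea).
# A learned constant input is modulated at each layer by a "style"
# derived from the latent code (adaptive instance normalization).
import torch
import torch.nn as nn

class StyledBlock(nn.Module):
    def __init__(self, channels, latent_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.InstanceNorm2d(channels)           # normalize features
        self.style = nn.Linear(latent_dim, channels * 2)  # latent -> scale & shift

    def forward(self, x, w):
        x = self.conv(x)
        scale, shift = self.style(w).chunk(2, dim=1)
        # Per-channel modulation: the latent code controls feature strength.
        return self.norm(x) * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

class TinyStyleGenerator(nn.Module):
    def __init__(self, channels=64, latent_dim=128):
        super().__init__()
        # The generator starts from a learned constant, not from noise.
        self.const = nn.Parameter(torch.randn(1, channels, 4, 4))
        self.blocks = nn.ModuleList(StyledBlock(channels, latent_dim) for _ in range(3))
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, w):
        x = self.const.expand(w.size(0), -1, -1, -1)
        for block in self.blocks:
            x = block(x, w)  # the same latent code styles every layer
        return self.to_rgb(x)

w = torch.randn(2, 128)           # latent codes for two "faces"
images = TinyStyleGenerator()(w)  # -> (2, 3, 4, 4) toy images
```

Because the same latent code touches every layer, nudging it changes both coarse and fine features, which is the “control at different scales” the paper describes.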

I have constantly warned the public about the dangers of A.I. Aside from the obvious threat of robotics combined with weapons, there is the danger of being able to manufacture photographic and video evidence. Both are long-established forms of court-admissible evidence, and their integrity is now directly under attack.

Consider a situation where there is a video of an actual crime taking place. What is to stop a man who gets his hands on this face-swapping technology from placing the face of, say, somebody he dislikes or is feuding with onto the face of the perpetrator? The man whose face was swapped in could say “I didn’t do anything,” but the video evidence would appear to prove otherwise.

What about false crimes? For example, somebody sets up a camera, films himself “committing a crime” against his own property, and then swaps onto his own face the face of somebody he holds a grudge against. It is a very convenient way to destroy somebody’s life in a cruel act of revenge: done well, the footage could serve as seemingly incontrovertible evidence in court and be used to convict a man regardless of his insistence that “I didn’t do anything.”

Now one might say that the police will be able to use digital forensics to detect whether an alteration has been made. This might work in the early stages, but given how fast the technology is advancing, particularly in AI, and how often researchers admit they cannot fully explain how these models reach their outputs, what is to say it will not reach a point where forged evidence is nearly impossible to expose without considerable, concentrated work, perhaps more work than a defense attorney’s team has the time or money to afford?

Now this is just for “simple” crimes. What if this were taken to another level of severity? Say, for example, to the level of nations: using advanced AI technology, a man could create the appearance that a foreign leader is speaking, giving commands or making statements against another nation that could cause a war, when the reality is otherwise.

The potential for abuse is almost unlimited. In the hands of a person with malicious intentions, this technology makes it all too easy to fabricate evidence that could destroy a person’s life, cause a war, start a revolution, or any number of other things.

The ancient Greeks had the tale of Pandora’s box, which once opened could not be closed. Artificial intelligence is similar, for now that it has been released, it cannot be “put back.” Nor is it something only the US is doing; all of the governments of the world are pursuing it, and it will be a major factor in a coming world war. The future in that sense is very much the past, with 2034 possibly looking more like 1984 as the human race stands, in the words of the title of Aldous Huxley’s famous book, on the threshold of a brave new world, strange yet bearing the same evils as those of the Garden of Eden, and destined to yield the same ends.

Prepare yourselves accordingly.
