‘UK Student And Writer’ Who Accused People Of Being Terrorists Exposed As Deepfake, Non-Existent Person

Reuters reported a fascinating story about an “Oliver Taylor,” a man who claimed to be a student and wrote numerous articles for various newspapers. He has since been exposed by Reuters, working alongside an Israeli deepfake detection company, as a deepfake, and the real identity of the person behind the original posts remains unknown.

Oliver Taylor, a student at England’s University of Birmingham, is a twenty-something with brown eyes, light stubble, and a slightly stiff smile.

[Photo caption: A combination photograph showing an image purporting to be of British student and freelance writer Oliver Taylor (L) and a heat map of the same photograph produced by Tel Aviv-based deepfake detection company Cyabra is seen in this undated handout photo obtained by Reuters. The heat map, which was produced using one of Cyabra’s algorithms, highlights areas of suspected computer manipulation. The digital inconsistencies were one of several indicators used by experts to determine that Taylor was an online mirage.]

Online profiles describe him as a coffee lover and politics junkie who was raised in a traditional Jewish home. His half dozen freelance editorials and blog posts reveal an active interest in anti-Semitism and Jewish affairs, with bylines in the Jerusalem Post and the Times of Israel.

The catch? Oliver Taylor seems to be an elaborate fiction.

His university says it has no record of him. He has no obvious online footprint beyond an account on the question-and-answer site Quora, where he was active for two days in March. Two newspapers that published his work say they have tried and failed to confirm his identity. And experts in deceptive imagery used state-of-the-art forensic analysis programs to determine that Taylor’s profile photo is a hyper-realistic forgery – a “deepfake.”

Who is behind Taylor isn’t known to Reuters. Calls to the U.K. phone number he supplied to editors drew an automated error message and he didn’t respond to messages left at the Gmail address he used for correspondence.

Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.

In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”

Masri and Barnard were taken aback by the allegation, which they deny. But they were also baffled as to why a university student would single them out. Masri said he pulled up Taylor’s profile photo. He couldn’t put his finger on it, he said, but something about the young man’s face “seemed off.”

Six experts interviewed by Reuters say the image has the characteristics of a deepfake.

“The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.

Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”

“I’m 100 percent sure,” he said.

The Taylor persona is a rare in-the-wild example of a phenomenon that has emerged as a key anxiety of the digital age: The marriage of deepfakes and disinformation.

The threat is drawing increasing concern in Washington and Silicon Valley. Last year House Intelligence Committee chairman Adam Schiff warned that computer-generated video could “turn a world leader into a ventriloquist’s dummy.” Last month Facebook announced the conclusion of its Deepfake Detection Challenge – a competition intended to help researchers automatically identify falsified footage. Last week online publication The Daily Beast revealed a network of deepfake journalists – part of a larger group of bogus personas seeding propaganda online.

Deepfakes like Taylor are dangerous because they can help build “a totally untraceable identity,” said Dan Brahmy, whose Israel-based startup Cyabra specializes in detecting such images.

Brahmy said investigators chasing the origin of such photos are left “searching for a needle in a haystack – except the needle doesn’t exist.”

Taylor appears to have had no online presence until he started writing articles in late December. The University of Birmingham said in a statement it could not find “any record of this individual using these details.” Editors at the Jerusalem Post and The Algemeiner say they published Taylor after he pitched them stories cold over email. He didn’t ask for payment, they said, and they didn’t take aggressive steps to vet his identity.

“We’re not a counterintelligence operation,” Algemeiner Editor-in-chief Dovid Efune said, although he noted that the paper had introduced new safeguards since.

After Reuters began asking about Taylor, The Algemeiner and the Times of Israel deleted his work. Taylor emailed both papers protesting the removal, but Times of Israel Opinion Editor Miriam Herschlag said she rebuffed him after he failed to prove his identity. Efune said he didn’t respond to Taylor’s messages.

The Jerusalem Post and Arutz Sheva have kept Taylor’s articles online, although the latter removed the “terrorist sympathizers” reference following a complaint from Masri and Barnard. The Post’s editor-in-chief, Yaakov Katz, didn’t respond when asked whether Taylor’s work would stay up. Arutz Sheva editor Yoni Kempinski said only that “in many cases” news outlets “use pseudonyms to byline opinion articles.” Kempinski declined to elaborate or say whether he considered Taylor a pseudonym.

Oliver Taylor’s articles drew minimal engagement on social media, but the Times of Israel’s Herschlag said they were still dangerous – not only because they could distort the public discourse but also because they risked making people in her position less willing to take chances on unknown writers.

“Absolutely we need to screen out impostors and up our defenses,” she said. “But I don’t want to set up these barriers that prevent new voices from being heard.” (source)

I will leave further speculation about what happened here until more evidence is presented. The story in itself (that major news outlets were publishing articles from a man who never existed) is extremely dangerous.

I started discussing the emergence of “deepfakes” in January 2018, when I noted that, although they were then being used in the adult industry, they would soon cross into the mainstream and become a direct threat to the viability of evidence and the entire justice system. By undermining the standards of trust that support society, they could allow the innocent to be convicted of crimes they did not commit while liberating the guilty.

This incident was a warning bell for the world. If this person, whoever he is, was able to stir up controversy using deepfakes while leaving no trace of his real identity, it would be incredibly easy for people to incite the human race to war or other horrible conflicts on the basis of manufactured “proof.”

This truly is the edge of a Huxleyan future, and one that should concern everybody.

