Artificial intelligence technologies are finding many applications in social life. But one application is emerging strongly: the use of AI to catch people cheating in everyday processes:
Artificial intelligence is putting new teeth on the old saw that cheaters never prosper.
New companies and new research are applying cutting-edge technology in at least three different ways to combat cheating — on homework, on the job hunt and even on one’s diet.
In California, a new company called Crosschq is using machine learning and data analytics to help employers with the job reference process. The technology is meant to help companies avoid bad hires and compare how job candidates present themselves with how their references see them.
In Pennsylvania, Drexel University researchers are developing an app that can predict when dieters are likely to lapse on their eating regimen, based on the time of day, the user’s emotions — even the temperature of their skin and heart rate.
And in Denmark, University of Copenhagen professors say they can spot cheating on an academic essay with up to 90% accuracy. The results add to the growing amount of technology that pinpoints plagiarism in schoolwork.
These are a few of the ways algorithms, analytics and machine learning are pervading the lives of consumers and workers. Darrell West, founding director of the Brookings Institution’s Center for Technology Innovation, said the use of artificial intelligence is widespread. It powers robo-advisers like Betterment and Wealthfront, it assists in medical diagnoses, and it aids school systems when they sort through students’ ranked preferences for charter schools.
“Algorithms help manage information and can help reveal insights not immediately apparent to humans,” West said. He’s not surprised the technology is being deployed against dishonesty. “Artificial intelligence can detect cheating just because it can compare what we say with what we do.”
There are plenty of places for gaps between action and words, he said. “Everybody lies to themselves about various things,” said West. “We lapse, we snack, we sneak that candy bar. … People want to present an image of themselves that’s not exactly true.”
Artificial intelligence won’t cure the human weakness to fudge facts and cut corners — and the technology itself isn’t foolproof. “The big challenges are privacy, fairness and transparency,” West said. “No algorithm is perfect,” he said, noting that its conclusions depended deeply on the data it received in the first place. “You have to make sure the conclusion reached by AI actually is true in fact.” One example of that issue: facial recognition algorithms have had trouble recognizing darker skin tones and women’s faces, in part because the algorithms are trained with images of lighter-skinned male faces.
Mike Fitzsimmons, Crosschq’s co-founder and CEO, was partly inspired to start the business by bad hires he’d made in the past, he told MarketWatch. “We believe there is so much bias in the old way of doing this,” he said, noting how job candidates can find friends and past colleagues who will overhype the candidate.
The program has candidates rate themselves on various factors like attention to detail and self-motivation, and also has their references rate the candidate on the same traits. The rating system is on a five-point, “OK to great” scale. The technology then compares the ratings and triangulates the results with the job skills the employer values. All the reference scores are then averaged.
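The comparison described above can be sketched in a few lines. This is a hypothetical illustration only — the trait names, weights and gap formula below are invented to show the general idea, not taken from Crosschq’s actual product:

```python
# Illustrative sketch of a reference-vs-self comparison on a 1-5
# ("OK" to "great") scale. All names and weights are invented.

def reference_gap(self_ratings, reference_ratings, weights):
    """For each trait, average the reference scores, then report the
    gap between the candidate's self-rating and that average, weighted
    by how much the employer values the trait."""
    gaps = {}
    for trait, self_score in self_ratings.items():
        ref_scores = reference_ratings[trait]
        ref_avg = sum(ref_scores) / len(ref_scores)
        gaps[trait] = (self_score - ref_avg) * weights.get(trait, 1.0)
    return gaps

self_ratings = {"attention_to_detail": 5, "self_motivation": 4}
reference_ratings = {
    "attention_to_detail": [3, 4, 3],
    "self_motivation": [4, 5, 4],
}
weights = {"attention_to_detail": 1.0, "self_motivation": 0.5}
print(reference_gap(self_ratings, reference_ratings, weights))
```

A large positive gap would suggest a candidate rates themselves more highly than their references do — the kind of discrepancy the article says the technology surfaces.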
The company was founded last year and tested its product until formally launching last week. Fitzsimmons said Crosschq’s customers include companies like the ticket platform Eventbrite and personal finance website NerdWallet. The goal is to expand further into the private sector, and also into the public sector.
Fitzsimmons noted the Crosschq technology wasn’t passing judgment on whether to hire a candidate. That was the employer’s call, he said. Crosschq developers tried to make the technology as transparent as possible, Fitzsimmons said. “What’s not fair is what’s happening here already,” he said of the reference process.
Artificial intelligence is coming into the hiring process in other ways. A recent survey from the large employment law firm Littler Mendelson said 37% of polled companies were using artificial intelligence. The technology was most commonly used to screen resumes; 25% said they used it for the task. Eight percent said they used it to analyze applicant body language, tone and facial expressions during interviews.
Cheating on your diet
Approximately 45 million Americans diet each year, but many don’t lose weight because they backslide, said Drexel University psychology professor Evan Forman. While there are plenty of apps telling users the foods they should be eating and the activities they should be doing, that only goes so far, said Forman, who is director of the school’s Center for Weight, Eating and Lifestyle Science.
“It’s easy to understand what change you ought to make. It’s much more difficult to actually make those changes and keep on making them,” he said.
Enter OnTrack, the app Forman and others have been developing.
Harnessing user data, the app learns when diet lapses are statistically likely and then warns users right before the next one could happen.
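OnTrack’s actual model is not public, so the following is only a sketch of the general idea — scoring lapse risk from the kinds of signals the article mentions (time of day, self-reported stress, heart rate) with invented features and coefficients, and warning the user when the score crosses a threshold:

```python
import math

# Invented coefficients for illustration; a real system would learn
# these from each user's logged lapses.
WEIGHTS = {"evening": 1.2, "stressed": 0.9, "heart_rate_elevated": 0.4}
BIAS = -2.0

def lapse_probability(features):
    """Logistic score: higher when conditions that preceded past
    lapses (time of day, stress, physiology) are currently present."""
    z = BIAS + sum(WEIGHTS[name] for name, active in features.items() if active)
    return 1 / (1 + math.exp(-z))

def should_warn(features, threshold=0.5):
    """Trigger a just-in-time warning before a likely lapse."""
    return lapse_probability(features) >= threshold

print(should_warn({"evening": True, "stressed": True, "heart_rate_elevated": True}))
```

The point of the design is the timing: rather than generic diet advice, the warning fires only when the learned risk score says a lapse is statistically likely.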
Forman hopes OnTrack will be publicly available in the next year or two. Though users had to manually input data in early trials — like telling the program if they felt stressed — Forman said the end goal is to make OnTrack as automated as possible. For example, participants using new versions of OnTrack are incorporating data from sensors including Fitbits to measure things like heart rate and even skin temperature, he said.
Forman used OnTrack himself to try breaking his post-dinner habit of snacking on Trader Joe’s tortilla chips. It worked — at least while Forman used the app. He knew it was just a machine acting on data he supplied. Still, it felt like “someone helping me do what I wanted to do,” he said.
He understands if someone would think that could get creepy. But Forman said the goal wasn’t forcing someone to do something against their will. “This was an extension of you helping you do what you want to do,” he said.
Cheating on school assignments
Late last month, Danish researchers unveiled a program that they say can determine with 90% accuracy whether a high school research paper was written by the student handing in the assignment, or someone else.
“Ghostwriter” compares students’ assignment with their past work, the research said. The program scrutinizes writing style and word choice, and then sees how the paper measures up against the student’s past work.
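The researchers’ actual model is not reproduced here; as a stand-in for “scrutinizing word choice against past work,” a common stylometric baseline is cosine similarity between word-frequency vectors, sketched below with made-up example text:

```python
from collections import Counter
import math

# Baseline stylometry sketch: compare a new essay's word-frequency
# profile against the same student's past writing. This is a generic
# technique, not the Danish researchers' actual "Ghostwriter" model.

def word_vector(text):
    """Bag-of-words frequency vector for a piece of text."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two frequency vectors (0 to 1)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

past_work = word_vector("the essay i wrote last term about rivers and trade")
new_essay = word_vector("the essay about rivers and trade in the last century")
similarity = cosine_similarity(past_work, new_essay)
# A submission scoring far below the student's usual range would be
# flagged for review as possibly written by someone else.
print(round(similarity, 2))
```

A production system would use many more style features (sentence length, function words, punctuation habits) than raw word counts, but the comparison-against-past-work structure is the same.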
There are already established companies using artificial intelligence to spot bogus schoolwork, such as Turnitin. Over 15,000 K-12 and higher education institutions in 153 countries use the Oakland, Calif., company’s products, according to its website. “It’s common in higher education to check on student papers. We all know some students are not diligent,” said the Brookings Institution’s West.
But the Danish researchers said their “Ghostwriter” technology could also be applied elsewhere.
They said it could be used to spot forged documents during police work, and it could also be applied to social media. Sites like Twitter and Facebook, even Amazon, are grappling with misinformation from internet trolls, automated bots and fake accounts.
The researchers said they’ve been using the “Ghostwriter” technology to spot cheating in tweeting. They hope to determine whether it’s a genuine user, a chatbot or an imposter behind a tweet. That’s something that Twitter itself has had trouble doing. Last fall, Twitter CEO Jack Dorsey said even though his company uses machine learning to find fake accounts, even that advanced technology can’t catch all of them. He said the company was considering labeling accounts run by chatbots. “We can certainly label and add context to accounts that come through our API,” Dorsey said. “Where it becomes a lot trickier is where automation is actually scripting our website to look like a human actor. So as far as we can label and we can identify these automations we can label them — and I think that is useful context.” A Twitter spokeswoman declined to comment.
I have emphasized for years the danger that artificial intelligence poses to the viability of evidence. As the technology has shown, beginning with its use in the pornography industry, it can edit photos and videos so thoroughly that one cannot distinguish a fake from the real. This will naturally undermine the viability of all video and photo evidence in time, because anything will be able to be created, and thus used to manufacture evidence against a target if desired. I warned that one should expect this to be used to convict innocent people (such as political enemies) in time.
But there is another danger of artificial intelligence I have not discussed, and it is not one that most would initially perceive as a danger. Perhaps even more dangerous than the manufacture of evidence itself is the ruthless efficiency with which AI operates.
One needs to remember that AI is simply a computer. It is no different from any of the “calculating” machines that have ever existed, albeit billions of times fancier than the most rudimentary, but in the end it sees the world in terms of “positive” (1) or “negative” (0), on or off, binary, and that is how it processes. It has no soul, it cannot understand abstract concepts such as mercy, justice, or love, and at the very best it can only give the impression that it understands. A comparable example is how, in the Catholic Church, there can be evil men who attempt to give the impression that doctrine has changed when it truly has not.
AI is very efficient because of this. It can perform “calculating” tasks faster than any human ever could, simply because of the nature of the machine. However, this does not mean that it is a replacement for a human being. Yet this is what businesses and governments clearly want it to be, judging by their actions. They have made clear for centuries, in particular beginning with the First Industrial Revolution of the early 19th century, that they do not care at all for the good of man, and that they see him as a cog in a machine that can be easily replaced or disposed of.
Artificial intelligence is simply a continuation of the same eugenic processes and philosophy on which the Industrial Revolution was founded, the difference being that it has advanced further than in the past. The eventual goal will be to remove working men as much as possible, ideally all of them, so that those in power could theoretically have a “free” labor force and only a market of consumers.
The obvious problem with this is, of course, that without money nobody can consume. This would then naturally bring about the next stage of this “process”: a justification for the mass extinction (genocide) of the human race in order to “purge” it of the “useless” people, meaning anybody who is not rich and powerful, and in so doing to create a “perfect world” where the survivors can live as demigods with their technological toys and possibly a small army of human “slaves” to serve their will. It is a vision of the Tower of Babel that Nimrod and his followers, and all the tyrants throughout history who have dreamed of overthrowing God and setting themselves up as the supreme object of worship, could only dream of, and yet the potential for this is now closer than ever.
Now to make it perfectly clear (although it should not need to be made so), God is not going to be overthrown. The Bible makes this very clear in the Psalms, that the Lord finds it comical that men would think they can seriously do this, and He laughs at them from His throne:
Why do the nations protest and the peoples conspire in vain?
Kings on earth rise up and princes plot together against the LORD and against his anointed one:
“Let us break their shackles and cast off their chains from us!”
The one enthroned in heaven laughs; the Lord derides them,
Then he speaks to them in his anger, in his wrath he terrifies them:
“I myself have installed my king on Zion, my holy mountain.”
I will proclaim the decree of the LORD, he said to me, “You are my son; today I have begotten you. Ask it of me, and I will give you the nations as your inheritance, and, as your possession, the ends of the earth. With an iron rod you will shepherd them, like a potter’s vessel you will shatter them.”
And now, kings, give heed; take warning, judges on earth.
Serve the LORD with fear; exult with trembling, Accept correction lest he become angry and you perish along the way when his anger suddenly blazes up.
Blessed are all who take refuge in him! (Psalm 2)
God will eventually get His way (He always does), and in the meantime He will give men the choice to obey or disobey Him, to their reward or punishment.
The likelihood is that man is not going to listen. Following the line of logic above, this brings us to the next step in how men are going to approach artificial intelligence in its social use, the one I said may be more dangerous than evidence manufacturing: using it to “weed out undesirables” in a business or group.
As noted above, AI is “ruthlessly efficient”. It makes no room for “error” from any “plan”, and it easily “exposes” people and processes that deviate. On the positive side, this can be used to correct errors or redundancies. On the contrary side, it will naturally be used to “root out” and destroy people who do not fit a particular model of behavior or thought. It is a form of what one might summarize under the Nyborgian term “positive eugenics”, where non-conformists have no place in what is perceived to be a uniform system.
If this sounds like something from the Soviet Union or a communist nation, it absolutely is. It is a tool that, in the hands of a person with socialistic inclinations (be they national or international), is a way to reduce man to a mere tool whose “usefulness” is judged by his perceived obedience to a series of rules set by man and enforced by a computer slave driver. No longer is there a slave class overseen by a fellow slave, who may have mercy on others, but a faceless, soulless machine that only knows “obedience” or “failure.”
Vladimir Ulyanov (Lenin), Adolf Hitler, Iosif Jughashvili (Stalin), Feliks Dzerzhinsky, Genrikh Yagoda, Jakub Berman, Nikolai Yezhov, Lavrenty Beria, Saloth Sar (Pol Pot), and any dictator or man of power who sought to judge the worth of a man based on his perceived efficiency to a human ideal made divine, to the derogation and rejection of God, would have loved a tool such as this, and yet the tool is here and nobody cares about the danger it poses.
Make no mistake, this is the beginning shadow of the world warned about by Huxley, where men are “hatched” from artificial wombs (another popular topic of enthusiastic discussion on 4chan and other websites), and men are judged by their efficiency in a neo-caste system run by technocratic and eugenic overlords who care nothing for human life.
The Huxleyan nightmare, as with any process, does not happen all at once, but in progressive stages until it is too late to stop.
Christians must be hopeful, for if it really does become hopeless, that is the moment when the promise of ages will be fulfilled and God will return for His people. But before that comes to pass, men must pass through the fiery chasm on the road to the crucifixion of the human race by those who seek the next phase of their own evolution.
The future is only beginning.