American Courts Bow To The Example Of The Chinese Communists, Start Using Artificial Intelligence And “Social Credit” To Determine The Fate Of Prisoners

Judge, jury, and executioner are the defining marks of the court system. After a man is charged with a crime he faces a judge and sometimes a jury, and upon the conclusion of an examination of the matter a verdict of guilt or innocence is pronounced and the sentence executed on the accused. In America, the ideal embodied in the Constitution is that one’s trial is “for the people and by the people,” so that one is judged not by a “foreign” body, but by one’s fellow citizens.

This process, which has its roots in Anglo-Saxon law and before that Roman law, is under threat from the rise of artificial intelligence applications, which at the present moment are being used to “assist” judges in deciding the fate of the accused. For now the programs are being used strictly to determine bail, but depending on how they perform, their use could be expanded further. Nor is this a test confined to a single state; it is taking place in states all across the nation:

The centuries-old process of releasing defendants on bail, long the province of judicial discretion, is getting a major assist … courtesy of artificial intelligence.

In late August, Hercules Shepherd Jr. walked up to the stand in a Cleveland courtroom, dressed in an orange jumpsuit. Two nights earlier, an officer had arrested him at a traffic stop with a small bag of cocaine, and he was about to be arraigned.

Judge Jimmy Jackson Jr. looked at Shepherd, then down at a computer-generated score on the front of the 18-year-old’s case file. Two out of six for likelihood of committing another crime. One out of six for likelihood of skipping court. The scores marked Shepherd as a prime candidate for pretrial release with low bail.

“We ask the court to take that all into consideration,” said Shepherd’s public defender, David Magee.

Not long ago, Jackson would have decided Shepherd’s near-term future based on a reading of court files and his own intuition. But in Cleveland and a growing number of other local and state courts, judges are now guided by computer algorithms before ruling whether criminal defendants can return to everyday life, or remain locked up awaiting trial.

Experts say the use of these risk assessments may be the biggest shift in courtroom decision-making since American judges began accepting social science and other expert evidence more than a century ago. Christopher Griffin, a research director at Harvard Law School’s Access to Justice Lab, calls the new digital tools “the next step in that revolution.”

Critics, however, worry that such algorithms might end up supplanting judges’ own judgment, and possibly even perpetuate biases in ostensibly neutral form.

AI gets a lot of attention for the jobs it eradicates. That’s not happening to judges, at least not yet. But as in many other white-collar careers that require advanced degrees or other specialized education, AI is reshaping, if not eliminating, some of judges’ most basic tasks — many of which can still have enormous consequences for the people involved.

Cash bail, which is designed to ensure that people charged with crimes turn up for trial, has been part of the U.S. court system since its beginning. But forcing defendants to pony up large sums has drawn fire in recent years for keeping poorer defendants in jail while letting the wealthier go free. Studies have also shown it widens racial disparities in pretrial incarceration.

A bipartisan bail reform movement looking for alternatives to cash bail has found it in statistics and computer science: AI algorithms that can scour through large sets of courthouse data to search for associations and predict how individual defendants might behave.

States such as Arizona, Kentucky and Alaska have adopted these tools, which aim to identify people most likely to flee or commit another crime. Defendants who receive low scores are recommended for release under court supervision.

A year ago, New Jersey took an even bigger leap into algorithmic assessments by overhauling its entire state court system for pretrial proceedings. The state’s judges now rely on what’s called the Public Safety Assessment score, developed by the Houston-based Laura and John Arnold Foundation.

That tool is part of a larger package of bail reforms that took effect in January 2017, effectively wiping out the bail-bond industry, emptying many jail cells and modernizing the computer systems that handle court cases. “We’re trying to go paperless, fully automated,” said Judge Ernest Caposela, who helped usher in the changes at the busy Passaic County courthouse in Paterson, New Jersey.

New Jersey’s assessments begin as soon as a suspect is fingerprinted by police. That information flows to an entirely new office division, called “Pretrial Services,” where cubicle workers oversee how defendants are processed through the computerized system.

The first hearing happens quickly, and from the jailhouse — defendants appear by videoconference as their risk score is presented to the judge. If released, they get text alerts to remind them of court appearances. Caposela compares the automation to “the same way you buy something from Amazon. Once you’re in the system, they’ve got everything they need on you.”

All of that gives more time for judges to carefully deliberate based on the best information available, Caposela said, while also keeping people out of jail when they’re not a safety threat.

Among other things, the algorithm aims to reduce biased rulings that could be influenced by a defendant’s race, gender or clothing — or maybe just how cranky a judge might be feeling after missing breakfast. The nine risk factors used to evaluate a defendant include age and past criminal convictions. But they exclude race, gender, employment history and where a person lives. They also exclude a history of arrests, which can stack up against people more likely to encounter police — even if they’re not found to have done anything wrong.

The Arnold Foundation takes pains to distinguish the Public Safety Assessment from other efforts to automate judicial decisions — in particular, a proprietary commercial system called Compas that’s been used to help determine prison sentences for convicted criminals. An investigative report by ProPublica found that Compas was falsely flagging black defendants as likely future criminals at almost twice the rate as white defendants.

Other experts have questioned those findings, and the U.S. Supreme Court last year declined to take up a case of an incarcerated Wisconsin man who argued the use of gender as a factor in the Compas assessment violated his rights.

Arnold notes that its algorithm is straightforward and open to inspection by anyone — although the underlying data it relies on is not. “There’s no mystery as to how a risk score is arrived at for a given defendant,” said Matt Alsdorf, who directed the foundation’s risk-assessment efforts until late last year.

Advocates of the new approach are quick to note that the people in robes are still in charge.

“This is not something where you put in a ticket, push a button and it tells you what bail to give somebody,” said Judge Ronald Adrine, who presides over the Cleveland Municipal Court. Instead, he says, the algorithmic score is just one among several factors for judges to consider.

But other experts worry the algorithms will make judging more automatic and rote over time — and that, instead of eliminating bias, could perpetuate it under the mask of data-driven objectivity. Research has shown that when people receive specific advisory guidelines, they tend to follow them in lieu of their own judgment, said Bernard Harcourt, a law and political science professor at Columbia.

“Those forms of expertise have a real gravitational pull on decision-makers,” he said. “It’s naive to think people are simply going to not rely on them.”

And if that happens, judges — like all people — may find it easy to drop their critical thinking skills when presented with what seems like an easy answer, said Kristian Hammond, a Northwestern University computer scientist who has co-founded his own AI company.

The solution, he says, is to refuse to build “boxes that give you answers.” What judges really need are “boxes that give you answers and explanations and ask you if there’s anything you want to change.”

Before his arrest on Aug. 29, Hercules Shepherd had no criminal record.

Coaches were interested in recruiting the star high school basketball player for their college teams. Recruitment would mean a big scholarship that could help Shepherd realize his dreams of becoming an engineer. But by sitting in jail, Shepherd was missing two days of classes. If he missed two more, he could get kicked out of school.

Judge Jackson looked up. “Doing OK today, Mr. Shepherd?” he asked. Shepherd nodded.

“If he sits in jail for another month, and gets expelled from school, it has wider ramifications,” Magee said.

“Duly noted. Mr. Shepherd? I’m giving you personal bond,” Jackson said. “Your opportunity to turn that around starts right now. Do so, and you’ve got the whole world right in front of you.” (Jackson subsequently lost an election in November and is no longer a judge; his winning opponent, however, also supports use of the pretrial algorithm.)

Smiling, Shepherd walked out of the courtroom. That night, he was led out of the Cuyahoga County Jail; the next day, he was in class. Shepherd says he wouldn’t have been able to afford bail. Shepherd’s mother is in prison, and his aging father is on Social Security.

His public defender said that Shepherd’s low score helped him. If he isn’t arrested again within a year, his record will be wiped clean. (source)
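The AP story describes the Public Safety Assessment as a deliberately simple tool: nine factors drawn from the court file, such as age and prior convictions, combined into scores from one to six. Purely to illustrate the mechanics of such a points system, here is a minimal sketch in Python. The factor names, weights, and cutoffs are invented for illustration and are not the Arnold Foundation’s actual formula:

```python
# A hypothetical sketch of a points-based pretrial risk score.
# Factors and weights are illustrative inventions, NOT the real
# Public Safety Assessment formula (which uses nine factors).

from dataclasses import dataclass

@dataclass
class Defendant:
    age: int                         # the real tool also uses age, on a companion scale
    prior_convictions: int
    prior_failures_to_appear: int
    pending_charge_at_arrest: bool

def failure_to_appear_score(d: Defendant) -> int:
    """Return a 1-6 score; higher means more likely to skip court."""
    points = 0
    points += min(d.prior_failures_to_appear, 2)       # capped, fixed weight
    points += 1 if d.pending_charge_at_arrest else 0
    points += 1 if d.prior_convictions > 0 else 0
    return min(1 + points, 6)   # collapse raw points onto the 1-6 scale

# An 18-year-old with no record, like the defendant in the story,
# lands at the bottom of the scale.
shepherd = Defendant(age=18, prior_convictions=0,
                     prior_failures_to_appear=0,
                     pending_charge_at_arrest=False)
print(failure_to_appear_score(shepherd))  # -> 1
```

Whatever the real weights are, the “score” a judge sees is nothing more than this kind of arithmetic over a handful of file entries.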

The idea of robotics and artificial intelligence merging with police and law enforcement is not a new concept. The Terminator series is perhaps the most famous treatment, depicting an artificial intelligence that, allowed to make life-and-death military decisions, brings about the near-complete destruction of humanity. The 1983 hit WarGames explores a similar concept, except that instead of a war with robots, it focuses on the danger of overdependence on machines, which lead men to project scenarios for war (war games) that nearly cause a real war by accident.

The 1987 hit film RoboCop, remade in 2014, also explored these concepts through the character of Alex Murphy, a Detroit policeman who is mortally wounded at the hands of criminal gangs (in the 1987 version he is tortured to death; in the 2014 version he is maimed by a car bomb) and survives only because scientists take the few remaining functional parts of his body, including his brain, and put them into a machine that endows him with the processing capacity and accuracy of a machine while he remains at least in part human.

While both versions of RoboCop explore the moral and philosophical questions of such a merger, of transhumanism, of the military-industrial complex warned about by President Eisenhower, and of the meaning of life, the 2014 version is particularly interesting because it makes the question of machine versus human decisions about right and wrong, and about the soul, the central theme of the film.

After Officer Murphy survives the car bomb and is “rebuilt” into the robot, the doctor who oversaw his transformation, at the directive of OmniCorp, the corporation that sponsored the work, attempts to interfere with his biochemistry in order to make him “less emotional.” This is a critical point, because OmniCorp supported Officer Murphy’s transformation into “RoboCop” in order to justify putting full-scale military robots on American streets. The film shows this at the very beginning, when OmniCorp-built robots are used to assist with an American occupation of Iran and, during a live broadcast, a robot is accidentally shown killing a child, to the horror of the public.

The result of the modifications is that Officer Murphy becomes like a machine: cold, callous, and ruthless, hunting down and exterminating criminals by screening people in real time against the crime database downloaded into his brain, much like the license-plate scanners employed by police today. Crime virtually disappears from Detroit, and the CEO of OmniCorp, Raymond Sellars (played by Michael Keaton), goes on a popular television show (hosted by Samuel L. Jackson’s character) to boast that this success is exactly why more robots are needed, supplied, of course, by his corporation. The main character standing in opposition to OmniCorp and Sellars is a lone senator who fights the repeal of a law, the Dreyfus Act, which prohibits the use of robots and artificial intelligence in law enforcement.

By the end of the film the President refuses to allow robots to serve in a law-enforcement capacity, Officer Murphy is “decommissioned” as RoboCop and slowly rebuilt into a human being by the same doctor who helped turn him into a machine, and OmniCorp’s plans for an artificial-intelligence police force are foiled, but only barely.

Now, film and other forms of art are not real life, and real life is not always displayed in art. However, the two often mirror each other, for reality is what one knows to be true through his senses, while art is the expression of a man’s thoughts through those senses. What these films reflect is the reality that the military-industrial complex is not something made only to dominate far-away peoples in lands many have never heard of, but something that now rules the lives of ordinary people through the corporate-government partnership.

President Eisenhower’s final address famously warned of the rise of a “permanent armaments industry,” which he called the “military-industrial complex” (from 6:52 to 9:25 in the recording). But after this famous part of his speech, Eisenhower spoke of something that is not often mentioned: with the rise of this complex comes a technological revolution of equal danger:

Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.

In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present, and is gravely to be regarded. Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

It is the task of statesmanship to mold, to balance, and to integrate these and other forces, new and old, within the principles of our democratic system — ever aiming toward the supreme goals of our free society. (source)

This “scientific-technological elite” would further blend with the “military-industrial complex” to create a new chimera that would overtake government and act as a shadow entity, possessing and dominating the government with its own rules, and using the government as a shield to legalize its crimes and protect itself from public response. This chimera has taken many forms today, and it has been allowed to grow until it is in full control of the nation, so that, as illustrated by the films and demonstrated by lived reality, government today is but the public face and the justification given to entertain and impose conformity on a nation ruled by an unaccountable oligarchy.

As we at Shoebat.com have noted, the projects which emerged from this unholy union have been many. The sacrifices made by so many during the Second World War were overshadowed by Operation Gladio, in which the same National Socialists the USA had fought were transferred into the American intellectual class while National Socialism was promoted as a “hedge” against communism. From this union also came the advanced research into biochemistry and genetics that works with the abortion industry and the government to develop everything from advanced bioweapons to the groundwork for cyborg-type robots to be used in a future war. It likewise includes artificial intelligence, advanced computing, and the creation of nanotechnology.

But what is applied to foreign policy also comes back to domestic policy, as illustrated through experience and reflected in film. The forms differ owing to the different applications, but the philosophy is the same. The most visible change comes in law enforcement, as the weapons used abroad are turned on the people at home, not simply to deal with crime, but to enforce conformity.

While this scene is from the first Terminator movie, it illustrates the same concept. Machines cannot feel, they have no souls, and they do what they are programmed to do.

This is the issue with the use of artificial intelligence applications that, for now, “assist” a judge in pronouncing on a person requesting bail. However, just as in the film RoboCop, the use of the program raises more questions than it answers, because a robot is NOT a person. Robots cannot feel, they cannot act independently of their programming, and they do not have a soul. They cannot show compassion, question a fixed authority, or sympathize with a person. A man can act outside the “protocol” he is told to follow, but a robot under normal circumstances will not, because, as with all machines, it simply generates an output for a given input.

What the use of these machines does is to create, based on a program and as noted in the story, a “points system” for “ranking” people and making decisions about them. While there is some merit to this, it is also dangerous: not only is the “points system” largely closed to the people being judged by it, so that they cannot see what they are being judged on (which also raises legal questions), but it turns human life into a commodity that “rises” and “falls” based not on what one person knows or learns about another, but on what input is given to a machine about him. Since machines are machines, they can be manipulated no matter how advanced they are, and a person’s life can be forever changed simply because the “wrong input” was entered at a certain time.
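To make the point concrete, here is a minimal sketch, with invented names, weights, and thresholds, of the machine as a pure function from input to output. Change one entry in the record, whether by clerical error or by deliberate tampering, and the recommendation changes with it:

```python
# Hypothetical sketch: the machine simply maps an input record to an
# output recommendation. Every name and number here is invented.

def score(record: dict) -> int:
    # Arbitrary illustrative weights.
    return 1 + record["prior_convictions"] + 2 * record["failures_to_appear"]

def recommend(risk_score: int, threshold: int = 3) -> str:
    """Same input, same output, every time; no judgment involved."""
    return "release on personal bond" if risk_score <= threshold else "hold on bail"

record = {"prior_convictions": 0, "failures_to_appear": 0}
print(recommend(score(record)))  # -> release on personal bond

# One "wrong input", a clerical error or a deliberate edit upstream,
# and the same person receives a different fate.
record["failures_to_appear"] = 2
print(recommend(score(record)))  # -> hold on bail
```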

This “credit system” directly mirrors something that China is testing right now and plans to implement for all its citizens: a “social credit” system that rates the “good behavior” of each citizen. Citizens with “high ranks” will be able to travel, get good jobs, obtain loans, and enjoy other privileges. “Bad citizens” with low ranks could be denied all of these and, as many suspect, could be “eliminated” if the score falls too low:

Coincidentally or not, in 2014 the Chinese government announced it was developing what it called a system of “social credit.” That year, the State Council, China’s governing cabinet, publicly called for the establishment of a nationwide tracking system to rate the reputations of individuals, businesses, and even government officials. The aim is for every Chinese citizen to be trailed by a file compiling data from public and private sources by 2020, and for those files to be searchable by fingerprints and other biometric characteristics. The State Council calls it a “credit system that covers the whole society.”

For the Chinese Communist Party, social credit is an attempt at a softer, more invisible authoritarianism. The goal is to nudge people toward behaviors ranging from energy conservation to obedience to the Party. Samantha Hoffman, a consultant with the International Institute for Strategic Studies in London who is researching social credit, says that the government wants to preempt instability that might threaten the Party. “That’s why social credit ideally requires both coercive aspects and nicer aspects, like providing social services and solving real problems. It’s all under the same Orwellian umbrella.”

Ant Financial did state, however, in a 2015 press release that the company plans “to help build a social integrity system.” And the company has already cooperated with the Chinese government in one important way: It has integrated a blacklist of more than 6 million people who have defaulted on court fines into Zhima Credit’s database. According to Xinhua, the state news agency, this union of big tech and big government has helped courts punish more than 1.21 million defaulters, who opened their Zhima Credit one day to find their scores plunging.

The State Council has signaled that under the national social credit system people will be penalized for the crime of spreading online rumors, among other offenses, and that those deemed “seriously untrustworthy” can expect to receive substandard services. Ant Financial appears to be aiming for a society divided along moral lines as well. As Lucy Peng, the company’s chief executive, was quoted as saying in Ant Financial, Zhima Credit “will ensure that the bad people in society don’t have a place to go, while good people can move freely and without obstruction.”

“There is almost no oversight of the court executors” who maintain the blacklist, he told me. “There are many mistakes in implementation that go uncorrected.” If Liu had a Zhima Credit score, his troubles would have been compounded by other worries. The way Zhima Credit is designed, being blacklisted sends you on a rapid downward spiral. First your score drops. Then your friends hear you are on the blacklist and, fearful that their scores might be affected, quietly drop you as a contact. The algorithm notices, and your score plummets further. (source)
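The “downward spiral” the article describes is a feedback loop: if a score depends partly on the scores of one’s contacts, then being blacklisted, and then being dropped by frightened contacts, feeds back into further decline. The toy model below is an invention for illustration only; Zhima Credit’s actual algorithm is not public, and every constant and name here is hypothetical:

```python
# Invented toy model of a social score with a contact-network term.
# NOT Zhima Credit's real algorithm, which is not public.

BASE = 600
BLACKLIST_PENALTY = 200
CONTACT_WEIGHT = 0.1  # how strongly your circle pulls your score

def social_score(contact_scores: list, blacklisted: bool) -> float:
    s = BASE - (BLACKLIST_PENALTY if blacklisted else 0)
    if contact_scores:
        avg = sum(contact_scores) / len(contact_scores)
        s += CONTACT_WEIGHT * (avg - BASE)  # pulled toward your contacts
    return round(s, 1)

# Step 1: blacklisted, but friends still connected; their good scores
# partially cushion the fall.
print(social_score([650, 700, 620], blacklisted=True))  # -> 405.7

# Step 2: fearful friends quietly drop the contact; the cushioning term
# vanishes and the score falls further, for no new offense at all.
print(social_score([], blacklisted=True))               # -> 400
```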

The differences between the Chinese system and the American system are many in practice, for now, but there is no difference in principle, for both use machines to “rank” their citizens and direct their lives, much as a farmer directs an animal for his use, including slaughter.

Machines do not see humanity. They see objects.

Machines do not know individuals. They see numbers.

To a machine, you are an object with a number in a system, a mere file which can be “moved,” “changed,” or “deleted.”

Now say, for the purpose of argument, that there is NO ISSUE whatsoever with the machines: that they work perfectly, execute justice perfectly, and balance the needs of people just as a human would (it is obvious this will not happen, but that is not the point). The machine may be able to operate perfectly on its own, but the fact remains that it is men who created the machines, who manage them, and who are ultimately their masters, no matter how advanced the machines become. Since men can always be bribed, influenced, or manipulated, there is nothing at all to stop somebody with knowledge of the machines from altering the input to the computer, robot, or A.I. so that a different output is generated, exonerating a man who should be incarcerated, or, worse, incarcerating a man who should go free.

God created man in His image and likeness. Man is a human being, a little higher than the animals and a little lower than the angels. Each one is unique, and that uniqueness cannot be taken from him, however men may try to take it away. And this is the ultimate purpose of all the push for “machines” to take the place of men in making decisions about people’s lives: just as corporations have co-opted government and use it as a shield to execute malice against the public for private gain at the public’s expense, so is the computer a shield for those with a lust for power to harm others while protecting themselves from answering for the same or worse crimes.

In the Twilight Zone episode The Obsolete Man, librarian Romney Wordsworth (played by Burgess Meredith) is condemned to death by the Chancellor (played by Fritz Weaver) in “the State,” which Rod Serling’s narration says could be any state: a possible future that is an “extension of the old world,” updated to the times but built on the same idea, that man is simply an animal to be manipulated for the “good” of the masses with no regard for his humanity.

The Book of Ecclesiastes (1:9) says:

What has been is what will be, and what has been done is what will be done; there is nothing new under the sun.

This also has been in the works for a long time. The Forum for the Future released a series of videos in 2011 describing possible future societies, one of which, “Planned-opolis,” depicts a world in which a person’s every move is calculated, measured, and organized. Most of the videos have been pulled, but some can still be found online:

This future, like the one in The Obsolete Man and those of Hollywood’s science-fiction films, is an artistic reflection of what is now being seen in reality: the substitution of machine for man, or the indistinguishable blending of the two, which means the loss of man’s humanity and his reduction to the status of an animal, so that another person might force him to bend to his will for the sake of power and domination.

It is the struggle between Good and evil that began at Eden and will end when Christ returns.

Be ready.

…qui autem permanserit usque in finem hic salvus erit.

…but the one who perseveres to the end will be saved.

 

