Algorithms Are Being Used To Determine Who Goes Free And Who Stays In Jail

One of the major trends I have warned about is the integration of robotics and artificial intelligence into daily life. There is much good and much bad to come from this, and the changes will be comparable to the Industrial Revolution and the Internet in what they will do to society. People must be ready for it.

One of the more interesting changes is that A.I. will be used to make “decisions” once reserved for humans, decisions with lasting effects on other human beings. Men can be fickle in their judgments, but instead of people making decisions for people, we will likely now see a rise in machines making decisions for men, about other men, that directly impact their lives. The Wall Street Journal recently covered this in a story about an algorithm being used to determine which persons accused of crimes can receive bail.

The algorithm is at the center of a real-world experiment New York City began late last year to help decide who stays in jail before a criminal trial and who goes free. The tool, the result of a $2.7 million, five-year process, was designed to bring about better decisions by the 200 or so judges who make snap determinations every day about people’s freedom.

The algorithm—typically called a “risk assessment,” though city officials prefer “release assessment”—is set to return to use later this month after a six-month hiatus because of the coronavirus pandemic.

The tool has been mostly well received, with preliminary data showing its recommendations aligned with the likelihood of defendants showing up for court. Some judges have said they understand the science behind the tool and are therefore likely to trust its recommendations. (source)

The theory behind this is not a bad one: people who commit certain acts or have certain criminal histories will likely behave in predictable ways, so those behaviors can be encoded in a program, each factor given a specific “weight,” and the most likely outcome computed. In this case, the desired outcome is that those released will appear at their court dates, while those judged unreliable will remain in jail.
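
To make the idea concrete, here is a minimal sketch of such a weighted scoring scheme. Everything in it is invented for illustration: the factors, the weights, and the threshold are assumptions, not those of New York City’s actual tool, whose inner workings are precisely the question at issue.

```python
# Purely illustrative sketch of a weighted "release assessment" score.
# The factors, weights, and threshold are invented for demonstration;
# they do not reflect any real jurisdiction's tool.

# Hypothetical weights: positive values push toward "release",
# negative values push toward "detain".
WEIGHTS = {
    "prior_failures_to_appear": -2.0,
    "pending_charges":          -1.0,
    "years_at_current_address":  0.5,
    "currently_employed":        1.5,
}

RELEASE_THRESHOLD = 0.0  # invented cutoff

def release_score(defendant: dict) -> float:
    """Sum each factor multiplied by its assigned weight."""
    return sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)

def recommend(defendant: dict) -> str:
    """Turn the numeric score into a binary recommendation."""
    return "release" if release_score(defendant) >= RELEASE_THRESHOLD else "detain"

# Example: one prior failure to appear, employed, three years at one address.
example = {
    "prior_failures_to_appear": 1,
    "pending_charges": 0,
    "years_at_current_address": 3,
    "currently_employed": 1,
}
print(release_score(example), recommend(example))  # 1.0 release
```

Note that in a sketch like this, everything turns on who picks the factors and the weights; the arithmetic itself is trivial.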

It sounds really good. However, Communism also sounds like a great idea on paper until people realize the implications behind it.

There is always a balance in life between mechanized, machine-like decisions and the inconsistent, sometimes illogical, intuition- or feelings-based decisions that humans make. It is this balance that makes a man different from an animal or a machine, for while animal and machine simply act on instinct or instructions, like a mathematical function, a man can follow instructions yet, owing to his free will, possesses the ability to make choices consciously. This is not to say that man always makes the correct decision; one does not need a history book to demonstrate his errors. But there are many times when human nuance is critical and good, because it can prevent disaster.

This reduction of decisions about the fate of a man to the will of a machine sets a dangerous precedent, for it undermines the human element in decision making, which, while not per se measurable or tangible, is often what prevents legal abuse. It is also a matter of philosophy, for it implies that if a machine can make decisions affecting the fate of a man, then a man is little more valuable than a machine, and a machine, not a man, may decide his life with no moral consequence different from a man’s decision. And since it is a machine making the decisions, any errors can be socially laid at the feet of the machine rather than at the feet of the men who relied on it. Thus the employment of this tool is not so much a means of aiding man as the common phenomenon of subjecting man to the tool, so as to protect the man using (or being used by) it from his own errors against the one affected by them.

Its return comes at a challenging time for the city’s criminal courts, which curtailed operations during the pandemic. There are now 41,000 pending cases, about 40% more than this time last year. Shootings and homicides are up. Judges have been conducting video arraignments without using the assessment and have become accustomed to making decisions without it.

Still, the tool could help alleviate backlogs and avoid warrants, said Aubrey Fox, executive director of New York City Criminal Justice Agency, a pretrial-services nonprofit that administers the assessment and worked with the city to develop it.

“If anything, the courts are saying ‘We need your help in making sure people come back,’ ” he said.

The fact that the court system is backed up does not justify the use of an algorithm to “fix it”; this is akin to a man burning down a home in order to sell his very own “home repair services.”

One of the main problems with the legal system is not a lack of laws; there are plenty, and one could argue too many, since it is impossible to read them all in a lifetime. The problem is the politicization of law as a constantly changing tool for subjugating people, rather than a means of governing them and distinguishing between different forms of behavior. There are many reasons for this, but the problem is not new, and it is often exacerbated by the drive for profit, as with the outsourcing of prisons to private corporations. These companies make money on lucrative government contracts, then turn around and lobby for more and harsher sentencing laws so as to keep the prisons full, justifying more contracts and more money, all at the public’s expense and with no philosophy guiding them except the pursuit of unlimited greed in the name of a social Darwinism that would shock even Scrooge.

Jurisdictions across the U.S. have long used algorithms to help make decisions about bail, classify inmates and sentence convicts. The city set out to build a new system to address a criticism of other models: that they recommended lockup for disproportionate numbers of young Black and Latino men. Many critics of the models say they were built using inherently biased data.

“You’re codifying these structural inequalities into the tool,” said Scott Levy, chief policy counsel at the Bronx Defenders, a public-defender organization. “That is particularly pernicious because you are doing it under the guise of science.”
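
To make Levy’s point concrete, here is a toy simulation, with all numbers invented, of how a single weight on “prior arrests” can turn uneven policing into uneven scores. The two groups below have identical underlying behavior; only the rate at which that behavior was historically recorded differs.

```python
# Toy demonstration of "codifying structural inequality": two groups with
# identical true behavior, but group B's neighborhood was policed twice as
# heavily, so its members accumulated twice the recorded arrests. A tool
# that weights "prior arrests" then scores group B as higher risk even
# though the underlying conduct is the same. All numbers are invented.
import random

random.seed(0)

def avg_recorded_arrests(policing_rate: float, n: int = 10_000) -> float:
    """Average *recorded* arrests per person, given equal true offending."""
    recorded = 0
    for _ in range(n):
        offenses = random.randint(0, 3)       # identical for both groups
        for _ in range(offenses):
            if random.random() < policing_rate:
                recorded += 1                 # offense observed -> arrest
    return recorded / n

avg_a = avg_recorded_arrests(policing_rate=0.2)  # lightly policed group
avg_b = avg_recorded_arrests(policing_rate=0.4)  # heavily policed group

ARREST_WEIGHT = -2.0  # invented: each recorded arrest lowers the score

print(f"group A: {avg_a:.2f} recorded arrests, score {ARREST_WEIGHT * avg_a:.2f}")
print(f"group B: {avg_b:.2f} recorded arrests, score {ARREST_WEIGHT * avg_b:.2f}")
# Identical behavior, but group B's average score is roughly twice as
# negative: the "science" reflects where the policing happened.
```

The disparity in the output is manufactured entirely by the data-collection step, which is the criticism the Bronx Defenders are making.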

How are these algorithms made? Who determines how they function? Who runs the companies that make them, and what are the relationships of these people to those in law, government, business, and other areas of interest? These are just a few of the vital questions that need to be answered but will not be, or whose answers will be difficult to obtain, because they touch on conflicts of interest, and perhaps on very real biases, that are being papered over in the name of finding a more ‘expedient’ way of dealing with criminal matters.

This is how abuse happens. It may not be intentional, but in the name of seeking balance, tools are made whose effects are not fully understood until it is too late and somebody is seriously hurt or has his life ruined. It is bad enough that the US court system too often functions as practical theater, where trials are a show in many cases (though certainly not all). When a program such as this is added, the effects are amplified: what objectivity exists may be thrown out, a person may be wrongly convicted or wrongly set free, and the excuse will be that the fault lies with the tool and not the person. This points to another problem with people: for a variety of reasons, they do not take responsibility for their actions but blame third parties and then distance themselves from them. A program such as this is an ideal excuse for lawyers, judges, and even its makers to direct attention to the tool and place the blame on it, absolving themselves of responsibility while everybody still profits from the status quo, even if another man’s life is unintentionally destroyed by it.

There are a lot of good uses for A.I., including in the courtroom, but every new tool has to be approached with caution. The failure to do so is the warning of the original Terminator film, in which people build an A.I. system able to control defense network operations and, with good intentions, create a monster they lose control over, one that turns against them and destroys them along with many innocent people.

This is just legal cases, but as the use of A.I. expands, people must be very careful to avoid putting themselves and others, accidentally or intentionally, into a situation where the tools made to help them turn to enslave and then destroy them. Good intentions are always good to have in mind when acting, but if they do not translate into good actions, they are not merely bad; sometimes they can be worse than something bad done with bad intentions.
