For a while, I have reported on and warned about the dangers of the Chinese "social credit" system, which "scores" people based on a variety of characteristics. It is incredibly dangerous because it enables an electronic caste system, one that actively locks certain people out of certain social venues and sorts them into categories from which they cannot necessarily escape. This is one of the warnings I have given about the concept of the 'matrix': a controlling grid in which people are trapped, administered by a technocratic 'elite' for its own benefit, amounting to a new system of slavery.
The Washington Post reports, by way of Chron.com, that this social credit system is already here: it is being silently used in the US to judge consumers on a never-before-seen trove of data, actively replicating what exists in China.
Operating in the shadows of the online marketplace, specialized tech companies you’ve likely never heard of are tapping vast troves of our personal data to generate secret “surveillance scores” – digital mug shots of millions of Americans – that supposedly predict our future behavior. The firms sell their scoring services to major businesses across the U.S. economy.
People with low scores can suffer harsh consequences.
CoreLogic and TransUnion say that scores they peddle to landlords can predict whether a potential tenant will pay the rent on time, be able to “absorb rent increases,” or break a lease. Large employers use HireVue, a firm that generates an “employability” score about candidates by analyzing “tens of thousands of factors,” including a person’s facial expressions and voice intonations. Other employers use Cornerstone’s score, which considers where a job prospect lives and which web browser they use to judge how successful they will be at a job.
Brand-name retailers purchase “risk scores” from Retail Equation to help make judgments about whether consumers commit fraud when they return goods for refunds. Players in the gig economy use outside firms such as Sift to score consumers’ “overall trustworthiness.” Wireless customers predicted to be less profitable are sometimes forced to endure longer customer service hold times.
Auto insurers raise premiums based on scores calculated using information from smartphone apps that track driving styles. Large analytics firms monitor whether we are likely to take our medication based on our propensity to refill our prescriptions; pharmaceutical companies, health-care providers and insurance companies can use those scores to, among other things, “match the right patient investment level to the right patients.”
Surveillance scoring is the product of two trends. First is the rampant (and mostly unregulated) collection of every intimate detail about our lives, amassed by the nanosecond from smartphones to cars, toasters to toys. This fire hose of data – most of which we surrender voluntarily – includes our demographics, income, facial characteristics, the sound of our voice, our precise location, shopping history, medical conditions, genetic information, what we search for on the Internet, the websites we visit, when we read an email, what apps we use and how long we use them, and how often we sleep, exercise and the like.
The second trend driving these scores is the arrival of technologies able to instantaneously crunch this data: exponentially more powerful computers and high-speed communications systems such as 5G, which lead to the scoring algorithms that use artificial intelligence to rate all of us in some way.
The result: automated decisions, based on each consumer’s unique score, that are, as a practical matter, irreversible.
That’s because the entire process – the scores themselves, as well as the data upon which they are based – is concealed from us. It is mostly impossible to know when one has become the casualty of a score, let alone whether a score is inaccurate, outdated or the product of biased or discriminatory code programmed by a faceless software engineer. There is no appeal. (source)
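As a rough illustration of the pipeline the Post describes, signals in, one opaque number out, an automated decision with no appeal, here is a hypothetical sketch. The feature names, weights, and threshold below are invented for illustration only; real vendors' models and inputs are proprietary and concealed from consumers.

```python
# Hypothetical sketch of an opaque "surveillance score": a weighted sum of
# behavioral signals collapsed into a single number, then an automated
# decision the consumer never sees. All names and weights are invented.

FEATURE_WEIGHTS = {
    "on_time_payments_pct": 0.5,     # share of bills paid on time (0..1)
    "returns_per_purchase": -0.3,    # frequent refunds lower the score
    "prescription_refill_rate": 0.2, # proxy for medication "adherence" (0..1)
}

def surveillance_score(profile: dict) -> float:
    """Collapse a consumer profile into one opaque number in [0, 1]."""
    raw = sum(FEATURE_WEIGHTS[k] * profile.get(k, 0.0) for k in FEATURE_WEIGHTS)
    # Clamp to [0, 1]; the consumer sees neither the inputs nor this value.
    return max(0.0, min(1.0, raw))

def automated_decision(profile: dict, threshold: float = 0.4) -> str:
    """An automated yes/no gate with no appeal process."""
    return "approve" if surveillance_score(profile) >= threshold else "deny"
```

The point of the sketch is structural: nothing in the interface exposes which signals were used, how they were weighted, or where the threshold sits, which is precisely why an inaccurate or biased score is, as a practical matter, irreversible.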
The main difference between the American and Chinese systems, it seems, is that the Chinese system is openly brutal and repressive, while the American system permits far more but still watches and operates in silence and, more nefariously, may "attack" at a more opportune time and over specific matters, as opposed to the hard, brutish, one-size-fits-all approach. In both models, nobody wins at all: personal data is surrendered, freedom is lost, and the common man is oppressed by a small group of people accountable to nobody, for whom there is never enough power, since they ultimately desire not to be stewards of the world but to overtake its Creator and set themselves up in His place.