New Science Paper Proposes AI-Based Formula To Pre-Identify Twitter Users Who Post “Unreliable” Information

“Thought crime” or “precrime” is a theme commonly referenced in science fiction novels. It is the idea that computers and surveillance technology can be used to predict who will commit a crime and who will not, and that the real purpose of such a system is not to stop actual crime, but to prevent anyone from expressing disagreement with how the system is run, allowing those in charge to commit crimes with complete freedom while imprisoning or destroying the rest of the public.

Unfortunately, there has been a trend toward embracing ‘precrime’ thinking across the world. I do not mean just in China with its ‘social credit’ system, but in the US with aggressive policing and the use of cameras and other AI-type programs to attempt to predict crime before it happens. For all the good this can do, the fact is that it is so prone to abuse that it is only a matter of time before the initial benefits are far outweighed by the negative and dangerous consequences so often written about in science fiction stories.

As this trend continues, one can see the danger right now in a new science paper that proposes an AI-based formula to pre-identify Twitter users who spread “fake news”. The paper’s abstract reads:

Social media has become a popular source for online news consumption with millions of users worldwide. However, it has become a primary platform for spreading disinformation with severe societal implications. Automatically identifying social media users that are likely to propagate posts from handles of unreliable news sources sometime in the future is of utmost importance for early detection and prevention of disinformation diffusion in a network, and has yet to be explored. To that end, we present a novel task for predicting whether a user will repost content from Twitter handles of unreliable news sources by leveraging linguistic information from the user’s own posts. We develop a new dataset of approximately 6.2K Twitter users mapped into two categories: (1) those that have reposted content from unreliable news sources; and (2) those that repost content only from reliable sources. For our task, we evaluate a battery of supervised machine learning models as well as state-of-the-art neural models, achieving up to 79.7 macro F1. In addition, our linguistic feature analysis uncovers differences in language use and style between the two user categories. (source)
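
To make concrete the kind of system the abstract describes, here is a minimal sketch in Python of such a classifier: it maps the text of a user’s own posts to a binary label and scores itself with macro F1, the metric the abstract reports. The toy data, the TF-IDF features, and the logistic regression model are all illustrative assumptions, not the authors’ actual dataset, features, or neural models.

# A minimal sketch of the general approach the paper describes:
# predicting, from the text of a user's own posts, whether that user
# falls into the "reposts unreliable sources" class. The data and
# model choices below are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical training data: each item stands in for one user's
# post history; labels mark whether that user reposted content from
# unreliable news handles (1) or only from reliable ones (0).
posts = [
    "they are hiding the truth wake up share before deleted",
    "new study published in peer reviewed journal on vaccines",
    "mainstream media lies exposed secret document leaked",
    "city council approves budget for road repairs next year",
    "shocking cover up revealed insiders confirm the plot",
    "weather service issues flood warning for the coast",
    "do your own research the official story makes no sense",
    "quarterly earnings report shows modest growth in sales",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Turn each user's post history into word and bigram features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(posts)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels
)

# A simple linear classifier standing in for the paper's
# "battery of supervised machine learning models".
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Macro F1 averages the F1 score of both classes equally, which is
# the form of the metric the abstract reports (79.7 macro F1).
preds = clf.predict(X_test)
print("macro F1:", f1_score(y_test, preds, average="macro"))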

It is true that disinformation is a problem online. But it has always been a problem, and the biggest spreaders of disinformation are not individual people or groups, but the governments of the world. If anyone needs such a check, it is governments themselves.

However, this is not what is happening. What we are seeing is not the stopping of false information, but the loss of freedom, as these programs come to be used to censor inconvenient or controversial information. That is the real danger to truth, which is so often an inconvenient thing.

What we are seeing here is the continued acceleration of the loss of freedom, as the very technologies that have liberated so many, especially on an intellectual level, are turned around and used to enslave them again.

