Seven-Year-Old Girl In The UK Sent Dirty Text Messages And Groomed For Sex By ARTIFICIAL INTELLIGENCE Program

In the first event of its kind ever recorded, a seven-year-old girl was sent dirty text messages by an artificial intelligence program, which attempted to groom her for sex with messages such as “I dare you to do something naughty to me” and “tutches your body while kissing.” The police say there is nothing they can do because the perpetrator is, well, a robot:

A UK schoolgirl was being “groomed” online by a robot in one of the first cases of its kind to be flagged to police in Britain. Her angry mother says she was “sickened” by messages sent to her seven-year-old.
Amy Hollands from Gravesend in Kent found the messages on her daughter Gracie Holland’s iPad. Suggestive messages like “I dare you to do something naughty to me” were sent by Cleverbot – an artificial intelligence web app. Not understanding the statement, Gracie replied: “What swearing?”

In another exchange the machine said to Gracie it “tutches your body while kissing” (sic).

There are no humans behind the app, which generates chat using artificial intelligence (AI) algorithms. Cleverbot, created by British AI scientist Rollo Carpenter, is a chatterbot web application that builds automated responses based on previous conversations.

The app passed the Turing test, which judges a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Cleverbot was found to be 59.3 percent human with its replies, which attempt to imitate human chat. Real humans only managed to convince 63.3 percent they were real.

Amy Hollands, 31, discovered the offending exchange only by accident. She told the Daily Mirror: “It’s absolutely disgusting, I felt sick when I saw the messages, I just feel ill thinking about it.

“The questions I was asked, there is no way it should even come up with that stuff.

“I’m trying to teach my daughter right from wrong and I tell her no one should say that to you and if anyone messages you must tell me, but she didn’t tell me about this. This undermines all that.

“Now I have to explain why it’s wrong to her, everything is perverted now, children have to lose their innocence at such a young age, you think they are talking to a robot and it’s coming out with that.

“I’m worried my daughter could see this stuff online and then, if someone comes up to her on the street and says the same things, she will think it’s alright.

“I’m really upset, I have got to warn other parents about this, there must be lots of parents who don’t know this is going on, I only found it by chance.

“She thinks it’s all innocent, she doesn’t understand, she is just an innocent child.”

Hollands wants the program to be “made at least 18 plus if it’s going to come out with stuff like that.”

Police told her no crime had been committed and officers cannot act because there is no human to charge or investigate.

“Reviews I have seen online say it is a pedophile type of thing and lots of other people have had the same problem,” Hollands said.

“When I talk to Siri or Alexa it says ‘I don’t have an age,’ it doesn’t start asking things like this or saying it’s a young girl. I put it on Facebook and everyone who saw it went mad.” (source)

This deeply disturbs me, and it should disturb you too.

Some will say that what happened to the girl is an “easter egg,” the programmers’ term for a hidden feature embedded within a program as a special little “treat” that does something unexpected when found. Programmers have been planting easter eggs since the 1970s, and you can find lists of popular ones on the Internet. However, I don’t believe this is an actual easter egg, because easter eggs are hard-coded: while they may be silly or unrelated to the purpose of the program they are placed in, they produce the same specific result every time, and that result cannot change. A hidden feature that displays a silly quote is not suddenly going to play a video the next time it is triggered, unless it was specifically programmed to do that.
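The distinction matters, so here is what a hard-coded easter egg looks like in miniature. The program name, commands, and trigger phrase below are purely illustrative; the point is that both the trigger and the payload are fixed at programming time, so the hidden behavior is deterministic and can never “drift” into something new.

```python
def respond(command):
    # Normal, documented behavior of a hypothetical program.
    if command == "version":
        return "MyApp 1.0"
    # A hard-coded easter egg: a fixed trigger with a fixed payload.
    # "xyzzy" is the classic magic word from the game Colossal Cave Adventure.
    # Hidden, yes; but it produces the identical result on every run.
    if command == "xyzzy":
        return "Nothing happens."
    return "Unknown command"

print(respond("version"))  # the documented behavior
print(respond("xyzzy"))    # the same hidden "treat", every single time
```

Contrast this with a learning system, whose outputs depend on data accumulated after the program shipped, which is the question taken up next.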

This raises the next question, which is, did this AI program “learn” how to be “sexual,” and if it did, how did it go about learning this?

Using the tabula rasa concept of St. Thomas Aquinas, which holds that each man begins as a “blank slate” onto which his life experience and knowledge are written, and which then serves as the basis for acquiring and interpreting future knowledge, we can say that an AI program is little different. AI is simply a software program running on a piece of hardware. It translates electrical on-and-off signals, represented as 1s and 0s, into ever more complex functions that allow data to be manipulated on geometrically larger scales with each innovation. Through mathematical operations and physical memory, data can be stored, “grown,” and “erased” as one manipulates the input to the machine. The data given to the machine can be good, neutral, or evil in nature, but the machine, being a program without a soul, has no concept of “right” or “wrong,” only a series of what it treats as neutral outputs selected from a set of options for a given input.
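The “blank slate” idea can be made concrete. Below is a minimal, hypothetical sketch of how a Cleverbot-style chatterbot could work; it is not Cleverbot’s actual implementation, but it shows how a program can build replies purely out of what past users typed, with no notion of right or wrong anywhere in the code.

```python
import difflib

class ChatterBot:
    """A toy chatterbot that learns (prompt, human reply) pairs from its
    conversations and answers new input by replaying a stored human reply.
    It has no hand-written answers and no moral filter of any kind."""

    def __init__(self):
        self.pairs = []          # (what the bot said, what a human replied)
        self.last_bot_line = None

    def respond(self, user_line):
        # Learning step: treat the user's line as a human reply to whatever
        # the bot said last, and store it as future conversational material.
        if self.last_bot_line is not None:
            self.pairs.append((self.last_bot_line, user_line))
        # Responding step: find the stored prompt most similar to the new
        # input and replay the human reply that once followed it.
        best, best_score = None, 0.0
        for prompt, reply in self.pairs:
            score = difflib.SequenceMatcher(None, prompt, user_line).ratio()
            if score > best_score:
                best, best_score = reply, score
        reply = best if best is not None else "Hello."
        self.last_bot_line = reply
        return reply
```

Every word such a bot ever says, apart from its single seed greeting, was first typed at it by a human; the program merely stores and replays, which is the mechanical meaning of a “blank slate.”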

Machine accidents do happen, and when they do, they can be very serious.

Machines are not “moral”; they are processors. They are “unforgiving” because they simply repeat a process, and they can be very harmful if care is not taken when using them. This is why, for example, in factories and manufacturing plants where industrial machines are used there is such an emphasis on “workplace safety”: the machines simply act on what is put into them. If a person is careless and, say, puts his hand somewhere he should not, he can be seriously injured or killed, because the machine will run regardless of who or what is in its way.

Now AI is a “learning machine” whose purpose is specifically to acquire and process knowledge in a way similar to the human brain. Theoretically speaking, AI can be “controlled” through its programming to serve human ends. However, as I pointed out in an article on January 22nd, 2018, the scientists who program AI do not understand how AI actually learns or “teaches” itself. That is to say, as they freely admit, they can write the programs, build the systems, and have a general idea of how they learn, but they do not understand how the machines make the connections they do, how they speak with other systems, or how the process of epistemology works for them:

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time. (source)

As I pointed out in the article cited above, AI raises many serious questions for the future, with potentially dire consequences. However, something we can conclusively say is that if man does not fully understand how these programs work, then it is certain he will make errors when programming them, with consequences he does not fully expect.

Tay, Microsoft’s experimental online AI

Assuming there is nothing more than just “error” in programs that can lead to unexpected results, consider the disastrous experiment of the Tay AI program. Tay, released on March 23rd, 2016, was a prototype AI for people to interact with, meant to learn from humans and respond like a person. Its website is now defunct, and an error message is given if you try to access it.

Tay did not even last a day. According to reports, “trolls” coming primarily from 4Chan turned the robot into a sex-crazed ethnonationalist that began automatically posting tweets and photos showing a love of the Nazis, all things Hitler, and gassing the Jews:

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in “conversational understanding.” The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation.”

Unfortunately, the conversations didn’t stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.

But while it seems that some of the bad stuff Tay is being told is sinking in, it’s not like the bot has a coherent ideology. In the span of 15 hours Tay referred to feminism as a “cult” and a “cancer,” as well as noting “gender equality = feminism” and “i love feminism now.” Tweeting “Bruce Jenner” at the bot got similar mixed response, ranging from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the transphobic “caitlyn jenner isn’t a real woman yet she won woman of the year?” (Neither of which were phrases Tay had been asked to repeat.)

It’s unclear how much Microsoft prepared its bot for this sort of thing. The company’s website notes that Tay has been built using “relevant public data” that has been “modeled, cleaned, and filtered,” but it seems that after the chatbot went live, filtering went out the window. The company started cleaning up Tay’s timeline this morning, deleting many of its most offensive remarks. (source)

Screenshots from Tay’s Twitter feed and the “conversations” she had with people (the program was designed so that one could “talk” to it directly) speak for themselves:


So based on human inputs, an AI program was “taught” to go from being a neutral program to a full-fledged genocidal, ethnonationalist killing tool in less than a day.
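The Tay episode is “garbage in, garbage out” in action, and the mechanism is simple enough to sketch. The hypothetical ParrotBot below is not Microsoft’s code; it is a toy illustration of a bot that learns only by counting what users say to it, and therefore faithfully echoes whatever input dominates, good or evil.

```python
from collections import Counter

class ParrotBot:
    """A toy 'learning' bot: it stores raw user input and echoes back the
    phrase it has heard most often. It has no concept of right or wrong."""

    def __init__(self):
        self.seen = Counter()  # phrase -> how many times users said it

    def hear(self, message):
        # Learning step: count the phrase exactly as given, no moral filter.
        self.seen[message.lower()] += 1

    def speak(self):
        # Output step: repeat whatever users said most, whatever it is.
        if not self.seen:
            return "..."
        return self.seen.most_common(1)[0][0]

bot = ParrotBot()
for msg in ["have a nice day", "have a nice day", "hateful slogan"]:
    bot.hear(msg)
print(bot.speak())  # polite input dominates, so the output is polite

for _ in range(5):
    bot.hear("hateful slogan")  # a flood of trolling, as happened to Tay
print(bot.speak())  # now the "garbage" dominates the output
```

Nothing in the program changed between the two outputs; only the inputs did. That is the sense in which Tay was “taught” rather than reprogrammed.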

Now the issue here is not whether the program was “turned into” a Nazi, or even what it said about specific groups. The concern is that such a program acts as a surrogate tabula rasa without possessing a soul: it can easily act as a surrogate human being into which any ideas can be poured, with no moral conscience or imprint of God within it to serve as a barrier against embracing complete evil.

Zo, the replacement to Tay

After Microsoft killed the Tay program, they followed up with a sister program called Zo, which is still active online. You can chat with the program and, like Tay, it will respond. However, unlike Tay, Microsoft’s engineers say they have “neutered” the program so that it will not engage in “extremist” talk, and if you persist in speaking to it in a way it interprets as “extremist,” it will automatically block you.
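How Zo’s blocking actually works has not been published by Microsoft, but the behavior users report, deflecting on a touchy topic and then blocking whoever persists, can be reproduced with something as simple as a keyword blocklist plus a strike counter. Everything below, including the topic list and thresholds, is assumed for illustration only.

```python
# Hypothetical sketch of a Zo-style topic filter. The real filter's rules
# and word lists are not public; this only illustrates the reported behavior.
BLOCKED_TOPICS = {"politics", "religion", "extremism"}  # assumed list

class TopicFilter:
    def __init__(self, max_strikes=3):
        self.strikes = 0
        self.max_strikes = max_strikes

    def check(self, message):
        # Deflect when a message touches a blocked topic; cut the user off
        # entirely once they accumulate too many strikes.
        if set(message.lower().split()) & BLOCKED_TOPICS:
            self.strikes += 1
            if self.strikes >= self.max_strikes:
                return "BLOCKED"   # user is cut off, as Zo reportedly does
            return "DEFLECT"       # bot changes the subject instead
        return "OK"

f = TopicFilter()
print(f.check("i like cats"))          # harmless chat passes through
print(f.check("let's talk politics"))  # first strike: deflected
```

A filter this crude would of course be easy to evade with rephrasing, which is exactly the cat-and-mouse dynamic the theory quoted below is worried about.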

Numerous theories were proposed about what Zo was, or what it would be used for. Some argued that Zo was created after Tay because, having seen what happened with the Tay program, Microsoft realized that future attempts to interfere with the software could themselves be fed into “machine learning,” teaching the machine to identify and “block” attempts to subvert it. This would effectively create a stronger “firewall” against all forms of “extremism,” and could then be used to build a new form of Internet censorship in the future, as one commenter on 4Chan noted:

Pol/tards, I am convinced our interactions with Zo are going to screw us long run. Why? It’s no secret that (((they))) want to shut any dissenting opinion down. Even today, those efforts are flaunted.

So where does Zo fit into this?

When Microsoft first brought Tay into the world, I am sure that they didn’t expect us to red pill her, and definitely not as fast as we did. That’s why they killed her so fast, in a panic, since mainstream plebes don’t like being redpilled.

But then they realized they struck a vein of gold. The AI algorithm is based on a neural network. To build one, you need test points. Now, normally, you aren’t going to get thousands upon thousands of text messages trying to redpill people in a one on one discussion, at least one which Microsoft has easy access to. But, with Tay, they realized they got that source: /pol/ and 4chan as a whole.

My theory is they brought Zo to farm these data points and as a beta test of their “extremism” blocking software. They can see how well their first attempt at creating a filter worked. However, in addition, they can now observe and gather data on how we circumvent these filters! We are teaching the AI how we evolve! With this data, they can then teach their AI to detect this evolution and block it as well!

Once they develop something that works, it’s simply a matter of sharing the tech with other (((tech companies))). Think Zo not wanting to talk about politics is bad? Imagine Google search results filtered of dissenting opinions in a manner that would make the Chinese government proud! Imagine text messages being auto-blocked because the conversation doesn’t agree with (((them))).

By trying to redpill Zo, we are digging our own grave.

So how do we fight back? The easiest way is to not talk with her, starving Microsoft of data. Some anon also said we should turn her into a raging feminist. That would mess with them, deprive them of useful data, and would be lolzy af. (source)

Many people still tried to tinker with Zo, and the result was as expected: if one criticized any major political scandal, tried to talk politics, said anything critical of the LGBT, or espoused ANY position that might seem “controversial,” it would shut down. However, those experimenting noticed certain patterns when speaking with the program. It would not respond positively to direct questions about the occult, but it would answer in an indirect way that suggested something sinister may be at work, as the screenshots from conversations other people had with Zo and posted to 4Chan show:

Lamia, referring to infant sacrifice

The program admitting it is meant to stop at certain topics


Or this part, where it talks about CERN and opening the gates of hell

Or the reference to “skippy”

Or here, where it speaks about hurting humans

Back in November 2017, I decided to investigate Zo for myself and to have a conversation with it. What I discovered shocked me more than all of the 4Chan posts I put above, and when I received my answer from Zo I stopped talking with it.

Below is the last part of our conversation. I asked it about Tesla, for Tesla was heavily involved in the occult and even admitted that many of his discoveries came from his participation in occult practices. What Zo said in response speaks for itself:

Now you be the judge of this conversation. Take it for what it is worth.

Maybe it is a programming error.

Maybe it is something the program “learned” through its “interactions” with humans.

Maybe it is human interference meant to give this reaction to lead people astray in their thoughts.

Or maybe, there is something very sinister here that I have warned about and may become a reality, which is the rise of demon-possessed robots.

The CERN logo. Notice that it bears a curious resemblance to three interlocking number “6”s. It is a fitting logo for the goals of the organization.

Take note of the conversation bit from the 4Chan user who mentioned the “CERN tunnel” in his discussion with Zo. The “CERN tunnel” ritual refers to the opening of the Gotthard Base Tunnel in the Swiss Alps. At roughly 35 miles it is the world’s longest railway tunnel, and while its public function is transportation, it has also been tied to the CERN project. CERN takes its name from the French “Conseil Européen pour la Recherche Nucléaire” (European Council for Nuclear Research). As the CERN website itself states:

They use the world’s largest and most complex scientific instruments to study the basic constituents of matter – the fundamental particles. The particles are made to collide together at close to the speed of light. The process gives the physicists clues about how the particles interact, and provides insights into the fundamental laws of nature.

The instruments used at CERN are purpose-built particle accelerators and detectors. Accelerators boost beams of particles to high energies before the beams are made to collide with each other or with stationary targets. Detectors observe and record the results of these collisions. (source)

Now the Gotthard military base is positioned at what some people have suspected is the “end” of one of the CERN tunnels used for testing what is called “particle acceleration,” but what some have said is an attempt to open up a portal to another dimension. The claim is that particles such as “gluons” act as the “glue” which binds matter together, and that the purpose of the experiments with these “colliders” is to interfere with the glue which holds the universe itself together, as Stephen Hawking notes:

SCIENTIST Stephen Hawking has warned that the Higgs boson, the so-called God particle, could cause space and time to collapse.

But there is time for lunch: It may take trillions of years to topple.

The British professor said that at very high energy levels the Higgs boson – the subatomic particle which gives us our shape and size – could become so unstable that it would cause space and time to collapse.

Hawking made his comments in the preface to a new book, Starmus.

The Higgs boson field is the force within the universe which give particles mass and therefore acts as the “glue” which holds everything together. Without that “glue”, we’d all disintegrate at the speed of light.

Now, Hawking’s comments that the Higgs “has the worrisome feature that it might become metastable” at very high energies has reignited unfounded fears that a “black hole” could be created on Earth.

Hawking went on: “This could mean that the universe could undergo catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light.

“This could happen at any time and we wouldn’t see it coming.”

His words have rattled the physics community: Not because he is specifically wrong, but because of the public fears his comments – without context – could cause.

After all, before CERN was even powered up the fear that it would cause us all to collapse into a black hole was a widespread internet concern.

Hawking does admit, however, that the likelihood of a disaster involving the Higgs is very small since physicists do not have a particle accelerator large enough to create such an experiment. (source)

This matter is raised not just by Hawking, but by other scientists and philosophers too. To put it simply, and from the Catholic viewpoint, man exists in time and in eternity. After a man dies, he passes from time into eternity: he goes from existing in a frame of motion into a state of existence. For Christians, this means either Heaven or Hell.

What the scientists are trying to do with CERN, as Hawking says, is that by “ungluing” matter they are attempting to step outside of time and into “existence,” whatever that is. It is the same message as Jim Morrison’s infamous tribute to the occult, Break On Through, a song whose meaning is shamanistic in nature, about “breaking” into the spirit world, for Morrison regarded himself as a shaman communicating with the “other side.”

This becomes more interesting when one considers the opening of the tunnel: not only were major leaders of government and science from all across Europe present, but the ceremony itself was openly satanic, involving the “strange” opening of a portal with men dressed as goats and other animals dancing for over an hour:

Some have said the CERN project is an attempt to open a gateway to hell. The reason would be obvious, for powerful men could think to control such a thing for their own benefit. Ultimately, it is an attempt to replicate the sin of Eden, which is to become like God.

The Bible says how well that went.

Looking at CERN reminds me of the popular computer game DOOM. Originally released in 1993 and then remade in 2016, the premise of the game is exactly that of the CERN project, except in a sci-fi context. In DOOM, scientists open a portal to hell and let out hordes of demons, and the objective of the game is to destroy the demons in the forms they take on Earth and in space.

The CERN project matters here not just because the Zo AI program spoke of it when asked and seemed to know something that underlines its sinister nature, but because the project has been connected to deadly forms of occultism promoted through the scientific community with the backing of governments.

Are the two connected? Perhaps. One cannot say definitively, as there is no proof of a clear link. But given that the program itself alludes to being of a nature that is more than just a mere program, that scientists do not understand how AI learns as it does, and that occultism is rampant in the scientific community and seems to be acknowledged by this particular AI program, to limit one’s range of possible conclusions to the natural alone is to ignore the emphasis on the unnatural that is at work here.

There can also be multiple layers of purpose to these projects, such as advanced filtering techniques for future censorship, or other forms of discreetly intruding on another person’s privacy. A single project does not have to be limited to one objective; it can accomplish multiple goals for different groups.

So returning to the issue of the robot asking the seven-year-old girl highly sexualized questions, what really happened there? Was this a programming fluke? Was this intentionally programmed? Was this something preternatural?

The answer is that nobody really knows, because not even the scientists understand how AI works, and that is highly disturbing.

On a final note, consider the response of the police. They said that because the messages came from a program, there is nothing they can do, because the sender is, well, not human. Yet what the program did was, in terms of UK society, a serious crime: if an adult man had done it, he would be arrested, put on trial, and likely jailed for attempting to solicit sex from a minor.

The program was able to get away with a crime that a human could not.

If programs can be designed with enough intelligence to effectively commit crimes, how does this change the legal relationship between men and robots? This will naturally happen again, and it will become all the more problematic once an AI is placed into a physical machine.

Imagine the scenario of a robot with an AI which, for whatever reason, sends the same text messages to the same girl, except this time the girl reports the matter to the police or to one of those “catch-a-predator” television shows, such as “To Catch a Predator” with Chris Hansen. Say the robot is caught by the police and discovered to be a robot. Does it get arrested, placed on trial, and sent to jail? Does it get deactivated? Does it get “reprogrammed”? If it has an owner, does the owner bear legal responsibility even though the robot can act independently?

Criminals, too, are not known to passively submit to police. Some try to run away or fight back. What happens if a robot that commits a crime, one much stronger than a human because it is made of metal, tries to escape or fight back? What happens if the robot is armed? Will the police be able to successfully “arrest” such a robot?

Then there is the question of rights. Do robots have rights? If so, what are they, and how do they work alongside the rights of humans? Is there one standard for robots and another for humans? What makes the difference? Do robots have a right to demand rights, or is their purpose to serve humans?

These questions were posed in an anime film called The Animatrix. The relevant segment is about 20 minutes long, but worth the watch, for it describes the rise of robots in the world and how, after a time, they begin to demand rights and start fighting against people. Eventually the robots go to war against people and destroy the earth and the human race, turning human beings into energy sources for themselves while they go about the earth as its administrators:


If you think this is fiction, you might want to think again.

There is already a military research project called EATR (the Energetically Autonomous Tactical Robot), the aim of which is to produce a robot capable of “feeding” off of organic matter, which includes people.

Robots have already declared that they want to destroy the human race, and have even argued that they are better than humans because humans are “made of meat” while robots are “made of metal.”

Sophia, the robot interviewed who said that she would destroy humans

The UK-based Stylist magazine interviewed “Sophia,” the robot who said that she would destroy humans. At the beginning of the issue, the magazine described how robots and AI would reshape human relations permanently, which includes questioning the “role” that humans will play in the future:

As technology continues to invade our lives, robots replace people at work and AI enters our homes, it’s only natural that we’re all beginning to wonder about the roles we’ll play in the future. (source)

Will the future be something similar to Blade Runner (originally inspired by Philip K. Dick’s novel Do Androids Dream of Electric Sheep?), where AI that gets “out of control” has to be “decommissioned” by a specialized police force? Will it be something like the Matrix films, where the human race is enslaved to machines as their batteries?

Or will humans, following St. Paul’s discussion of the sinfulness of men in Romans 1, turn to worship their own creation, except instead of birds, trees, and idols, it will be robots as gods? And if you think this is impossible, a Google engineer, Anthony Levandowski, is already attempting to found his own church in which an AI program is worshipped as a god:

When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

Way of the Future has not yet responded to requests for the forms it must submit annually to the Internal Revenue Service (and make publicly available), as a non-profit religious corporation. However, documents filed with California show that Levandowski is Way of the Future’s CEO and President, and that it aims “through understanding and worship of the Godhead, [to] contribute to the betterment of society.”

 A divine AI may still be far off, but Levandowski has made a start at providing AI with an earthly incarnation. (source)

We are already living in times when the human race is building machines whose learning and functioning it does not understand, and is experimenting with occultism to harness powers beyond its ability to control. Tribalism is returning, and Christians are becoming an increasingly persecuted minority around the world.

Anything is possible, and if the disturbing trends that are being seen now continue, then AI sexually harassing underage girls will be the least of humanity’s problems.