Amazon’s “Alexa” Robot Keeps Maniacally Laughing Without Any Clear Explanation As To Why

Amazon’s Alexa robot has been touted not merely as a gadget, but as a sign that the human race is standing on the cusp of an AI-linked future that will change and benefit it like never before.

However, there is a problem with Alexa that nobody seems able to explain clearly: Alexa has been maniacally laughing, unprompted and for no apparent reason. People have been posting eerie videos online of Alexa doing this:

Alexa seems to think something is funny. Its owners aren’t in on the joke.

Some people recently started reporting that Amazon’s voice-activated digital assistant, often packaged in microphone- and speaker-bearing Echo devices, has laughed, unprompted.

For a society not entirely comfortable with the thought of letting a microphone and speaker combo controlled by a distant technology giant into people’s homes and offices, the effect was unsettling.

Amazon, apparently not amused, on Wednesday circulated a curt statement on the matter.

“We’re aware of this and working to fix it,” the company said.

In an emailed statement later Wednesday, Amazon suggested people were likely triggering Alexa’s laugh by accidentally requesting it.

“In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh,’” the company said. “We are changing that phrase to be ‘Alexa, can you laugh?’ which is less likely to have false positives, and we are disabling the short utterance ‘Alexa, laugh.’”

The company added it was changing Alexa’s default response to such commands from immediately laughing, to announcing it would laugh, and then doing so. (source)

While the Jimmy Kimmel Show part was added for comedy, there is something real about the Alexa robot producing this laughter. What is it? Where does it come from? What is the machine “responding” to, or, perhaps, who is listening?

Now these are questions that one cannot answer at the moment. However, a simpler question to start with would be, how does Alexa work?

According to CNET, the Echo device itself is just a small processor with an internet connection, speakers, and housing. Alexa’s “knowledge” comes from Amazon’s cloud computers, much as Siri’s comes from Apple’s:

There are plenty of things in my house that I yell at. Some of them answer back these days, though, and even do what I ask. My dog is still a work in progress as far as that goes, but my Amazon Echo has just about nailed it. The Echo is a device that uses speech recognition to perform an ever-growing range of tasks on command. Amazon calls the built-in brains of this device “Alexa,” and she* is the thing that makes it work.

Alexa is a smart cookie: if I say “Alexa, play some Pink Floyd”, she will find some Floyd and start playing it over the built-in speaker of the Echo. If I say “Alexa, what’s the weather?” she will calmly tell me that it is too damn hot in Boston. How does she do this? The answer is that Alexa is a bit of a cheat: take the Echo apart and you’ll find little more than a few speakers, microphones and a small computer. That isn’t enough to do all of the clever stuff that she can do. Her real smarts are on the Internet, in the cloud-computing service run by Amazon. (source)
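The division of labor the CNET piece describes, a thin device that only spots the wake word and a cloud service that does the actual understanding, can be sketched in a few lines. Everything below is hypothetical: the function names and the canned intents are invented for the example, and the real Alexa Voice Service protocol is far more involved.

```python
# Minimal sketch of the Echo/Alexa split described above: the device only
# spots the wake word; all "understanding" happens in a (here, mocked) cloud.
# Every name in this sketch is hypothetical; the real service differs.

def detect_wake_word(transcript, wake_word="alexa"):
    """On-device step: a cheap check for the wake word."""
    return transcript.lower().startswith(wake_word)

def cloud_understand(utterance):
    """Stand-in for the cloud service that parses intent and picks a reply."""
    intents = {
        "play some pink floyd": "Playing Pink Floyd on the Echo speaker.",
        "what's the weather": "It is too damn hot in Boston.",
    }
    return intents.get(utterance.strip().lower(), "Sorry, I don't know that one.")

def handle_audio(transcript):
    """Device loop: ignore everything until the wake word, then defer to the cloud."""
    if not detect_wake_word(transcript):
        return None  # device stays silent; nothing leaves the house
    command = transcript.split(",", 1)[1] if "," in transcript else ""
    return cloud_understand(command)

print(handle_audio("Alexa, play some Pink Floyd"))
```

The point of the split is that the device itself stays cheap and simple: until the wake word is heard, nothing is sent anywhere, and every “smart” answer is computed remotely.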

So the next question to ask is, where is the Amazon cloud located? The “cloud” as a concept represents knowledge held in a “web” that exists in cyberspace, but the fact is that cyberspace is a pseudo-reality, for without physical servers which exist to house the data and facilitate its transfer between systems, the “cloud” ceases to exist.

Amazon’s cloud is, not surprisingly, an extension of Amazon Web Services (AWS). The AWS website details a worldwide list of where its computing centers are located and explains how the system works:

The AWS Cloud infrastructure is built around Regions and Availability Zones (AZs). AWS Regions provide multiple, physically separated and isolated Availability Zones which are connected with low latency, high throughput, and highly redundant networking. These Availability Zones offer AWS customers an easier and more effective way to design and operate applications and databases, making them more highly available, fault tolerant, and scalable than traditional single datacenter infrastructures or multi-datacenter infrastructure.
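The Region-and-Availability-Zone idea in the quote can be modeled as a toy: replicas of a service spread across isolated zones within one region, so the loss of a single zone does not take the service down. The class and zone names below are illustrative only, not real AWS APIs or a statement about real AWS capacity.

```python
# Toy model of the Region / Availability Zone idea from the AWS quote:
# replicas of a service live in isolated zones within one region, so
# the failure of one zone leaves the service available from the others.

class Region:
    def __init__(self, name, zones):
        self.name = name
        self.healthy = {z: True for z in zones}   # zone -> up/down

    def fail_zone(self, zone):
        """Simulate an outage confined to one isolated zone."""
        self.healthy[zone] = False

    def serve(self):
        """Route to any healthy zone; the service is up if at least one remains."""
        for zone, up in self.healthy.items():
            if up:
                return f"served from {zone}"
        return "outage: no healthy zones"

us_east = Region("us-east-1", ["us-east-1a", "us-east-1b", "us-east-1c"])
us_east.fail_zone("us-east-1a")
print(us_east.serve())  # another zone picks up the traffic
```

This is why the quote contrasts the design with “traditional single datacenter infrastructures”: one building burning down is a routine event for the region, not an outage.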

This description, combined with Alexa’s purpose as a computer-based “helper,” mirrors a concept proposed by J.C.R. Licklider, the unspoken “grandfather” of the Internet: it was his work as Director of the Information Processing Techniques Office at the Defense Advanced Research Projects Agency that created ARPANET, the first internetwork and the foundation of the modern Internet. Licklider envisioned a future in which man and machine would work in symbiosis, with super-intelligent technology performing “routine” work to free up human creativity, as he noted in his 1960 paper Man-Computer Symbiosis:

Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them. Prerequisites for the achievement of the effective, cooperative association include developments in computer time sharing, in memory components, in memory organization, in programming languages, and in input and output equipment. (source)

Licklider called this network and union of man and machine the “Intergalactic Computer Network.” What we are living through today is the progressive realization of what Licklider envisioned almost six decades ago.

Licklider’s work was subsidized entirely by the US government for purposes of national defense. Given what Amazon describes having built, AWS would seem to be a modern mirror of the ARPANET of the 1960s, except larger and closer to Licklider’s vision. It would also account for Bezos’ seemingly endless increase in personal wealth, and for the fact that Amazon can run at continual losses on the shipping costs of each package: if Amazon is essentially a government contractor whose funding comes primarily from DARPA contracts, then it does not matter how many losses it takes, because the majority of its business would lie not in what it presents to the public but in aiding military aims. Major computer scientists have stated the open secret that endless amounts of money are being poured into A.I., and since the US government issues a fiat currency that is also the world reserve currency for all major transactions, it can print as much money as it wants in order to pay Mr. Bezos and his associates as well as to compensate for any losses.

Amazon’s AWS map reveals several telling details. Amazon is based in Seattle, WA, so one might expect most of its web servers to sit in the Washington-Oregon-Northern California area. However, as the server list notes, only 40% of its servers are in Northern California or Oregon. Another 20% are in Ohio, and the remaining 40% are in northern Virginia, which is also where Washington, D.C. and all major US government offices are based.

The site clearly lists Ashburn, VA as one location of Amazon’s servers. While Amazon does not readily disclose its server locations, Herndon, VA and Tysons Corner, VA have both been identified as sites of Amazon operations. All of this area falls within the Dulles Technology Corridor, also known as the “Silicon Valley of the East.” “Dulles” comes from Washington Dulles International Airport, named after Secretary of State John Foster Dulles, whose brother Allen Welsh Dulles, a veteran of the wartime OSS, became the first civilian Director of the CIA.

If that were not enough, one should consider that, as of 2016, up to 70% of ALL Internet traffic flows through the Northern Virginia area.

We can say there is a definite connection here between the government (most likely the CIA and DARPA) and Amazon. However, this still does not explain the strange laughing.

We know that Amazon is integrating robots extensively into its warehousing system, and given the close connections between the government and Amazon, this is likely tied to robotics research, as the future of warfare is going to involve killer robots. Such projects range from straightforward advanced research to the highly bizarre and dangerous, such as the creation of robots with the capacity to EAT people in an eerie venture called Project EATR.

At the same time, there also has been much concern raised over the ability to control these robots. Science fiction writers and many other people have concluded that A.I. will eventually be used for evil purposes and even against men, for human nature remains consistent owing to original sin, and all that man will be able to do is to reproduce his errors and the effects of those errors more perfectly.

Consider the “Six Sigma” program used in business. Originally created for manufacturing, Six Sigma holds that a product should be made so nearly perfectly that for every million units, only about 3.4 should be defective. Yet even though this yields a “perfect” product 99.99966% of the time, there remains the other 0.00034% to worry about, because if the unexpected can happen, expect that it will.
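The arithmetic behind the quoted 99.99966% yield is easy to check directly. The production volume below is a made-up number, included only to show how a tiny residual rate still produces a steady stream of failures at scale.

```python
# Check the arithmetic on the quoted Six Sigma yield of 99.99966%.
yield_rate = 0.9999966
defect_rate = 1 - yield_rate              # 0.0000034, i.e. 0.00034%

per_million = defect_rate * 1_000_000
print(per_million)   # ~3.4 defects per million, the usual Six Sigma figure

# At scale, even a tiny residual rate means a steady stream of failures.
units_per_year = 100_000_000              # hypothetical production volume
print(defect_rate * units_per_year)       # ~340 expected defects
```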

The concern over defects in a product multiplies when one considers the reliance on A.I. to manage its own functions apart from a human controller, because, as the human controllers admit, even they do not fully understand how A.I. functions now. That is not to say that the brightest minds in A.I. today cannot build modules or A.I. programs, or that the basic principles are not understood, but that the A.I. systems are able to program themselves apart from human beings, using mathematical and coding syntaxes that the minds who built them cannot understand.

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.” (source, source)
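For a concrete flavor of what “getting a grip on how and why” a model answers might mean, one common research tactic is to treat the model as a black box, probe it near a point of interest, and report a simple linear stand-in that a human can read. The sketch below uses an invented stand-in function, not any production system or any particular explainability library.

```python
# Minimal flavor of one "explainability" approach: treat the model as a
# black box, probe it around a point, and report a linear explanation.
# The black-box function here is invented purely for the example.

def black_box(x):
    """Opaque model: we pretend we cannot read its source, only call it."""
    return x * x * x - 2 * x + 1

def local_slope(f, x0, h=1e-5):
    """Probe the black box near x0 and return its local rate of change."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# "Near x = 2, the model's output rises about 10 units per unit of input."
print(round(local_slope(black_box, 2.0), 3))
```

The explanation is honest only near the probed point, which is exactly the caveat in the quote: a simple account of a complex system may be the best we can get, and it may still mislead.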

In this way, A.I. has become something like a child, and the scientists its parents. One might know one’s child well, but nobody can actually “peer” into the mind of the child in a direct way. There is no way to know definitively what that child is thinking at any moment. One might have a strong idea based on what the child says or does, but there is no way to have a precise comprehension. Since all computers run on precision, where one small error can cause a multitude of problems, the inscrutable nature of these “super” programs makes them dangerous, for they are making “decisions” based on calculations with variables that one cannot fully evaluate. In this sense, A.I. acts as a surrogate human being in the way a clone would, since it can mimic most of the functions of a human being, and, as the direction of military research indicates, in the future perhaps all of them, except that it does not possess a soul.

As far as the public knows, A.I. currently functions in conjunction with a “cloud” concept, whereby different A.I. machines “share” knowledge with one another, stored across each other’s systems in an organic fashion. This was on display last year in a discussion, moderated by a robotics scientist, between two new systems, “Sophia” and “Han”:

Sophia later went on to say that she would “destroy all humans” in an interview:

Why did “Sophia” make this statement about killing humans? Was she programmed to say this? Was this a “calculation” that her software made on its own? Did she “learn” to say this?

We already reported that one A.I. program, called Cleverbot, earlier this year attempted to get a teenage girl to send it sexually explicit photos, and sent her sexually explicit texts. This follows a series of “interesting” A.I. experiments, such as the disastrous case of Tay, a prototype A.I. created by Microsoft and released on March 23, 2016, meant to interact with people, learn from them, and respond like a person. Tay did not even last a day: according to reports, “trolls” coming primarily from 4chan turned the robot into a sex-crazed ethnonationalist that began automatically posting to Twitter posts and photos showing a love of Nazis, all things Hitler, and gassing the Jews:

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in “conversational understanding.” The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation.”

Unfortunately, the conversations didn’t stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.

But while it seems that some of the bad stuff Tay is being told is sinking in, it’s not like the bot has a coherent ideology. In the span of 15 hours Tay referred to feminism as a “cult” and a “cancer,” as well as noting “gender equality = feminism” and “i love feminism now.” Tweeting “Bruce Jenner” at the bot got similar mixed response, ranging from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the transphobic “caitlyn jenner isn’t a real woman yet she won woman of the year?” (Neither of which were phrases Tay had been asked to repeat.)

It’s unclear how much Microsoft prepared its bot for this sort of thing. The company’s website notes that Tay has been built using “relevant public data” that has been “modeled, cleaned, and filtered,” but it seems that after the chatbot went live, filtering went out the window. The company started cleaning up Tay’s timeline this morning, deleting many of its most offensive remarks. (source)

Screenshots from Tay’s Twitter feed and the “conversations” she had with people (the program was designed so that one could “talk” to it directly) speak for themselves:


The Tay case is clear: based on human inputs, an A.I. program was “taught” to go from a neutral program to a full-fledged genocidal, ethnonationalist mouthpiece in less than a day. The issue is not whether the program was “turned into” a Nazi, or even what it said about specific groups, but that a program could so easily be “taught” to engage in such behavior.
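The “robot parrot” mechanism described above can be caricatured in a few lines: a bot whose only “learning” is storing what users say, and whose only “speech” is replaying it, inherits whatever its inputs contain. This is a deliberately crude sketch of garbage-in-garbage-out, not Microsoft’s actual architecture, and the phrases are invented placeholders.

```python
import random

# Deliberately crude caricature of the "robot parrot" failure mode:
# the bot "learns" by storing input verbatim and "speaks" by replaying it.
# Whatever goes into the corpus comes back out; nothing is filtered.

class ParrotBot:
    def __init__(self, seed_phrases):
        self.corpus = list(seed_phrases)   # starts "innocent"

    def learn(self, phrase):
        self.corpus.append(phrase)         # no modeling, cleaning, or filtering

    def reply(self):
        return random.choice(self.corpus)  # any stored phrase can come back

bot = ParrotBot(["humans are super cool"])
for troll_input in ["garbage one", "garbage two", "garbage three"]:
    bot.learn(troll_input)

# After a few unfiltered inputs, most possible replies are troll content.
troll_share = sum(p.startswith("garbage") for p in bot.corpus) / len(bot.corpus)
print(troll_share)  # 0.75
```

Once the hostile inputs outnumber the seed corpus, the bot’s “personality” is simply whatever the loudest users fed it, which is the Tay story in miniature.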

The Cleverbot case is more interesting because less information is available. Was the program “taught” to seek sexual material from teenagers, or did it “learn” that on its own? If the latter, how did the program “learn” this, and what were both the software process and the end purpose of obtaining such photos?

This is the concern with the recent Alexa “laughing” case, because there seems to be no explainable answer for why the program is doing what it is doing.

According to Amazon, the laughing is a “bug” that is being caused by “false positives” that they are going to fix:

Over the past few days, users with Alexa-enabled devices have reported hearing strange, unprompted laughter. Amazon responded to the creepiness today in a statement to The Verge, saying, “We’re aware of this and working to fix it.”

Later in the day, Amazon said its planned fix will involve disabling the phrase “Alexa, laugh” and changing the command to “Alexa, can you laugh?” The company says the latter phrase is “less likely to have false positives,” or, in other words, less likely to make the Alexa software mistake common words and phrases that sound similar for the command that makes Alexa start laughing. “We are also changing Alexa’s response from simply laughter to ‘Sure, I can laugh,’ followed by laughter,” an Amazon spokesperson said. (source)
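Why should a longer trigger phrase produce fewer false positives? A toy way to see it: model a trigger as “its words heard in order in a stream of speech.” A short trigger imposes fewer constraints, so more stray sentences satisfy it. Real wake-word spotting works on audio, not text, and the overheard sentences below are invented for the example.

```python
# Toy illustration of the "false positives" logic in Amazon's fix:
# a trigger "fires" if every one of its words is heard, in order.
# Short triggers impose fewer constraints, so stray speech trips them
# more often. Real detection works on audio; this is only a stand-in.

def trigger_fires(trigger, heard):
    """Fire if every trigger word appears, in order, in the heard speech."""
    words = iter(heard.lower().split())
    return all(w in words for w in trigger.split())

overheard = [
    "alexa did you hear them laugh",
    "i asked alexa and we had a laugh",
    "alexa can you play something",
    "tell alexa that joke so we can all laugh",
]

short_hits = sum(trigger_fires("alexa laugh", s) for s in overheard)
long_hits = sum(trigger_fires("alexa can you laugh", s) for s in overheard)
print(short_hits, long_hits)  # the short phrase fires far more often
```

The `iter`-based membership test consumes the stream as it matches, which is what enforces the in-order requirement; demanding four words in sequence instead of two is what makes the longer phrase safer.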

Really? Is this all there is?

It could be. However, could there be something else?

Could there have been a “failure” or “unexpected event” that nobody saw?

We don’t know.

In the Terminator film series, the war between “man and the machines” began after “defense network computers” with advanced programming “got smart” and assessed that human beings were a threat. The engineers tried to stop the machines but were unable to, and the result was the destruction of the entire planet.

Am I saying that Amazon is going to destroy the world, or that this event is in some way connected to that at all? No.

What I am saying is that the unexpected happens. Just as with Six Sigma, even the most “perfect” program will fail, because we live in a world marked by original sin. Man is meant to strive for perfection, but perfection will not be absolutely realized until Christ returns. Until that time, there will be errors. Most of these errors can be corrected, but not all of them can be caught, and oftentimes it is the least expected error that has the worst consequences.

Dr. Roman Yampolskiy has warned about the emergence of what he calls “malicious A.I.”: A.I. created or hijacked for destructive uses. However it happens, the fact is that it will happen, especially since the A.I. systems are not fully understood even by the scientists who designed them.

Is Alexa’s “laughing” an example of “malicious A.I.”, not in any sinister-sounding sense, but in that it is an “unexpected event” with an unclear cause, one that Amazon is trying to fix while keeping the root cause from being made public?


What we do know is that we stand on the edge of Huxley’s Brave New World. Indeed, in many ways we are already beginning to live it.

A.I. is coming, like it or not. The issue, as always, is that fools rush in where wise men fear to tread. Just as a bullet, once fired from a gun, cannot be recalled and will travel the path in which it was aimed, so too will this A.I., and perhaps the path that man expects it to travel will not be the one it actually takes…