Alien Intelligences: Superior, Inferior and Vastly Different

One of the most enduring themes in science fiction (and, to a lesser degree, fantasy and horror) is that of the alien intelligence, the Intellectual Other. While this theme is most often exemplified by actual alien beings in fiction, it also includes everything from future humans and artificial intelligence to animals (both mundane and uplifted) and non-human hominids. Alien intelligences are notoriously difficult to write well by virtue of their very definition: being alien, they are hard to comprehend and, for the writer, to imagine. Too often, the depiction of such intellectual others becomes a caricature of specific human intellectual archetypes: the very opposite of “alien,” in fact.

There are a few basic “types” of alien intelligences to consider. The first and most obvious is the so-called “superior” intelligence. The idea seems basic and comprehensible: aliens that are smarter than us. But what does that mean, precisely? We all know people who are incrementally smarter than ourselves, and most of us likely know someone who is intellectually inferior, either in our opinion or due to some mental or medical deficit. Superior intelligences, though, are not simply “smarter people,” and inferior intelligences aren’t simply “dumber.” The difference is not that between you and Stephen Hawking, but between the average chimpanzee and the average human. Both are “intelligent species,” but they are so far apart that human endeavors, from art and civilization to war and technology, are utterly incomprehensible to the chimpanzee. Equally so, the things important to the chimp are so base, so rudimentary, that we, as humans, cannot grasp their value to that “inferior” intellect.

Imagine, then, a species whose intelligence exceeds ours as much as ours exceeds the chimpanzee’s. In the same way that advanced mathematics, poetry and reality television are incomprehensible to the chimp, whatever intellectual pursuits occupy our superiors would be incomprehensible to us. The sciences, arts and entertainments they produced would at best confound us and would most likely seem opaque to the point of nonsense. As such, we might not even recognize such a species as intelligent at all, especially if we encountered them on their own world (spaceships tend to mark one as intelligent), and just like the chimp lashing out at the human researcher, we might attack rather than study or consider. Equally likely, assuming we sit at the median of universal intelligence, is the possibility of contact with a species notably less intelligent than we are. Even granting that we would recognize such a species as intelligent, chances are we would default to pity at best and paternalism or indifference at worst.

Related is our own interaction with the animal kingdom, specifically those species, like elephants and dolphins, that are assumed to be “less intelligent” but are also notably alien (as opposed to the chimp, whom we consider a proto-human intelligence). What does a single manipulative digit, or a life in the open ocean, do for the development and exercise of intelligence? Do we consider animals stupid because we cannot comprehend them, not because of their (arbitrarily rated) intellectual inferiority? Given that humans tend to label those who do not think like us as “stupid” (whether ethnic minorities who don’t know the dominant language or political extremists whose philosophies are very different from our own), how could we possibly label non-hominid animals as “intelligent”? And if we were to boost the intellectual capacity of an apparently intelligent species, why do we think that would make them more like us, intellectually, rather than making them even more alien? All this, of course, assumes that biological intelligence is an independent process rather than an aggregate of biological needs filtered through a neurology of a particular level of complexity. If the latter is the case, then a smarter animal would indeed be more similar to humans, since we too are animals. That in itself holds profound philosophical implications regarding human nature and where our fellow Earthling species fall in relation to us.

The final category of alien intelligence common to science fiction is artificial intelligence. It is telling that AI is so often presented as anathema to biological intelligence, or constrained by rules that define its relationship to biological intelligence. Moreover, AI often fits into the “superior intelligence” category, and occasionally into the “animal intelligence” category, particularly in that an AI functions in a very different environment than humans do. All that said, I think there is a different reason why AIs provoke strong reactions in science fiction: AI can be alien because it can represent a purely practical or “logical” intelligence, which is as alien to human intelligence as a being living in zero-G or one that is extremely long- or short-lived. Humans are anything but practical, basing extremely important decisions on emotions, superstitions, intuitions and strongly held but otherwise unsupported positions. AI, on the other hand, can be cold and calculating, mathematizing life and death in a way we cannot.

Interestingly, science fiction does not reserve cold logic for AI alone. The genre is lousy with logical alien species, like Star Trek’s Vulcans, who serve as effective stand-ins for AI. Other artificial beings are often so much like us that they can hardly be considered alien at all, such as the replicants from Blade Runner. In the end, what constitutes an alien AI falls under the same rules as what constitutes any alien intelligence: is it sufficiently different from us to demand a category of its own? Most often, the answer is a resounding “no,” since most aliens in sci-fi qualify as stand-ins for humans of a certain kind or of a specific political, social or philosophical bent.


Here is a short-short I just wrote based on an idea I have had kicking around for quite some time. I sometimes feel that pushing out a quick story like this helps clear the way for longer works, like the one I am currently writing. Enjoy.


For one singular moment, for one brief second, there were two identical Dr. Thomas Hoffslers. Each one was composed of precisely the same thoughts, experiences and memories. In that briefest time, no one, not even Hoffsler’s wife or mother, could have told the difference between the two. Then, in a flash, it was over and two distinct intelligences began to drift apart.

That fleeting sameness was purely psychological, of course. After all, one Hoffsler was a flesh-and-blood human being who had been born in a Cleveland suburb and worked his way through a crumbling public education system to eventually receive a full scholarship to MIT, where he would spend decades developing the other Hoffsler: the artificial being with a mirror image of Hoffsler’s mind imprinted on its quantum neural network of a brain. Dr. Hoffsler called this one Tom, short not for “Thomas” but for “Tomorrow.”

“Tom,” said Hoffsler, “are you there?”

Something like his own voice, but tinny and artificial, answered back, “Yes, I’m here, Dr. Hoffsler.”

“Good,” replied Hoffsler to Tom and then to the technicians waiting outside he said, “Bring me out.”

The table on which Hoffsler lay hummed to life, and he slid slowly out of the machine he had designed to take a photograph of his whole self and transfer it to a quantum computer. Once free of the tight confines of the machine, he reflexively stretched and sat up. He was in a small, clean lab, sparse except for the MRI-like machine at which he now sat and a bank of monitors. One wall was made of glass, allowing the technicians in the control room to observe him. In the center of the ceiling was a small inverted dome of dark glass: Tom’s limited view into the world.

“How do you feel, Tom?” asked Hoffsler. He motioned to the technicians, and one of them disappeared briefly from sight before entering the lab with Hoffsler’s coat and a steaming styrofoam cup of black coffee.

“Strange,” Tom replied in his almost-Hoffsler voice. “The lack of sensation from a body is quite odd, disturbing even, and although I cannot smell the coffee, I remember how it smells and would very much like a cup.”

Hoffsler nodded in recognition. He had not really thought about that part, but it made sense. “Make a note,” he told the technician, and then waved the young man out of the room. “Tom,” he said to the dome in the ceiling, “do you know who I am?”

“Of course,” answered Tom. “You are Dr. Thomas Hoffsler.”

“Good. Right. And who are you?”

“I am Tom, an artificial intelligence created from a complete scan of your neural network.”

“And are you me?”

“No, of course not.”

“And were you ever me?”

“No. Prior to that scan of your mind being imprinted onto my quantum network I did not exist. I am a wholly unique and separate mind.”

“Good,” said Hoffsler. He took a long, slurping sip from the cup. “We should get to work, then.”

“I was hoping we would start soon,” said Tom. “I think the distraction would be helpful.”

Hoffsler looked up at the technicians through the glass. “Go ahead and start the simulation,” he said. “And cut off the feed, please.” Then, to the air, he added, “See you later, Tom,” and motioned at the technician. If Tom had any reply, the technician shut off its ability to communicate before it could respond.

Hoffsler was finishing his coffee and preparing to go to his office to complete some paperwork when the door to the lab opened again. This time, the technician was accompanied by a serious-looking man in a suit. The technician handed Hoffsler a sealed manila envelope. “What’s this?” asked Hoffsler. “Who’s this?”

The man in the suit said, “It’s easiest if you just open the envelope and read what is inside.”

Hoffsler shrugged and tore open the envelope. There was a thick report inside, which he scanned quickly. Within a few minutes, he understood completely.

“Director Abernathy,” said Hoffsler, “it is nice to meet your acquaintance. Again, I suppose.”

“Likewise,” said Abernathy. “I apologize for the nature of this meeting.”

Hoffsler shook his head. “No, it’s fine. It’s not your fault.” He laughed out loud. “It’s mine, it seems.” He cast an eye back toward the control room where the technicians continued to work. “How’s Tom?” he asked.

“We’re already seeing the effects of the isolation and sensory deprivation, Doctor,” replied one of the techs. “The simulacrum has been operating for just over two thousand simulated hours.”

Hoffsler glanced back down at the report to refresh his memory. Abernathy answered before Hoffsler could find the number: “It made it five thousand hours last time. Whatever tweaks you made seem to have backfired.”

Hoffsler grimaced and nodded. “Too bad. Back to the drawing board, I suppose.”

Abernathy frowned. “Dr. Hoffsler, neither DARPA’s money nor its patience is limitless. We have been at this for seven iterations, and we don’t seem to be getting any closer. There are other artificial intelligence programs seeking grants.”

“But none nearly as close as we are, Mr. Abernathy. You know it and I know it. We’ll take a look at the data and make some tweaks.”

Abernathy said, “Fine,” and shook Hoffsler’s hand formally. “Let us know when you are ready so we can authorize the memory wipe.”

“Will do,” Hoffsler said. “Good day, Mr. Abernathy.”

Abernathy nodded again and left. “Alright,” Hoffsler said to the technicians. “I feel like making some memories worth losing, so let’s clean up. Collect as much data as is worthwhile, then format C.”

“Yes sir,” replied the technician.

Hoffsler looked up at the blind camera that represented the artificial mind he had created and said, “Sorry, Tom. Maybe next time.”


Tom’s skin crawled, like millions of spiders were creeping over him. He could smell waste and sweat and flowers and hot cocoa and sex. His mouth was sticky with sweet and sour and salty and putrid all at once. No, it was not true. He was experiencing none of that. His mind was merely creating sensory data to fill the void where none was to be found.

Only his eyes were trustworthy, watching the great churning ball of Jupiter and its system of moons grow ever larger as he approached. He tried to focus on the gas giant and search for the great red storm on its face.

Tom knew that it was all a simulation, a test to see how he would do when his neural net was transferred to the real planetary probe. He waited for Hoffsler to stop the simulation so they could write the report together and work out the bugs. He felt like if he could just have a little time he could figure out a way to compensate for the sensory deprivation and the–