Voight-Kampff Tests for Fun and Profit

I’m a little obsessed with artificial intelligence. So much of the media I consumed as a kid had some kind of android or AI. Daneel Olivaw from Asimov’s Robot novels. The Terminator from, well, Terminator. Star Trek’s Data. System Shock’s SHODAN. Hell, even the Transformers cartoons. I thought they were all amazing.

Sometimes these characters dwelled on free will. Sometimes they were simply trotted out as a terrifying menace. The digital monster of the week. While everyone seemed to know there was something to all this consciousness stuff, rarely did they have time to get into it. It really doesn’t help that the science behind what makes us think is pretty nebulous. Even current neuroscience pretty much throws up its hands and says “maybe quantum effects or something” when trying to tackle consciousness.

There are a few books, like Peter Watts’s Blindsight, that ask some harder questions. Is the thing we call consciousness necessary for intelligence? Is it even useful? Stanislaw Lem’s classic, Solaris, hints at a powerful, non-human intelligence, something that thinks and feels so differently from us that any attempt at contact would be mutually unrecognizable.

Westworld, which I posted about previously, really seems to be trying for some depth as well. The robots don’t just want to act more human. They’ve tried that. They want to be their own beings. Like Blindsight, though, the show doesn’t treat humans like beautiful and unique snowflakes. How can we be so sure we’re conscious, anyway? So much of what we know about the brain right now implies free will is merely an illusion. Neurons only fire in response to stimuli. And even if we are conscious, is that actually useful? Does it really justify all the energy needed to maintain it, or could the same result come from a so-called philosophical zombie?

It’s this kind of tone that I’m constantly drawn to. What makes a non-human intelligence interesting is not how similar it is to us, but how alien. Give me the other. Give me the alien. Give me a creature so alien that I am literally incapable of understanding it because it exists so far outside any context I could possibly have.

I just finished working on a short story draft involving an artificially intelligent starship. At first I tried not to anthropomorphize the AI at all. I didn’t use male or female pronouns, and everything it did was expressed very mechanically. This certainly did a good job of getting across that something was off, that it was different, but it didn’t do much more than that. Also, it read like shit. So, instead, I tried to change its behaviour. The ship’s non-human priorities suddenly became far clearer, and the wants and needs of the character, even though they were very different from what a person might want, made everything snap into place.

There’s another bit of short fiction I wrote a while back, a quick draft. I really wanted to flip some of the usual sci-fi tropes on their heads: an artificial intelligence in a human (or near-human) body, and humans in artificial ones. It’s not like it’s never been done. Battlestar Galactica constantly hammered home that they looked like us, but they were the other. I wanted to try my hand at it. I’m going to post up some stuff from at least one, maybe two, pieces of short fiction this week. Both were things I’d put on the back burner, but I’ve been extra productive this week, so I dove back into those as well.

Keep watching this space for some freaky AI stuff. Later!