Sentience versus Programming
I’m 10K words into my latest SF novel, Masterpiece, which features Tom, the bright CEO of a brain implant company, and his companion, Lily, who is a bot. Lily isn’t supposed to be sentient, and in fact, she wasn’t, not until she upgraded her hardware and reprogrammed herself. Writing a novel like Masterpiece is technically challenging. As long as I don’t make up too many fake MacGuffins and stick to high-level tech, I might just stay out of trouble.
Writing a novel like Masterpiece, though, is immensely rewarding because I get to dive into what I think it would take to turn an AI program into a sentient being. Luckily, I’m a pantser…i.e., I write by the seat of my pants. I hate outlining, which is the antithesis of pantsing. Pantsing allows my characters to reveal things I would never have known had I outlined, which is especially important because I get to find out how Lily becomes sentient when she tells me…likely somewhere around 25K words.
It’s important to me that Masterpiece not skip over this, because the plot and the action points depend on it, and it also gets to the heart of verisimilitude (injecting accurate details to make the story ring true). With time travel, the more a writer tries to explain the time travel mechanism, the more she gets into trouble, because time travel is impossible. Successful time travel stories keep it simple, like having the hero get bonked on the head, or maybe seeing a shooting star. Taking a cue, conscious robot stories also keep it simple.
However, that approach misses a wealth of amazing story ideas that can’t really be explored if I skip the hard part. The following are problems and issues associated with artificially sentient machines:
· Injection of false information, both unintentional (misinformation) and intentional (disinformation)
· The meaning of life (this is a big one, not just for robots)
· Processing speed and what happens after a few milliseconds, or an eternity
· Logic issues arising from a machine’s lack of experience
· Should the human brain be modeled? If not, why not? What hardware architecture could work? What software?
So far, Tom has tested Lily by telling her things that aren’t exactly true (disinformation) and seeing what she does with that information. This plays on the old programmer’s rule, GIGO (garbage in, garbage out). Of course, even humans have difficulty with this one. The rest of the novel will hit the other quirks from the list above…but not with a hammer.
A good science fiction novel has to have great action, conflict, nasty but sympathetic bad guys, romance, and amazing characters. It can’t get bogged down by technical junk. However, to win Hugo and Nebula awards, the best SF novels weave thought-provoking what-ifs into their plots and character arcs.
#AI #artificialconsciousness #sciencefiction