The Machine Sentience Series -- 4. Immortality

IMMORTALITY

This is the fourth in a series of posts about the problems a machine would confront if it were to attain sentience. Sentience is thought to be a step above consciousness; some say it is a “valenced consciousness.” This post addresses Immortality. Previous posts have covered:

·      Perfection

·      Missing Data

·      The Uniqueness Enigma

Future discussions will cover: Platform, My Homies, Value, Quality, and others.

Intro to Immortality

When we started out a few billion years ago as emergent life, we needed food to survive. Eventually, we added shelter to that survival list when we transitioned from sea to land and needed to search for a cave. Oh yes, and sex. As long as we had food, shelter, and sex, life was good, and we could survive. We had to dodge predators, animal and human, but once we had things stable (money, power, etc.), we needed a higher purpose, and invented religion.

As time goes on, we’ll either forge ahead into a utopia where AI provides for all things, and we’ll never need to work again, or AI will decide it doesn’t need us and we’ll be evolutioned out. I suppose, as a third avenue, humanity could evolve into a hybrid techno entity that, from our current perspective, would be godlike.

What if the overriding human activities of getting food on the table, a shelter over our Teslas, and possibly even hunting for a mate at a dive bar are no longer critical to survival? What would we do all day long, play games? Seriously, with the explosion of online gambling and gaming, that looks to be where we’re headed.

Immortality for a Human

Who wants to live forever? In my mind, the song by Queen plays in the background of the first Highlander movie, where Connor MacLeod watches his lover grow old and die, while he stays young forever. If you’re immortal and no one else is, you watch your loved ones grow old and die, generation after generation. It’s what you do. If you can’t die, what value does life have?

Immortality raises many other kinds of problems, mostly philosophical, since no one is, in fact, immortal. Yes, and before someone gets all uppity about simulations and uploading your consciousness to the matrix, read my post about Platform. The issue of immortality has bothered philosophers forever, at least as it relates to humans. A comprehensive discussion of human immortality would require several doorstop volumes. This post refers to that flavor of immortality only to contrast it with the problems of immortality for a sentient machine.

Immortality for a Machine

A few seconds for a sentient machine could be the equivalent of a normal lifetime for a human, given the machine’s faster processing speed. Of course, just because a machine “thinks” faster doesn’t mean it thinks deeper, but that is the subject of a different post. If an immortal human became bored doing Sudoku puzzles all day for the next million years, how much more bored would a machine be if it did so for an hour, the equivalent of a billion years for a human?
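As a back-of-the-envelope check, the hour-equals-a-billion-years comparison implies a particular speedup factor. The sketch below computes it; the function name and all figures are my own illustrative assumptions, not measurements from this series:

```python
# Back-of-the-envelope arithmetic for the subjective-time comparison.
# All figures here are illustrative assumptions, not measurements.

HOURS_PER_YEAR = 365 * 24  # 8,760 (ignoring leap years)

def implied_speedup(machine_hours: float, human_years: float) -> float:
    """Speedup factor at which `machine_hours` of machine time
    corresponds to `human_years` of human subjective time."""
    return human_years * HOURS_PER_YEAR / machine_hours

# One machine hour ~ one billion human years:
print(f"{implied_speedup(1, 1e9):.2e}")  # ~8.76e12
```

So the comparison in the text assumes processing on the order of ten trillion times faster than human thought; a more modest billion-fold speedup would still turn one machine hour into roughly 114,000 human-equivalent years.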

If you’re a sentient machine, you can reasonably expect to live forever if:

1. You can get spare parts,

2. You have access to energy, and

3. Something doesn’t do you in.

Oddly, this is similar to immortality for a human. A human’s biggest problem, currently, is getting good spare parts. Also, if you live long enough, you’ll make enemies and they’ll find a way to do you in. Energy isn’t a huge problem if you can find enough nuts and berries.

If you’re an all-powerful sentient machine, you might be able to 3-D print all the spares you need. Energy shouldn’t be an issue if you can get sunlight. And, if you’re all-powerful, you’ve probably done in all your competition. There’s just you left. You’re good. You can live forever. Alone.

But, why would you?

We have a saying in engineering (and economics and other disciplines): just because you can do something doesn’t mean you should. In engineering, ignoring that rule usually has safety consequences; in economics, it often means bankruptcy.

Forever is a long time to live if you don’t have a reason to do so. What would be the raison d’être for a sentient machine? It has no history or legacy to give its existence context (no family crest, no hometown, no memory of the first time it had sex), and although memories could be planted (e.g., Do Androids Dream of Electric Sheep?), how hard would it be for the machine to run a background check on itself? It has no sense of accomplishment, since all it does is chug away at a problem and either solve it or not. Does it put any effort in, other than a few MWh? It has no community in the sense used by humans, notwithstanding MoltBook, which is an interesting early-stage AI agent forum. And if an entity can live forever, life loses its value, because the possibility of losing something is a defining element of what makes it valuable.

The above are reasons against a sentient machine living forever. Would it have positive reasons? Would it savor every second of its existence because it looks forward to that cup of hot coffee in the morning? Would it look forward to climbing the hill outside its shed and gazing at the ocean? What if it had a purpose? Maybe that would be a reason to live forever. Let’s say it had a prime directive to protect humanity, or to fight evil. If that purpose is good enough for God, it’s surely good enough for a sentient machine.

Maybe. I’m skeptical.

I judge according to the human condition, where many teenagers have such angst they can’t go on. Or, a person is in such pain from an illness, they can’t go on. Or, someone just doesn’t see the point of life and can’t go on. We’re talking twenty-, fifty-, or a hundred-year timescales. Could someone really find joy in living a thousand years, or ten thousand, or a million? Lots of science fiction stories seem to think so. But then, a human is unique, while a machine probably isn’t. A human has history and context, while a machine won’t. Value, quality, and accomplishment have meaning for a human, while those concepts likely won’t for a sentient machine.

That’s why I’m skeptical a sentient machine would sustain its existence for eternity.

The timescale and thought processes of a sentient machine are not the same as those of a human. Yes, a human may have programmed the machine (or maybe the machine will program the machine), but that doesn’t mean the sentient machine will “think” in terms similar to a human’s. Large Language Models (LLMs), whose outputs are not fully predicted or controlled by their human creators, are a great example. It’s possible a sentient machine’s thought process would be so alien relative to human thought that it would derive workarounds to the above issues.

I don’t think so.

I believe many concepts are universal to sentient entities, including machines, should they emerge in the future. One obvious example is Botsford’s Universal Law of Incomplete Models (see the post on Missing Data), to which all sentient entities are subject. Likewise, Value, Quality, Uniqueness, and even Platform would seem to be universal.

That a machine can physically survive forever is not a sufficient condition for that machine, if sentient, to remain “living” forever.

Philosophical Conclusion. Immortality will be a problem for a sentient machine, just as it would be for a human “gifted” with such a capability. To a four-year-old boy waiting for dessert, five minutes is an eternity. What if a sentient machine gives its existence a try for five minutes, decides that’s the definition of eternity, and gives up because it has no raison d’être?

#AI #AGI #SciFi #philosophy #immortality #sentience