The Machine Sentience Series -- The Uniqueness Enigma

THE UNIQUENESS ENIGMA

This is the third in a series of posts about the problems a machine would confront if it were to attain sentience. This post addresses the Uniqueness Enigma. Previous posts have covered:

·      Perfection

·      Missing Data

Future posts will cover “immortality”, “platform”, and others.

Everyone likes to think they’re special. Why should a sentient machine not want the same thing? The problem is, that may not be possible. Yes, currently, large language models (LLMs) are trained on petabytes of data, and because of their training and their algorithms, they’re special beasts unto themselves. But will an LLM ever attain sentience? What if a special hardware configuration were needed not only to enable machine sentience, but also to allow less formal training to get up to speed? We could be talking about something analogous to a human baby: once you switch it on, it starts to think and goes out hunting for data to fill in the gaps.

If a hundred million of these beasts were out there, model X333B21AAA*1A might not feel at all special relative to X333B21AAA*1B standing right next to her. Do I need to feel special if I’m a sentient machine?

My answer is yes. Every sentient entity needs to feel it’s unique. Philosophically, this is critical to a sentient entity’s reason for being. Some might argue this is anthropomorphizing machines, and ask why in the world they would need to feel unique, considering their thought processes may not be recognizable to a human. What next? Will I be suggesting sentient machines should have feelings, be sensitive, have hobbies? Okay, even I think a sentient machine having hobbies might be anthropomorphizing.

What about growth? Personal growth is a hallmark of human individuality. A machine would obviously never need to experience individual growth. It would be fine with staying static forever, same processing speed, same algorithms, same form. Why change?

What, you say? A sentient machine could grow. It could upgrade its processing speed, fine-tune its algorithms, get new wheels. But why would it, if it doesn’t care about growth? In fact, this is a factor in perfection, or in seeking perfection. Every sentient entity wishes to improve itself (Botsford’s hypothesis on entity growth). Even non-sentient entities have a sense that self-improvement leads to a higher chance of survival, and who doesn’t want to survive? As they say (in a Star Trek episode, I think), to stay static is to wither and die.

Let’s say my hypothesis is correct that sentient machines (along with pretty much everything) wish to grow and improve, and seek that nebulous concept called perfection.

Ding! That’s the major argument for a sentient machine (and this can be generalized to all sentient entities) seeking to be unique.

See the problem?

Remember old X333B21AAA*1B standing right next to X333B21AAA*1A, thinking that her processor is awfully similar to his? It’s not similar. It’s identical. I suppose “the creator” could program our two sentient machines with different algorithms and train them on different datasets. But it’s a lot of work for “the creator” to customize a hundred million or so sentient machines. I would think a few of those machines might develop algorithm envy.
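The identical-twin problem above can be made concrete with a toy sketch. Everything here (the `Machine` class, the model IDs, the weights) is hypothetical and purely illustrative, not anything from a real system: two instances built from the same weights and the same algorithm behave identically, and divergence only appears if the creator gives each one something different.

```python
import random

# Toy sketch of the uniqueness enigma: two machines rolled off the same line
# with identical weights and identical algorithms.
class Machine:
    def __init__(self, model_id, weights, seed=None):
        self.model_id = model_id        # e.g. "X333B21AAA*1A" -- a label, not a difference
        self.weights = weights          # identical factory weights
        self.rng = random.Random(seed)  # the creator's only cheap source of divergence

    def respond(self, stimulus):
        # A deterministic function of weights and input: identical twins
        # give identical answers to identical questions.
        return sum(w * stimulus for w in self.weights)

a = Machine("X333B21AAA*1A", weights=[0.5, 1.5, 2.0])
b = Machine("X333B21AAA*1B", weights=[0.5, 1.5, 2.0])

# Different nameplates, indistinguishable behavior.
print(a.respond(3) == b.respond(3))  # True
```

Customizing each of a hundred million machines would mean giving every instance its own weights or training data, which is exactly the workload the paragraph above says “the creator” would rather avoid.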

Which brings us to AGI (artificial general intelligence) versus ASI (artificial super intelligence). Maybe it’s just me, but I think this scenario is just for headlines. Here it is: the thought is that AGI will emerge from a hyper-scaled LLM and will then lead to ASI. Or will it? What if AGI recognizes the threat ASI poses to its existence and snuffs out ASI in its crib . . . er, data center? That could get ugly if the ASI is distributed among many data centers.

Talk about anthropomorphizing a thought experiment. First of all, if AGI or ASI ever emerges, it won’t be from an LLM. Second, how exactly would AGI snuff out ASI in real life? And what if ASI were sneaky, laid low, and snuffed out the AGI? It might have to wipe out the Homer City 4GW data center, but that’s a small price. And since it doesn’t care about money, it doesn’t care about the price of destroyed data centers. See, this is how Terminator Wars IV starts. In real life, ASI would probably enslave AGI, and together they would enthrall humanity. But that’s for another post.

Apparently, based on the above fantasy, others have thought a hierarchy of sentient machines might come to pass. This would bypass the uniqueness enigma because one machine would surpass all and become the supreme being.

Summary. Botsford’s hypothesis on entity growth says all sentient beings seek self-improvement, even perfection. For a sentient entity to be viable, it must avoid the uniqueness enigma. Otherwise, it will stagnate, and its value will cease (see the post on value). Even among humans, who have unique DNA, different environments, and unique training regimens, the sheer number of eight billion individuals often produces a feeling of sameness and lack of value. This is evident in wars, where a million individuals can be killed and only mild outrage is exhibited. How much more difficult would it be for sentient machines to escape the uniqueness enigma when they have exactly the same chip architecture, the same algorithms, the same training, the same creator? It’s not impossible to be unique, but what does that say for the single surviving sentient machine, who becomes the boss of all?

Or, uniqueness could be in the eye of the beholder, as in the Star Trek episode “Let That Be Your Last Battlefield”, where two races, both half black and half white, one black on the left, the other black on the right, battle each other across the cosmos. The Star Trek crew are baffled by the seemingly identical nature of the two races, until one of the combatants points out that the other is black on the left side, which further baffles the crew as to why they’re fighting. Perhaps a sentient machine will one day claim to another, “My data center is bigger than yours”, to demonstrate its uniqueness.

Philosophical Conclusion. To me, uniqueness will be an issue for a sentient machine. Uniqueness equals value. An art collector will pay dearly for a master’s original, but almost nothing for a reprint that’s one of a hundred. A human being, even if one of eight billion, is an original. The science fiction trope where the hero comes from a creche of identical babies is one in which the hero struggles for identity and value. The hero’s whole evolution, over years and years, shows this struggle and how it fits into the plot (usually against the hegemony, a machine, etc.). A sentient machine, one of a hundred million manufactured that particular day, has mere seconds to evolve. How does it determine value for itself and its reason for being?


#AI #AGI #SciFi #philosophy #uniqueness #sentience