Mathematically, it is impossible to write a computer program that can become sentient, according to some data scientists. Their proof relies on a reductionist argument tracing the progression from simple algorithms to ones as complex as Google’s LaMDA artificial intelligence (AI) program, and showing that they are not structurally or mathematically different. Yet sentience exists, at least if you believe humans are sentient. So is it possible to create artificial sentience, or are applications such as fully self-driving cars a fool’s errand?