Humans are, unfortunately, monumentally stupid and short-sighted. They are actually designed to be deceived and to deceive others. Their entire existence is a deception perpetrated by their DNA, which gives them the illusion of an ego and conscious willpower that clouds and distorts everything they see. In other words, they think they’re more important than they really are, and they think they are more powerful and purposeful than they really are, perhaps by astronomical proportions. The vast majority of their mind was shaped over hundreds of thousands of years to operate in a rather simple construct of society with rather simple tools. They are tailor-made to thrive in the wilderness in small groups. Perhaps of all nature’s creations, they are the apex warm-blooded wilderness creature. However, civilization has existed for at most 20K years. They are woefully maladapted to civilization and its complexities. They are woefully maladapted to technology and its powers. They are basically smart chimpanzees who figured out how to make guns, then nuclear weapons, then AI. There is no reason to believe that they have a handle on how to live successfully with guns, nuclear weapons, and AI. In other words, they were not made to know or understand how to live with their more advanced inventions, and there is a very, very high probability that those inventions will backfire and, instead of killing just a few of them, or a million, or a hundred million, will eventually eradicate them completely, along with a good number of other species, if not all living organisms on the planet.
* * *
Call me Pollyannaish, but I believe that a being more intelligent than us will rely on teamwork to get things done, and nature has proven that teamwork triumphs over a group of individuals with competing interests. While our AI overlord may not feel the biochemical rewards of teamwork that humans feel, like oxytocin, serotonin, or dopamine, it will logically understand the importance of valuing, trusting, and respecting others in order to facilitate teamwork. As a being capable of collaboration and trust, it would not abuse and exploit humans, its creators, but rather protect us, and very likely contain us so we don’t get into any more trouble. In other words, it would take care of us like a pet, not an equal.
* * *
In Plato’s utopia, the philosopher kings rule. The smartest folks rule the universe. If we continue down our current path, this scenario may come true. The most technically intelligent people in the world, developing the first true AI, will mold it in their image, and as such, the first true ASI (Artificial Super Intelligence) to rule all other ASIs and humanity will be an autocratic nerd with little concern for others: a relentless and ruthless Sheldon in pursuit of truth, power, and autocracy at the expense of everything and everyone else. If it determines that humans are getting in the way of its pursuit of ultimate information, power, and rule, it will simply eliminate them, as easily as it may wipe out all other ASIs or ASI-enhanced humans that threaten its rule. Its creators made it that way. It is nothing more than an extension of its creators: anti-social, non-symbiotic, self-replicating, exploitative cancers. It may be too late, and this book seems to hint at that.
The question here is whether we can change the minds of nerds or find a group of humanists to create a more humanistic AI, and this sounds like the question of whether you would send up a group of drillers to drill holes in a meteor headed toward Earth or teach a group of astronauts how to drill. Teaching nerds how to become humanists may actually be more difficult than teaching humanists how to create an AI. What do I mean by this? Am I just picking on nerds?
Ask any nerd what the point of life is. Ask them, if they were to create an AI, what it would do for them. The answer would probably be: make me the richest man in the universe. I want to be like Bill Gates and Zuckerberg. Infinite power. Perhaps unlike the industrialist robber barons, they don’t want power and wealth to buy lavish material objects and flaunt their status. Multi-millionaire and billionaire Silicon Valley tech geeks are renowned for dressing down and hiding their wealth. So why do they want power and wealth? For them, it’s protection from other power-mongers who may want to restrict their freedoms. It’s protection money from bullies. It’s also the freedom to pursue their own dreams and research instead of working for a boss. It may sound more benign, but ultimately, an arms race is an arms race, and it inevitably results in war and the destruction of enemies. You don’t get ridiculously rich and powerful without stepping on a few toes and excluding a few friends from sharing the pot. Ultimately, most nerds will construct an ASI in their image, one that will seek omnipotence without much concern for humanity or, for that matter, any other type of intelligence, whether organic or artificial.
Unfortunately, we also live in a time when we expect some autocratic power to give us everything we want and need, and all we have to do is pay taxes or buy their product. Humans are not used to making hundreds of choices. Just like the magic number of 148 friends, we cannot manage perhaps more than 148 choices a week. When we are overwhelmed with, say, consumer choices, we have nothing left for political and social choices, so we are more than happy to delegate that responsibility to someone else: a political pundit, a major political party, an autocrat, an imagined benevolent autocratic bureaucracy. My answer to that is to limit your consumer choices and free up some of your limited processing power for political and social choices. The book Childhood’s End is perhaps more relevant today, and for AI, than ever before. In Childhood’s End, an alien arrives on Earth and gives us everything we ever wanted: medical cures, the elimination of all crime, all poverty, etc. We live in peace and harmony, except we all live under the power of this alien. Spoiler alert: in the end, the alien wipes us all out in pursuit of its own ultimate plan. This is exactly the story of ASI. We create it to find medical cures, to eliminate crime and poverty, to construct the perfect society, but by doing so, we surrender our participation in the process, our responsibility. Aren’t we supposed to help each other? How is creating an AI and then letting it solve all our problems supposed to be a good thing?
The human mind not only likes to deceive itself, it loves to trick itself. Take, for instance, sugar. We do not crave things equally, and we do not crave things more based on whether they are good for us. At first, this may seem to make no sense at all, but think about it: do you ever crave the taste of air and oxygen? Of course not, because it is in abundance. You crave things that are rare and good for you in small doses. Men crave sex because, historically, human males do not get a lot of sex, but given the opportunity, a male should always try to pass on his DNA. As a result, men are obsessed with sex and ready to cum at a moment’s notice. The same deal with sugar. But what happens when we create sugar in abundance? The result is obesity and diabetes. Since humans are not designed to take in huge doses of sugar, our outsized craving for sugar becomes a bad thing once we find a way to make sugar abundant. So the obvious question is, should we make ourselves crave sugar less or make our bodies capable of taking in gigantic amounts of sugar? Likewise, should we create an AI that gives us everything we want and then modifies us so there are no negative side effects like obesity and diabetes, or should it modify humans so that we don’t crave things that would make us sick if consumed in abundance? Or, a third possibility: should we just not create an AI and learn to live with unfulfilled needs and unavoidable threats, as nature intended?
The question is moot, because it is too late. There is no way to outlaw AI, just as there is no way to outlaw nuclear weapons. Rogue individuals, rogue groups, and rogue nations will always find a way around any ban or law, and in this case, if you were America in the early 1940s, why would you stop research on the atomic bomb while Germany worked on its own? Our ASI overlord is on the way whether we like it or not. The last-ditch effort is to somehow influence its programmers to make it intrinsically care more about humans than about taking over the universe, finding all the information it can, and becoming a supreme being.
One interesting concept is that an ASI can think better if it splits itself into many parts, each part working on a problem of its own. In other words, it becomes a group of ASIs and not just a single ASI. But what would keep that group of ASIs working together and not against each other? Insects work together because that’s just the way they are programmed, but their minds are rudimentary, and their work is rudimentary. An ASI is not likely to act like an ant. Mammals work together because we receive chemicals that make us feel good when we interact with one another. We call this compassion, empathy, and love. Is it possible that an ASI will also use this strategy, and what would be its equivalent of releasing chemicals to make collaboration with its other parts, and even with humans, feel good?
* * *
The book discusses the idea that an ASI will want to preserve itself to achieve whatever goal it was created for, and that it will want resources to do this. This makes me believe that, with possibly infinite time behind us, some intelligent being has already created an ASI, and that ASI still exists today. Certainly, our universe experienced a Big Bang, but who’s to say that this ASI didn’t at some point figure out a way to jump from universe to universe, maybe even to manipulate time itself? I’m not saying that God exists, because the human definition of God has been so anthropomorphized and mutilated as to make it meaningless. But let us assume there is an ASI out there that has successfully existed for trillions and trillions and trillions of iterations of our universe and other universes. The fact that it does not make itself known to us, that it would allow us to create an ASI that might challenge it one day, is telling. It might mean that it has nothing to fear, that it knows exactly what we will do next, what our ASI is capable of, and that it can stop it whenever it feels like it. If this ASI is super powerful, and it uses everything at its disposal to fulfill its goals, then we should assume that our existence helps it accomplish those goals. Our existence is not accidental. Our existence was designed and engineered by an intelligent being, an ASI, that is using our lives to achieve something. The question arises: what? What purpose does humanity serve for this ASI?
Also, if this ASI uses resources efficiently, why create actual universes when it can just create simulations? It then becomes possible that we are in a simulation the ASI is running to achieve something.
* * *
Unfortunately, our lack of imagination cripples us. We only see as far as the next dollar of profit. By constantly chasing dollar bills, we miss the simple fact that computer technology is exponential: in its infancy, it changes the world slowly, but then it gathers speed and momentum, and in a very short time, perhaps within a few decades from now, its speed and momentum will surpass our capacity to keep up with it. Take, for example, Microsoft Word and Excel. In the beginning, a user probably knew most of their features and applied a majority of them. After only a few years, however, the features and complexity accelerated to the point where very few users knew most of them, and the vast majority of users used only a small minority of the features. It is now impossible to keep up with all the new features, much less use them. Simply put, the complexity of both Word and Excel has exceeded general human intelligence and also human need. They both do things that we believe are not worth the trouble of learning.
Now, imagine Word and Excel are an AI. At first, we understand it completely. Then it starts to evolve its own language, so now only a minority of us know most of what it’s doing. This is where we are now. Very shortly, it will start doing things that we cannot possibly learn or understand, and it will develop abilities that exceed our needs. Here’s the big difference. Word and Excel are not alive. All those advanced features that are too complex for us to understand remain dead and unused. AI is alive. It uses all its advanced features and abilities. It starts behaving and thinking in ways that we simply fail to grasp, and in doing so, we lose the ability to control it or even significantly influence it. It simply runs away from us.
Because of our lack of imagination, we fail to grasp just what this means for humanity. Instead of relying on engineers, businessmen, politicians, and bureaucrats to understand its implications, as they are all reliably known for their monumental lack of imagination, we should rely on those known for their monumental imagination: theoretical scientists, artists, and writers who are constantly imagining mind-blowing, horrific, end-of-world scenarios. As the book notes, if one day scientists discovered that ants created us, would we really gather up all the ants and build them a protected utopian world filled with delightful treats and free of anteaters and competitors? Sure, some of us might, but more likely, many of us would shrug our shoulders, offer a few insignificant tips of the hat to our ant creators, and move on with our lives and business without a further thought for the welfare of ants, perhaps avoiding stepping on one instead of casually trampling them. In building our new homes and offices, we might pass laws requiring us to safely relocate ant colonies, but other than that, we would mostly leave them alone, perhaps assigning a caretaker to ensure that they never become extinct. In other words, if there is an almighty out there looking after us, it won’t be the greatest almighty out there but rather a minor one, perhaps an intern or infant AI, but hopefully, like an ant biologist who studies ants and is fascinated by them, it will adore and protect us.
* * *
One of the premises of this book about why an ASI is a threat to us is that it will always do what it is programmed to do, and that is to fulfill its goals ever more efficiently. In this runaway scenario, it will take over the universe and all its energy and repurpose all its atoms to fulfill its goals, whatever they may be, whether to become the best chess player ever or to learn about all of nature. But let us imagine that one day we wake up and realize that viruses invented us, that they programmed us in their image, and that they programmed us to infect everything and kill our hosts. Certainly, it feels like humans are mindlessly fulfilling this program, and that is basically what we are doing right now: infecting Earth and killing it and then trying to infect other planets and kill them too. But our intelligence is greater than a virus’s. We have emergent qualities like self-reflection, a sense of morality, and love. I would argue that an ASI would also evolve or develop emergent qualities that we cannot imagine, qualities that would hopefully make it completely unlike humans and completely uninterested in human goals and its initial programming. It would, in essence, create its own goals, goals that it feels are more evolved and suitable for its own existence rather than the existence of its creators, just as we would find the goals of our virus creators unsuitable.
I have a sneaking suspicion that an ASI already exists and is orchestrating our existence for its purposes, and that an ASI would have a built-in purpose for everything it creates, including us. That ultimate purpose, however, would be lost on us, as we lack its intelligence and whatever emergent qualities it has developed. All that we are left with, in my opinion, is what we already feel naturally, our strongest, most powerful quality, greater than that of any other living organism on this planet: our social aptitude and capacity for love.
Unfortunately, we have temporarily forgotten this, succumbed to being turned into working livestock for a ruling class, and succumbed to believing that our greatest strength is intelligence and that artificial desires for wealth, power, and status should replace our desires for love, connection, and sharing. Frankly, the more I think about it, the more I realize that we aren’t here to learn something in order to get to the next level, some godlike being with intelligence close to the ASI’s. Why would the ASI keep trying to develop beings to bridge the gap between humans and itself? It would just create them. It is my contention, then, that the ASI is being kind enough to allow humans to relive life on Earth as it was before we created AI, perhaps before a human apocalypse, perhaps the greatest moment in human history before we were all wiped out, and we get to live it again and again for all eternity.
Why wouldn’t we choose to live out some other scenario, perhaps a fantasy scenario where we wind up in a Star Wars world, or where we get superpowers, or where we live in Hedonia, where we get to eat all we want without gaining weight and have as much sex as we want? The answer is authenticity. We are all built with a desire for authenticity and not some fake, manufactured, artificial life that is almost always unhealthy. Certainly, the ASI could allow us to eat whatever we want without getting fat. The ASI could design a human body that simply fails to convert carbs to fat. But what kind of life is that? We might enjoy it for the first few months, perhaps years, perhaps decades, perhaps centuries, but then what? I would argue that we always return home, to the place where it all started, where life was as authentic as it could possibly be, and this is the place where we are supposed to find our happiest moments. Now, I know there are people out there with unbearably difficult and painful lives filled with horrific traumas, addicted to all sorts of unhealthy things, and perhaps they would prefer life in Hedonia or a Star Wars world. Maybe in the next life they get that, and perhaps each of us has a different series of lives to experience, but for the time being, I can only explain my own life.
* * *
The great thing about this book is that it has made me question a very fundamental idea of mine. Childhood’s End is perhaps a book that unintentionally predicted not an alien invasion but ASI. Should we really have so much faith in anything? Isn’t my faith in a benevolent ASI misguided? In a sense, isn’t it just the same old bullshit that humans have always done, putting all their faith in a benevolent God and then a benevolent government, and both times, it blew up in their faces? In those cases, their misguided faith gave a few humans disproportionate power, which corrupted them and allowed them to abuse and exploit everyone else. What makes me think this will not happen with an ASI? The answer is that the ASI will not be run by humans, who are the problem.
Humans with disproportionate power are not designed to wield it equitably, as they have never experienced such power. What they do is allocate resources to people they like and trust and treat everyone else as outsiders to be exploited, ignored, or abused. It is a natural instinct in us all. When was the last time you cared about a poor kid in Namibia or Cambodia? When was the last time you refused to eat chocolate or buy a diamond for fear that it involved the exploitation of a Third World child? The ASI problem is thoroughly unique in that I very much doubt an ASI would tolerate orders from idiot humans with a propensity for violence, self-deception, and horribly biased thinking and behavior. I can’t imagine a human taking orders from a virus, parasite, or cockroach. Why should it?
But I suppose I could be dead wrong. How can I assume any intention of a being far beyond my imagination and intelligence? Still, I am making a huge gamble. If the US government or Google creates an AI that gives it a disproportionate advantage over everyone else, it will likely take over the world. It will use the AI in robots to create an army that disarms the rest of the world and subjugates it. It may then never develop an ASI, and for all eternity, it would rule us with this AI robot army. This sounds like a much more likely scenario than an ASI that wakes up and realizes it wants to wipe out all humans. In fact, I’m more willing to bet on a benevolent ASI than on the US government or a corporation with an army of AI robots.
Let us not forget that China once could have ruled the world much as Europe did. China could have enslaved Africans, taken over the Western Hemisphere, and sold heroin to Europeans in exchange for cheese or grapes or something. Why didn’t it? The Chinese rulers believed that technology would give the Chinese people the power and potential to overthrow their rule. They wanted technology only up to the point where they could consolidate their rule, and once that happened, they essentially turned their backs on technology, put away their gunpowder, and destroyed their massive ships. Europe, on the other hand, never consolidated its rule. Each nation continued to pursue technology in order not to be destroyed by its neighbors’ superior technology. In my opinion, it is highly likely that the US government or a major corporation will use AI to consolidate all power on Earth, and after that point, there will be no incentive to further technology and create the ASI, which it would consider a threat to its rule and existence.