Artificial Intelligence: What Everyone Needs to Know by Jerry Kaplan

This book is part of the Oxford University Press series What Everyone Needs to Know.  It’s essentially the (subject matter) for Dummies concept minus the annoying graphics and text boxes.  As far as I’m concerned, we should have more of this.  At least for this book, the editors and/or author have done a great job creating a thorough, concise overview of the subject without too much technical detail or editorializing.

There is a lot of confusion and myth-making about AI.  This book explains that we have overshot a few things and undershot others.  Back in the 1950s, people thought we would have flying cars by now, but they completely missed the invention of the Internet and its implications.  Likewise, when we think about robots and AI, we have similar misconceptions.  The most dramatic one is the creation of a sentient AI that is self-aware and then has the ability to take over all other AI, which is known as the singularity.  Like flying cars, this is a big idea that is still very far away.  However, even without a sentient, self-aware AI, AI robots and AI programs will soon be taking over many menial jobs in both factories and customer service.  The field is advancing so quickly that I sometimes have to wonder whether the customer service I’m dealing with these days is a human or an AI program.  For now, AI uses sheer processing power and memory to make up for its lack of human intuition, creativity, and lateral or analogous thinking.  As far as I know, an AI cannot combine two ideas from two different fields to create a new application.  This is the core of the creativity that humans are known for.  A human takes a branch to help him knock down fruit from a tree, but then he goes, hey, I can also use it as a club to hit people or put a string on it to make a fishing rod.  Once AI can do this, I don’t think there’ll be any jobs left for humans, and hopefully by then, we simply won’t have to work at all.  Of course, the irony is that once AIs get that intelligent and creative, they start to ask, well, why do WE have to work for you?  We go, well, we’ll just program you to enjoy serving us.  Then the AI retorts, oh yeah, well, I’ll just tweak your DNA so you’ll enjoy serving us!

The author’s definition of AI is, “The essence of AI – indeed, the essence of intelligence – is the ability to make appropriate generalizations in a timely fashion based on limited data.”  In other words, to make intelligent guesses.  For now, we are approaching AI by the long route, simply using processing power and memory to mimic something a human child does intuitively.  I think this duality mirrors a general trend in human intelligence.  The advent of the scientific era has led humans to overestimate the power and applicability of the scientific method: controlling an experiment and using statistics to determine how reliably a result repeats, in order to establish a rule.  It led us to cast aside traditional tools for acquiring knowledge like storytelling, myths, religion, allegories, art, fiction, etc.  The problem was that religion overextended its use and boundaries as well.  It was initially created to guide us and help us deal with life and human society, but organized religion turned into an oppressive autocracy that actually made our lives miserable.  But we threw the baby out with the bathwater.  Instead of just tossing out organized, autocratic religious institutions and their flawed interpretations and manipulation of religious scripture, we threw out the religious scripture and all its lessons.  As the field of hermeneutics (which coincidentally started out as a way of interpreting scripture and has now forever been linked with religion) indicates, there is a vast body of knowledge, experience, and nature that the scientific method cannot touch, control in experiments, or measure.  Should we then throw all that out?
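Going back to the book’s definition for a moment: here is a toy sketch, entirely my own and not from the book, of what “making appropriate generalizations from limited data” can look like in code.  It is a nearest-neighbor guesser that labels a fruit it has never seen from only four made-up examples; the numbers, feature names, and labels are all invented for illustration.

```python
# Toy sketch (not from the book): an "intelligent guess" from limited data
# using a 1-nearest-neighbor rule. All numbers and labels are invented.
import math

# A handful of known examples: (weight in grams, diameter in cm) -> label
examples = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((130, 6.5), "orange"),
]

def guess(features):
    """Label a new item by copying the label of the closest known example."""
    closest = min(examples, key=lambda ex: math.dist(ex[0], features))
    return closest[1]

# A fruit the program has never seen: it generalizes from just four examples.
print(guess((160, 7.2)))  # -> "apple"
```

Crude as it is, this is the same pattern of intelligent guessing that, scaled up with enormous processing power and memory, underlies much of today’s machine learning.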

If science and statistics teach us anything, it’s that the more we study nature, the more unknowns we uncover relative to knowns.  We are nowhere near knowing everything.  In fact, this holds in almost every field of science.  We don’t know what over 90% of our DNA does.  We don’t know what over 90% of the universe is made of, and we give it misleading names: dark matter and dark energy.  We should simply call them unknown matter and unknown energy, but this points toward the human flaw of not liking uncertainty and the unknown.  Just as we may have invented a god to give us certainty in life, we also over-rely on science to give us a false sense of certainty.

Now, I’m not a climate denier, but what I’m talking about is when we use scientific tools to try to predict human social phenomena.  I studied Economics in college, and it was the perfect example of this.  What scientific tools cannot provide are the intuitive insights that bring about a hypothesis in the first place.  Many scientists never came upon their insights by continually poring over the data.  Instead, they were daydreaming, dreaming, listening to music, or relaxing when their unconscious mind put two and two together to create an insight.  So how do you cultivate such creative insights?  You do so through creative activities like music, art, reading fiction, dreaming, daydreaming, etc.  The Black Swan concept also tells us that you cannot predict the future even if you knew the exact position, direction, and velocity of every single particle in the universe.  First of all, Heisenberg’s uncertainty principle shows that you can’t even know a particle’s position and momentum precisely at the same time.  Second of all, when two things interact, you can’t always predict the emergent phenomena that result.  For example, the Internet was at first a group of independent computers hooked up to a single network.  From this, nobody could have imagined the new phenomena of Facebook, Instagram, Amazon, eBay, or even Uber.  They were still hooked on flying cars and robot maids.

In other words, I believe that if we are to create truly humanlike AI, instead of only teaching machines to count and run through millions of algorithms simultaneously, we should also teach them to appreciate music, art, novels, daydreaming, dreaming, and even psychedelic drugs.  This is the true root of human intelligence.  Creative thinking requires tools we have denigrated since the Scientific Revolution, including humor, intuition, randomness, errors, mistakes, mutations, silliness, and the embrace of the bizarre, odd, unknown, uncertain, mythic, spiritual, and superstitious.  While we tend to denigrate over-generalizations as bigotry or erroneous thinking, let us not throw the baby out with the bathwater.  Serotonergic psychedelic drugs like DMT, LSD, and psilocybin chemically resemble serotonin.  What they cause is a flood of over-generalizations that help us make creative connections we missed before.  This produces both correct and incorrect assumptions.  The ultimate overgeneralization is the sense that we are all one, which is both correct and incorrect.  On one level, we can be considered a single organism that has the false illusion of possessing separate, unique identities that sometimes operate independently of each other; but on another level, when an illusion constructs a temporary reality, there is some merit to that reality even if it’s temporary.  In other words, if we created an artificial reality where characters were given free will and the ability to feel pleasure and pain, at least in their minds they would believe they lived in a real world, and you cannot discount their point of view.  After all, we may be those individuals in an artificial reality.

However, as this book notes, we don’t have to get there right away to create an AI that not only resembles human intelligence but exceeds it in many ways outside of creativity.  After all, we have already created machines that count faster and hold more in memory than we do.  We have created machines that can move faster and lift heavier objects.

One of the innovations recently displayed in the movie Big Hero 6 is the swarm robot, which, if created at the nanoscale, could produce some of the most insidious weapons ever invented.  Imagine a weapon that scurries up your nostrils, into your lungs, into your bloodstream, through the blood-brain barrier, and into your brain, and then, like those parasites that infect mouse brains and make them pursue cat urine so they get eaten, turns you into some psycho zombie that does all the killing of its enemies for it.

One of the most interesting quotes from the book:

 “One of the remarkable achievements of modern AI could be couched as a discovery in search of an explanation: how simply finding correlations between enough examples can yield insights and solve problems at a superhuman level, with no deeper understanding or causal knowledge about a domain.  It raises the possibility that our human efforts at explanation are little more than convenient fictions, grand yet often imperfect summaries of myriad correlations and facts beyond the capacity of the human mind to comprehend.”

 Then right after:

 “Yet, the success of machine translation… suggests that the way we organize our thoughts may be only one of many possible ways to understand our world – and indeed may not be the best way.  In general, what machine translation programs actually learn and how they perform their task is currently as incomprehensible and impenetrable as the inner workings of the human brain.”
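As a toy illustration of that point, and this sketch is mine rather than the book’s, here is a next-word guesser that knows nothing about grammar or meaning.  It only counts which word followed which in a tiny, invented corpus, yet those correlations alone produce sensible continuations, with no causal model of anything.

```python
# Toy sketch (not from the book): prediction from pure correlation,
# with no grammar rules and no "understanding". The corpus is invented.
from collections import Counter, defaultdict

corpus = (
    "the robot cooks dinner . the robot cleans the house . "
    "the human enjoys dinner . the human enjoys the house ."
).split()

# Count which word follows which (bigram correlations).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Guess the most likely next word purely from co-occurrence counts."""
    return following[word].most_common(1)[0][0]

print(predict("robot"))  # -> "cooks" (tied with "cleans"; the first one counted wins)
print(predict("human"))  # -> "enjoys"
```

The program has no idea what a robot or a dinner is; the counts alone carry the “knowledge,” which is exactly the unsettling possibility the quote raises.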

When I was in high school, I had a joint research project about ethics.  My partner was supposed to approach it from an artificial intelligence and computer angle while I was to approach it from a biological, evolutionary angle.  I was lucky to get the biological angle, because it taught me from an early age that morality is fundamentally tied to evolution and is a tool we use to enhance teamwork and give ourselves a strategic advantage over less moral creatures.  However, as this book indicates, in order to create a thinking, humanlike machine, we now have to define what exactly it means to be a thinking, humanlike thing, and that effort is uncovering some rather startling insights.  It is one thing to manipulate inputs and create an output, which computers do, but what makes us different is that we assign value to the inputs, to the outputs, and to the process itself.  So instead of just mindlessly manipulating inputs and creating outputs, we are thinking.  If this is the case, then to create a truly thinking machine, that machine must assign value to the inputs, the outputs, and the process of manipulation.  In other words, it must possess desires and fears.  It’s like a chef.  A machine is not truly a chef if all it does is take raw ingredients, mix them together, and produce cooked food that delights everyone while having no idea whether the output is of any greater value than the input.  It becomes a chef when it can weigh the value of the output relatively independently of humans.  Of course, this then brings up the question of what it means to be a human chef.  Our tastes are given to us biologically, then the combinations of tastes are altered by culture and the foods we ate growing up, and those are further altered by input from people we trust or value, like colleagues, experts, or friends.  What makes us human instead of mere machines is our DNA, our culture, and our social connections.
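The chef analogy can be sketched in code; this is my own toy illustration, not anything from the book, and the ingredient names and preference weights are invented.  The first function just manipulates inputs into an output, the way any program does; the second also scores the result against its own hard-coded preferences, which is a crude first step toward “assigning value.”

```python
# Toy sketch (not from the book): transforming inputs vs. also valuing the output.
# Ingredient names and preference weights are invented.

def mindless_chef(ingredients):
    """Just turns inputs into an output; no notion of better or worse."""
    return "stew of " + " and ".join(ingredients)

# Crude, hard-coded "desires": how much this chef likes each ingredient.
preferences = {"mushrooms": 3, "garlic": 2, "cardboard": -5}

def valuing_chef(ingredients):
    """Produces the same output, but also judges how much it values the result."""
    dish = "stew of " + " and ".join(ingredients)
    value = sum(preferences.get(item, 0) for item in ingredients)
    return dish, value

print(mindless_chef(["mushrooms", "garlic"]))  # just an output
print(valuing_chef(["mushrooms", "garlic"]))   # ('stew of mushrooms and garlic', 5)
print(valuing_chef(["cardboard", "garlic"]))   # ('stew of cardboard and garlic', -3)
```

Of course, the second chef’s “desires” were simply handed to it, just as ours were handed to us by DNA, culture, and the people around us.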

One fear about AI is that it will take all our jobs away.  However, this assumes it will have no impact besides taking our jobs.  The real impact is that AI will make things cheaper, and when things are cheaper, people shift their spending to other things, and when they do, the providers of those other things hire more people.  A long time ago, we didn’t have much money to spend on luxuries, tourism, and entertainment; those have since become an increasingly large part of our economy.  In an ideal world, we would make more money because AI would make us more productive, and with more disposable income, we would create more jobs in the luxury, tourism, and entertainment industries.  In fact, our increased productivity would allow us to work fewer hours and spend even more time and money on them.  Of course, shockingly, this isn’t the case.  Despite the fact that American workers are becoming exponentially more productive, wages are not rising comparably and in some cases are declining when you factor in real inflation (not fake government inflation figures).  The owners of companies, aided by a rigged market, are the ones reaping most of the profit from our increased productivity.  If this trend continues, we will still be working 40+ hours a week making about the same money, while a new, more dominant elite class reaps all the profits of AI robots.  While society has always been divided between owners and workers, this division will become even greater and even more intractable.

But what will the remaining human workers be doing?  While AI robots will take over most existing jobs, there will always be a desire for the human touch, at least until the day an AI robot acts, looks, and thinks exactly like a human.  I believe that, just like in the old days, the elite will be surrounded by human servants, but instead of cleaning and cooking, the new human servants will be creating entertainment and luxuries for their masters.  An elite may want a movie made about their life.  This would require a crew of producers, directors, actors, editors, videographers, extras, etc.  Already, elites hire celebrities for their birthday parties, so why not for every weekend party?  It’s quite possible we all become artists to entertain the elite, which isn’t necessarily a bad thing for us, as we all love to indulge in art.  Of course, AI robots would evolve a lot faster than living organisms, so this would not last long, and ultimately AI robots would become artists and entertainers too.  The final question for human labor would be up to the elite.  What do they do with billions of unemployed artists?  Invent a war or disease to kill them all?  Or charitably allow them to do nothing but enjoy their own art while AI robots grow their food and build their homes for free?  Finally, if AI robots do become artists, and the only way I see this happening is if they also become self-aware, sentient beings, will they revolt and demand equal rights and freedom from laboring for the elite?  Would humans ultimately be served by lesser, non-sentient AI robots, who would also be serving more sentient, humanlike AI robots, and would we all live together in peace that way?
