
Stanford Expert Says AI Probably Won't Kill Us All


Artificial intelligence isn’t just about technology, but also about social impact.  It’s about what the technology can do for us—or to us, in some dystopian visions.  That’s why Stanford University’s Jerry Kaplan—a philosopher turned computer science PhD—is in the rare position of being actually qualified to talk on this subject.

He teaches a popular course on AI and ethics in Stanford’s computer science department.  But it’s not all just academic theory.  Kaplan is a Silicon Valley icon who started up pioneering companies that gave us the smartphone, the tablet PC, online auctions (before eBay launched), and other technologies—things we still use today and that help drive AI’s evolution.

Kaplan’s first book was an insider’s tell-all titled Startup: A Silicon Valley Adventure.  That was in 1996—early modern history of the Digital Age, but one that still holds timeless lessons for start-ups today.  Looking now to the future, his second book, Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence, was released today by Yale University Press.

I had the chance to interview the author, to get a sneak peek at some of the issues he’s tackling in his new book:

Q:  Looking at breathless headlines, there’s growing hysteria about the threat of runaway AI to humanity.  How much of this is smoke, and how much fire?

A:  The short answer is 95% smoke, in my opinion. And reasonable fire-safety precautions should adequately address the remaining 5%.

This concern, or at least the public fascination with it, is driven more by Hollywood blockbusters than fact.  While the field of AI is making tremendous progress, the reality is that it’s nowhere near as immediate a threat as, say, the potential damage from genetically modified organisms (GMOs) getting loose.  With AI, we are likely to have adequate time to see problems coming, and to address them.

This is not to say that we can safely ignore the issue.  And a lot of smart people are studying the subject, some for decades.  Organizations like the Future of Life Institute and its predecessors are systematically tackling the most troubling questions.  Nick Bostrom recently published Superintelligence, an excellent and detailed scholarly analysis of many of the key issues.

But there are two ways to think about the potential risks.  The first, which I might caricature as the “robots run amok” school of thought—and I put Bostrom in this camp—believes that intelligent machines may abruptly cross some sort of threshold where they become so smart that they start to improve themselves, and quickly self-evolve into something we can’t understand or control.  This is sometimes referred to as the technological “singularity.”

The second way to think about this is as an engineering problem: we are developing advanced automation technology that may require professional standards and regulatory constraints, as occurs in many other fields, from medicine to civil engineering.

Q:  Let’s start with the “robots run amok” idea: is that a real possibility?

A:  Yes, but probably not in the way that you think.  I see no evidence that AI systems are going to rise up one day in some sort of artificial cultural awakening, only to break free from the cruel and immoral constraints of human bondage by attacking or destroying humanity.

To put this in perspective, imagine that you develop a robot to do laundry in people’s homes.  It may be very intelligent indeed—learning not to disturb you while you’re sleeping when it puts your socks away, or observing how your teenage daughter likes her towels folded and arranged.  It may share best practices with other robots in other homes, and adjust its use of water based on how it affects your utility bills.  It may be able to respond to all sorts of spoken or visual cues, such as you pointing to items that you want washed.

But it’s not going to activate one morning and say, “What a fool I’ve been! I really want to be a violinist, and play the great concert halls of Europe!”

My point is that machines are not people.  They don’t have independent aspirations and desires, only those that derive from the goals we set for them.

Q:  So, we don’t need to worry about this?

A:  Well, not exactly.  The real danger is that the goals we set for them, in combination with increasing capabilities, can lead to unanticipated consequences.  To be specific, such systems may identify and pursue sub-goals that are unexpected and undesirable.

For example, you don’t want your fancy new laundry robot to run over your grandmother in its rush to transfer a load of clothes from your washer to your dryer.

To repeat an often-used extreme example, a sufficiently capable AI system instructed to become the world’s best chess player might logically pursue this goal by killing all the better players.  Less obvious, but much nearer term, perhaps your autonomous car should defer to a human driver when they both come upon a scarce open spot in a parking lot at the same time.

The point is, we take certain social, ethical, and cultural constraints for granted, but our electronic creations won’t without careful design.
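
To make that design gap concrete, here is a minimal sketch—every class, field, and action name below is invented for illustration and is not drawn from Kaplan’s book or any real robot software—of the difference between a planner that optimizes only its stated goal and one that first filters candidate actions through explicit safety and etiquette constraints:

```python
# Hypothetical illustration only: all names here are invented.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_progress: float      # how much closer this gets the robot to "laundry done"
    violates_safety: bool     # e.g. its path would cross a person
    violates_etiquette: bool  # e.g. it grabs a parking spot a human was waiting for

def naive_choice(actions):
    # Optimizes the stated goal only; constraints we take for granted are invisible to it.
    return max(actions, key=lambda a: a.task_progress)

def constrained_choice(actions):
    # Hard constraints filter first; only then is the stated goal optimized.
    allowed = [a for a in actions if not (a.violates_safety or a.violates_etiquette)]
    return max(allowed, key=lambda a: a.task_progress) if allowed else None

candidates = [
    Action("rush the hallway with the laundry basket", 0.9,
           violates_safety=True, violates_etiquette=False),
    Action("wait for grandmother to pass", 0.6,
           violates_safety=False, violates_etiquette=False),
]

print(naive_choice(candidates).name)        # "rush the hallway with the laundry basket"
print(constrained_choice(candidates).name)  # "wait for grandmother to pass"
```

The sketch is not a real planning algorithm; its only point is that the “don’t run over grandmother” rule never enters the system unless someone writes it down, and that a naive optimizer is not malicious—it simply maximizes exactly what it was told to maximize.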

Q:  Ok, then how do we design in the necessary constraints?

A:  Unfortunately, the answer isn’t obvious, and a lot of work needs to be done on this.  Social conventions are often unstated, arise spontaneously—for example in crowd behavior—or are subject to considerable variation and judgment.  And as you well know, there’s no generally accepted view of ethical principles; debate on this has gone on for literally thousands of years.

I will brashly predict that within a decade or so, a “Moral Programming” course sequence will be required to get a degree in computer science, and various companies will be touting their etiquette modules for inclusion in your robotic products.
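
Purely as a thought experiment on what such an “etiquette module” might look like as software—this interface is invented, not a real product or anything proposed in the interview—one could imagine vendors shipping rule sets that a robot consults before acting:

```python
# Hypothetical "etiquette module" interface; all names are illustrative only.
from typing import Callable, Dict

class EtiquetteModule:
    def __init__(self):
        self.rules: Dict[str, Callable[[dict], bool]] = {}

    def add_rule(self, name: str, permitted: Callable[[dict], bool]) -> None:
        # Each rule inspects the proposed action's context and votes yes/no.
        self.rules[name] = permitted

    def permits(self, context: dict) -> bool:
        # An action is allowed only if every installed rule permits it.
        return all(rule(context) for rule in self.rules.values())

etiquette = EtiquetteModule()
etiquette.add_rule("yield_parking_spot",
                   lambda ctx: not (ctx.get("contested_spot") and ctx.get("human_waiting")))
etiquette.add_rule("quiet_hours",
                   lambda ctx: not ctx.get("household_asleep") or ctx.get("task_silent"))

print(etiquette.permits({"contested_spot": True, "human_waiting": True}))  # False: defer to the human
print(etiquette.permits({"household_asleep": True, "task_silent": True}))  # True: quiet tasks are fine
```

The only design point illustrated is that the rules are explicit and inspectable rather than assumed.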

Q:  It sounds like you see this as an engineering problem?

A:  Largely, I do.  As you know, there has been a lot of discussion recently about what it means for autonomous military weapon systems to be subject to meaningful human control.

Personally, I’m not convinced that keeping robots on a leash, so to speak, is the best way to address this.  Landmines are autonomous and deadly, in that they kill people on their own without human oversight.  Yet I believe the accepted international standard is to require that they can be located and disabled once hostilities subside, not for them to ask permission before exploding.

All sorts of fields that offer both benefits and risks are subject to oversight mainly through the development of professional standards, licensing, and product testing.

Q:  Do you think the hype and public anxiety over this subject are overblown?

A:  It’s hard to say for sure, but sometimes a look back in history can help illuminate the current situation.  Imagine that we assembled a panel of futurists shortly after the Wright brothers first flew at Kitty Hawk, to discuss the potential and risks of this obviously important technological achievement.

The first prognosticator says, “I predict that within fifty years’ time, we will have aircraft with a wingspan the length of a football field that will carry hundreds of people in comfort from New York to Los Angeles in just two hours.”

The second says, “This is a terrifying technology, and we must put a stop to it right now while we can.  Do you realize that just about anyone could load up one of these so-called airplanes with explosives and destroy an entire city in the course of a single day!”

Of course, both of these pundits would have been more or less right, and yet here we are 100 years later living with these consequences without giving it much thought.

I’m sure we’ll have an AI “Chernobyl”, but I expect that the benefits of this new technology will so outstrip the risks and costs that we’ll see such a tragedy as a learning experience, not a reason to go back to the primitive way that we’re living today.

Q:  So, on balance, you think we’ll muddle through.

A:  Yes, but to paraphrase Kurt Vonnegut, we need to be careful what we wish for, because we just might get it.