Sixty-four years after John McCarthy coined the term “artificial intelligence,” Stanford University has launched an initiative to bring a focus on humanity’s most pressing problems to the study and practice of AI.

“The creators of AI need to represent humanity,” said Fei-Fei Li, co-director of the newly formed Stanford Institute for Human-Centered Artificial Intelligence. “This requires a true diversity of thought across gender, age, and ethnicity and cultural background as well as a diverse representation from different disciplines.”

Li, a computer scientist, AI pioneer, and former Google vice president, spoke Monday to an audience of more than 900 as the university introduced HAI, as the institute will be called. HAI, she said, will tap the expertise of nearly every department in the university, including engineering, robotics, statistics, philosophy, economics, anthropology, and law, and aims to influence policymakers as it develops new technologies and applications.

HAI, said John Etchemendy, the philosopher and former Stanford provost who co-directs the institute with Li, will be guided by three core principles:

  • The future of AI should be inspired by our understanding of human intelligence.
  • The development of AI must be guided by our understanding of its impact on human society.
  • AI applications should be designed to enhance and augment what humans can do.

Stanford President Marc Tessier-Lavigne touched on the dual nature of artificial intelligence, saying: “AI has the potential to change society for the better in so many ways, from promising medical applications to vastly safer cars.

“But the advance of AI carries risk, from job insecurity to the influence of AI-generated content on social media to the potential for bias in machine learning. And now is the moment to ensure that we are embarking along a path to develop technology that will serve, augment, and complement humanity, not replace or divide it.”

The institute plans to hire at least 20 new faculty members from fields spanning the humanities, engineering, medicine, the arts, and the basic sciences, with a particular interest in those whose work contributes to HAI’s mission. Approximately 200 faculty members have already signed on to devote at least part of their time to HAI.

Microsoft co-founder Bill Gates spoke at the launch event, along with LinkedIn co-founder Reid Hoffman, DeepMind co-founder Demis Hassabis, and Eric Horvitz of Microsoft Research.

HAI’s advisory council includes Reid Hoffman, who will serve as chair; Jim Breyer of Breyer Capital and Accel Partners; former Yahoo CEO Marissa Mayer; Yahoo co-founder Jerry Yang; former IBM CEO Sam Palmisano; and Google’s Eric Schmidt.

The institute will be open to researchers from other universities, policymakers, journalists, and corporate leaders. Nonprofits can work with HAI’s faculty to develop new solutions using AI, such as helping emergency room doctors make better decisions about patient care under stressful conditions.

Battling algorithmic bias

Although HAI’s official launch was March 18, the institute has already provided support to roughly 50 interdisciplinary research teams, including a project to assist the resettlement of refugees; a system to improve healthcare delivery in hospital intensive care units; and a study of the impact of autonomous vehicles on social governance and infrastructure. 

HAI faculty members are now conducting research on the impact of AI and related technologies on society. Rob Reich, a Stanford political scientist, said “the rapid advance of AI and the quest for artificial general intelligence raise profound ethical, political, and social questions.” Reich said he is working to integrate the research and teaching efforts of engineers, social scientists, and humanists.

Susan Athey, who studies the economics of technology, said it can be difficult to foresee the risks of a particular algorithm, such as one that uses AI to score loan applications.

Indeed, bias inadvertently built into algorithms is already a serious issue. There are facial recognition programs that cannot reliably distinguish one Black man from another, voice recognition applications that understand only standard American English, and job-candidate search programs that sometimes default to recommending men.
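To make that concern concrete, here is a minimal sketch of the kind of check an auditor might run against a scoring model’s decisions. Everything in it is hypothetical and invented for illustration; a real audit would use actual model outputs, protected-attribute labels, and a formally chosen fairness metric.

    # Hypothetical audit sketch: compare approval rates of a scoring
    # model across demographic groups. All data below is invented.
    from collections import defaultdict

    # (group, approved) pairs standing in for real model decisions.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", False), ("group_b", False), ("group_b", True),
    ]

    def approval_rates(decisions):
        """Return the fraction of approved applications per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += ok  # True counts as 1
        return {g: approved[g] / totals[g] for g in totals}

    rates = approval_rates(decisions)
    print({g: round(r, 2) for g, r in rates.items()})
    # {'group_a': 0.67, 'group_b': 0.33}
    gap = max(rates.values()) - min(rates.values())
    print(f"approval-rate gap: {gap:.2f}")  # approval-rate gap: 0.33

A gap like this does not by itself prove discrimination, but it is exactly the kind of signal that should trigger a closer review of a model and its training data.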

Kate Crawford, a researcher at NYU and Microsoft who spoke at the launch event, co-authored a recent study that found the algorithms increasingly used in police work are badly flawed. “They are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices,” she wrote. The results of the study were shocking, Crawford said. “We have no review processes in AI similar to what we have in social science.”

The lack of oversight in AI research raises troubling questions, Crawford said. “Who is the ‘we’ that is responsible? Which communities will be represented?” It becomes a question of power, and it is not at all clear that the private sector can regulate itself as AI becomes more pervasive, Crawford said.

As powerful as it is, AI still lacks the learning ability of children. An AI system can analyze vast amounts of data but has difficulty generalizing from small amounts of it, something children are actually quite good at, Alison Gopnik, a UC Berkeley researcher, said during a panel discussion at the launch. Nor can AI systems go out on their own to gather the data needed to understand a problem, she said. Social learning, the ability to draw conclusions from interactions with others, is a key to human learning, but machines can’t do it.
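Gopnik’s point about small data is easy to reproduce. The following sketch, which is illustrative and not from the event, trains the same off-the-shelf classifier on a tiny sample and then on an ample one; the handwritten-digits dataset and scikit-learn are assumptions chosen only for convenience.

    # Illustrative sketch: the same model generalizes poorly from a
    # handful of examples but well from plenty of them.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    for n in (10, 500):  # tiny vs. ample training set
        clf = LogisticRegression(max_iter=2000)
        clf.fit(X_train[:n], y_train[:n])
        print(f"trained on {n:>3} examples: "
              f"test accuracy {clf.score(X_test, y_test):.2f}")

A child shown ten examples of each digit would likely do far better than the first run; closing that gap is one reason Gopnik argues AI researchers should study how children learn.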

Understanding how children learn will offer researchers a path to making artificial intelligence systems more intelligent, said Gopnik, who studies children’s cognitive behavior.

After a closing keynote by California Gov. Gavin Newsom, Professor Li ended the event, saying “I hope what we are creating here is a global hub and forum for this kind of ongoing conversation.”