100 Year Study on Artificial Intelligence: Why It Matters
by Shelly Kramer | November 29, 2016

If you asked the average person what they know about artificial intelligence (AI), they would probably launch into stories about intelligent computers taking over the world and rebellious robots running amok. While those movie-made misconceptions may be wildly wide of the mark, AI is an area of technological development that will have a massive impact on all corners of our lives for generations to come.

That’s why Stanford University has launched a long-term project to study the impact of AI on society. The study won’t necessarily offer solutions, but it will promote a dialogue about AI to guide us through the ethical, legal, and technological challenges machine intelligence might bring. I think that’s a pretty cool undertaking.

A Century of Study 

The One Hundred Year Study (AI100) was the brainchild of Stanford alumnus, and now managing director of the Microsoft Research lab in Redmond, Eric Horvitz. Drawing on the expertise of a committee of interdisciplinary research academics, the aim of this ambitious study is to identify the most compelling aspects of AI at any given point in time and put together a panel of experts to study and report on those issues.

In an interview with Science magazine, Horvitz envisioned that the initial committee would evolve into a succession of standing committees and study panels over the next hundred years. Committee members are intended to serve on a rotating basis and publish a major report every five years. This, of course, could change. A hundred years is a long time. Horvitz’s vision for the study is simple: “I’m very optimistic about the future and see great value ahead for humanity with advances in systems that can perceive, learn, and reason. However, it is difficult to anticipate all of the opportunities and issues, so we need to create an enduring process.”

Fellow committee member Tom Mitchell, University Professor and chair of the machine-learning department at Carnegie Mellon University, summed up the need for the study perfectly, saying, “AI technology is progressing along so many directions and progress is being driven by so many different organizations that it is bound to continue. AI100 is an innovative and far-sighted response to this trend, an opportunity for us as a society to determine the path of our future and not to simply let it unfold unawares.”

The First Report

The initial committee was charged with considering “the likely influences of AI in a typical North American city by the year 2030.” The first report, published in September 2016, focused on eight domains considered the most relevant—namely transportation, service robots, healthcare, education, low-resource communities, public safety and security, employment and workplace, and entertainment. The report reflects the progress made in the last fifteen years and considers the potential developments in the next fifteen. I’ll dive deeper into some of the domains in future articles, but first I want to look at the wider picture and reflect on the issues reinforcing why this study matters so much.

Legality, liability, and accountability. Who will be liable when a driverless car crashes and causes injury? Who will be held responsible and liable to pay compensation if an intelligent medical device fails? If an AI application engages in what would be considered criminal activity, who will be accountable to the courts and which case law will apply? Will an individual whose job has been replaced by an intelligent machine be able to seek recourse in the courts?

Ideally, the study will promote a conversation that sorts these issues out sooner rather than later. It’s likely that many will only be addressed as disputes arise; either way, the lawyers are likely to be kept busy.

Privacy and the use of personal information. The privacy concerns raised by our everyday computer and internet use apply equally to AI. By its very nature, though, and by the pervasiveness of its presence in our communities, AI adds more complexity to questions of privacy and security than ever before.

Algorithms can be applied to predict future behavior (for credit risk or parole decisions, for example), and they pose technical challenges in avoiding built-in biases. Designed, tested, and deployed effectively, AI has the potential to remove human bias from those decisions.

The study also raises an interesting question regarding our relationship with anthropomorphic interfaces. Apparently, we are hardwired to respond to anthropomorphic technology as if it were human. How will we react to being surrounded by “intelligent” machines? Interesting questions, aren’t they?

Equality. The study touches repeatedly on the issue of equality. Access to technology is already unfairly distributed across society. AI technologies have the capacity to improve life for those who can access them, but how can we facilitate access across all levels of society?

Trust. Cars crash daily due to human error without causing much disruption beyond those involved in or witness to the crash. When an AI-enabled, self-driving car crashes, as we have recently seen, it attracts a great deal of attention.

AI applications will be susceptible to failures and errors like any other IT system. With realistic expectations and understanding, design strategies can build trust. On that score, the study highlights how children’s early exposure to AI will likely make it a natural part of their lives. My children, for example, have embraced Amazon’s Alexa assistant as part of their everyday lives. According to the study, this early exposure may result in a generation gap in how AI is perceived, adopted, and used. But that’s nothing new when it comes to technology.

Moving Forward with AI

So how should government, industry, and society approach answering these concerns? The study panel offers three general policy recommendations.

  • Increase technical expertise in AI across all levels of government.
  • Remove perceived and actual impediments to AI research.
  • Increase public and private funding of interdisciplinary studies into the effect of AI on society.

The genie is out of the AI lamp, and it’s an ever-increasing part of life as we know it today. Artificial intelligence will increasingly play a part in our lives in ways we probably can’t even begin to imagine. It provides opportunities, challenges, and potential threats. I think the long-term, flexible approach the Stanford study takes is invaluable. Reviewing and monitoring developments with discipline will maximize the benefits and minimize the threats.

The report authors do a great job of summarizing why this study matters so much:

“In the coming years, as the public encounters new AI applications in domains such as transportation and healthcare, they must be introduced in ways that build trust and understanding, and respect human and civil rights. While encouraging innovation, policies and processes should address ethical, privacy, and security implications, and should work to ensure that the benefits of AI technologies will be spread broadly and fairly. Doing so will be critical if Artificial Intelligence research and its applications are to exert a positive influence on North American urban life in 2030 and beyond.” 

There’s more detailed information to be found in the full report, available to download at Stanford University. Artificial Intelligence is going to touch most aspects of our lives and have a major influence on the way society develops over the next hundred years. We must consider the implications—both positive and negative—on a continuous basis, and this study plays a vital role in driving the conversation now and in the future.

Photo Credit: Mobiloitte Flickr via Compfight cc

About the Author

Shelly Kramer is a Principal Analyst and Founding Partner at Futurum Research. A serial entrepreneur with a technology centric focus, she has worked alongside some of the world’s largest brands to embrace disruption and spur innovation, understand and address the realities of the connected customer, and help navigate the process of digital transformation. She brings 20 years' experience as a brand strategist to her work at Futurum, and has deep experience helping global companies with marketing challenges, GTM strategies, messaging development, and driving strategy and digital transformation for B2B brands across multiple verticals. Shelly's coverage areas include Collaboration/CX/SaaS, platforms, ESG, and Cybersecurity, as well as topics and trends related to the Future of Work, the transformation of the workplace and how people and technology are driving that transformation. A transplanted New Yorker, she has learned to love life in the Midwest, and has firsthand experience that some of the most innovative minds and most successful companies in the world also happen to live in “flyover country.”