The long-term future of AI


In 1965, I. J. Good's article "Speculations Concerning the First Ultraintelligent Machine" included the following remark:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

For most of the history of AI, this issue has been ignored. Indeed, Good himself continues, "It is curious that this point is made so seldom outside of science fiction." As the capabilities of AI systems improve, however, and as the spread of AI into broad areas of human life leads to huge increases in research investment, it is inevitable that the field will have to take this issue seriously. The field has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:
  1. AI is likely to succeed.
  2. Unconstrained success brings huge risks and huge benefits.
  3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?
At Berkeley, the Center for Human-Compatible AI (CHAI, pronounced "chai") studies this topic. Several other organizations are already considering these questions, including the Future of Humanity Institute (FHI) at Oxford, the Centre for the Study of Existential Risk (CSER) at Cambridge, the Machine Intelligence Research Institute (MIRI) in Berkeley, and the Future of Life Institute (FLI) at Harvard/MIT. I serve on the Advisory Boards of CSER, FLI, and MIRI.

Just as nuclear fusion researchers regard the containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures. The research questions are beginning to be formulated and range from the highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to the broadly philosophical.


Background and answers to frequently asked questions


Research papers


Other articles and position papers


Media articles, interviews, etc.