Martin Ford, bestselling author of Architects of Intelligence, conducted wide-ranging conversations with 23 of the world’s foremost researchers and entrepreneurs working in AI and robotics, including Demis Hassabis (DeepMind), Ray Kurzweil (Google), Rodney Brooks (Rethink Robotics), Yann LeCun (Facebook), Fei-Fei Li (Stanford and Google), Daniela Rus (MIT), Jeff Dean (Google), Cynthia Breazeal (MIT), Oren Etzioni (Allen Institute for AI), and Bryan Johnson (Kernel). Ford is also the author of Rise of the Robots, winner of the Financial Times Business Book of the Year Award.

The promise of a true thinking machine—a computer that would exhibit human-level intelligence—has been the holy grail of artificial intelligence since the field’s inception in 1950, when Alan Turing published his paper, Computing Machinery and Intelligence. While the phrase “artificial intelligence” wasn’t coined until six years later, at the Dartmouth conference organized by John McCarthy, Turing was the first to pose the question, “Can a machine think?” In his paper, Turing described his eponymous test, which would deem a machine intelligent if it could demonstrate the ability to carry on a conversation so as to be indistinguishable from a human. Although critics have since pointed out the limitations of the Turing Test, it remains the most popular benchmark for human-level AI, or artificial general intelligence (AGI).

Familiar examples of AGI (HAL from 2001: A Space Odyssey, Commander Data from Star Trek, Agent Smith from The Matrix) exist only in the realm of science fiction. The remarkable advances in artificial intelligence during the last few years are real, but highly specialized. The revolution in deep-learning neural networks has produced systems that can understand speech, translate languages and, in some cases, perform visual object recognition at a superhuman level. However, nothing close to general, human-like intelligence has been achieved. Today’s AI is extraordinarily proficient at deciding which advertisements to display or which films an audience will enjoy, but it falls far short of what Turing imagined in 1950.

Nonetheless, the demonstrated progress in AI over the past few years—the advent of true self-driving cars, the rise of systems like Alexa and Siri that exhibit a genuine, if rudimentary, ability to engage in two-way conversation, and the other advances all around us—has led to a sense that human-level AI may finally be on the horizon. That, in turn, has led to hype, wild speculation and, in some cases, what might be called outright fear-mongering. The advent of AGI, after all, would almost certainly soon lead to superintelligence, or the rise of machines with intellectual capability far beyond that of any human. Such a development would bring unprecedented economic and social disruption, and perhaps even lead to an existential threat if humans were to lose control of a truly superintelligent system. This last concern, especially, has gained traction in the past few years. Elon Musk has declared that AI is “more dangerous than nuclear weapons,” and the Oxford academic Nick Bostrom has argued that, if humanity isn’t careful, advanced artificial intelligence, unlike more mundane threats such as climate change, could lead to the extinction of the human race.

Futurist and Google director Ray Kurzweil says human-level artificial general intelligence will become a reality around 2029

In wide-ranging conversations at the beginning of this year, the foremost minds shaping the field of artificial intelligence expressed opinions on some of these issues. These individuals are actively building the technology that will soon transform the world. Anyone acquainted with the field of AI will recognize them: the three pioneers of deep learning, Geoff Hinton, Yoshua Bengio and Yann LeCun; DeepMind CEO Demis Hassabis; futurist and Google (GOOGL) director Ray Kurzweil; Stanford’s Fei-Fei Li; former IBM (IBM) Watson team leader David Ferrucci; iRobot (IRBT) co-founder Rodney Brooks; and many others.

The conversations focused on the future of artificial intelligence, including the innovations likely in the relatively near term, as well as the path to AGI. They also discussed the possibility of superintelligence, and the risks that should genuinely concern society as the technology advances. The discussions delved into the likelihood of achieving human-level AI, the timeframe when this might be accomplished, and the breakthroughs and research strategies required to get there.

All 23 AI experts shared important ideas about progress toward AGI. A few insights from three especially interesting conversations illustrate the range of approaches that researchers are pursuing.

Demis Hassabis discussed efforts underway at Alphabet (GOOGL) subsidiary DeepMind—the largest, best-funded initiative geared specifically toward AGI. Hassabis, trained in both AI and neuroscience, says the best strategy is to build a system that is inspired by the human brain but does not attempt to reverse engineer it. DeepMind relies heavily on neural networks and reinforcement learning—learning through trial and error, with a virtual “reward” driving the system toward success. Hassabis believes far more strongly in reinforcement learning than many AI researchers, and argues it may be a primary learning mechanism used by the biological brain, with the dopamine system rewarding success as the brain continuously seeks to find structure in the data it processes.
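That trial-and-error recipe can be made concrete in a few lines of code. What follows is a minimal, illustrative sketch of tabular Q-learning on a toy five-state corridor; the environment, states and reward values are invented for illustration and are not drawn from DeepMind’s systems, which pair the same idea with deep neural networks at enormous scale.

```python
import random

# Toy corridor: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 earns a reward of 1.0; every other step earns 0.
# (The environment and constants here are illustrative, not from DeepMind.)
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-table: the agent's running estimate of future reward per (state, action).
q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action and return (next_state, reward)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for episode in range(500):
    state = 0
    while state != GOAL:
        # Trial and error: usually exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice([0, 1])
        else:
            action = 1 if q[state][1] >= q[state][0] else 0
        nxt, reward = step(state, action)
        # The "virtual reward" drives learning: nudge the estimate toward the
        # observed reward plus the discounted value of the best next action.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
        state = nxt

# After training, the learned policy is "move right" in every corridor state.
print(["right" if q[s][1] >= q[s][0] else "left" for s in range(GOAL)])
```

DeepMind’s key move, in systems like its Atari-playing DQN, was to replace the lookup table with a deep neural network that estimates these values, which is what lets the same reward-driven loop scale far beyond toy problems.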

David Ferrucci, who led the team that created IBM Watson, is now the CEO of Elemental Cognition, a startup that hopes to achieve more general intelligence by leveraging an understanding of language. Ferrucci is far less concerned with direct inspiration from the structure of the brain. He says that we already have the practical tools we need to build a system with the ability to learn and explain itself at a human level. His approach relies on combining deep neural networks with techniques from other areas of AI.

34% of the industrial robots sold by 2025 will be collaborative — designed to work safely alongside humans in factories and plants

Source: Loup Ventures

Ray Kurzweil, who now directs a natural language-oriented project at Google, is best known for his 2005 book, The Singularity Is Near. In 2012, he published How to Create a Mind, a book on machine intelligence that caught the attention of Larry Page and led to his employment at Google. Kurzweil is working on a hierarchical, brain-inspired approach that combines the ideas laid out in that book with the latest advances in deep learning.

As part of these discussions, members of this group of extraordinarily accomplished AI researchers were encouraged to guess just when AGI might be realized. Most preferred to provide their guesses anonymously, but two were willing to go on the record: Ray Kurzweil believes, as he has stated many times previously, that human-level AI will be achieved around 2029, or just 10 years from now. Rodney Brooks, on the other hand, guessed the year 2200, more than 180 years in the future. The average guess for AGI arrival was the year 2099, or 80 years from now, but as the predictions from Kurzweil and Brooks demonstrate, the range was wide, and a number of participants believed AGI might be achieved within the next 20 years.

Ray Kurzweil, From Martin Ford’s Architects of Intelligence

Just as your phone makes itself a million times smarter by accessing the cloud, we will do that directly from our brain. It’s something that we already do through our smartphones, even though they’re not inside our bodies and brains, which I think is an arbitrary distinction. We use our fingers and our eyes and ears, but they are nonetheless brain extenders. In the future, we’ll be able to do that directly from our brains, but not just to perform tasks like search and language translation directly from our brains, but to actually connect the top layers of our neocortex to synthetic neocortex in the cloud.

Two million years ago, we didn’t have these large foreheads, but as we evolved we got a bigger enclosure to accommodate more neocortex. What did we do with that?

This new extension in the 2030s to our neocortex will not be a one-shot deal. Even as we speak, the cloud is doubling in power every year. It’s not limited by a fixed enclosure, so the non-biological portion of our thinking will continue to grow. If we do the math, we will multiply our intelligence a billion-fold by 2045, and that’s such a profound transformation that it’s hard to see beyond that event horizon.
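A quick back-of-the-envelope check shows where a figure like “a billion-fold” comes from (the 30-year horizon below is an inference from the arithmetic, not a number given in the interview). A quantity that doubles every year grows by a factor of

\[
2^{N} \text{ after } N \text{ years}, \qquad 2^{30} = 1{,}073{,}741{,}824 \approx 10^{9},
\]

so a billion-fold multiplication corresponds to roughly 30 annual doublings, or about three decades of the yearly growth Kurzweil describes, consistent with a mid-2010s baseline and a 2045 endpoint.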

On the topic of existential risk from superintelligence, there was again a wide range of views, with most researchers either dismissing the danger outright or suggesting that AGI was too far in the future for the problem to be tractable. Others, most notably Nick Bostrom and UC Berkeley’s Stuart Russell, say it’s vital to begin investing immediately in research focused on engineering a benign, controllable superintelligence, even if its application lies far in the future. Nearly everyone emphasized the importance of dangers that will become real long before the advent of AGI. Among these are the specter of fully autonomous weapons, the susceptibility of critical AI-controlled systems to cyberattack, the bias, sometimes on the basis of race or gender, that has already been detected in some machine-learning systems, and the threats AI might pose to privacy and democracy. Many of the researchers called for government regulation or are themselves overseeing efforts to address these issues.

Dismantling HAL, from 2001: A Space Odyssey

The most important takeaway from the 23 interviews is that the only real area of consensus is that AI will continue to progress rapidly and will, in all likelihood, be highly disruptive to the job market, the economy and society as a whole. Beyond that, the conversations were full of varied, and often sharply conflicting, insights, opinions and predictions. Artificial intelligence remains a wide-open field. The nature of the innovations that lie ahead, the rate at which they will occur and the specific applications society will find for them are all shrouded in deep uncertainty. This combination of potentially massive disruption and fundamental uncertainty makes it imperative to engage in a meaningful and inclusive conversation about the future of AI and what it may mean for humanity.

