MARTIN FORD: Looking back, what would you say is the highlight of your career with either robots or AI?

RODNEY BROOKS: The thing I’m proudest of was in March 2011, when the earthquake hit Japan and the tsunami knocked out the Fukushima Nuclear Power Plant. About a week after it happened, we got word that the Japanese authorities were having real problems: they couldn’t get any robots into the plant to figure out what was going on. I was still on the board of iRobot at that time, and we shipped six robots to the Fukushima site within 48 hours and trained the power company’s technical team. As a result, they acknowledged that the shutdown of the reactors relied on our robots being able to do things for them that they were unable to do on their own.

I remember that story about Japan. It was a bit surprising because Japan is generally perceived as being on the very leading edge of robotics, and yet they had to turn to you to get working robots.

I think there’s a real lesson there. The real lesson is that the press had hyped Japanese robotics as being far more advanced than it really was. Everyone thought Japan had incredible robotic capabilities, led by an automobile company or two, when really what they had was great videos and not much real capability.

Our robots had been in war zones for nine years, used in the thousands every day. They weren’t glamorous, and their AI capability would be dismissed as almost nothing, but that’s the reality of what works and what is applicable today. I spend a large part of my life telling people that they are being delusional when they see videos and think that great things are just around the corner, or that there will be mass unemployment tomorrow because robots are taking over all of our jobs.

At Rethink Robotics, I say that if there was no lab demo 30 years ago, then it’s too early to think that we can make it into a practical product now. That’s how long it takes from a lab demo to a practical product. It’s certainly true of autonomous driving; everyone’s really excited about autonomous driving now. People forget that the first automobile to drive autonomously on a freeway, at over 55 miles an hour for 10 miles, did so in 1987 near Munich. The first time a car drove across the United States, hands off the wheel, feet off the pedals, coast to coast, was No Hands Across America in 1995. Are we going to see mass-produced self-driving cars tomorrow? No. It takes a long, long, long time to develop something like this, and I think people are still overestimating how quickly this technology will be deployed.

Everyone thought Japan had incredible robotic capabilities, but what they really had was great videos and not much else

What is the reality? Thinking five to 10 years ahead, what are we going to see in the field of robotics and artificial intelligence? What kinds of breakthroughs should we realistically expect?

You can never expect breakthroughs. I expect 10 years from now the hot thing will not be deep learning; there’ll be a new hot thing driving progress. Deep learning has been a wonderful technology for us. It is what enables the speech systems for Amazon Echo and Google Home, and that’s a fantastic step forward. I know deep learning is going to enable other steps forward too, but something will come along to replace it.

What about real household consumer robots? The example people always give is the robot that would bring you a beer. It sounds like that might still be some way off.

Colin Angle, the CEO of iRobot, who co-founded it with me in 1990, has been talking about that for 28 years now. I think that I’m still going to be going to the fridge myself for a while.

Do you think that there will ever be a genuinely ubiquitous consumer robot, one that saturates the consumer market by doing something that people find absolutely indispensable?

Is Roomba indispensable? No, but it does something of value at a low enough cost that people are willing to pay for it. It’s not quite indispensable; it’s a convenience level.

When do we get there for a robot that can do more than move around and vacuum floors? A robot that has sufficient dexterity to perform some basic tasks?

I wish I knew! I think no one knows. Everyone’s saying robots are coming to take over the world, yet we can’t even answer the question of when one will bring us a beer.

I saw an article recently in which the CEO of Boeing, Dennis Muilenburg, said that they’re going to have autonomous drone taxis flying people around within the next decade. What do you think of his projection?

I will compare that to saying that we’re going to have flying cars. Flying cars that you can drive around in and then just take off have been a dream for a long time, but I don’t think it’s going to happen.

I think the former CEO of Uber, Travis Kalanick, claimed that they were going to have flying Ubers deployed autonomously in 2020. It’s not going to happen. That’s not to say that I don’t think we’ll have some form of autonomous personal transport. We already have helicopters and other machines that can reliably go from place to place without someone flying them. I think it’s more the economics of it that will determine when that happens, but I don’t have an answer as to when that will be.

What about artificial general intelligence? Do you think it is achievable? If so, in what timeframe do you think we have a 50% chance of achieving it?

Yes, I think it is achievable. My guess on that is the year 2200, but it’s just a guess.

Tell me about the path to get there. What are the hurdles we’ll face?

We already talked about the hurdle of dexterity. The ability to navigate and manipulate the world is important in understanding the world, but there’s a much wider context to the world than just the physical. For example, there isn’t a single robot or AI system out there that knows that today is a different day to yesterday, apart from a nominal digit on a calendar. There is no experiential memory, no understanding of being in the world from day to day, and no understanding of long-term goals and making incremental progress toward them. Any AI program in the world today is an idiot savant living in a sea of now. It’s given something, and it responds.

The AlphaGo program or chess-playing programs don’t know what a game is, they don’t know about playing a game, they don’t know that humans exist, they don’t know any of that.

Surely, though, if an AGI is equivalent to a human, it’s got to have that full awareness.

As far back as 50 years ago, people worked on research projects around those things. There was a whole community that I was a part of in the 1980s through the 1990s working on the simulation of adaptive behavior. We haven’t made much progress since then, and we can’t point to how it’s going to be done. No one’s really working on it currently, and the people who claim to be advancing AGI are actually re-doing the same things that John McCarthy talked about in the 1960s, and they are making about as much progress.

It’s a hard problem. That doesn’t mean you don’t make progress on a lot of technologies along the way, but some things just take hundreds of years to achieve. We think that we’re the golden people at the critical time. A lot of people have thought that at many points in history; that doesn’t make it true for us right now, and I see no evidence that it is.

There are concerns that we will fall behind China in the race to advanced artificial intelligence. They have a larger population, and therefore more data, and they don’t have strict privacy concerns to hold back what they can do in AI. Do you think that we are entering a new AI arms race?

You’re correct, there is going to be a race. There’s been a race between companies, and there will be a race between countries.

Do you view it as a big danger for the West if a country like China gets a substantial lead in AI?

I don’t think it’s as simple as that. We will see uneven deployment of AI technologies. I think we are seeing this already in China, in their deployment of facial recognition in ways that we would not like to see here in the United States. As for new AI chips, the United States cannot afford to even begin to fall behind. However, not falling behind would require leadership that we do not currently have.

We’ve seen policies saying that we need more coal miners, while science budgets are cut, including places like the National Institute of Standards and Technology. It’s craziness, it’s delusional, it’s backward thinking and it’s destructive.

Let’s talk about some of the risks or potential dangers associated with AI and robotics. Let’s start with the economic question. Many people believe we are on the cusp of a big disruption on the scale of a new Industrial Revolution. Do you buy into that? Is there going to be a big impact on the job market and the economy?

Yes, but not in the way people talk about it. I don’t think it’s AI per se; I think it’s the digitalization of the world and the creation of new digital pathways through it. The example I like to use is toll roads. In the United States, we’ve largely gotten rid of human toll takers on toll roads and toll bridges. That wasn’t done with AI in particular; it was possible because a whole set of digital pathways has been built up in our society over the last 30 years.

One of the things that allowed us to get rid of toll takers is the tag you can put on your windscreen that gives your car a digital signature. Another advance that made it practical to eliminate the human toll lanes is computer vision: an AI system with some deep learning that can take a snapshot of a license plate and read it reliably. It’s not just at the toll gate, though; other digital chains had to come together to get us to this point. You can go to a website and register the tag in your car, with the particular serial code that belongs to you, and also provide your license plate number so that there’s a backup.

There’s also digital banking, which allows a third party to bill your credit card regularly without ever touching the physical card. In the old days you had to have the physical credit card; now it’s become a digital chain. There’s also a side effect for the companies that run the toll booths: they no longer need trucks to collect the money and take it to the bank, because they have this digital supply chain. A whole set of digital pieces came together to automate that service and remove the human toll taker. AI was a small but necessary piece in there, but it wasn’t that the person was replaced overnight by an AI system. It’s those incremental digital pathways that enable the change in labor markets; it’s not a simple one-for-one replacement.

Do you think those digital chains will disrupt a lot of those grassroots service jobs?

Digital chains can do a lot of things but they can’t do everything. What they leave behind are things that we typically don’t value very much but are necessary to keep our society running, like helping the elderly in the restroom, or getting them in and out of showers. It’s not just those kinds of tasks—look at teaching. In the United States, we’ve failed to give schoolteachers the recognition or the wages they deserve, and I don’t know how we’re going to change our society to value this important work, and make it economically worthwhile. As some jobs are lost to automation, how do we recognize and celebrate those other jobs that are not?

Predicting an AI future amounts to a power game for isolated academics who live in a bubble away from the real world

It sounds like you’re not suggesting that mass unemployment will happen, but that jobs will change. I think one thing that will happen is that a lot of desirable jobs are going to disappear. Think of the white-collar job where you’re sitting in front of a computer and you’re doing something predictable and routine, cranking out the same report again and again. It’s a very desirable high-paying job that people go to college to get and that job is going to be threatened, but the maid cleaning the hotel room is going to be safe.

I don’t deny that, but what I do deny is when people say, oh, that’s AI and robots doing that. As I say, I think this is more down to digitalization.

I agree, but it’s also true that AI is going to be deployed on that platform, so things may move even faster.

Yes, it certainly makes it easier to deploy AI given that platform. The other worry, of course, is that the platform is built on totally insecure components that can get hacked by anyone.

Let’s move on to that security question. What are the things that we really should worry about, aside from the economic disruption? What are the real risks, such as security, that you think are legitimate and that we should be concerned with?

Security is the big one. I worry about the security of these digital chains and the privacy that we have all given up willingly in return for a certain ease of use. We’ve already seen the weaponization of social platforms. Rather than worry about a self-aware AI doing something willful or bad, it’s much more likely that we’re going to see bad stuff happen from human actors figuring out how to exploit the weaknesses in these digital chains, whether they be nation states, criminal enterprises, or even lone hackers in their bedrooms.

What about the literal weaponization of robots and drones? Stuart Russell, one of the interviewees in this book, made a quite terrifying film called Slaughterbots about those concerns.

I think that kind of thing is very possible today, because it doesn’t rely on AI. Slaughterbots was a knee-jerk reaction saying that robots and war are a bad combination. I have another reaction: it always seemed to me that a robot could afford to shoot second. A 19-year-old kid just out of high school, in a foreign country, in the dark of night with guns going off around them, can’t afford to shoot second. There’s an argument that keeping AI out of the military will make the problem go away, but I don’t think it’s that simple.

I think you need to instead think about what it is you don’t want to happen and legislate about that rather than the particular technology that is used. A lot of these things could be built without AI.

As an example, when we go to the Moon next, it will rely heavily on AI and machine learning, but in the ’60s we got there and back without either of those. It’s the action itself that we need to think about, not which particular technology is used to perform that action. It’s naive to legislate against a technology; doing so doesn’t take into account the good things you can do with it, like having the system shoot second, not shoot first.

Slaughterbots

Be afraid. Be very afraid.

Those lines from the 1986 film The Fly only begin to describe the horror of slaughterbots—AI drones small enough to land in a human hand but smart enough to pick out a specific human target and deadly enough to assassinate that victim instantly with a single blow.

The tiny slaughterbot accomplishes its dirty work with wide-field cameras, tactical sensors and facial-recognition processors. It reacts 100 times faster than a human and moves in ways that defy sniper fire. Its minuscule three grams of shaped explosive detonate in the middle of the chosen victim’s forehead.

In a video demonstration reminiscent of a TED Talk, a clean-cut narrator dressed in a suit and T-shirt introduces the slaughterbot. The device separates the “good guys” from the “bad guys,” eliminating the latter, the narrator enthuses. He explains the death blow and shows a brief video-within-a-video of the drone in action as the courier of death. “Did you see that?” the narrator exclaims. “That little bang is enough to penetrate the skull and destroy the contents.”

The video of the talk went viral in 2017 and has attracted nearly 3 million views on YouTube. In it, the narrator points to giant screens behind the stage that show hapless victims scurrying in every direction to no avail. “Now that is an airstrike of surgical precision,” he intones as the miniature mechanical beasts single out and exterminate their prey.

Thankfully, it’s all a put-on. The Future of Life Institute and computer scientist and AI maven Stuart J. Russell of the University of California, Berkeley, created the seven-minute video to warn of the potential dangers of autonomous weapons. The technology for slaughterbots doesn’t exist. But it might in the near future, and that’s what worries arms-control advocates. Their message emerges clearly in their slaughterbot video: Be very afraid.

What about the AGI control problem and Elon Musk’s comments about summoning the demon? Is that something that we should be having conversations about at this point?

In 1783, when the people of Paris saw hot-air balloons for the first time, they were worried about those people’s souls getting sucked out from up high. That’s the same level of understanding that’s going on here with AGI. We don’t have a clue what it would look like. Predicting an AI future is just a power game for isolated academics who live in a bubble away from the real world. That’s not to say that these technologies aren’t coming, but we won’t know what they will look like before they arrive.

When these technology breakthroughs do arrive, do you think there’s a place for regulation of them?

As I said earlier, the place where regulation is required is on what these systems are and are not allowed to do, not on the technologies that underlie them. Should we stop research today on optical computers because they let you perform matrix multiplication much faster, and so would let you apply deep learning at much greater scale? No, that’s crazy. Are self-driving delivery trucks allowed to double-park in congested areas of San Francisco? That seems to be a good thing to regulate, not what the technology is.

Any AI program in the world today is an idiot savant living in a sea of now

Taking all of this into account, I assume that you’re an optimist overall? You continue to work on this so you must believe that the benefits of all this are going to outweigh any risks.

Yes, absolutely. We have overpopulated the world, so we have to go this way to survive. I’m very worried about the standard of living dropping as I get older because there won’t be enough labor. I’m worried about security and privacy, to name two more. All of these are real and present dangers, and we can see the contours of what they look like.

The Hollywood idea of AGIs taking over is way in the future, and we have no clue even how to think about that. We should be worried about the real dangers and the real risks that we are facing right now.


Rodney Brooks, one of the world’s foremost roboticists, co-founded iRobot Corp., a leader in consumer robotics and purveyor of the Roomba vacuum cleaner. Brooks holds a Ph.D. in Computer Science from Stanford University. He was also chairman and chief technology officer of Rethink Robotics. For a decade between 1997 and 2007, Brooks directed the Massachusetts Institute of Technology Artificial Intelligence Laboratory and later the MIT Computer Science and Artificial Intelligence Laboratory.