How did you get interested in this topic?
Twenty-five years ago I began operating remotely operated vehicles in the deep ocean for science and archaeology. What we found surprised us: the robots did not make the work cheaper or safer, but they changed it in fundamental ways. Instead of two people diving in a submarine, the robots enabled a whole room full of people to explore the ocean together. Since then, I’ve made my career researching the technological and social issues around human-operated, remote, and autonomous robots.
What are the myths of autonomy?
There are really three myths of autonomy. First, the idea that robots replace people one for one in jobs. They don’t. When we add robots, the nature of the job changes, and the work that gets done changes in some way too. An autopilot in an airplane, for example, helps keep the airplane on its flight path, and it changes the job the human pilot has to do.
The second myth is the myth of linear progress: that we move from human-operated systems, to remotely operated systems, and naturally toward fully autonomous systems. We don’t. These three modes co-exist, they are evolving together, and they are converging. An airliner is a lot like a robot that you sit inside. A small drone is a remotely operated vehicle with some elements of autonomy.
The final myth is the myth of full autonomy. Engineers and roboticists tend to believe that technology is inevitably evolving toward machines that act completely on their own, and that full autonomy is somehow the highest expression of robotic technology. It isn’t. Full autonomy is a great problem to work on in the laboratory. But solving the problem in an abstracted world, difficult as it may be, is not as challenging, nor as worthwhile, as autonomy in real human environments. Every robot has to work within some human setting to be useful and valuable.
I’m arguing that we have to rethink our notion of progress, that the ultimate in technology is not full autonomy but safe, transparent, trusted collaboration between people and machines.
Does that mean we should abandon all this cool technology?
Absolutely not. I’d love to have lasers on my car, machine vision, radars, algorithms that identify hazards in my path. I just want them to help me drive the car, even if I’m at a higher level of supervision. And the same level of effort that goes into the algorithms needs to go into making sure they’re transparent to the user. In fact, at a very basic level, the algorithms have to be designed with humans in mind.
But wouldn’t my autonomous car let me do other things while I’m sitting in traffic?
It could, and it will, but within limits. I think the vision of us sleeping while our cars drive us places is neither practical nor safe. If a hundred-million-dollar airliner, which flies through a very controlled space and is operated by highly trained people, still needs humans to monitor and intervene as frequently as they do, how can we expect cars, which cost a few thousand dollars and drive through much less controlled, more diverse environments, to do better?
What’s missing in the dream of the fully autonomous car?
People. Driving is a social experience. My late friend Seth Teller used to say that driving is composed of hundreds of short-lived social contracts: a little wave here, a little eye contact there, even a flipped bird. I don’t see computer scientists even beginning to study those social dynamics, or what role they play in operating complex systems. Driving, too, is heavily dependent on context. I can drive from one neighborhood to the next and see different cars, different driving, different pedestrian expectations and behavior. Again, I don’t see people even beginning to code that into their systems.
What have we learned from extreme environments?
Extreme environments are places where we’ve had to use robots because they’re so hostile to human life. I think the small-drone world is where undersea robots were in 1986, when Bob Ballard and Martin Bowen sent Jason Jr. down the grand staircase of the Titanic. They’re roving eyeballs. Drones are very good roving eyeballs, and very important as a new means of seeing and perception. But we spent twenty years in the deep ocean learning how robots could be quantitative tools that digitize the world in amazing ways and bring it back to people to explore in the data.
How is the technology of the Apollo moon landings still relevant to this conversation?
Fifty years ago, when the engineers at MIT first got the assignment to build a computer to control the lunar landings, they thought it would only need two buttons for the user: “Go to moon” and “Take me home.” Instead, what they ended up with was a digital computer that had many different modes of interaction with the astronauts and enabled them to choose exactly what level of involvement they wanted. That is why Neil Armstrong turned part of it off in the last moments of Apollo 11; even so, he still landed in a heavily digitally aided, fly-by-wire mode. By contrast, the Soviet spacecraft, whose controls used less sophisticated analog computers, relied on full autonomy with very little involvement of the pilot.