It was a busy weekend at the local supermarket, and lines were forming at the checkout. Around a half-dozen people were lined up at the automated checkout registers when I noticed there was no line at the checkout where a human cashier was waiting. When a customer approached the checkout area, they scanned the options and decided to wait in line for the automated checkout instead of walking right up to the cashier with no wait. I could not resist asking the customer why they chose to wait for a machine instead of getting immediate service from a human. Their response carries an important message for the future of artificial intelligence (AI) and the robots it enables: “I don’t want them (the human cashier) looking at everything that I’m buying, and I don’t care for their opinions of what I’m getting.”
(Author’s note: Throughout this column, I intentionally conflate the terms “robot” and “drone” and often ignore the difference between a robot that is remote-controlled by a human and a robot that is AI-enabled, and thus to some degree autonomous. There probably was a time, decades ago, when these terms described distinct “remote-controlled” or “AI-enabled/autonomous” categories, but there is little difference today, as remote control and AI merge, and less with each passing day.)
While surveys on automated supermarket checkout are limited and diverse, it’s clear that people are divided in their views towards robot cashiers — about a third prefer robots over humans, for various reasons. Similarly, bank ATMs (robot tellers, in a sense) have been widespread for a half-century, but were preferred by some from the outset — and are preferred over human tellers by a wide margin today. Perhaps more relevant, a recent survey of New Yorkers showed that while most preferred more thorough traffic enforcement, 59 percent preferred speed cameras — robot traffic cops — over human police officers and 65 percent of Blacks and 74 percent of Latinos preferred robot speed cops over human traffic police.
It’s clear that a substantial element of the public prefers to deal with robots instead of human cashiers, tellers and cops. While some of this has to do with minimizing time consumed, some has to do with the obvious fact that humans are opinionated, while machines often leave the impression that they are not.
In reality, the shopper I interviewed preferred a robot cashier over a human cashier because they believed the machine would have no memory or opinion. Many readers will immediately scoff that the robot cashier actually keeps a closer watch on the customer than the human does, and that the robot never forgets.
Nevertheless, machines can leave a different impression.
Nor would it be surprising if we learned that victims of Nazi oppression or of Jim Crow would have preferred (presumably neutral) robot police to (bigoted) human police.
This phenomenon, in which some people prefer an apparently neutral robot over an opinionated human, becomes more important as we enter an era of AI-enabled robots. And it may partially explain why AI-enabled robots enjoy public support despite warnings from prominent figures ranging from Henry Kissinger to former Google boss Eric Schmidt about the risks of unfettered AI. If someone suspects that opinionated humans in authority intend to do them or their family harm, then that person will probably prefer an apparently neutral, AI-enabled robot over an obviously bigoted human.
As we recently saw in San Francisco, however, when a robot is equipped to physically harm a human (robots inflict financial, emotional and other harms on humans every day, but this rarely raises a public outcry), an entirely different set of public attitudes emerges. In this case, local police officials and officers proposed to use armed robots to violently deal with suspects in situations where human police officers and civilians would be in imminent deadly danger. Many human police prefer to deploy robot police over human officers in such situations; nevertheless, opposition to “killer robots” was loud and immediate.
Preferring robots (or drones, as they are sometimes called) over humans has been a growing view among military commanders for decades. Military aviation commanders have preferred robot-piloted aircraft (surveillance/combat drones) over human-piloted aircraft for the same reason human police sometimes prefer robots to human officers: avoiding the loss of human life on your own side. Moreover, compared with human pilots, robot pilots cost less and don’t sleep or have families, and they will unquestioningly commit suicide or submit to extreme conditions that no human could survive. For these same reasons, naval commanders are introducing robot surface and underwater warships that carry no on-board sailors, and the first generation of robot ground combat vehicles is now being promoted by army commanders.
Each major episode of military combat, from WWII through Ukraine, has seen a steady increase in the use of combat robots, and it is widely reported that combat robots are playing a principal warrior role in Ukraine today.
Although there is no agreed definition of AI, almost everyone involved in the field today would agree that AI involves some type of machine learning, in which a computer-like device is able, by itself, to take evolving conditions into account and respond to them. By this type of definition, almost all remote-controlled robots/drones are evolving into autonomous robots, if for no other reason than that human commanders, officials and executives conclude that, on net, robots cost less than humans. As they do, the reasons why some people will prefer a robot to a person will become more pronounced and more controversial.
We are entering an era in which the use of AI-enabled robots — whether they are labeled “drones,” “autonomous vehicles” or “updated supermarket check-out machines” — will be widespread because some people prefer such robots to humans, for economic, social, personal, military or other reasons, while other people strenuously object to dealing with such robots.
If any one thing is clear, it is that we are intellectually unprepared for both this era and the debate that it will spur.
Roger Cochetti provides consulting and advisory services in Washington, D.C. He was a senior executive with Communications Satellite Corporation (COMSAT) from 1981 through 1994. He also directed internet public policy for IBM from 1994 through 2000 and later served as Senior Vice-President & Chief Policy Officer for VeriSign and Group Policy Director for CompTIA. He served on the State Department’s Advisory Committee on International Communications and Information Policy during the Bush and Obama administrations, has testified on internet policy issues numerous times and served on advisory committees to the FTC and various UN agencies. He is the author of the Handbook of Mobile Satellite Communications.