Americans are less afraid of the abstract threat of artificial intelligence eventually outpacing humans than of more tangible, personal interactions, like robots that can perform surgery.
More than half of people (57 percent) already realize artificial intelligence is present in their daily lives, but they’re split on its level of threat, according to a recent Morning Consult poll. Forty-one percent of 2,200 adults polled believe AI is generally safe, while 38 percent of people think it’s unsafe.
But just over half of Americans support continuing research on artificial intelligence, even as more of them think AI will hurt the economy (36 percent) than believe it will help (28 percent).
Specific tasks that could be — or may already be — performed by artificial intelligence further revealed people’s misgivings about swapping out a human presence in some scenarios, even as Americans appear eager to hand over simpler, labor-intensive tasks, such as cleaning, to robots.
Three out of five people are uncomfortable with artificial intelligence making financial investments, even as Wall Street moves full steam ahead in supplementing traditional human efforts with AI; the asset management firm BlackRock recently announced it would use more robots for stock-picking.
Sixty-five percent of respondents are also wary about letting AI drive a car. And despite the popularity of the Tinder dating app, 68 percent of people said they’re uncomfortable letting AI choose their romantic partner.
People particularly don’t like the idea of artificial intelligence flying an airplane or performing surgery: When asked about each, seven out of 10 respondents said AI performing those tasks makes them uncomfortable.
Just over half of people also said they’re less likely to support AI research knowing robots could cause mass unemployment, and that figure climbed to 60 percent among those aged 55-64. People were also worried about further AI research when reminded that artificial intelligence machines are expensive to maintain and repair. That factor hampered support for AI research more than questions of ethics.
Wariness about AI surpassing human brainpower comes not just from the general public but from some of the people closest to the technology, like Tesla boss Elon Musk. Musk and physicist Stephen Hawking both joined AI researchers this year in pledging support for AI principles meant to ensure the technology benefits people and does not lead to an arms race.
Irving Wladawsky-Berger, a retired IBM executive who frequently writes and speaks on AI, said that while the likes of Elon Musk, Bill Gates and other brilliant thinkers have every reason to worry about the broader ramifications of artificial intelligence, most people can't fathom anything like that happening in their own lifetime. Instead, they see the issue through a lens derived from Hollywood and science fiction.
“When AI is used in robotic surgery or when AI is used as IBM has been doing with Watson to help oncologists, or when we discuss self-driving cars, that’s very concrete and it’s immediate. Even if people don’t know exactly what you’re talking about, they know enough about the technology,” he said.
But Wladawsky-Berger said even the current possibilities of AI offer enough to worry about — such as automation putting more and more people out of a job, or machines taking on the discriminatory practices of police departments. Last month, an article in Government Technology magazine delved into the issue of predictive policing, focusing on a startup that is making an effort to acknowledge and counter the bias against minority populations in some programs’ algorithms.
Scrapping AI tools entirely is not the answer, said Wladawsky-Berger, but rather “it means we need to know what we’re doing.”
Polling was done in the United States between March 30 and April 1.