Tech

Artificial Intelligence Demands Intelligent Inquiry, Scientists Say

According to some of the biggest names in tech, artificial intelligence is scary. Bill Gates says AI could pose existential risks to humanity, and Elon Musk, head of SpaceX and Tesla Motors, says it’s potentially “more dangerous than nukes.”

Many who work in the development of AI beg to differ.

“I don’t think artificial intelligence, of the sort that would be a real threat, is anywhere on the near-term horizon,” said Lee Vinsel, a science and technology professor at Stevens Institute of Technology.

Fears that artificial intelligence will reach a level of technological sophistication at which Terminator-style robots could wipe out humanity, he said in an interview, are “way overblown.”

Vinsel said the only certainty about the future of so-called artificial superintelligence – a term defined by author Nick Bostrom as “an intellect that is much smarter than the best human brains” – is that it’s uncertain.

Viktoriya Krakovna, co-founder of the Future of Life Institute, said in an email that it could be productive for researchers to look back at other disruptive technologies and see how the scientific community collaborated with the private sector to prevent or mitigate damage.

Krakovna, whose institute studies the effects of artificial intelligence, said the automotive and pharmaceutical industries illustrate how “companies that were otherwise competitors collaborated on increasing safety.”

One of the better-known examples of such collaboration, Krakovna said, is the 1975 Asilomar Conference on recombinant – or artificially created – DNA, the technology behind products such as genetically modified foods. She said that as concerns grew in the 1970s over the ethical, scientific and societal implications of splicing genes, 140 professionals – molecular biologists, physicians and lawyers – met to identify and evaluate the risks surrounding experiments with DNA.

“The Asilomar Conference is an informative and encouraging example of self-regulation and risk mitigation by the scientific community,” said Krakovna. “A great deal can be learned from it about approaching AI.”

Vinsel said that setting rules and regulations for what AI researchers can and cannot do requires a constructive conversation, rather than one that is “vague and science fictional.”

He suggested the federal government could convene another Asilomar for AI, and he pointed to the 1920s as a success story.

During that disruptive decade, Commerce Secretary Herbert Hoover held several conferences covering technological issues such as auto safety, radio spectrum and aviation. The discussions eventually led to the formation of new agencies, like the Federal Communications Commission, to both oversee and legitimize the emerging technologies.

“When we look at the 20s, those are still really important moments for those industries,” Vinsel said.

But in today’s litigious society, Dan O’Connor, vice president of the Computer and Communications Industry Association and head of the Disruptive Competition Project, is more interested in answering liability questions.

Who is at fault when something goes wrong?

“Placing the right amount of liability in the right places will ensure that innovation can continue while also ensuring that those who can best mitigate risk are incentivized to do so,” O’Connor said.

But as AI develops and expands, touching on everything from medical advances to fighting forest fires, some experts question whether the current structure of government can both support the development of superintelligent AI and guard against potential harms.

Many following the issue have argued that the government should create a Federal Robotics Commission.

In a 2014 Brookings Institution report, Ryan Calo, assistant professor of law at the University of Washington, criticized the federal government’s approach of relying on a patchwork of agencies – such as the Federal Aviation Administration, which sets rules for drones – to oversee such a broad topic, and he recommended creating a single agency to oversee advancing hardware and software.

But not everyone is on board with that approach.

“I think it would be an awful idea, the worst idea right now,” Robert Atkinson, president of the Washington-based Information Technology and Innovation Foundation, said during an ITIF-sponsored debate about AI on Wednesday, adding that such a commission would just add more oversight and slow down the rate of innovation.

“We shouldn’t worry about these things until we’re a lot farther along with robots,” he said.

Christine Hendren, executive director of the Center for the Environmental Implications of NanoTechnology at Duke University, pointed out that the federal government has a lot of catching up to do to prepare for potential 21st-century tech problems.

One persistent obstacle, Hendren said, is what’s known as the pacing problem: “the inherent difference between the rate of technological innovation, which is fast because it’s based on market and public incentives to reap the benefits of new technology, compared to the rate of governmental oversight, which is slow, because they have different legal requirements and research standards necessary to meet those legal requirements.”

A possible solution to that problem, Hendren said, is to follow guidelines crafted by the International Risk Governance Council, a science-based think tank focused on risk governance and assessment. The group has developed a framework countries can use to assess and manage the risks of new technologies.

“So pre-assessment of risks and hazards, appraisal, characterization, and evaluation, all of those happen up front – could be happening now with AI for example – and they include things like societal impact and things that are just not fully included in the current legal framework, at least in the U.S., and really the EU too,” she said.

Morning Consult