This summer, Mark Zuckerberg sparked up his backyard barbecue for a Q&A on Facebook Live — and burned fellow billionaire Elon Musk at the same time.
Here’s how the two tech moguls ignited a feud over artificial intelligence (AI). First, Tesla CEO Musk suggested the world should start developing regulations for AI now, before the technology outruns human ability to deal with its darker aspects.
“AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late. AI is a fundamental risk to the existence of human civilization,” Musk warned a gathering of the US National Governors Association.
A few days later, Facebook CEO Zuckerberg hosted a Q&A livestream while grilling assorted meats in his backyard. When one viewer asked for his reaction to Musk’s AI comments, Zuckerberg went from folksy to all fired up.
“I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it’s pretty irresponsible,” Zuckerberg said.
“You need to be careful about what you build and how it is going to be used,” he added. “But people who are arguing for slowing down the process of building AI, I just find that really questionable.”
No word on whether the two tech titans patched things up over a hot dog on Zuckerberg’s porch. But the debate about regulating AI continues to smoulder.
A key concern about regulating AI is that it will stifle innovation, thus “slowing down the process,” as Zuckerberg put it. Advances in AI are happening so quickly, however, that economists and governments are already racing to catch up with its impact on human employment.
Technologies like AI, machine learning, robotics and automation could lead to five million job losses in the world’s 15 largest economies by 2020, according to projections from the World Economic Forum.
Once those human jobs disappear, so do the associated income tax revenues collected by governments around the globe. Microsoft co-founder Bill Gates has floated the idea of one specific regulation to deal with this dent in government coffers: a tax on robots.
There’s also the issue of physical safety. Musk has pointed out that we already regulate activities like driving cars and flying planes to prevent accidents — so extending safety standards to AI-based products like self-driving cars and drones seems to make sense.
That’s why governments worldwide are modifying their existing transportation laws to prevent traffic accidents and keep drones from interfering with airplane flight paths.
Speaking of self-driving cars, who is liable — legally and financially — in accidents involving those automobiles? Can the owner of an autonomous vehicle abdicate legal responsibility, blaming the companies that made the car’s AI software and sensors? Who pays the damage costs? How will such accidents affect insurance rules and rates?
In February, the European Union (EU) passed a resolution to draft legislation clarifying liability around self-driving cars. It calls for a mandatory insurance scheme for the vehicles and a supplementary fund to make sure victims of accidents in autonomous cars are fully compensated.
Autonomous weapons are the ultimate AI-based threat to human safety. In 2014, renowned physicist Stephen Hawking ominously told the BBC — via his AI-powered predictive communication device — “the development of full artificial intelligence could spell the end of the human race.”
In 2015, more than 3,000 executives and scientists (including Musk, Hawking and Apple co-founder Steve Wozniak) signed an open letter warning about the dangers of “autonomous weapons (which) select and engage targets without human intervention.”
“It will only be a matter of time,” the letter cautioned, “until (autonomous weapons) appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, and warlords wishing to perpetrate ethnic cleansing.”
AI and automation will make cybersecurity even more challenging than it already is today, according to a white paper commissioned by Barack Obama shortly before he left the Oval Office. The report urges U.S. government officials to “ensure that AI systems and ecosystems are secure and resilient to intelligent adversaries.”
AI is intelligent but it isn’t perfect. Some cases of AI gone awry suggest the technology can reflect the biases of humans who develop, program and interact with it. These include an algorithm designed to judge beauty contests that ended up favouring white contestants, a Microsoft chatbot that ‘learned’ to tweet racist phrases, and automated identification systems that have mistakenly fingered innocent people as child support cheats and terrorists.
With no regulatory oversight or framework in place, where can people turn when they feel an AI system has discriminated against them on the basis of their race, gender, age, appearance or other characteristics? A group of British researchers has called for the establishment of a watchdog body to review such complaints.
It’s worth noting that governments in Europe and the U.S. have made it clear they want AI regulation that promotes innovation without compromising safety, ethics and privacy. The European Parliament resolution adopted in February states that any rules on robotics and AI should be developed “in order to fully exploit their economic potential and to guarantee a standard level of safety and security.”
The 2016 Obama white paper recommends that the U.S. government pursue “a policy, legal and regulatory environment that allows innovation to flourish while protecting the public from harm.”
Technology has opened the door to many AI benefits. But as with the pod bay doors in the classic film 2001: A Space Odyssey, it might be wise to keep a human hand on the controls before our own HAL refuses to open them in an emergency.
Up Next: If it's time to add AI to your business, here's how it can help.