Vincent Everts is a fascinating Dutch technologist who also travels the world interviewing interesting people. He has a Tesla whose software just got a “self-driving” update. We discussed the social impact of autonomous vehicles. On the one hand, the technology is likely to have positive effects such as reducing car accidents, eliminating parking problems, and ameliorating traffic jams. On the other hand, it raises moral and ethical issues around autonomous technology: How can we be sure these vehicles will drive safely? How should they decide whose safety to prioritize? Will today’s truck drivers, taxi drivers, and Uber drivers soon be out of a job? Military drones bring these questions into even sharper focus, and recent reports of Russia developing nuclear-armed drone submarines make them even more pressing.
Vincent asked why, after many failed promises, we should believe AI is really happening now. The history of AI is definitely a cautionary tale: what we thought was easy (e.g., vision, language, manipulation) turned out to be hard, and what we thought was hard (playing chess, solving logic puzzles) turned out to be easy. But today really does seem different. Recent advances in deep neural networks have met or surpassed human performance on a variety of tasks, including character recognition, speech recognition, translation, image labeling, and drug discovery. McKinsey predicts that the economic impact of these new systems could reach $50 trillion over the next 10 years.
We played with the Amazon Echo and discussed how, like the Tesla, it periodically gets software upgrades that make it smarter and more capable. We also talked about the possibility of using new technologies like cryptocurrencies and smart contracts to regulate the new AI systems and extend today’s laws to them. Here’s the video:
And here’s an English version of Vincent’s page about the discussion.