On August 17, 2016 Steve Omohundro spoke to the “Million AI Startups” group about “AI and Human Safety”:
AI and robotics will create $50 trillion of value over the next 10 years, according to McKinsey. That potential is driving their rapid development, but six recent events show the need for care as these systems are integrated into human society. In the past few weeks we’ve seen three Tesla autopilot crashes, the Dallas police using a robot to kill a suspect, a Stanford Shopping Center security robot running over a small child, and the first “Decentralized Autonomous Organization” losing $56 million to a bug in a smart contract. As we move forward with these technologies, we will need to incorporate human values and new principles of security so that their benefits to humanity can be fully realized.
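The DAO loss mentioned above was widely reported to stem from a reentrancy flaw: the contract paid out before updating its internal balance, so an attacker could call back into the withdrawal routine and drain funds repeatedly. Here is a minimal Python sketch of that class of bug; all class and method names are hypothetical, and real smart contracts run on a blockchain virtual machine, not in Python:

```python
# Sketch of a reentrancy-style bug, the class of flaw behind the DAO loss.
# Names are hypothetical; this only models the control flow of the exploit.

class VulnerableVault:
    """Pays out *before* updating the balance -- the classic mistake."""
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0:
            user.receive(self, amount)   # external call happens first...
            self.balances[user] = 0      # ...state update happens last (bug!)

class Attacker:
    """Re-enters withdraw() from inside the payout callback."""
    def __init__(self):
        self.stolen = 0
        self.reentries = 0

    def receive(self, vault, amount):
        self.stolen += amount
        if self.reentries < 2:           # call back in before the balance resets
            self.reentries += 1
            vault.withdraw(self)

vault = VulnerableVault()
attacker = Attacker()
vault.deposit(attacker, 100)
vault.withdraw(attacker)
print(attacker.stolen)  # 300: the single 100 deposit is drained three times
```

The standard fix is to update the balance before making the external call, so a re-entrant call sees a zero balance.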
Here is a pdf file of the slides.
The TED conference, started in 1984, has become the standard bearer for hosting insightful talks on a variety of important subjects. They have made videos of over 1,900 of these talks freely available online, and they have been watched more than a billion times! In 2009 they extended the concept to “TEDx Talks” in the same format but hosted by independent organizations all over the world.
On January 6, 2016 Mountain View High School hosted a TEDx event on the theme of “Next Generation: What Will It Look Like?”. They invited both students from the school and external speakers to present. I spoke on “What’s Happening With Artificial Intelligence?”. A video of the talk is available here:
and the slides are available here:
I talked about the multi-billion dollar investments in AI and robotics being made by all the top technology companies and the 50 trillion dollars of value they are expected to create over the next 10 years. The human brain has 86 billion neurons wired up according to the “connectome”. In 1957 Frank Rosenblatt created a teachable artificial neuron called a “Perceptron”. Three-layer networks of artificial neurons were common in 1986, and much more complex “Deep Learning Neural Networks” were being studied by 2007. These networks started winning a variety of AI competitions, besting other approaches and often beating human performance. These systems are starting to have a big effect on robot manufacturing, self-driving cars, drones, and other emerging technologies. Deep learning systems that create images, music, and sentences are rapidly becoming more common. There are safety issues, but several institutes are now working to address the problems. There are many sources of excellent free resources for learning, and the future looks very bright!
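For readers curious about the teachable neuron mentioned above, here is a minimal Python sketch of Rosenblatt's perceptron learning rule, trained on the logical AND function. This is purely illustrative; modern deep networks stack many such units and train them by gradient descent rather than this simple rule.

```python
# A minimal sketch of Rosenblatt's 1957 perceptron learning rule,
# trained on the logical AND function. Integer weights keep the
# arithmetic exact; real systems use floating-point gradient methods.

def train_perceptron(samples, epochs=20, lr=1):
    """samples: list of (inputs, target) pairs with targets 0 or 1."""
    n = len(samples[0][0])
    w, b = [0] * n, 0
    for _ in range(epochs):
        for x, target in samples:
            # Threshold unit: fire iff the weighted sum exceeds zero.
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights toward the correct output.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 0, 1] -- matches AND
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a correct set of weights in finitely many steps.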
Eileen Clegg did wonderful real time visual representations of the talks as they were being given. Here is her drawing of my talk:
Each year Edge, the online “Reality Club”, asks a number of thinkers a question and they publish the short essay answers. This year the question was “What do you consider the most interesting recent scientific news? What makes it important?” The responses are here:
My own essay on “Deep Learning, Semantics, And Society” is here:
On December 8, 2015 Steve Omohundro will be the special guest speaker at the VLAB Annual Holiday Party, speaking about “AI, Deep Learning, and the Future of Business”. He will be followed by the “Chocolate Heads Movement Band”! See you there!
Here are the slides:
VLAB – AI, Deep Learning, and the Future of Business
On Saturday, November 28, 2015 at 2:00PM (Santiago, Chile time) Steve Omohundro will speak (remotely) at the Exosphere event “AI Nexus” on:
2:00 PM Remote Speaker: Steve Omohundro – Semantics, Deep Learning and the Transformation of Business
Steve Omohundro, a recognised Artificial Intelligence scholar, explains why semantics matter when talking about AI, what the deep learning trend is, and how business is going to be transformed by it.
McKinsey predicts that AI and robotics will create $50 trillion of value over the next 10 years. Many predict that the recent technology of “deep learning” will be a big part of the transformation. Over 250 deep learning startup companies have attracted more than $1 billion of venture investment in the past year. Deep learning systems have recently broken records in speech recognition, image recognition, image captioning, translation, drug discovery and other tasks. Why is this happening now and how is it likely to play out? We review the development of AI and the pendulum swings between the “neats” and the “scruffies”. We describe traditional approaches to semantics through logics and grammars and the new deep learning vector semantics. We relate it to Roger Shepard’s cognitive geometry and the structure of biological networks. We also describe limitations of deep learning for safety and regulation. We show how it fits into the rational agent framework and discuss what the next steps may be.
On November 2, 2015, Steve Omohundro will speak at the Deep Learning Applications Meetup at the Mixtile Lab in Mountain View about “Semantics, Deep Learning, and the Transformation of Business”:
The event is free and all are welcome!
Here are the slides: Semantics, Deep Learning, and the Transformation of Business
Semantics, Deep Learning, and the Transformation of Business
by Steve Omohundro, Ph.D.
Deep learning is likely to have a big impact on business. McKinsey predicts that AI and robotics will create $50 trillion of value over the next 10 years. Over $1 billion of venture investment has gone to 250 deep learning startups over the past year. Deep learning systems have recently broken records in speech recognition, image recognition, image captioning, translation, drug discovery and other tasks. Why is this happening now and how is it likely to play out? We review the development of AI and the pendulum swings between the “neats” and the “scruffies”. We describe traditional approaches to semantics through logics and grammars and the new deep learning vector semantics. We relate it to Roger Shepard’s cognitive geometry and the structure of biological networks. We also describe limitations of deep learning for safety and regulation. We show how it fits into the rational agent framework and discuss what the next steps may be.
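The “vector semantics” in the abstract above can be illustrated with a toy example: words become points in a vector space, and geometric nearness stands in for similarity of meaning, so analogies can be answered with vector arithmetic. The 3-dimensional vectors and dimension labels below are invented for illustration; real systems learn hundreds of dimensions automatically from large text corpora.

```python
# Toy illustration of vector semantics: hand-made word vectors and a
# "king - man + woman = ?" analogy answered by cosine similarity.
# The vectors and their dimension labels are invented for this example.
import math

vectors = {
    # made-up dimensions: [royalty, maleness, femaleness]
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine of the angle between vectors a and b: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Vector arithmetic for the analogy "king is to man as ? is to woman".
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]
best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # queen
```

The point is that meaning-like regularities become directions in the space, which is what lets learned embeddings support analogy and similarity queries.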
Vincent Everts is a fascinating Dutch technologist who also travels around the world interviewing interesting people. He has a Tesla car whose software just got a “self-driving” update. We discussed the social impact of autonomous vehicles. On the one hand, they are likely to have positive effects such as reducing car accidents, eliminating parking problems, ameliorating traffic jams, etc. On the other hand, they bring up moral and ethical issues related to autonomous technology: How can we be sure these vehicles will drive safely? How should they decide whose safety to prioritize? Will current truck drivers, taxi drivers, and Uber drivers soon be out of a job? Military drones bring these questions into even sharper focus. And recent reports of Russia developing nuclear-armed drone submarines make them even more pressing.
Vincent asks why, after many failed promises, we should believe AI is really happening now. The history of AI is definitely a cautionary tale: what we thought was easy (e.g., vision, language, manipulation) turned out to be hard, and what we thought was hard (playing chess, solving logic puzzles) turned out to be easy. But it really does seem like today is different. Recent advances in deep learning neural networks have met or surpassed human performance in a variety of tasks like character recognition, speech recognition, translation, image labeling, drug discovery, etc. McKinsey predicts that the economic impact of these new systems could reach $50 trillion over the next 10 years.
We played with the Amazon Echo and discussed the idea that, like the Tesla, it periodically gets software upgrades which make it smarter and more capable. We also talked about the possibility of using new technologies like cryptocurrencies and smart contracts to regulate the new AI systems and extend today’s laws to them. Here’s the video:
And here’s an English version of Vincent’s page about the discussion.