Month: October 2015

Talk on Semantics, Deep Learning, and the Transformation of Business

On November 2, 2015, Steve Omohundro will speak at the Deep Learning Applications Meetup at the Mixtile Lab in Mountain View about “Semantics, Deep Learning, and the Transformation of Business”:

Semantics, Deep Learning, and the Transformation of Business

Monday, Nov 2, 2015, 7:00 PM

Mixtile Lab
935 Sierra Vista Ave Suite F Mountain View, CA


Steve Omohundro has been a scientist, professor, author, software architect, and entrepreneur doing research that explores the interface between mind and matter. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He was a computer science professor at the University of Illinois at Urbana-Champaign and…


The event is free and all are welcome!

Here are the slides: Semantics, Deep Learning, and the Transformation of Business


by Steve Omohundro, Ph.D.

Deep learning is likely to have a big impact on business. McKinsey predicts that AI and robotics will create $50 trillion of value over the next 10 years. Over $1 billion of venture investment has gone to 250 deep learning startups over the past year. Deep learning systems have recently broken records in speech recognition, image recognition, image captioning, translation, drug discovery and other tasks. Why is this happening now and how is it likely to play out? We review the development of AI and the pendulum swings between the “neats” and the “scruffies”. We describe traditional approaches to semantics through logics and grammars and the new deep learning vector semantics. We relate it to Roger Shepard’s cognitive geometry and the structure of biological networks. We also describe limitations of deep learning for safety and regulation. We show how it fits into the rational agent framework and discuss what the next steps may be.
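The "vector semantics" mentioned above can be made concrete with a toy sketch: words are represented as dense vectors, and similarity of meaning becomes geometric closeness. The 3-dimensional vectors below are invented purely for illustration; real systems such as word2vec learn hundreds of dimensions from large text corpora.

```python
import math

# Hand-made toy word vectors (illustrative only -- real embeddings are learned).
words = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The classic analogy: vector(king) - vector(man) + vector(woman)
# should land closest to vector(queen).
target = [k - m + w for k, m, w in zip(words["king"], words["man"], words["woman"])]
best = max(words, key=lambda w: cosine(words[w], target))
print(best)  # queen
```

In contrast to logic- or grammar-based semantics, nothing here is a discrete symbol or rule: the "meaning" lives entirely in the geometry of the vectors, which is what connects this approach to Shepard's cognitive geometry.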

Vincent Everts Interview on Self-Driving Teslas, Amazon Echo, and Moral AIs

Vincent Everts is a fascinating Dutch technologist who travels the world interviewing interesting people. He has a Tesla whose software just received a “self-driving” update, so we discussed the social impact of autonomous vehicles. On the one hand, they are likely to have positive effects such as reducing car accidents, eliminating parking problems, and easing traffic jams. On the other hand, they raise moral and ethical issues: How can we be sure these vehicles will drive safely? How should they decide whose safety to prioritize? Will today’s truck drivers, taxi drivers, and Uber drivers soon be out of a job? Military drones bring these questions into even sharper focus, and recent reports of Russia developing nuclear-armed drone submarines make them more pressing still.

Vincent asks why, after many failed promises, we should believe AI is really happening now. The history of AI is certainly a cautionary tale: what we thought was easy (e.g., vision, language, manipulation) turned out to be hard, and what we thought was hard (playing chess, solving logic puzzles) turned out to be easy. But today really does seem different. Recent advances in deep learning neural networks have matched or surpassed human performance on a variety of tasks, including character recognition, speech recognition, translation, image labeling, and drug discovery. McKinsey predicts that the economic impact of these new systems could reach $50 trillion over the next 10 years.

We played with the Amazon Echo and discussed how, like the Tesla, it periodically gets software upgrades that make it smarter and more capable. We also talked about the possibility of using new technologies like cryptocurrencies and smart contracts to regulate the new AI systems and extend today’s laws to cover them. Here’s the video:

And here’s an English version of Vincent’s page about the discussion.