Cable TV Future Talk: “The AI Revolution”

On February 26, 2020 Steve Omohundro was interviewed by Marty Wasserman for the Palo Alto Cable TV program “Future Talk” about “The AI Revolution”:

Long time artificial intelligence researcher Steve Omohundro, Chief Scientist at the AI company AIBrain, discusses the exponential growth of AI, how it’s affecting every aspect of our lives, and the tradeoffs between the benefits and the dangers.

FXPAL Talk: The AI Platform Business Revolution, Matchmaking, Empathetic Technology, and AI Gamification

On October 15, Steve Omohundro spoke at FXPAL (FX Palo Alto Laboratory) about “The AI Platform Business Revolution, Matchmaking, Empathetic Technology, and AI Gamification”:


Popular media is full of stories about self-driving cars, video deepfakes, and robot citizens. But this kind of popular artificial intelligence is having very little business impact. The actual impact of AI on business is in automating business processes and in creating the “AI Platform Business Revolution”. Platform companies create value by facilitating exchanges between two or more groups. AI is central to these businesses for matchmaking between producers and consumers, organizing massive data flows, eliminating malicious content, providing empathetic personalization, and generating engagement through gamification. The platform structure creates moats which generate outsized sustainable profits. This is why platform businesses are now dominating the world economy. The top five companies by market cap, half of the unicorn startups, and most of the biggest IPOs and acquisitions are platforms. For example, the platform startup Bytedance is now worth $75 billion based on three simple AI technologies.

In this talk we survey the current state of AI and show how it will generate massive business value in coming years. A recent McKinsey study estimates that AI will likely create over 70 trillion dollars of value by 2030. Every business must carefully choose its AI strategy now in order to thrive over the coming decades. We discuss the limitations of today’s deep learning based systems and the “Software 2.0” infrastructure which has arisen to support them. We discuss the likely next steps in natural language, machine vision, machine learning, and robotic systems. We argue that the biggest impact will be created by systems which serve to engage, connect, and help individuals. There is an enormous opportunity to use this technology to create both social and business value.

Cooperation is the Central Issue of our Time

Cooperation is the most important issue of our time. It is the key to understanding biology, the success of humans, effective business models, social media, and a future society based on beneficial AI.

The challenge is that many interactions have the character of the “Prisoner’s Dilemma” or the “Tragedy of the Commons”, where selfish actors do better for themselves while harming the group benefit, and cooperative actors help the group but can lose out in individual competition.

A variety of mechanisms that lead to cooperation have been invented and studied in biology, economics, political science, business, analysis of social technologies, and increasingly in analyzing AI.

All of these subjects are grounded in biology and today’s biology exhibits cooperation at every level of the “Major Transitions in Evolution”:

The Major Transitions in Evolution


From Maynard Smith and Szathmáry’s book “The Major Transitions in Evolution”:


Biology has to explain how independent biological molecules work cooperatively inside cellular compartments, how separate genes cooperate in a genome, how mitochondria and other organelles cooperate in eukaryotic cells, how the cells in multicellular organisms cooperate, how two or more sexes cooperate in creating offspring, how social insects and other animals cooperate in hives, how mutualisms between different species arise, how humans cooperated in creating and using language, and how humans created cooperative societies.

Biological cooperation contains all the abstract elements of general cooperation studied by economics. But biological cooperation has the extra element of “relatedness” between organisms that share genes. Hamilton’s notion of “inclusive fitness” has been a central insight in understanding cooperation in many of these biological systems.
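
Hamilton’s rule is compact enough to state as a one-line check. Here is a small illustrative sketch (the numeric relatedness coefficients are the standard textbook values; the benefit and cost numbers are invented for illustration):

```python
def favored_by_selection(r, b, c):
    """Hamilton's rule: an altruistic trait is favored when r*b > c, where
    r = genetic relatedness, b = benefit to the recipient, c = cost to the actor."""
    return r * b > c

# Helping a full sibling (r = 0.5) at cost 1 for benefit 3 is favored...
print(favored_by_selection(0.5, 3.0, 1.0))    # True
# ...but the same help directed at a first cousin (r = 0.125) is not.
print(favored_by_selection(0.125, 3.0, 1.0))  # False
```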

But it looks to me like “partner choice”, “partner switching”, and “cheater punishment” are the fundamental mechanisms underlying many of these cooperative interactions, and they apply equally to economic, business, and political interactions, and increasingly to technological and AI interactions.

I therefore think it is very important to have a clear and mathematically precise theory of these mechanisms. I would love to see detailed simulation modelling, and eventually AI models, both for understanding and for mechanism design and policy design.
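
As a first step in that direction, here is a minimal, hypothetical agent-based sketch (the payoffs and parameters are invented for illustration, not taken from any published model) in which agents remember cheaters (cheater punishment) and interactions happen only by mutual consent (partner choice and switching):

```python
import random

random.seed(1)

# Standard prisoner's-dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    def __init__(self, i, strategy):
        self.i, self.strategy, self.score = i, strategy, 0
        self.blacklist = set()   # partners this agent has caught defecting

agents = ([Agent(i, "C") for i in range(10)] +
          [Agent(i, "D") for i in range(10, 20)])

for _ in range(100):
    unpaired = agents[:]
    random.shuffle(unpaired)
    while len(unpaired) >= 2:
        a = unpaired.pop(0)
        # Partner choice: a pairing happens only if neither side has
        # blacklisted the other, so known cheaters get ostracized.
        acceptable = [b for b in unpaired
                      if b.i not in a.blacklist and a.i not in b.blacklist]
        if not acceptable:
            continue             # nobody will interact with this agent now
        b = random.choice(acceptable)
        unpaired.remove(b)
        pa, pb = PAYOFF[(a.strategy, b.strategy)]
        a.score += pa
        b.score += pb
        # Cheater punishment: remember defectors and avoid them in future.
        if b.strategy == "D": a.blacklist.add(b.i)
        if a.strategy == "D": b.blacklist.add(a.i)

coop_avg = sum(x.score for x in agents if x.strategy == "C") / 10
defect_avg = sum(x.score for x in agents if x.strategy == "D") / 10
print(coop_avg, defect_avg)
```

After an initial transient in which defectors exploit strangers, they run out of willing partners, while cooperators settle into stable mutual cooperation, so the cooperators’ average score ends up far ahead. This is the qualitative effect the partner-choice literature describes.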

Those preliminary thoughts are meant to motivate the study of this excellent review article which tries to systematize the different explanations for cooperation in biology:

Evolutionary Explanations for Cooperation

Stuart A. West, Ashleigh S. Griffin, Andy Gardner



Natural selection favours genes that increase an organism’s ability to survive and reproduce. This would appear to lead to a world dominated by selfish behaviour. However, cooperation can be found at all levels of biological organisation: genes cooperate in genomes, organelles cooperate to form eukaryotic cells, cells cooperate to make multicellular organisms, bacterial parasites cooperate to overcome host defences, animals breed cooperatively, and humans and insects cooperate to build societies. Over the last 40 years, biologists have developed a theoretical framework that can explain cooperation at all these levels. Here, we summarise this theory, illustrate how it may be applied to real organisms and discuss future directions.

Here is the pdf of the paper:


Here is the key figure which tries to categorize all of the biological cooperation mechanisms:



Interview for the Argentinian El Cronista: “Do presidents dream of electric ministers?”

On August 26, 2019, Sebastián de Toma published an article in the Argentinian business newspaper El Cronista based in part on an interview with Steve Omohundro. His article is titled “¿Sueñan los presidentes con ministros eléctricos?” or “Do presidents dream of electric ministers?”:


He explores whether AI will help politicians make better economic decisions.

Steve suggested four levels of AI support for politicians:

  1. AIs can build much better economic models from a much wider range of data than traditional econometric sources. For example, an AI model might draw on video feeds from TV news, social media posts, video feeds from commerce hubs, audio from radio shows, etc. All of this data can inform much richer economic models. Monte Carlo simulations could then make much better predictions about the impact of policy interventions, and repeated simulations can reveal how robust the response to an intervention might be.
  2. AIs can help politicians recognize their cognitive biases and counteract them. The field of “behavioral economics” has identified a large number of biases, especially around small-probability events and the different perceptions of gains and losses. AIs can model the correct Bayesian responses and help a politician counteract their intuitive biases.
  3. In addition to helping a politician simulate the effects of a policy intervention, AIs can help to create policies with a desired impact. Economic models with policy knobs can be automatically optimized for the best predicted outcomes.
  4. Recently there have been advances in using AI to solve complex game-theoretic problems (e.g., the Libratus and Pluribus AIs which recently beat expert human poker players). This kind of AI could be applied to the problem of a new policy causing other parties to change their behavior. Well-designed policy should account for these responses and lead to desirable outcomes taking account of all participants’ likely behaviors.
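
The first level can be sketched in a few lines. This is a purely hypothetical toy model (the growth dynamics and every number in it are invented for illustration) showing how repeated Monte Carlo runs expose both the expected impact of a policy knob and how robust that impact is:

```python
import random
import statistics

random.seed(42)

def simulate_gdp(stimulus, years=10):
    """Toy economy: annual growth = baseline + policy effect + random shock.
    (Invented dynamics, for illustration only.)"""
    gdp = 100.0
    for _ in range(years):
        growth = 0.02 + 0.5 * stimulus + random.gauss(0, 0.02)
        gdp *= 1 + growth
    return gdp

def monte_carlo(stimulus, runs=2000):
    """Repeat the simulation to estimate the mean outcome and its spread."""
    outcomes = [simulate_gdp(stimulus) for _ in range(runs)]
    return statistics.mean(outcomes), statistics.stdev(outcomes)

# Compare two interventions; the spread indicates how robust each one is.
for s in (0.00, 0.01):
    mean, spread = monte_carlo(s)
    print(f"stimulus={s:.2f}: GDP after 10 years = {mean:.1f} +/- {spread:.1f}")
```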

Sebastián wrote (as translated by Google Translate):


How Researchers Changed the World Podcast: “The Ethical Implications of Artificial Intelligence”

On June 18, 2019, the podcast “How Researchers Changed the World” supported by the Taylor & Francis Group featured Steve Omohundro on “The ethical implications of artificial intelligence”.


Steve’s paper “Autonomous technology and the greater human good” was the most read paper in the history of the Journal of Experimental & Theoretical Artificial Intelligence. It’s available here:


The podcast explores the origins of that work and is available here along with a transcript:

Steve Omohundro – The ethical implications of artificial intelligence

The press release for the episode is available here:


Linghacks Keynote: “Language and AI: Hacking Humanity’s Greatest Invention”

On March 30-31 the wonderful “Linghacks” organization supporting computational linguistics held their “Linghacks II” event in Silicon Valley:


Steve Omohundro was invited to give the opening Keynote Address on “AI and Language: Hacking Humanity’s Greatest Invention”. His talk is available here starting at 14:20:

The slides are available here:

Autopiloto Podcast from a Self-Driving Car

On November 15, 2018 Steve Omohundro was interviewed live for the Autopiloto Podcast from a self-driving car that was exploring places of interest for self-driving around Silicon Valley. Here is the 12-hour podcast:


The interview with Steve begins at the timestamp 3:45:20.

Autopiloto Podcast Thursday

AUTOPILOTO is a 24-hour live online radio broadcast about all
things self-driving hosted from a semi-autonomous vehicle looping the
Bay Area. This broadcast takes up questions of how autonomy and
automatic movement will shape Bay Area geographies, societies, and
cultures. Considering self-driving as technology, psychological
state, anthropological condition and systems, what will our cities
sound like in a driverless future? How will society and infrastructure
systems adapt? What might humans do during newfound transit time? In
what ways do machines imitate human auto-pilot modes, and vice versa?
How can we build equitable, planetary, intelligent transit for all?

Video Highlights of the Responsible AI/DI Summit at SAP

SAP is setting an excellent example in making sure that artificial intelligence is beneficial for its customers, employees, and the broader society. They recently released a set of “Guiding Principles for Artificial Intelligence”:



They sponsored and hosted the 2018 “Responsible AI/DI Summit” and invited Steve Omohundro to present. A video of the highlights of the summit is available here:

Responsible AI/DI Summit 2018 Highlights

The Responsible AI/DI Blog is here:


Risk Group: “Rise of Algorithms in Decision Making”

On November 20, 2018 Steve Omohundro participated in Risk Group’s “Risk Roundup” discussing the “Rise of Algorithms in Decision Making” with Jayshree Pandya:


This episode of Risk Roundup discusses the rise of algorithmic decision making and its complex challenges, risks, and rewards. Omohundro provided thoughtful insight on the need to ensure integrity, transparency, and trust in algorithmic decision-making.

Here’s the video of our discussion:

Risk Roundup Webcast: Algorithmic Decision Making


AUTOPILOTO Radio Show from an Autonomous Vehicle

On November 15, 2018, Steve Omohundro will be interviewed about the social impact of AI in an autonomous vehicle driving around Silicon Valley as a part of the “AUTOPILOTO” art project:



Thursday, November 15, 2018 – Friday, November 16, 2018

What will our streets and cities look and sound like in a driverless future?

The Lucas Artists Program presents AUTOPILOTO by artist collective RadioEE.net, an online live-streaming 24-hour broadcast from a semi-autonomous vehicle traveling around the Bay Area, on November 15 and 16.

AUTOPILOTO will investigate the challenges and opportunities of emerging autonomous mobilities through live soundscapes, music, and Spanish-English-Vietnamese conversations with drivers, designers, technologists, municipal agents, researchers, artists, and scientists, opening a channel for music, storytelling, and sonic experiments.



November 15-16, 2018
SARATOGA, CA (1 October 2018) — This November, the Sally & Don Lucas Artists Program at Montalvo Arts Center presents a new commission from international creative collective Radioee.net: AUTOPILOTO, a marathon radio transmission broadcast while on the move in a semi-autonomous vehicle traversing the Bay Area, examining how emerging autopilot technologies are transforming the world. Live streaming on November 15 and 16, AUTOPILOTO will include interviews with drivers, designers, technologists, municipal agents, researchers, artists, scientists, mechanics and more, as well as soundscapes and music. Through storytelling and sonic experiments, it will compose an audio portrait of the Bay Area at a specific moment in time. The live-stream of the broadcast will be available on both radioee.net and montalvoarts.org.
AUTOPILOTO is a commissioned project by the Lucas Artist Program at the Montalvo Arts Center, and is presented as part of New Terrains: Mobility and Migration, a series of cross-disciplinary exhibitions, programs and experiences that explore how bodies move through spaces—social, political, literal, and figurative. The broadcast is co-hosted with Trami Cron of Chopsticks Alley Art. Special guests will include voices from ARUP; fka SV Inc; Nissan Research Center; SETI Institute; Transportation Sustainability Research Center, University of California, Berkeley, Yu-Ai Kai Community Center, and others. It will feature music and live performance by such artists as Anna Fritz, Taylor Ho Bynum, Philip Hermans, Motoko Honda, Shane A. Myrbeck & Emily Shisko, and San Jose Jazz. For more information, the public may visit Radioee.net or montalvoarts.org or call Donna Conwell at 408-777-2100.

Million AI Startups Talk: AI for Human Flourishing

Steve Omohundro will speak on “AI for Human Flourishing” on November 27, 2018 at 6:00 PM at BootUP Silicon Valley in Menlo Park as a part of Million AI Startups workshop on “AI for Mankind”.

AI for Human Flourishing

2018 is the best year in human history. The rates of hunger, poverty, violence, and illiteracy are all at their lowest levels ever. We have achieved this using both human intelligence and collective intelligence. But things are about to get even better using Artificial Intelligence. A recent UN report predicts that today’s AI will create at least $70 trillion of value through 2030 and new AI technologies could double that. AI will impact every single challenge humanity currently faces. In addition to vastly improving productivity, it will provide new solutions to social dilemmas and will provide new coordination mechanisms to foster cooperation. It will be used to predict and mitigate extreme behavior in a wide range of complex systems including the climate, economy, disease, politics, social media, transportation, and energy flows. It will usher in a new era of creativity and invention that will lead to unprecedented human flourishing. (Steve Omohundro, Ph.D.)

Here are some background materials for the talk:

Why 2017 May Be the Best Year Ever

Our world is changing

Explore the ongoing history of human civilization at the broadest level, through research and data visualization.

Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think

Bill Gates: These 4 books make me feel optimistic about the world

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress

Assessing the Economic Impact of Artificial Intelligence

Critical Transitions in Nature and Society

Social Self-Organization: Agent-Based Simulations and Experiments to Study Emergent Social Behavior

Theme: AI for Mankind

6:00 pm – 6:30 pm Check In, Food & Networking
6:30 pm – 6:50 pm AI for Human Flourishing
Speaker: Steve Omohundro, Ph.D., President, Self-Aware Systems
6:50 pm – 7:10 pm (To be announced)
Speaker: David Ayman Shamma, Ph.D., Sr Research Scientist, FXPAL
7:10 pm – 7:30 pm AI-Powered Future Simulation in Life and Business
Speaker: Richard Shinn, Ph.D., AIBrain
7:30 pm – 8:00 pm Discussion
8:00 pm – 8:30 pm Announcement & Networking

Social Media Storms Workshop: Steve Omohundro speaks on AI mitigation strategies

On October 10, 2018, Steve Omohundro will speak in the “Social Media Storms Workshop” put on by the Nautilus Institute, the Preventive Defense Project at Stanford, and Technology for Global Security. It is funded by the MacArthur Foundation.

We have seen the huge impact of “social media storms” across Facebook, Twitter, and other social media networks. Often these storms are driven by fake news, false alarms, extremist positions, and other forces of memetic contagion. How can we understand the dynamics? How can we detect when social media storms are happening? When they are dangerous? What are the best ways to dampen them down? To stop them? To guide them in a positive direction?

Steve Omohundro will discuss the role that AI has in creating fake news (e.g., the DeepFakes synthetic video software), in forming memetic storms, in detecting these storms, and in stopping them.

Responsible AI/DI Summit 2018: Panel on “Balancing Organization Goals with Responsibility in Complex Decisions”

On September 19, 2018 from 3-7:30, Steve Omohundro will present at the “Responsible AI/DI Summit 2018” at SAP Labs in Palo Alto. The event is sponsored by SAP, Qantellia, and Carol Tong Consulting. There is an excellent group of presenters who will provide a multi-disciplinary perspective on these important issues. Steve will be in the panel on “Pulling it all Together: Balancing Organization Goals with Responsibility in Complex Decisions”:


The intention of the summit is to bring a sense of “trusteeship” to emerging powerful technologies. The decision methodologies of “Decision Intelligence” will be essential in guiding the deployment of AI and other powerful technologies.

Registration is free!

SAP and Google publish their ethical AI principles

I’m very excited that more companies and governments are thinking about the ethical issues involved with AI. Two great examples are SAP and Google. SAP just published their 7 ethical AI guidelines:

German firm’s 7 commandments for ethical AI


and Google published their AI principles a few months ago:

AI at Google: our principles


Final Edge Question: “Mathematical Beauty” by Steve Omohundro

Edge.org, a wonderful online version of “The Reality Club”, had a yearly tradition of inviting diverse thinkers to respond to stimulating questions over the 20 years from 1998 until 2018. For the final question, they invited a wide variety of people to give their own answer to: “What is the last question?”

Steve Omohundro’s response was:

How did our sense of mathematical beauty arise?

Others’ responses are here:


Steve is interested in the question of mathematical beauty because it represents an inner sense of which abstract models, knowledge, and inferences are valuable, a sense that seems rather disconnected from ordinary evolutionary pressures. If we can fully understand the nature of mathematical beauty, it should shed light on unique aspects of human cognition.

Edge essay: “Costly Signalling” by Steve Omohundro

Edge.org, a wonderful online version of “The Reality Club”, had a yearly tradition of inviting diverse thinkers to respond to stimulating questions over the 20 years from 1998 until the final question in 2018. The responses were turned into books and published on the Edge website. The 2017 question was “What scientific term or concept ought to be more widely known?” Steve Omohundro’s response was this essay on the topic of “Costly Signalling”:


Stanford LAST Festival: Faking Life: AI, Deception, Blockchain

On March 24, 2018, Steve Omohundro spoke at the 5th LAST (Life/Art/Science/Tech) Festival presented by Stanford University, held at the Stanford Linear Accelerator Center:


He spoke on “Faking Life: AI, Deception, Blockchain”:

Here’s the video of his talk:

Here’s a photo from the event:

Million AI Startups AI Arts and Culture: Music, Arts, and Robotics

On March 21, 2018, Steve Omohundro spoke in the “Million AI Startups” meetup on “AI Arts and Culture: Music, Arts, and Robotics” about “AI, Deception, and Blockchains”. Piero Scaruffi and Richard Shinn also spoke:

AI Arts and Culture: Music, Arts, and Robotics

Wednesday, Mar 21, 2018, 6:00 PM

Bootup Ventures
68 Willow Road Menlo Park, CA

61 Members Went

We are happy to invite you to our first meetup of this year on Wednesday, March 21, 2018 to share the recent developments of Arts and Cultures in AI including Music, Visual Arts, Game, and Robotics. During the meetup, you will hear from the speakers who are heavily involved in the R&D for AI Arts and Cultural technologies and the related activities…

Check out this Meetup →

Here are the slides:

180321 AIBrain AI Deception Blockchain

Stanford Leonardo Art/Science Evening Rendezvous: “AI, Deception, and Blockchains”

On December 14, 2017, Steve Omohundro spoke in the Stanford Leonardo Art/Science Evening Rendezvous on “AI, Deception, and Blockchains”:


Here are the slides:

Here’s the abstract:

Recent AI systems can create fake images, sound files, and videos that are hard to distinguish from real ones. For example, Lyrebird’s software can mimic anyone saying anything from a one-minute sample of their speech, Adobe’s “Photoshop of Voice” VoCo software has similar capabilities, and the “Face2Face” system can generate realistic real-time video of anyone saying anything. Continuing advances in deep learning “GAN” systems will lead to ever more accurate deceptions in a variety of domains. But AI is also getting better at detecting fakes. The recent rash of “fake news” has led to a demand for deception detection. We are in an arms race between the deceivers and the fraud detectors. Who will win? The science of cryptographic pseudorandomness suggests that the deceivers will have the upper hand. It is computationally much cheaper to generate pseudorandom bits than it is to detect that they aren’t random. The issue has enormous social implications. A synthesized video of a world leader could start a war. Altered media could implicate people in crimes they didn’t commit. Governments have tampered with photographs since the beginning of photography. Stalin, for example, was famous for removing people from historical photos when they fell out of favor. The art world has had to deal with forgeries for centuries. Good forgers can create works that fool even the best art critics. The solution there is “provenance”. We not only need the work, we need its history. But provenances can also be faked if we aren’t careful! Can we create an unmodifiable digital provenance for media? We describe several approaches to using blockchains, the technology underlying cryptocurrencies, to do this. We discuss how the time and location of events can be cryptographically certified. And how future media hardware might provide guarantees of authenticity.
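
The provenance idea at the end of the abstract can be illustrated with a toy hash chain (a simplified sketch of the general approach, not the specific systems discussed in the talk; a real deployment would anchor these hashes on a public blockchain and use signed hardware attestations):

```python
import hashlib
import json

GENESIS = "0" * 64

def add_record(prev_hash, media_hash, note):
    """Append one provenance entry that commits to the previous entry's hash.
    (The timestamp is fixed here to keep the example deterministic.)"""
    entry = {"prev": prev_hash, "media": media_hash,
             "note": note, "time": "2017-12-14T19:00:00Z"}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

def verify(chain):
    """Recompute every hash and link; any edit to history breaks the chain."""
    prev = GENESIS
    for e in chain:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != e["hash"] or e["prev"] != prev:
            return False
        prev = e["hash"]
    return True

# A photo's history: captured, then cropped.
raw = hashlib.sha256(b"raw sensor data").hexdigest()
e1 = add_record(GENESIS, raw, "captured by camera firmware")
cropped = hashlib.sha256(b"cropped image data").hexdigest()
e2 = add_record(e1["hash"], cropped, "cropped in editor")

print(verify([e1, e2]))            # True
e2["note"] = "original, unedited"  # try to falsify the history
print(verify([e1, e2]))            # False
```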

and a photo:

tricycle magazine: “AI, Karma & Our Robot Future”

The Spring 2018 issue of the Buddhist magazine “tricycle” published “AI, Karma & Our Robot Future”, a conversation with Steve Omohundro and Nikki Mirghafori in which the two artificial intelligence scientists discuss what’s to come. It is based on a presentation given at CIIS:

AI, Karma & Our Robot Future

Two artificial intelligence scientists discuss what’s to come.

A conversation with Steve Omohundro and Nikki Mirghafori


CIIS: “Artificial Intelligence and Karma” A Conversation With Nikki Mirghafori and Steve Omohundro

On November 2, 2017 the San Francisco-based California Institute of Integral Studies held the event “Artificial Intelligence and Karma, A Conversation With Nikki Mirghafori and Steve Omohundro”:


Here is a recording of the event:


Nikki Mirghafori and Steve Omohundro: AI and Karma
Recorded November 2, 2017

In this episode, artificial intelligence scientist and Buddhist teacher Nikki Mirghafori and computer scientist Steve Omohundro discuss how the concept of karma can guide us as we push forward towards creating non-human intelligence.

Foresight Institute Great Debate on “Drop Everything And Work on Artificial Intelligence?”

On November 19, 2016, Steve Omohundro participated in the Foresight Institute’s “Great Debate” on whether we should “Drop Everything and Work on Artificial Intelligence?” Here is a video of the event:

This was the second of four debates at Foresight Institute’s The Great Debates in San Francisco.

Speakers on this panel:

Peter Voss, Head of AGI Innovations Inc

Steve Omohundro, President, Possibility Research

Monica Anderson, Director of Research at Syntience Inc

Michael Andregg, Co-Founder and CSO at Fathom Computing

Moderator: David Yanofsky, Reporter at Quartz

Introduction: Allison Duettmann, Foresight Institute

Discussion topics included:

Morality and Ethics of Artificial Intelligence

Narrow AI vs. Artificial General Intelligence

AI Safety

Deep Learning and Neural Networks

Predictions about the Singularity

Existential Risk

Longterm Futurism


Ashoka Foundation panel on “Empathy and Technology”

On July 26, 2017, Steve Omohundro was on a panel hosted by the Ashoka Foundation and the Hive on “Empathy and Technology”.

What is the role of empathy in technology — and what should it be?


Role of Empathy in Technology

Role of Empathy in Technology

Wednesday, Jul 26, 2017, 6:00 PM

Location details are available to members only.

4 Data Bees Went

Registration: Register here to confirm your attendance. Registered attendees only. https://www.eventbrite.com/e/empathy-and-technology-the-critical-intersection-tickets-35614021497

Agenda:
6:00pm – 6:30pm Registration and Networking
6:30pm – 6:45pm Introduction by The Hive & Ashoka
6:45pm – 7:45pm Panel Discussion and Q&A
7:45pm – 8:15pm Wrap-up an…

Check out this Meetup →

Here is a video of the event on Facebook:


KZSU Radio Henry George Program: “Steve Omohundro on AI Risk, Human Values, and Decentralized Resource Sharing”

On July 15, 2017, Steve Omohundro was interviewed on Mark Mollineaux’s radio show “The Henry George Program” about “AI Risk, Human Values, and Decentralized Resource Sharing”. Here’s a description of the show:

Steve Omohundro on AI Risk, Human Values, and Decentralized Resource Sharing

Released Jul 18, 2017

Steve Omohundro shares plans for creating provably correct protections against AI superintelligence and thoughts on how human values can be imbued into AI. The conversation also covers resource allocation, decentralized cooperation, and how blockchain Proofs of Work/Stake might be made compatible with basic needs.

Here’s a link to the show on CastBox:


and on iTunes:


Stanford CS22a Social and Economic Impact of Artificial Intelligence: “Social Impact and Ethics of AI”

On May 25, 2017 Steve Omohundro spoke in Jerry Kaplan’s Stanford CS22a class “Social and Economic Impact of Artificial Intelligence” on “Social Impact and Ethics of AI”.

Here’s Steve’s bio:

Steve Omohundro founded Possibility Research and Self-Aware Systems to develop beneficial intelligent technologies. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from Berkeley. He was a computer science professor at the University of Illinois and cofounded the Center for Complex Systems Research. He published the book “Geometric Perturbation Theory in Physics”, designed the programming languages StarLisp and Sather, wrote the 3D graphics system for Mathematica, invented many machine learning algorithms (including manifold learning, model merging, bumptrees, and family discovery), and built systems that learn to read lips, control robots, and induce grammars. He’s done internationally recognized work on AI safety and strategies for its beneficial development. He is on the advisory boards of several AI and Blockchain companies.

And here are the slides:

Million AI Startups talk: AI and Games

On February 15, 2017, Steve Omohundro spoke to the Million AI Startups group about the opportunities in “AI and Games”:

Next Generation AI Games

Wednesday, Feb 15, 2017, 6:00 PM

Bootup Ventures
68 Willow Road Menlo Park, CA

70 Members Went

The use of Artificial Intelligence (AI) techniques in computerized games is as long as the history of AI itself. With recent advancements in AI, new possibilities are emerging for building video games that take entertainment to the next level. In these games every character can exhibit human-like intelligent behavior capable of incrementally learni…

Check out this Meetup →

Here is a pdf of the slides for the talk.

Video games are now a $100 billion industry. For comparison, global movie box office revenues for 2017 are estimated at $41.2 billion.

Blizzard’s “Overwatch” has generated $1 billion in revenue (from their Q1 2017 financial statement). It is their fastest growing franchise with 30 million registered players.

AI characters, like Cortana in Halo, are becoming more important to games. DeepMind and Blizzard are about to release a version of StarCraft II as an AI research tool.

There are at least 5 ways in which AI will improve games:

  1. AI as characters in games.
  2. AI as player of games.
  3. AI for improving VR/AR and game interfaces.
  4. AI for modelling learners and tuning games to their needs.
  5. AI for gamification of work and society.

New Voting Systems

Voting (and other forms of social decision making) is fundamental to our society. Today’s voting machines and technologies are antiquated, inefficient, and insecure. Here’s an excellent 8-minute description by Ron Rivest of how homomorphic encryption could help implement a better system:
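
To make the idea concrete, here is a toy sketch of additively homomorphic tallying in the style of the Paillier cryptosystem (tiny demonstration primes, no zero-knowledge proofs or threshold decryption, so this is only the homomorphic core of a scheme like the one Rivest describes; requires Python 3.8+ for the modular-inverse form of `pow`):

```python
from math import gcd

p, q = 17, 19                 # toy primes; real systems use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m, r):
    """Paillier encryption; r must be random and coprime to n."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Each voter encrypts 1 (yes) or 0 (no) under the election's public key.
ballots = [encrypt(v, r) for v, r in [(1, 2), (0, 3), (1, 5), (1, 7)]]

# Anyone can multiply the ciphertexts: the product encrypts the SUM of the
# votes, so the tally is computed without ever opening an individual ballot.
product = 1
for c in ballots:
    product = (product * c) % n2

print(decrypt(product))   # 3 yes votes
```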

Several groups are working to implement this kind of cryptographically secure voting on the blockchain:


In addition to better implementation technology, there are also a number of voting systems which are far superior to the one used in the US. Here’s a nice video describing the problems with “First Past the Post Voting”:
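
The vote-splitting pathology the video describes is easy to demonstrate. Here is a small hypothetical electorate (all numbers invented) where first-past-the-post and approval voting disagree:

```python
from collections import Counter

# 60 of 100 voters prefer the two similar candidates A1/A2; 40 prefer B.
# "first" is the voter's first choice (all that plurality counts);
# "approves" is every candidate the voter finds acceptable.
ballots = (
    [{"first": "A1", "approves": {"A1", "A2"}}] * 35 +  # A1 fans also approve A2
    [{"first": "A2", "approves": {"A2"}}] * 25 +        # A2 partisans
    [{"first": "B",  "approves": {"B"}}] * 40
)

plurality = Counter(b["first"] for b in ballots)
approval = Counter()
for b in ballots:
    approval.update(b["approves"])

print(plurality.most_common(1)[0][0])  # "B" wins: the majority split its vote
print(approval.most_common(1)[0][0])   # "A2" wins: approved by 60 of 100 voters
```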

I’ve supported the “Center for Election Science” for years which is trying to institute Approval Voting (originally range voting). This is a simple modification to the current US system with much better properties:

More radical ideas are being explored in “Liquid Democracy” which allows voters to delegate their votes:

A somewhat more complex voting system, “Quadratic Voting”, is being hailed as one of the most significant advances in recent years. Here’s the paper:

Eric Posner says (http://ericposner.com/quadraticvoting/):

Glen Weyl has uploaded a new version of his paper, Quadratic Voting (written with Steven Lalley), to SSRN, which now includes the completed proofs. Quadratic voting is the most important idea for law and public policy that has emerged from economics in (at least) the last ten years.

Quadratic voting is a procedure that a group of people can use to jointly choose a collective good for themselves. Each person can buy votes for or against a proposal by paying into a fund the square of the number of votes that he or she buys. The money is then returned to voters on a per capita basis. Weyl and Lalley prove that the collective decision rapidly approximates efficiency as the number of voters increases. By contrast, no extant voting procedure is efficient. Majority rule based on one-person-one-vote notoriously results in tyranny of the majority–a large number of people who care only a little about an outcome prevail over a minority that cares passionately, resulting in a reduction of aggregate welfare.

The applications to law and public policy are too numerous to count. In many areas of the law, we rely on highly imperfect voting systems (corporate governance, bankruptcy) that are inferior to quadratic voting. In other areas of the law, we require judges or bureaucrats to make valuations while knowing they are not in any position to do so (environmental regulation, eminent domain). Quadratic voting can be used to supply better valuations that aggregate private information of dispersed multitudes. But the most important setting is democracy itself. An incredibly complicated system of institutional self-checking (separation of powers, federalism) and judicially enforced constitutional rights try to correct for the defects of one-person-one-vote, but do so very badly. Can quadratic voting do better? Glen and I argue that it can.
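
A tiny numeric sketch (with hypothetical intensities) shows the mechanism Posner describes: facing cost v² for v votes, a rational voter buys votes until the marginal cost 2v equals their per-vote value, so votes bought are proportional to intensity of preference, and a passionate minority can outweigh a mild majority when that is the welfare-maximizing outcome:

```python
# Signed intensities: Ann cares strongly for the proposal; Bo and Cy mildly oppose.
intensities = {"Ann": 9.0, "Bo": -2.0, "Cy": -2.5}

def votes_bought(intensity):
    """Optimal purchase: buy v votes where 2v = |intensity|,
    signed by the direction of support."""
    v = abs(intensity) / 2
    return v if intensity >= 0 else -v

net_votes = sum(votes_bought(i) for i in intensities.values())
total_cost = sum(votes_bought(i) ** 2 for i in intensities.values())
refund = total_cost / len(intensities)   # proceeds returned per capita

print(net_votes)   # 2.25 > 0: passes under QV despite losing 2-1 one-person-one-vote
print(refund)
```

Note that total intensity here is 9.0 − 2.0 − 2.5 = 4.5 > 0, so passing the proposal is in fact the aggregate-welfare-maximizing choice that one-person-one-vote would have missed.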

And here are Tyler Cowen’s thoughts on it:

Interestingly, it’s been discovered that bees have been using this mechanism for millions of years to choose their next hive location! The energy bees spend on their dances grows quadratically in proportion to the attractiveness of the site they saw.

Humans are doing democracy wrong. Bees are doing it right: there is a system that accounts for intensity of passion as well as idle opinion, and hives have used it successfully for millions of years.

AIBrain Talk: AI and Human Safety

On August 17, 2016 Steve Omohundro spoke to the “Million AI Startups” group about “AI and Human Safety”:

Top 10 AI Applications

Wednesday, Aug 17, 2016, 6:00 PM

AIBrain Inc
5 Palo Alto Square, 1st Floor, Palo Alto, CA

90 Members Went

Is AI flourishing now for everyone? Can we make money out of AI? If so, how? In this vein, we are happy that the four presenters will lead the discussion in an effort to search for the top 10 killer AI applications. AI and Human Safety, Steve Omohundro, Ph.D., President, Self Aware Systems. AI and robotics will create $50 trillion of value over the ne…


AI and robotics will create $50 trillion of value over the next 10 years according to McKinsey. This is driving their rapid development, but six recent events show the need to be careful as they are integrated into human society. In the past few weeks we’ve seen three Tesla autopilot crashes, the Dallas police using a robot to kill a suspect, a Stanford Shopping Center security robot running over a small child, and the first “Decentralized Autonomous Organization” losing $56 million due to a bug in a smart contract. As we move forward with these technologies, we will need to incorporate human values and new principles of security so that their human benefits can be fully realized.

Here is a pdf file of the slides.

TEDX Talk: What’s Happening With Artificial Intelligence?

The TED conference, started in 1984, has become the standard bearer for hosting insightful talks on a variety of important subjects. They have made videos of over 1,900 of these talks freely available online and they have been watched more than a billion times! In 2009 they extended the concept to “TEDx Talks” in the same format but hosted by independent organizations all over the world.

On January 6, 2016 Mountain View High School hosted a TEDx event on the theme of “Next Generation: What Will It Look Like?”. They invited both students from the school and external speakers to present. I spoke on “What’s Happening With Artificial Intelligence?”. A video of the talk is available here:


and the slides are available here:


I talked about the multi-billion dollar investments in AI and robotics being made by all the top technology companies and the 50 trillion dollars of value they are expected to create over the next 10 years. The human brain has 86 billion neurons wired up according to the “connectome”. In 1957 Frank Rosenblatt created a teachable artificial neuron called a “Perceptron”. Three-layer networks of artificial neurons were common in 1986, and much more complex “Deep Learning Neural Networks” were being studied by 2007. These networks started winning a variety of AI competitions, besting other approaches and often beating human performance. These systems are starting to have a big effect on robot manufacturing, self-driving cars, drones, and other emerging technologies. Deep learning systems which create images, music, and sentences are rapidly becoming more common. There are safety issues, but several institutes are now working to address them. There are many sources of excellent free resources for learning and the future looks very bright!

Eileen Clegg created wonderful real-time visual representations of the talks as they were being given. Here is her drawing of my talk:


Edge Essay: Deep Learning, Semantics, and Society

Each year Edge, the online “Reality Club”, asks a number of thinkers a question and they publish the short essay answers. This year the question was “What do you consider the most interesting recent scientific news? What makes it important?” The responses are here:


My own essay on “Deep Learning, Semantics, And Society” is here:


VLAB Talk: AI, Deep Learning, and the Future of Business

On December 8, 2015 Steve Omohundro will be the special guest speaker at the VLAB Annual Holiday Party speaking about “AI, Deep Learning, and the Future of Business”. Followed by the “Chocolate Heads Movement Band”! See you there!


Here are the slides:

VLAB – AI, Deep Learning, and the Future of Business


AI Nexus Talk: Semantics, Deep Learning, and the Transformation of Business

On Saturday, November 28, 2015 at 2:00 PM (Santiago, Chile time) Steve Omohundro will speak (remotely) at the Exosphere event “AI Nexus” on:

2:00 PM Remote Speaker: Steve Omohundro – Semantics, Deep Learning and the Transformation of Business

A pdf of the slides is here:

Chile – Semantics, Deep Learning, and the Transformation of Business

The SlideShare version is here:


Steve Omohundro, recognised Artificial Intelligence scholar, explains why semantics matter when talking about AI, what the deep learning trend is, and how business is going to be transformed by it.

McKinsey predicts that AI and robotics will create $50 trillion of value over the next 10 years. Many predict that the recent technology of “deep learning” will be a big part of the transformation. Over 250 deep learning startup companies have attracted more than $1 billion of venture investment in the past year. Deep learning systems have recently broken records in speech recognition, image recognition, image captioning, translation, drug discovery and other tasks. Why is this happening now and how is it likely to play out? We review the development of AI and the pendulum swings between the “neats” and the “scruffies”. We describe traditional approaches to semantics through logics and grammars and the new deep learning vector semantics. We relate it to Roger Shepard’s cognitive geometry and the structure of biological networks. We also describe limitations of deep learning for safety and regulation. We show how it fits into the rational agent framework and discuss what the next steps may be.
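The “vector semantics” mentioned in the abstract can be illustrated in a few lines: words become points in a vector space, and nearness of direction stands in for nearness of meaning. The tiny 3-dimensional vectors below are hand-made for illustration only, not learned embeddings:

```python
# Toy illustration of vector semantics: meaning as direction in a
# vector space, compared with cosine similarity. The vectors are
# hand-crafted (dimensions roughly: royalty, maleness, femaleness).
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

# The classic analogy: king - man + woman points toward queen.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]
best = max(vectors, key=lambda word: cosine(vectors[word], target))
```

In real embedding models the same arithmetic works in hundreds of dimensions over vectors learned from text, which is what gives deep learning systems their surprisingly useful, if approximate, grasp of word meaning.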