Month: April 2018

Stanford LAST Festival: Faking Life: AI, Deception, Blockchain

On March 24, 2018, Steve Omohundro spoke at the 5th LAST (Life/Art/Science/Tech) Festival, presented by Stanford University and held at the Stanford Linear Accelerator Center:

http://www.lastfestival.org/

He spoke on “Faking Life: AI, Deception, Blockchain”:

Here’s the video of his talk:

Here’s a photo from the event:

Stanford Leonardo Art/Science Evening Rendezvous: “AI, Deception, and Blockchains”

On December 14, 2017, Steve Omohundro spoke at the Stanford Leonardo Art/Science Evening Rendezvous on “AI, Deception, and Blockchains”:

https://www.scaruffi.com/leonardo/dec2017.html

Here are the slides:

Here’s the abstract:

Recent AI systems can create fake images, sound files, and videos that are hard to distinguish from real ones. For example, Lyrebird’s software can mimic anyone saying anything from a one-minute sample of their speech, Adobe’s “Photoshop of Voice” VoCo software has similar capabilities, and the “Face2Face” system generates realistic real-time video of anyone saying anything. Continuing advances in deep learning “GAN” systems will lead to ever more accurate deceptions in a variety of domains. But AI is also getting better at detecting fakes. The recent rash of “fake news” has led to a demand for deception detection. We are in an arms race between the deceivers and the fraud detectors. Who will win? The science of cryptographic pseudorandomness suggests that the deceivers will have the upper hand. It is computationally much cheaper to generate pseudorandom bits than it is to detect that they aren’t random. The issue has enormous social implications. A synthesized video of a world leader could start a war. Altered media could implicate people in crimes they didn’t commit. Governments have tampered with photographs since the beginning of photography. Stalin, for example, was famous for removing people from historical photos when they fell out of favor. The art world has had to deal with forgeries for centuries. Good forgers can create works that fool even the best art critics. The solution there is “provenance”. We not only need the work, we need its history. But provenances can also be faked if we aren’t careful! Can we create an unmodifiable digital provenance for media? We describe several approaches to using blockchains, the technology underlying cryptocurrencies, to do this. We discuss how the time and location of events can be cryptographically certified and how future media hardware might provide guarantees of authenticity.
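The blockchain provenance idea in the abstract can be pictured with a minimal sketch (an illustration, not the system described in the talk): fingerprint a media file with a cryptographic hash and append that fingerprint, along with a timestamp, to a hash-chained log standing in for an on-chain anchor. The file name and log structure below are assumptions made for the example.

```python
import hashlib
import json
import time

def file_fingerprint(path):
    """SHA-256 digest of a media file; any later edit changes this value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_record(log, media_hash, note=""):
    """Append a provenance record whose hash covers the previous record,
    forming a simple hash chain (a stand-in for a real blockchain anchor)."""
    prev = log[-1]["record_hash"] if log else "0" * 64
    record = {
        "media_hash": media_hash,
        "timestamp": time.time(),
        "note": note,
        "prev_record_hash": prev,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

# Hypothetical usage: register a photo at capture time, verify a copy later.
log = []
original_hash = file_fingerprint("photo.jpg")  # "photo.jpg" is a placeholder
append_record(log, original_hash, note="captured at event")
assert file_fingerprint("photo.jpg") == log[-1]["media_hash"]
```

Verification then amounts to recomputing the file hash and walking the chain of record hashes; tampering with either the media or an earlier log entry breaks the check.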

and a photo:

Tricycle magazine: “AI, Karma & Our Robot Future”

The Spring 2018 issue of the Buddhist magazine Tricycle published the article “AI, Karma & Our Robot Future” (subtitled “Two artificial intelligence scientists discuss what’s to come,” a conversation with Steve Omohundro and Nikki Mirghafori), based on a presentation given at CIIS:

AI, Karma & Our Robot Future

Two artificial intelligence scientists discuss what’s to come.

A conversation with Steve Omohundro and Nikki Mirghafori


CIIS: “Artificial Intelligence and Karma,” A Conversation With Nikki Mirghafori and Steve Omohundro

On November 2, 2017, the San Francisco-based California Institute of Integral Studies held the event “Artificial Intelligence and Karma, A Conversation With Nikki Mirghafori and Steve Omohundro”:

https://www.ciis.edu/public-programs/event-archive/mirghafori-omohundro-lec-fw17

Here is a recording of the event:

https://www.ciis.edu/public-programs/public-programs-podcast

Nikki Mirghafori and Steve Omohundro: AI and Karma
Recorded November 2, 2017

In this episode, artificial intelligence scientist and Buddhist teacher Nikki Mirghafori and computer scientist Steve Omohundro discuss how the concept of karma can guide us as we push toward creating non-human intelligence.

Foresight Institute Great Debate on “Drop Everything And Work on Artificial Intelligence?”

On November 19, 2016, Steve Omohundro participated in the Foresight Institute’s “Great Debate” on whether we should “Drop Everything and Work on Artificial Intelligence?” Here is a video of the event:

This was the second of four debates in the Foresight Institute’s “The Great Debates” series in San Francisco.

Speakers on this panel:

Peter Voss, Head of AGI Innovations Inc

Steve Omohundro, President, Possibility Research

Monica Anderson, Director of Research at Syntience Inc

Michael Andregg, Co-Founder and CSO at Fathom Computing

Moderator: David Yanofsky, Reporter at Quartz

Introduction: Allison Duettmann, Foresight Institute

Discussion topics included:

Morality and Ethics of Artificial Intelligence

Narrow AI vs. Artificial General Intelligence

AI Safety

Deep Learning and Neural Networks

Predictions about the Singularity

Existential Risk

Long-term Futurism

Forecasting

Ashoka Foundation panel on “Empathy and Technology”

On July 26, 2017, Steve Omohundro was on a panel hosted by the Ashoka Foundation and the Hive on “Empathy and Technology”.

What is the role of empathy in technology — and what should it be?


Role of Empathy in Technology

https://www.meetup.com/SF-Bay-Areas-Big-Data-Think-Tank/events/241342678/

Here is a video of the event on Facebook:

 

KZSU Radio Henry George Program: “Steve Omohundro on AI Risk, Human Values, and Decentralized Resource Sharing”

On July 15, 2017, Steve Omohundro was interviewed on Mark Mollineaux’s radio show “The Henry George Program” about “AI Risk, Human Values, and Decentralized Resource Sharing”. Here’s a description of the show:

Steve Omohundro on AI Risk, Human Values, and Decentralized Resource Sharing

Released Jul 18, 2017

Steve Omohundro shares plans for creating provably correct protections against AI superintelligence and thoughts on how human values can be imbued into AI. Topics include resource allocation, decentralized cooperation, and how blockchain Proofs of Work/Stake might be made compatible with basic needs.

Here’s a link to the show on CastBox:

https://castbox.fm/episode/Steve-Omohundro-on-AI-Risk%2C-Human-Values%2C-and-Decentralized-Resource-Sharing-id935232-id44481874?country=us

and on iTunes:

https://itunes.apple.com/us/podcast/the-henry-george-program/id1241740873?mt=2#

Stanford CS22a Social and Economic Impact of Artificial Intelligence: “Social Impact and Ethics of AI”

On May 25, 2017, Steve Omohundro spoke in Jerry Kaplan’s Stanford CS22a class “Social and Economic Impact of Artificial Intelligence” on “Social Impact and Ethics of AI”.

Here’s Steve’s bio:

Steve Omohundro founded Possibility Research and Self-Aware Systems to develop beneficial intelligent technologies. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from Berkeley. He was a computer science professor at the University of Illinois and cofounded the Center for Complex Systems Research. He published the book “Geometric Perturbation Theory in Physics”, designed the programming languages StarLisp and Sather, wrote the 3D graphics system for Mathematica, invented many machine learning algorithms (including manifold learning, model merging, bumptrees, and family discovery), and built systems that learn to read lips, control robots, and induce grammars. He’s done internationally recognized work on AI safety and strategies for its beneficial development. He is on the advisory boards of several AI and Blockchain companies.

And here are the slides: