TechEmergence Interview: How Can We Safely Build Something Smarter Than Us?

On April 26, 2015 Daniel Faggella interviewed Steve Omohundro about ethical and safety issues related to AI systems and approaches to developing them safely:

An mp3 of the show is available here.

We talked about several ethical issues that are starting to be relevant today. McKinsey estimates that $50 trillion of value will be created by AI and robotics in the next 10 years. Self-driving cars are being developed by many companies and may become economically important within the next decade. They also raise a number of ethical issues: Who’s liable when one gets into an accident? How should they prioritize whom to protect?

3D printed houses are being created by several companies. The legal profession is starting to be impacted by automated discovery, automated patent search, automated contract construction, and many other disruptive technologies. Gartner predicts 1/3 of all jobs will be automated in the next 10 years.

Big data and machine learning are transforming consumer businesses. Recently there was a price-fixing lawsuit against several companies selling posters on the internet. It turned out that they were all running bots that algorithmically set their prices based in part on the prices currently being charged by the other companies. Who’s liable when a bot colludes to fix prices? What about bots committing insider trading? “The AI did it!”
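The kind of pricing bot at issue can be sketched in a few lines. This is a hypothetical illustration, not the actual code from the lawsuit: the function name, parameters, and undercut rule are all assumptions made for the example.

```python
# Hypothetical sketch of a competitor-watching pricing bot: it
# undercuts the cheapest rival by a small margin, but never drops
# below its own cost floor.

def set_price(competitor_prices, cost_floor, undercut=0.01):
    """Return a price slightly below the cheapest competitor,
    clamped so it never falls below cost_floor."""
    if not competitor_prices:
        return cost_floor
    target = min(competitor_prices) * (1 - undercut)
    return max(round(target, 2), cost_floor)

# If every seller runs logic like this, their prices move in
# lockstep -- coordination without any human agreeing to collude.
print(set_price([19.99, 21.50, 24.00], cost_floor=5.00))  # 19.79
```

When several sellers all run a rule like this against each other, the feedback loop can stabilize prices (or, as reportedly happened in one poster-selling case, drive them to absurd levels), which is exactly why the liability question is hard: no person ever typed a colluding price.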

A Swiss artist created a bot to randomly buy things on the darknet using Bitcoin. Many of its purchases were illegal and were displayed as part of the art exhibit. The Swiss police waited for the exhibit to finish and then “arrested” the bot, carting the computer away.

We have three waves of change coming: AI that transforms the economy, AI that transforms the military, and AI that transforms society in general. AI drones, AI subs, AI soldiers. Concentration of power via robot armies. What are the ethics of all this? I advocate developing these technologies slowly and carefully. But will economic and military arms races force their rapid development?

Over the longer term what will be the role of humans and AI systems in future society? Will there be a rapid singularity? I argue that it would be better to carefully consider our values and to design systems that reflect the kind of future that we want. We are building these systems and there is nothing inevitable about what we create. Our only limit is our imagination.

Doing this will probably be the biggest challenge humanity has ever faced. Nuclear technology is another dual-use technology, and we can look at its history to see how we managed to avoid unintended detonations (though we did drop two live hydrogen bombs on North Carolina; fortunately, they didn’t go off). The realization that AI could be a dangerous technology is just starting to dawn on large numbers of people. One challenge is that an AI system could in principle be developed on a standard computer in somebody’s basement. Once the technology becomes standard, the analog of “script kiddies” might create versions with harmful goals.

The “Safe-AI Scaffolding Strategy” is an approach to careful development which provides a high confidence of safety at every stage. A future variation of today’s cryptocurrencies like Bitcoin may provide a secure infrastructure for the future AI internet. Safety First!
