We discussed the huge near-term impact on a number of industries. McKinsey estimates $50 trillion of value created over the next 10 years. Gartner predicts one-third of current jobs will be automated in the same period. The Internet of Things, automated cars, automated 18-wheel semis. Healthcare, medicine, 3D-printed houses, the legal profession. E-discovery, patent searches, automated contract creation. Liability questions when robots or automated vehicles harm someone. Big data, bots that commit insider trading, fix prices, or purchase illegal items on the darknet. With careful thought and introspection about our values, we can expand today’s legal and business environment to include automated systems and create a much more efficient and humane future.
We discussed the huge economic impact that AI and robotics are likely to have over the next 10 years, especially in manufacturing robots, medicine, business processes, self-driving cars, and military applications. Narrow AI vs. general AI. The 420 Chinese robot companies and manufacturers like Foxconn who are in the process of automating manufacturing and assembly. What should the balance be between robots and human workers? “Any job a robot can do is probably a job you don’t want to do!” Will the huge productivity gains from automation be used to uplift everyone? Robot brain surgery and hair-replacement surgery. Data mining for improvements in health diagnosis and treatment. Self-driving cars, Uber, moral choices, uses for shipping. Drones for agriculture, detecting forest fires, lifesaving, and medicine delivery. Military drones, drone swarms, mosquito drones, drone boats, drones with guns. Robots to help disabled people, and modifying the human body. Bitcoin and the underlying blockchain as a mechanism for decentralized contracts between parties that don’t know or trust one another. The nature of money: there are a quadrillion tons of gold at the center of the earth, so what has value? Bitcoin mining, bitcoin millionaires, bubbles, Ethereum, fixing internet security, and preparing for the AI Society.
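The blockchain idea mentioned above — parties who don’t know or trust one another agreeing on a shared record — can be illustrated with a toy hash-linked chain. This is only a minimal sketch of the core trick (each block commits to the hash of the previous one), not Bitcoin’s actual protocol, and all the function names here are made up for illustration:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    data = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to the previous block's hash,
    # so tampering with any earlier block breaks every later link.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    # Recompute every link; a single altered record fails verification.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, ["Alice pays Bob 1 coin"])
add_block(chain, ["Bob pays Carol 0.5 coin"])
assert verify(chain)

chain[0]["transactions"] = ["Alice pays Mallory 100 coins"]  # tamper
assert not verify(chain)
```

Real systems add the hard parts this sketch omits — digital signatures on transactions and a consensus mechanism (such as proof-of-work mining) so that no single party controls which chain everyone accepts.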
Bill Fenwick, founding partner of the great Silicon Valley law firm Fenwick & West, has had a lifelong practice of doing at least 10 kind things for people every day without expectation of reciprocation. He attributes many of the positive things in his life to this practice.
Recent research shows that even small kindnesses have a ripple effect through society. The excellent film “Kindness is Contagious” explores this idea:
The results suggest that each additional contribution a subject makes to the public good in the first period is tripled over the course of the experiment by other subjects who are directly or indirectly influenced to contribute more as a consequence.
The contagious effect in the study was symmetric; uncooperative behavior also spread, but there was nothing to suggest that it spread any more or any less robustly than cooperative behavior, Fowler said.
Adam Grant’s excellent book “Give and Take” has related insights:
Many studies have shown that mindfulness meditation that includes LKM (loving-kindness meditation) can rewire your brain. Practicing LKM is easy. All you have to do is take a few minutes everyday to sit quietly and systematically send loving and compassionate thoughts to: 1) Family and friends. 2) Someone with whom you have tension or a conflict. 3) Strangers around the world who are suffering. 4) Self-compassion, forgiveness and self-love to yourself.
Doing this simple 4-step LKM practice literally rewires your brain by engaging neural connections linked to empathy. You can almost feel the tumblers in your brain shift and open up to empathy after spending just a few minutes on this systematic LKM practice.
Check out this inspiring little film demonstrating the “Kindness Boomerang”:
To go along with the film “Terminator Genisys”, Taylor & Francis (the publisher of a popular AI risks paper of mine) created a comic strip about possible risks associated with a sophisticated chess robot:
We talked about several ethical issues that are starting to be relevant today. McKinsey estimates that $50 trillion of value will be created by AI and robotics in the next 10 years. Self-driving cars are being developed by many companies and look likely to become economically important within the next 10 years. They also raise a number of ethical issues: Who’s liable when one gets into an accident? How should they prioritize whom to protect?
3D-printed houses are being created by several companies. The legal profession is starting to be impacted by automated discovery, automated patent search, automated contract construction, and many other disruptive technologies. Gartner predicts that one-third of all jobs will be automated in the next 10 years.
Big data and machine learning are transforming consumer businesses. Recently there was a price-fixing lawsuit against several companies selling posters on the internet. It turned out that they were all running bots that algorithmically set their prices based in part on the prices currently being charged by the other companies. Who’s liable when a bot colludes to fix prices? What about bots committing insider trading? “The AI did it!”
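To see how pricing bots can produce collusion-like behavior with no human ever agreeing on a price, consider a toy simulation. This is a hypothetical sketch, not the actual algorithms from the poster case: each seller’s bot simply follows a “never undercut — match the highest price in the market” rule, and the market price locks in at the highest starting price:

```python
def match_highest(competitor_prices, floor=5.0):
    # "Never undercut" strategy: post the highest price seen in the
    # market, but never go below the seller's cost floor.
    return max(floor, max(competitor_prices))

# Two sellers start at different prices and repeatedly react
# to each other's posted prices.
prices = {"A": 30.0, "B": 12.0}
for _ in range(10):
    prices["A"] = match_highest([prices["B"], prices["A"]])
    prices["B"] = match_highest([prices["A"], prices["B"]])

# Both bots converge to the highest initial price (30.0) and stay
# there -- effectively fixed prices with no explicit agreement.
print(prices)
```

The legal question in the text follows directly: if this emergent price lock-in harms consumers, is the liable party the seller who chose the strategy, the programmer, or nobody?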
A Swiss artist created a bot to randomly buy things on the darknet using bitcoin. Many of its purchases were illegal and were displayed as part of the art exhibit. The Swiss police waited for the exhibit to finish and then “arrested” the bot, carting the computer away.
We have three waves of change coming: AI that transforms the economy, AI that transforms the military, and AI that transforms society in general. AI drones, AI subs, AI soldiers. Concentration of power via robot armies. What are the ethics of all this? I advocate developing these technologies slowly and carefully. But will economic and military arms races force their rapid development?
Over the longer term what will be the role of humans and AI systems in future society? Will there be a rapid singularity? I argue that it would be better to carefully consider our values and to design systems that reflect the kind of future that we want. We are building these systems and there is nothing inevitable about what we create. Our only limit is our imagination.
Doing this will probably be the biggest challenge that humanity has ever faced. Nuclear technology is another dual-use technology, and we can look at that history to see how we managed to avoid unintended detonations (though we did drop two live hydrogen bombs on North Carolina; fortunately, they didn’t go off). The realization that AI could be a dangerous technology is just starting to dawn on large numbers of people. One challenge is that an AI system could, in principle, be developed on a standard computer in somebody’s basement. Once the technology becomes standard, the analog of “script kiddies” might create versions with harmful goals.
The “Safe-AI Scaffolding Strategy” is an approach to careful development which provides a high confidence of safety at every stage. A future variation of today’s cryptocurrencies like Bitcoin may provide a secure infrastructure for the future AI internet. Safety First!
In the June 2015 issue of the AI Matters newsletter, George Gregory, Tuna Oezer, and I describe the formation of the ACM SIGAI Bay Chapter. We’ve had some fantastic speakers on natural language understanding, deep learning systems, and the DeepDive machine learning project. A great community is growing which currently has over 600 members. Join us at the next meeting!