Author: omohundro

Provably Safe AGI – MIT Mechanistic Interpretability Conference – May 7, 2023

In this short video: (https://www.youtube.com/watch?v=sp0L-zuHWgI&t=2s&ab_channel=SteveOmohundro)

Steve Omohundro sketches how AI technologies based on mathematical proof can be used to ensure human safety as AGI is developed and deployed. Many people are worried about the imminent development of “Artificial General Intelligence” (AGI). Metaculus estimates “Weak AGI” will be developed in 2026 and “AGI with robots” in 2031. It estimates that “Artificial Super Intelligence” (ASI) will arrive 6 months after AGI. Half of AI researchers believe there is a >10% chance of human extinction due to uncontrolled AGI. Today’s AI alignment methods are very important but are too “soft” to provide “hard” guarantees of safety. We need provable “guardrails” that hold up under adversarial analysis. Mathematical proof is humanity’s most powerful safety technology, and recent transformer-based theorem provers are advancing rapidly. For example, Meta’s “HyperTree Proof Search” is able to prove 82.6% of held-out Metamath theorems. This talk presents a sketch of how these technologies can create a network of proven contracts to ensure human flourishing in an AGI world of abundance. It also describes some of the challenges in implementing this approach.

The slides are available here:

The Hive Think Tank AI Reading Group: Probabilistic Programming

On July 29, 2021 Steve Omohundro was part of the Hive Think Tank’s “AI Reading Group” along with George Gregory, Daniel Goncharov, and Nikesh Kotecha discussing probabilistic programming and its applications to biology:

https://www.meetup.com/SF-Bay-Areas-Big-Data-Think-Tank/events/279000005/

WEBINAR URL: https://us02web.zoom.us/webinar/register/WN_0-m2ataGTQGiqKWSUeMqHQ

📢 UPDATE – MATERIALS SELECTED: Please review the video and slides below for next week’s discussion❗️

📌 A talk by Fritz Obermeyer, Deep Probabilistic Programming with Pyro (his talk starts at the 56:30 mark):
https://www.youtube.com/watch?v=H6BPgSiobYI

📌 Slides for his talk:
https://docs.google.com/presentation/d/1skQFd5quqVt5-7B_SdmXtY7UnITn4BqsGp_LwdPwxoU/preview?slide=id.p

ABOUT THE SERIES – “AI READING CLUB”:
A new quarterly book club focusing on the most relevant and recent AI publications & literature – this first session will focus on Probabilistic Programming. This is a special quarterly 1-hour event with select guests who pre-select and discuss interesting papers they have recently read. Attendees are encouraged to share their views or ask questions on the chat and Q&A interface.

ABOUT THE EVENT – “PROBABILISTIC PROGRAMMING”:
Deep learning neural networks combined with Bayesian probabilistic models are enabling AI to have a huge impact on science and engineering. Join a group of AI researchers and scientists in a discussion of these trends as they impact biology. AI has recently discovered the antibiotic “halicin”, is making great strides in folding proteins, and is giving biologists new clarity in analyzing cell data. We discuss the role of probability and machine learning and where we see these trends heading in the future.

SPEAKERS:
*George Gregory – Co-Founder & CEO, System AI, Inc. [MODERATOR]
*Daniel Goncharov – Head of 42 AI & Robotics + Google Developer Expert in ML
*Steve Omohundro – Research Scientist, Facebook + Author, “Geometric Perturbation Theory in Physics”
*Nikesh Kotecha – Adjunct Prof., Stanford University + former VP of Informatics, Parker Institute for Cancer Immunotherapy

The Hive Think Tank Lecture: The Future of AI is Generative not Discriminative

On May 26, 2021, Steve Omohundro gave a lecture on “The Future of AI is Generative not Discriminative” to The Hive’s excellent “Think Tank” group:

https://www.meetup.com/pl-PL/SF-Bay-Areas-Big-Data-Think-Tank/events/278003653/

Here’s the video of the talk:

and the slides:

and the abstract:

The deep learning AI revolution has been sweeping the world for a decade now. Deep neural nets are routinely used for tasks like translation, fraud detection, and image classification. PwC estimates that they will create $15.7 trillion/year of value by 2030. But most current networks are “discriminative” in that they directly map inputs to predictions. This type of model requires lots of training examples, doesn’t generalize well outside of its training set, creates inscrutable representations, is subject to adversarial examples, and makes knowledge transfer difficult. People, in contrast, can learn from just a few examples, generalize far beyond their experience, and can easily transfer and reuse knowledge. In recent years, new kinds of “generative” AI models have begun to exhibit these desirable human characteristics. They represent the causal generative processes by which the data is created and can be compositional, compact, and directly interpretable. Generative AI systems that assist people can model their needs and desires and interact with empathy. Their adaptability to changing circumstances will likely be required by rapidly changing AI-driven business and social systems. Generative AI will be the engine of future AI innovation.

KEYNOTE SPEAKER: Steve Omohundro, PhD – Research Scientist @ Facebook + Author, “Geometric Perturbation Theory in Physics” Steve Omohundro has done fundamental research in AI for 35 years and is currently a Research Scientist at Facebook working on AI-based simulation. He has a PhD in physics, was an AI professor at the University of Illinois, and has been a scientist at several research labs and startup companies. He co-founded one of the first complex systems institutes, designed the first data-parallel programming language, invented manifold learning, co-developed the first image recommender system, co-developed the first attention-driven neural nets, co-built the first lip reading system, and developed many other machine learning algorithms. His work on AI’s social impact was featured in the book “Our Final Invention” and he appears in the Universal Pictures documentary “We Need to Talk About AI”. He believes that AI is about to unlock enormous business and social value.

Interesting Conversations with Mislav Juric Interview with Steve Omohundro

On September 1, 2020, Mislav Juric interviewed Steve Omohundro for his series “Interesting Conversations with Mislav Juric”:

In this podcast episode, I have a conversation with Steve Omohundro. Steve is one of the first people to point out the potential dangers of advanced AI systems and in this podcast we discuss topics related to AI, mainly personal AI and AGI (Artificial General Intelligence). Hope you enjoy!

Here is the video version:

and the audio version:

Things mentioned in this podcast episode:

Timestamps:

  • 00:00:00 – 00:01:40 Introduction
  • 00:01:40 – 00:06:26 Steve’s experience with startups
  • 00:06:26 – 00:10:49 Personal AI
  • 00:10:49 – 00:12:28 Steve’s research company
  • 00:12:28 – 00:20:37 Combining symbolism and connectionism in AI
  • 00:20:37 – 00:25:22 Can GPT-3’s successors eventually build an accurate world model?
  • 00:25:22 – 00:30:27 Contributing to AI or AI safety research as an individual?
  • 00:30:27 – 00:34:28 Entrepreneurship opportunities for individuals in AI
  • 00:34:28 – 00:45:28 Personal AI capabilities
  • 00:45:28 – 00:49:14 The outcome of AGI
  • 00:49:14 – 00:56:26 The reasoning behind The Basic AI Drives
  • 00:56:26 – 01:00:01 Can we mathematically formalize emotions?
  • 01:00:01 – 01:03:42 Can we slow down AI progress?
  • 01:03:42 – 01:06:35 Next steps for AGI and personal AI
  • 01:06:35 – 01:10:39 Ideal educational background for AI researchers?
  • 01:10:39 – 01:13:05 How to approach learning math?
  • 01:13:05 – 01:14:21 Parting thoughts

The Hive Medium Article: “How Personal AI Will Transform Business and Society”

On August 28, 2020, The Hive Medium blog published an article by Steve Omohundro entitled “How ‘Personal AI’ Will Transform Business and Society”: https://medium.com/hivedata/how-personal-ai-will-transform-business-and-society-cdb72065628c

PwC predicts that Artificial Intelligence (AI) will create $90 trillion of value between now and 2030.[1] But this huge economic value only hints at AI’s profound impact on information networks, commerce, and governance. Many are worried that powerful AI will disempower individuals. The Wall Street Journal recently published best-selling author Yuval Harari’s commencement speech to the class of 2020 entitled “Rebellion of the Hackable Animals.”[2] He argued that AI will allow corporations and governments to manipulate individuals and challenged the students to find ways to counteract this manipulation.

This article describes “Personal AI” and argues that it will be the antidote to AI-powered manipulation. It will, instead, dramatically empower individuals to reshape their social and economic networks. We define “Personal AIs” as artificial intelligences trusted by individual “owners” to represent them in interactions with other individuals, organizations, and networks. There are great challenges in building personal AIs, but their impact will be profoundly positive for humanity. To understand why, we must first understand the current role of AI in society.

The Rise of Platform AI

Flashy AI applications like self-driving cars, deepfake videos, and the Sophia robot have dominated news headlines. But the AI technology with the greatest economic impact has actually been “recommender systems.”[3] These simple AI systems model users to make recommendations such as movies on Netflix, products on Amazon, and friends on Twitter.
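
To make the idea concrete, here is a minimal sketch of item-based collaborative filtering, one of the simplest recommender techniques. The ratings matrix and scoring rule are purely illustrative and are not the production systems used by Netflix, Amazon, or Twitter.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items, 0 means "unrated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user, k=1):
    """Score each unrated item by its similarity to items the user already rated."""
    scores = {}
    for item in range(ratings.shape[1]):
        if ratings[user, item] > 0:
            continue  # skip items the user has already rated
        scores[item] = sum(
            cosine_sim(ratings[:, item], ratings[:, j]) * ratings[user, j]
            for j in range(ratings.shape[1]) if ratings[user, j] > 0
        )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(user=0))  # -> [2]: item 2 is the only unrated item for user 0
```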

Recommender systems were only invented in the 1990s but have had an enormous impact. Netflix reports that their movie recommender has been responsible for creating more than $1 billion of business value. Amazon’s recommenders generate 35% of the purchases on their site. ByteDance, the parent company of TikTok, was recently privately valued at $140 billion primarily due to their innovative recommender AI.

One reason that recommender systems have had such a big impact is that they enable the “Platform Business Model.” Platform companies match up producers and consumers and take a cut from each transaction. For example, Uber’s AI connects nearby drivers with people who need rides.

The platform business model creates sustainable outsized profits and is responsible for the rise of the most valuable companies over the past 15 years. In 2004, the top ten companies were General Electric, Exxon, Microsoft, Pfizer, Citigroup, Walmart, BP, AIG, Intel, and Bank of America. By 2019, they were Microsoft, Amazon, Apple, Alphabet, Facebook, Berkshire Hathaway, Alibaba, Tencent, Visa, and Johnson and Johnson.[4] Seven of these are based on an AI-driven platform business model.

According to Applico, 60% of the billion-dollar “unicorn” startups are platform companies and most IPOs and acquisitions also make use of this model. It is estimated to have created over $3 trillion in market capitalization.

Many aspects of platform companies are counter-intuitive from a traditional business perspective. A popular meme states that:

● Uber, the world’s largest taxi company, owns no vehicles.

● Airbnb, the largest accommodation provider, owns no real estate.

● Facebook, the most popular media provider, creates no content.

● Instagram, the most valuable photo company, sells no cameras.

● Netflix, the fastest-growing television network, lays no cables.

● Alibaba, the most valuable retailer, has no inventory.

While recommender systems are critical to platforms, several other forms of AI are also important. On the producer side, platform companies provide: AI-driven content creation tools, AI-driven auctions for placement, AI-driven A/B testing for optimization, AI analytics to track performance, AI-based producer reputations and AI-driven malicious content blocking. On the consumer side, platform companies use: AI-based gamification for engagement, AI-personalized marketing, AI-driven pricing, AI-based consumer reputation and AI-driven malicious consumer blocking. Each of these functions will improve as AI technologies improve.

The remarkable rise of platform companies can be understood through “Coase’s Theorem.” Ronald Coase was an economist in the 1930s who studied the nature of the firm. Economists understood that market mechanisms produced efficient results and Coase wondered why firms weren’t organized as markets internally. He showed that if information and contracting were inexpensive enough, then market mechanisms produce the most efficient outcomes. He concluded that traditional firms are organized hierarchically because business information was not freely available and contracting was too expensive.

AI dramatically lowers the costs of both information gathering and contracting. Traditional taxi companies owned their own cars, hired drivers as employees, and had managers who determined which car would transport which customer. Uber’s AI systems enable their cellphone app to turn the traditional taxi company “inside out” and to profit by intermediating between external drivers and riders.

This “inversion of the firm” is also happening in HR, marketing, innovation, finance, logistics, etc. An extreme example was Instagram which had only 13 employees when it was bought by Facebook for $1 billion. This remarkable purchase has been called the “most brilliant tech acquisition ever made.”

Many of the consequences of the platform revolution are quite positive for society. Airbnb unlocked resources (people’s spare bedrooms) which would otherwise have gone unused. Individual consumer needs can be better met by platforms (e.g., the long tail of demand met by Amazon’s many sellers). Platforms enable more producers (e.g., Uber’s many part-time drivers). We can understand Platform AI as creating both business value and social value.

Platform companies gain value through network effects on both the producer and the consumer side. These networks create strong “moats” around their businesses and allow them to sustain outsized profits. In typical platform niches, one company is dominant (e.g., Uber) with a much smaller company in second place (e.g., Lyft) and third place being insignificant. The strong position of the dominant company gives them great power in interactions with both producers and consumers. As AI improves, you might think that this platform power will only increase and that Harari’s fears are justified.

Platforms use their power over producers to gain the advantage. Uber has been criticized for squeezing drivers and taking a bigger share of profits. Amazon has repeatedly created their own branded versions of products which they observe are profitable for third-party sellers. Netflix notices what elements of movies and TV shows are most liked by customers and creates their own shows using that knowledge.

Platforms also use their power over consumers. Platform advertising has been criticized for being manipulative and for rewarding click-bait headlines. YouTube has been blamed for “radicalizing” viewers who watch a video out of curiosity and then receive recommendations for increasingly extreme related videos. Deceptive news stories generate outrage which causes clicks and recommender systems incentivize their creation in a vicious loop. There is increasing concern about privacy and the use of personal information by platform companies.

The Rise of “Personal AI”

If the simple AI underlying platform companies has had such a transformative societal effect, what will be the impact of more powerful AI? All indications are that AI is improving at a rapid pace and is likely to drive another round of the restructuring that Coase’s analysis predicts. This will create more market-like structures and will spread power throughout networks. While AI is the enabler, the underlying forces are economic.

Two technological trends, “Moore’s Law” and “Nielsen’s Law,” are driving the improvement in AI. Moore’s Law says that the number of transistors in a CPU grows by about 60% per year, a trend that has held since 1970. Nielsen’s Law says that internet bandwidth grows by 50% per year, a trend that has held since 1984. Together, they give AI learning systems increasing amounts of computation and data to improve independently of algorithmic innovation.
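
As a rough back-of-the-envelope check on what those stated growth rates imply, here is a tiny compound-growth calculation; the rates come from the article above, and the ten-year horizon is just an example.

```python
# Back-of-the-envelope compound growth implied by the rates quoted above.
def growth_factor(annual_rate, years):
    """Total multiplier after compounding `annual_rate` for `years` years."""
    return (1 + annual_rate) ** years

# Moore's Law as stated: 60%/year transistor growth, over one decade.
print(f"Compute after 10 years:   {growth_factor(0.60, 10):.0f}x")   # ~110x
# Nielsen's Law as stated: 50%/year bandwidth growth, over one decade.
print(f"Bandwidth after 10 years: {growth_factor(0.50, 10):.0f}x")   # ~58x
```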

But learning and reasoning algorithms are also rapidly improving. The last decade has seen dramatic improvements in machine vision, natural language processing, and game playing. As advanced AI becomes more commercially viable, it attracts more investment, students, researchers, and practitioners.

Rich Sutton’s influential essay “The Bitter Lesson”[5] argued that simple algorithmic techniques like search and statistical learning have always overcome clever human-designed algorithms as computation and data increase. OpenAI’s GPT-3 “transformer” language model is essentially a scaled-up version of their GPT-2 model, but exhibits a wide range of new behaviors. Many are speculating that scaling up this class of models by another factor of 10 or 100 may lead to dramatically improved AI systems.

What will these more powerful AIs be used for? “Digital Twins” are an AI application that has seen increasing interest over the past decade. These are digital AI replicas of living or non-living physical systems. The physical systems are continuously monitored by sensors which are used to update the corresponding AI twin models. The digital twin models are then used for estimation, diagnosis, policy design, control, and governance. Each of these is first tested on the twin and then deployed on the real system. Monte Carlo simulations estimate interactions between multiple twins for game-theoretic analysis, contract design, and analysis of larger system dynamics.
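
Here is a minimal, purely illustrative sketch of that update-then-simulate loop; the state variable, gain, noise level, and policy are made-up placeholders, not anything from a real digital-twin product.

```python
# Minimal digital-twin sketch: sensor readings update the twin's state estimate,
# then Monte Carlo rollouts on the twin estimate the outcome of a policy before
# it is deployed on the real system. All names and dynamics are illustrative.
import random

class DigitalTwin:
    def __init__(self, state=20.0):
        self.state = state  # e.g. estimated temperature of the physical system

    def update(self, sensor_reading, gain=0.5):
        """Blend the model's current estimate with the latest sensor reading."""
        self.state += gain * (sensor_reading - self.state)

    def simulate(self, policy, steps=10, trials=1000):
        """Monte Carlo estimate of the outcome of applying `policy` to the twin."""
        outcomes = []
        for _ in range(trials):
            s = self.state
            for _ in range(steps):
                s += policy(s) + random.gauss(0, 0.1)  # noisy toy dynamics
            outcomes.append(s)
        return sum(outcomes) / len(outcomes)

twin = DigitalTwin()
twin.update(sensor_reading=22.3)
cooling_policy = lambda s: -0.1 * (s - 21.0)  # drive the state toward 21.0
print(twin.simulate(cooling_policy))  # expected state if the policy were deployed
```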

“Personal AIs” are related to digital twins but model a human “owner” and act for that owner’s benefit. They are trusted AI agents which model their owners’ values, beliefs, and goals, are continually updated based on their owner’s actions, and act as the owner’s proxy in interacting with other agents. They filter ads, news, and other content according to their owners’ preferences. They control the dissemination of the owner’s personal information according to the owner’s preferences. They continually search for new business and purchase opportunities for their owners. They communicate their owners’ preferences to governmental and other organizations. When personal AIs become widespread, they will have a profound impact on the nature of human society.

What AI advances are needed to create personal AIs? Simple versions could be built today but powerful versions will require advances in natural language processing, modeling of human psychology, and smart contract design. Each of these areas is undergoing active research and powerful personal AIs should be possible within a few years.

The simplest personal AI contract is making a purchase. If an owner trusts their personal AI, they will allow it to search Amazon and other sellers for the best product at the best price for their needs. More complex contracts will allow an owner to agree to watch ads that meet their value criteria in return for free video content. More complex purchase contracts could include terms for insurance, shipping, and return policies, and put constraints on the sourcing of components and labor. As personal AIs become more powerful, contracts can become arbitrarily complex. A new era of highly personalized purchases and interactions will follow that better meets each person’s needs and desires.
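
A toy sketch of the simplest case, a purchase filtered by the owner's stated constraints. The offer fields, seller names, and preference keys are invented for illustration and do not correspond to any real marketplace API.

```python
# Minimal sketch of a personal AI choosing a purchase on its owner's behalf.
offers = [
    {"seller": "A", "price": 42.0, "ships_days": 2, "fair_labor": True},
    {"seller": "B", "price": 35.0, "ships_days": 9, "fair_labor": False},
    {"seller": "C", "price": 44.5, "ships_days": 1, "fair_labor": True},
]

owner_prefs = {"max_price": 45.0, "max_ship_days": 3, "require_fair_labor": True}

def acceptable(offer, prefs):
    """Hard constraints derived from the owner's stated values."""
    return (offer["price"] <= prefs["max_price"]
            and offer["ships_days"] <= prefs["max_ship_days"]
            and (offer["fair_labor"] or not prefs["require_fair_labor"]))

candidates = [o for o in offers if acceptable(o, owner_prefs)]
best = min(candidates, key=lambda o: o["price"])  # cheapest acceptable offer
print(best["seller"])  # -> "A"
```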

Personal AI will dramatically change the nature of marketing. If an owner knows they are emotionally vulnerable to depictions of alcohol, fast cars, or chocolate cake, they can instruct their personal AI to refuse advertising with that content. In today’s internet, recommender systems might discover an owner’s vulnerability and start specifically showing them the manipulative content they are sensitive to because it generates a stronger response. This is disempowering for the viewer and harmful for society.

With personal AI negotiation, owners can block manipulative advertisements and enable only calm, informative ads about products they are interested in. If enough individuals use personal AIs, advertisers will no longer have an incentive to create manipulative ads. Cigarette advertising was only banned after governmental intervention, but personal AIs provide a more effective direct mechanism to move advertising in a positive direction.

Personal AI will also dramatically change the nature of social media. Today’s popular social media sites have power because no one wants to spend time on sites that their friends aren’t on. Lock-in is maintained by the annoyance of maintaining accounts on multiple sites. Each site has its own user interface, profiles, password, and identity system. Tracking content on multiple sites is time-consuming and confusing for users. But powerful personal AIs will easily be able to interface with multiple social media sites. They will present their owners with unified interfaces for information from a wide variety of sites personalized to their owner’s tastes. The owner need not even be aware of which site particular messages or interactions are from. This new flexibility will put additional pressure on social media sites to truly meet their users’ needs rather than relying on the power of network effects for lock-in.

Personal AI will also dramatically change the nature of governance. Today, voting gives citizens a small bit of influence over governmental decisions. But the expense and complexity of voting mechanisms means that elections happen rarely and only support a limited expression of preferences. New voting procedures like “range voting”, “quadratic voting”, and “liquid democracy” would improve the current system. But personal AIs will allow detailed “semantic voting” in which citizens can express their ideas and preferences in real time. Governments will be able to create detailed models of their citizens’ actual needs moment by moment.
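
For example, the "quadratic voting" rule mentioned above charges a voter n² voice credits to cast n votes on an issue, so intensity of preference can be expressed, but only at a steeply rising price. A minimal sketch follows; the credit budget and ballot are illustrative.

```python
# Minimal sketch of quadratic voting: casting n votes on an issue costs n**2
# voice credits. Budget and issues are illustrative.
def quadratic_cost(votes):
    return votes ** 2

budget = 100  # voice credits per citizen
ballot = {"transit_expansion": 6, "park_renovation": 5, "new_stadium": -3}

total_cost = sum(quadratic_cost(v) for v in ballot.values())
print(total_cost)            # 36 + 25 + 9 = 70
print(total_cost <= budget)  # True: this ballot fits within the credit budget
```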

Personal AI will also dramatically change the nature of commerce. Instead of being locked into a few online marketplaces, personal AIs can explore the entirety of the web for products and deals. Complex negotiations with a wide variety of sellers will allow personalized contracts that better meet the owner’s true needs. As increasing numbers of people shop using personal AIs, this will change the nature of commerce. Buyers will be able to demand greater transparency about supply chains, counterfeiting, and forced labor. They will be able to know the exact history of a product and the exact ingredients in food and supplements.

Perhaps the largest impact of personal AI will be in the transformation of information gathering. The internet shifted news from a few powerful channels to a wide variety of sources and networks. Unfortunately, this has also enabled the spread of disinformation and misinformation. Recent AI technologies can create fake text, audio, images, and video that are indistinguishable from real content. Various groups are developing AI to detect fake content, but it appears that the fakers will ultimately win the arms race. That means that careful tracking of the source and “provenance” of content will be fundamental to future information networks. Today, various gatekeepers are attempting to take control of “fact-checking” and information tracking but many are themselves being questioned.

Personal AI enables individuals to choose their own sources of validation. New sources of validation, reputation, and information tracking will arise and personal AIs will be able to choose among these according to their owner’s preferences. “Liquid Democracy” allows voters to delegate their votes to trusted knowledgeable third parties (e.g., the Sierra Club) who may in turn delegate their votes to even more informed groups. A similar mechanism can be used to create networks of information validated by an owner’s trusted groups. The societal effect of these kinds of information networks will be to democratize knowledge and to weaken the power of centralized information sources.
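
A minimal sketch of how such delegation chains can be resolved; the voters, delegations, and votes are invented for illustration.

```python
# Minimal sketch of liquid-democracy delegation: each voter either votes directly
# or delegates to someone they trust; delegations are followed to the final voter.
delegations = {"alice": "sierra_club", "bob": "alice"}   # who delegates to whom
direct_votes = {"sierra_club": "yes", "carol": "no"}      # who votes directly

def resolve(voter, seen=None):
    """Follow the delegation chain until reaching a direct vote (or a cycle)."""
    seen = seen or set()
    if voter in direct_votes:
        return direct_votes[voter]
    if voter in seen or voter not in delegations:
        return None  # cycle or no vote cast
    return resolve(delegations[voter], seen | {voter})

for v in ["alice", "bob", "carol"]:
    print(v, "->", resolve(v))
# alice -> yes (via sierra_club), bob -> yes (via alice), carol -> no
```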

Our Empowered AI Future

W. Edwards Deming helped create the Japanese “post-war economic miracle” from 1950–1960. He proposed management and manufacturing processes that dramatically improved Japanese productivity and the quality of their goods. The Japanese word “Kaizen” means “change for the better” and has come to represent continuous improvement of all functions and full engagement of all stakeholders. Personal AI will enable a kind of “Deming 2.0” for the whole of society.

Interactions between an owner and their personal AI continuously improve the AI’s model of its owner’s ideas, values, and beliefs. Interactions between personal AIs and AIs associated with larger groups will enable those groups to integrate the detailed knowledge and needs of all stakeholders in a kind of societal “Kaizen”. This responsive interaction will happen from the local level up to the global level, improving effectiveness at all scales.

The impact on the global level is especially interesting given the huge number of global crises we are currently struggling with: climate change, pandemic, economic crises, poverty, pollution, and transformative technological change. The United Nations maintains a list of 17 “Sustainable Development Goals.”[6] Every one of these goals can be addressed with advanced artificial intelligence, and extensive networks of personal AIs will enable every human to contribute their perspective.

The picture of our future that emerges when we include the personal AI revolution is a far cry from the “Hackable Animals” dystopia that Harari worries about. It is a future of extensive inclusiveness and individual empowerment. It is a future in which global problems are solved through careful consideration of every human’s needs and ideas. It is a future in which empowered networks enable each person to contribute and connect to the whole of humanity through their unique individual gifts.


Sources

[1] Estimate extracted from the Forbes chart of PwC analysis at: https://www.forbes.com/sites/greatspeculations/2019/02/25/ai-will-add-15-trillion-to-the-world-economy-by-2030/#147b348c1852
[2] https://www.wsj.com/articles/rebellion-of-the-hackable-animals-11588352123
[3] https://en.wikipedia.org/wiki/Recommender_system
[4] https://www.visualcapitalist.com/a-visual-history-of-the-largest-companies-by-market-cap-1999-today/
[5] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[6] https://sdgs.un.org/goals

Steve recently teamed up with The Hive Think Tank to present “Platform AI, Personal AI, and Global AI”. Watch the webinar recording here!

Emerj AI Futures: “The Transition to AGI Governance with Dr. Steve Omohundro”

On August 27, 2020, Dan Faggella’s Emerj AI Futures published the podcast “The Transition to AGI Governance – with Dr. Steve Omohundro (S1E10)”:

Today’s guest is the great and brilliant Dr. Steve Omohundro, Chief Scientist at AIBrain. AIBrain is creating Turingworld, a powerful AI learning social media platform based on AI-optimized learning, AI-powered gamification, and AI-enhanced social interaction. Dr. Steve Omohundro received his Ph.D. in Physics from U.C. Berkeley. He also founded an organization to support AI safety and another organization to advance new intelligence architectures based on the programming language Omda, the specification language Omex, and the semantics language Omai. Episode topics include: how humans can build safe AI, what facets of AI development might/might-not require global governance, how the international community might best collaborate to prioritize AGI development efforts, and how AI may influence our lives as consumers.

RadicalxChange Talk: “Pluralism Through Personal AIs”

On July 3, 2020, Steve Omohundro and Puja Ohlhaver discussed “Pluralism Through Personal AIs” at the 2020 RadicalxChange Conference:

Artificial Intelligence is transforming every aspect of business and society. The usual narrative focuses on monolithic AIs owned by large corporations and governments that promote the interests of the powerful. But imagine a world in which each person has their own “personal AI” which deeply models their beliefs, desires, and values and which promotes those interests. Such agents enable much richer and more frequent “semantic voting” improving feedback for governance. They dramatically change the incentives for advertisers and news sources. When personal agents filter manipulative and malicious content, it incentivizes the creation of content that is aligned with a person’s values. Economic transactions, social interactions, personal transformation, and ability to contribute to the greater good will all be dramatically transformed by personal AI agents. But there are also many challenges and new ideas are needed. Come join this fireside chat to discuss the possibilities and perils of personal AIs and how they relate to the RadicalXChange movement.

SPEAKERS

Puja Ohlhaver is a technologist and lawyer who explores the intersection of technology, democracy, and markets. She is an advocate of digital social innovation, as a path to rebooting democracy and testing regulatory innovations. She is an inventor and founder of ClearPath Surgical, a company that seeks to improve health outcomes in minimally invasive surgery. She holds a law degree from Stanford Law School and was previously an investment management attorney.

Steve Omohundro has been a scientist, professor, author, software architect, and entrepreneur and is developing the next generation of artificial intelligence. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He was an award-winning computer science professor at the University of Illinois at Champaign-Urbana and cofounded the Center for Complex Systems Research. He is the Chief Scientist of AIBrain and serves on its Board of Directors. AIBrain is creating new AI technologies for learning, conversation, robotics, simulation, and music and has offices in Menlo Park, Seoul, Berlin, and Shenzhen. It is creating Turingworld, a powerful AI learning social media platform based on AI-optimized learning, AI-powered gamification, and AI-enhanced social interaction. He is also Founder and CEO of Possibility Research which is working to develop new foundations for Artificial Intelligence based on precise mathematical semantics and Self-Aware Systems which is working to ensure that intelligent technologies have a positive impact. Steve published the book “Geometric Perturbation Theory in Physics”, designed the first data parallel language StarLisp, wrote the 3D graphics for Mathematica, developed fast neural data structures like balltrees, designed the fastest and safest object-oriented language Sather, invented manifold learning, co-created the first neural focus of attention systems, co-designed the best lip reading system, invented model merging for fast one-shot learning, co-designed the best stochastic grammar learning system, co-created the first Bayesian image search engine PicHunter, invented self-improving AI, discovered the Basic AI Drives, and proposed many of the basic AI safety mechanisms including AI smart contracts. Steve is an award-winning teacher and has given hundreds of talks around the world. Some of his talks and scientific papers are available here. He holds the vision that new technologies can help humanity create a more compassionate, peaceful, and life-serving world.

Here is the conference website with the other presentations:

https://www.radicalxchange.org/2020-conference/#

Numenta Research Meeting: “Steve Omohundro on GPT-3”

On July 1, 2020, Steve Omohundro gave a talk on GPT-3 and its implications for artificial intelligence to Numenta’s Research Meeting:

In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI.

Link to GPT-3 paper: https://arxiv.org/abs/2005.14165

Link to slides from this presentation: https://www.slideshare.net/numenta/op…

The Hive Think Tank Lecture: “Platform AI, Personal AI, and Global AI”

On June 17, 2020, Steve Omohundro spoke about “Platform AI, Personal AI, and Global AI” to The Hive’s excellent “Think Tank” group:

Here’s the video of the talk:

and the abstract:

Simple artificial intelligence has transformed the world economy over the past 15 years. In 2004, the 5 largest companies were GE, Exxon, Microsoft, Pfizer, and Citigroup. By 2019, they were Microsoft, Amazon, Apple, Alphabet, and Facebook, all based on the powerful “AI Platform Model”. These companies use simple AI-based search, recommendation, matchmaking, ad serving, and malicious content filtering to create new channels between producers and consumers. Today’s most valuable startup is ByteDance (recently valued at $140 billion) whose TikTok platform is driven by 3 simple AI technologies.

As AI becomes more powerful, these basic channels will expand into a wide array of new forms of business and social interaction. We argue that every person will have a trusted “Personal AI” that promotes their interests and filters content and interactions not aligned with their values. At a large scale, “Global AI” will improve governance through increasingly detailed world simulations to manage global challenges like pandemics, global warming, financial crises, etc. New social mechanisms like “quadratic voting” and “semantic voting” will enable society to better meet citizens’ needs. AI will help people filter false and manipulative content which will shift the incentives for advertisers and news sources. The impact of this “Multi-Scale AI” is likely to be immense. We describe recent ideas from the science of complex systems that help us to analyze and manage it.

SPEAKER:

Steve Omohundro has done fundamental research in AI for the past 35 years. He has a PhD in physics, was an AI professor at the University of Illinois, was a scientist at several research labs and worked with many startups. He is the Chief Scientist at AIBrain, works with Facebook on bringing advanced technologies to ad serving, and founded Possibility Research to help ensure that AI will be beneficial for humanity. He co-founded one of the first complex systems institutes, designed the first data-parallel programming language, invented manifold learning, co-developed the first image recommender system, co-developed the first attention-driven neural nets, co-built the first lip reading system, and developed many other learning algorithms. His work on AI’s social impact was featured in the book “Our Final Invention” and he appears in the recent Universal Pictures documentary “We Need to Talk About AI”.

Universal Pictures Documentary: “We Need to Talk About AI”

Steve Omohundro was interviewed for the Universal Pictures documentary film “We Need to Talk About AI” which was released in the United States on May 18, 2020. It explores the impact of AI in an even-handed way and features James Cameron and a number of AI scientists.

Here’s the IMDb page: https://www.imdb.com/title/tt7658158/?ref_=tt_mv_close

You can watch it on Amazon: https://www.amazon.com/We-Need-Talk-About-I/dp/B088MK2SBC

Here’s a clip about the “Trolley Problem” that Steve appears in:

Cable TV Future Talk: “The AI Revolution”

On February 26, 2020 Steve Omohundro was interviewed by Marty Wasserman for the Palo Alto Cable TV program “Future Talk” about “The AI Revolution”:

Long time artificial intelligence researcher Steve Omohundro, Chief Scientist at the AI company AIBrain, discusses the exponential growth of AI, how it’s affecting every aspect of our lives, and the tradeoffs between the benefits and the dangers.

FXPAL Talk: The AI Platform Business Revolution, Matchmaking, Empathetic Technology, and AI Gamification

On October 15, Steve Omohundro spoke at FXPAL (FX Palo Alto Laboratory) about “The AI Platform Business Revolution, Matchmaking, Empathetic Technology, and AI Gamification”:

Abstract

Popular media is full of stories about self-driving cars, video deepfakes, and robot citizens. But this kind of popular artificial intelligence is having very little business impact. The actual impact of AI on business is in automating business processes and in creating the “AI Platform Business Revolution”. Platform companies create value by facilitating exchanges between two or more groups. AI is central to these businesses for matchmaking between producers and consumers, organizing massive data flows, eliminating malicious content, providing empathetic personalization, and generating engagement through gamification. The platform structure creates moats which generate outsized sustainable profits. This is why platform businesses are now dominating the world economy. The top five companies by market cap, half of the unicorn startups, and most of the biggest IPOs and acquisitions are platforms. For example, the platform startup Bytedance is now worth $75 billion based on three simple AI technologies.

In this talk we survey the current state of AI and show how it will generate massive business value in coming years. A recent McKinsey study estimates that AI will likely create over 70 trillion dollars of value by 2030. Every business must carefully choose its AI strategy now in order to thrive over coming decades. We discuss the limitations of today’s deep learning based systems and the “Software 2.0” infrastructure which has arisen to support it. We discuss the likely next steps in natural language, machine vision, machine learning, and robotic systems. We argue that the biggest impact will be created by systems which serve to engage, connect, and help individuals. There is an enormous opportunity to use this technology to create both social and business value.

Cooperation is the Central Issue of our Time

Cooperation is the most important issue of our time. It is the key to understanding biology, the success of humans, effective business models, social media, and future society based on beneficial AI.

The challenge is that many interactions have the character of the “Prisoner’s Dilemma” or “Tragedy of the Commons,” where selfish actors do better for themselves while harming the group, and cooperative actors help the group but can lose out in individual competition.
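
The standard Prisoner's Dilemma payoff matrix (conventional textbook numbers, used purely for illustration) makes the tension explicit: defection is individually better no matter what the partner does, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# Standard Prisoner's Dilemma payoffs (row player's payoff listed first).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting pays more for the individual...
for other in ["cooperate", "defect"]:
    coop = payoffs[("cooperate", other)][0]
    defect = payoffs[("defect", other)][0]
    print(f"other plays {other}: cooperate={coop}, defect={defect}, defect better={defect > coop}")
# ...yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
```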

A variety of mechanisms that lead to cooperation have been invented and studied in biology, economics, political science, business, analysis of social technologies, and increasingly in analyzing AI.

All of these subjects are grounded in biology and today’s biology exhibits cooperation at every level of the “Major Transitions in Evolution”:

The Major Transitions in Evolution

https://en.wikipedia.org/wiki/The_Major_Transitions_in_Evolution

From Maynard Smith and Szathmáry’s book “The Major Transitions in Evolution”:


Biology has to explain how independent biological molecules work cooperatively inside of cellular compartments, how separate genes cooperate in a genome, how mitochondria and other organelles cooperate in eukaryotic cells, how the cells in multicellular organisms cooperate, how two or more sexes cooperate in creating offspring, how social insects and other animals cooperate in hives, how mutualisms between different species happen, how humans cooperated in creating and using language, and how humans created cooperative societies.

Biological cooperation contains all the abstract elements of general cooperation studied by economics. But biological cooperation has the extra element of “relatedness” between organisms that share genes. Hamilton’s notion of “inclusive fitness” has been a central insight in understanding cooperation in many of these biological systems.

But it’s looking to me like “partner choice”, “partner switching”, and “cheater punishment” are the fundamental mechanisms underlying many of these cooperative interactions and they apply as well to economic interactions, business interactions, political interactions, and increasingly technological and AI interactions.

I therefore think it is very important to have a clear and mathematically precise theory of these mechanisms. I would love to see detailed simulation modelling (a toy sketch follows below) and eventually AI models, both for understanding and for mechanism design and policy design.
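
In that spirit, here is a toy agent-based sketch of "partner switching": agents play repeated exchanges and simply refuse future interactions with anyone who cheated them. The population size, payoffs, and strategies are arbitrary illustrative choices, not a calibrated model.

```python
import random

N_AGENTS, ROUNDS = 20, 500
# Half the agents always cooperate, half always cheat (purely illustrative).
strategies = ["cooperate" if i % 2 == 0 else "cheat" for i in range(N_AGENTS)]
payoff = {("cooperate", "cooperate"): 3, ("cooperate", "cheat"): -1,
          ("cheat", "cooperate"): 5, ("cheat", "cheat"): 0}
blacklist = {i: set() for i in range(N_AGENTS)}  # partners each agent refuses
score = [0] * N_AGENTS

for _ in range(ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)
    if j in blacklist[i] or i in blacklist[j]:
        continue  # partner switching: refuse to interact with known cheaters
    si, sj = strategies[i], strategies[j]
    score[i] += payoff[(si, sj)]
    score[j] += payoff[(sj, si)]
    if sj == "cheat":
        blacklist[i].add(j)
    if si == "cheat":
        blacklist[j].add(i)

coop_avg = sum(score[i] for i in range(0, N_AGENTS, 2)) / (N_AGENTS // 2)
cheat_avg = sum(score[i] for i in range(1, N_AGENTS, 2)) / (N_AGENTS // 2)
print(f"avg cooperator score: {coop_avg:.1f}, avg cheater score: {cheat_avg:.1f}")
# Cheaters quickly run out of willing partners, so cooperators tend to come out ahead.
```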

Those preliminary thoughts are meant to motivate the study of this excellent review article which tries to systematize the different explanations for cooperation in biology:

Evolutionary Explanations for Cooperation

Stuart A. West, Ashleigh S. Griffin, Andy Gardner

https://doi.org/10.1016/j.cub.2007.06.004

https://www.sciencedirect.com/science/article/pii/S0960982207014996

Natural selection favours genes that increase an organism’s ability to survive and reproduce. This would appear to lead to a world dominated by selfish behaviour. However, cooperation can be found at all levels of biological organisation: genes cooperate in genomes, organelles cooperate to form eukaryotic cells, cells cooperate to make multicellular organisms, bacterial parasites cooperate to overcome host defences, animals breed cooperatively, and humans and insects cooperate to build societies. Over the last 40 years, biologists have developed a theoretical framework that can explain cooperation at all these levels. Here, we summarise this theory, illustrate how it may be applied to real organisms and discuss future directions.

Here is the pdf of the paper:

https://reader.elsevier.com/reader/sd/pii/S0960982207014996?token=B9D724619BFFB17FA65B126953DB5D328716FEA3649D91186D3D12F0DED97F1C32E1614B2F1682AEE91E20CE85DF49EC

Here is the key figure which tries to categorize all of the biological cooperation mechanisms:


 

Interview for the Argentinian El Cronista: “Do presidents dream of electric ministers?”

On August 26, 2019, Sebastián de Toma published an article in the Argentinian business newspaper El Cronista based in part on an interview with Steve Omohundro. His article is titled “¿Sueñan los presidentes con ministros eléctricos?” or “Do presidents dream of electric ministers?”:

https://www.cronista.com/columnistas/Suenan-los-presidentes-con-ministros-electricos-20190826-0057.html

He explores whether AI will help politicians make better economic decisions.

Steve suggested 4 levels of AI support for politicians:

  1. AIs can build much better economic models from a much wider range of data than traditional econometric datasets. For example, an AI model might include video feeds from TV news, social media posts, video feeds from commerce hubs, audio from radio shows, etc. All of this data can inform much richer economic models. Monte Carlo simulations could then make much better predictions about the impact of policy interventions, and repeated simulations can reveal how robust the response to an intervention might be (a toy sketch of this kind of analysis follows this list).
  2. AIs can help politicians recognize their cognitive biases and counteract them. The field of “behavioral economics” has identified a large number of biases, especially around small-probability events and the different perceptions of gains and losses. AIs can model the correct Bayesian responses and help a politician counteract their intuitive biases.
  3. In addition to helping a politician simulate the effects of a policy intervention, AIs can help to create policies with a desired impact. Economic models with policy knobs can be automatically optimized for the best predicted outcomes.
  4. Recently there have been advances in using AI to solve complex game-theoretic problems (e.g., the Libratus and Pluribus AIs which recently beat expert human poker players). This kind of AI could be applied to the problem of a new policy causing other parties to change their behavior. Well-designed policy should account for these responses and lead to desirable outcomes taking account of all participants’ likely behaviors.
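
Here is a toy sketch of the Monte Carlo policy analysis described in point 1: run a noisy model of the economy many times under each candidate intervention and compare both the average outcome and a pessimistic percentile as a robustness check. The "economic model" here is a made-up one-line response curve, purely for illustration.

```python
import random, statistics

def simulate_gdp_growth(stimulus, trials=10_000):
    """Toy model: baseline growth plus a diminishing-returns effect of stimulus."""
    outcomes = []
    for _ in range(trials):
        shock = random.gauss(0, 1.0)                     # macroeconomic noise
        growth = 2.0 + 0.8 * (stimulus ** 0.5) + shock   # illustrative response curve
        outcomes.append(growth)
    return outcomes

for stimulus in [0.0, 1.0, 4.0]:
    runs = simulate_gdp_growth(stimulus)
    mean = statistics.mean(runs)
    worst = sorted(runs)[len(runs) // 20]                # 5th percentile: robustness check
    print(f"stimulus={stimulus}: mean growth {mean:.2f}%, 5th percentile {worst:.2f}%")
```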

Sebastián de Toma wrote (as translated by Google Translate):


How Researchers Changed the World Podcast: “The Ethical Implications of Artificial Intelligence”

On June 18, 2019, the podcast “How Researchers Changed the World” supported by the Taylor & Francis Group featured Steve Omohundro on “The ethical implications of artificial intelligence”.


Steve’s paper “Autonomous technology and the greater human good” was the most read paper in the history of the Journal of Experimental & Theoretical Artificial Intelligence. It’s available here:

https://www.tandfonline.com/doi/full/10.1080/0952813X.2014.895111

The podcast explores the origins of that work and is available here along with a transcript:

https://www.howresearchers.com/episodes/episode-4/

The press release for the episode is available here:

HRCW_Press-Release-Steve-Omohundro

Linghacks Keynote: “Language and AI: Hacking Humanity’s Greatest Invention”

On March 30-31 the wonderful “Linghacks” organization supporting computational linguistics held their “Linghacks II” event in Silicon Valley:

https://linghacks.weebly.com/linghacks-ii.html

Steve Omohundro was invited to give the opening Keynote Address on “AI and Language: Hacking Humanity’s Greatest Invention”. His talk is available here starting at 14:20:

The slides are available here:

Autopiloto Podcast from a Self-Driving Car

On November 15, 2018 Steve Omohundro was interviewed live for the Autopiloto Podcast from a self-driving car that was exploring places of interest in Silicon Valley for self-driving. Here is the 12-hour podcast:

https://archive.org/details/AutopilotoPodcastThursday

The interview with Steve begins at the timestamp 3:45:20.

Autopiloto Podcast Thursday

AUTOPILOTO is a 24-hour live online radio broadcast about all things self-driving hosted from a semi-autonomous vehicle looping the Bay Area. This broadcast takes up questions of how autonomy and automatic movement will shape Bay Area geographies, societies, and cultures. Considering self-driving as technology, psychological state, anthropological condition and systems, what will our cities sound like in a driverless future? How will society and infrastructure systems adapt? What might humans do during newfound transit time? In what ways do machines imitate human auto-pilot modes, and vice versa? How can we build equitable, planetary, intelligent transit for all?

Video Highlights of the Responsible AI/DI Summit at SAP

SAP is setting an excellent example in making sure that artificial intelligence is beneficial for its customers, employees, and the broader society. They recently released a set of “Guiding Principles for Artificial Intelligence”:

https://news.sap.com/2018/09/sap-guiding-principles-for-artificial-intelligence/

https://www.sap.com/products/leonardo/machine-learning/ai-ethics.html

They sponsored and hosted the 2018 “Responsible AI/DI Summit” and invited Steve Omohundro to present.  A video of the highlights of the summit is available here:

Responsible AIDI Summit 2018 Highlights

The Responsible AI/DI Blog is here:

https://www.responsibleaidi.org/mesmerize/blog/

Risk Group: “Rise of Algorithms in Decision Making”

On November 20, 2018 Steve Omohundro participated in Risk Group’s “Risk Roundup” discussing the “Rise of Algorithms in Decision Making” with Jayshree Pandya:

The Rise of Algorithms in Decision-Making

This episode of Risk Roundup discusses the rise of algorithmic decision-making and its complex challenges, risks, and rewards. Prof. Omohundro provided thoughtful insight on the need to ensure integrity, transparency, and trust in algorithmic decision-making.

Here’s the video of our discussion:

Risk Roundup Webcast: Algorithmic Decision Making

 

AUTOPILOTO Radio Show from an Autonomous Vehicle

On November 15, 2018, Steve Omohundro will be interviewed about the social impact of AI in an autonomous vehicle driving around Silicon Valley as a part of the “AUTOPILOTO” art project:

 

AUTOPILOTO

Thursday, November 15, 2018 – Friday, November 16, 2018

What will our streets and cities look and sound like in a driverless future?

The Lucas Artists Program presents AUTOPILOTO by artist collective RadioEE.net, an online live-streaming 24-hour broadcast from a semi-autonomous vehicle traveling around the Bay Area, on November 15 and 16.

AUTOPILOTO will investigate the challenges and opportunities of emerging autonomous mobilities through live soundscapes, music, and Spanish-English-Vietnamese conversations with drivers, designers, technologists, municipal agents, researchers, artists, and scientists, opening a channel for music, storytelling, and sonic experiments.
THE SALLY & DON LUCAS ARTISTS PROGRAM AT MONTALVO ARTS CENTER PRESENTS A NEW PROJECT BY RADIOEE.NET

AUTOPILOTO

November 15-16, 2018
SARATOGA, CA (1 October 2018) — This November, the Sally & Don Lucas Artists Program at Montalvo Arts Center presents a new commission from international creative collective Radioee.net: AUTOPILOTO, a marathon radio transmission broadcast while on the move in a semi-autonomous vehicle traversing the Bay Area, examining how emerging autopilot technologies are transforming the world. Live streaming on November 15 and 16, AUTOPILOTO will include interviews with drivers, designers, technologists, municipal agents, researchers, artists, scientists, mechanics and more, as well as soundscapes and music. Through storytelling and sonic experiments, it will compose an audio portrait of the Bay Area at a specific moment in time. The live-stream of the broadcast will be available on both radioee.net and montalvoarts.org.
AUTOPILOTO is a commissioned project by the Lucas Artist Program at the Montalvo Arts Center, and is presented as part of New Terrains: Mobility and Migration, a series of cross-disciplinary exhibitions, programs and experiences that explore how bodies move through spaces—social, political, literal, and figurative. The broadcast is co-hosted with Trami Cron of Chopsticks Alley Art. Special guests will include voices from ARUP; fka SV Inc; Nissan Research Center; SETI Institute; Transportation Sustainability Research Center, University of California, Berkeley, Yu-Ai Kai Community Center, and others. It will feature music and live performance by such artists as Anna Fritz, Taylor Ho Bynum, Philip Hermans, Motoko Honda, Shane A. Myrbeck & Emily Shisko, and San Jose Jazz. For more information, the public may visit Radioee.net or montalvoarts.org or call Donna Conwell at 408-777-2100.

Million AI Startups Talk: AI for Human Flourishing

Steve Omohundro will speak on “AI for Human Flourishing” on November 27, 2018 at 6:00 PM at BootUP Silicon Valley in Menlo Park as part of the Million AI Startups workshop on “AI for Mankind”.

AI for Human Flourishing

2018 is the best year in human history. The rates of hunger, poverty, violence, and illiteracy are all at their lowest levels ever. We have achieved this using both human intelligence and collective intelligence. But things are about to get even better using Artificial Intelligence. A recent UN report predicts that today’s AI will create at least $70 trillion of value through 2030 and new AI technologies could double that. AI will impact every single challenge humanity currently faces. In addition to vastly improving productivity, it will provide new solutions to social dilemmas and will provide new coordination mechanisms to foster cooperation. It will be used to predict and mitigate extreme behavior in a wide range of complex systems including the climate, economy, disease, politics, social media, transportation, and energy flows. It will usher in a new era of creativity and invention that will lead to unprecedented human flourishing. (Steve Omohundro, Ph.D.)

Here are some background materials for the talk:

Why 2017 May Be the Best Year Ever

Our world is changing

Explore the ongoing history of human civilization at the broadest level, through research and data visualization.

Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think

Bill Gates: These 4 books make me feel optimistic about the world

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress

Assessing the Economic Impact of Artificial Intelligence

Critical Transitions in Nature and Society

Social Self-Organization: Agent-Based Simulations and Experiments to Study Emergent Social Behavior

Theme: AI for Mankind

6:00 pm – 6:30 pm Check In, Food & Networking
6:30 pm – 6:50 pm AI for Human Flourishing
Speaker: Steve Omohundro, Ph.D., President, Self-Aware Systems
6:50 pm – 7:10 pm (To be announced)
Speaker: David Ayman Shamma, Ph.D., Sr Research Scientist, FXPAL
7:10 pm – 7:30 pm AI-Powered Future Simulation in Life and Business
Speaker: Richard Shinn, Ph.D., AIBrain
7:30 pm – 8:00 pm Discussion
8:00 pm – 8:30 pm Announcement & Networking

Social Media Storms Workshop: Steve Omohundro speaks on AI mitigation strategies

On October 10, 2018, Steve Omohundro will speak in the “Social Media Storms Workshop” put on by the Nautilus Institute, the Preventive Defense Project at Stanford, and Technology for Global Security. It is funded by the MacArthur Foundation.

We have seen the huge impact of “social media storms” across Facebook, Twitter, and other social media networks. Often these storms are driven by fake news, false alarms, extremist positions, and other forces of memetic contagion. How can we understand the dynamics? How can we detect when social media storms are happening? When they are dangerous? What are the best ways to dampen them down? To stop them? To guide them in a positive direction?

Steve Omohundro will discuss the role that AI has in creating fake news (e.g., the DeepFakes synthetic video software), in forming memetic storms, in detecting these storms, and in stopping them.

Responsible AI/DI Summit 2018: Panel on “Balancing Organization Goals with Responsibility in Complex Decisions”

On September 19, 2018 from 3:00 to 7:30, Steve Omohundro will present at the “Responsible AI/DI Summit 2018” at SAP Labs in Palo Alto. The event is sponsored by SAP, Qantellia, and Carol Tong Consulting. There is an excellent group of presenters who will provide a multi-disciplinary perspective on these important issues. Steve will be in the panel on “Pulling it all Together: Balancing Organization Goals with Responsibility in Complex Decisions”:

https://aidisummit.org/#agenda

The intention of the summit is to bring a sense of “trusteeship” to emerging powerful technologies. The decision methodologies of “Decision Intelligence” will be essential in guiding the deployment of AI and other powerful technologies.

Registration is free!

SAP and Google publish their ethical AI principles

I’m very excited that more companies and governments are thinking about the ethical issues involved with AI. Two great examples are SAP and Google. SAP just published their 7 ethical AI guidelines:

German firm’s 7 commandments for ethical AI

https://www.france24.com/en/20180918-german-firms-7-commandments-ethical-ai

and Google published their AI principles a few months ago:

AI at Google: our principles

https://www.blog.google/technology/ai/ai-principles/

Final Edge Question: “Mathematical Beauty” by Steve Omohundro

Edge.org is a wonderful online version of “The Reality Club” and had a yearly tradition of inviting diverse thinkers to respond to stimulating questions over the 20 years from 1998 until 2018. For the final question, they invited a wide variety of people to give their own answer to: “What is the last question?”

Steve Omohundro’s response was:

How did our sense of mathematical beauty arise?

Others’ responses are here:

https://www.edge.org/responses/what-is-the-last-question

Steve is interested in the question of mathematical beauty because it represents an inner sense of which abstract models, knowledge, and inferences are valuable, a sense that seems rather disconnected from ordinary evolutionary pressures. If we can fully understand the nature of mathematical beauty, he thinks it will shed light on unique aspects of human cognition.

Edge essay: “Costly Signalling” by Steve Omohundro

Edge.org is a wonderful online version of “The Reality Club” and had a yearly tradition of inviting diverse thinkers to respond to stimulating questions over the 20 years from 1998 until the final question in 2018. The responses were turned into books and published on the Edge website. The 2017 question was “What scientific term or concept ought to be more widely known?” Steve Omohundro’s response was this essay on the topic of “Costly Signalling”:

https://www.edge.org/response-detail/27076

Stanford LAST Festival: Faking Life: AI, Deception, Blockchain

On March 24, 2018, Steve Omohundro spoke at the 5th LAST (Life/Art/Science/Tech) Festival presented by Stanford University, held at the Stanford Linear Accelerator Center:

http://www.lastfestival.org/

He spoke on “Faking Life: AI, Deception, Blockchain”:

Here’s the video of his talk:

Here’s a photo from the event:

Stanford Leonardo Art/Science Evening Rendezvous: “AI, Deception, and Blockchains”

On December 14, 2017, Steve Omohundro spoke at the Stanford Leonardo Art/Science Evening Rendezvous on “AI, Deception, and Blockchains”:

https://www.scaruffi.com/leonardo/dec2017.html

Here are the slides:

Here’s the abstract:

Recent AI systems can create fake images, sound files, and videos that are hard to distinguish from real ones. For example, Lyrebird’s software can mimic anyone saying anything from a one-minute sample of their speech, Adobe’s “Photoshop of Voice” VoCo software has similar capabilities, and the “Face2Face” system can generate realistic real-time video of anyone saying anything. Continuing advances in deep learning “GAN” systems will lead to ever more accurate deceptions in a variety of domains. But AI is also getting better at detecting fakes. The recent rash of “fake news” has led to a demand for deception detection. We are in an arms race between the deceivers and the fraud detectors. Who will win? The science of cryptographic pseudorandomness suggests that the deceivers will have the upper hand: it is computationally much cheaper to generate pseudorandom bits than it is to detect that they aren’t random. The issue has enormous social implications. A synthesized video of a world leader could start a war. Altered media could implicate people in crimes they didn’t commit. Governments have tampered with photographs since the beginning of photography; Stalin, for example, was famous for removing people from historical photos when they fell out of favor. The art world has had to deal with forgeries for centuries. Good forgers can create works that fool even the best art critics. The solution there is “provenance”: we need not only the work but also its history. But provenances can also be faked if we aren’t careful! Can we create an unmodifiable digital provenance for media? We describe several approaches to using blockchains, the technology underlying cryptocurrencies, to do this. We discuss how the time and location of events can be cryptographically certified, and how future media hardware might provide guarantees of authenticity.
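The blockchain-provenance idea in the abstract can be made concrete with a small example. Below is a minimal, hypothetical Python sketch (not the specific system described in the talk) of a hash-chained provenance log: each record commits to the media file’s cryptographic fingerprint and to the hash of the previous record, so publishing any record’s hash, for example in a blockchain transaction, makes later tampering detectable. All function names and fields here are illustrative assumptions.

# Minimal, hypothetical sketch of a hash-chained provenance log for a media file.
# A real system would anchor each entry's hash on a public blockchain and add
# digital signatures; this only illustrates the tamper-evident chaining idea.
import hashlib
import json
import time


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()


def new_entry(media_bytes: bytes, note: str, prev_entry_hash: str) -> dict:
    """Create a provenance record linking a media snapshot to the previous record."""
    entry = {
        "media_hash": sha256_hex(media_bytes),  # fingerprint of the media file
        "note": note,                           # e.g. "original capture", "cropped"
        "timestamp": time.time(),               # when this record was made
        "prev_entry_hash": prev_entry_hash,     # links records into a chain
    }
    # The record's own hash commits to all fields above; publishing it
    # (e.g. in a blockchain transaction) makes later tampering detectable.
    entry["entry_hash"] = sha256_hex(json.dumps(entry, sort_keys=True).encode())
    return entry


def verify_chain(entries):
    """Check that every record's hash is consistent and the chain links are intact."""
    prev_hash = "0" * 64  # sentinel hash for the first record
    for entry in entries:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if entry["prev_entry_hash"] != prev_hash:
            return False
        if sha256_hex(json.dumps(body, sort_keys=True).encode()) != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True


if __name__ == "__main__":
    original = b"...raw image bytes..."
    edited = b"...cropped image bytes..."
    e1 = new_entry(original, "original capture", "0" * 64)
    e2 = new_entry(edited, "cropped for publication", e1["entry_hash"])
    print(verify_chain([e1, e2]))  # True; altering any byte breaks verification

Any change to a media file or to an earlier record changes the corresponding hashes, so a verifier holding only the most recently published entry hash can detect tampering; a real deployment would also need signatures to certify who recorded each entry.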

and a photo:

tricycle magazine: “AI, Karma & Our Robot Future”

The Spring 2018 issue of the Buddhist magazine “tricycle” published the article “AI, Karma & Our Robot Future” (subtitled “Two artificial intelligence scientists discuss what’s to come: a conversation with Steve Omohundro and Nikki Mirghafori”), based on a presentation given at CIIS:

AI, Karma & Our Robot Future

Two artificial intelligence scientists discuss what’s to come.

A conversation with Steve Omohundro and Nikki Mirghafori

CIIS: “Artificial Intelligence and Karma” A Conversation With Nikki Mirghafori and Steve Omohundro

On November 2, 2017 the San Francisco-based California Institute of Integral Studies held the event “Artificial Intelligence and Karma, A Conversation With Nikki Mirghafori and Steve Omohundro”:

https://www.ciis.edu/public-programs/event-archive/mirghafori-omohundro-lec-fw17

Here is a recording of the event:

https://www.ciis.edu/public-programs/public-programs-podcast

Nikki Mirghafori and Steve Omohundro: AI and Karma
Recorded November 2, 2017

In this episode, artificial intelligence scientist and Buddhist teacher Nikki Mirghafori and computer scientist Steve Omohundro discuss how the concept of karma can guide us as we push forward toward creating non-human intelligence.

Foresight Institute Great Debate on “Drop Everything And Work on Artificial Intelligence?”

On November 19, 2016, Steve Omohundro participated in the Foresight Institute’s “Great Debate” on whether we should “Drop Everything and Work on Artificial Intelligence?” Here is a video of the event:

This was the second of four debates in the Foresight Institute’s “The Great Debates” series in San Francisco.

Speakers on this panel:
*Peter Voss – Head of AGI Innovations Inc
*Steve Omohundro – President, Possibility Research
*Monica Anderson – Director of Research at Syntience Inc
*Michael Andregg – Co-Founder and CSO at Fathom Computing
*Moderator: David Yanofsky – Reporter at Quartz
*Introduction: Allison Duettmann – Foresight Institute

Discussion topics included:
*Morality and Ethics of Artificial Intelligence
*Narrow AI vs. Artificial General Intelligence
*AI Safety
*Deep Learning and Neural Networks
*Predictions about the Singularity
*Existential Risk
*Long-term Futurism
*Forecasting

Ashoka Foundation panel on “Empathy and Technology”

On July 26, 2017, Steve Omohundro was on a panel hosted by the Ashoka Foundation and the Hive on “Empathy and Technology”.

What is the role of empathy in technology — and what should it be?

Role of Empathy in Technology

https://www.meetup.com/SF-Bay-Areas-Big-Data-Think-Tank/events/241342678/

Here is a video of the event on Facebook:

KZSU Radio Henry George Program: “Steve Omohundro on AI Risk, Human Values, and Decentralized Resource Sharing”

On July 15, 2017, Steve Omohundro did an interview with Mark Mollineaux’s radio show “The Henry George Program” on “AI Risk, Human Values, and Decentralized Resource Sharing”. Here’s a description of the show:

Steve Omohundro on AI Risk, Human Values, and Decentralized Resource Sharing

Released Jul 18, 2017

Steve Omohundro shares plans for creating provably correct protections against AI superintelligence and thoughts on how human values can be imbued into AI. The conversation also covers resource allocation, decentralized cooperation, and how blockchain Proofs of Work/Stake might be made compatible with meeting basic needs.

Here’s a link to the show on CastBox:

https://castbox.fm/episode/Steve-Omohundro-on-AI-Risk%2C-Human-Values%2C-and-Decentralized-Resource-Sharing-id935232-id44481874?country=us

and on iTunes:

https://itunes.apple.com/us/podcast/the-henry-george-program/id1241740873?mt=2#

Stanford CS22a Social and Economic Impact of Artificial Intelligence: “Social Impact and Ethics of AI”

On May 25, 2017, Steve Omohundro spoke in Jerry Kaplan’s Stanford CS22a class “Social and Economic Impact of Artificial Intelligence” on “Social Impact and Ethics of AI”.

Here’s Steve’s bio:

Steve Omohundro founded Possibility Research and Self-Aware Systems to develop beneficial intelligent technologies. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from Berkeley. He was a computer science professor at the University of Illinois and cofounded the Center for Complex Systems Research. He published the book “Geometric Perturbation Theory in Physics”, designed the programming languages StarLisp and Sather, wrote the 3D graphics system for Mathematica, invented many machine learning algorithms (including manifold learning, model merging, bumptrees, and family discovery), and built systems that learn to read lips, control robots, and induce grammars. He’s done internationally recognized work on AI safety and strategies for its beneficial development. He is on the advisory boards of several AI and Blockchain companies.

And here are the slides: