Scientific Contributions

Steve Omohundro is a scientist known for his research on machine learning, machine vision, programming languages, Hamiltonian physics, dynamical systems, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making. This page describes some of his scientific contributions, with links to papers and talk slides.

Rational AI for the Greater Good

Steve is developing the “Safe-AI Scaffolding” strategy for building powerful new technology in a way that contributes to the greater human good. The idea is to start with provably safe, intentionally limited systems and then to use them to build more powerful systems.

Paper: Autonomous Technology and the Greater Human Good

Paper: Rational Artificial Intelligence for the Greater Good

Paper: The Future of Computing: Meaning and Values

Talk: Oxford keynote on Autonomous Technology and the Greater Human Good

Talk: TEDx talk on Smart Technology for the Greater Good

Talk: Rationally Shaped Minds: A Framework for Analyzing Self-Improving AI

Talk: The Emerging Global Mind, Cooperation, and Compassion

Talk: Minds Making Minds: Artificial Intelligence and the Future of Humanity

Talk: Design Principles for a Safe and Beneficial AGI Infrastructure

Talk: Complexity, Virtualization, and the Future of Cooperation

Talk: The Science and Technology of Cooperation

Emerging Intelligence

Steve has given several talks about the universal principles underlying the emergence of natural intelligence. The principles are equally applicable to biological organisms, ecosystems, brains, social insects, economies, and the emerging global mind:

Talk: The Wisdom of the Global Brain

Talk: Co-opetition in Economics, Biology, and AI

Talk: Evolution, Artificial Intelligence, and the Future of Humanity

Self-Improving Artificial Intelligence

As AI systems become more powerful, they will be naturally driven to improve themselves. Can we say anything about the behavior of systems after multiple iterations of self-improvement? Steve used analytical techniques from microeconomics to create a framework for analyzing self-improvement. The website Self-Aware Systems contains more information about this research.

The following paper uses rational economic analysis to argue that there are natural drives which AI systems will tend to exhibit unless they are specifically designed not to exhibit them. It introduces a new version of the fundamental expected utility theorem of microeconomics that relies only on the avoidance of exploitation. It argues that self-preservation, efficiency, and resource acquisition are natural drives that systems will exhibit even if they are not explicitly built in. It also introduces a number of new concepts and results, including “energy encryption” as a defensive strategy, a new asymptotic state for Conway’s “Game of Life” and similar cellular automata, and a “deliberative Baldwin effect” that allows Darwinian evolution to look into the future:

Stephen M. Omohundro, “The Nature of Self-Improving Artificial Intelligence“, presented and distributed at the 2007 Singularity Summit, San Francisco, CA.

The following paper gives a more intuitive description of the basic arguments in the previous paper and analyzes a number of new aspects. It argues that systems will usually act to prevent their utility functions from changing but also discusses three situations in which they will be motivated to change them:

Stephen M. Omohundro, “The Basic AI Drives“, in the Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin, February 2008, IOS Press.

Steve has given a number of talks on this work and on mechanisms for producing cooperative outcomes that promote human values as these technologies begin to be deployed. He proposed using formal methods to provide provable mathematical constraints on AI systems even after repeated self-improvement, using revelation of utility functions as a commitment mechanism, and creating cooperative social contracts that encourage powerful systems to collaborate to prevent any individual system from taking over. He also showed that the asymmetry between the complexity of posing problems and solving them (assuming P ≠ NP) enables weaker systems to defend themselves against stronger systems over a range of capability differences (a small illustrative sketch of this asymmetry follows the talk list below). “Mutually Assured Destruction” from conflict provides an incentive for systems to create enforceable cooperative relationships:

Talk: Creating a Cooperative Future

Talk: Self-Improving AI and the Future of Computing

Talk: AI and the Future of Human Morality

Talk: Self-Improving AI and Designing 2030

Talk: The Science and Technology of Cooperation
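
To illustrate the posing-versus-solving asymmetry mentioned above, here is a toy sketch (a generic subset-sum example, not a construction from the talks or papers; the function names are hypothetical). Posing a puzzle takes work roughly linear in its size, while brute-force solving takes work roughly exponential in it.

```python
import random
from itertools import combinations

def pose_subset_sum_puzzle(n=18, bits=48):
    """Posing is cheap (roughly linear in n): pick n random integers and a
    hidden subset, then publish only the integers and the subset's sum."""
    nums = [random.getrandbits(bits) for _ in range(n)]
    secret = random.sample(range(n), n // 2)
    return nums, sum(nums[i] for i in secret)

def solve_subset_sum_puzzle(nums, target):
    """Solving by brute force is expensive (roughly 2**n subsets to try)."""
    for r in range(len(nums) + 1):
        for combo in combinations(range(len(nums)), r):
            if sum(nums[i] for i in combo) == target:
                return combo
    return None

nums, target = pose_subset_sum_puzzle()
print(solve_subset_sum_puzzle(nums, target))
```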

StarLisp and Parallel Programming

At Thinking Machines Corporation, Cliff Lasser and Steve designed the StarLisp parallel programming language for the massively parallel Connection Machine. The language, and the numerous parallel algorithms, libraries, and applications that Steve developed with it, had a significant influence on the subsequent development of data-parallel programming.
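
For readers unfamiliar with the data-parallel style, here is a loose analogy in Python with NumPy (an illustration of the programming model only, not StarLisp code): a single operation is written once and applied across an entire collection of values at once, much as StarLisp applied operations across all Connection Machine processors simultaneously.

```python
# Data-parallel style: one operation applied to every element of a collection
# "at the same time" (vectorized here by NumPy; on the Connection Machine,
# spread across thousands of processors).
import numpy as np

a = np.arange(8)          # conceptually, one value per processor
b = np.full(8, 10)
c = a + b                 # an elementwise operation applied everywhere at once
print(c, c.sum())         # followed by a global reduction
```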

Clifford Lasser and Stephen M. Omohundro, “The Essential Star-lisp Manual, Release 1, Revision 3”, Thinking Machines Technical Report 86.15 (1986).

Technote: Connection Machine Linear Algebra

Talk: Parallel Programming on the Connection Machine

Manifold Learning and Lipreading

Steve was one of the first to recognize the importance of using learning to build models for machine vision. He co-founded the Center for Complex Systems Research at the University of Illinois and started the Vision and Learning Group there, where he supervised four Master’s and two Ph.D. theses.

The configuration spaces which arise in many robotics and vision tasks are naturally modeled as smooth manifolds. Steve and his students built systems to automatically learn these kinds of models from data. In the following paper, Steve introduced the fundamental “manifold learning” task and described a number of powerful algorithms for it. Given noisy samples from a smooth manifold embedded in a larger space, the system should discover the dimension of the manifold and build an efficient algorithmic representation of it. It should efficiently support operations such as finding the nearest point on the learned manifold to a specified query point (a small illustrative sketch of this operation follows the reference below):

Stephen M. Omohundro, “Fundamentals of Geometric Learning”,University of Illinois at Urbana-Champaign, Department of Computer Science Technical Report UILU-ENG-88-1713 (February 1988).
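
As a rough illustration of the nearest-point operation, here is a minimal sketch assuming a local tangent-plane (local PCA) approximation; it is not one of the algorithms described in the report, and the function name is hypothetical.

```python
# Approximate the nearest point on a sampled manifold by fitting a local
# tangent plane through the query's k nearest samples and projecting the
# query onto that plane.
import numpy as np

def nearest_point_on_manifold(samples, query, k=10, dim=1):
    d2 = np.sum((samples - query) ** 2, axis=1)
    nbrs = samples[np.argsort(d2)[:k]]            # k nearest noisy samples
    center = nbrs.mean(axis=0)
    _, _, vt = np.linalg.svd(nbrs - center, full_matrices=False)
    tangent = vt[:dim]                            # estimated tangent directions
    return center + (query - center) @ tangent.T @ tangent

# toy data: noisy samples from a circle, a 1-D manifold embedded in the plane
theta = np.random.uniform(0, 2 * np.pi, 500)
samples = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(500, 2)
print(nearest_point_on_manifold(samples, np.array([1.5, 0.2])))  # roughly (0.99, 0.13)
```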

Chris Bregler and Steve developed these manifold learning ideas and applied them to the task of reading lips. The space of possible lip configurations is large, but the configurations which actually occur in speech lie on a lower-dimensional manifold. By learning this manifold, Chris and Steve were able to achieve high lipreading recognition accuracy and to perform novel tasks such as interpolating between lip images and generating realistic simulations of speakers talking. These ideas are now widely used both in 3D graphics to model the motion of bodies and in 3D machine vision for recognition.

Chris Bregler, Stephen M. Omohundro, and Yochai Konig, “A Hybrid Approach to Bimodal Speech Recognition“, Proceedings of the 28th Annual Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, California, November 1994.

Christoph Bregler and Stephen M. Omohundro, “Surface Learning with Applications to Lipreading“, in Cowan, J. D., Tesauro, G., and Alspector, J., (eds.) Advances in Neural Information Processing Systems 6, Morgan Kaufmann Publishers, San Francisco, CA, 1994, pp. 43-50. ICSI Technical Report TR-94-001.

Christoph Bregler and Stephen M. Omohundro, “Nonlinear Image Interpolation using Manifold Learning“, in Advances in Neural Information Processing Systems 7, eds. Gerry Tesauro, David S. Touretzky, and Todd K. Leen, MIT Press, Cambridge, MA, 1995, pp. 973-980.

Christoph Bregler and Stephen M. Omohundro, “Nonlinear Manifold Learning for Visual Speech Recognition“, in the Proceedings of the Fifth International Conference on Computer Vision, ed. W. Eric L. Grimson, IEEE Computer Society Press, Los Alamitos, CA, June 1995, pp. 494-499.

Christoph Bregler and Stephen M. Omohundro, “Learning Visual Motion Models for Lip Reading”, in Motion-Based Recognition, eds. M. Shah and R. Jain, Kluwer Academic Press, 1997.

Christoph Bregler, Stephen Omohundro, Michelle Covell, Malcolm Slaney, Subutai Ahmad, David Forsyth, Jerry Feldman, “Probabilistic Models of Verbal and Body Gestures” in Computer Vision in Man-Machine Interfaces, eds. R. Cipolla and A. Pentland, Cambridge University Press, 1998.

Talk: Nonlinear Manifold Learning for Visual Speech Recognition

Talk: Beyond Symbolic AI

Talk: Geometric Learning Algorithms

Talk: Geometric Learning Algorithms for Vision, Robotics, and Graphics

Talk: Learning and Recognition by Model Merging

Efficient Neural Algorithms

Many millions of dollars have been spent on special-purpose hardware for speeding up simulations of neural networks. Steve developed a wide range of algorithms that dramatically speed up neural computations on ordinary serial computers. He introduced numerous data structures based on kd-trees, with various models at the leaves, to implement a wide range of statistical and learning tasks. He introduced the boxtree, balltree, octboxtree, and bumptree data structures for speeding up geometric algorithms and algorithms based on mixtures of Gaussians. Faster algorithms for learning and evaluation mean a better quality of learned model on any given hardware. He also studied a variety of techniques for adaptively building these structures from data. These algorithms are widely used in practical learning systems and in 3D graphics systems for collision detection.
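
As a rough illustration of how such structures prune work, here is a minimal ball-tree sketch in Python; it is a simplified illustration, not one of the construction algorithms from the papers below.

```python
# Each node stores a bounding ball (center, radius); nearest-neighbor search
# skips any ball that cannot contain a point closer than the best found so far.
import numpy as np

class Ball:
    def __init__(self, points):
        self.center = points.mean(axis=0)
        self.radius = np.max(np.linalg.norm(points - self.center, axis=1))
        if len(points) <= 8:                       # leaf: keep the points
            self.points, self.children = points, None
        else:                                      # split along the widest coordinate
            axis = np.argmax(points.max(axis=0) - points.min(axis=0))
            order = np.argsort(points[:, axis])
            mid = len(points) // 2
            self.points = None
            self.children = [Ball(points[order[:mid]]), Ball(points[order[mid:]])]

    def nearest(self, q, best=(np.inf, None)):
        # prune: no point in this ball can be closer than dist(q, center) - radius
        if np.linalg.norm(q - self.center) - self.radius >= best[0]:
            return best
        if self.children is None:
            d = np.linalg.norm(self.points - q, axis=1)
            i = np.argmin(d)
            return (d[i], self.points[i]) if d[i] < best[0] else best
        # visit the closer child first so the bound tightens early
        for child in sorted(self.children, key=lambda c: np.linalg.norm(q - c.center)):
            best = child.nearest(q, best)
        return best

data = np.random.randn(5000, 3)
tree = Ball(data)
dist, point = tree.nearest(np.array([0.1, 0.2, 0.3]))
print(dist, point)
```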

Stephen M. Omohundro, “Efficient Algorithms with Neural Network Behavior”, Complex Systems 1:2 (1987) 273-347.

Stephen M. Omohundro, “How Can Slow Components Think So Fast?”, Thinking Machines 1988 Spring Symposium Series on Parallel Models of Intelligence, Cambridge, Massachusetts.

Darrell Hougen and Stephen M. Omohundro, “Fast Texture Recognition Using Information Trees”, University of Illinois at Urbana-Champaign, Department of Computer Science Technical Report UILU-ENG-88-1714 (February 1988).

Stephen M. Omohundro, “Five Balltree Construction Algorithms“, ICSI Technical Report TR-89-063 (December 1989).

Stephen M. Omohundro, “The Delaunay Triangulation and Function Learning“, ICSI Technical Report TR-90-001 (January 1990).

Stephen M. Omohundro, “Geometric Learning Algorithms”, Physica D, 42 (1990) 307-321, and in Emergent Computation, edited by Stephanie Forrest, MIT Press (1991) 307-321.

Stephen M. Omohundro, “Bumptrees for Efficient Function, Constraint, and Classification Learning” in Advances in Neural Information Processing Systems 3, eds. R. P. Lippmann, J. E. Moody, D. S. Touretzky, Morgan Kaufmann Publishers, San Francisco, CA, 1991, pp. 693-699.

Stephen M. Omohundro, “Building Faster Connectionist Systems with Bumptrees”, in W. Brauer and D. Hernandez, (eds.) Verteilte Künstliche Intelligenz und kooperatives Arbeiten, the Proceedings of the Fourth International GI-Congress, Berlin: Springer-Verlag, 459-466, (June 1991).

Technote: Octboxtrees for Fast Image Feature Search

Talk Video: Steve Omohundro talk on “Efficient Algorithms with Neural Network Behavior”

Mathematica 3D Graphics

Steve wrote the three-dimensional graphics portion of Wolfram Research’s Mathematica program as one of the seven original developers. He developed new algorithms for object-based hidden surface removal, three-dimensional function plotting, and spline-based contour mapping.

Neural Models of Visual Attention

Subutai Ahmad and Steve did a number of studies to understand how biologically realistic neural models could use selective attention to solve the variable binding problem suffered by most connectionist approaches:

Subutai Ahmad and Stephen M. Omohundro, “Equilateral Triangles: A Challenge for Connectionist Vision“, Proceedings of the 12th Annual meeting of the Cognitive Science Society, MIT, (1990).

Subutai Ahmad and Stephen M. Omohundro, “A Network for Extracting the Locations of Point Clusters Using Selective Attention“, ICSI Technical Report No. TR-90-011, (1990).

Subutai Ahmad and Stephen M. Omohundro, “Efficient Visual Search: A Connectionist Solution“, Proceedings of the 13th Annual meeting of the Cognitive Science Society, Chicago, (1991).

Sather Object-Oriented Programming Language

Steve led an international team in developing the object-oriented programming language Sather, which was featured in O’Reilly’s History of Programming Languages. The language was originally based on Eiffel but introduced a number of powerful innovations which have influenced other important languages such as Java, Ruby, and Python. Sather aimed to combine the efficiency of C with powerful, safe, and easy-to-use object-oriented constructs. Its innovations included separating subtyping from implementation inheritance, a novel object-oriented iteration abstraction, safe contravariant typing, novel higher-order routines and iters, a precise semantics for language constructs independent of implementation or platform, and several new efficient compilation techniques. Its libraries also introduced innovations which have been adopted by other languages, including cache-efficient stacks based on amortized doubling and cache-efficient hash tables that support deletion. pSather extended the basic language with a number of powerful parallel constructs. Sather was one of the first language systems to release its compiler, interpreter, debugger, and libraries under an open source license. The language developed over time, as may be seen from the specifications for Sather 0.2, Sather 1.0, and Sather 1.1 (the last with David Stoutamire).
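
Sather’s iteration abstraction can be loosely illustrated in Python (an analogy only, not Sather code or Sather’s semantics): an “iter” encapsulates the looping logic inside the class that owns the data and yields elements into the caller’s loop, much as Python generators do, so the caller never touches traversal state.

```python
# A loose analogy for Sather-style iters using a Python generator.
class IntTree:
    def __init__(self, value, children=()):
        self.value, self.children = value, children

    def elements(self):            # plays the role of an element-yielding iter
        yield self.value
        for child in self.children:
            yield from child.elements()

tree = IntTree(1, (IntTree(2), IntTree(3, (IntTree(4),))))
print(sum(tree.elements()))        # the traversal logic stays inside the class
```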

Stephen M. Omohundro, “The Differences Between Sather and Eiffel”, Eiffel Outlook, 1:1, (April 1991).

Stephen M. Omohundro, “Sather’s Design“, Eiffel Outlook, 1:3, (June 1991).

Stephen M. Omohundro and Chu-Cheow Lim, “The Sather Language and Libraries“, ICSI Technical Report No. TR-92-017, 1992.

Stephen M. Omohundro, “Sather Provides Nonproprietary Access to Object-Oriented Programming”, Computers in Physics, Vol. 6, No. 5, September 1992, pp. 444-449.

Heinz Schmidt and Stephen M. Omohundro, “CLOS, Eiffel, and Sather: A Comparison“, in Object-Oriented Programming: The CLOS Perspective, ed. Andreas Paepcke, MIT Press, Cambridge, Massachusetts, 1993, pp. 181-213.

Stephen M. Omohundro, “The Sather Programming Language“, Dr. Dobb’s Journal, Volume 18, Issue 11, October 1993, p. 42 and in Dr. Dobb’s Sourcebook for Alternative Programming Languages.

Clemens Szyperski, Stephen M. Omohundro, and Stephan Murer, “Engineering a Programming Language: The Type and Class System of Sather“, in Jurg Gutknecht, ed. Programming Languages and System Architectures, Lecture Notes in Computer Science Volume 782, Springer-Verlag, Berlin (1994) pp. 208-227.

David Stoutamire and Stephen M. Omohundro, “The Sather 1.1 Specification“, International Computer Science Institute Technical Report TR-95-057, October 1995.

Stephan Murer, Stephen M. Omohundro, David Stoutamire, and Clemens Szyperski, “Iteration Abstraction in Sather“, Transactions on Programming Languages and Systems, Volume 18, Number 1, January 1996, pp. 1-15.

Talk: The Sather 0.2 Language and Libraries

Talk: The Sather 1.0 Language and Libraries

Model Merging and Grammar Learning

Steve developed a widely applicable learning algorithm that he called “model merging”. It begins with single-instance learning and nearest-neighbor generalization. As more data is collected, it builds local models and smoothly interpolates between them. These models are merged together in a best-first order when the data supports it. The approach avoids overfitting while learning quickly and efficiently. He applied it to a wide variety of domains (a toy sketch of the merging loop follows the reference below):

Stephen M. Omohundro, “Best-First Model Merging for Dynamic Learning and Recognition” in Moody, J. E., Hanson, S. J., and Lippmann, R. P., (eds.) Advances in Neural Information Processing Systems 4, pp. 958-965, San Mateo, CA: Morgan Kaufmann Publishers, (1992).
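
The best-first merging loop can be illustrated with a toy one-dimensional sketch (a simplified stand-in that uses a scatter-based merge cost rather than the paper’s likelihood-based criterion; all names are hypothetical).

```python
# Start with one tiny model per data point and repeatedly perform the
# cheapest merge until no merge is cheap enough.
import numpy as np

def best_first_merge(xs, max_cost=1.0):
    # each local model: (count n, mean m, sum of squared deviations s)
    models = [(1, float(x), 0.0) for x in xs]

    def merge(a, b):
        n = a[0] + b[0]
        m = (a[0] * a[1] + b[0] * b[1]) / n
        s = a[2] + b[2] + a[0] * b[0] / n * (a[1] - b[1]) ** 2
        return (n, m, s)

    while len(models) > 1:
        # best-first: consider every candidate pair and take the cheapest merge,
        # where the cost is the extra scatter the merged model must absorb
        cost, i, j = min(
            (merge(models[i], models[j])[2] - models[i][2] - models[j][2], i, j)
            for i in range(len(models)) for j in range(i + 1, len(models)))
        if cost > max_cost:          # the data no longer supports merging
            break
        models = ([m for k, m in enumerate(models) if k not in (i, j)]
                  + [merge(models[i], models[j])])
    return models

data = np.concatenate([np.random.normal(0, 0.2, 30), np.random.normal(5, 0.2, 30)])
print(len(best_first_merge(data)))   # typically recovers the two underlying clusters
```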

Andreas Stolcke and Steve applied the basic model merging idea to learning stochastic grammars. They were able to learn the structure of Hidden Markov Models and other stochastic grammars much more effectively than competing approaches:

Andreas Stolcke and Stephen M. Omohundro, “Hidden Markov Model Induction by Bayesian Model Merging”, in Advances in Neural Information Processing Systems 5, eds. Stephen J. Hanson, Jack D. Cowan, and C. Lee Giles, Morgan Kaufmann Publishers, Inc., San Mateo, California, 1993, pp. 11-18.

Andreas Stolcke and Stephen M. Omohundro, “Best-first Model Merging for Hidden Markov Model Induction”, ICSI Technical Report TR-94-003, January 1994.

Andreas Stolcke and Stephen M. Omohundro, “Inducing Probabilistic Grammars by Bayesian Model Merging“, Proceedings of the International Colloquium on Grammatical Inference, Alicante, Spain, Lecture Notes in Artificial Intelligence 862, Springer-Verlag, Berlin, September 1994, pp. 106-118.

Andreas Stolcke and Stephen M. Omohundro, “Model Merging for Hidden Markov Model Induction“, NEC Research Institute, Inc. Technical Report, 1996.

Talk Video: Steve Omohundro talk on “Learning and Recognition by Model Merging”, Part 1

Talk Video: Steve Omohundro talk on “Learning and Recognition by Model Merging”, Part 2

Image Search

Steve and colleagues at NEC Research built the PicHunter image database search system. They studied a number of aspects of learning a user’s intentions from the queries the user makes (a small sketch of this style of Bayesian relevance feedback follows the references below):

I. J. Cox, M. L. Miller, S. M. Omohundro, and P. N. Yianilos, “Target Testing and the PicHunter Bayesian Multimedia Retrieval System“, in the Proceedings of the 3rd Forum on Research and Technology Advances in Digital Libraries, DL’96, 1996, pp. 66-75.

Ingemar J. Cox, Matt L. Miller, Stephen M. Omohundro, and Peter N. Yianilos, “PicHunter: Bayesian Relevance Feedback for Image Retrieval“, in the Proceedings of the 13th International Conference on Pattern Recognition, 1996, pp. 361-369.

U.S. Patent 5,696,964, “Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution That Each Item in the Database is a Target of a Search“, Ingemar J. Cox, Matthew L. Miller, Stephen M. Omohundro, and P. N. Yianilos, granted December 9, 1997, assigned to NEC Research Institute, Inc.

T. P. Minka, M. L. Miller, I. J. Cox, P. N. Yianilos, S. M. Omohundro, “Toward Optimal Search of Image Databases“, in Proceedings of the International Conference on Computer Vision and Pattern Recognition, 1998.
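
The core idea of maintaining a posterior over which database item is the user’s target can be illustrated with a small sketch (the toy feature representation and user model below are simplified assumptions, not those of the actual PicHunter system).

```python
# Keep a posterior over which database item is the user's target and sharpen
# it after each round of display-and-choose.
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((1000, 8))         # toy feature vectors for 1000 images
posterior = np.full(1000, 1 / 1000)      # uniform prior over possible targets

def choice_likelihood(shown, picked_pos, sigma=0.3):
    """P(user picks shown[picked_pos] | target = i), for every item i:
    a softmax over how similar each displayed image is to that target."""
    d = np.linalg.norm(features[shown][:, None, :] - features[None, :, :], axis=2)
    weights = np.exp(-d / sigma)          # shape (len(shown), n_items)
    return weights[picked_pos] / weights.sum(axis=0)

target = 123                              # the user's hidden target image
for _ in range(10):
    shown = rng.choice(1000, size=4, replace=False)
    # simulated user: pick the displayed image closest to the target
    picked_pos = np.argmin(np.linalg.norm(features[shown] - features[target], axis=1))
    posterior *= choice_likelihood(shown, picked_pos)
    posterior /= posterior.sum()

print(posterior.argsort()[-5:])           # the target typically ranks near the top
```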

Family Discovery Learning Algorithm

Steve developed a powerful set of learning algorithms for the task he calls “family discovery”. The task is to learn a parameterized family of models and to discover the dimension and structure of the parameterization. It is applicable to learning accents and gender in speech recognition, fonts or handwriting styles in optical character recognition, body parameters in robotics, and a wide variety of other important learning tasks (a toy sketch follows the references below):

Stephen M. Omohundro, “Family Discovery“, in Advances in Neural Information Processing Systems 8, eds. D. S. Touretzky, M. C. Mozer and M. E. Hasselmo, MIT Press, Cambridge, MA, 1996.

Talk: The Family Discovery Learning Algorithm
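
The family-discovery idea can be illustrated with a toy sketch (a simplified illustration, not the paper’s algorithms): fit a small model to each data source independently, then examine how the fitted parameter vectors themselves vary across sources; the number of significant directions of variation estimates the dimension of the family.

```python
# Each data source is a noisy line y = a*x + b whose coefficients (a, b)
# secretly lie on a one-dimensional family.
import numpy as np

rng = np.random.default_rng(1)
fitted = []
for _ in range(50):                          # 50 separate data sources
    t = rng.random()                         # hidden family parameter
    a, b = 2 * t + 1, -3 * t + 0.5           # (a, b) trace out a line as t varies
    x = rng.random(40)
    y = a * x + b + 0.01 * rng.standard_normal(40)
    fitted.append(np.polyfit(x, y, 1))       # fit each source's own model

P = np.array(fitted)                         # one fitted parameter vector per source
sing_vals = np.linalg.svd(P - P.mean(axis=0), compute_uv=False)
print(sing_vals)   # one dominant singular value: the family is one-dimensional
```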

Dynamical Systems Theory

The theory of dynamical systems revolutionized our understanding of chaos and of the ways that behavior changes as parameters vary. Steve contributed to various aspects of this theory, some of which are described in his book:

Stephen M. Omohundro, Geometric Perturbation Theory in Physics, World Scientific Publishing Co. Pte. Ltd., Singapore (1986) 560 pages. ISBN 9971-50-136-8

The following paper shows that there are smooth partial differential equations which can stably perform universal computation. Amazingly, this means that their asymptotic behavior is logically undecidable:

Stephen M. Omohundro, “Modelling Cellular Automata with Partial Differential Equations”, Physica D, 10D (1984) 128-134.

The following paper shows that the orbits of three-dimensional period doubling systems can form an infinite number of topologically distinct torus knots and describes the structure of the stable and unstable manifolds of these systems:

John David Crawford and Stephen M. Omohundro, “On the Global Structure of Period Doubling Flows”, Physica D, 12D (1984), pp. 161-180.

This book review discusses the amazing complexity of the Lorenz equations:

Stephen M. Omohundro, Book Review of “Bifurcations and Chaos in the Lorenz Equations” by Colin Sparrow, Mathematical Biosciences 68:2 (1984).

This survey talk highlights ten important ideas from nonlinear dynamical systems with relevance to robotics, machine vision, and stochastic search: 1) Manifolds, 2) Defects, 3) Structural stability, 4) Dynamics, 5) Bifurcation theory, 6) Chaos, 7) Hamiltonian systems, 8) Renormalization, 9) Curvature, and 10) Adiabatic invariance:

Talk: Ten Ideas From Nonlinear Dynamics

Hamiltonian Physics

General relativity came from Einstein’s recognition of the natural Riemannian geometry of spacetime. The natural geometry of Hamiltonian physical systems is “symplectic geometry”. Over the last few decades, mathematical physics has uncovered intrinsic Hamiltonian geometric structure in virtually every area of physics. Steve contributed to this revolution by discovering natural symplectic structures underlying many of the perturbation methods of physics. He also discovered the symplectic structure of thermodynamics and showed how it arises asymptotically from statistical mechanics in the same way that the symplectic structure of classical mechanics arises asymptotically from quantum mechanics. The full details are presented in his book (the standard symplectic formulation of Hamilton’s equations is recalled after the book reference below):

Stephen M. Omohundro, Geometric Perturbation Theory in Physics, World Scientific Publishing Co. Pte. Ltd., Singapore (1986) 560 pages. ISBN 9971-50-136-8
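
For readers unfamiliar with the terminology, the standard symplectic formulation of Hamiltonian mechanics (textbook background, not material specific to the book) is:

```latex
% A symplectic manifold (M, \omega) carries a closed, nondegenerate 2-form
% \omega; a Hamiltonian H determines the dynamics through the vector field
% X_H obtained by contracting it with \omega.
\[
  \iota_{X_H}\omega = dH, \qquad
  \omega = \sum_i dq^i \wedge dp_i, \qquad
  \dot q^i = \frac{\partial H}{\partial p_i}, \quad
  \dot p_i = -\frac{\partial H}{\partial q^i}.
\]
```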

This paper describes the overall framework of Hamiltonian perturbation theory:

Stephen M. Omohundro, “Geometric Hamiltonian Structures and Perturbation Theory”, in Local and Global Methods of Nonlinear Dynamics, Proceedings of a Workshop, Silver Spring, Maryland, 1984, Lecture Notes in Physics Volume 252, Springer-Verlag, Berlin (1986) 91-120.

This paper surveys the Hamiltonian structures which arise in plasma physics:

Alan Weinstein and Stephen M. Omohundro, “A Geometric Approach to the Equations of Plasma Physics, notes on a lecture by Alan Weinstein”, in Seminar on Partial Differential Equations edited by S. S. Chern, Springer-Verlag (1984).

These three talks focus on different aspects of this work:

Talk: “The Geometry of Averaging”

Talk: “A Hamiltonian Approach to Perturbation Theory”

Talk: “Symmetry, Its Breaking, and Hierarchical Levels”

Miscellaneous Contributions

Steve contributed innovative ideas to many other projects, some of which are described in these publications:

Stephen M. Omohundro, “Some Topological Excitations in Physics”, Department of Physics Honors Thesis, Stanford University.

Peter Blicher and Stephen M. Omohundro, “Unique Recovery of Motion and Optic Flow via Lie Algebras”, Proceedings of the Ninth International Joint Conference on Artificial Intelligence (1985).

Bartlett Mel, Stephen Omohundro, Arch Robison, Steven Skiena, Kurt Thearling, Luke Young, and Stephen Wolfram, “Tablet: Personal Computer in the Year 2000”, Communications of the ACM, 31:6 (1988) 638-646.

Bartlett Mel, Stephen Omohundro, Arch Robison, Steven Skiena, Kurt Thearling, Luke Young, and Stephen Wolfram, “Academic Computing in the Year 2000”, Academic Computing, 2:7 (1988) 7-62.

Bartlett Mel and Stephen M. Omohundro, “How Receptive Field Parameters Affect Neural Learning” in Advances in Neural Information Processing Systems 3, edited by Lippmann, Moody, and Touretzky, Morgan Kaufmann Publishers, Inc. (1991) 757-766.

Stephen M. Omohundro, “Toward a Synthesis of Symbolic AI and Connectionism”, in the Proceedings of the BMFT-Statusseminar on Artificial Intelligence, edited by Gottfried Wolf, Bundesministerium für Forschung und Technologie, Berlin, April 1993, pp. 3-15.

Stephen M. Omohundro and A. Peter Blicher, “RCL: A C++ Library and Programming Discipline for Research“, Technical Report 97-079, NEC Research Institute, Princeton, NJ, 1997.

Ian Stewart, “A Puzzle for Pirates”, Mathematical Recreations, Scientific American, May 1999, pp. 98-99, is devoted to a problem in game theory that Steve discovered.

He is a co-inventor of U.S. Patent 7,775,439 B2: Donald G. Kimber, Feng Guo, Eleanor G. Rieffel, Kazumasa Murai, and Stephen Omohundro, “Feature Wands for Camera Calibration and as a Gesture Based 3D Interface Device”, granted August 17, 2010.

Technote: Floyd-Steinberg Dithering

Technote: Bayesian Segmentation of Dot Pictures

Technote: An Equilateral Triangle Recognition System