Recent advances in artificial intelligence, driven largely by foundation models, have been impressive. However, achieving artificial general intelligence, which requires human-level performance across a wide range of tasks, remains a major challenge. A key missing ingredient is a formal description of what autonomous systems need in order to self-improve endlessly toward increasingly creative and diverse discoveries. In other words, a "Cambrian explosion" of new capabilities, i.e., the creation of unconstrained, constantly self-improving AI, remains elusive. This kind of unconstrained invention is how humans and societies accumulate new knowledge and technology, and it is essential for artificial superhuman intelligence.
DeepMind researchers propose a specific, formal definition of open-endedness in AI systems in terms of novelty and learnability. They chart a path toward artificial superhuman intelligence (ASI) through open-ended systems built on foundation models. Such open-ended systems can make robust, relevant discoveries that are understandable and useful to humans. The researchers argue that this kind of open-endedness, enabled by combining foundation models with open-ended algorithms, is an essential property of any ASI system that is to continually expand its capabilities and knowledge in ways humanity can use.
The researchers present a formal definition of open-endedness from the observer's perspective: an open-ended system generates a sequence of artifacts that are both novel and learnable. Novelty means that artifacts become increasingly unpredictable under the observer's model over time. Learnability requires that future artifacts become more predictable when conditioned on a longer history of past artifacts. The observer uses a statistical model to predict future artifacts from history, and a loss metric to judge prediction quality. Interestingness is expressed through the observer's choice of loss function, which captures the features the observer considers worth learning. This formal definition quantifies the key intuition that an open-ended system generates an endless stream of artifacts that are both novel and meaningful to the observer.
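The two conditions can be made concrete with a toy sketch. The code below is illustrative only and not from the paper: the observer is a simple first-order Markov predictor with 0-1 loss, and the "artifacts" are a stream whose cyclic pattern keeps lengthening, so genuinely new transitions keep appearing. An observer frozen after an early prefix finds later artifacts unpredictable (novelty), while an observer that keeps conditioning on the growing history predicts them far better (learnability).

```python
from collections import Counter, defaultdict

def stream(max_m=6, reps=4):
    """Toy artifact stream: cycles 0..m-1 with the period m growing
    over time, so genuinely new transitions keep appearing."""
    out = []
    for m in range(2, max_m + 1):
        out.extend([t % m for t in range(m * reps)])
    return out

class MarkovObserver:
    """Observer model: predict the next artifact as the most frequent
    successor of the current one in the history seen so far."""
    def __init__(self):
        self.counts = defaultdict(Counter)
        self.prev = None

    def update(self, x):
        if self.prev is not None:
            self.counts[self.prev][x] += 1
        self.prev = x

    def predict(self, x):
        if self.counts[x]:
            return self.counts[x].most_common(1)[0][0]
        return None  # no basis for prediction yet

def zero_one_loss(pred, actual):
    return 0.0 if pred == actual else 1.0

xs = stream()

# Learnability: an observer conditioning on an ever-growing history.
online = MarkovObserver()
online_losses = []
for i in range(1, len(xs)):
    online.update(xs[i - 1])  # history now covers xs[:i]
    online_losses.append(zero_one_loss(online.predict(xs[i - 1]), xs[i]))

# Novelty: freeze an observer after an early prefix; later artifacts
# contain transitions it has never seen and cannot predict.
cut = 8  # the entire first (period-2) segment
frozen = MarkovObserver()
for x in xs[:cut]:
    frozen.update(x)
frozen_losses = [zero_one_loss(frozen.predict(xs[i - 1]), xs[i])
                 for i in range(cut, len(xs))]

print("frozen-model loss on later artifacts:", sum(frozen_losses))
print("online-model loss on the same span:", sum(online_losses[cut - 1:]))
```

Both inequalities hold by construction here: the frozen model's loss stays high on the later artifacts, while the online model, given the longer history, keeps driving its loss down on the same span.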
The researchers argue that although continuing to scale foundation models trained on passive data can yield further improvements, this approach alone cannot achieve ASI. Open-endedness, the ability to endlessly generate new but learnable artifacts, is an essential property of any ASI system. Foundation models provide powerful base capabilities, but they must be combined with open-ended algorithms to enable the continuous, experiential learning that true open-endedness requires. Taking inspiration from the scientific method of forming hypotheses, experimenting, and codifying new knowledge, the researchers outline four overlapping pathways toward open-ended foundation models. This paradigm of actively compiling online datasets through open-ended exploration may be the fastest route to ASI.
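The hypothesize-experiment-codify cycle described above can be sketched as a control loop. The skeleton below is a hedged illustration, not the paper's algorithm: `propose`, `evaluate`, and the numeric "artifacts" are toy stand-ins for a foundation model generating candidates, an experiment testing them, and the online dataset that codified discoveries feed back into.

```python
import random

def propose(archive, rng):
    """Hypothesis step: generate a candidate artifact by varying a past
    one. A real system would use a foundation model here (toy stub)."""
    base = rng.choice(archive) if archive else 0
    return base + rng.choice([1, 2, 3])

def evaluate(candidate, archive):
    """Experiment step: accept artifacts that are new but close enough
    to the archive to be learnable (a crude novelty/learnability proxy)."""
    novel = candidate not in archive
    learnable = not archive or min(abs(candidate - a) for a in archive) <= 3
    return novel and learnable

def open_ended_loop(steps=200, seed=0):
    rng = random.Random(seed)
    archive = []  # the growing "online dataset" of codified discoveries
    for _ in range(steps):
        candidate = propose(archive, rng)
        if evaluate(candidate, archive):
            archive.append(candidate)  # codify: becomes future training data
    return archive

archive = open_ended_loop()
print(len(archive), "distinct artifacts discovered")
```

The design point the loop illustrates is that the dataset is not passive: each accepted artifact changes what the system proposes next, which is what distinguishes open-ended exploration from training on a fixed corpus.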
With the emergence of strong foundation models, they believe the design of truly general-purpose open-ended learning systems is now feasible. However, the vast capabilities of such open-ended AI systems also entail significant safety risks that go beyond the existing concerns around foundation models alone. They emphasize that solutions to these safety challenges may depend on the specific design of the open-ended system and must therefore be pursued in parallel with the development of open-endedness itself. They outline key risk areas related to how knowledge is created and transferred in the human-AI interaction loop. Addressing these fundamental safety issues is not just about mitigating shortcomings; it is about ensuring that open-ended systems meet minimum usability specifications so that they benefit humanity.
In this study, the researchers contend that combining foundation models with open-ended algorithms offers a promising path to ASI. Foundation models are highly capable, but on their own they are limited in their ability to discover truly new knowledge. Developing open-ended systems that can endlessly generate novel yet learnable artifacts could make ASI a reality and greatly advance scientific and technological progress. However, such powerful open-ended AI systems also raise new safety concerns that must be carefully addressed through responsible development focused on keeping artifacts human-interpretable. If these challenges can be overcome, open-ended foundation models could bring enormous benefits to society.
Check out the paper. All credit for this research goes to the researchers of this project.
Asjad is an Intern Consultant at Marktechpost. He is pursuing a B.Tech in Mechanical Engineering from Indian Institute of Technology Kharagpur. Asjad is an avid advocate of Machine Learning and Deep Learning and is constantly exploring the application of Machine Learning in Healthcare.