No, the United Kingdom did not resolve every policy issue related to artificial intelligence (AI) at last week’s UK AI Safety Summit. But as delegates from around the world gathered on the outskirts of London to discuss the policy implications of major advances in machine learning, British officials orchestrated a genuine diplomatic breakthrough, one that put the world on a path toward reducing the risks of this rapidly evolving technology and securing greater benefits from its progress.
The summit, hosted by Prime Minister Rishi Sunak, defied expectations on several fronts. For the first time, British leaders brought together senior government officials, executives from leading AI companies, and civil society leaders to lay the foundations of an international AI safety regime. The results included a joint pledge by twenty-eight governments and leading AI companies to subject advanced AI models to a series of safety tests before release, the announcement of a new UK-based AI Safety Institute, and a major push for regular, scientist-led assessments of AI capabilities and safety risks.
Discussions also began to chart the long and winding road ahead. Neither technological breakthroughs nor summit agreements alone can strike a sensible balance between risk management and innovation. Meeting this global challenge also demands skillful diplomacy and the pragmatic design of institutional arrangements, akin to the processes that govern international aviation safety. Both are likely to be in short supply at a moment when major crises are raging in Ukraine and the Middle East.
Despite hallway conversations about these geopolitical issues, summit participants were spurred to action by a shared recognition that cutting-edge AI systems are advancing at astonishing speed. The amount of computing power used to train AI systems has expanded roughly 55 million times over the past decade. The next generation of so-called frontier models, which could be available as early as next year, may use perhaps ten times more computing power for training than OpenAI’s GPT-4, and could pose new risks to society unless appropriate safeguards and policy responses are quickly established. Even in current-generation AI systems, guardrails appear to be thwarted too often, helping malicious actors craft disinformation and design malicious code more effectively. One prominent private sector representative familiar with the front lines of AI development suggested that between 2025 and 2030, emerging systems could pose deception risks that are difficult for humans to control.
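To put that pace in perspective, a back-of-the-envelope calculation (assuming the 55-million-fold figure and a steady exponential trend, both simplifications) implies that training compute has doubled roughly every four to five months:

$$\log_2(5.5\times 10^7)\approx 25.7\ \text{doublings}, \qquad \frac{120\ \text{months}}{25.7}\approx 4.7\ \text{months per doubling}.$$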
In light of these risks, the progress made at the summit is nothing short of a major diplomatic achievement. The UK persuaded China and major developing countries, including Brazil, India, and Indonesia, as well as the EU and the United States, to sign the joint commitment on pre-deployment testing. The UK and the United States each announced the establishment of an AI safety institute, the first two in an envisioned worldwide network of such centers. Just as important, the summit generated support for an international panel of scientists, convened under AI pioneer Yoshua Bengio, to produce a report on AI safety. This panel could be the first step toward a permanent body dedicated to providing the international community with scientific assessments of the current and projected capabilities of advanced AI models.
The summit also galvanized other jurisdictions toward faster and potentially more comprehensive action. Days before the summit, the White House issued a sweeping executive order that includes requirements for certain companies to disclose training runs and testing information to the government (as recommended in a recent Carnegie paper) for advanced AI models that could threaten national security. The Frontier Model Forum, founded by Anthropic, Google, Microsoft, and OpenAI to share AI safety information and strengthen best practices, named its first director. The G7, operating under the auspices of the Japanese-led Hiroshima Process, published a draft code of conduct to guide organizations as they develop and deploy advanced AI systems. And the United Nations convened an international expert committee to advise the Secretary-General on AI governance.
While policymakers are still debating how best to combine these efforts, the relationships and trust built among stakeholders in the lead-up to and during the UK summit are arguably just as important as the public commitments made. Ministers responsible for digital policy, many of them the first in their countries to hold such positions, mingled with diplomats, entrepreneurs, private sector leaders including Elon Musk, researchers, and civil society representatives, many of whom were meeting for the first time. The agreement by South Korea and France to host the next two summits is commendable and will be critical to strengthening these new relationships and fostering further progress on specific policy issues. These include how to evaluate the growing capabilities of AI models and how to design institutions that will shape the world’s ability to expand access to frontier-level AI technologies without increasing the risk of abuse.
Participants’ discussions around these issues also revealed much about the new rhythms, and the complexity, of twenty-first-century technology diplomacy, including the critical role that institutions like the Carnegie Endowment can play as connective tissue in brokering diplomatic breakthroughs. Carnegie staff worked behind the scenes with the UK to support elements of the summit and identify key issues. We were at the forefront of conceptualizing and advocating for international expert panels to validate technical knowledge, build greater scientific consensus, and engage countries from all parts of the world. We helped envision the potential of the new AI safety institutes and advised on how to maximize their chances of success. And we pressed for an international commitment to test sophisticated AI models before they are released.
Much technical and standards-setting work remains to ensure a path for humanity to take full advantage of cutting-edge AI technologies. Challenges include creating “tripwires” that subject certain models to sophisticated monitoring and constraints, and developing AI safety research that more thoroughly incorporates the complexities of human-AI interaction. Another challenge is understanding how frontier AI technologies will behave when they are ultimately incorporated into billions of automated problem-solving software “agents” that interact with one another to meet human demands.
Advancing a strong agenda to address these issues will require a combination of nuance, coalition building, and institutional design. Despite relative agreement among participants on a variety of issues, such as the need to pay close attention to the proliferation risks of lethal autonomous weapons, the AI safety community holds differing views on how to handle forthcoming advanced open source models, which could give rise to disinformation and national security challenges. While most of the community recognizes the serious risks of fully open source models, a minority of purists remains wedded to open source orthodoxy. More accessible models carry greater potential for abuse, but they may also help prevent the concentration of economic power in a few companies.
Participants also differed on how far the scope of the policy agenda should be expanded. Some urged that, in the pursuit of managing catastrophic risks that may seem more abstract, tangible challenges such as bias, disinformation, and potential labor market disruption not be lost from view. Others focused on ensuring that citizens of developing and wealthy countries alike can fully benefit from the promise of AI and technology transfer, on avoiding discrimination, and on finding ways for AI to support participatory governance and development. Although few participants denied the importance of these issues, there was considerable debate about how to address them in international and domestic policymaking forums.
The final closed-door session brought together Sunak, U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen, Italian Prime Minister Giorgia Meloni, the CEOs of frontier AI labs and major technology companies, and a handful of civil society groups, including Carnegie, to focus on the most difficult questions. These dilemmas include how best to define the thresholds of capability and model complexity that make an AI system dangerous; how to engage all countries around the world, including China, in productive AI policy discussions; how to incorporate human values into AI systems when people and cultures strongly disagree about ideals; and how to “trust and verify” that countries that agree to work together to make AI safer will follow through with reasonable action. A broader question loomed in the background: how cutting-edge AI technology, like the internet before it, could upend assumptions about the coalitions and ideas that will drive political, economic, and social change in the coming decade.
These are challenges that Carnegie continues to explore in its own AI-focused efforts: how to balance the benefits of freely shared open source AI models with effective policies that limit proliferation risks; how to leverage existing laws to impose civil liability on AI systems without unnecessarily stifling innovation; and how democracies can benefit from AI while reducing the risk of misinformation. Also on the agenda is how to bring into the sometimes cacophonous conversations about AI’s potential to overturn assumptions and open new possibilities both the governments of developing countries, which represent billions of people asking for participation, and the world’s middle class, whose livelihoods are likely to depend on their relationships with these models.
The summit itself opened new possibilities with the announcement of new research institutes, testing agreements, and a process for producing scientific reports. The choice of Bletchley Park as the venue carried a fitting symbolism: it is a place associated with enormous technological advances that served the cause of peace. It was at Bletchley Park that Alan Turing and his colleagues used early computing power to crack the Nazi Enigma code, reportedly shortening World War II by months or even years. In the years after the war, Turing began to explore what it meant for machines to be “intelligent.” The world in which he explored these questions faced its own difficult challenges of institutional design and diplomacy. Governments sought to maintain peace by establishing institutions such as the United Nations and NATO, and to expand prosperity, however imperfectly, through specialized bodies such as the Bretton Woods system and the International Civil Aviation Organization.
Leaders attending the UK summit now face similar questions in a new era of geopolitical change and technological innovation. As they envision the next chapters of global AI policy, they would do well to remember that the well-being of a planet filled with increasingly powerful AI systems will depend, as ever, on insightful questions, smart diplomacy, and cleverly designed institutions.