Last year, the UK Government hosted the first major global Summit on frontier AI safety at Bletchley Park. It focused the world's attention on rapid progress at the frontier of AI development and delivered concrete international action to respond to potential future risks, including the Bletchley Declaration, new AI Safety Institutes, and the International Scientific Report on the Safety of Advanced AI.
Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at the
AI Seoul Summit. The Report covers the emergence of general-purpose AI – AI systems capable of performing a wide variety of tasks – and highlights both the immense potential benefits and the significant risks associated with this technology.
General-purpose AI represents a major leap forward in AI capabilities. Just a few years ago, the most advanced AI language models struggled to produce coherent paragraphs. Today, state-of-the-art general-purpose AI can engage in multi-turn conversations, write computer programs, and even generate videos from text descriptions. This rapid progress has been driven by continuous increases in computing power, training data, and algorithmic efficiency.
However, the future trajectory of general-purpose AI is highly uncertain. Experts disagree on whether continued scaling of existing techniques will be sufficient to drive rapid progress, or whether new research breakthroughs will be required. This uncertainty makes it difficult to predict how quickly general-purpose AI capabilities will advance in the coming years.
Assessing the Risks
As general-purpose AI systems become more capable, it is crucial that we develop a thorough understanding of the risks they may pose. However, our current understanding of how these systems operate is limited, making it challenging to reliably assess their capabilities and potential impacts.
Some key risks associated with general-purpose AI include:
- Malicious use: General-purpose AI could be used to create sophisticated scams, disinformation, cyberattacks, or even biological weapons.
- Malfunctions: Biased outputs, unintended behaviors, or loss of control over autonomous systems could lead to serious harm.
- Systemic risks: Widespread adoption of general-purpose AI could disrupt labor markets, concentrate power in the hands of a few actors, and contribute to privacy violations and environmental damage.
Underpinning these risks are several cross-cutting factors, such as the difficulty of ensuring reliable behavior, the lack of transparency in how these systems work, and the potential for rapid technological progress to outpace regulatory responses.
Mitigating the Risks
Researchers are making progress on technical approaches to mitigate the risks posed by general-purpose AI, such as:
- Training models to be more robust and behave more safely
- Monitoring deployed systems to identify risks and evaluate performance
- Developing techniques to reduce bias and protect privacy
However, no currently known method can provide strong assurances against all potential harms. Entirely preventing risks like bias is extremely challenging, and techniques for protecting privacy often struggle to scale to the largest general-purpose AI models.
Shaping the Future of AI
Ultimately, the future of general-purpose AI will be shaped by the choices we make as a society. Governments, companies, and individuals all have a role to play in determining how this technology is developed and deployed. Some key questions we must grapple with include:
- How can we ensure that the benefits of general-purpose AI are widely shared?
- What safeguards and governance structures are needed to mitigate risks?
- How much should we invest in research to better understand and manage the impacts of general-purpose AI?
The stakes are high, and the path forward is uncertain. But by working to build a shared understanding of the challenges and opportunities ahead, we can strive to create a future in which general-purpose AI advances the public good while its risks are carefully managed. Nothing is inevitable – the future of AI will be what we make of it.
Full Report:
International Scientific Report on the Safety of Advanced AI