In anticipation of the upcoming AI safety summit in Seoul, South Korea, the United Kingdom is intensifying its initiatives to address AI risk. The AI Safety Institute, established by the UK in November 2023 to evaluate and mitigate risks associated with AI platforms, has announced plans to launch a second location in San Francisco.
This strategic move aims to position the institute closer to the hub of AI innovation, where major players like OpenAI, Anthropic, Google, and Meta are based. These companies are at the forefront of developing foundational AI models, which underpin a wide array of generative AI services and applications. Despite the UK signing a Memorandum of Understanding (MOU) with the US to jointly address AI safety, the UK’s decision to establish a tangible presence in the US highlights the significance of directly engaging with the AI industry.
Michelle Donelan, the UK Secretary of State for Science, Innovation, and Technology, emphasized the benefits of this expansion in a TechCrunch interview. “By having people on the ground in San Francisco, we gain direct access to the headquarters of many key AI companies. While several of these firms have bases in the UK, having a local base in San Francisco opens up an additional talent pool and fosters closer collaboration with the US.”
Proximity to the epicenter of AI development will not only enhance the UK’s understanding of emerging technologies but also increase visibility and influence within the industry—a crucial point considering the UK views AI and technology as drivers of economic growth and investment. The recent upheavals at OpenAI, particularly concerning the Superalignment team, underscore the timely nature of establishing a presence in the area.
Currently, the AI Safety Institute operates with a modest team of 32 individuals, a stark contrast to the massive financial stakes and investments involved in developing AI models. Among its key undertakings is “Inspect,” a pioneering suite of tools released earlier this month for evaluating the safety of foundational AI models.
Donelan described this release as the initial phase of their efforts. Despite the voluntary and inconsistent nature of current model evaluations, the tools aim to set benchmarks for safety. As one senior UK regulator noted, companies are not legally required to submit their models for pre-release vetting, highlighting the challenge of preemptively identifying AI risks.
On engaging AI companies for evaluations, Donelan noted, “Our evaluation process is an evolving field. Each assessment helps us refine and enhance our methods.” The summit in Seoul will serve as a platform to introduce Inspect to global regulators, with hopes of wider adoption.
“Now that we have an evaluation system, the next phase is about ensuring AI safety across society at large,” Donelan added.
Looking ahead, Donelan anticipates more comprehensive AI legislation from the UK, aligning with Prime Minister Rishi Sunak’s approach of understanding AI risks before implementing regulations. “We believe that legislating without full comprehension is premature. Our recent international AI safety report highlighted significant research gaps, emphasizing the need for a global collaborative effort in AI research.”
Ian Hogarth, Chair of the AI Safety Institute, reiterated the importance of international cooperation. “Since the institute’s inception, we’ve stressed the value of a global approach to AI safety. Today marks a pivotal moment as we expand our operations to a tech-rich area, enhancing the expertise complemented by our London staff.”
By scaling their operations in San Francisco, the AI Safety Institute aims to advance its mission of addressing AI risk through international collaboration, research sharing, and model testing, reinforcing the UK’s proactive stance in the global AI landscape.