The European Union has revealed the structure of its new AI Office, an oversight and ecosystem-building body being established under the bloc’s AI Act. The risk-based regulatory framework for artificial intelligence is expected to enter into force before the end of July, following its recent approval by EU lawmakers. The AI Office will officially begin its operations on June 16.
Reflecting the EU’s ambitious vision for AI, the AI Office will be pivotal in shaping the European AI ecosystem by regulating AI risks and fostering innovation and uptake. Additionally, the bloc hopes the AI Office will influence global AI governance, as many countries look to establish their own frameworks. The Office will comprise five key units.
Here’s a breakdown of the five units of the EU’s AI Office:
- Regulation and Compliance: This unit will liaise with EU Member States to support the harmonized application and enforcement of the AI Act. It will contribute to investigations, address possible infringements, and administer sanctions. The unit will also produce templates for providers of General Purpose AI (GPAI) models, such as for summarizing the copyrighted material used to train these models.
- AI Safety: This unit focuses on identifying systemic risks associated with highly capable general-purpose models, such as those underpinning tools like ChatGPT. It will implement mitigation measures and conduct evaluation and testing. The AI Office will enforce the AI Act’s rules for GPAIs, including conducting evaluations and requesting information from AI companies.
- Excellence in AI and Robotics: This unit is dedicated to supporting and funding AI research and development (R&D). It will coordinate with the “GenAI4EU” initiative, which aims to promote the development and adoption of generative AI models, including upgrading Europe’s network of supercomputers.
- AI for Social Good: This unit will oversee the Office’s international engagement for projects where AI can have a positive societal impact, such as weather modeling and cancer diagnosis. This component aligns with the EU’s planned AI collaboration with the US on AI safety and risk research, including a focus on public good applications.
- AI Innovation and Policy Coordination: This unit ensures the execution of the EU’s AI strategy, including monitoring AI trends and investment, stimulating AI adoption through European Digital Innovation Hubs, and supporting regulatory sandboxes and real-world testing environments.
The composition of the AI Office, with three units focusing on AI uptake and innovation and two on regulatory compliance and safety, aims to reassure the industry that the EU’s AI rulebook is not anti-innovation. The EU believes that building trust in AI will foster its adoption.
The Commission has appointed several heads for the AI Office units: Lucilla Sioli (head of the AI Office), Kilian Gross (Regulation and Compliance unit), Cecile Huet (Excellence in AI and Robotics unit), Martin Bailey (AI for Social Good unit), and Malgorzata Nikowska (AI Innovation and Policy Coordination unit). The AI Safety unit’s chief and the lead scientific advisor roles remain vacant.
Established by a Commission decision in January and taking shape since late February, the AI Office sits within the EU’s digital department, DG Connect, currently headed by Internal Market Commissioner Thierry Breton. The AI Office will eventually employ over 140 people, including technical staff, lawyers, political scientists, and economists. The EU aims to ramp up hiring over the next few years as the law is phased in.
One upcoming task for the AI Office will be developing codes of practice and best practices for AI developers, which will serve as interim guidance while the legal framework is fully implemented.
Other responsibilities include liaising with various fora and expert bodies, such as the European Artificial Intelligence Board, a scientific panel of independent experts, and an advisory forum comprising industry stakeholders, startups, academia, think tanks, and civil society.
The first meeting of the AI Board is expected by the end of June. The AI Office is preparing guidelines on AI system definitions and prohibitions, due six months after the AI Act’s entry into force. The Office will also coordinate the creation of codes of practice for general-purpose AI models, due nine months after entry into force.
This report has been updated to include the names of confirmed appointments provided by the Commission.