EU’s Draft Code of Practice for General Purpose AI Models 

The European Union (EU) published the first draft of the Code of Practice for general-purpose AI (GPAI) providers, offering guidance on compliance with the EU AI Act's conformity assessment for AI developers.  

The EU’s AI Act entered into force on August 1 of this year to regulate AI technology, such as general-purpose AI (GPAI) models, through a risk-based framework, according to the European Commission.  

The EU AI Act’s implementation period allowed public feedback until November 28 of this year, with certain provisions of the conformity assessment targeting powerful AI models, including GPAIs, under the risk-based framework. 

GPAI Models and EU Regulations 

The Code of Practice draft applies specifically to GPAI models, AI systems used across various industries and applications, such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama. Providers of these models will be required to follow the guidance provided in the Code to avoid potential risks. 

The EU AI Act categorizes AI applications based on their associated risks, with higher-risk AI systems like GPAIs facing stricter regulations. These models are considered critical due to their broad capabilities and their potential to influence sectors ranging from healthcare to finance.  

The Code of Practice will provide standards to help companies like OpenAI, Google, and Anthropic meet these obligations. It is important to note that while the Code offers a framework for compliance, AI providers can opt for alternative compliance methods as long as they meet the law’s requirements. 

Draft Code of Practice Key Obligations 

The 36-page document includes a number of open questions aimed at managing systemic risks, including privacy violations and the misuse of AI. One major focus is transparency requirements for AI companies, especially in terms of their data usage. The draft also addresses systemic risks posed by powerful AI models.  

The EU AI Act requires providers of powerful AI systems to conduct risk assessments and mitigate those risks effectively. The potential risks stated in the draft include: 

  • Cybersecurity threats such as vulnerabilities in AI models. 
  • Disinformation and misinformation that could disrupt democratic processes. 
  • Large-scale discrimination and bias. 
  • Loss of control, particularly with autonomous AI systems. 

The Code also suggests that GPAI makers should be proactive in identifying and managing other emerging risks, such as privacy breaches or the use of AI for surveillance. 

In parallel, feedback is welcomed from both the AI industry and civil society to help refine the Code’s provisions. This step is important in resolving open questions about how to handle AI models, such as whether different rules should apply to small startups than to tech giants.  

Future enforcement of the EU AI Act may introduce additional compliance measures, focusing primarily on high-risk models. Additionally, providers will be expected to follow a Safety and Security Framework that includes risk forecasting.  

What About the Deadlines? 

As for the deadlines, the EU AI Act sets strict timelines for compliance. For GPAI providers, transparency measures will come into force by August 1, 2025, while stricter regulations for models with systemic risks will take effect on August 1, 2027.  

These measures include incident reporting requirements, under which companies must notify the AI Office of serious incidents that could result in harm or misuse of AI systems.  

The act also requires that, starting in 2027, GPAI makers report when their models are approaching systemic risk thresholds, providing estimated timeframes for when these risks might materialize. 

Final Thoughts 

The draft Code of Practice offers a clear direction for general-purpose AI providers aiming to meet the EU AI Act’s standards. While the draft is still in its early stages, it reflects the EU’s commitment to ensuring that powerful AI systems are developed responsibly and safely for users. As the consultation period continues, feedback from AI developers and the public will help shape the final version of the Code, set for release in May 2025. 
