The EU AI Act: A New Era for Artificial Intelligence Regulation

In a historic move, the European Union has reached a provisional agreement on the Artificial Intelligence Act. This sets the stage for the world's first comprehensive AI regulation.  

This landmark decision, born from marathon ‘trilogue’ negotiations between the Council of the EU, the European Parliament and the European Commission, has been two years in the making. The EU AI Act is poised to shape the future of AI in Europe and beyond, aiming to ensure AI systems are safe and respect fundamental rights while fostering innovation and investment.

This article explores the AI Act's key components, its impact on businesses, and what it means for the future of AI governance. 

An Overview of the Act 

The EU's AI Act introduces a risk-based approach to regulating AI, categorising systems as high-risk, limited-risk, and low-risk. It sets stringent rules for high-risk AI, focusing on safety and fundamental rights. The Act also prohibits certain AI uses deemed harmful, such as: 

  • biometric categorisation systems that use sensitive characteristics (e.g., political, religious, or philosophical beliefs, sexual orientation, or race); 

  • certain uses of predictive policing; 

  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; 

  • emotion recognition in the workplace and educational institutions; 

  • social scoring based on social behaviour or personal characteristics; 

  • AI systems that manipulate human behaviour “to circumvent their free will”; 

  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation); and 

  • real-time ‘one-to-many’ biometric identification in publicly accessible spaces, subject to narrow exceptions for law enforcement. 

Additionally, it establishes governance mechanisms, including an AI Office for oversight and enforcement, and mandates transparency and rights protection in AI deployment.  

This office will centralise AI regulation at the EU level, and an AI Board comprising Member States' representatives will be set up for coordination and advisory purposes. In terms of enforcement, the Act stipulates significant penalties for non-compliance, with fines of up to €35 million or 7% of global annual turnover for the most serious infringements. These measures aim to ensure uniform application and compliance across the EU. 

EU AI Act: Impact on UK Business 

Although the EU AI Act will not technically apply to UK businesses, any business with cross-border operations will want to take notice. The UK Government has committed to an ‘innovation-centric’ approach to regulating AI, outlined in its recent white paper. It believes that any legislation should be tech-agnostic and should leverage existing legal structures rather than creating AI-specific regulation.  

However, it is also likely that any UK legislation will echo the EU proposals to ensure ease of entry to the EU market. Both the Competition and Markets Authority (CMA) and the Information Commissioner's Office (ICO) have published guidance on the use of AI. 

As the EU AI Act sets new standards in AI regulation, businesses operating across borders need to stay informed about these evolving rules. Adapting to these changes will be crucial for future success and for avoiding AI-related compliance pitfalls. Keeping up to date on both EU and UK regulatory developments will be key for businesses to maintain a competitive advantage in the digital age. 
