
The bold gamble of regulating AI

29th January 2024

Following the political agreement reached by the Council of the EU and the European Parliament on the Artificial Intelligence Act (“EU AI Act”) on 8 December 2023, the EU is set to create a comprehensive legal framework for regulating AI systems across the bloc.

The EU’s approach

The EU recognises that the introduction and development of AI technologies are likely to bring a wide array of economic, societal and environmental benefits across numerous sectors. In particular, the emergence of generative AI, a class of AI systems that can produce a variety of content including text, imagery and audio, is set to challenge human production of artwork and informative texts.

Accordingly, the general purpose of the EU AI Act is to protect the fundamental rights of consumers whilst ensuring the safety of AI systems and providing a degree of certainty for providers and deployers. It is hoped that this approach will support continued innovation and development in AI.

Risk levels and AI uses

The EU has adopted a risk-based approach to the EU AI Act, setting different obligations for providers and deployers of AI systems depending on the level of risk that the relevant AI system poses to society.

The draft regulation explicitly prohibits some AI uses in the EU where the risk is deemed ‘unacceptable’. These include AI systems which deploy harmful, manipulative ‘subliminal techniques’ to subvert or circumvent a user’s free will, exploit human vulnerabilities arising from age, disability, or social or economic circumstances, or rank people based on their social and economic activity.

AI systems which could significantly harm the health, safety or fundamental rights of consumers are categorised as ‘high-risk’. These fall into two groups: systems used as a safety component of a product covered by EU health and safety harmonisation legislation, and systems deployed in eight specific areas, a list which could be expanded as necessary. These areas are:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes.

‘High-risk’ systems will be subject to a set of requirements before they can be placed on the EU market. These requirements include:

  • Registering providers of the systems in an EU-wide database
  • Ex-ante conformity assessments
  • Risk-mitigation systems
  • Testing
  • Technical robustness
  • Training data and data governance
  • Transparency
  • Human oversight

AI systems posing ‘limited’ risk will be subject to a lighter set of transparency obligations. Examples include systems that interact with humans, such as chatbots, and systems that generate or manipulate image, audio or video content, such as deepfakes.

Low or ‘minimal-risk’ AI systems can be developed and used in the EU without complying with any additional regulations. However, the EU AI Act envisages codes of conduct to encourage providers of minimal-risk AI systems to comply voluntarily with the requirements set out for higher-risk AI systems.

The UK’s approach

The UK government has set out a pro-innovation approach to AI governance. In the White Paper ‘A pro-innovation approach to AI regulation’, released in March 2023, the government set out its aim of building public trust in AI technology while balancing regulation with innovation. The government also intends to rely on existing regulators and other organisations, in the hope that rules can be adapted as quickly as AI technology develops.

The White Paper presents five principles that regulators will be expected to implement:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress.

It also confirms that ‘central support functions’ will be established to assist regulators in applying the principles. These central functions include evaluating the effectiveness and implementation of the principles, assessing risks across the economy, and providing education and awareness.

The UK’s approach is similar to that of the US, which is to allow developers to experiment first in a regulatory sandbox. Given the staggering pace of AI development, however, there is a real concern that AI systems will not be developed safely or deployed appropriately.

After all, the White Paper creates no new legal obligations on regulators, developers or deployers of AI, even though general-purpose AI systems are already woven into less regulated sectors, including retail, recruitment and employment, education, and policing.

Some of the principles and central functions laid out in the White Paper echo the EU’s position on AI technologies. Both jurisdictions evidently recognise the potential harm of some AI uses, the need to mitigate those risks, and the need to hold AI system developers and deployers accountable where necessary.

It is therefore likely that, as the UK firms up its new regulatory framework, it will adopt a similar approach of assessing the danger an AI system poses to society and imposing stricter rules where the risks are higher. This is particularly likely given the proximity of the EU market to UK-based developers and the fact that the EU AI Act will apply to UK entities that offer or deploy AI systems within the EU.

There is a real opportunity for the UK to become a leader in AI governance given its context-based model for regulating AI. However, the success of this approach will depend on the capability of existing regulators and on how effectively it provides redress where AI harms occur.
