Truth, Without Favour  ·  Est. 2025
National Herald
Technology

Government Unveils AI Regulation Framework as Tech Firms Warn of Brain Drain

The framework creates binding rules for the highest-risk AI applications while preserving regulatory flexibility — but critics say it is already behind the EU's approach.


The government has published its long-awaited framework for regulating artificial intelligence, setting out binding rules for the highest-risk AI applications while seeking to preserve the flexibility that UK regulators say has helped attract AI investment to Britain.

The framework, which will require primary legislation to be fully enacted, creates three tiers of AI regulation: prohibited applications (facial recognition in public spaces without judicial authorisation, social scoring systems), high-risk applications requiring pre-market assessment (medical diagnosis, criminal justice, credit scoring), and everything else, which will be regulated by existing sector-specific regulators.

How It Compares to the EU AI Act

The UK framework is less prescriptive than the EU AI Act, which came into force in 2024. The government argues this creates a competitive advantage; critics argue it creates a regulatory gap that could attract harmful applications to the UK market.

Rather than creating a new dedicated regulator, the framework assigns oversight to existing bodies: the Information Commissioner's Office, the Financial Conduct Authority and the Care Quality Commission will each regulate AI in their respective sectors. The approach has been widely welcomed by industry, though academics warn it may lead to inconsistent enforcement across sectors.

Industry Reaction

The response from the tech sector was mixed. Larger AI companies — which have the compliance capacity to handle regulation — broadly welcomed the framework's clarity. Startups expressed concern that even proportionate compliance costs could disadvantage them relative to US competitors operating without equivalent requirements.

Several senior AI researchers warned that without adequate research carve-outs, some fundamental AI safety research could migrate to the United States, where regulatory requirements are currently less stringent.

Dr. Maya Patel, Technology Editor
National Herald · Technology