UK mulling potential AI regulation

The country’s AI Safety Institute has been evaluating AI models for safety even without an official regulatory framework in place.

Officials at the UK’s Department of Science, Innovation and Technology have started drafting legislation to regulate AI models, Bloomberg reports. It’s unclear how any future regulation would intersect with the work of the UK’s existing AI Safety Institute, which already conducts safety tests on the most powerful AI models.

After hosting the first global AI Safety Summit at Bletchley Park in November 2023, which was attended by many world leaders, the UK established an AI Safety Institute that same month. The institute began evaluating AI models for safety this year, though some technology companies have asked for more clarity on testing timelines and on what would happen if a model were found to be unsafe. The UK has also agreed to conduct joint safety testing of models with the US.

However, the UK does not officially have a policy preventing companies from releasing AI models that have not been evaluated for safety. Neither does it have the power to pull any existing model from the market if it violates safety standards or to fine a company over those violations. (In comparison, the European Union’s AI Act can impose fines if AI companies violate certain safety benchmarks.)

Prime Minister Rishi Sunak has previously said there’s no need to “rush to regulate” AI models and platforms. Meanwhile, Bloomberg reports, other government officials have raised the possibility of amending the UK’s copyright rules to strengthen the opt-out option for training datasets. Any potential bill is still a ways off, according to Bloomberg.


Emilia David