While the United Nations hashes out regulations, the U.K.’s “context-based” approach is intended to spur innovation but may create uncertainty in the industry.
Attempts to create standards and regulations for the way generative AI intersects with many aspects of society are underway across the world. For instance, in March, the U.K. government released a white paper promoting the country as a place to “turbocharge growth” in AI. According to the white paper, 500,000 people in the U.K. are employed in the AI industry, and AI contributed £3.7 billion ($4.75 billion) to the national economy in 2022.
In response, on July 18, the independent research body the Ada Lovelace Institute published a lengthy report calling for a more “robust domestic policy” to regulate AI — legislation that clarifies and organizes the U.K.’s effort to promote AI as an industry.
“The UK’s diffuse legal and regulatory network for AI currently has significant gaps. Clearer rights and new institutions are needed to ensure that safeguards extend across the economy,” Matt Davies and Michael Birtwistle of the Ada Lovelace Institute wrote.
Both groups are essentially calling for more clarity around AI regulation, but the U.K. government is focusing on being “pro-innovation,” while the Ada Lovelace Institute promotes an emphasis on oversight. The U.K. government is also working on gradually shifting away from the GDPR as part of post-Brexit reshuffling.
The Ada Lovelace Institute’s recommendations include establishing clearer rights and new institutions to extend those safeguards across the economy.
Meanwhile, the U.K. prefers to let existing governmental bodies decide how to handle AI on a case-by-case basis. Specifically, the white paper recommends the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority work on their own “context-specific approaches” to generative AI.
Gerald Kierce Iturrioz, co-founder and chief executive officer at AI governance management platform Trustible, said his organization agrees with many of the Ada Lovelace Institute’s recommendations.
Governments that want to be pro-innovation should “clarify the legal gray areas such as use of data for training, how bias and fairness should be evaluated, and what the burden of proof standards should be,” he said in an email to TechRepublic.
“The U.K. must swiftly establish guardrails to ensure that AI systems are developed and used responsibly within the public sector,” Iturrioz said.
If the government doesn’t establish guardrails, more risks could arise. As an example, Iturrioz pointed to U.K. police use of automated facial recognition, a practice a University of Cambridge human rights study found last year to be ethically and legally questionable.
The U.K.’s relatively laissez-faire approach stands in contrast to the European Union’s focus on regulation. The EU is drafting an AI law that takes a risk-based approach, targeting practices such as bias, coercion and biometric identification, including automated facial recognition. In June, the European Parliament approved draft legislation for the AI Act, which establishes guidelines for the use of AI and forbids some uses, including real-time facial recognition in public places.
Representatives from countries across the world and from many of the leading AI makers raised similar concerns at the first United Nations Security Council meeting on the topic.
“The U.K. seems to be waiting to see how implementation and reception of the EU’s AI Act should influence their approach towards AI regulations,” said Iturrioz. “While this makes sense on the surface, there are risks to sitting back while others move ahead on AI regulation.”