Four regulatory bodies from the EU, U.K. and U.S. have released a joint statement of intent to examine whether the AI industry allows for sufficient competition. The four groups are the European Commission, the U.K. Competition and Markets Authority, the U.S. Department of Justice and the U.S. Federal Trade Commission.
The statement doesn’t suggest any overarching regulation or the creation of a new regulatory body.
“Our decisions will always remain sovereign and independent,” the statement reads. But, the organizations said, some cooperation is important because the risks stemming from the AI industry do not “respect international boundaries.”
The statement seeks to prevent risks to competition, such as the entrenchment of existing AI firms or ecosystems, increased barriers to entry and a lack of choice among buyers. The wording also leaves room for broader risks: “AI may be developed or wielded in ways that harm consumers, entrepreneurs, or other market participants.”
Other challenges in the AI industry include limited access to chips and close collaborations among key players. Regarding the latter, the CMA has until September to decide whether to investigate the transfer of key Inflection AI talent to Microsoft.
The joint statement, which is not connected to any specific investigation or AI company, suggests these challenges can be addressed by adherence to a set of agreed-upon principles.
“AI is a borderless technology which has the potential to drive innovation and growth, delivering transformative benefits for people, businesses, and economies around the world,” said CMA Chief Executive Sarah Cardell in a press release. “That’s why we’ve come together with our EU and U.S. partners to set out our commitment to help ensure fair, open and effective competition in AI drives growth and positive change for our societies.”
The joint statement is part of ongoing maneuvering between governments and the booming AI industry. Meta paused the release of multimodal AI products in the EU because of what the Facebook parent characterizes as a lack of clarity from the EU regarding GDPR privacy rules, Axios reported on July 17.
At the same time, the European Commission is investigating some of the world’s largest tech companies for “gatekeeping” software under the Digital Markets Act.
The European Union AI Act is set to go into effect Aug. 1, providing tools for startups and requiring companies to assign each AI system a risk level and to disclose AI-generated content.
The businesses most likely to be affected are EU-based companies that use AI products and the major AI makers themselves. The broader question, however, is whether regulators and the industry can strike a balance between protecting users’ privacy (especially in the case of photorealistic AI images that could spread misinformation) and giving new companies the chance to shake up the industry.