The EU AI Hub, launched last week by AI security firm Cranium with KPMG and Microsoft, is a service designed to help businesses comply with the newly adopted EU AI Act. Combining expert advice with bespoke technologies, the Hub takes users through a series of steps to identify which parts of the AI Act apply to their products and what they must do to comply.
On March 13, 2024, the European Parliament voted the AI Act into law. This means businesses that offer AI products in the region will soon need to abide by its strict rules on facial recognition, safeguards and the handling of consumer complaints and questions.
While the EU AI Act won’t come into force until late 2024 at the earliest, many companies are already working toward compliance to ensure they are prepared and do not incur penalties. However, navigating such comprehensive regulations is no mean feat, which is why the EU AI Hub was created.
The EU AI Hub is a service designed to take global organisations through a series of steps that will help them understand how the EU AI Act’s regulations apply to their products, comply with them and embrace AI responsibly. To achieve these goals, they will be given access to Cranium’s technology platform, KPMG’s services and Microsoft’s technology and expertise.
“A business’s journey through the Hub will depend on where it currently is in its AI journey, so we will first identify an organisation’s objectives regarding meeting EU compliance requirements,” Daniel Christman, director of AI programs at Cranium, told TechRepublic.
“We’d then identify the path toward bringing a particular AI system or systems into a compliant state, and we would leverage the Cranium technology platform, KPMG services and Microsoft technology and expertise to determine and implement the relevant controls and oversight to achieve compliance.”
Resources provided by the Hub will ensure all of a business’s AI implementations are compliant, practical for its requirements and ethically sound. Businesses can work with experts from the initial strategy and design of AI technologies all the way through deployment and optimisation, drawing on input from regulators and relevant stakeholders.
Christman is currently unsure about how long it will take a Hub user to reach compliance, though he hopes they will be able to “scale compliance across multiple AI systems much faster” than if they were to attempt it alone.
Sean Redmond, director of the EU AI Hub, said in a press release, “Compliance with the EU AI Act and other regulatory frameworks shouldn’t be seen as a block to innovation/ideation, but instead provide the guardrails that enable organizations to experiment with AI and deliver value to their businesses and customers.”
“Pricing will flex based on what the business is looking to achieve in the Hub,” Christman told TechRepublic. “Simply leveraging some of the expertise and knowledge will be no to minimal cost, with more intensive service provision and technology implementation bringing additional investment.”
The EU AI Act will apply directly to businesses located in the 27 EU member states and any businesses with customers in those states, regardless of their location. These businesses could be providers, deployers, importers or distributors of AI systems and may consider using the EU AI Hub to ensure compliance.
Christman told TechRepublic, “Many global businesses are still struggling to get their AI systems ready. Given that the final requirements only recently passed the final legal hurdles, this is somewhat to be expected — but it will still be a challenge for organisations to scale compliance across the enterprise.
“Primarily, organisations have a major challenge in capturing the full inventory of AI systems being developed internally, as well as those included in third-party tools and services.”
Developers of AI systems deemed to be “high risk” will have to meet certain obligations to comply with the AI Act, including a mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as to any AI systems that pose “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.”
Providers of general-purpose AI systems must also meet certain transparency requirements under the AI Act, including creating technical documentation, complying with European copyright law and providing detailed information about the data used to train AI foundation models. These rules apply to the models underpinning generative AI systems such as OpenAI’s ChatGPT.
While the AI Act was approved in March, there are still a few steps to be taken before businesses must abide by its regulations. The EU AI Act must first be published in the EU Official Journal, which is expected to happen in June or July this year. It will enter into force 20 days after publication, but its requirements will apply in stages:

- Bans on AI practices the Act prohibits will apply six months after its entry into force.
- Codes of practice will apply nine months after its entry into force.
- Rules on general-purpose AI systems will apply 12 months after its entry into force.
- Obligations for high-risk systems embedded in regulated products will apply 36 months after its entry into force.

The EU AI Act will apply in its entirety 24 months after its entry into force.
Companies that fail to comply with the EU AI Act face fines ranging from €7.5 million ($8.1 million) or 1.5% of global turnover to €35 million ($38 million) or 7% of global turnover, depending on the infringement and the size of the company.