EU’s AI Act Goes Public
The European Commission has published the final version of the Artificial Intelligence Act (AIA), establishing a legal framework for AI medical devices and other products.
According to the European Parliament, the new Act focuses on ensuring “human-centric and trustworthy” AI in medical devices, such as in vitro diagnostic devices (IVDs), as well as other products. The Parliament also notes that the AIA is the first comprehensive AI law to be issued globally.
Market Safety and AI Training Are a Must
The Commission hopes the AIA will improve the internal market for AI medical devices, restrict certain AI practices that could be harmful or put people at risk, and add specific requirements for high-risk AI systems, especially in critical fields like healthcare and finance. However, it does not apply to products that are still in the research or testing phase and have not yet been launched on the market.
The regulation requires AI system providers and deployers to ensure their employees are properly trained on the systems they work with. That training should cover the systems’ functionalities as well as the risks involved.
Fortifying High-Risk Devices
High-risk devices are categorized as follows:
Class IIa or higher under the Medical Devices Regulation. Class IIa devices are mainly those used within the body for a period between 60 minutes and a month, such as hearing aids, blood transfusion tubes, and catheters. Manufacturers of these devices are required to establish a risk management system throughout the product’s lifecycle, ensure data governance to verify error-free data, and provide technical documentation demonstrating compliance with the Act.
For lower-risk devices, the legislation imposes less stringent requirements.
“A large segment of AI’s use in healthcare would be classified as ‘high-risk’ under the Act and thus subject to multiple requirements if it is developed or deployed within the EU. These requirements will also apply to existing AI systems but only if they undergo ‘significant changes’ to their design after this Act comes into effect.
It is worth mentioning that non-high-risk uses of AI will also require compliance. For example, applications of ‘General Purpose’ AI in business processes may require compliance as detailed below,” as per a blog post by consultancy IQVIA.
The AIA, moreover, covers not only the medtech field but also various other products, including machinery, toys, lifts, equipment and protective systems for explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, and automotive and aviation products.
Final Thoughts
The Artificial Intelligence Act (AIA) is a major step toward comprehensive AI regulation, emphasizing the importance of dialogue between regulators to ensure integration and compliance while striking a balance between innovation and safety.