To ensure data protection and prevent misuse of AI systems, the European Union has enacted the EU Artificial Intelligence Act (AI Act).
Most technologies have the potential for misuse, and artificial intelligence (AI) is no exception. Advanced AI systems pose a serious risk of compromising users' data rights and fundamental privacy rights. To prevent such harms, the European Union has passed the EU AI Act, the first-of-its-kind general legislation on artificial intelligence. For a better understanding, let us take a look at some of the key aspects of the EU AI Act.
Risk classification of AI systems
The EU AI Act classifies AI systems according to the risk they pose. It is essentially a proactive measure to prevent the potential misuse of AI systems. For better clarity and a proportionate response, AI systems are grouped into four broad risk categories.
Unacceptable risk – Some AI practices are banned outright under the EU AI Act because they conflict with the fundamental rights and values of the European Union. Examples include the use of subliminal techniques, exploitation of people's vulnerabilities, social scoring, predictive policing based solely on profiling, and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions).
High risk – AI systems classified as high risk are those that can affect people's safety or fundamental rights. Under the EU AI Act, such systems must be closely monitored: they require risk management mechanisms, technical documentation and conformity assessments. Examples of high-risk AI systems include recruitment systems, biometric identification systems and AI systems used by law enforcement agencies.
Transparency risk – This category applies to systems such as generative AI (for example, ChatGPT). While not classified as high risk, these systems must meet the transparency obligations of the EU AI Act. For example, they must disclose that content is AI generated, their providers must ensure the AI does not create illegal content, and AI-generated images, audio or video must be labelled as such.
Minimal risk – All AI systems outside the categories above are not subject to any specific mandates. The EU AI Act also supports the use of AI by start-ups and SMEs (small and medium-sized enterprises): national authorities are encouraged to provide such entities with testing environments that mimic real-world conditions.
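The four-tier scheme above can be pictured as a simple lookup from risk tier to obligations. The sketch below is purely illustrative: the tier names follow the Act, but the one-line obligation summaries are simplified paraphrases of the points above, not legal text.

```python
# Illustrative sketch only: a hypothetical mapping of the AI Act's four
# risk tiers to simplified summaries of the obligations described above.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring)",
    "high": "risk management, technical documentation, conformity assessment",
    "transparency": "disclose and label AI-generated content",
    "minimal": "no specific mandates under the Act",
}

def obligations_for(tier: str) -> str:
    """Return the simplified obligation summary for a given risk tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")

print(obligations_for("high"))
```

In practice, of course, classifying a real system into a tier is a legal assessment, not a dictionary lookup; the point here is only that the obligations scale with the tier.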
Who will enforce the EU AI Act?
Enforcement of the EU AI Act takes place at both the European and national levels. At the European level, several bodies are involved, such as the European AI Board, the European Data Protection Supervisor and the AI Office, supported by an advisory forum and a scientific panel of independent experts. At the national level, each member state must designate one or more national competent authorities, including a market surveillance authority, which also acts as the primary point of contact with the European Union.
Important dates for the EU AI Act
The EU AI Act entered into force on 1st August 2024. However, some of its provisions apply at later dates. Here are the key milestones planned for the EU AI Act.
- 2nd February 2025 – Prohibitions on AI systems posing unacceptable risks take effect.
- 2nd August 2025 – Rules for general-purpose AI models apply, and competent authorities must be designated at the member state level.
- 2nd August 2026 – Most remaining provisions of the EU AI Act apply. Member state authorities must have at least one operational AI regulatory sandbox.
- 2nd August 2027 – Rules apply to the high-risk AI systems listed in Annex I, covering products such as civil aviation safety equipment, radio equipment, agricultural vehicles, toys and in vitro diagnostic medical devices.