Artificial Intelligence (AI) has the potential to disrupt industries and bring about transformative changes in products and services. Generative AI, in particular, enables the creation of new content, such as text, images, and videos, by learning patterns from vast datasets and producing unique outputs. The applications and opportunities for value creation with AI are vast.
However, along with its rapid development, AI has raised concerns about cybersecurity, privacy, risks, and ethical implications. Lawmakers in the European Union have recognized the need to address these concerns and have taken the lead in drafting the AI Act, a comprehensive regulation aimed at becoming the global standard for AI governance. This initiative makes Europe the first continent to propose such a wide-reaching regulation for AI.
The goal of the AI Act and similar regulations is to strike a balance between harnessing the benefits of AI technology and ensuring the protection of users' rights and safety. While it is challenging to control the growth of AI systems like ChatGPT, it is crucial to establish frameworks that mitigate potential risks and uphold ethical standards.
The EU AI Act focuses on four key objectives:
- Ensure that AI systems placed on the EU market are safe and comply with existing laws on fundamental rights and EU values.
- Provide legal certainty to encourage investment and innovation in AI.
- Enhance governance and the enforcement of existing laws on fundamental rights and safety requirements for AI systems.
- Facilitate the development of a single market for lawful, safe, and trustworthy AI applications while preventing market fragmentation.
A notable feature of the AI Act is its risk-based approach. It classifies AI applications into three risk categories:
- Unacceptable risk: This category covers practices that have the potential to manipulate individuals through subliminal techniques beyond their awareness, or to exploit the vulnerabilities of specific groups, such as children or people living with disabilities. The Act explicitly prohibits certain practices in this category, such as AI-based social scoring by public authorities and the use of real-time remote biometric identification systems for law enforcement, except in limited circumstances.
- High risk: These are AI applications that pose a significant risk to health, safety, or fundamental rights, and specific requirements are imposed on them. Whether an AI system is classified as high risk depends not only on its function but also on the specific purpose and modalities for which the system is used.
- Low or minimal risk: These are AI applications that are neither prohibited nor classified as high risk, and that are considered to have a lower potential for adverse effects on the health, safety, or fundamental rights of individuals.
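To make the tiered structure concrete, the classification logic described above can be sketched in code. This is a minimal, illustrative model only: the purpose labels and the mapping rules below are assumptions chosen for the example, not the Act's legal definitions, which turn on detailed criteria and annexes rather than simple lookups.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # subject to strict legal requirements
    MINIMAL = "minimal"            # low or minimal risk, no special obligations

# Hypothetical example purposes; the real Act defines these far more precisely.
PROHIBITED_PURPOSES = {
    "social_scoring_by_public_authority",
    "subliminal_manipulation",
}
HIGH_RISK_PURPOSES = {
    "real_time_remote_biometric_identification",
    "recruitment_screening",
}

def classify(declared_purpose: str) -> RiskTier:
    """Map a system's declared purpose to an illustrative risk tier.

    Note that, as the article explains, classification depends on the
    specific purpose and modalities of use, not only on the technology.
    """
    if declared_purpose in PROHIBITED_PURPOSES:
        return RiskTier.UNACCEPTABLE
    if declared_purpose in HIGH_RISK_PURPOSES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL
```

For example, `classify("recruitment_screening")` returns `RiskTier.HIGH` under these assumed mappings, while an unlisted purpose such as a spam filter falls through to `RiskTier.MINIMAL`.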
The AI Act also establishes legal requirements for high-risk AI systems, covering areas such as data governance, transparency, human oversight, accuracy, and security. Providers of high-risk AI systems have clear obligations, and proportional obligations are placed on other participants in the AI value chain. The Act also introduces conformity assessment procedures, involving independent third parties called notified bodies. These bodies are responsible for assessing the conformity of high-risk AI systems. The AI Act outlines the procedures to be followed for each type of high-risk AI system, with a gradual increase in the capacity of notified bodies over time to minimize the burden on economic operators.
Overall, the AI Act aims to regulate AI in a way that ensures safety, respect for fundamental rights, legal certainty, and a unified market for trustworthy AI applications within the European Union.