
Europe adopts the AI Act, the world's first law on artificial intelligence. Here are the new rules

The European Union's recent adoption of the AI Act marks a crucial moment for the future of artificial intelligence. This historic step not only puts Europe at the center of global attention for its regulatory initiative, but also raises a debate on how the new rules will affect innovation and safety in the field of AI. The EU's regulatory approach reflects a prudent yet optimistic view of technological progress, seeking to balance innovation and security.

Europe writes history: the AI Act's new artificial intelligence rules

March 13, 2024 is a memorable date for the artificial intelligence sector: the European Union officially adopted the AI Act. The law, the first of its kind globally, received broad approval from the European Parliament, with 523 votes in favor, 46 against and 49 abstentions. With this move, Europe positions itself as a forerunner in setting standards for the ethical and responsible management of AI, potentially influencing similar policies in other regions of the world.

Thierry Breton, the EU Commissioner for the Internal Market, said that Europe has now set a benchmark for the regulation of artificial intelligence. This view is echoed across the technology sector, although critical voices are not lacking. While some experts praise the AI Act for its attempt to mitigate the risks associated with artificial intelligence, others fear it could limit technological innovation.



A notable aspect of the AI Act is its risk-based classification, which divides AI applications into categories according to the level of danger they represent. The strictest rules are reserved for systems considered high risk, such as those used in vehicle automation and surveillance tools. Uses deemed unacceptable, such as social scoring systems similar to social credit, will be banned outright.
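To make the tiered structure concrete, here is a minimal sketch in Python. The four tiers come from the article; the example systems mapped to each tier are illustrative assumptions, not an official classification:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (simplified labels)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations apply"
    LIMITED = "transparency duties apply"
    MINIMAL = "largely unregulated"

# Hypothetical example systems mapped to tiers, for illustration only.
EXAMPLES = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "autonomous driving system": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_of(system: str) -> RiskTier:
    """Look up the (assumed) risk tier of an example system."""
    return EXAMPLES[system]

print(tier_of("social credit scoring").value)  # prohibited outright
```

The key design idea the law encodes is exactly this kind of lookup: obligations attach to the tier, not to the individual product.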

Despite its noble intentions, the AI Act has sparked concern among technology companies, both European and international. They fear that the new rules could slow down innovation and make the European technology sector less competitive than its counterparts in the USA and China. The EU has tried to allay such fears, in particular by introducing more flexible rules for so-called foundation models, the core technology behind products such as OpenAI's ChatGPT.

The key points

  • Risk-based approach: the regulation classifies AI systems by their potential risk to society, from minimal to unacceptable, with stricter rules for high-risk categories;
  • High-risk AI systems: high-risk applications, such as those in critical infrastructure, education, employment, law enforcement and other sectors, must meet rigorous requirements, including risk-mitigation systems, high-quality datasets, activity logs, detailed documentation, and robust, accurate and secure operation;
  • Prohibited practices: the law bans AI systems considered a threat to people's fundamental rights, including manipulative applications and social scoring by governments or companies (as in China);
  • Specific risks: generative AI systems such as chatbots must clearly disclose that users are interacting with a machine, and deepfakes or other AI-generated content must be labeled accordingly;
  • Sanctions for non-compliance: violations can result in fines ranging from €7.5 million or 1.5% of annual turnover up to €35 million or 7% of global turnover.

How does the EU AI Regulation address biometric identification and surveillance? The regulation treats biometric identification systems, particularly those used for real-time facial recognition in public spaces by law enforcement, as high risk and, in some cases, prohibits their use. Exceptions are made for critical scenarios, such as preventing an imminent terrorist attack or searching for a missing person, provided strict conditions are met.
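As a rough illustration of the "fixed amount or share of turnover" structure of the top penalty band (a sketch only; the regulation's actual fine mechanics depend on the type of violation, and the function name and parameters below are assumptions):

```python
def max_fine_eur(global_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_pct: float = 0.07) -> float:
    """Upper bound of the top AI Act penalty band: the higher of a
    fixed amount (€35 million) or a percentage (7%) of global
    annual turnover. Simplified for illustration."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct)

# For a company with €1 billion in turnover, 7% (€70M) exceeds €35M:
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates, which is why the law states both figures: the cap scales with company size rather than staying flat.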

When will the AI Act come into force?

The AI Act is scheduled to come into force in May 2024, and affected organizations should start preparing now, for example by forming dedicated AI compliance teams. This rapid timeline underlines the urgency with which the EU is addressing technological evolution and the need for adequate regulation. The date differs by one month from what the European Parliament announced last month.

Gianluca Cobucci

Passionate about code, programming languages and natural languages, and human-machine interfaces. Everything related to technological evolution interests me. I try to share my passion as clearly as possible, relying on reliable sources rather than the first one that comes along.

