The first EU regulation on artificial intelligence

With a focus on high-risk AI systems, this legislation safeguards fundamental rights and privacy, creating a trustworthy framework for the future of AI.

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to create better conditions for the development and use of this innovative technology. Ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly is a priority for Parliament.

The new rules impose obligations on providers and users depending on the level of risk posed by the AI system:

  • Unacceptable risk: AI systems considered a threat to people; these will be banned.
  • High risk: AI systems that negatively affect safety or fundamental rights are classified as high risk.
  • Limited risk: Limited-risk AI systems must meet minimum transparency requirements that allow users to make informed decisions.

Artificial intelligence can bring many benefits. Negotiations with the member states in the Council on the law's final form will now begin, with the goal of reaching an agreement by the end of this year.

Artificial intelligence has already transformed how we live, and we are only beginning to realize its capabilities. For Central European Automation Holding clients, making space for this emerging technology begins with a curious conversation to explore wants and needs, followed by tailored suggestions that solve problems and create more room for success.


More articles...

One for all, all for one: Interview

The longest journey begins with the first step: this could well have been the motto of Central European Automation Holding (CEAH), which brought its four member companies together at a joint stand.

Cyber threats on the rise

Cybercriminals are targeting both IT and physical supply chains, launching mass cyberattacks, and devising new ways to extort money from businesses.