EU's AI Act 2024: Safeguarding Consumers or Restricting Tech Growth?

Jun 24, 2024

Summary


In May 2024, the European Union formally adopted the Artificial Intelligence Act, which sets out rules for how AI can be sold, used, and monitored in the EU and bans certain AI practices. Initially proposed in April 2021, the Act aims to promote AI that is safe, respects human rights, and protects health, safety, and the environment.

The regulation applies to anyone who makes, uses, imports, or distributes AI systems in the EU, regardless of where they are based.


Here are the key points I noted from the Act:

Risk-Based Classification (Article 6 and Annex III)

AI systems are categorized into risk levels ranging from minimal to unacceptable, and each category dictates the required compliance measures. Systems identified as high-risk, including those used in critical infrastructure (e.g., transport and healthcare), employment, and essential private and public services (e.g., credit scoring), face the most stringent compliance requirements.
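
To make the tiering concrete, here is a minimal Python sketch of how the risk levels might be modelled inside a compliance tool. The tier names follow the Act, but the use-case list and the `classify` helper are illustrative assumptions, not legal definitions from Annex III.

```python
from enum import Enum

# The Act's four risk tiers (names follow the Act; everything else below is a sketch).
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Assumed, simplified mapping of a few Annex III-style use cases to the high-risk tier.
HIGH_RISK_USE_CASES = {
    "critical_infrastructure",  # e.g., transport, energy
    "healthcare",
    "employment",               # e.g., CV screening, hiring decisions
    "credit_scoring",           # essential private services
}

def classify(use_case: str) -> RiskTier:
    """Toy classifier: HIGH for the assumed Annex III-style use cases, MINIMAL otherwise."""
    return RiskTier.HIGH if use_case in HIGH_RISK_USE_CASES else RiskTier.MINIMAL

print(classify("credit_scoring"))  # RiskTier.HIGH
print(classify("spam_filtering"))  # RiskTier.MINIMAL
```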

Prohibited Practices (Article 5)

The Act bans specific AI uses, such as indiscriminate social scoring and the manipulation of human behavior, to protect personal rights and prevent misuse. In particular, it prohibits AI systems that:

  • Indiscriminately scrape facial images from the internet or CCTV footage to build facial recognition databases
  • Manipulate people’s decisions or exploit their vulnerabilities
  • Classify people based on their social behavior or personal traits (social scoring)
  • Infer emotions in workplaces or educational institutions
  • Categorize people based on their biometric data

Transparency (Article 50)

Companies must inform users when they are interacting with an AI system, and AI systems that create synthetic content (such as deepfakes) must mark their outputs as artificially generated.
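
As a rough illustration of that disclosure duty, the sketch below tags generated content with an "AI generated" label in its metadata. The Act requires disclosure and machine-readable marking but does not prescribe any particular data structure; the `GeneratedContent` class and all field names here are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative only: the Act does not define this structure or these field names.
@dataclass
class GeneratedContent:
    body: str
    metadata: dict = field(default_factory=dict)

def mark_as_ai_generated(content: GeneratedContent, model_name: str) -> GeneratedContent:
    """Attach a disclosure label so downstream consumers can detect synthetic content."""
    content.metadata["ai_generated"] = True   # hypothetical flag
    content.metadata["generator"] = model_name
    return content

post = mark_as_ai_generated(GeneratedContent(body="A synthetic product review..."), "demo-model")
print(post.metadata)  # {'ai_generated': True, 'generator': 'demo-model'}
```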

Compliance and Penalties (Article 99)

Non-compliance with the prohibited-practice rules can result in fines of up to EUR 35 million or 7% of a company’s total worldwide annual turnover, whichever is higher.
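
A quick arithmetic sketch of that ceiling: because the cap is the higher of a fixed amount and a share of turnover, the binding limit depends on company size. The turnover figures below are made up for illustration.

```python
# Figures from the Act's headline cap; turnover inputs are invented examples.
FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07  # 7% of total worldwide annual turnover

def max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for the most serious infringements."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

print(max_fine(2_000_000_000))  # 140,000,000 EUR -> the 7% share exceeds the fixed cap
print(max_fine(100_000_000))    # 35,000,000 EUR  -> the fixed cap applies
```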

However, the Act provides a few exemptions, including for:

  • AI systems used for military, defense, or national security purposes
  • AI systems developed and used solely for scientific research and development
  • Research, testing, and development activities on AI systems before they are placed on the market
  • Use of AI by individuals for purely personal, non-professional activities
  • AI systems released under free and open-source licenses, unless they are high-risk

Reflections

My personal take on the Act, as of today:

  • The Act concentrates its strictest requirements on high-risk AI systems and takes a lighter regulatory touch on low-risk categories, presumably with the intent of promoting innovation.
  • Its limited focus on emerging domains such as generative AI suggests further amendments may be needed to keep pace with where the market is heading.
  • Penalties are scaled down for startups and SMEs, again with an eye toward encouraging innovation.
  • Since many banking applications, such as credit scoring, are labeled high-risk, we can expect slower AI adoption in those domains.

The reference for this post and the official text of the EU AI Act can be found here: The AI Act Explorer




If you enjoy staying updated on technology, business, and the universe, feel free to follow my writing on Substack.