Can AI Be Regulated? Europe Is About to Find Out

Legislation full of carve-outs is no replacement for a political movement to democratize AI.

(John Keeble / Getty Images)


Way back in April 2021 — what seems like a lifetime ago in our great debate about the uses and abuses of AI — the European Commission presented a 108-page proposal for the regulation of artificial intelligence. It is widely anticipated to become the world’s first comprehensive regulation of the controversial technology. The legislation just passed the “trilogue” with a high-level deal agreed between the Commission, the European Council (representing member states), and the European Parliament (representing people).

The key feature of this framework is its risk-based approach, with the level of regulation corresponding to the risk a system poses to people's safety or fundamental rights. For some kinds of AI systems, the level of risk is deemed unacceptable, effectively banning their use and sale in the EU. This category includes social scoring systems and real-time remote biometric identification systems such as facial recognition technologies. Under these strictures, high-risk systems carry strict obligations around assessment and ongoing monitoring, while those deemed lower risk are subject to a much lighter regulatory touch. The examples the Commission gives of minimal- or no-risk systems are spam filters and AI-enabled video games, though it also notes that the majority of AI systems currently in use in Europe fall into this category.

But which systems fall into which tier has been an ongoing point of contention.

Sorry, but this article is available to subscribers only. Please log in or become a subscriber.