As the United States scrambles to make up its mind on tech regulation, the European Union is taking another major step toward regulating Big Tech in the world of artificial intelligence (AI).

The European Parliament voted overwhelmingly on Wednesday to adopt its draft of the Artificial Intelligence Act, with 499 votes in favor, 28 against, and 93 abstentions. A final version of the law is expected to be voted on sometime in late 2023.

“The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law,” said Dragos Tudorache, Romanian politician and Member of the European Parliament (MEP).


If passed, the law would impose a comprehensive risk-based regulatory regime on AI systems. AI that threatens people’s safety, livelihoods, or human rights, such as systems used for biometric surveillance, is deemed to pose an “unacceptable risk” and would be banned outright.

Next on the risk spectrum are “high-risk” systems, such as AI used in public transport infrastructure, educational grading, medical surgery, law enforcement, or financial credit scoring. These must satisfy regulatory obligations covering risk assessment and security robustness before commercial use.

Finally, the vast majority of consumer-facing AI systems, such as generative chatbots, facial recognition software, and spam filters, will fall into the “minimal” or “low-risk” categories. At minimum, these categories will require companies to make clear to users how the products they are using actually function.

“All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” said MEP Brando Benifei. “We want AI’s positive potential for creativity and productivity to be harnessed, but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council.”


The law’s broad intent to curtail the social risks of AI directly addresses concerns about bias, discrimination, and job displacement that prominent figures, such as Twitter CEO Elon Musk and OpenAI CEO Sam Altman, have voiced in recent years.

Mark Surman, president of the Mozilla Foundation, praised the EU’s AI Act for holding “AI developers more accountable and creating more transparency around AI systems—including ones like ChatGPT.”

Others fear, however, that the act may overregulate AI systems that pose limited risk to begin with.

Boniface de Champris, policy manager at the Computer & Communications Industry Association (CCIA), stressed that the new rules must address AI risks “while leaving enough flexibility for developers to deliver useful AI applications.”

The EU’s progress on regulation has been broadly welcomed, but it remains to be seen whether Europe can close the United States’ lead in AI innovation itself.

Despite its lead on tech innovation, America continues to lag behind its European counterparts on the regulatory front. While the White House has issued executive orders urging American AI companies to promote “equity” in their AI systems, Congress has taken a decidedly “wait and see” approach to these new technologies.

The crypto industry, on the other hand, has been forced to operate in a state of regulatory limbo: just last week, the SEC filed major lawsuits against two of the sector’s biggest exchanges, Binance and Coinbase.
