Brussels: Three governing bodies of the European Union have been intensely negotiating the final version of the EU AI Act, a major package of laws regulating the AI industry that was first proposed back in 2021. The initial deadline for a final package, December 6, has now come and gone, though lawmakers have not given up: they were debating into the early hours of Thursday morning and again on Friday.

Just a few months ago, it seemed as though the EU AI Act was on its way to getting all the necessary votes and setting the benchmark for AI regulation far beyond the European bloc. But now France, Germany, and Italy in the EU Council, the body representing member states' governments, have contested some of the package's main tenets, and the legislation seems in real danger of failing—which would open the door for other countries outside Europe to set the global AI agenda.

To better understand the key sticking points and what’s next, I spoke with our senior AI reporter Melissa Heikkilä and Connor Dunlop, a policy expert at the Ada Lovelace Institute. I’ll warn you that it’s all pretty complex and still a moving target; as Connor tells me, “The most surprising thing has been the level of drafting and redrafting across all three EU institutions,” which he describes as “unprecedented.” But here, with their help, I’ll do my best to answer some of the biggest questions.

As a refresher, the EU AI Act seeks to establish a risk-based framework for regulating artificial-intelligence products and applications. The use of AI in hiring, for example, is more tightly regulated and requires more transparency than a “lower-risk” application, like AI-enabled spam filters. (I wrote about the package back in June, if you want more background information.)

First, Melissa tells me, there is a lot of disagreement about foundation models, which has taken up most of the energy and space during the latest debates. There are several definitions of the term “foundation model” floating around, which is part of what’s causing the discord, but the core concept has to do with general-purpose AI that can do many different things for various applications.

You’ve probably played around with ChatGPT; that interface is essentially powered by a foundation model, in this case a large language model from OpenAI. Making this more complex, though, is that these technologies can also be plugged into various other applications with more narrow uses, like education or advertising.

Initial versions of the EU AI Act didn’t explicitly consider foundation models, but Melissa notes that the proliferation of generative AI products over the past year pushed lawmakers to integrate them into the risk framework. In the version of the legislation passed by Parliament in June, all foundation models would be tightly regulated regardless of their assigned risk category or how they are used. This was deemed necessary in light of the vast amount of training data required to build them, as well as IP and privacy concerns and the overall impact they have on other technologies.

But of course, tech companies that build foundation models have disputed this and advocated for a more nuanced approach that considers how the models are used. France, Germany, and Italy have flipped their positions, going so far as to say that foundation models should be largely exempt from AI Act regulations. (I’ll get to why below.)

The latest round of EU negotiations has introduced a two-tier approach in which foundation models are, at least in part, sorted on the basis of the computational resources they require, Connor explains. In practice, this would mean that “the vast majority of powerful general-purpose models will likely only be regulated by light-touch transparency and information-sharing obligations,” he says, including models from Anthropic, Meta, and others. “This would be a dramatic narrowing of scope [of the EU AI Act],” he adds. Connor says OpenAI’s GPT-4 is the only model on the market that would definitely fall into the higher tier, though Google’s new model, Gemini, might as well.

This debate over foundation models is closely tied to another big issue: industry-friendliness. The EU is known for its aggressive digital policies (like its landmark data privacy law, GDPR), which often seek to protect Europeans from American and Chinese tech companies. But in the past few years, as Melissa points out, European companies have started to emerge as major tech players as well. Mistral AI in France and Aleph Alpha in Germany, for instance, have recently raised hundreds of millions in funding to build foundation models. It’s almost certainly not a coincidence that France, Germany, and Italy have now started to argue that the EU AI Act may be too burdensome for the industry. Connor says this means that the regulatory environment could end up relying on voluntary commitments from companies, which may only later become binding.

“How do we regulate these technologies without hindering innovation? Obviously there’s a lot of lobbying happening from Big Tech, but as European countries have very successful AI startups of their own, they have maybe moved to a slightly more industry-friendly position,” says Melissa.

Finally, both Melissa and Connor talk about how hard it’s been to find agreement on biometric data and AI in policing. “From the very beginning, one of the biggest bones of contention was the use of facial recognition in public places by law enforcement,” says Melissa.

The European Parliament is pushing for stricter restrictions on biometrics over fears the technology could enable mass surveillance and infringe on citizens’ privacy and other rights. But European countries such as France, which is hosting the Olympics next year, want to use AI to fight crime and terrorism; they are lobbying aggressively and placing a lot of pressure on the Parliament to relax its proposed policies, she says.

The December 6 deadline was essentially arbitrary, as negotiations have already continued past that date. But the EU is creeping up to a harder deadline.

Melissa and Connor tell me the key stipulations need to be settled several months before EU elections next June to prevent the legislation from withering completely or getting delayed until 2025. It’s likely that if no agreement is reached in the next few days, the discussion will resume after Christmas. And keep in mind that beyond solidifying the text of the actual law, there’s still a lot that needs to be ironed out regarding implementation and enforcement.

“Hopes were high for the EU to set the global standard with the first horizontal regulation on AI in the world,” Connor says, “but if it fails to properly assign responsibility across the AI value chain and fails to adequately protect EU citizens and their rights, then this attempt at global leadership will be severely diminished.”