Members of the European Parliament have reached an agreement and passed a draft of the Artificial Intelligence (AI) Act, following months of negotiations.
There are still a number of further steps to take, but it paves the way for the EU to bring in the world’s first comprehensive set of rules governing the development and use of AI.

The next stage is known as the trilogue, a negotiation in the legislative process between the European Commission, the Council of the European Union and the European Parliament.
There may still be some minor adjustments to the text of the act, but it is expected to go to a plenary vote in June.
According to reports, earlier proposals placing stricter obligations on AI foundation models have been confirmed.
Foundation models are a subcategory of General Purpose AI trained on large quantities of data at scale, and they underpin increasingly well-known generative AI tools such as ChatGPT.
The new obligations would require companies that make generative AI tools such as ChatGPT to disclose if their systems have been trained on copyrighted materials.
Negotiations covered prohibited practices and risk classifications
General Purpose AI, which can be used for different tasks with minimal fine-tuning, has been the subject of extensive debate during the negotiations.
One last-minute change in the proposals states that generative AI models would have to be designed and developed in accordance with EU law and fundamental rights.
Another area subject to extensive debate has been the banning of certain AI applications because of the risk they are perceived to present.
A proposed prohibition on AI-powered tools for the general monitoring of interpersonal communications was discussed and rejected, but an extended ban on biometric identification AI has been agreed.
Recognition software would be prohibited for use in real time, and could be used retrospectively only for serious crimes and with prior judicial approval.
AI-powered ‘emotion recognition’ software is prohibited in areas including law enforcement, employment and border control, while ‘purposeful’ manipulation is also banned.
Some use cases were listed under an annex as being high risk, with providers required to follow a stricter regime regarding areas such as transparency and data governance.
These categories are considered to pose a significant risk of harm to health, safety or fundamental rights.