The majority of people in the UK would feel more comfortable with artificial intelligence (AI) if new laws were put into place to regulate the technology, a new survey has found.
The government has so far adopted a largely hands-off approach in a bid to encourage growth in this emerging and quickly changing sector.

However, 72% of respondents to the survey, which was carried out by the Ada Lovelace and Alan Turing Institutes, said that they were in favour of more regulation.
This figure was up from 62% in a similar survey carried out in 2022 and published in 2023, before large language models (LLMs) such as ChatGPT brought commercially available AI applications into the wider public consciousness.
The new survey found an increasing awareness of some AI applications such as those used in driverless cars (recognised by 93% of respondents) and facial recognition in policing (90%).
Three-fifths (61%) were familiar with LLMs and two-fifths (40%) had experience of using them.
A report accompanying the survey pointed out that this is remarkably rapid penetration for a technology that only started to receive widespread media coverage from the end of 2022.
Other AI applications were less widely known, such as tools for assessing suitability for a mortgage (24%), powering robotic care assistants in hospitals and nursing homes (24%), and assessing eligibility for welfare benefits such as Universal Credit (18%).
People are becoming more concerned about AI risks
Perceptions of the potential benefits of AI have remained relatively stable since the 2022 survey, with many recognising improvements in speed and efficiency.
Levels of concern about the potential risks of the different use cases highlighted in both surveys had generally risen, however, with more than two-thirds (67%) reporting that they had already encountered some form of AI-related harm.
These included false information online (reported by 61% of those who experienced harms), deepfakes (58%) and attempted financial fraud (58%).
When it came to regulating the technology, 88% of respondents wanted government or regulators to have the power to step in if an AI product was deemed to pose a risk to the public, with 75% wanting the government or independent regulators to oversee AI safety rather than private companies.
Two-thirds (65%) said that they would be more comfortable with AI if there were clear procedures for appealing decisions made with the technology, and 61% wanted more transparency about where and how AI was used.