
AI raises the stakes on identity verification and authentication

Industry analyst Gartner predicts that by 2026, close to one third of enterprises will consider identity verification and authentication solutions unreliable for use in isolation due to AI-generated deepfakes.


► Many firms won’t be happy to rely on current technologies to identify and authenticate individuals

► Security providers will need to demonstrate capability to move beyond current technologies


By 2026, attacks using AI-generated deepfakes on face biometrics will mean that 30% of enterprises will no longer consider such identity verification and authentication solutions to be reliable in isolation, according to Gartner. The firm’s experts believe that organisations will begin to question the reliability of these types of solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.

Identity verification and authentication processes using face biometrics today rely on presentation attack detection (PAD) to assess the user’s liveness. But Gartner says that the current standards and testing processes used to define and assess PAD mechanisms do not cover digital injection attacks using AI-generated deepfakes, which can already be created today.

The firm said that while presentation (aka ‘imitation’ or ‘spoofing’) attacks remain the most common method used, injection attacks, in which manipulated imagery is inserted directly into the data stream, increased 200% in 2023. Preventing such attacks will require a combination of PAD, injection attack detection (IAD) and image inspection.

To address these concerns, security vendors and service providers will need to go beyond current standards and demonstrate that they are monitoring, classifying and quantifying these new types of attacks. It may be necessary to make use of additional risk and recognition methods, such as device identification and behavioural analytics, to increase the chance of detecting attacks on identity verification processes.

Organisations will need to mitigate the risks of AI-driven deepfake attacks by selecting technology that can prove genuine human presence and by implementing additional measures to prevent account takeover.

If you’d like to know more about the ways you can protect your customers from identity theft and authentication attacks, please email our team by clicking on the link below.

Contact The Team

Back to Top