AI and skin-like sensors provide precise hand gesture recognition

Industry Updates | Published 20th August 2020

Hand gesture recognition is an important field of study, as it could enable new ways for humans to interact with computers and other technology.

According to researchers at Nanyang Technological University (NTU) in Singapore, most methods of gesture recognition – which tend to be based on a combination of wearable technology and cameras – are currently hampered by low data quality.


This can stem from a number of factors, including wearables that are too bulky or make poor contact with the skin, and cameras with impaired lines of sight.

The sensory and visual data streams also need to be processed and combined, which brings its own challenges.

The researchers say that they have addressed these issues with a bio-inspired system that uses stretchable skin-like sensors and artificial intelligence algorithms based on the way that the human brain processes signals it receives from touch and vision.

They published their findings in the journal Nature Electronics and believe the resulting system can recognise hand gestures more accurately than other current technologies.

Lead author Professor Chen Xiaodong said: ‘Our data fusion architecture has its own unique bioinspired features which include a man-made system resembling the somatosensory-visual fusion hierarchy in the brain.

‘We believe such features make our architecture unique to existing approaches.’

Transparent stress sensors flex with the skin

The stretchable sensors use single-walled carbon nanotubes to create a transparent wearable that adheres closely to the skin but does not block camera images.

Professor Chen said that this allowed for better signal acquisition compared to rigid wearable sensors, which do not cling so flexibly to the hand.

The AI system, meanwhile, combines three different neural network techniques into a single whole.

A convolutional neural network handles early visual processing, while a multilayer neural network deals with somatosensory information processing.

Somatosensory refers to bodily sensations such as pressure, pain or warmth; in this case the focus is on touch.

The third element is a sparse neural network that brings information from the other two networks together.
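To make that three-network design concrete, here is a minimal sketch in PyTorch. Everything specific in it is an illustrative assumption rather than the published architecture: the layer sizes, the 64×64 single-channel camera frame, the ten sensor channels, and the top-k masking that stands in for the paper's sparse fusion network.

```python
# Hypothetical sketch of the bimodal fusion idea described above.
# All shapes and the sparsity mechanism are illustrative assumptions,
# not the architecture published in Nature Electronics.
import torch
import torch.nn as nn

class VisualBranch(nn.Module):
    """CNN for early visual processing of camera frames (assumed 64x64 greyscale)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 16 * 16, 64),
        )

    def forward(self, x):
        return self.net(x)

class SomatosensoryBranch(nn.Module):
    """Multilayer network for the stretch-sensor (touch) channel."""
    def __init__(self, n_sensors=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, 32), nn.ReLU(), nn.Linear(32, 64),
        )

    def forward(self, x):
        return self.net(x)

class SparseFusion(nn.Module):
    """Fuses the two feature streams; top-k masking is a crude stand-in for sparsity."""
    def __init__(self, n_gestures=5, k=32):
        super().__init__()
        self.k = k
        self.classifier = nn.Linear(128, n_gestures)

    def forward(self, visual_feat, touch_feat):
        fused = torch.cat([visual_feat, touch_feat], dim=1)
        # Keep only the k strongest activations, zeroing the rest.
        topk = torch.topk(fused.abs(), self.k, dim=1)
        mask = torch.zeros_like(fused).scatter(1, topk.indices, 1.0)
        return self.classifier(fused * mask)

# Usage: one camera frame plus one vector of sensor readings -> gesture logits.
visual, touch, fusion = VisualBranch(), SomatosensoryBranch(), SparseFusion()
frame = torch.randn(1, 1, 64, 64)   # placeholder camera image
signals = torch.randn(1, 10)        # placeholder stretch-sensor readings
logits = fusion(visual(frame), touch(signals))
print(logits.shape)                 # torch.Size([1, 5])
```

In the real system the fused features would be trained end-to-end on labelled gesture recordings; here random tensors simply confirm that the two streams can be processed separately and combined in a single forward pass.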

The team tested the system by using hand gestures alone to guide a robot through a maze.

The robot completed the maze with no errors, while a robot controlled with a visual-only recognition system made six errors over the same course.

The team is now looking to design VR and AR systems based on the gesture recognition technology.

Today’s news was brought to you by TD SYNNEX – the UK’s number one solutions distributor.