A team of computer engineers has developed a new system-on-chip integrated circuit that can run complex artificial intelligence (AI) processes without being linked to the cloud or an external power supply.
AI is increasingly being deployed in a wide range of applications, such as facial recognition, monitoring the heart rate of cardiac patients, and predicting the lifespan of a machine by measuring its vibrations.

AI is undoubtedly very useful in all sorts of settings, but it does come with a few issues and limitations.
For a start, AI-powered systems and technologies tend to be very energy-intensive, requiring a lot of power for the complex computations involved.
They also typically need to be connected to the cloud at all times, which raises cyber-security and privacy concerns and further increases energy consumption.
The new chips could go a long way to addressing some of these concerns.
The researchers, from the Swiss Center for Electronics and Microtechnology (CSEM), have developed units that not only run AI processes directly on the chip rather than the cloud but can also run on a tiny battery or a small solar cell.
The system itself is also modular, meaning that it can be scaled up or down depending on the requirements.
The system-on-chip uses a new type of architecture
The device uses a brand-new type of signal processing architecture that reduces the amount of power it needs to function.
This is made up of an application-specific integrated circuit (ASIC) chip, a processor developed at CSEM, and two machine learning accelerators.
The first of these accelerators is called a binary decision tree engine – it carries out simple tasks, such as detecting whether a human face is present when used as part of a facial recognition system.
The second accelerator is a convolutional neural network (CNN), which deals with classification – the actual facial recognition part of the system in this case.
Stéphane Emery, head of system-on-chip research at CSEM, also offered an example of voice recognition – in this case, the first accelerator would determine if a human voice was present while the second would work to recognise it.
Using two tiers of data processing drastically reduces the power required, because the simpler, low-power first accelerator does the work for the majority of the time, and the more demanding second accelerator is only activated when needed.
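This two-tier cascade can be illustrated with a minimal sketch. The detector and classifier below are hypothetical stand-ins, not CSEM's actual models; the point is only the control flow, in which the cheap first stage gates the expensive second stage.

```python
# Hypothetical sketch of a two-tier cascade: a low-cost detector runs
# on every sample, and the costly classifier runs only when it fires.
# The threshold and field names are illustrative assumptions.

def cheap_detector(sample: dict) -> bool:
    """Stage 1 (binary decision tree stand-in): is a face/voice present?"""
    return sample.get("signal_strength", 0.0) > 0.5

def expensive_classifier(sample: dict) -> str:
    """Stage 2 (CNN stand-in): identify what or who was detected."""
    return sample.get("label", "unknown")

def process(sample: dict):
    # Most samples contain nothing of interest, so most of the time
    # only the cheap first stage runs -- this is where the power
    # savings come from.
    if not cheap_detector(sample):
        return None  # nothing present: skip the heavy model entirely
    return expensive_classifier(sample)
```

In a real deployment the saving comes from the first stage being orders of magnitude cheaper to evaluate than the CNN, so the chip's average power draw stays close to the detector's cost.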