Blogs - July 12, 2024

Tinier than TinyML: pushing the flexible boundaries of AI

Emre Ozer, Senior Director, Processor Development

Over the last decade, the explosion of artificial intelligence (AI) has seen it become commonplace in our everyday lives, appearing everywhere from large language models such as ChatGPT to fitness trackers and the filters that help you take the perfect selfie.

Semiconductors are the lifeblood of AI, fuelling access and adoption, while machine learning (ML) is one of a range of techniques used to deliver the intelligence itself, analysing data to identify patterns and make decisions with minimal human intervention. But how does it work?

Essentially, an ML model is trained on a dataset until it understands the ‘rules’ of its task, then let loose in the real world, where it can apply this learning to new and unseen data. This is known as inference.

Training the model can consume a great deal of time and energy, but once the model is transferred to hardware – that is, the chip – to get working on inference, the goal is to use as little power and area as possible. Depending on where the model is to be used, the power and area budget can be very small indeed, so designers will work hard to minimise its footprint.
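The asymmetry between training and inference can be sketched in a few lines of Python. The perceptron and toy data below are purely illustrative (not from the paper): training loops over the data many times, while inference is a single multiply-accumulate pass – the part that has to be cheap on the chip.

```python
# Illustrative sketch: training is iterative and costly; inference is one cheap pass.
# The model (a perceptron) and the toy data are invented for illustration.

def train(samples, labels, epochs=100, lr=0.1):
    """Training: repeatedly adjust weights until the 'rules' are learned."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def infer(w, b, x):
    """Inference: a single multiply-accumulate -- the only part the chip runs."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy linearly separable data: label is 1 when the first feature dominates.
data = [(0.9, 0.1), (0.8, 0.3), (0.2, 0.7), (0.1, 0.9)]
labels = [1, 1, 0, 0]
w, b = train(data, labels)
print([infer(w, b, x) for x in data])  # → [1, 1, 0, 0]
```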

Introducing tiny classifiers

Recently, my colleagues and I published a paper in the science and technology journal Nature Electronics detailing our research – in collaboration with the University of Manchester – on ‘Low-cost and efficient prediction hardware for tabular data using tiny classifier circuits’.

Classifiers are how an ML model curates its outputs, literally classifying each input as X, Y or Z. Take an ML model that’s tasked with classifying the freshness of a pre-packed food, for example. Smart packaging containing the model – along with sensors that can measure the temperature, humidity level and volatile organic compounds inside the package – will read the sensor values and make a prediction about the freshness, classifying it as one of three states:

Class 1: Fresh – safe to eat
Class 2: Not super fresh but still safe to eat
Class 3: Stale – not safe to eat

To define this classification task, sensor data is collected from many food packages under various conditions and collated into a table (hence, tabular data). This is then used as training data for the model. The model is then translated into hardware that can be deployed out in the wild, making predictions based on fresh, real-world data.
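The tabular-data flow above can be sketched as follows. Everything here is invented for illustration – the sensor readings, units and the nearest-centroid classifier are assumptions, not the model from the paper – but it shows the same shape: rows of labelled sensor readings become training data, and the trained model then classifies new readings.

```python
# Illustrative sketch of the tabular-data flow: labelled rows of sensor
# readings train a classifier that then predicts the class of new readings.
# All readings, units and values below are invented for illustration.

import math

# (temperature_C, humidity_pct, voc_ppm) -> freshness class 1/2/3
training_table = [
    ((4.0, 40.0, 5.0), 1), ((5.0, 45.0, 8.0), 1),     # fresh
    ((8.0, 55.0, 20.0), 2), ((9.0, 60.0, 25.0), 2),   # borderline
    ((15.0, 75.0, 60.0), 3), ((18.0, 80.0, 70.0), 3), # stale
]

def train_centroids(table):
    """Average the feature rows per class -- the whole 'model' is nine numbers."""
    sums, counts = {}, {}
    for features, label in table:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
    return {c: [s / counts[c] for s in acc] for c, acc in sums.items()}

def predict(centroids, reading):
    """Classify a new sensor reading by its nearest class centroid."""
    return min(centroids, key=lambda c: math.dist(centroids[c], reading))

model = train_centroids(training_table)
print(predict(model, (4.5, 42.0, 6.0)))    # → 1 (fresh)
print(predict(model, (16.0, 78.0, 65.0)))  # → 3 (stale)
```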

A flexible ML solution

Our work is unique because it proposes the use of an evolutionary algorithm – an algorithm that solves problems by iterative refinement, drawing inspiration from evolution – to automatically generate hardware capable of that classification. That is, you define the task and it will create the hardware to make it happen.
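A toy version of that evolutionary idea can be sketched in Python: keep mutating a tiny feed-forward circuit of NAND gates until its output matches a labelled truth table. This is a hand-rolled (1+1)-style hill climb on an invented task (XOR of two inputs), not the algorithm or the tasks from the paper – but it shows how iterative refinement can generate a gate-level classifier directly from data.

```python
# Toy sketch of evolving a tiny NAND-gate circuit to match labelled data.
# Hand-rolled illustration, NOT the algorithm from the paper.
import random

random.seed(0)
N_IN = 2     # two binary inputs
N_GATES = 6  # gate budget; gate i may read any earlier signal

def run(genome, bits):
    """Evaluate the feed-forward NAND circuit; the last gate is the output."""
    sig = list(bits)
    for a, b in genome:
        sig.append(1 - (sig[a] & sig[b]))
    return sig[-1]

def rand_gate(i):
    return (random.randrange(N_IN + i), random.randrange(N_IN + i))

def fitness(genome, data):
    return sum(run(genome, x) == y for x, y in data)

# Target behaviour: XOR of the two inputs (its truth table is the 'tabular data').
data = [((a, b), a ^ b) for a in (0, 1) for b in (0, 1)]

# (1+1) hill climb with random restarts: mutate one gate, keep the child
# if it classifies no worse than the parent.
best = [rand_gate(i) for i in range(N_GATES)]
for step in range(500_000):
    if fitness(best, data) == len(data):
        break
    if step % 5_000 == 4_999:  # restart if stuck on a plateau
        best = [rand_gate(i) for i in range(N_GATES)]
    child = list(best)
    i = random.randrange(N_GATES)
    child[i] = rand_gate(i)
    if fitness(child, data) >= fitness(best, data):
        best = child

print(fitness(best, data), "of", len(data), "patterns correct")
```

XOR needs only four NAND gates, so a six-gate budget leaves the search plenty of valid solutions to stumble into – the same trade-off, at vastly larger scale, that keeps the real classifier circuits under 300 gates.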

What is remarkable is that the classifier chip is extremely small in area – fewer than 300 logic gates – which, in terms of computational devices, is tiny. (If you’re familiar with TinyML – a type of machine learning that allows models to run on smaller, less powerful devices – these classifier circuits are even tinier.)

When implemented on a flexible substrate, such as a FlexIC, this classifier occupies up to 75 times less area, consumes up to 75 times less power and has six times better yield than the most hardware-efficient ML baseline! It’s no exaggeration to say we’ve created the world’s tiniest ML inference hardware fabricated on a flexible substrate.

We’ve also created a solution that’s extremely cost-effective. This is in part due to the low-cost fabrication of FlexICs themselves. More sustainable alternatives to silicon semiconductors, FlexICs are ultra-thin, with a flexible form factor. They enable connect, sense and compute capabilities and their simplified production process takes designs from tape-out to delivery in just four weeks, at a fraction of the cost of silicon.

Real-world applications

While our work focuses on tabular data, there is scope for the same methodology to be applied to time-series data, such as audio applications – ideal for the keyword-spotting that wakes your favourite digital assistant, for example.

But the sheer ‘tiny-ness’ of these chips – in combination with their low cost – opens up a whole new world of possibilities. Their smaller footprint and low power consumption make them ideal for a range of sensing tasks, particularly in smart packaging, smart labels and even smart medical patch applications.

This research proves that tiny classifiers are small but mighty, and 300 gates – or even fewer – are all you need to use ML in a range of real-world applications.

Read the Nature Electronics paper here
