
Advancing Printed Neural Networks for Flexible, Low-Cost Applications

TLDR: This research introduces an automated framework for designing highly efficient printed Ternary Neural Networks (TNNs) with arbitrary input precision. By holistically co-optimizing the analog-to-digital interface and the digital classifier, the proposed circuits achieve significant reductions in area (17x) and power (59x) compared to existing approximate printed neural networks. This enables the first printed-battery-powered operation with under 5% accuracy loss, making complex machine learning feasible for ultra-low-cost, flexible, and stretchable printed electronic applications.

Printed electronics are emerging as a promising alternative to traditional silicon-based systems, especially for applications that need flexibility, stretchability, and very low manufacturing costs. Think of smart packaging, wearable health devices, and disposable sensors. While printed electronics offer these unique advantages, they also come with challenges like larger component sizes and lower integration density, making complex circuits difficult to realize.

A new research paper titled Arbitrary Precision Printed Ternary Neural Networks with Holistic Evolutionary Approximation by Vojtech Mrazek, Konstantinos Balaskas, Paula Carolina Lozano Duarte, Zdenek Vasicek, Mehdi B. Tahoori, and Georgios Zervakis addresses these challenges head-on. The paper introduces an automated framework for designing printed Ternary Neural Networks (TNNs) that can handle various input precisions. This framework uses advanced multi-objective optimization and a holistic approximation approach to significantly improve the efficiency of printed neural networks.

The core idea is to bridge the gap between achieving high classification accuracy and maintaining area efficiency in printed neural networks. The researchers focused on the entire system, from the analog-to-digital interface—which is often a major bottleneck for size and power—to the digital classifier itself. Their circuits show remarkable improvements, outperforming existing approximate printed neural networks by an average of 17 times in area and 59 times in power. This breakthrough is significant because it’s the first to enable printed-battery-powered operation with less than a 5% loss in accuracy, even when considering the costs of converting analog sensor data to digital.

Why Printed Electronics and Neural Networks?

For decades, the silicon industry has driven technological advancements, constantly improving the power, performance, and area of transistors. However, many applications, such as forensics, smart packaging, and accessible healthcare products, require properties that rigid silicon systems cannot provide, like ultra-low cost, stretchability, and flexibility. Printed electronics, which use additive methods like inkjet or screen printing, are perfectly suited for these domains. They offer extremely low costs and fast manufacturing times.

Despite the low resolution and large feature sizes inherent in printed electronics, printed neural networks have garnered attention because they can meet the demands of these applications, especially for classifying analog sensor data. The challenge lies in implementing complex machine learning classifiers with the limited integration density and power availability of printed circuits.

Addressing Limitations with Approximation

To overcome the limitations of printed electronics, two main design strategies are gaining traction: bespoke design and approximate computing. Bespoke design involves creating highly customized circuits for each specific machine learning model and dataset, a level of customization that is impractical in silicon-based systems due to high fabrication costs. Approximate computing, on the other hand, leverages the fact that machine learning applications can tolerate some errors, allowing for aggressive reductions in circuit complexity and gate count.

This research combines these ideas, focusing on digital approximate printed neural networks using Electrolyte-Gated FET (EGFET) technology, known for its good mobility and low-voltage operation, making it ideal for battery-powered applications. Recognizing that printed multipliers are very expensive, the team shifted to multiplier-less networks like Ternary Neural Networks (TNNs), which use only adders, subtractors, and simple re-wiring for shift operations.
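To see why ternary weights eliminate multipliers entirely, here is a minimal sketch (not code from the paper) of a dot product with weights restricted to {-1, 0, +1}: every "multiplication" collapses to an addition, a subtraction, or a removed connection.

```python
def ternary_dot(x, w):
    """Multiplier-less dot product with ternary weights in {-1, 0, +1}.

    Each term reduces to an add, a subtract, or a skip, so the
    hardware needs only adders and subtractors, no multipliers.
    """
    acc = 0
    for xi, wi in zip(x, w):
        if wi == 1:
            acc += xi      # maps to an adder
        elif wi == -1:
            acc -= xi      # maps to a subtractor
        # wi == 0 contributes nothing: the wire is simply omitted
    return acc

# Example: four 4-bit sensor inputs with ternary weights
print(ternary_dot([3, 7, 1, 5], [1, -1, 0, 1]))  # 3 - 7 + 5 = 1
```

In a bespoke printed circuit, each zero weight removes its input wiring outright, which is exactly why ternary networks shrink so dramatically compared to multiplier-based designs.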

The Novel Approach: Arbitrary Precision and Holistic Optimization

Previous work often required higher precision inputs to maintain accuracy, overlooking the significant overheads of analog-to-digital converters (ADCs). This paper extends prior research by co-optimizing both the analog front-end (ADC) and the digital classifier logic. It proposes an automated framework for designing approximate TNNs with arbitrary input precision, meaning it can work with 1-bit, 2-bit, 3-bit, or 4-bit inputs, finding the optimal balance for each application.
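The cost of the analog front-end scales with input precision, which is why the choice of 1 to 4 bits matters. The toy model below is an idealized uniform quantizer for illustration only; the paper co-optimizes actual printed ADC circuits, not this mathematical abstraction.

```python
def quantize(v, bits, vmax=1.0):
    """Idealized n-bit ADC: map an analog reading v in [0, vmax]
    to one of 2**bits codes. Illustrative assumption only -- real
    printed ADCs have area/power costs that grow with `bits`,
    which is the overhead the framework trades off against accuracy.
    """
    levels = (1 << bits) - 1                      # highest code
    v = min(max(v, 0.0), vmax)                    # clamp to range
    return round(v / vmax * levels)

# The same reading at each supported precision
print([quantize(0.63, b) for b in (1, 2, 3, 4)])
```

Each extra bit doubles the number of comparator levels the analog interface must resolve, so a dataset that stays accurate at 2-bit inputs can save substantial area and power over one that needs 4 bits.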

A key innovation is the design of approximate linear threshold gates (LTGs) for the hidden neurons and approximate popcount units for the output neurons. These approximations are identified using multi-objective optimization, ensuring the best trade-off between accuracy and area efficiency. Unlike existing methods, this framework consistently delivers area-efficient solutions across various datasets and accuracy requirements.
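For intuition, here is a behavioral sketch of the two exact building blocks, hypothetical reference models rather than the paper's circuits: the evolved approximate units replace these exact functions with cheaper logic that matches them on most inputs.

```python
def ltg(x, w, theta):
    """Exact linear threshold gate modeling a hidden TNN neuron:
    fires (outputs 1) when the ternary-weighted input sum reaches
    the threshold theta."""
    s = sum(xi if wi == 1 else -xi for xi, wi in zip(x, w) if wi != 0)
    return 1 if s >= theta else 0

def signed_popcount(bits, signs):
    """Exact output-neuron score: a signed count of the hidden-layer
    activations, with signs again drawn from {-1, +1}."""
    return sum(b if s == 1 else -b for b, s in zip(bits, signs))

# Two hidden neurons over the same 3 inputs, then one output score
h = [ltg([2, 5, 1], [1, -1, 1], 0),   # 2 - 5 + 1 = -2  -> 0
     ltg([2, 5, 1], [-1, 1, 0], 2)]   # -2 + 5     =  3 -> 1
print(signed_popcount(h, [1, 1]))     # 0 + 1 = 1
```

The class with the highest popcount score wins, so the output stage never needs full-precision accumulation either.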

How it Works: Two-Phase Approximation

The methodology involves two main phases. First, Cartesian Genetic Programming (CGP) is used to evolve circuits that approximate the LTG and popcount functions, creating a library of efficient approximate units. In the second phase, the Non-dominated Sorting Genetic Algorithm (NSGA-II) integrates these approximate components into a bespoke TNN circuit, aiming for maximum resource efficiency with minimal accuracy loss. This co-design of the analog front-end and digital classifier is crucial for achieving maximum area efficiency for each target accuracy.
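At the heart of both phases is multi-objective selection: keeping only designs that are not beaten on both accuracy and area at once. The snippet below sketches that Pareto-dominance filter in its simplest form, with made-up (accuracy loss, area) numbers; NSGA-II builds on this non-dominated sorting with ranking and crowding-distance mechanisms omitted here.

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective
    (here: accuracy loss, area) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated trade-off points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate TNN designs: (accuracy loss %, normalized area)
cands = [(0.5, 1.0), (1.8, 0.6), (2.1, 0.7), (4.9, 0.3)]
print(pareto_front(cands))  # (2.1, 0.7) is dominated by (1.8, 0.6)
```

The surviving front is exactly the menu of accuracy/area trade-offs from which a designer picks the smallest circuit meeting the target accuracy.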

The results are impressive. For a maximum 2% accuracy loss, the proposed approximate TNNs achieve on average 2.9 times lower area and 4.7 times lower power than their exact counterparts, including ADC costs. Compared to the most efficient approximate MLPs in the state of the art, these TNNs deliver 21 times lower area and 67 times lower power at a 2% accuracy loss, and even greater gains (36 times lower area and 139 times lower power) at a 5% accuracy loss. This highlights the superior scalability and efficiency of the new framework.

Enabling Battery-Powered Printed Systems

One of the most significant achievements is that these TNNs are the only solution that meets the 30 mW power constraint across all tested datasets, making printed-battery-powered operation feasible. This is a critical step toward widespread adoption of smart services in resource-constrained domains where traditional silicon systems are not suitable. The research also shows that the TNNs are robust to process variations, a common challenge in printed electronics, maintaining well-bounded accuracy even with high variation levels in the analog components.

In conclusion, this work presents a groundbreaking end-to-end digital printed classifier that successfully addresses the resource and power constraints of printed electronics. By co-optimizing the analog front-end and digital classifier with arbitrary input precision and holistic approximation, it paves the way for highly efficient, battery-powered printed neural networks for a new generation of smart, flexible, and ultra-low-cost applications.

Karthik Mehta
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
