
Optimizing Compute-in-Memory for AI: The Manhattan Distance Mapping Approach

TLDR: Manhattan Distance Mapping (MDM) is a new technique that optimizes deep neural network (DNN) weight placement in memristive Compute-in-Memory (CIM) crossbars. It addresses parasitic resistance (PR), a major issue limiting CIM scalability and accuracy. MDM works by reversing the crossbar dataflow and reordering rows by a Manhattan-distance-based score, moving active memory cells into regions less affected by PR. This post-training method reduces the nonideality factor by up to 46% and improves DNN inference accuracy by an average of 3.6% on ResNets, enabling larger and more reliable CIM accelerators without hardware changes or retraining.

In the rapidly evolving world of artificial intelligence, specialized hardware accelerators are crucial for efficient deep neural network (DNN) processing. One promising technology is Compute-in-Memory (CIM), which integrates data storage and computation directly within the same physical fabric. This approach significantly reduces the energy-intensive movement of data between memory and processing units, leading to faster and more energy-efficient AI operations.

However, CIM architectures face a significant challenge: nonidealities. These imperfections can degrade the accuracy of AI models and limit the scalability of these powerful systems. Among these, parasitic resistance (PR) stands out as a major bottleneck. Parasitic resistance arises from the resistive interconnects within the crossbar arrays that form the core of CIM systems. As electrical signals travel through these resistive paths, voltage drops occur, leading to inaccuracies in computations. This effect forces designers to use smaller crossbar tiles, which in turn increases the need for digital synchronization, leading to higher latency and reduced overall system throughput.
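To make the mechanism concrete, here is a minimal sketch of how IR drop accumulates along a single crossbar line. It assumes a simplified model in which every wire segment adds the same resistance and every active cell sinks the same current; the values r_wire and i_cell are illustrative, not taken from the paper:

```python
# Minimal IR-drop model for one crossbar line (illustrative, not from the paper).
# Assumes every wire segment adds r_wire ohms and every active cell sinks i_cell.
import numpy as np

r_wire = 1.0                                  # ohms per segment (illustrative)
i_cell = 1e-5                                 # amps per active cell (illustrative)
active = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # cell activity; I/O rail at index 0

# A segment near the rail carries the current of every active cell behind it.
seg_current = np.cumsum(active[::-1])[::-1] * i_cell
# Voltage lost by the time the signal reaches each cell grows cumulatively.
v_drop = np.cumsum(seg_current * r_wire)
print(v_drop)   # monotonically increasing: farther cells see a larger drop
```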

Introducing Manhattan Distance Mapping (MDM)

To tackle the pervasive issue of parasitic resistance, researchers Matheus Farias, Wanghley Martins, and H. T. Kung have introduced an innovative technique called Manhattan Distance Mapping (MDM). This method is a post-training deep neural network weight mapping strategy specifically designed for memristive bit-sliced CIM crossbars. MDM aims to reduce the negative impact of PR nonidealities without requiring any changes to the DNN model itself or the underlying hardware.

The core idea behind MDM is to intelligently reorganize the placement of active memristors within the crossbar array. The researchers observed that voltage drops due to PR tend to increase proportionally with the “Manhattan distance” from the input/output (I/O) rails – a concept they term the Manhattan Hypothesis. Furthermore, DNN weights typically follow a bell-shaped distribution, meaning that lower-order bit columns are often denser (more active) than higher-order columns. This structured imbalance can exacerbate PR effects as current flows through longer, more resistive paths.
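A quick way to see this imbalance is to bit-slice a bell-shaped weight tensor and measure how often each bit position is set. The sketch below assumes 4-bit magnitude quantization with the sign handled separately, which is one common CIM convention but not necessarily the paper's exact scheme:

```python
# Sketch: bell-shaped weights make low-order bit-slice columns denser.
# Assumes 4-bit magnitude quantization; sign handled elsewhere (assumption).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.15, size=4096)                        # bell-shaped weights
q = np.round(np.abs(w) / np.abs(w).max() * 15).astype(int)  # 4-bit magnitudes

bits = (q[:, None] >> np.arange(4)) & 1                     # one column per bit slice
density = bits.mean(axis=0)                                 # fraction of ON cells
for name, d in zip(["b0 (LSB)", "b1", "b2", "b3 (MSB)"], density):
    print(f"{name}: {d:.2f}")   # density falls sharply toward the MSB column
```

Because most weights have small magnitude, the high-order magnitude bits are rarely set, while the low-order bits toggle roughly half the time.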

How MDM Works

MDM operates in three clever stages to mitigate PR:

First, it reverses the dataflow. By doing so, the denser, lower-order bit regions – where most active memristors are concentrated – are aligned with shorter conduction paths. This immediately reduces the cumulative voltage drops and, consequently, the impact of parasitic resistance.
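As a toy model, dataflow reversal can be viewed as moving the I/O rail to the opposite edge of the bit-sliced tile, which flips every column's distance from the rail. Using illustrative per-column densities (not measured values), a density-weighted distance shows the effect:

```python
# Toy model of dataflow reversal: the I/O rail moves to the LSB side, so the
# densest columns get the shortest conduction paths. Densities are illustrative.
import numpy as np

density = np.array([0.50, 0.34, 0.18, 0.05])   # b0 (LSB) ... b3 (MSB)
dist_before = np.arange(4)[::-1]               # rail beside MSB: LSB farthest
dist_after = np.arange(4)                      # rail beside LSB: LSB nearest

print("PR exposure before:", float(density @ dist_before))  # dense cells far
print("PR exposure after: ", float(density @ dist_after))   # dense cells near
```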

Second, MDM assigns a unique “Manhattan-based score” to each row in the crossbar. This score quantifies how far the active cells in that row are from the I/O rails, effectively measuring their exposure to parasitic effects.
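The paper's exact scoring function isn't spelled out in this summary, but one plausible instantiation scores a row by summing its active cells' column distances from the output rail. The function name manhattan_row_scores below is hypothetical, and the formula is an assumption rather than the paper's definition:

```python
# Hypothetical row score: sum of each active cell's column distance from the
# output rail. An assumption, not necessarily the paper's exact formula.
import numpy as np

def manhattan_row_scores(active: np.ndarray) -> np.ndarray:
    """active: (rows, cols) 0/1 map; column 0 is adjacent to the output rail."""
    col_dist = np.arange(active.shape[1])   # hops from the output rail
    return active @ col_dist                # per-row PR exposure

rng = np.random.default_rng(1)
active = (rng.random((6, 8)) < 0.4).astype(int)
print(manhattan_row_scores(active))
```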

Finally, the rows are reordered based on these scores in ascending order. This strategic permutation relocates the denser regions of active memristors closer to the I/O interfaces, placing them in areas less affected by resistance buildup. This spatial reorganization significantly reduces the “nonideality factor” (NF), which measures the deviation of the measured output from its ideal value, all while preserving the original arithmetic meaning of the computations.
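A minimal sketch of this final stage follows: sort rows by ascending score and apply the same permutation to the input vector, which leaves the crossbar's matrix-vector product untouched. The scoring rule is the same hypothetical one as above:

```python
# Sketch: reorder rows by ascending score; permuting the inputs identically
# preserves the matrix-vector arithmetic. Scoring rule is hypothetical.
import numpy as np

rng = np.random.default_rng(2)
weights = (rng.random((6, 8)) < 0.4).astype(float)   # toy bit-slice tile
x = rng.random(6)                                     # one input per crossbar row

scores = weights @ np.arange(weights.shape[1])        # hypothetical row scores
order = np.argsort(scores)                            # ascending, per the method
w_mapped, x_mapped = weights[order], x[order]

# The permutation is arithmetic-preserving: outputs match exactly.
assert np.allclose(x @ weights, x_mapped @ w_mapped)
print("outputs unchanged after remapping")
```

Because rows and their corresponding inputs move together, the remapping changes only where current flows physically, not what the crossbar computes.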


Impact and Results

The effectiveness of MDM was rigorously tested through circuit-level and PyTorch-based simulations on ImageNet-1k, using DNN models including ResNets, VGGs, ViTs, and DeiTs. The results are compelling:

  • MDM successfully reduced the nonideality factor (NF) by up to 46%.
  • It improved inference accuracy under analog distortion by an average of 3.6% in ResNet architectures.

It’s worth noting that MDM showed slightly less effectiveness for transformer models. This is attributed to their characteristically flatter weight distributions, which can lead to denser higher-order columns and sparser lower-order ones, thereby diminishing some of MDM’s benefits.
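Repeating the earlier bit-slice density sketch with a flat (uniform) weight distribution illustrates the point: the density gradient across bit columns largely disappears, leaving MDM less room to improve placement (again assuming 4-bit magnitude quantization):

```python
# Sketch: flatter (uniform) weights leave high-order bit columns dense,
# flattening the density gradient MDM exploits. Same 4-bit magnitude assumption.
import numpy as np

rng = np.random.default_rng(3)
w = rng.uniform(-1.0, 1.0, size=4096)        # flat weight distribution
q = np.round(np.abs(w) * 15).astype(int)     # 4-bit magnitudes

bits = (q[:, None] >> np.arange(4)) & 1
for name, d in zip(["b0 (LSB)", "b1", "b2", "b3 (MSB)"], bits.mean(axis=0)):
    print(f"{name}: {d:.2f}")   # densities stay near 0.5 across all columns
```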

In conclusion, Manhattan Distance Mapping offers a lightweight, spatially informed, and highly effective method for scaling CIM DNN accelerators. By intelligently addressing parasitic resistance, MDM paves the way for larger and more accurate compute-in-memory systems, bridging the gap between algorithmic demands and device-level constraints. For more in-depth information, you can read the full research paper here.
