
Normalcy Score: Enhancing Anomaly Detection by Quantifying Uncertainty

TLDR: A new framework called Normalcy Score (NS) is introduced for contextual anomaly detection. It uses Gaussian process regression to explicitly model both intrinsic data variability (aleatoric uncertainty) and uncertainty due to sparse data (epistemic uncertainty). NS provides a more reliable anomaly score with high-density intervals, outperforming existing methods and proving valuable in medical applications like assessing aortic normalcy.

In the realm of data analysis, identifying unusual patterns or ‘anomalies’ is crucial for many applications, from fraud detection to medical diagnostics. A specific challenge arises in what’s known as Contextual Anomaly Detection (CAD). Here, the normalcy of a particular variable (the ‘behavioral’ variable) depends heavily on other related variables (the ‘contextual’ variables). For instance, assessing a patient’s aorta diameter (behavioral) requires considering their age, weight, height, and sex (contextual), as a ‘normal’ diameter for one person might be anomalous for another.

Traditional CAD methods often fall short because they primarily focus on what’s called ‘aleatoric uncertainty’ – the inherent variability in the data that cannot be reduced, even with more information. However, they frequently overlook ‘epistemic uncertainty,’ which arises from a lack of data in certain areas of the contextual space. Imagine trying to assess the normalcy of an aorta diameter for a very rare combination of age, weight, and height; the model might be uncertain simply because it hasn’t seen enough similar cases.

A new framework, the Normalcy Score (NS), addresses this critical gap by explicitly modeling both types of uncertainties. Developed by Luca Bindini, Lorenzo Perini, Stefano Nistri, Jesse Davis, and Paolo Frasconi, this novel approach is built upon heteroscedastic Gaussian process regression. Unlike conventional methods that treat the Z-score (a common measure of deviation from the mean) as a fixed value, NS regards it as a random variable. This allows the framework to not only provide a contextual anomaly score but also a ‘high-density interval’ (HDI), which indicates the reliability of the anomaly assessment.

The NS framework utilizes two independent Gaussian processes: one to model the expected mean of the behavioral variable given the context, and another to model its log standard deviation. This dual modeling is key to disentangling aleatoric uncertainty (captured by the standard deviation model) from epistemic uncertainty (reflected in the posterior variances of both processes, especially in sparse data regions). When the model is less certain due to insufficient data, the high-density interval for the anomaly score will be wider, signaling that the assessment might be unreliable.
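The two-GP construction described above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's standard GP regressor, fits the log-standard-deviation GP on log absolute residuals as a crude stand-in for the paper's joint heteroscedastic model, and estimates the score's high-density interval by Monte Carlo sampling from both posteriors. All function and variable names here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy data: behavioral variable y depends on context x, with
# context-dependent (heteroscedastic) noise.
x = rng.uniform(0, 10, 200).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.1 + 0.05 * x.ravel())

# GP 1: expected mean of the behavioral variable given the context.
gp_mean = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(x, y)

# GP 2: log standard deviation, fit on log absolute residuals
# (a simplification of the paper's heteroscedastic model).
resid = y - gp_mean.predict(x)
gp_logstd = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
    x, np.log(np.abs(resid) + 1e-6))

def normalcy_score(x_new, y_new, n_samples=2000, hdi=0.95):
    """Monte Carlo estimate of an anomaly score and its HDI."""
    x_new = np.atleast_2d(x_new)
    mu, mu_sd = gp_mean.predict(x_new, return_std=True)
    ls, ls_sd = gp_logstd.predict(x_new, return_std=True)
    # Epistemic uncertainty enters through the posterior variances:
    # sample plausible means and standard deviations from both GPs.
    mu_s = rng.normal(mu, mu_sd, n_samples)
    sd_s = np.exp(rng.normal(ls, ls_sd, n_samples))
    z = (y_new - mu_s) / sd_s            # Z-score as a random variable
    score = 2 * norm.cdf(np.abs(z)) - 1  # in [0, 1]; higher = more anomalous
    lo, hi = np.quantile(score, [(1 - hdi) / 2, 1 - (1 - hdi) / 2])
    return score.mean(), (lo, hi)

s, (lo, hi) = normalcy_score([[5.0]], 0.0)
```

In sparse regions of the contextual space the posterior standard deviations of both GPs grow, so the sampled scores spread out and the interval `(lo, hi)` widens, which is exactly the "wider HDI signals an unreliable assessment" behavior the paragraph describes.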

Experiments conducted on various benchmark datasets demonstrated that NS consistently outperforms state-of-the-art CAD methods like QCAD and ROCOD, as well as non-contextual anomaly detection algorithms such as Isolation Forest, on detection accuracy metrics like ROC-AUC and PR-AUC.

A significant real-world application was showcased in cardiology, where NS was used to assess the normalcy of aortic diameters. By training on data from normal subjects and testing on patients with conditions like Marfan syndrome or bicuspid aortic valve, NS proved more effective than its competitors at detecting aortic dilation. Crucially, the ability of NS to quantify epistemic uncertainty through the HDI (denoted as i(x,y)) helps clinicians identify patients for whom the model's predictions are less reliable, prompting further diagnostic tests or closer monitoring. This feature is particularly valuable in high-stakes domains like healthcare, where informed decision-making is paramount.

The research highlights that NS represents a significant advancement in contextual anomaly detection, offering a robust, interpretable, and uncertainty-aware solution. Future work aims to extend the framework to other domains and to handle vector-valued behavioral variables, such as assessing the overall shape of the aorta rather than individual diameters. For more in-depth technical details, refer to the full research paper.

Meera Iyer
https://blogs.edgentiq.com
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach out to her at: [email protected]
