Assessing Cyber-Physical System Safety with Data and Probabilistic Confidence

TLDR: This research paper introduces a data-driven approach to evaluate the safety probability of Cyber-Physical Systems (CPS) modeled as Mealy machines, even when the system’s internal model is unknown. It uses a Probably Approximately Correct (PAC) learning paradigm to learn safe system behaviors from observed data, providing a safety probability along with a confidence level. The method employs active learning for efficient data collection and was validated on an Automated Lane-Keeping System, demonstrating its ability to assess safety and its limitations regarding scalability for complex scenarios.

Cyber-Physical Systems (CPS) are everywhere, from self-driving cars to medical devices, combining continuous physical processes with discrete operational modes. Ensuring the safety of these complex systems is incredibly important. Traditionally, verifying their safety often relies on having precise models, but these models are frequently unavailable or too difficult to create manually.

A new research paper introduces a data-driven solution to evaluate the safety of CPS, specifically those that can be represented as Mealy machines – a type of finite state machine used to model discrete system behavior. This innovative approach helps determine the probability of a system remaining safe over a specific period, even when its internal model is unknown.

The core of this method lies in combining several key concepts: Mealy machines as models for discrete CPS behavior, probabilistic reachability analysis to calculate the likelihood of reaching safe states, and a learning framework called Probably Approximately Correct (PAC) learning. PAC learning is crucial because it allows the researchers to not only determine a safety probability but also provide a confidence level for that probability, indicating how accurate the learned safety assessment is.
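To make the Mealy-machine idea concrete, here is a minimal sketch in Python. A Mealy machine produces an output that depends on both its current state and the input it receives. The two-state lane-keeping abstraction below (states, inputs, and outputs) is purely illustrative and not taken from the paper:

```python
# Minimal Mealy machine: the output depends on the current state AND the input.
# The toy transition/output tables below are illustrative, not from the paper.

class MealyMachine:
    def __init__(self, transitions, outputs, initial):
        self.transitions = transitions  # (state, input) -> next state
        self.outputs = outputs          # (state, input) -> output
        self.state = initial

    def step(self, symbol):
        out = self.outputs[(self.state, symbol)]
        self.state = self.transitions[(self.state, symbol)]
        return out

# Toy lane-keeping abstraction: 'centered' vs 'drifting'.
transitions = {
    ("centered", "steer"): "centered",
    ("centered", "coast"): "drifting",
    ("drifting", "steer"): "centered",
    ("drifting", "coast"): "drifting",
}
outputs = {
    ("centered", "steer"): "safe",
    ("centered", "coast"): "safe",
    ("drifting", "steer"): "safe",
    ("drifting", "coast"): "alarm",
}

m = MealyMachine(transitions, outputs, "centered")
print([m.step(s) for s in ["coast", "coast", "steer"]])  # ['safe', 'alarm', 'safe']
```

A "safe path" in this setting is simply an input sequence whose outputs never include an unsafe symbol such as `alarm`.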

The process works by observing the system and collecting data on its behavior. Instead of trying to build a complete model of the system, the approach focuses on learning the ‘safe paths’ – sequences of inputs that lead to safe states. The learning is active, meaning the learner samples new inputs in a guided way to improve its understanding of what constitutes safe operation. Once enough data is collected, the method can count the safe paths and, from that, calculate the overall safety probability of the system for a given time horizon.
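The path-counting idea can be sketched with a Monte Carlo stand-in: sample random input sequences of a given length, replay them on the system, and record the fraction that stays safe. The two-state machine and input alphabet here are illustrative assumptions, not the paper's model, and uniform random sampling stands in for the paper's guided active-learning strategy:

```python
import random

# Toy two-state abstraction (illustrative, not the paper's model):
# (state, input) -> (next_state, output)
STEP = {
    ("centered", "steer"): ("centered", "safe"),
    ("centered", "coast"): ("drifting", "safe"),
    ("drifting", "steer"): ("centered", "safe"),
    ("drifting", "coast"): ("drifting", "alarm"),
}

def estimate_safety(horizon, n_samples, seed=0):
    """Fraction of uniformly sampled input sequences of length `horizon`
    whose outputs never include 'alarm' -- a Monte Carlo stand-in for
    counting safe paths over a finite time horizon."""
    rng = random.Random(seed)
    safe = 0
    for _ in range(n_samples):
        state, ok = "centered", True
        for _ in range(horizon):
            state, out = STEP[(state, rng.choice(["steer", "coast"]))]
            if out == "alarm":
                ok = False
                break
        safe += ok
    return safe / n_samples

# Longer horizons give more chances to hit the alarm state, so the
# estimated safety probability decreases as the horizon grows.
print(estimate_safety(horizon=2, n_samples=10_000))
print(estimate_safety(horizon=10, n_samples=10_000))
```

Note how the estimate shrinks with the horizon: each extra step is another opportunity to enter the unsafe state, mirroring the horizon-dependent results reported in the case study below.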

A significant advantage of this technique is its ability to work with systems where a precise model is not available, relying instead on observed data. It also offers a unique ‘confidence level’ alongside the safety probability, which is a valuable addition compared to other stochastic methods. While the approach does face challenges with scalability for extremely complex systems or very long time horizons, its active learning strategy helps in efficient data collection.

The researchers validated their methodology using a practical case study: an Automated Lane-Keeping System (ALKS) in a car. They compared a car with ALKS to one without, observing how the safety probability changed over different time horizons. The results showed that for systems without ALKS, the safety probability decreased over longer periods, as the car was more likely to enter an unsafe ‘alarm’ state from which it couldn’t recover. In contrast, the ALKS-equipped car maintained a higher safety level because it could recover from unsafe situations. The study also highlighted that the confidence in the safety estimate decreases as the time horizon increases if the amount of initial learning data remains constant, underscoring the need for more data for longer, more complex scenarios.
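The trade-off between data, accuracy, and confidence can be illustrated with a standard Hoeffding-style sample bound from PAC-style analysis: to estimate a probability to within ±ε with confidence 1 − δ, roughly N ≥ ln(2/δ) / (2ε²) independent observations are needed. This is a generic textbook bound used here only for intuition; it is not necessarily the exact bound derived in the paper:

```python
import math

def samples_needed(epsilon, delta):
    """Hoeffding-style bound: i.i.d. observations needed to estimate a
    probability to within +/- epsilon with confidence 1 - delta.
    Generic PAC-style bound for intuition, not necessarily the paper's."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

# Tighter accuracy demands quadratically more data:
print(samples_needed(0.05, 0.05))  # 738
print(samples_needed(0.01, 0.05))  # 18445
```

Read in reverse, the bound explains the study's observation: with a fixed budget of N observations, the achievable accuracy and confidence degrade, so longer and more complex scenarios require more data to keep the same guarantee.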

This work represents a significant step forward in data-driven safety analysis for Cyber-Physical Systems, offering a robust way to assess safety with quantifiable confidence, even when traditional models are out of reach. You can read the full research paper for more technical details here.

Meera Iyer
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach out to her at: [email protected]
