TL;DR: This research paper introduces a novel framework for analyzing and designing AI-based autonomous vehicles (AVs) by explicitly modeling AI-induced perception uncertainties. It identifies three types of errors—stochastic jumping (misdetection), measurement noise, and bounded bias—and models them using Markov chains, Gaussian processes, and bounded disturbances, respectively. The paper develops tools for studying the stochastic stability and robustness of AVs, proposing a stochastic optimal guaranteed cost control (SOGCC) method based on linear matrix inequalities (LMIs). Applied to a car-following scenario, SOGCC demonstrates superior performance in terms of error convergence, collision avoidance, and passenger comfort, even under challenging perception conditions, significantly enhancing the reliability of autonomous driving systems.
Autonomous vehicles (AVs) are rapidly advancing, largely thanks to sophisticated artificial intelligence (AI) models that excel in complex perception tasks. However, integrating these AI systems into the feedback loop of autonomous driving introduces significant risks, primarily due to a limited understanding of how AI-driven perception processes truly work. A new research paper, “Approaches to Analysis and Design of AI-Based Autonomous Vehicles”, by Tao Yan, Zheyu Zhang, Jingjing Jiang, and Wen-Hua Chen, addresses this critical challenge by developing comprehensive tools for modeling, analyzing, and synthesizing AI-based AVs, with a particular focus on their closed-loop properties like stability, robustness, and performance in a statistical sense.
The core issue lies in the inherent uncertainties introduced by AI models. While deep neural networks have dramatically improved sensing and perception (S&P) capabilities, they are often trained in a ‘black-box’ manner and validated on limited datasets. This makes it difficult for engineers to predict their robustness when encountering unforeseen or erroneous situations. Moreover, in a closed-loop driving system, even small AI-induced uncertainties can propagate over time, negatively impacting decision-making and planning systems. Understanding how and to what extent these AI-driven S&P processes affect an AV’s closed-loop behavior is fundamental to ensuring safe and reliable automated driving.
Modeling AI-Induced Perception Uncertainties
To tackle this, the researchers propose a novel way to model AI-driven perception processes by focusing on their error characteristics. They identify and model three fundamental types of AI-induced perception uncertainties:
- Stochastic Jumping: Modeled by Markov chains, this accounts for sudden changes or misdetection phenomena, such as sensor failures or occlusions.
- Measurement Noise: Represented by Gaussian processes, this captures random fluctuations in sensor readings.
- Bounded Bias: Modeled as bounded (norm-limited) disturbances, this addresses unknown, low-frequency biases, such as consistent sensor offsets.
This approach leads to the development of a Perception Error Model (PEM) and a PEM-based Automated Driving Model (PEM-ADM), which explicitly includes these heterogeneous sources of uncertainty in the feedback loop, extending traditional control system models.
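To make the three uncertainty sources concrete, here is a minimal sketch of a PEM-style perception channel: a true gap signal corrupted by Markov-modulated misdetection, Gaussian noise, and a constant bounded bias. The hold-last-measurement policy on a miss and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def simulate_pem(true_gap, p_keep=0.9, p_recover=0.7,
                 noise_std=0.5, bias=0.3, seed=0):
    """Sketch of a perception error model (PEM): the true gap signal is
    corrupted by Markov-chain misdetection, Gaussian measurement noise,
    and a constant bounded bias. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    perceived = np.empty_like(true_gap, dtype=float)
    detected = True            # two-state Markov chain: detected / missed
    last_valid = float(true_gap[0])
    for k, g in enumerate(true_gap):
        # Transition of the detection Markov chain
        if detected:
            detected = rng.random() < p_keep      # stay detected
        else:
            detected = rng.random() < p_recover   # recover from a miss
        if detected:
            # Gaussian noise plus a constant bounded bias
            last_valid = g + rng.normal(0.0, noise_std) + bias
        # On a miss, hold the last valid measurement (one possible policy)
        perceived[k] = last_valid
    return perceived

gap = 20.0 + np.zeros(200)     # constant true gap of 20 m
y = simulate_pem(gap)
```

Feeding the perceived signal (rather than the true one) into the controller is what closes the loop through the PEM and yields the PEM-ADM structure described above.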
Ensuring Stability and Robustness
Using the PEM-ADM, the paper rigorously studies the closed-loop stochastic stability (SS) of AI-based Automated Driving Systems (ADSs). Stability is established by checking the feasibility of a set of linear matrix inequalities (LMIs). A crucial finding is that an AI-based ADS may fail to be stochastically stable if certain conditions are violated, offering vital insights for designing SS-aware ADSs. The research also provides a method for synthesizing stabilizing controllers within the same LMI framework.
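The paper's LMI conditions are not reproduced here, but for the Markov-jump structure induced by misdetection there is a classical, equivalent mean-square-stability test one can sketch: stack the second-moment dynamics of a Markov jump linear system and check a spectral radius. The modes and transition matrix below are illustrative, not taken from the paper.

```python
import numpy as np

def mean_square_stable(modes, trans):
    """Classical mean-square-stability test for a discrete-time Markov
    jump linear system x[k+1] = A_{theta(k)} x[k], where theta is a
    Markov chain with trans[i, j] = P(next mode j | current mode i).
    Equivalent to feasibility of coupled Lyapunov-type LMIs."""
    N = len(modes)
    n = modes[0].shape[0]
    # Second moments propagate as Q_j+ = sum_i p_ij A_i Q_i A_i^T.
    # Vectorized, the stacked moments evolve through the block matrix M
    # with blocks M[j, i] = p_ij * kron(A_i, A_i); MSS  <=>  rho(M) < 1.
    M = np.zeros((N * n * n, N * n * n))
    for i, Ai in enumerate(modes):
        Ki = np.kron(Ai, Ai)
        for j in range(N):
            M[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = trans[i, j] * Ki
    return bool(np.max(np.abs(np.linalg.eigvals(M))) < 1.0)

P = np.array([[0.9, 0.1], [0.1, 0.9]])       # illustrative transitions
stable_modes   = [0.5 * np.eye(2), 0.3 * np.eye(2)]
unstable_modes = [1.5 * np.eye(2), 1.2 * np.eye(2)]
ok  = mean_square_stable(stable_modes, P)    # both modes contract
bad = mean_square_stable(unstable_modes, P)  # both modes expand
```

This mirrors the paper's headline point: whether the closed loop is stochastically stable depends jointly on the mode dynamics and the misdetection transition probabilities, not on any single mode alone.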
Beyond stability, the paper introduces a novel concept of ‘stochastic guaranteed cost’ to quantify how robustly an ADS can perform against these AI-induced uncertainties. Criteria are developed to test the robustness level of an AV. Furthermore, the researchers investigate stochastic optimal guaranteed cost control (SOGCC), presenting an efficient design procedure based on LMI techniques and convex optimization to achieve the best guaranteed performance despite uncertainties.
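In the nominal, uncertainty-free case, the guaranteed-cost idea reduces to a familiar fact worth seeing once: for a stabilized linear system, the infinite-horizon quadratic cost equals x0ᵀP x0, where P solves a discrete Lyapunov equation; SOGCC's LMIs extend such a bound to hold despite misdetection, noise, and bias. The model, gain, and weights below are illustrative assumptions.

```python
import numpy as np

# Nominal double-integrator model with an illustrative stabilizing gain K.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = np.array([[1.0, 1.0]])
A_cl = A - B @ K                    # closed-loop dynamics

Q = np.eye(2)                       # state weighting
R = np.array([[1.0]])               # input weighting
Q_bar = Q + K.T @ R @ K             # per-step cost x^T Q_bar x

# Solve P = A_cl^T P A_cl + Q_bar by fixed-point iteration (converges
# because A_cl is Schur stable); then J = sum_k x_k^T Q_bar x_k = x0^T P x0.
P = np.zeros((2, 2))
for _ in range(2000):
    P = Q_bar + A_cl.T @ P @ A_cl

x0 = np.array([5.0, 0.0])
bound = float(x0 @ P @ x0)

# Cross-check the cost certificate by direct simulation.
x, J = x0.copy(), 0.0
for _ in range(2000):
    J += float(x @ Q_bar @ x)
    x = A_cl @ x
```

In the stochastic setting, the accumulated cost is a random variable; SOGCC optimizes the certified upper bound on its expectation via convex optimization over the LMI constraints.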
Real-World Application: Car-Following Control
To demonstrate practical effectiveness, the authors apply their methodology to a car-following control example. In this scenario, an ego vehicle must maintain a prescribed safe distance from a leading vehicle while its perception system is subject to misdetection, noise, and bias. The control law acts on perceived environment information that is corrupted by these uncertainties.
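A minimal sketch of this loop, assuming a simple proportional feedback law (not the paper's SOGCC controller) acting on a gap measurement corrupted by misdetection, noise, and bias; gains, rates, and magnitudes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, N = 0.1, 600
d_star = 20.0             # prescribed safe gap (m)

k_e, k_v = 0.2, 0.8       # illustrative feedback gains
e, dv = 5.0, 0.0          # gap error e = gap - d_star; dv = v_lead - v_ego
last_meas = e             # last valid gap-error measurement
gaps = np.empty(N)

for k in range(N):
    # Perceived gap error: 10% misdetection (hold last value),
    # Gaussian noise, and a constant 0.3 m bias.
    if rng.random() < 0.9:
        last_meas = e + rng.normal(0.0, 0.5) + 0.3
    a = k_e * last_meas + k_v * dv    # ego acceleration command
    e += dt * dv                      # gap-error kinematics
    dv -= dt * a                      # relative-speed dynamics
    gaps[k] = e + d_star
```

Even this toy loop shows the bias pulling the steady-state gap off its setpoint, which is precisely the kind of degradation the paper's guaranteed-cost design quantifies and bounds.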
Experimental Validation
Extensive simulations were conducted over 200 independent trials, comparing the proposed SOGCC method with Stochastic Stabilizing Control (SSC) and the Intelligent Driver Model (IDM). The results clearly show the superior performance of the SOGCC approach. Under high misdetection rates and noisy, biased perception:
- The SOGCC approach’s root mean square error (RMSE) converged quickly to a very small steady-state error, demonstrating high performance.
- The SSC approach also converged but required significantly more time.
- The IDM method failed to converge, leading to highly risky and unsafe driving, including potential collisions, as evidenced by trajectories entering the collision zone.
Moreover, SOGCC generated the smoothest and most reasonable control signals, striking a favorable balance between performance, safety, and passenger comfort, in contrast to the highly fluctuating actions of the SSC policy. These results show that adverse perception conditions can severely degrade ADS performance, and that explicitly incorporating misdetection into the control design substantially improves the reliability of autonomous driving.
In conclusion, this research provides a crucial framework for understanding and mitigating the risks associated with AI-induced uncertainties in autonomous vehicles. By offering tools for stability analysis, control synthesis, and optimal guaranteed cost control, it paves the way for more reliable, robust, and comfortable autonomous driving systems.


