TLDR: The paper “The Problem of Algorithmic Collisions” by Maurice Chiodo and Dennis Müller highlights the systemic risks that arise from unforeseen interactions between autonomous algorithmic systems, including AI. It argues that current governance is inadequate because the algorithmic ecosystem is opaque and because these interactions escalate faster than humans can monitor or react. Examples like self-driving cars, automated pricing, flash crashes, and smart grid instability illustrate the dangers. The authors propose three remedies: registers of high-risk systems, licensing for developers and deployers, and mandatory automated monitoring and intervention mechanisms, all aimed at increasing transparency, accountability, and safety in a rapidly expanding algorithmic ecosystem.
In an increasingly interconnected world driven by Artificial Intelligence (AI) and autonomous systems, a new and significant systemic risk is emerging: algorithmic collisions. This phenomenon occurs when different algorithmic systems interact in unforeseen ways, leading to rapidly escalating negative outcomes that can far exceed the harm caused by individual algorithms. While much attention is often paid to the safety of single algorithms, the collective behavior of these interacting systems presents a critical and often underestimated danger.
The core of the problem lies in the “opacity of the algorithmic ecosystem.” Developers often don’t know what other algorithms their systems will interact with, and even deployers might lack full visibility into the complex web of interactions. This lack of transparency makes it incredibly difficult to anticipate or mitigate potential collisions.
Real-World Examples of Algorithmic Collisions
The paper highlights several compelling examples to illustrate this challenge. Imagine two self-driving cars, built by different manufacturers, approaching each other. If their independent safety protocols, optimized locally, unexpectedly interact, it could lead to a physical collision or widespread gridlock. Current training for self-driving cars is primarily based on human-driven road data, not interactions with other autonomous vehicles.
A more subtle, yet equally impactful, example involved two automated pricing algorithms on Amazon.com. Programmed with simple rules, these algorithms inadvertently drove the price of a book, “The Making of a Fly,” to over $23 million. One algorithm aimed to be slightly cheaper than the next lowest price, while the other aimed to sell at a profit margin above the cheapest. This created an exponential pricing loop, demonstrating how simple, rational rules can lead to absurd global outcomes when interacting blindly.
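The arithmetic behind the runaway is simple: the undercutting rule multiplied the rival’s price by slightly less than one, the margin rule multiplied it by noticeably more than one, and the product of the two multipliers exceeded one. The minimal Python sketch below reenacts this using the two multipliers reported in contemporaneous analyses of the incident (0.9983 and 1.270589); the starting price is an assumption.

```python
# Toy reproduction of the "Making of a Fly" pricing loop.
# Seller A undercuts its rival slightly; seller B prices at a fixed
# markup over its rival. Each full round multiplies both prices by
# 0.9983 * 1.270589 ~= 1.268, i.e. exponential growth.

UNDERCUT = 0.9983     # A: price just below the competitor's
MARKUP = 1.270589     # B: price at a margin above the competitor's

TARGET = 23_698_655.93          # the peak price actually observed
price_a = price_b = 35.54       # assumed ordinary starting price

rounds = 0
while price_b < TARGET:
    price_a = UNDERCUT * price_b   # A reprices against B
    price_b = MARKUP * price_a     # B reprices against A
    rounds += 1

print(f"B's price hits ${price_b:,.2f} after {rounds} repricing rounds")
```

Well under a hundred daily repricings are enough to take an ordinary book price past $23 million, which is why nobody noticed until the absurd outcome was already on the site.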
Financial markets have already experienced the consequences of algorithmic collisions. In the 2010 “Flash Crash,” major stock market indices plunged by about 9% and then rebounded, all within 36 minutes, after a relatively small market spoofing event was amplified by high-frequency trading algorithms. Similarly, the 2012 Knight Capital incident cost the company $460 million in 45 minutes, after a software deployment error caused its algorithms to repeatedly re-order completed trades. These events underscore how quickly algorithmic interactions can escalate beyond human oversight.
On a national scale, demand-side management in smart electricity grids presents another potential collision point. If multiple electricity companies, using independent algorithms, all calculate the “best time” to switch on customer devices (like electric vehicle chargers or smart appliances) to be the same, it could lead to a massive, simultaneous surge in demand. This could cause grid instability, cascading failures, and widespread blackouts, similar to the “TV pickup” phenomenon but without the predictability of broadcast schedules.
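To make the coordination failure concrete, here is a hypothetical sketch in which several suppliers independently schedule flexible loads into the cheapest forecast hour. The supplier names, prices, and load figures are all illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch: several electricity suppliers independently
# schedule flexible loads (e.g. EV chargers) into the cheapest hour.
# Because they all see the same price forecast, their "optimal"
# choices coincide -- and the aggregate demand spike lands in one hour.

price_forecast = {  # assumed overnight wholesale prices, GBP/MWh
    0: 42, 1: 38, 2: 31, 3: 29, 4: 33, 5: 47,
}

def best_hour(prices):
    """Each supplier's scheduler: independently pick the cheapest hour."""
    return min(prices, key=prices.get)

suppliers = [f"supplier_{i}" for i in range(6)]
load_per_supplier_mw = 500  # assumed flexible load each one controls

demand = {hour: 0 for hour in price_forecast}
for s in suppliers:
    demand[best_hour(price_forecast)] += load_per_supplier_mw

print(demand)  # {0: 0, 1: 0, 2: 0, 3: 3000, 4: 0, 5: 0}
# All 3 GW of flexible load switches on at hour 3 simultaneously --
# a step change in demand that no single scheduler intended.
```

Each scheduler is individually sensible; the danger comes purely from their decisions being correlated through a shared input.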
The Limits of Human Monitoring
Human monitoring is often proposed as a solution, but the paper argues it is largely inadequate for algorithmic collisions. The speed at which these problems escalate often far exceeds human reaction times: a self-driving car collision scenario can unfold in under 0.2 seconds, much faster than a human can react. Even slower, less visible interactions pose a threat, such as “model collapse” in generative AI. This occurs when new AI models are trained on data generated by other AIs, leading to a decay in the quality and variety of their output over time. This feedback loop, though slow, is hard to recognize and rectify, and it raises antitrust concerns: as “clean,” human-generated data becomes scarce, the few actors who hold large stores of it gain an entrenched advantage.
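A toy version of this decay is easy to construct. In the sketch below (all parameters are illustrative, not from the paper), each “model” is simply a normal distribution fitted to samples drawn from its predecessor; with no fresh real data entering the loop, the variety of the output, measured by its standard deviation, withers over generations.

```python
# Toy illustration of model collapse: each "generation" of a model is
# trained only on samples produced by the previous generation.
import random
import statistics

random.seed(1)
mu, sigma = 0.0, 1.0   # generation 0: the real data distribution
N = 20                 # assumed (small) training-set size per generation

for gen in range(1, 201):
    synthetic = [random.gauss(mu, sigma) for _ in range(N)]  # parent's output
    mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
    if gen % 50 == 0:
        print(f"generation {gen}: sigma = {sigma:.4f}")
# On average, each refit on purely synthetic data loses a little of the
# distribution's tails, and nothing in the loop restores them, so the
# spread drifts toward zero over many generations.
```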
Furthermore, humans themselves can become part of the collision. Social media recommendation algorithms, by optimizing for engagement, can inadvertently lead users down “rabbit holes” of harmful or distressing content. The algorithm suggests incrementally more engaging content, the human grows incrementally more interested, and a feedback loop forms in which both sides act “rationally” while the consequences for the human are harmful.
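A stylized simulation makes the dynamic concrete. In the hypothetical sketch below, the recommender serves content 5% more intense than the user’s current taste, and the user’s taste drifts halfway toward whatever was served; both rules are mild on their own, yet intensity compounds by about 2.5% per step. The numbers are assumptions chosen for illustration.

```python
# Hypothetical sketch of the recommender/user feedback loop: each side
# takes a small, locally "rational" step, and the trajectory escalates.

taste = 1.0    # user's preferred content intensity (assumed scale)
NUDGE = 1.05   # recommender: 5% more intense engages slightly better
ADAPT = 0.5    # user: taste moves halfway toward what was consumed

for week in range(1, 53):
    served = NUDGE * taste                 # recommender's pick
    taste += ADAPT * (served - taste)      # user habituates to it
    if week % 13 == 0:
        print(f"week {week}: content intensity = {served:.2f}")
# Intensity compounds by ~2.5% per step, so after a year of small
# nudges the served content is several times as intense as where
# the user started.
```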
Towards Mitigation: Policy Suggestions
To address these growing risks, the paper proposes several pragmatic tools. Since n systems can interact in n(n-1)/2 possible pairs, the number of potential interactions grows quadratically with the number of systems, and even small reductions in the number of uncontrolled systems significantly reduce overall risk.
One suggestion is to implement registers of algorithmic systems. While registering every algorithm is infeasible, focusing on high-risk, high-impact systems (similar to the EU AI Act’s “High-Risk AI System” category) could provide crucial transparency. This would allow regulators and other developers to understand what systems are operating and how they might interact, even if the exact internal mechanisms remain proprietary.
Another tool is a licensing framework for algorithmic developers and deployers. Just as drivers need licenses, or professionals in other fields are accredited, requiring licenses for those developing and deploying significant algorithmic systems could ensure a baseline level of education on safe practices. It would also act as a deterrent, with licenses potentially revoked for harmful outcomes, and help regulators identify responsible parties.
Finally, mandating automated monitoring and intervention requirements for algorithms is crucial. Just like nuclear power plants have automatic safety shutoff systems, algorithms operating in critical ecosystems should have automated dampening or shutdown mechanisms. These systems would trigger when erratic behavior or rapid, atypical changes in operation are detected, curtailing problematic interactions before humans can even react. The Knight Capital incident, for example, could have been mitigated by such a system. The key insight here is that only one party in a collision needs an effective shutoff system to prevent or dampen a catastrophic interaction.
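As a hedged illustration of what such a mechanism might look like in practice, the sketch below wraps an order-submitting algorithm in a simple rate-based circuit breaker. The class name, baseline, sliding window, and trip factor are all assumptions for the example, not anything prescribed by the paper.

```python
# Hypothetical automated intervention layer: trip a kill switch when an
# algorithm's action rate departs sharply from its historical baseline.

import time
from collections import deque

class CircuitBreaker:
    def __init__(self, baseline_per_min: float, trip_factor: float = 10.0):
        self.limit = baseline_per_min * trip_factor
        self.events = deque()          # timestamps of recent actions
        self.tripped = False

    def record(self) -> None:
        """Call once per order/action the algorithm emits."""
        now = time.monotonic()
        self.events.append(now)
        while self.events and now - self.events[0] > 60.0:
            self.events.popleft()      # keep a sliding 60-second window
        if len(self.events) > self.limit:
            self.tripped = True        # atypical burst: halt, alert humans

breaker = CircuitBreaker(baseline_per_min=200)

def submit_order(order):
    breaker.record()
    if breaker.tripped:
        raise RuntimeError("circuit breaker tripped: trading halted")
    # ... otherwise forward the order to the venue ...
```

The monitor needs no understanding of why behavior became erratic, only that it did, which is exactly what makes it useful against interactions nobody anticipated.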
In conclusion, the proliferation of AI and autonomous systems is rapidly expanding the algorithmic ecosystem, increasing the frequency and complexity of interactions while decreasing the scope for human oversight. Addressing the problem of algorithmic collisions requires a multi-faceted approach that combines transparency, accountability, and automated safety measures. For more details, see the full paper, “The Problem of Algorithmic Collisions,” by Maurice Chiodo and Dennis Müller.


