TLDR: The increasing integration of Artificial Intelligence (AI) in derivatives markets has brought about significant efficiencies but also amplified existing policy concerns and introduced new ones. Key issues include cybersecurity vulnerabilities, third-party risks from concentrated AI service providers, the potential for market manipulation, and the need to maintain market stability and transparency amidst faster AI-driven trade execution.
Derivatives markets are undergoing a profound transformation as adoption of Artificial Intelligence (AI) accelerates, bringing enhanced efficiencies alongside a complex array of policy challenges. As of 2023, a staggering 99% of leading financial services firms in derivatives markets reported some form of AI deployment. Generative AI, which encompasses machine learning models trained on vast datasets and capable of producing new content such as code, text, or video, is also spreading quickly: a 2024 survey indicated that 89% of financial firms were already using it, primarily for internal operations rather than client-facing applications. Firms predominantly leverage predictive AI for critical functions such as risk management, fraud detection and prevention, operational optimization, and compliance.
The Commodity Futures Trading Commission (CFTC) has recognized the far-reaching implications of AI usage across the financial services sector, particularly for the derivatives markets it oversees. An in-depth 2024 study by the CFTC's Technology Advisory Committee highlighted AI's potential to automate processes such as risk management, market surveillance, fraud detection, and the identification, execution, and back-testing of trading strategies. Academics have noted that increased AI adoption has yielded greater efficiencies in back-office processing and trade execution. More recently, generative AI has enabled investment firms to process large volumes of unstructured data, enhancing their analytical trading tools.
However, this growing reliance on AI also introduces new risks and raises significant questions for regulatory oversight. A primary concern is ensuring robust cybersecurity and mitigating third-party risks associated with a concentrated group of external AI service providers, such as cloud and data providers. A May 2025 study by the Government Accountability Office (GAO) warned that a failure at one of these concentrated providers could affect a large number of financial companies, escalating systemic risk. In June 2025, CFTC Commissioner Kristin Johnson underscored cybersecurity as a growing concern amplified by this concentration.
Another critical policy issue revolves around preventing market manipulation by generative AI and maintaining market stability and transparency in an environment of increasingly rapid, AI-driven trade execution. The CFTC initiated a Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets on January 25, 2024, to address these and other emerging issues, though no further guidance has been issued to date.
Another significant concern is 'herding risk': the potential for individual trading entities to make similar decisions because they rely on a small set of AI providers, similar models, or homogeneous training data. Such AI-driven herding could exacerbate systemic risk in financial markets, particularly during periods of price volatility, a danger also flagged by the 2025 GAO report. Cybersecurity incidents, though a long-standing risk, are amplified by AI's integration into key financial processes. For instance, a February 2025 hacking incident at cryptocurrency exchange Bybit, which resulted in a loss of nearly $1.5 billion, appeared to stem from a third-party critical infrastructure system, underscoring the heightened cyber risk in an AI-driven financial landscape.