TLDR: Researchers from Electronic Arts have developed a sample-efficient Deep Reinforcement Learning (DRL) method to create human-like AI goalkeepers for EA SPORTS FC 25. This approach, detailed in their paper, leverages pre-collected data, curriculum learning, and a novel fine-tuning framework with expert feedback. The DRL goalkeeper outperforms the game’s built-in AI by 10% in ball saving rate, trains 50% faster, and is qualitatively judged by domain experts and playtesters as more realistic and enjoyable. The method is slated to replace traditional hand-crafted AI in future game iterations, demonstrating a practical application of DRL in commercial video games.
The world of video games has long sought to create Artificial Intelligence (AI) that feels authentic and human-like, rather than simply super-human. While Deep Reinforcement Learning (DRL) has achieved impressive feats in complex games, its application in crafting realistic AI behaviors for commercial titles has been limited due to the extensive resources and time typically required for training.
A recent research paper, titled “Human-Like Goalkeeping in a Realistic Football Simulation: a Sample-Efficient Reinforcement Learning Approach,” tackles this challenge head-on. Authored by Alessandro Sestini, Joakim Bergdahl, Jean-Philippe Barrette-LaPierre, Florian Fuchs, Brady Chen, Micheal Jones, and Linus Gisslén from SEED – Electronic Arts and EA Sports, this paper introduces a novel, sample-efficient DRL method designed specifically for industrial settings like the video game industry.
The core of their work focuses on training a goalkeeper agent in EA SPORTS FC 25, one of today’s best-selling football simulations. The goal was not just to create an AI that could save every shot, but one that would exhibit human-like decision-making and movement, improving upon the game’s existing hand-crafted AI, which often exhibited suboptimal and unrealistic behaviors. Manually designing a goalkeeper’s complex positioning and anticipation is incredibly challenging and time-consuming.
A Smarter Training Approach
The researchers developed a method that significantly improves the sample efficiency of value-based DRL. This means the AI can learn effectively with less data and in less time. Key components of their approach include:
- Leveraging Pre-collected Data: The method uses data from the game’s existing built-in AI to kickstart the learning process, providing a valuable foundation for the new agent.
- Increased Network Plasticity: Techniques like high replay ratios and periodic network resets were employed. These help the AI’s neural networks remain adaptable and prevent them from getting stuck in less-than-optimal learning patterns.
- Scenario-Based and Curriculum Learning: Instead of training on full, lengthy matches, the agent learns from specific, curated scenarios that mimic real goalkeeping challenges. These scenarios are introduced in phases, gradually increasing in complexity, much like a human learning a skill.
- Human-in-the-Loop Fine-Tuning: A crucial innovation is the ability to easily adjust the agent’s behavior without restarting training from scratch. Domain experts, such as professional goalkeepers and quality assurance testers, identify situations where the AI underperforms. New training scenarios are then created based on these specific failures, and the agent is fine-tuned efficiently using a technique called Replay across Experiment (RaE).
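To make the plasticity ideas concrete, here is a minimal toy sketch (not the paper’s implementation) of a value-based training loop that combines a high replay ratio, i.e. several gradient updates per collected transition, with periodic resets of the value network’s parameters while keeping the replay buffer intact. A linear value function and a dummy environment stand in for the real deep network and EA SPORTS FC 25; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4
REPLAY_RATIO = 4       # gradient updates per environment step (high replay ratio)
RESET_PERIOD = 500     # reinitialize the network every N environment steps
LR = 0.01
GAMMA = 0.9

def init_weights():
    # Fresh random weights restore plasticity after a reset.
    return rng.normal(scale=0.1, size=STATE_DIM)

def value(w, s):
    # Linear value function stands in for a deep Q-network.
    return float(w @ s)

w = init_weights()
replay_buffer = []     # stores (state, reward, next_state) transitions

for step in range(1, 2001):
    # Collect one transition from a toy stand-in environment.
    s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
    r = float(s[0] > 0)  # dummy reward signal
    replay_buffer.append((s, r, s_next))

    # High replay ratio: several TD updates per collected transition.
    for _ in range(REPLAY_RATIO):
        s_b, r_b, sn_b = replay_buffer[rng.integers(len(replay_buffer))]
        td_error = r_b + GAMMA * value(w, sn_b) - value(w, s_b)
        w += LR * td_error * s_b  # gradient step on the squared TD error

    # Periodic reset: discard the weights but keep the experience, so the
    # network relearns from the full buffer without getting stuck.
    if step % RESET_PERIOD == 0:
        w = init_weights()

print(len(replay_buffer))
```

The key design point is that resets throw away the parameters, not the data: because the replay buffer survives, the freshly initialized network can relearn quickly from everything collected so far.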
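The scenario curriculum and the RaE-style fine-tuning loop can likewise be sketched in miniature. The snippet below is a hypothetical illustration, assuming made-up scenario names and helper functions (`run_scenario`, `train_phase` are not the paper’s API): training proceeds phase by phase through increasingly hard scenarios, and fine-tuning on an expert-identified failure case seeds its replay buffer from the saved experience of the earlier run rather than starting from scratch.

```python
import random

random.seed(0)

# Curriculum: phases of goalkeeping scenarios, easiest first.
CURRICULUM = [
    ["central ground shot", "slow lob"],          # phase 1
    ["near-post drive", "far-post curler"],       # phase 2
    ["one-on-one breakaway", "deflected shot"],   # phase 3
]

def run_scenario(name):
    """Stand-in for one training episode; returns dummy transitions."""
    return [(name, step) for step in range(3)]

def train_phase(scenarios, replay_buffer, episodes=5):
    # Collect experience from randomly sampled scenarios of this phase.
    for _ in range(episodes):
        replay_buffer.extend(run_scenario(random.choice(scenarios)))

# Initial training: advance through the curriculum phase by phase.
buffer = []
for phase in CURRICULUM:
    train_phase(phase, buffer)

saved_experience = list(buffer)  # persisted at the end of the experiment

# Expert feedback flags a failure case; RaE-style fine-tuning starts the
# new run's buffer from the saved experience instead of an empty one.
finetune_buffer = list(saved_experience)
train_phase(["low shot after a cut-back"], finetune_buffer)

print(len(saved_experience), len(finetune_buffer))
```

Seeding the fine-tuning buffer this way is what makes the expert-feedback loop cheap: each adjustment only has to generate data for the new failure scenario, not regenerate the agent’s entire training history.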
Impressive Results on the Pitch
The evaluation of this new goalkeeper AI in EA SPORTS FC 25 yielded compelling results. Quantitatively, the DRL agent outperformed the game’s built-in AI by a significant 10% in ball saving rate. Furthermore, ablation studies demonstrated that this new method trains agents 50% faster compared to standard DRL techniques, making it much more practical for game development cycles.
Qualitative feedback was equally positive. Domain experts noted that the DRL approach created more human-like gameplay. Professional goalkeepers observed that the AI was more proactive, anticipating shots and closing down space effectively, a behavior typical of real human players. Playtesters found the DRL goalkeeper to be more realistic and enjoyable to play against, highlighting its reliability and the rewarding feeling of scoring against it.
The paper also includes additional results in the MuJoCo suite, demonstrating the method’s generalizability beyond football simulations.
Impact and Future Outlook
This sample-efficient DRL method is intended to replace the hand-crafted goalkeeper AI in future iterations of the EA SPORTS FC series, a testament to its practical impact. While the approach marks a significant step forward, the authors acknowledge limitations, such as potential performance degradation with repeated fine-tuning and the need for smoother policies. Future work may explore leveraging diverse human player data for training and further refining the agent’s behavior.
In conclusion, this research presents a robust and practical DRL framework for developing human-like AI in video games. By focusing on sample efficiency and incorporating expert feedback, it addresses key challenges faced by the game industry, paving the way for more authentic and engaging AI experiences.