TLDR: This research introduces a novel framework that integrates domain knowledge into process discovery using Large Language Models (LLMs). It leverages LLMs to convert natural language process descriptions from domain experts into declarative rules. These rules then guide the IMr discovery algorithm, combining insights from event logs and expert knowledge to build more accurate process models. The framework supports iterative refinement, interactive expert feedback, and provides a tool for end-to-end discovery and visualization. Evaluations show promising accuracy in rule extraction and robust handling of ambiguous inputs across various LLMs and prompting strategies, validated by a real-world case study.
A new research paper introduces a framework that combines Large Language Models (LLMs) with traditional process discovery methods to build more accurate and reliable process models. The approach addresses a long-standing challenge in process mining: how to integrate human domain knowledge, often expressed in natural language, into the automated discovery of processes from event logs.
Process discovery typically relies on event data recorded by information systems to map out how processes actually run. However, these event logs can be incomplete or contain ‘noise,’ so models derived solely from this data might not fully represent the real-world process. Crucially, valuable insights from domain experts, process documentation, and other forms of domain knowledge are often overlooked. This new framework aims to bridge that gap by allowing natural language descriptions from experts to guide process discovery.
How the Framework Operates
The core of this framework involves using LLMs to translate natural language descriptions provided by domain experts into formal, declarative rules. These rules then act as constraints, guiding a process discovery algorithm called IMr. The IMr algorithm recursively builds process models by combining information from both the event log and these newly extracted rules. This helps in avoiding problematic process structures that might contradict established domain knowledge.
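To make the idea of declarative rules concrete, here is a minimal sketch of how such constraints can be represented and checked against a trace of activities. The rule templates (response, precedence, not-coexistence) are standard DECLARE-style constraints; the data model and activity names are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    template: str  # e.g. "response", "precedence", "not-coexistence"
    source: str
    target: str

def holds(rule: Rule, trace: list[str]) -> bool:
    """Check one DECLARE-style rule against a single trace (list of activity names)."""
    if rule.template == "response":
        # Every occurrence of source must eventually be followed by target.
        return all(rule.target in trace[i + 1:]
                   for i, act in enumerate(trace) if act == rule.source)
    if rule.template == "precedence":
        # Target may only occur after source has occurred at least once.
        seen_source = False
        for act in trace:
            if act == rule.source:
                seen_source = True
            elif act == rule.target and not seen_source:
                return False
        return True
    if rule.template == "not-coexistence":
        # Source and target must never both appear in the same trace.
        return not (rule.source in trace and rule.target in trace)
    raise ValueError(f"unknown template: {rule.template}")

rule = Rule("response", "receive claim", "send decision")
print(holds(rule, ["receive claim", "assess claim", "send decision"]))  # True
print(holds(rule, ["receive claim", "assess claim"]))                   # False
```

In the framework, rules like these constrain IMr's recursive model construction rather than being checked trace by trace, but the underlying semantics are the same.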
The framework is designed to be interactive, coordinating communication between the LLM, domain experts, and a set of backend services. Domain knowledge can be introduced at different stages: either before the initial model construction or after a preliminary model has been discovered and reviewed by experts. The LLM, configured through careful ‘prompt engineering,’ acts as an intelligent assistant. It engages in dialogues with experts to gather process descriptions and asks clarifying questions when information is ambiguous. Backend services handle tasks like generating activity lists from event logs, validating LLM outputs, and extracting rules, reducing the need for direct expert intervention in technical steps.
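The prompt-engineering step described above can be pictured as assembling a single instruction from the event log's activity list, few-shot examples, and the expert's description. The sketch below is a simplified illustration; the function name, prompt wording, and example rules are assumptions, not the paper's actual prompts.

```python
# Hypothetical few-shot examples pairing expert text with a declarative rule.
FEW_SHOT = [
    ("After a claim is received, it must eventually be assessed.",
     "response(receive claim, assess claim)"),
]

def build_prompt(activities: list[str], description: str) -> str:
    """Assemble a rule-extraction prompt from the event-log activity
    list (generated by a backend service) and the expert's free text."""
    examples = "\n".join(f"Text: {t}\nRule: {r}" for t, r in FEW_SHOT)
    return (
        "You translate process descriptions into declarative rules.\n"
        f"Use only these activity labels: {', '.join(activities)}.\n"
        "If the description is ambiguous, ask a clarifying question "
        "instead of guessing.\n\n"
        f"{examples}\n\nText: {description}\nRule:"
    )

prompt = build_prompt(
    ["receive claim", "assess claim", "send decision"],
    "A decision is only sent after the claim has been assessed.",
)
print(prompt)
```

Grounding the LLM in the exact activity labels from the log, and explicitly permitting clarification questions, reflects the framework's two key behaviors: output validation against the log and dialogue on ambiguous input.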
Key Contributions and Features
This research builds upon previous work by redesigning the framework for greater transparency and automation. It incorporates dedicated backend services for executing algorithms, which enhances reliability and promotes more active involvement from domain experts. A fully functional, user-friendly web-based tool has been developed, allowing users to run the entire pipeline by selecting an LLM model and providing an API key. This tool supports interactive exploration of results and enables iterative refinement of extracted rules and discovered models.
The evaluation of this framework was comprehensive, involving multiple LLMs and various prompting strategies. It assessed the models’ ability to extract declarative rules from natural language descriptions against human-labeled ground truth constraints. Metrics like recall, precision, error rate, and failure rate were used to compare different configurations. The study also examined how LLMs handle ambiguous inputs, focusing on whether they appropriately engage in clarification dialogues.
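As a rough illustration of how such metrics can be computed, the sketch below scores an extracted rule set against a ground-truth set using exact matching. This is a simplification for clarity; the paper's exact metric definitions (including error and failure rates) may differ.

```python
def rule_metrics(extracted: set[str], ground_truth: set[str]) -> dict[str, float]:
    """Recall and precision of LLM-extracted rules against a
    human-labeled reference set (exact string match, simplified)."""
    correct = extracted & ground_truth
    return {
        # Share of ground-truth rules that were recovered.
        "recall": len(correct) / len(ground_truth) if ground_truth else 0.0,
        # Share of extracted rules that are actually correct.
        "precision": len(correct) / len(extracted) if extracted else 0.0,
    }

gt = {"response(a,b)", "precedence(b,c)", "not-coexistence(a,d)"}
ex = {"response(a,b)", "precedence(b,c)", "response(c,d)"}
print(rule_metrics(ex, gt))  # recall = precision = 2/3
```

Under this framing, the granularity trade-off the study reports is intuitive: sentence-by-sentence extraction proposes more rules (raising recall), while paragraph-level extraction is more conservative (raising precision).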
Evaluation Insights
The study found that advanced LLMs like Google’s Gemini 2.5 Pro and OpenAI’s o3 consistently performed better in extracting rules, demonstrating a strong ability to identify and reproduce declarative constraints. Few-shot prompting, where the LLM is given examples, generally improved performance, especially for models less adept at complex reasoning. There was also a trade-off observed between input granularity: processing one sentence at a time (sentence-to-sentence) led to higher recall (more ground truth rules captured), while aggregating multiple sentences into a paragraph (paragraph-level) resulted in higher precision (fewer incorrect rules).
A significant finding was the varying ability of LLMs to handle ambiguity. OpenAI o3 showed the most robust performance, correctly identifying unclear inputs and either asking relevant follow-up questions or refraining from generating rules when information was insufficient. This interactive clarification capability is crucial for ensuring the quality of extracted rules.
Real-World Application: A Case Study
The practical applicability of this framework was demonstrated through a case study with UWV, the Dutch employee insurance agency. Collaborating with domain experts, the researchers applied the framework to a real-life claim-handling process. The LLM successfully extracted rules from the experts’ natural language descriptions and asked clarifying questions to resolve ambiguities. While initial models discovered without rules showed discrepancies, incorporating the LLM-extracted rules led to process models that more accurately reflected the domain experts’ knowledge, even though some limitations of the underlying IMr algorithm remained.
This research marks a significant step towards a more ‘human-in-the-loop’ paradigm for process discovery. By effectively integrating domain expertise with automated analysis of event data, the framework promises to produce process models that are not only behaviorally sound but also more interpretable and aligned with real-world operations. You can learn more about this work by reading the full paper available at arXiv:2510.07161.