TLDR: A case study examined how effectively Large Language Models (LLMs) generate proofs for formal software verification with the Rocq Prover. It found that external dependencies and in-file context significantly boost LLM performance, that LLMs excel at smaller proofs yet can also complete some larger ones, and that effectiveness varies across verification projects. LLMs can produce concise, even clever proofs, but also exhibit peculiar failures. The research highlights LLMs’ potential to enhance proof automation, with proof assistants providing a crucial correctness check.
Formal verification, a method to mathematically prove the correctness of software, has long been a powerful but challenging endeavor. While it ensures a high degree of reliability, the process of writing these proofs using tools called proof assistants can be incredibly time-consuming and complex. Researchers Barış Bayazıt, Yao Li, and Xujie Si from the University of Toronto and Portland State University have explored how Large Language Models (LLMs) might simplify this task in their recent paper, A Case Study on the Effectiveness of LLMs in Verification with Proof Assistants.
The core idea behind proof assistants like Rocq Prover (formerly Coq) is to allow developers to write mathematical proofs that a program behaves exactly as intended. This involves stating a theorem and then interactively guiding the assistant through a series of logical steps, or ‘tactics,’ until the theorem is proven. This meticulous process, however, often requires many lines of proof code for each line of program code, making it a significant overhead in software development.
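To make this concrete, here is a minimal tactic proof in Rocq; the theorem and the tactic choices are our own illustration, not an example drawn from the study’s benchmarks:

```coq
(* A minimal illustrative proof, not taken from the paper's benchmarks:
   appending the empty list on the right leaves a list unchanged. *)
Theorem app_nil_right : forall (A : Type) (l : list A), l ++ nil = l.
Proof.
  intros A l.                        (* introduce the type and the list *)
  induction l as [| x xs IH].        (* proceed by induction on the list *)
  - reflexivity.                     (* nil case: both sides compute to nil *)
  - simpl. rewrite IH. reflexivity.  (* cons case: reuse the hypothesis IH *)
Qed.
```

Each step between Proof. and Qed. applies a tactic, and proof size in the study is measured by counting these tactics.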
LLMs, known for their ability to generate human-like text and code, present a promising avenue for automating parts of this proof-writing process. The study investigated several key questions: how external information and context within the same file affect LLM performance, how LLMs handle proofs of different lengths, whether their effectiveness varies across different verification projects, and the overall quality of the proofs they generate.
The Study’s Approach
The researchers conducted a case study using two established Rocq projects: ‘hs-to-coq,’ which translates Haskell programs into Rocq, and ‘Verdi,’ a framework for verifying distributed systems. They tested various LLMs, including GPT-4o-mini, GPT-4o, OpenAI o4-mini, DeepSeek Prover V2, and DeepSeek R1-0528, under different conditions. These conditions varied the amount of information provided to the LLMs, such as external dependencies (definitions and theorems from other files) and in-file context (lines of code preceding the theorem in the same file).
Key Findings on LLM Effectiveness
The study yielded several important insights into how LLMs perform in this specialized domain:
- The Power of Context: A significant finding was that providing LLMs with external dependencies and in-file context dramatically improved their success rates in generating proofs. When both types of context were removed, success rates dropped considerably across most models. This highlights the importance of relevant information for LLMs to reason effectively.
- Proof Size Matters: LLMs generally performed much better on smaller proofs, those requiring fewer tactics. However, the study also showed that LLMs could still successfully generate some larger proofs, even those with more than 20 tactics, though with a lower probability. Interestingly, some generated proofs were identical to the original human-written ones, especially for smaller proofs, suggesting a degree of ‘memorization’ or common logical patterns.
- Project-Specific Performance: The effectiveness of LLMs varied between the ‘hs-to-coq’ and ‘Verdi’ projects. For instance, in ‘hs-to-coq,’ the benefits of in-file context diminished for longer proofs, and LLMs were less likely to generate identical proofs for larger tactic counts. In contrast, ‘Verdi’ saw a remarkably strong positive effect from in-file context, especially for more complex proofs. This suggests that LLM performance can be highly dependent on the specific characteristics and structure of the verification project.
- Quality of Generated Proofs: LLMs demonstrated the ability to generate proofs that were not only correct but sometimes more concise and ‘smarter’ than the original human-written ones. For example, an LLM generated a four-tactic proof for a theorem in Verdi that originally required 29 tactics, by recognizing and applying a previously proven contrapositive (a pattern sketched after this list). Another instance showed an LLM applying a classical technique (case analysis on an inductively defined proposition) to a new definition, resulting in a more elegant proof. However, LLMs also exhibited peculiar failures, such as generating ‘stuttering’ proofs that repeatedly applied the same tactic indefinitely.
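The contrapositive shortcut from the last item follows the general pattern below; this is a hedged sketch with hypothetical lemma names and statements, not Verdi’s actual definitions:

```coq
(* Hypothetical names and statements, sketching the contrapositive
   pattern described above; not Verdi's actual definitions. *)
Lemma implies_contrapositive :
  forall P Q : Prop, (P -> Q) -> ~ Q -> ~ P.
Proof.
  intros P Q H HnQ HP.  (* ~ P unfolds to P -> False, so HP : P *)
  apply HnQ.            (* it suffices to derive Q *)
  apply H.              (* Q follows from P via H *)
  exact HP.
Qed.
```

Once such a lemma is in scope, a later proof can dispatch the same reasoning with a single apply instead of redoing it, which is how a 29-tactic proof can collapse to a handful of tactics.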
Future Implications
The research concludes that LLMs hold significant promise for proof automation in formal verification. A crucial advantage is that proof assistants can rigorously check LLM-generated proofs, ensuring their correctness even if the LLM itself is unpredictable. This safety net makes LLMs particularly suitable for verification tasks, where the correctness of the output is paramount. By improving proof automation, LLMs could make formal methods more accessible and widely used in software development, ultimately leading to more reliable software.
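As a small illustration of this safety net (our example, not from the paper), the checker rejects a superficially plausible shortcut and only accepts a proof that genuinely holds:

```coq
(* Our illustration, not from the paper: the checker rejects unsound
   shortcuts and only accepts proofs that genuinely hold. *)
Theorem add_zero_right : forall n : nat, n + 0 = n.
Proof.
  intros n.
  (* Trying `reflexivity` here would be rejected: n + 0 does not
     compute to n while n is a variable, so the checker catches the
     shortcut. A valid induction, by contrast, is accepted at Qed. *)
  induction n as [| n' IH].
  - reflexivity.                     (* 0 + 0 computes to 0 *)
  - simpl. rewrite IH. reflexivity.  (* S case: use the hypothesis *)
Qed.
```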