TL;DR: This research introduces a self-improving framework for optimizing the prompts that large language models (LLMs) use in financial question answering (QA). It addresses the challenges of complex financial documents and data confidentiality by generating synthetic financial data, verifying its quality, and feeding the validated data into a closed loop that iteratively refines LLM prompts. Evaluated on the DocMath-Eval benchmark, the method consistently outperforms standard prompting techniques, improving LLM accuracy and robustness in financial reasoning without any external labels.
Large language models (LLMs) are becoming increasingly vital for navigating the complexities of financial documents, from earnings reports to balance sheets. These documents often contain extensive tables and multi-page narratives, demanding precise numerical reasoning and deep textual understanding. However, the effectiveness of LLMs in these critical financial tasks heavily relies on the quality of the prompts they receive.
A new research paper, "Synthetic Data-Driven Prompt Tuning for Financial QA over Tables and Documents," introduces an innovative approach to address this challenge. Authored by Yaoning Yu, Kai-Min Chang, Ye Yu, Kai Wei, Haojing Luo, and Haohan Wang, the paper proposes a self-improving framework that optimizes LLM prompts using synthetically generated financial data.
The Challenge of Financial Question Answering
Financial question answering (QA) is a high-stakes domain where even minor errors can have significant consequences. A typical task requires extracting specific figures from diverse, often lengthy financial reports and performing arithmetic over them. A major hurdle is the scarcity of large public datasets for training and fine-tuning sophisticated models, primarily due to the confidential nature of much financial data. Existing prompt-optimization methods either rely on static datasets, which limits their ability to adapt to new question types or document structures, or require expensive, manually labeled data.
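To make the task concrete, here is a toy item of the kind such benchmarks contain. The figures below are invented for illustration and are not drawn from DocMath-Eval or any filing:

```text
Table (excerpt)            FY2022    FY2023
Net revenue ($M)            1,240     1,418
Operating expenses ($M)       890       962

Q: By how much did operating margin change from FY2022 to FY2023?
A: FY2023: (1,418 - 962) / 1,418 ≈ 32.2%
   FY2022: (1,240 - 890) / 1,240 ≈ 28.2%
   Change: roughly +3.9 percentage points
```

Answering correctly requires locating the right cells, combining them with the right formula, and carrying out the arithmetic without error, which is exactly where prompt quality matters.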
A Self-Improving Prompt Framework
The researchers introduce a novel closed-loop framework in which data augmentation drives prompt optimization. The system is designed to continuously improve LLM prompts for financial reasoning tasks without any external labels, and it integrates three core components:
- **Fin-Generator:** This component creates synthetic financial tables and document excerpts. Crucially, it’s designed to produce examples of increasing difficulty, specifically targeting and exposing weaknesses in the current prompt.
- **Fin-Verifiers:** Before any synthetic data is used, a set of independent verifiers rigorously checks its numerical consistency, structural validity, and robustness. Only fully validated and robust data proceeds to the next stage, ensuring the quality of the learning material.
- **Fin-Prompt Optimizer:** This is the brain of the system. It evaluates the LLM’s responses on the synthetic examples, identifies errors, and then iteratively refines the prompt. The process involves analyzing failures, recommending targeted improvements, and revising the prompt. A key aspect is its ability to confirm that new revisions don’t cause regressions on previously solved cases, ensuring steady improvement.
By iterating through these steps in a continuous feedback cycle, the framework enables prompts to steadily improve their accuracy and robustness on financial reasoning tasks.
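To make the loop concrete, here is a minimal sketch of how the three components could fit together. Everything in it, from the function names to the toy `llm_answer` stand-in and the acceptance rule, is an illustrative assumption rather than the paper’s actual implementation; it simply instantiates the loop described above in Python:

```python
import random

random.seed(0)


def generate_example(difficulty: int) -> dict:
    """Fin-Generator stand-in: emit one synthetic table-QA item.

    A tiny revenue table whose labeled answer is checkable by arithmetic;
    the real generator escalates difficulty to probe prompt weaknesses.
    """
    values = [random.randint(100, 999) for _ in range(difficulty + 1)]
    table = {f"Q{i + 1} revenue": v for i, v in enumerate(values)}
    return {"table": table, "question": "total revenue?", "answer": sum(values)}


def verify(ex: dict) -> bool:
    """Fin-Verifiers stand-in: here, a numerical-consistency check only."""
    return ex["answer"] == sum(ex["table"].values())


def llm_answer(prompt: str, ex: dict) -> int:
    """Toy stand-in for an LLM call: answers correctly only when the
    prompt demands explicit summation. Swap in a real API client here."""
    if "sum every row" in prompt:
        return sum(ex["table"].values())
    return max(ex["table"].values())  # a plausible wrong shortcut


def refine_prompt(prompt: str, failure: dict) -> str:
    """Fin-Prompt Optimizer stand-in: append a targeted instruction
    derived from the observed failure."""
    return prompt + " When asked for a total, sum every row of the table."


prompt = "Answer the financial question using the table."
solved = []  # regression set: examples the current prompt already handles

for step in range(5):
    ex = generate_example(difficulty=step + 1)
    if not verify(ex):  # discard synthetic data that fails verification
        continue
    if llm_answer(prompt, ex) == ex["answer"]:
        solved.append(ex)
        continue
    candidate = refine_prompt(prompt, ex)
    # Accept the revision only if it fixes the new failure and also does
    # not regress on any previously solved case.
    if llm_answer(candidate, ex) == ex["answer"] and all(
        llm_answer(candidate, s) == s["answer"] for s in solved
    ):
        prompt = candidate
        solved.append(ex)

print(prompt)
```

The regression check at the end of the loop is the part worth noticing: a revision is only accepted if it helps on the new failure without breaking anything already solved, which is what keeps the prompt improving monotonically.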
Key Contributions and Experimental Validation
The paper highlights several significant contributions, including the introduction of this self-improving framework, the design of a synthetic data generator capable of producing complex and diverse financial queries, and the validation of their approach on standard benchmarks.
The method was evaluated on the DocMath-Eval benchmark, which assesses numerical reasoning in long financial documents. The results demonstrate that the synthetic data-driven approach consistently outperforms existing prompting strategies like Chain of Thought (CoT) and Program of Thought (PoT). For instance, using the “Synthesized on Short” prompt, GPT-4o achieved an average accuracy of 68.38%, surpassing the best baseline by 3.98%. When extended to longer contexts with the “Synthesized on Long” prompt, GPT-4o’s average accuracy reached 69.54%, outperforming the best baseline by 5.14%.
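Reading the quoted margins as percentage-point differences, the reported figures imply the following comparison (the baseline column is derived by subtraction, not quoted directly from the paper):

| Prompt setting | GPT-4o average accuracy | Best baseline (implied) |
|---|---|---|
| Synthesized on Short | 68.38% | 64.40% |
| Synthesized on Long | 69.54% | 64.40% |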
These improvements underscore the value of incorporating context-specific synthetic data generation into prompt learning, significantly enhancing LLMs’ logical and numerical reasoning capabilities for complex financial QA tasks. The refined prompts developed through this process are more operationally specific, guiding the model to precisely address relevant aspects of financial data, unlike the more general instructions used in baseline methods.
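As a purely illustrative contrast (the paper’s actual prompts are not reproduced here), the difference between a generic instruction and an operationally specific one might look like this:

```text
Baseline (generic CoT-style instruction):
  Think step by step and answer the question about the document.

Refined (operationally specific, the style the optimizer converges to):
  Identify every table row relevant to the question, quote each figure
  with its units and fiscal period, show the arithmetic explicitly, and
  put the final number alone on the last line.
```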
Conclusion
This research offers a powerful new paradigm for prompt optimization in financial question answering. By transforming data augmentation into a real-time feedback mechanism, the framework allows LLMs to autonomously identify and correct their own prompt weaknesses. This adaptive and scalable solution promises robust generalization and dependable performance in the financial industry, paving the way for more accurate and reliable AI-driven financial analysis.


