TLDR: This research paper explores how to reliably evaluate speech-to-text translation (ST) systems with ‘source-aware’ neural metrics, which normally require a textual source. Since the source in ST is audio, the authors investigate two automatically generated text proxies: Automatic Speech Recognition (ASR) transcripts and back-translations of the reference texts. They also introduce a novel re-segmentation algorithm, XLR-Segmenter, to align these synthetic sources with the reference translations. The study concludes that both proxies are effective, with ASR transcripts preferable when the Word Error Rate (WER) is below 20% and back-translations preferable otherwise. XLR-Segmenter effectively handles alignment issues, making source-aware metrics viable for ST evaluation.
Evaluating the quality of speech-to-text (ST) translation systems has always been a complex challenge. Traditionally, these evaluations compared the translated output with human-provided reference translations, much as in standard machine translation (MT). However, this approach overlooks crucial information from the original source input, which, in ST, is audio rather than text.
Recent advancements in machine translation have introduced ‘source-aware’ neural metrics, such as COMET and MetricX, which consider the source text alongside the translation hypothesis and reference. These metrics have shown a stronger correlation with human judgments, indicating a more accurate assessment of translation quality. The problem arises when trying to apply these powerful metrics to speech translation, as the source is audio, not text, and reliable transcripts or precise alignments between the audio and reference translations are often unavailable.
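To make the input format of these metrics concrete, here is a minimal sketch of scoring with the open-source unbabel-comet package. This is an illustration, not the paper's evaluation code; the checkpoint name is one public COMET model, not necessarily the one used in the study. The `src` field is exactly the slot a textual proxy would fill in for ST:

```python
# Illustrative only: scoring a translation with a source-aware metric (COMET).
# Requires the unbabel-comet package; the checkpoint is an assumed choice.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [{
    "src": "Das ist ein Test.",  # in ST, a synthetic proxy for the audio
    "mt": "This is a test.",     # system hypothesis
    "ref": "This is a test.",    # human reference translation
}]
print(model.predict(data, batch_size=8, gpus=0).scores)
```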
Bridging the Gap: Synthetic Source Generation
To overcome this hurdle, researchers at Fondazione Bruno Kessler have conducted a systematic study into creating textual ‘proxies’ for the input audio. They explored two main strategies:
- Automatic Speech Recognition (ASR) Transcripts: Using ASR systems to convert the source audio into text.
- Back-Translations (BT): Translating the human reference translation back into the source language using an MT system.
Both methods have their advantages and drawbacks. ASR transcripts tend to be closer to the original spoken content in terms of lexicon and syntax, but their quality can vary significantly depending on the ASR system’s performance. Back-translations, while computationally cheaper and offering broader language coverage, might introduce artifacts influenced by the reference translation’s lexical and syntactic choices.
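To make the two strategies concrete, the sketch below generates both proxies for a hypothetical German-to-English test set. The specific models (Whisper for ASR, an OPUS-MT model for back-translation) are illustrative assumptions, not necessarily those used in the study:

```python
# Illustrative sketch: producing the two textual proxies for the source audio.
# Model choices and the file name are assumptions for the example.
import whisper                      # openai-whisper package
from transformers import pipeline

# Proxy 1: ASR transcript of the source (German) audio
asr_model = whisper.load_model("large-v3")
asr_source = asr_model.transcribe("talk_0001.wav")["text"]

# Proxy 2: back-translation of the English reference into the source language
bt_model = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
reference_en = "This is a test."
bt_source = bt_model(reference_en)[0]["translation_text"]
```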
The Alignment Challenge and a Novel Solution
A significant challenge with synthetic sources, particularly ASR transcripts, is the potential mismatch in segmentation between the automatically generated text and the human reference translations. To address this, the researchers introduced a novel two-step cross-lingual re-segmentation algorithm called XLR-Segmenter. This algorithm first uses a cross-lingual segmenter (XL-Segmenter) and then refines the segment boundaries using word alignments built on word embeddings, which capture semantic relationships more robustly than traditional string matching.
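The exact XLR-Segmenter procedure is detailed in the paper. As a heavily simplified sketch of the underlying idea, one could place segment boundaries in an unsegmented ASR transcript by maximizing cross-lingual embedding similarity with each reference segment; LaBSE via sentence-transformers is an assumed stand-in for the paper's alignment models, and the greedy search is a toy substitute for the actual two-step algorithm:

```python
# Toy re-segmentation sketch, NOT the actual XLR-Segmenter: greedily split a
# synthetic source into as many chunks as there are reference segments, placing
# each boundary where cross-lingual embedding similarity peaks. Quadratic in
# the number of words, so suitable for illustration only.
from sentence_transformers import SentenceTransformer
import numpy as np

encoder = SentenceTransformer("sentence-transformers/LaBSE")

def resegment(source_words, reference_segments):
    ref_emb = encoder.encode(reference_segments, normalize_embeddings=True)
    segments, start = [], 0
    for i, ref_vec in enumerate(ref_emb):
        segs_left = len(reference_segments) - i
        if segs_left == 1:  # last segment takes the remaining words
            segments.append(" ".join(source_words[start:]))
            break
        # try every admissible end point, keep the most similar candidate
        best_end, best_sim = start + 1, -1.0
        max_end = len(source_words) - (segs_left - 1)
        for end in range(start + 1, max_end + 1):
            cand_vec = encoder.encode(" ".join(source_words[start:end]),
                                      normalize_embeddings=True)
            sim = float(np.dot(cand_vec, ref_vec))
            if sim > best_sim:
                best_sim, best_end = sim, end
        segments.append(" ".join(source_words[start:best_end]))
        start = best_end
    return segments
```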
Key Findings from Extensive Experiments
The study involved extensive experiments on two major ST benchmarks, MuST-C and Europarl-ST, covering 79 language pairs and six diverse ST systems. The findings provide clear guidance for evaluating ST systems:
- Effectiveness of Synthetic Sources: Both ASR transcripts and back-translations proved to be effective textual proxies for input audio, allowing the successful application of source-aware MT metrics in ST evaluation.
- ASR vs. Back-Translation: ASR transcripts generally offer superior reliability, but this holds true only when the ASR system’s Word Error Rate (WER) is below 20%. If the WER exceeds this threshold, back-translations become the more effective and computationally cheaper alternative.
- XLR-Segmenter’s Success: The proposed XLR-Segmenter algorithm demonstrated robust performance in re-segmenting and aligning synthetic sources, even under realistic conditions where audio is automatically segmented. It effectively restores the semantic correspondence between automatically generated ASR segments and reference translation segments, with only negligible degradation compared to using manual transcripts.
- Metric Sensitivity: Interestingly, COMET, while effective, proved less sensitive to the content of the source text than MetricX, which relies more strongly on the source.
Practical Implications for ST Evaluation
This research offers practical recommendations for the ST community. When evaluating ST systems, practitioners can confidently use synthetic source texts with source-aware metrics. The choice between ASR and back-translation should be guided by the quality of the available ASR system: if a high-quality ASR system (WER below 20%) is available, its transcripts are the preferred choice. Otherwise, back-translation offers a reliable and more cost-effective solution, especially since high-quality MT models are often more readily available and consistent across languages than ASR systems.
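Following this recommendation, the proxy choice can be automated with a simple WER check on a held-out set that has gold transcripts. The jiwer package is one common WER implementation; using it here is an illustrative choice, not something prescribed by the paper:

```python
# Minimal sketch of the paper's decision rule: prefer ASR transcripts as the
# synthetic source when corpus-level WER is below 20%, back-translations otherwise.
from jiwer import wer

def choose_proxy(gold_transcripts, asr_transcripts, threshold=0.20):
    """Return which synthetic-source strategy to use for evaluation."""
    error_rate = wer(gold_transcripts, asr_transcripts)
    return "asr" if error_rate < threshold else "back-translation"

print(choose_proxy(["this is a test"], ["this is test"]))  # -> "back-translation"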
This systematic investigation paves the way for more accurate and principled evaluation methodologies for speech translation, ensuring that advancements in ST systems can be reliably measured and compared. For more technical details, you can refer to the full research paper: How to Evaluate Speech Translation with Source-Aware Neural MT Metrics.