TLDR: SDBench is an open-source benchmark suite for speaker diarization, integrating 13 diverse datasets and providing tools for consistent performance analysis. It addresses the challenge of high error rate variance and inconsistent system comparisons. Using SDBench, the authors developed SpeakerKit, a system 9.6 times faster than Pyannote v3 with comparable accuracy. The paper also benchmarks six state-of-the-art systems, highlighting accuracy-speed trade-offs and demonstrating SDBench’s utility for optimizing speaker diarization systems.
Speaker diarization, the technology that identifies “who spoke when” in audio, is crucial for many applications, from meeting transcriptions to voice assistants. However, comparing different speaker diarization systems has been challenging due to varying error rates across datasets and a lack of consistent evaluation methods. A new research paper introduces SDBench, an open-source benchmark suite designed to address these issues and provide a standardized way to evaluate speaker diarization performance.
SDBench, or Speaker Diarization Benchmark, integrates 13 diverse datasets covering multiple languages, audio domains, and speaker distributions. It includes built-in tools for consistent, fine-grained analysis of how well speaker diarization systems perform, whether they target on-device or server-side use. The suite makes evaluations reproducible and lets new systems be integrated over time, ensuring fair and accurate comparisons.
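As a rough illustration of what such a pluggable setup can look like, here is a minimal sketch in Python. The `Turn` dataclass, `Diarizer` protocol, and `evaluate_system` helper are hypothetical names chosen for this example; they are not SDBench's actual API.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Turn:
    """A single labeled speech turn: who spoke from start to end (seconds)."""
    start: float
    end: float
    speaker: str


class Diarizer(Protocol):
    """Hypothetical interface a diarization system implements to be benchmarked."""
    def diarize(self, audio_path: str) -> list[Turn]:
        ...


def evaluate_system(
    system: Diarizer, corpus: list[tuple[str, list[Turn]]]
) -> list[list[Turn]]:
    """Run one system over every (audio file, reference annotation) pair in a corpus.

    Returning the raw hypotheses keeps scoring (e.g. DER) decoupled from inference,
    so new systems can be swapped in without touching the evaluation code.
    """
    return [system.diarize(audio_path) for audio_path, _reference in corpus]
```

Keeping inference and scoring behind a narrow interface like this is what makes it cheap to add both local (on-device) and API-based (server-side) systems to the same benchmark.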
To demonstrate SDBench’s effectiveness, the researchers developed SpeakerKit, a system focused on inference efficiency and built on top of Pyannote v3, a popular open-source speaker diarization project. Through rapid ablation studies guided by SDBench, SpeakerKit was optimized to be 9.6 times faster than Pyannote v3 while maintaining comparable accuracy. This highlights SDBench’s utility in guiding targeted improvements to speaker diarization systems.
The paper also benchmarks six state-of-the-art speaker diarization systems: Deepgram, AWS Transcribe, and the Pyannote AI API, alongside Picovoice Falcon, Pyannote v3.1, and the authors’ own SpeakerKit. This evaluation reveals clear trade-offs between accuracy and processing speed. The Pyannote AI API achieved the lowest Diarization Error Rate (DER), indicating the highest accuracy. SpeakerKit, running locally, achieved the highest Speed Factor, processing audio far faster while maintaining accuracy comparable to Pyannote v3. Deepgram was the fastest of the server-side systems, though with a higher error rate than the Pyannote AI API. AWS Transcribe had the lowest Speed Factor, possibly because diarization can only be requested as part of a transcription request.
The datasets in SDBench are varied, ranging from short-form conversations such as CALLHOME and DIHARD-III to long-form audio such as ICSI and American-Life-Podcast. They differ in characteristics such as total audio length, overlap ratio (the fraction of speech time in which speakers talk over each other), speaker congestion (how many distinct speakers are active within a short time window), and median speaker count. This diversity ensures that systems are tested across a wide range of real-world scenarios.
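Overlap ratio, for instance, is straightforward to compute from reference annotations. The sketch below shows one plausible definition (overlapped speech time divided by total speech time) using plain (start, end) turn times; the benchmark’s own bookkeeping may differ in detail.

```python
def overlap_ratio(turns: list[tuple[float, float]]) -> float:
    """Fraction of speech time during which two or more speakers are active.

    `turns` is a list of (start, end) times in seconds, one per speech turn,
    regardless of speaker label. This is one plausible definition, not
    necessarily the exact one used by any particular benchmark.
    """
    events: list[tuple[float, int]] = []
    for start, end in turns:
        events.append((start, +1))   # a speaker starts talking
        events.append((end, -1))     # a speaker stops talking
    events.sort()

    speech = overlap = 0.0
    active, prev = 0, 0.0
    for time, delta in events:
        span = time - prev
        if active >= 1:
            speech += span           # at least one speaker active
        if active >= 2:
            overlap += span          # overlapped speech
        active += delta
        prev = time
    return overlap / speech if speech > 0 else 0.0


# Two speakers talking over each other for 2 of 10 seconds of speech.
print(overlap_ratio([(0.0, 6.0), (4.0, 10.0)]))  # 0.2
```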
The evaluation metrics are Diarization Error Rate (DER) and Speed Factor. DER measures overall error, combining missed speech, false alarms, and speaker confusion relative to the total reference speech. Speed Factor measures how many seconds of audio a system processes per second of wall-clock time, giving a clear measure of efficiency.
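As a concrete example, the widely used pyannote.metrics package provides a DER implementation. The toy reference and hypothesis below are made up for illustration, and the Speed Factor here is simply audio duration divided by wall-clock processing time.

```python
import time

from pyannote.core import Annotation, Segment
from pyannote.metrics.diarization import DiarizationErrorRate

# Toy reference: who actually spoke when (seconds).
reference = Annotation()
reference[Segment(0.0, 10.0)] = "speaker_A"
reference[Segment(10.0, 20.0)] = "speaker_B"

# Toy hypothesis: the last 2 seconds of speaker_B are attributed to the wrong speaker.
hypothesis = Annotation()
hypothesis[Segment(0.0, 10.0)] = "spk_1"
hypothesis[Segment(10.0, 18.0)] = "spk_2"
hypothesis[Segment(18.0, 20.0)] = "spk_1"

# DER = (missed speech + false alarms + speaker confusion) / total reference speech.
der = DiarizationErrorRate()(reference, hypothesis)
print(f"DER: {der:.1%}")  # 2 s of confusion over 20 s of speech -> 10.0%

# Speed Factor = seconds of audio processed per second of wall-clock time.
audio_duration = 20.0  # seconds of audio in the file
start = time.perf_counter()
# ... run diarization on the file here ...
wall_clock = time.perf_counter() - start
speed_factor = audio_duration / max(wall_clock, 1e-9)
```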
Ablation studies, which involve systematically changing parts of a system to see their impact, were crucial in SpeakerKit’s development. For example, optimizing the “sliding window strategy” (how audio is processed in segments) showed that increasing the stride from 1 to 4 seconds could lead to significant speedups (up to 38.3x) with only minimal impact on accuracy, especially for scenarios with fewer speakers. Similarly, a “per-chunk” speaker embedding strategy improved efficiency by 1.2x without compromising accuracy.
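To see why a larger stride helps, the sketch below generates sliding-window chunk boundaries for a given window length and stride. The 10-second window and the file length are illustrative assumptions, not the values used by SpeakerKit; the point is simply that fewer windows means fewer segmentation and embedding passes.

```python
def sliding_windows(total_duration: float, window: float, stride: float) -> list[tuple[float, float]]:
    """Chunk an audio stream into overlapping windows.

    Each window is `window` seconds long and consecutive windows start
    `stride` seconds apart, so a larger stride produces fewer windows.
    """
    chunks = []
    start = 0.0
    while start < total_duration:
        chunks.append((start, min(start + window, total_duration)))
        start += stride
    return chunks


# Illustrative numbers: a 1-hour file with a 10-second window.
hour = 3600.0
print(len(sliding_windows(hour, window=10.0, stride=1.0)))  # 3600 chunks
print(len(sliding_windows(hour, window=10.0, stride=4.0)))  # 900 chunks, i.e. 4x fewer windows to process
```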
In conclusion, SDBench provides a robust, open-source framework for evaluating speaker diarization systems, enabling fine-grained error analysis and consistent comparisons across diverse domains. Its usefulness was demonstrated through the development of SpeakerKit, which significantly improves inference efficiency while maintaining high accuracy. The benchmark suite should help researchers and practitioners continue to improve speaker diarization technologies. For more details, refer to the full research paper: SDBench: A Comprehensive Benchmark Suite for Speaker Diarization.


