TL;DR: A study explored public speaking experts’ perspectives on AI-assisted training tools. It found that while AI offers benefits like repeated practice and objective feedback, current commercial tools often lack transparency, contextualization, and support for anxiety management. Experts emphasized the need for personalized, understandable, and actionable feedback that fosters authentic speaking styles and connects low-level behaviors to higher-level communication goals, suggesting a hybrid model combining AI with human coaching.
Public speaking is a skill many find daunting, yet it’s crucial in both personal and professional life. Traditionally, mastering this art has relied heavily on expert coaches who provide personalized feedback and nuanced guidance. However, with the rapid advancements in artificial intelligence, a new wave of automated public speaking feedback tools has emerged. While these AI-powered systems promise immediate feedback, there’s been a significant gap in understanding how seasoned public speaking experts perceive their effectiveness and design.
A recent study, titled “Probing Experts’ Perspectives on AI-Assisted Public Speaking Training,” delves into this very question. Conducted by Nesrine Fourati, Alisa Barkar, Marion Dragée, Liv Danthon-Lefebvre, and Mathieu Chollet, the research aimed to gather expert opinions on commercial AI-based public speaking training tools and propose guidelines for their improvement.
Understanding the Experts’ Viewpoint
The researchers engaged 16 public speaking experts through semi-structured interviews and two focus groups. These sessions allowed coaches to discuss their experiences with traditional training, their views on current commercial AI tools, and how these tools might integrate into existing coaching practices. They also offered valuable suggestions for enhancing these systems.
The initial interviews revealed several key insights into the world of public speaking coaching. Coaches noted that most trainees come with specific goals and often struggle with significant public speaking anxiety. A primary focus for coaches is building self-confidence, which naturally leads to improved performance. They identified three major dimensions of speech quality: content (word choice, structure), form (non-verbal cues like gestures, facial expressions, vocal tone), and emotions (the intended impact on the audience). Additionally, they highlighted the importance of preparation, stress management, and effective delivery as core pillars of coaching.
AI Tools: Opportunities and Challenges
When comparing existing commercial AI tools like PolymnIA and VocaCoach to their expert-informed understanding, the researchers found a notable disconnect. Many current AI systems target general users and lack customization for specific goals or contexts. Their feedback mechanisms are often opaque, without clear explanations of how higher-order criteria (like ‘confidence’ or ‘clarity’) are calculated. Crucially, none of the reviewed systems incorporated anxiety reduction techniques, which coaches identified as a foundational step in training.
Despite these limitations, experts acknowledged the significant value AI tools bring. They found these systems ideal for enabling repeated, independent practice in a low-stakes, neutral environment. This ‘judgment-free’ aspect can help disinhibit trainees and free up coaches to focus on more nuanced, higher-level pedagogical concerns, such as transmitting emotions or uncovering authentic speaking styles.
Designing Better AI Feedback
A central theme from the focus groups was the complexity of designing effective AI feedback. Experts emphasized that feedback should be:
- Carefully Selected and Understandable: Overly detailed reports, like PolymnIA’s, can overwhelm users. Coaches prefer a few key, actionable comments.
- Contextualized: Feedback should be relevant to the speech type, message, audience, and speaker’s intentions. Systems should allow users to define their goals.
- Neutral and Encouraging: While some tools lean too positive, potentially giving beginners a false sense of accomplishment, feedback should generally be neutral, offering constructive criticism without demotivating the learner.
- Focused on Higher-Level Goals: Simply analyzing low-level behaviors (like pause length) isn’t enough. Feedback needs to connect these behaviors to higher-level communicative goals like energy, enthusiasm, or commitment.
- Supportive of Authentic Style: A major concern was that over-reliance on objective behavioral criteria could lead to a ‘robotic’ or inauthentic speaking style. Experts stressed the importance of fostering a speaker’s unique voice rather than forcing conformity.
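To make these design principles concrete, here is a minimal illustrative sketch (not from the paper; all names, fields, and example values are invented) of how a feedback selector might honor two of them at once: filtering detected behaviors against the goals the speaker declared (contextualization) and surfacing only a few of the most severe issues (carefully selected, actionable comments):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A low-level behavior detected by an analysis pipeline."""
    behavior: str        # e.g. "long pauses", "monotone pitch"
    severity: float      # 0.0 (minor) .. 1.0 (severe)
    relevant_goals: set  # higher-level goals this behavior affects

def select_feedback(observations, user_goals, max_comments=3):
    """Keep only the few most severe observations that relate to the
    goals the speaker declared, so the report stays actionable."""
    relevant = [o for o in observations if o.relevant_goals & user_goals]
    relevant.sort(key=lambda o: o.severity, reverse=True)
    return relevant[:max_comments]

# Usage: a speaker declares the goals that matter for this talk.
obs = [
    Observation("long pauses", 0.4, {"clarity"}),
    Observation("monotone pitch", 0.9, {"energy", "enthusiasm"}),
    Observation("filler words", 0.7, {"clarity", "confidence"}),
]
picked = select_feedback(obs, user_goals={"energy", "clarity"})
print([o.behavior for o in picked])
# → ['monotone pitch', 'filler words', 'long pauses']
```

Capping the report at a handful of goal-relevant comments mirrors the coaches’ preference for a few key, actionable remarks over exhaustive behavioral dumps.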
The study also highlighted the need for clear instructions and contextual elements within AI training activities. Users should be prompted to define their speech goals, understand the physical setup (e.g., standing vs. sitting), and engage in activities that separate knowledge acquisition from full speech rehearsals.
The Path Forward
The research concludes by offering several design principles for future AI-based public speaking training systems. These include personalizing feedback based on the speaker’s style, profile, and training stage; tailoring analysis to the specific context of the speech; providing clear instructions for training activities; and developing models that can link low-level behavioral patterns to higher-level performance criteria over time. The integration of large language models holds promise for future systems, particularly for contextual understanding and simulated interactions, but their application must remain aligned with evidence-based pedagogical goals and personalized support.
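One of those principles, linking low-level behavioral patterns to higher-level performance criteria, can be sketched as a simple interpretable scoring model. This is a hypothetical illustration only: the feature names and weights below are invented, and a real system would learn such a mapping from annotated recordings rather than hard-code it:

```python
# Hypothetical weights linking measurable behaviors to one
# higher-level criterion ("perceived energy"). All values assume
# features already normalized to the 0..1 range.
ENERGY_WEIGHTS = {
    "pitch_variation": 0.5,  # more vocal variety  -> more energy
    "speech_rate": 0.3,      # brisker pace        -> more energy
    "gesture_rate": 0.2,     # more gesturing      -> more energy
}

def score_energy(features):
    """Combine low-level features into one interpretable high-level
    score, and flag the weakest feature as a feedback target."""
    contributions = {k: ENERGY_WEIGHTS[k] * features[k] for k in ENERGY_WEIGHTS}
    score = sum(contributions.values())
    weakest = min(features, key=features.get)  # candidate for a comment
    return score, weakest

score, weakest = score_energy(
    {"pitch_variation": 0.2, "speech_rate": 0.8, "gesture_rate": 0.5}
)
print(round(score, 2), weakest)
# → 0.44 pitch_variation
```

Because each contribution is explicit, feedback can name the specific behavior holding the score back (here, low pitch variation) rather than reporting an opaque aggregate, which addresses the transparency gap the experts criticized in current tools.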