A Perception-Based L2 Speech Intelligibility Indicator:
Leveraging A Rater’s Shadowing and Sequence-to-Sequence Voice Conversion

Authors: Haopeng Geng, Daisuke Saito, Nobuaki Minematsu

Institution: Graduate School of Engineering, The University of Tokyo

Abstract

Research Background

Evaluating L2 speech intelligibility is crucial for effective computer-assisted language learning (CALL). Conventional ASR-based methods often focus on native-likeness, which may fail to capture the intelligibility actually perceived by human listeners. In contrast, our work introduces a novel, perception-based L2 speech intelligibility indicator that leverages a native rater's shadowing data within a sequence-to-sequence (seq2seq) voice conversion (VC) framework. By integrating an alignment mechanism and acoustic feature reconstruction, our approach simulates the auditory perception of native listeners and identifies segments in L2 speech that are likely to cause comprehension difficulties. Both objective and subjective evaluations indicate that our method aligns more closely with native judgments than traditional ASR-based metrics, offering a promising new direction for CALL systems in global, multilingual contexts.
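As a rough, illustrative sketch of how an alignment-based indicator of this kind could be probed (not the exact model or scoring rule used in our system), the snippet below treats each row of a seq2seq VC attention matrix, i.e. the distribution a decoder frame of the rater's shadowed speech places over the learner's frames, and scores frames whose distribution is abnormally flat as candidate perception breakdowns. The function names and the entropy threshold are hypothetical assumptions.

    import numpy as np

    def alignment_breakdown_scores(attention: np.ndarray) -> np.ndarray:
        """Per-frame breakdown scores from a seq2seq VC attention matrix.

        attention: (T_shadow, T_learner) array; row t is the attention the
        decoder placed over the learner's frames while generating frame t of
        the rater's shadowed speech. A sharp, roughly monotonic row suggests
        the rater could follow the learner; a flat (high-entropy) row is
        taken here as a sign of a perception breakdown.
        """
        eps = 1e-8
        probs = attention / (attention.sum(axis=1, keepdims=True) + eps)
        entropy = -(probs * np.log(probs + eps)).sum(axis=1)   # (T_shadow,)
        return entropy / np.log(attention.shape[1])            # normalize to [0, 1]

    def breakdown_mask(attention: np.ndarray, threshold: float = 0.6) -> np.ndarray:
        """Boolean mask over shadowed-speech frames whose alignment looks broken.

        The 0.6 threshold is an arbitrary illustrative value, not a tuned one.
        """
        return alignment_breakdown_scores(attention) > threshold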


Instructions & Legend

Instructions: Click on the waveform to play/pause the audio. Highlighted intervals represent unintelligible words detected by various methods.

Ground Truth:
Human annotations of unintelligible words, obtained through a two-stage PPG-DTW annotation procedure.
ASR:
ASR-based word errors.
Proposed Method 1 (VC Alignment Based):
Unintelligible words identified from alignment breakdowns in the seq2seq VC framework, which are used to replicate the rater's perception breakdowns (see the pooling sketch after this legend).
Proposed Method 2 (Multi-Task Learning):
Annotations produced by multi-task learning that combines seq2seq VC with disfluency detection.
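To connect the frame-level scores of Proposed Method 1 to the word-level highlights shown in the examples, here is a minimal pooling sketch. The word boundaries are assumed to come from a forced aligner run on the learner's utterance (a hypothetical input, not part of the released data), and the threshold and ratio are illustrative values only.

    from typing import List, Tuple

    def flag_unintelligible_words(scores: List[float],
                                  word_spans: List[Tuple[str, int, int]],
                                  threshold: float = 0.6,
                                  min_ratio: float = 0.5) -> List[str]:
        """Pool per-frame breakdown scores into word-level flags.

        scores     : one breakdown score per learner-speech frame (higher
                     means a more likely perception breakdown).
        word_spans : (word, start_frame, end_frame) triples, e.g. from a
                     forced aligner on the learner's utterance (hypothetical).
        A word is flagged when at least `min_ratio` of its frames exceed
        `threshold`; flagged words correspond to the highlighted intervals
        shown in the examples.
        """
        flagged = []
        for word, start, end in word_spans:
            frames = scores[start:end]
            if not frames:
                continue
            broken_ratio = sum(s > threshold for s in frames) / len(frames)
            if broken_ratio >= min_ratio:
                flagged.append(word)
        return flagged

    # Toy example (illustrative values only):
    # flag_unintelligible_words([0.7, 0.8, 0.3, 0.2],
    #                           [("hello", 0, 2), ("world", 2, 4)])
    # -> ["hello"]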

Examples