
Pronunciation Research and Teaching: Language Innovators Episode 2

Summary

In this episode of the Language Innovators podcast, Dr. Linh Phung, co-founder and CEO of Eduling, sits down with Dr. Bradford Lee to discuss the intersection of pronunciation research, technology, and language teaching. Dr. Lee, a tenured professor and Director of the International Centre at Fukui University of Technology in Japan, brings over 26 years of experience in TESOL and a specialized research background in second language acquisition (SLA) and pronunciation instruction.


The Complete Episode



Highlights


Explicit vs. Implicit Instruction

A significant portion of the discussion centers on the debate between explicit and implicit instruction. Dr. Lee clarifies these terms for the audience:

  • Explicit Instruction: Occurs when a teacher directly points out an error and explains how to fix it (e.g., "You said 'lice' with an L; you mean 'rice' with an R").

  • Implicit Instruction: Involves subtle hints, such as recasts, where the teacher models the correct form without explicitly stating a mistake was made (e.g., responding to "I want to eat some lice" with "Oh, you want some rice?").


Dr. Lee argues that while implicit instruction is less intrusive, it relies heavily on the student's ability to "notice" the hint. If the student fails to realize they made an error, learning is unlikely to occur. Consequently, he finds explicit instruction to be more efficient, particularly in Asian contexts where students often expect and appreciate direct correction to help them consciously improve.


Perception vs. Production in Training

Dr. Lee’s research challenges the common assumption that pronunciation training must always involve speaking. In his dissertation, he compared perception-based training (listening and distinguishing sounds) with production-based training (actually saying the sounds).


In his perception-based intervention, students were forbidden from speaking; instead, they listened to minimal pairs (like "pit" and "pet") and pointed to the word they heard. His findings revealed that perception training improved eventual pronunciation accuracy more efficiently than production training alone. He posits that students must be able to hear the difference between sounds to self-correct when they speak; without this internal benchmark, they cannot judge their own accuracy.
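To make the format of that intervention concrete, here is a minimal sketch of a two-alternative forced-choice perception drill in Python. The word pairs, audio playback, and scoring are hypothetical stand-ins for illustration, not Dr. Lee's actual materials.

```python
import random

# Hypothetical minimal pairs; Dr. Lee's actual stimuli are not listed in the episode.
MINIMAL_PAIRS = [("pit", "pet"), ("rice", "lice"), ("ship", "sheep")]

def play_audio(word: str) -> None:
    """Placeholder: play a pre-recorded clip of `word` (e.g., with a library like simpleaudio)."""
    print(f"[playing audio for '{word}']")

def run_drill(n_trials: int = 10) -> float:
    """Two-alternative forced choice: the learner hears one word and picks which one they heard."""
    correct = 0
    for _ in range(n_trials):
        pair = random.choice(MINIMAL_PAIRS)
        target = random.choice(pair)
        play_audio(target)
        answer = input(f"Which did you hear? {pair[0]} / {pair[1]}: ").strip().lower()
        correct += (answer == target)
    return correct / n_trials

if __name__ == "__main__":
    print(f"Accuracy: {run_drill():.0%}")
```

In the real study, of course, learners heard recorded speakers rather than a console prompt; the sketch only illustrates the listen-and-identify format with no production required.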


The Role of Aptitude, Input, and Instruction

The speakers discuss a 2022 study involving Japanese learners in Japan and the UK, which explored how immersion and auditory processing abilities affect speech learning. The study found that:

  • Immersion and Exposure: Learners living in the UK showed significantly higher pronunciation accuracy due to constant daily use and input.

  • Aptitude: Students with a higher "sound-related aptitude"—the ability to distinguish between different tones—tended to have better pronunciation accuracy.


Despite the importance of these factors, Dr. Lee emphasizes that instruction remains vital because it makes the learning process faster and more efficient. Instruction acts as a catalyst, helping students notice errors they might otherwise ignore even in an immersive environment.


Task-Based Pronunciation Teaching (TBPT)

Dr. Phung and Dr. Lee discuss the potential of Task-Based Language Teaching (TBLT) to bridge the gap between controlled classroom practice and spontaneous, real-life communication. Dr. Phung believes TBPT can transfer classroom gains to spontaneous communication more effectively than traditional "listen and repeat" drills because it requires students to focus primarily on meaning while attending to forms (including pronunciation features) incidentally.

In pronunciation research, Dr. Lee highlights modern assessment protocols in which students describe pictures using specific keywords (words containing target sounds mixed in with distractors). Because the student's primary focus is on describing the scene, their pronunciation of the target sound is more representative of spontaneous speech than in a simple reading task. This method allows researchers to investigate whether instruction carries over to spontaneous speech.


Leveraging Technology and AI

The episode concludes with practical advice for teachers on using technology to teach pronunciation:

  • High Variability Training: One of the most effective methods, known in the research literature as high variability phonetic training (HVPT), is exposing students to many different voices (male and female, various ages and accents) saying the same word. This helps the brain form a clearer "border" around what a specific sound should be.

  • AI as a Tool: Teachers can use tools like ChatGPT's audio function to generate multiple exemplars of the same word in different accents (e.g., British vs. American) or personas; one programmatic route is shown in the first sketch after this list.

  • YouGlish: Dr. Phung recommends the site YouGlish, which provides thousands of YouTube clips of different people saying specific words or phrases, offering vast authentic input.

  • Acoustic Analysis: For researchers, Dr. Lee recommends Praat, a software tool used to analyze speech signals. His own use of acoustic analysis revealed that the "back of the tongue" (specifically the F2 formant) is just as important as the tip of the tongue when teaching difficult "R" and "L" sounds; see the second sketch below.
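As a rough illustration of the AI suggestion above, the following sketch generates the same word in several voices, assuming the openai Python SDK and its text-to-speech endpoint (the episode refers to ChatGPT's built-in audio function; this is just one programmatic way to produce similarly varied input). The word and file names are arbitrary examples.

```python
from openai import OpenAI  # pip install openai; requires an OPENAI_API_KEY

client = OpenAI()
WORD = "rice"

# Generate the same word in several of the API's built-in voices to vary the input.
for voice in ["alloy", "echo", "fable", "nova", "onyx", "shimmer"]:
    response = client.audio.speech.create(model="tts-1", voice=voice, input=WORD)
    response.write_to_file(f"{WORD}_{voice}.mp3")
```

Playing these clips back to back gives students multiple exemplars of a single target word, in the spirit of the high variability training described above.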

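For the acoustic-analysis point, here is a minimal sketch that extracts formant values with Praat's algorithms via the praat-parselmouth Python bindings. The file name, time point, and analysis settings are placeholders; the episode does not specify the exact procedure Dr. Lee used.

```python
import parselmouth  # pip install praat-parselmouth

# Placeholder: a recording of a learner saying an R/L word such as "rice".
snd = parselmouth.Sound("rice_learner.wav")
formants = snd.to_formant_burg()  # Praat's standard Burg-method formant tracker

t = 0.5 * (snd.xmin + snd.xmax)  # placeholder time point: the middle of the recording
f2 = formants.get_value_at_time(2, t)  # F2 (Hz), linked to tongue backness
f3 = formants.get_value_at_time(3, t)  # F3 (Hz), a classic English /r/ vs. /l/ cue
print(f"F2 = {f2:.0f} Hz, F3 = {f3:.0f} Hz at t = {t:.2f} s")
```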

Ultimately, the conversation underscores that while technology and research provide new ways to look at speech, successful pronunciation improvement requires a balance of massive input, meaningful communication, and instruction.


Subscribe to the Language Innovators podcast for more episodes.


Explore these pronunciation courses on the Eduling app.