
AI in Education and Society: Sanity Saver, High Costs, and Seismic Change

The Language Innovators Podcast episode featuring Dr. Deborah Healey, 2019-2020 President of TESOL International Association, offers a deep discussion of the benefits and costs of Generative AI in language education and society. Hosted by Dr. Linh Phung (CEO of Eduling) and Nik Wolfe (CTO of Eduling), the conversation covers the immediate practical benefits for teachers; the systemic challenges regarding the changed nature of teaching and learning, privacy, and environmental costs; and the philosophical shifts required to maintain the "human element" in the classroom.


Subscribe to The Language Innovators Podcast for upcoming episodes and shorts.


Full video


Episode Description

In this comprehensive discussion, Dr. Deborah Healey shares her insights from decades of experience in Computer-Assisted Language Learning (CALL) to address the sudden rise of tools like ChatGPT and Gemini. The episode moves beyond the hype of AI as a "magic bullet," instead framing it as a transformative force that demands a reconstruction of pedagogical practices.


Dr. Healey, drawing from her global teacher-training experiences, particularly in parts of Africa, highlights how AI can be a "sanity saver" for overworked teachers in large-class contexts. However, she balances this optimism with warnings about the "deskilling" of educators and the widening digital divide. Nik Wolfe provides a technical counterbalance, explaining the mechanics of Large Language Models (LLMs), the intractability of hallucinations, and the reality of data privacy.


Key Highlights and Core Themes

1. The "Sanity Saver": AI as an Efficiency Tool

Dr. Healey emphasizes that for teachers in parts of the world with class sizes exceeding 100 students, Generative AI provides immediate relief in lesson planning, material creation, and brainstorming.

  • Reduced Preparation Time: AI can summarize readings and generate initial drafts of lesson plans.

  • The "Expertise" Caveat: A central theme is that AI-generated content is only useful if the teacher is skilled enough to improve upon it. Dr. Healey warns against a "thank you, I’m done" mentality, which can lead to classroom disaster and the deskilling of teachers.


2. Pedagogical Reconstruction and the Death of the "Internet Research" Assignment

The group discusses how AI has fundamentally broken traditional homework models.

  • The 10-Second Essay: Dr. Healey points out that students are not "stupid"—if a bot can produce a paper in seconds that would take a human an hour, they will use it.

  • Local Context as a Shield: To combat cheating, the episode suggests shifting assignments toward local, personal contexts (e.g., describing a specific local event or a family member) that the bot cannot replicate.

  • Oral Follow-Up: After receiving students’ work, the teacher can follow up with oral questioning (e.g., asking a student to describe their grandmother aloud).


3. The Ethical Minefield: Environment and Privacy

One of the most sobering segments of the episode involves the "hidden costs" of AI.

  • Environmental Toll: Dr. Healey raises the issue of the massive water and power consumption required by data centers. She notes that generating images or videos is significantly more taxing on natural resources than well-structured text queries.

  • Privacy and "Data Hoovering": Nik Wolfe explains that for free versions of AI tools, "privacy is a myth." Any data uploaded becomes part of the training set.


4. The "Glazing" Effect and Bot Empathy

The conversation explores the psychological impact of AI interactions.

  • Artificial Empathy: Dr. Healey recounts how her son describes AI as a "sociopath" in the sense that it has no emotions but is excellent at faking them.

  • The "Glazing" Phenomenon: Nik Wolfe introduces the term "glazing"—the programmed tendency of bots to be overly encouraging ("That's a fantastic idea!"). 

  • The 5% Problem: Dr. Phung references data from Khan Academy suggesting that AI tutors only truly engage about 5% of students, those who are already highly motivated and self-directed.


5. Technical Realities: Next Token Prediction and Hallucinations

Nik Wolfe demystifies the "intelligence" of AI, explaining it as Next Token Prediction.

  • Intractability of Hallucinations: Because AI is a statistical model, it doesn't "know" facts; it predicts the most likely next word. This makes hallucinations (like Dr. Healey’s experience with AI inventing non-existent TED talks) a feature of the system, not a bug.

  • Prompt Engineering as Constraint: Nik explains that effective prompting is actually about "constraining the space" of possible outputs to make the model more consistent.
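The idea of next token prediction can be illustrated with a toy sketch. The code below is a minimal bigram model, not an actual neural LLM; the corpus, function names, and sampling scheme are all illustrative assumptions. It shows the core mechanic the episode describes: the model emits whichever token is statistically likely to come next, with no notion of whether the result is true.

```python
import random

# Toy "language model": count how often each token follows another in a
# tiny corpus. Real LLMs do the same thing at vastly larger scale,
# predicting a probability distribution over the next token given
# everything seen so far.
corpus = "the cat sat on the mat the cat ran on the grass".split()

bigram_counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts.setdefault(prev, {})
    bigram_counts[prev][nxt] = bigram_counts[prev].get(nxt, 0) + 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed `prev`."""
    options = bigram_counts[prev]
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# Generate text one predicted token at a time.
random.seed(0)
text = ["the"]
for _ in range(5):
    if text[-1] not in bigram_counts:  # no continuation seen for this token
        break
    text.append(next_token(text[-1]))
print(" ".join(text))
```

Because the output is sampled from learned frequencies rather than checked against facts, a fluent but false continuation (a hallucination) is a natural outcome of this design, which is the sense in which it is a feature rather than a bug.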


6. The Future of Language Teaching

The episode concludes with a call to action for educators to define their "human value."

  • Beyond Task-Providing: If a teacher’s only role is providing tasks, they are replaceable. If their role is to provide empathy, scaffolding, and human connection, they are essential.

  • ESP and EAP: Dr. Healey predicts that English for Specific Purposes (ESP) will see the fastest transformation, as highly motivated learners use AI to master technical manuals and professional communication.

  • The Digital Divide: There is a grave concern that as AI becomes the standard, the gap between the "high-bandwidth" Global North and the "low-connectivity" Global South will widen, undoing years of progress made by smartphone penetration.

  • Incorporating Knowledge about Teaching and Learning: Educators must keep incorporating insights from decades of research on teaching and learning, rather than sliding back to the right-or-wrong feedback and mechanical drilling that past technological revolutions kept reverting to.


Conclusion

The episode serves as a nuanced discussion of the AI era in education. It encourages teachers to be "transparent" (sharing when they use AI if they expect students to do the same) and to remain curious but critical. As Dr. Healey notes, the goal is to "keep it human" while navigating a landscape where the tools are powerful, the costs are high, and the potential for both innovation and harm is unprecedented.

