Teachers' Vocal Expressions and Student Engagement in Asynchronous Video Learning

Cited: 0
Authors
Suen, Hung-Yue [1 ]
Su, Yu-Sheng [2 ,3 ,4 ]
Affiliations
[1] Natl Taiwan Normal Univ, Dept Technol Applicat & Human Resource Dev, Taipei, Taiwan
[2] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi, Taiwan
[3] Natl Chung Cheng Univ, Adv Inst Mfg High Tech Innovat, Chiayi, Taiwan
[4] Natl Taiwan Ocean Univ, Dept Comp Sci & Engn, Keelung, Taiwan
Keywords
Acoustic analysis; natural language processing; machine learning; pedagogy; sentiment analysis; speech emotion; SPEECH EMOTION RECOGNITION; VOICE; COMMUNICATION; MOTIVATION; MODEL;
DOI
10.1080/10447318.2025.2474469
Chinese Library Classification: TP3 [Computing Technology, Computer Technology];
Discipline Code: 0812;
Abstract
Asynchronous video learning, including massive open online courses (MOOCs), offers flexibility but often fails to sustain students' affective engagement. This study examines how teachers' verbal and nonverbal vocal emotive expressions influence students' self-reported affective engagement. Using computational acoustic and sentiment analysis, valence and arousal scores were extracted from teachers' verbal vocal expressions, and nonverbal vocal emotions were classified into six categories: anger, fear, happiness, neutral, sadness, and surprise. Data from 210 video lectures across four MOOC platforms, together with post-class feedback from 738 students, were analyzed. Results revealed that teachers' verbal emotive expressions did not significantly affect engagement, even when they carried positive valence and high arousal. In contrast, nonverbal vocal expressions with positive valence and high arousal (e.g., happiness, surprise) enhanced engagement, while negative high-arousal emotions (e.g., anger) reduced it. These findings offer practical insights for instructional video creators, teachers, and influencers seeking to foster emotional engagement in asynchronous video learning.
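The abstract describes classifying vocal emotions into six categories alongside continuous valence/arousal scores. As a minimal illustration of how such dimensional scores can relate to discrete labels, the sketch below applies a simple circumplex-style quadrant rule. This is a hypothetical example with illustrative thresholds, not the authors' actual classification pipeline, which used computational acoustic analysis of the audio signal itself.

```python
def classify_vocal_emotion(valence: float, arousal: float) -> str:
    """Map valence/arousal scores in [-1, 1] onto the six emotion labels
    named in the study. Thresholds are illustrative assumptions only."""
    # Low-magnitude scores on both dimensions: treat as neutral.
    if abs(valence) < 0.2 and abs(arousal) < 0.2:
        return "neutral"
    if valence >= 0:
        # Positive valence: very high arousal reads as surprise,
        # moderate arousal as happiness.
        return "surprise" if arousal >= 0.6 else "happiness"
    # Negative valence: high arousal maps to anger, moderate to fear,
    # low arousal to sadness.
    if arousal >= 0.6:
        return "anger"
    return "fear" if arousal >= 0.2 else "sadness"


if __name__ == "__main__":
    samples = [(0.8, 0.9), (0.7, 0.3), (-0.6, 0.9), (-0.5, 0.1), (0.05, 0.05)]
    for v, a in samples:
        print(f"valence={v:+.2f}, arousal={a:+.2f} -> {classify_vocal_emotion(v, a)}")
```

Under this rule, the study's key contrast is easy to state: high-arousal, positive-valence regions (happiness, surprise) were associated with higher engagement, while the high-arousal, negative-valence region (anger) was associated with lower engagement.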
Pages: 12
Related Papers
50 records
  • [41] What Drives Student Engagement and Learning in Video Lectures? An Investigation of Instructor Visibility, Playback Speed, and Student Preferences
    Ahn, Dahwi
    Chan, Jason C. K.
    APPLIED COGNITIVE PSYCHOLOGY, 2025, 39 (02)
  • [42] AN INITIAL ANALYSIS OF STUDENT ENGAGEMENT WHILE LEARNING FOOD ANALYSIS BY MEANS OF A VIDEO GAME
    Chin Vera, Jose del Carmen
    Lopez-Malo, Aurelio
    Palou, Enrique
    2012 ASEE ANNUAL CONFERENCE, 2012,
  • [43] Exploring student perceptions of asynchronous video in online courses
    Lowenthal, Patrick R.
    DISTANCE EDUCATION, 2022, 43 (03) : 369 - 387
  • [44] Using video conferencing to improve the supervision of student teachers and pre-student-teachers
    Dudt, KP
    Garrett, JL
    PROTEUS, 1997, 14 (01) : 22 - 24
  • [45] Statistical Assessment on Student Engagement in Asynchronous Online Learning Using the k-Means Clustering Algorithm
    Kim, Sohee
    Cho, Sunghee
    Kim, Joo Yeun
    Kim, Dae-Jin
    SUSTAINABILITY, 2023, 15 (03)
  • [46] Video narratives to assess student teachers' competence as new teachers
    Admiraal, Wilfried
    Berry, Amanda
    TEACHERS AND TEACHING, 2016, 22 (01) : 21 - 34
  • [47] Understanding student behavioral engagement: Importance of student interaction with peers and teachers
    Tuan Dinh Nguyen
    Cannata, Marisa
    Miller, Jason
    JOURNAL OF EDUCATIONAL RESEARCH, 2018, 111 (02): : 163 - 174
  • [48] Quality of learning outcomes in an online video-based learning community: potential and challenges for student teachers
    So, Winnie Wing-mui
    ASIA-PACIFIC JOURNAL OF TEACHER EDUCATION, 2012, 40 (02) : 143 - 158
  • [49] VIDEO LEARNING ENVIRONMENT FOR GUIDING STUDENT TEACHERS' CONSTRUCTION OF ACTION-ORIENTED KNOWLEDGE
    Pedaste, Margus
    Allas, Raili
    Leijen, Aeli
    Adojaan, Kristjan
    Husu, Jukka
    Marcos, Juan-Jose Mena
    Meijer, Paulien
    Knezic, Dubravka
    Krull, Edgar
    Toom, Auli
    INTED2014: 8TH INTERNATIONAL TECHNOLOGY, EDUCATION AND DEVELOPMENT CONFERENCE, 2014, : 24 - 30
  • [50] Automatic Assessment of Engagement and Attention of the Student by Means of Facial Expressions
    Vazquez Rodriguez, Catalina Alejandra
    Pinto Elias, Raul
    ADVANCES IN DIGITAL TECHNOLOGIES, 2017, 295 : 60 - 70