Research on Multi-modal Affective Assistant Model in Online Synchronous Teaching

Cited by: 0
Authors
Li, Yinghui [1 ]
Han, Jun [1 ]
Liu, Jing [1 ]
Zhao, Yue [1 ]
Affiliations
[1] Dept Educ Technol, Beijing 100048, Peoples R China
Keywords
Online synchronous teaching; multi-modal affective computing; agent technology; affective assistant; emotional interaction
DOI
10.1145/3207677.3278030
CLC Number
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
Emotion plays an important role in learning. In online education, limited interaction and a lack of emotional communication between teachers and students reduce the effectiveness of teaching. This paper designs a multi-modal affective assistant model for one-to-one instruction in online synchronous teaching. The model consists of a local Agent and a communication Agent. Using multi-modal emotional information fusion, the affective assistant is deployed at both the teacher end and the student end, realizing human-computer interaction based on affective computing. The emotional states of teachers and students are recorded and measured in real time during the learning process, and emotional analysis is performed. The teaching-assistant and learning-assistant functions, built on multi-modal affective computing, give teachers and students appropriate emotional feedback and improve the effectiveness of teaching and learning.
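To make the described architecture concrete, the following is a minimal illustrative sketch of a local Agent that performs late fusion of per-modality emotion scores and a communication Agent that relays feedback between the teacher and student ends. It assumes upstream recognizers (face, voice, text) already produce emotion distributions; the class names, modality weights, and emotion labels are hypothetical and are not taken from the paper.

# Minimal sketch only: weighted late fusion of per-modality emotion scores.
# All names, weights, and labels below are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

EMOTIONS = ["engaged", "confused", "bored", "frustrated"]  # assumed label set

@dataclass
class LocalAgent:
    """Runs at the teacher or student end; fuses per-modality emotion scores."""
    weights: Dict[str, float] = field(
        default_factory=lambda: {"face": 0.5, "voice": 0.3, "text": 0.2}
    )
    history: List[Dict[str, float]] = field(default_factory=list)

    def fuse(self, modality_scores: Dict[str, Dict[str, float]]) -> Dict[str, float]:
        """Weighted late fusion of per-modality emotion distributions."""
        fused = {e: 0.0 for e in EMOTIONS}
        total_weight = 0.0
        for modality, scores in modality_scores.items():
            w = self.weights.get(modality, 0.0)
            total_weight += w
            for e in EMOTIONS:
                fused[e] += w * scores.get(e, 0.0)
        if total_weight > 0:
            fused = {e: v / total_weight for e, v in fused.items()}
        self.history.append(fused)  # record the emotional state over time
        return fused

@dataclass
class CommunicationAgent:
    """Exchanges the fused emotional state and suggests feedback to the teacher."""
    def feedback(self, fused: Dict[str, float]) -> str:
        dominant = max(fused, key=fused.get)
        hints = {
            "engaged": "Learner is engaged; keep the current pace.",
            "confused": "Learner seems confused; consider re-explaining the point.",
            "bored": "Learner seems bored; try a question or an example.",
            "frustrated": "Learner seems frustrated; slow down and encourage.",
        }
        return hints[dominant]

if __name__ == "__main__":
    student_end = LocalAgent()
    messenger = CommunicationAgent()
    # Hypothetical per-modality scores from upstream recognizers.
    scores = {
        "face": {"engaged": 0.2, "confused": 0.6, "bored": 0.1, "frustrated": 0.1},
        "voice": {"engaged": 0.3, "confused": 0.4, "bored": 0.2, "frustrated": 0.1},
        "text": {"engaged": 0.1, "confused": 0.7, "bored": 0.1, "frustrated": 0.1},
    }
    print(messenger.feedback(student_end.fuse(scores)))  # suggests re-explaining

In this sketch the "confused" score dominates after fusion, so the communication Agent would prompt the teacher to re-explain; the paper's actual fusion method and feedback strategy may differ.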
Pages: 5
Related Papers
50 records in total
  • [1] Teaching to speak online: A multi-modal approach
    Woloshen, Sonya
    CANADIAN MODERN LANGUAGE REVIEW-REVUE CANADIENNE DES LANGUES VIVANTES, 2018, 74 (02): 334 - 336
  • [2] Research on Multi-Modal Music Score Alignment Model for Online Music Education
    Ren, Dexin
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2024, 28 (05) : 1075 - 1084
  • [3] Multi-modal Approach for Affective Computing
    Siddharth
    Jung, Tzyy-Ping
    Sejnowski, Terrence J.
    2018 40TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2018, : 291 - 294
  • [4] Innovative Research on College English Teaching in Multi-modal Teaching Mode
    Ni, Lizhu
    PROCEEDINGS OF THE 2017 INTERNATIONAL CONFERENCE ON INNOVATIONS IN ECONOMIC MANAGEMENT AND SOCIAL SCIENCE (IEMSS 2017), 2017, 29 : 1239 - 1244
  • [5] LLaVA-Phi: Efficient Multi-Modal Assistant with Small Language Model
    Zhu, Yichen
    Zhu, Minjie
    Liu, Ning
    Xu, Zhiyuan
    Peng, Yaxin
    PROCEEDINGS OF THE 1ST INTERNATIONAL WORKSHOP ON EFFICIENT MULTIMEDIA COMPUTING UNDER LIMITED RESOURCES, EMCLR 2024, 2024, : 18 - 22
  • [6] MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis
    Li, Shuzhen
    Zhang, Tong
    Chen, Bianna
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (04) : 2796 - 2809
  • [7] Architecture considerations for interoperable multi-modal assistant systems
    Heider, T
    Kirste, T
    INTERACTIVE SYSTEMS: DESIGN, SPECIFICATION AND VERIFICATION, 2002, 2545 : 253 - 267
  • [8] ONLINE TECHNOLOGIES FOR MULTI-MODAL LITERACIES
    Epasto, A.
    Smeriglio, D.
    Mazzeo, A.
    Nucera, S.
    ICERI2016: 9TH INTERNATIONAL CONFERENCE OF EDUCATION, RESEARCH AND INNOVATION, 2016, : 1812 - 1821
  • [9] DIY Assistant: A Multi-modal End-User Programmable Virtual Assistant
    Fischer, Michael H.
    Campagna, Giovanni
    Choi, Euirim
    Lam, Monica S.
    PROCEEDINGS OF THE 42ND ACM SIGPLAN INTERNATIONAL CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '21), 2021, : 312 - 327
  • [10] Interdisciplinary and multi-modal teaching methods
    Howe, J
    GERONTOLOGIST, 2005, 45 : 599 - 599