Evaluation on algorithms and models for multi-modal information fusion and evaluation in new media art and film and television cultural creation

Cited: 0
Authors
Shao, Junli [1 ]
Wu, Dengrong [2 ]
Affiliations
[1] Sch Chinese Language & Culture Shaoxing, Shaoxing, Zhejiang, Peoples R China
[2] New Era Univ Coll, Negeri Selangor 250003, Malaysia
Keywords
New media art; multi-modal information fusion; recurrent neural network; film and television culture creation; speech recognition
DOI
10.3233/JCM-247565
Chinese Library Classification
T [Industrial Technology]
Discipline Code
08
Abstract
This paper advances new media art and film and television cultural creation through multi-modal information fusion and analysis, and discusses current problems in the field, including piracy, management difficulties, and a lack of innovation capacity. The recurrent structure of an RNN cycles information among its neurons, so the network retains a memory of earlier user information as it learns a sequence; by analyzing user behavior data against this memory, it can make accurate recommendations and give artists a basis for understanding user preferences. The viewing-experience scores of works 1 to 5 created with traditional methods were 6.23, 6.02, 6.56, 6.64, and 6.88, respectively; the scores of works 1 to 5 created through multi-modal information fusion and analysis were 9.41, 9.08, 9.11, 9.61, and 8.44, respectively. Movies created through multi-modal information fusion and analysis thus received higher viewing-experience ratings. These results indicate that multi-modal information fusion and analysis can overcome the limitations of a single traditional creative method, provide rich and diverse forms of expression, and let creators respond more flexibly to complex creative demands, thereby achieving better creative results.
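The recurrent mechanism the abstract describes — a hidden state that carries the memory of earlier user interactions forward through the sequence and is then read out as preference scores — can be illustrated with a minimal sketch. All dimensions, weights, and the `recommend` function below are hypothetical illustrations, not the paper's actual model; trained parameters are replaced by random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4-dim user-behavior features, 8-dim hidden memory,
# and scores over 3 candidate works.
n_in, n_hid, n_out = 4, 8, 3

# Random weights stand in for trained parameters.
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (the recurrent "memory" loop)
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> recommendation scores

def recommend(behavior_seq):
    """Run a user's behavior sequence through the RNN; the hidden state h
    carries the memory of earlier interactions into each new step."""
    h = np.zeros(n_hid)
    for x in behavior_seq:
        # Each step fuses the new observation with the remembered state.
        h = np.tanh(W_xh @ x + W_hh @ h)
    return W_hy @ h  # one preference score per candidate work

seq = rng.normal(size=(5, n_in))   # five past user interactions
scores = recommend(seq)
top = int(np.argmax(scores))       # index of the work to recommend
print(scores.shape, top)
```

The key design point mirrored from the abstract is the `W_hh @ h` term: it is what lets information "cycle among neurons" so that earlier behavior still influences the recommendation made after later steps.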
Pages: 3173-3189
Page count: 17