Empowering Retail Dual Transformer-Based Profound Product Recommendation Using Multi-Model Review

Cited by: 0
Authors
Alsekait, Deema Mohammed [1 ]
Nawaz, Asif [2 ]
Fathi, Hanaa [3 ]
Ahmed, Zohair [4 ]
Taha, Mohamed [5 ]
Alshinwan, Mohammad [6 ]
Taha, Ahmed [5 ]
Issa, Mohamed F.
Nabil, Ayman [7 ]
AbdElminaam, Diaa Salama [8 ]
Affiliations
[1] Princess Nourah bint Abdulrahman Univ, Alriyad, Saudi Arabia
[2] Arid Agr Univ, PMAS, Rawalpindi, Pakistan
[3] Appl Sci Private Univ, Amman, Jordan
[4] Islamic Univ, Islamabad, Pakistan
[5] Benha Univ, Banha, Egypt
[6] Univ Pannonia, Pannonia, Hungary
[7] Misr Int Univ, Cairo, Egypt
[8] Jadara Res Ctr, Cairo, Egypt
Keywords
Multus-Medium Reviews; Recommendation; Sentiment Score; SpanBERT; Fusion; Vti Transformer; SENTIMENT ANALYSIS; MODEL;
DOI
10.4018/JOEUC.358002
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Advancements in technology have significantly changed how we interact on social media platforms, where reviews and comments heavily influence consumer decisions. Traditionally, opinion mining has focused on textual data, overlooking the valuable insights present in customer-uploaded images, a concept we term Multus-Medium. This paper introduces a multimodal strategy for product recommendations that utilizes both text and image data. The proposed approach involves data collection, preprocessing, and sentiment analysis using Vti for images and SpanBERT for text reviews. These outputs are then fused to generate a final recommendation. The proposed model demonstrates superior performance, achieving 91.55% accuracy on the Amazon dataset and 90.89% on the Kaggle dataset. These compelling findings underscore the potential of our approach, offering a comprehensive and precise method for opinion mining in the era of social media-driven product reviews, ultimately aiding consumers in making informed purchasing decisions.
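The abstract describes a two-branch pipeline: a vision transformer ("Vti") scores sentiment from customer-uploaded images, SpanBERT scores sentiment from review text, and the two scores are fused into a final recommendation. The sketch below is one plausible late-fusion reading of that pipeline, not the authors' implementation: the Hugging Face checkpoints (SpanBERT/spanbert-base-cased, google/vit-base-patch16-224), the two-class sentiment heads (which would need fine-tuning on labeled review data), and the fusion weight alpha are all illustrative assumptions.

```python
# Illustrative late-fusion sketch of the text + image sentiment pipeline outlined
# in the abstract. Checkpoint names, the 2-class heads, and the fusion weight
# alpha are assumptions, not the paper's configuration; the classifier heads are
# randomly initialized here and must be fine-tuned before the scores mean anything.
import torch
from PIL import Image
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    ViTImageProcessor,
    ViTForImageClassification,
)

# Text branch: SpanBERT with a sentiment classification head (assumed 2 classes).
# SpanBERT shares the BERT cased vocabulary; fall back to "bert-base-cased" if
# this checkpoint ships without tokenizer files.
tok = AutoTokenizer.from_pretrained("SpanBERT/spanbert-base-cased")
text_model = AutoModelForSequenceClassification.from_pretrained(
    "SpanBERT/spanbert-base-cased", num_labels=2
)

# Image branch: a ViT-style transformer (the paper's "Vti") with a matching head.
proc = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
image_model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224", num_labels=2, ignore_mismatched_sizes=True
)


def recommend(review_text: str, review_image: Image.Image, alpha: float = 0.6) -> dict:
    """Fuse text and image sentiment into a recommendation score (assumed scheme)."""
    with torch.no_grad():
        # Positive-class probability from the text branch.
        t_inputs = tok(review_text, return_tensors="pt", truncation=True)
        p_text = text_model(**t_inputs).logits.softmax(dim=-1)[0, 1].item()

        # Positive-class probability from the image branch.
        i_inputs = proc(images=review_image, return_tensors="pt")
        p_image = (
            image_model(pixel_values=i_inputs.pixel_values).logits.softmax(dim=-1)[0, 1].item()
        )

    # Late fusion: weighted average of the two sentiment scores (alpha is illustrative).
    score = alpha * p_text + (1.0 - alpha) * p_image
    return {"text": p_text, "image": p_image, "fused": score, "recommend": score >= 0.5}


if __name__ == "__main__":
    dummy_image = Image.new("RGB", (224, 224))  # stand-in for a customer-uploaded photo
    print(recommend("Great build quality, arrived exactly as pictured.", dummy_image))
```

A simple weighted average is only one fusion choice; the paper's fused representation may instead concatenate features or learn the weighting, which this sketch does not attempt to reproduce.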
Pages: 23
Related Papers
50 records total
  • [31] Transformer-based two-source motion model for multi-object tracking
    Yang, Jieming
    Ge, Hongwei
    Su, Shuzhi
    Liu, Guoqing
    APPLIED INTELLIGENCE, 2022, 52 (09) : 9967 - 9979
  • [32] Transformer-Based Multi-Scale Feature Remote Sensing Image Classification Model
    Sun, Ting
    Li, Jun
    Zhou, Xiangrui
    Chen, Zan
    IEEE ACCESS, 2025, 13 : 34095 - 34104
  • [33] Transformer-based two-source motion model for multi-object tracking
    Jieming Yang
    Hongwei Ge
    Shuzhi Su
    Guoqing Liu
    Applied Intelligence, 2022, 52 : 9967 - 9979
  • [34] TransDiffSeg: Transformer-Based Conditional Diffusion Segmentation Model for Abdominal Multi-Objective
    Gu, WenWen
    Zhang, GuoDong
    Ju, RongHui
    Wang, SuRan
    Li, YanLin
    Liang, TingYu
    Guo, Wei
    Gong, ZhaoXuan
    JOURNAL OF IMAGING INFORMATICS IN MEDICINE, 2025, 38 (01) : 262 - 280
  • [35] Privacy-Aware Human Activity Classification using a Transformer-based Model
    Thipprachak, Khirakorn
    Tangamchit, Poj
    Lerspalungsanti, Sarawut
    2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022, : 528 - 534
  • [36] Improving transformer-based acoustic model performance using sequence discriminative training
    Lee, Chae-Won
    Chang, Joon-Hyuk
    JOURNAL OF THE ACOUSTICAL SOCIETY OF KOREA, 2022, 41 (03): : 335 - 341
  • [37] A transformer-based model for next disease prediction using electronic health records
    Makarov, Nikolai
    Lipkovich, Mikhail
    EUROPEAN PHYSICAL JOURNAL-SPECIAL TOPICS, 2025,
  • [38] A Dual Transformer-Based Deep Learning Model for Passenger Anomaly Behavior Detection in Elevator Cabs
    Ji, Yijin
    Sun, Haoxiang
    Xu, Benlian
    Lu, Mingli
    Zhou, Xu
    Shi, Jian
    INTERNATIONAL JOURNAL OF SWARM INTELLIGENCE RESEARCH, 2024, 15 (01)
  • [39] High entropy alloy property predictions using a transformer-based language model
    Spyros Kamnis
    Konstantinos Delibasis
    Scientific Reports, 15 (1)
  • [40] Dual-attention transformer-based hybrid network for multi-modal medical image segmentation
    Zhang, Menghui
    Zhang, Yuchen
    Liu, Shuaibing
    Han, Yahui
    Cao, Honggang
    Qiao, Bingbing
    SCIENTIFIC REPORTS, 2024, 14 (01):