Personalized Clothing Prediction Algorithm Based on Multi-modal Feature Fusion

Cited by: 0
Authors
Liu, Rong [1 ,2 ]
Joseph, Annie Anak [1 ]
Xin, Miaomiao [2 ]
Zang, Hongyan [2 ]
Wang, Wanzhen [2 ]
Zhang, Shengqun [2 ]
Affiliations
[1] Univ Malaysia Sarawak, Fac Engn, Kota Samarahan, Sarawak, Malaysia
[2] Qilu Inst Technol, Comp & Informat Engn, Jinan, Peoples R China
Keywords
fashion consumers; image; text data; personalized; multi-modal fusion;
DOI
10.46604/ijeti.2024.13394
CLC Number
T [Industrial Technology];
Discipline Code
08;
Abstract
With the popularization of information technology and the improvement of material living standards, fashion consumers face the daunting challenge of making informed choices from massive amounts of data. This study applies deep learning to sales data to analyze the personalized preference characteristics of fashion consumers and to predict fashion clothing categories, thereby empowering consumers to make well-informed decisions. The Visuelle dataset comprises 5,355 apparel products and 45 MB of sales data, encompassing image data, text attributes, and time-series data. The paper proposes a novel 1DCNN-2DCNN deep convolutional neural network model for the multi-modal fusion of clothing images and sales text data. The experimental findings demonstrate the strong performance of the proposed model, with accuracy, recall, F1 score, macro average, and weighted average reaching 99.59%, 99.60%, 98.01%, 98.04%, and 98.00%, respectively. A comparison with four hybrid models highlights the superiority of the proposed model in addressing personalized preferences.
Pages: 216-230 (15 pages)