Deep Cross-Attention Network for Crowdfunding Success Prediction

Cited by: 7
Authors
Tang, Zhe [1]
Yang, Yi [2]
Li, Wen [1]
Lian, Defu [3, 4]
Duan, Lixin [1]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 610054, Peoples R China
[2] Hong Kong Univ Sci & Technol, Business Sch, Hong Kong 999077, Peoples R China
[3] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Peoples R China
[4] Univ Sci & Technol China, Sch Data Sci, Hefei 230026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Crowdfunding success prediction; attention mechanism; multimodal learning;
DOI
10.1109/TMM.2022.3141256
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Crowdfunding creates opportunities for entrepreneurs: it allows startup companies to reach a large audience for fundraising and bring their creative ideas to life. In this work, we address the crowdfunding project success prediction problem, i.e., predicting whether a project will reach its funding goal from its project profile. Such predictions help startup companies refine their project profiles and achieve their goals. Crowdfunding success prediction is a typical classification problem but comes with a few critical challenges. On the one hand, with only coarse-grained project status available as weak supervision, it is hard for a deep network to learn the relationship between project profiles and outcomes, and to explain why it makes a particular prediction. On the other hand, a project homepage contains descriptions in several modalities, including metadata, text, images, and videos. Among these, videos play an important role in the success of a crowdfunding project but were ignored in previous works because of the difficulty of extracting useful semantic and authentic information from them, especially for crowdfunding projects where information across modalities is unaligned. To this end, we propose a novel framework called the Deep Cross-Attention Network to learn and fuse information from introduction videos and textual descriptions of project profiles. More specifically, we develop a cross-attention block that aligns and represents the mismatched textual description and untrimmed introduction video and fuses the information from these two modalities, which effectively remedies the lack of supervised information caused by using project status as weak supervision. More importantly, with our cross-attention mechanism, the model can interpret how it makes its predictions and show which keywords and keyframes it relies on. We conduct extensive experiments on two crowdfunding datasets (collected from Kickstarter and Indiegogo) and show that our method outperforms existing state-of-the-art baselines.
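The core idea described in the abstract is a cross-attention block in which word features from the textual description attend over frame features from the introduction video (and vice versa); the fused representation feeds the success classifier, and the attention weights indicate which keywords and keyframes a prediction depends on. The PyTorch snippet below is a minimal sketch of this idea under assumed names and dimensions (a single attention block, 512-d features, mean pooling, one classifier head); it is not the authors' published implementation.

# Minimal cross-attention sketch (PyTorch). Names, dimensions, and the single
# attention block are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn


class CrossAttentionBlock(nn.Module):
    """Text tokens attend over video frames (and vice versa), then fuse for classification."""

    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        # Queries come from one modality; keys/values come from the other.
        self.text_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.video_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, text_feats, video_feats):
        # text_feats:  (B, n_words,  dim) word embeddings of the description
        # video_feats: (B, n_frames, dim) frame features of the introduction video
        t_attn, t_weights = self.text_to_video(text_feats, video_feats, video_feats)
        v_attn, v_weights = self.video_to_text(video_feats, text_feats, text_feats)
        t = self.norm_t(text_feats + t_attn).mean(dim=1)    # pooled text-attends-video view
        v = self.norm_v(video_feats + v_attn).mean(dim=1)   # pooled video-attends-text view
        logit = self.classifier(torch.cat([t, v], dim=-1))  # success score
        # t_weights / v_weights reveal which keyframes/keywords drove the prediction.
        return logit, t_weights, v_weights


# Toy usage with random tensors standing in for real word/frame embeddings.
model = CrossAttentionBlock()
text = torch.randn(4, 60, 512)    # 4 projects, 60 words each
video = torch.randn(4, 120, 512)  # 4 projects, 120 frames each
logits, t_w, v_w = model(text, video)
print(logits.shape, t_w.shape)    # torch.Size([4, 1]) torch.Size([4, 60, 120])

In this sketch the returned attention weights play the interpretability role the abstract describes: inspecting t_w for a project highlights which video frames each description word attends to most strongly.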
Pages: 1306-1319
Page count: 14
Related Papers
50 records in total
  • [1] Prediction of Crowdfunding Project Success with Deep Learning
    Yu, Pi-Fen
    Huang, Fu-Ming
    Yang, Chuan
    Liu, Yu-Hsin
    Li, Zi-Yi
    Tsai, Cheng-Hung
    2018 IEEE 15TH INTERNATIONAL CONFERENCE ON E-BUSINESS ENGINEERING (ICEBE 2018), 2018, : 1 - 8
  • [2] Success Prediction on Crowdfunding with Multimodal Deep Learning
    Cheng, Chaoran
    Tan, Fei
    Hou, Xiurui
    Wei, Zhi
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2158 - 2164
  • [3] Joint Cross-Attention Network With Deep Modality Prior for Fast MRI Reconstruction
    Sun, Kaicong
    Wang, Qian
    Shen, Dinggang
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (01) : 558 - 569
  • [4] Perceiver CPI: a nested cross-attention network for compound-protein interaction prediction
    Nguyen, Ngoc-Quang
    Jang, Gwanghoon
    Kim, Hajung
    Kang, Jaewoo
    BIOINFORMATICS, 2023, 39 (01)
  • [5] Skin lesion segmentation network with cross-attention coding
    Li D.
    Yang F.
    Liu Y.
    Tang Y.
    Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2024, 32 (04): : 609 - 621
  • [6] Multimodal Cross-Attention Graph Network for Desire Detection
    Gu, Ruitong
    Wang, Xin
    Yang, Qinghong
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257 : 512 - 523
  • [7] An Improved Siamese Tracking Network Based On Self-Attention And Cross-Attention
    Lai Yijun
    Song Jianmei
    She Haoping
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 466 - 470
  • [8] CAFIN: cross-attention based face image repair network
    Li, Yaqian
    Li, Kairan
    Li, Haibin
    Zhang, Wenming
    MULTIMEDIA SYSTEMS, 2024, 30 (05)
  • [9] Speech Enhancement with Fullband-Subband Cross-Attention Network
    Chen, Jun
    Rao, Wei
    Wang, Zilin
    Wu, Zhiyong
    Wang, Yannan
    Yu, Tao
    Shang, Shidong
    Meng, Helen
    INTERSPEECH 2022, 2022, : 976 - 980
  • [10] RECA: Relation Extraction Based on Cross-Attention Neural Network
    Huang, Xiaofeng
    Guo, Zhiqiang
    Zhang, Jialiang
    Cao, Hui
    Yang, Jie
    ELECTRONICS, 2022, 11 (14)