Unveiling the Power of Self-Attention for Shipping Cost Prediction: The Rate Card Transformer

Cited by: 0
Authors
Sreekar, P. Aditya [1 ]
Verma, Sahil [1 ]
Madhavan, Varun [1 ,2 ]
Persad, Abhishek [1 ]
Affiliations
[1] Amazon, Hyderabad, Telangana, India
[2] Indian Institute of Technology Kharagpur, West Bengal, India
Source
ASIAN CONFERENCE ON MACHINE LEARNING, 2023, Vol. 222
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Amazon ships billions of packages to its customers annually within the United States. The shipping cost of these packages is used on the day of shipping (day 0) to estimate the profitability of sales. Downstream systems use these day-0 profitability estimates to make financial decisions, such as setting pricing strategies and delisting loss-making products. However, obtaining accurate shipping cost estimates on day 0 is complex, for reasons such as delays in carrier invoicing or fixed cost components being recorded at a monthly cadence. Inaccurate shipping cost estimates can lead to bad decisions, such as pricing items too low or too high, or promoting the wrong products to customers. Current solutions for estimating shipping costs on day 0 rely on tree-based models that require extensive manual feature-engineering effort. In this study, we propose a novel architecture called the Rate Card Transformer (RCT), which uses self-attention to encode all package shipping information, such as package attributes, carrier information, and the route plan. Unlike other transformer-based tabular models, the RCT can encode a variable-length list of one-to-many relations of a shipment, allowing it to capture more information about the shipment; for example, the RCT can encode the properties of all products in a package. Our results demonstrate that cost predictions made by the RCT have 28.82% less error than a tree-based GBDT model. Moreover, the RCT outperforms the state-of-the-art transformer-based tabular model, FT-Transformer, by 6.08%. We also show that the RCT learns a generalized manifold of the rate card that can improve the performance of tree-based models.
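The abstract does not spell out the RCT architecture, but the key idea it describes, turning a shipment's fixed fields plus its variable-length product list into one token sequence for a self-attention encoder, can be sketched as below. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the class name `ShipmentEncoder`, the per-field vocabulary size, the product tokenization, and the CLS-token regression head are all assumptions.

```python
import torch
import torch.nn as nn

class ShipmentEncoder(nn.Module):
    """Hypothetical sketch of the idea in the abstract: embed fixed shipment
    fields (package attributes, carrier, route) and a variable-length list of
    product feature vectors as tokens, mix them with self-attention, and
    regress the shipping cost from a pooled token."""
    def __init__(self, n_package_fields, n_product_feats, d_model=64,
                 n_heads=4, n_layers=2):
        super().__init__()
        # One learned embedding table per fixed categorical field
        # (vocabulary size 100 is an arbitrary placeholder).
        self.field_emb = nn.ModuleList(
            [nn.Embedding(100, d_model) for _ in range(n_package_fields)])
        # Project each product's numeric features to one token.
        self.product_proj = nn.Linear(n_product_feats, d_model)
        # Learned [CLS]-style token whose output is used for prediction.
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # shipping-cost regression head

    def forward(self, fields, products, product_mask):
        # fields:       (B, n_package_fields) integer category ids
        # products:     (B, max_products, n_product_feats) zero-padded floats
        # product_mask: (B, max_products) bool, True where a slot is padding
        B = fields.size(0)
        field_tokens = torch.stack(
            [emb(fields[:, i]) for i, emb in enumerate(self.field_emb)], dim=1)
        product_tokens = self.product_proj(products)
        tokens = torch.cat(
            [self.cls.expand(B, -1, -1), field_tokens, product_tokens], dim=1)
        # CLS and fixed-field tokens are never masked; only product padding is.
        keep = torch.zeros(B, 1 + field_tokens.size(1), dtype=torch.bool,
                           device=fields.device)
        mask = torch.cat([keep, product_mask], dim=1)
        h = self.encoder(tokens, src_key_padding_mask=mask)
        return self.head(h[:, 0])  # (B, 1) cost estimate from the CLS token

# Toy usage: 2 shipments, 3 fixed fields, up to 4 products with 5 features each
model = ShipmentEncoder(n_package_fields=3, n_product_feats=5)
fields = torch.randint(0, 100, (2, 3))
products = torch.randn(2, 4, 5)
product_mask = torch.tensor([[False, False, True, True],
                             [False, True, True, True]])
cost = model(fields, products, product_mask)  # shape (2, 1)
```

The product tokens are what give the one-to-many encoding the abstract highlights: because self-attention operates over a sequence of any length, the same model handles packages with any number of products, whereas a fixed-width tabular feature vector fed to a GBDT would require manual aggregation of per-product features.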
Pages: 13
Related Papers
50 items in total
  • [41] Spectral Superresolution Using Transformer with Convolutional Spectral Self-Attention
    Liao, Xiaomei
    He, Lirong
    Mao, Jiayou
    Xu, Meng
    REMOTE SENSING, 2024, 16 (10)
  • [42] A novel local enhanced channel self-attention based on Transformer for industrial remaining useful life prediction
    Zhang, Zhizheng
    Song, Wen
    Wu, Qiong
    Sun, Wenxu
    Li, Qiqiang
    Jia, Lei
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 141
  • [43] Magnetic Field Prediction Method Based on Residual U-Net and Self-Attention Transformer Encoder
    Jin L.
    Yin Z.
    Liu L.
    Song J.
    Liu Y.
Diangong Jishu Xuebao/Transactions of China Electrotechnical Society, 2024, 39 (10): 2937-2952
  • [44] A deep learning sequence model based on self-attention and convolution for wind power prediction
    Liu, Chien-Liang
    Chang, Tzu-Yu
    Yang, Jie-Si
    Huang, Kai-Bin
    RENEWABLE ENERGY, 2023, 219
  • [45] Multi-stage Transient Stability Assessment of Power System Based on Self-attention Transformer Encoder
    Fang J.
    Liu C.
    Su C.
    Lin H.
    Zheng L.
Zhongguo Dianji Gongcheng Xuebao/Proceedings of the Chinese Society of Electrical Engineering, 2023, 43 (15): 5745-5758
  • [46] Traffic prediction using MSSBiLS with self-attention model
    Suvitha, D.
    Vijayalakshmi, M.
CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (15)
  • [47] A pagerank self-attention network for traffic flow prediction
    Kang, Ting
    Wang, Huaizhi
    Wu, Ting
    Peng, Jianchun
    Jiang, Hui
    FRONTIERS IN ENERGY RESEARCH, 2022, 10
  • [48] SELF-ATTENTION EQUIPPED GRAPH CONVOLUTIONS FOR DISEASE PREDICTION
    Kazi, Anees
    Krishna, S. Arvind
    Shekarforoush, Shayan
    Kortuem, Karsten
    Albarqouni, Shadi
    Navab, Nassir
2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), 2019: 1896-1899
  • [49] ENHANCING TONGUE REGION SEGMENTATION THROUGH SELF-ATTENTION AND TRANSFORMER BASED
    Song, Yihua
    Li, Can
    Zhang, Xia
    Liu, Zhen
    Song, Ningning
    Zhou, Zuojian
    JOURNAL OF MECHANICS IN MEDICINE AND BIOLOGY, 2024, 24 (02)
  • [50] EViT: An Eagle Vision Transformer With Bi-Fovea Self-Attention
    Shi, Yulong
    Sun, Mingwei
    Wang, Yongshuai
    Ma, Jiahao
    Chen, Zengqiang
IEEE TRANSACTIONS ON CYBERNETICS, 2025, 55 (03): 1288-1300