Zero-Shot Classification of Art With Large Language Models

Cited by: 0
Authors
Tojima, Tatsuya [1 ]
Yoshida, Mitsuo [2 ]
Affiliations
[1] Univ Tsukuba, Degree Programs Syst & Informat Engn, Tsukuba, Ibaraki 3058577, Japan
[2] Univ Tsukuba, Inst Business Sci, Bunkyo Ku, Tokyo 1120012, Japan
Source
IEEE ACCESS | 2025 / Vol. 13
Keywords
Art; Large language models; Investment; Photography; Painting; Graphics processing units; Servers; Load modeling; Data preprocessing; Data models; auction price; ChatGPT; classification; data preprocessing; Gemma; large language model; Llama; LLM; machine learning; zero-shot learning; PRICE;
DOI
10.1109/ACCESS.2025.3532995
Chinese Library Classification (CLC) Number
TP [Automation technology, computer technology];
Subject Classification Number
0812;
Abstract
Art has become an important new investment vehicle, so interest is growing in art price prediction as a tool for assessing the returns and risks of art investments. Both traditional statistical methods and machine learning methods have been used to predict art prices, but both incur substantial human costs for the data preprocessing required to build prediction models, making workload reduction necessary. In this study, we propose a zero-shot classification method that leverages large language models (LLMs) to perform automatic annotation during data preprocessing for art price prediction. Because the proposed method requires no new training data, it minimizes human costs. Our experiments demonstrated that the 4-bit quantized Llama-3 70B model, which can run on a local server, achieved the most accurate automatic annotation of art forms among the LLMs tested (accuracy above 0.9), performing slightly better than OpenAI's GPT-4o model. These results are practical for data preprocessing and comparable with those of previous machine learning methods.
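
To make the zero-shot annotation setup concrete, below is a minimal sketch of the kind of prompt-based classification the abstract describes, calling GPT-4o through the OpenAI API. The label set, prompt wording, and the helper name classify_art_form are illustrative assumptions; the paper's actual prompts, categories, and pipeline are not given in the abstract.

```python
# Minimal sketch of zero-shot art-form annotation with an LLM.
# Assumptions (not from the paper): the label set, the prompt wording,
# and the helper name classify_art_form are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["painting", "photograph", "print", "sculpture"]  # hypothetical label set


def classify_art_form(lot_description: str) -> str:
    """Ask the model to pick exactly one art-form label, without task-specific training."""
    prompt = (
        "Classify the art form of the following auction lot description.\n"
        f"Answer with exactly one of: {', '.join(LABELS)}.\n\n"
        f"Description: {lot_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # the abstract compares GPT-4o with a 4-bit quantized Llama-3 70B
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output is preferable for annotation
    )
    return response.choices[0].message.content.strip().lower()


print(classify_art_form("Oil on canvas, signed lower right, 65 x 54 cm."))
```

The same prompt could instead be sent to a locally hosted Llama-3 70B loaded with 4-bit quantization (for example via Hugging Face transformers with bitsandbytes), which is the configuration the abstract reports as performing best.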
Pages: 17426-17439
Number of pages: 14
Related Papers (50 in total)
  • [31] Zero-Shot Recommendations with Pre-Trained Large Language Models for Multimodal Nudging
    Harrison, Rachel M.
    Dereventsov, Anton
    Bibin, Anton
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 1535 - 1542
  • [32] Zero-Shot Classification by Logical Reasoning on Natural Language Explanations
    Han, Chi
    Pei, Hengzhi
    Du, Xinya
    Ji, Heng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 8967 - 8981
  • [33] Extensible Prompts for Language Models on Zero-shot Language Style Customization
    Ge, Tao
    Hu, Jing
    Dong, Li
    Mao, Shaoguang
    Xia, Yan
    Wang, Xun
    Chen, Si-Qing
    Wei, Furu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [34] ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing using Large Language Models
    Mekala, Dheeraj
    Wolfe, Jason
    Roy, Subhro
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 5792 - 5799
  • [35] Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models
    Pan, Junting
    Lin, Ziyi
    Ge, Yuying
    Zhu, Xiatian
    Zhang, Renrui
    Wang, Yi
    Qiao, Yu
    Li, Hongsheng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 272 - 283
  • [36] CDZL: a controllable diversity zero-shot image caption model using large language models
    Zhao, Xin
    Kong, Weiwei
    Liu, Zongyao
    Wang, Menghao
    Li, Yiwen
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (04)
  • [37] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding
    Meng, Yu
    Huang, Jiaxin
    Zhang, Yu
    Han, Jiawei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [38] Zero-Shot Recommendation as Language Modeling
    Sileo, Damien
    Vossen, Wout
    Raymaekers, Robbe
    ADVANCES IN INFORMATION RETRIEVAL, PT II, 2022, 13186 : 223 - 230
  • [39] Towards Zero-shot Language Modeling
    Ponti, Edoardo M.
    Vulic, Ivan
    Cotterell, Ryan
    Reichart, Roi
    Korhonen, Anna
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 2900 - +
  • [40] Open-source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking
    Zhuang, Shengyao
    Liu, Bing
    Koopman, Bevan
    Zuccon, Guido
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 8807 - 8817