Zero-Shot Classification of Art With Large Language Models

Cited: 0
Authors
Tojima, Tatsuya [1 ]
Yoshida, Mitsuo [2 ]
Affiliations
[1] Univ Tsukuba, Degree Programs Syst & Informat Engn, Tsukuba, Ibaraki 3058577, Japan
[2] Univ Tsukuba, Inst Business Sci, Bunkyo Ku, Tokyo 1120012, Japan
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Art; Large language models; Investment; Photography; Painting; Graphics processing units; Servers; Load modeling; Data preprocessing; Data models; auction price; ChatGPT; classification; data preprocessing; Gemma; large language model; Llama; LLM; machine learning; zero-shot learning; PRICE
DOI
10.1109/ACCESS.2025.3532995
CLC Classification Number
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Art has become an important new investment vehicle, so interest is growing in art price prediction as a tool for assessing the returns and risks of art investments. Both traditional statistical methods and machine learning methods have been used to predict art prices; however, both incur substantial human costs for data preprocessing when constructing prediction models, making it necessary to reduce this workload. In this study, we propose a zero-shot classification method that leverages large language models (LLMs) to perform automatic annotation in data preprocessing for art price prediction. The proposed method performs annotation without new training data and thus minimizes human costs. Our experiments demonstrated that the 4-bit quantized Llama-3 70B model, which can run on a local server, achieved the most accurate automatic annotation of art forms among the LLMs tested (accuracy over 0.9), performing slightly better than OpenAI's GPT-4o model. These results are practical for data preprocessing and comparable to those of previous machine learning methods.
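
Illustration: a minimal sketch, in Python, of the kind of zero-shot LLM annotation the abstract describes. The label set, prompt wording, and helper name classify_art_form are hypothetical, and the call to the OpenAI chat API is only one of the setups the paper evaluates (it also runs a 4-bit quantized Llama-3 70B model on a local server); this is not the authors' exact prompt or pipeline.

    # Zero-shot art-form annotation with an LLM: no training data, only an
    # instruction plus a fixed label set. Labels and prompt are illustrative
    # assumptions, not the paper's exact configuration.
    from openai import OpenAI

    # Hypothetical label set for art forms in auction lot data.
    LABELS = ["painting", "print", "photograph", "sculpture", "other"]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify_art_form(lot_description: str) -> str:
        """Ask the model to pick exactly one label for a lot description."""
        prompt = (
            "Classify the artwork described below into exactly one of these "
            f"art forms: {', '.join(LABELS)}. Answer with the label only.\n\n"
            f"Description: {lot_description}"
        )
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # deterministic output is preferable for annotation
        )
        answer = response.choices[0].message.content.strip().rstrip(".").lower()
        # Fall back to "other" if the model strays from the label set.
        return answer if answer in LABELS else "other"

    if __name__ == "__main__":
        print(classify_art_form("Oil on canvas, signed lower right, 65 x 81 cm"))

Because the annotation is zero-shot, swapping in a locally hosted model (e.g., a quantized Llama-3 70B behind an OpenAI-compatible endpoint) only changes the client configuration, not the prompt-based classification logic.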
Pages: 17426-17439
Page count: 14
Related Papers
50 records in total
  • [21] Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models
    Deng, Yinlin
    Xia, Chunqiu Steven
    Peng, Haoran
    Yang, Chenyuan
    Zhang, Lingming
    PROCEEDINGS OF THE 32ND ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2023, 2023, : 423 - 435
  • [22] Harnessing large language models' zero-shot and few-shot learning capabilities for regulatory research
    Meshkin, Hamed
    Zirkle, Joel
    Arabidarrehdor, Ghazal
    Chaturbedi, Anik
    Chakravartula, Shilpa
    Mann, John
    Thrasher, Bradlee
    Li, Zhihua
    BRIEFINGS IN BIOINFORMATICS, 2024, 25 (05)
  • [24] Zero-Shot ECG Diagnosis with Large Language Models and Retrieval-Augmented Generation
    Yu, Han
    Guo, Peikun
    Sano, Akane
    MACHINE LEARNING FOR HEALTH, ML4H, VOL 225, 2023, 225 : 650 - 663
  • [25] Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors
    Zhang, Kai
    Gutierrez, Bernal Jimenez
    Su, Yu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 794 - 812
  • [26] ZVQAF: Zero-shot visual question answering with feedback from large language models
    Liu, Cheng
    Wang, Chao
    Peng, Yan
    Li, Zhixu
    NEUROCOMPUTING, 2024, 580
  • [27] The unreasonable effectiveness of large language models in zero-shot semantic annotation of legal texts
    Savelka, Jaromir
    Ashley, Kevin D.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [29] Improving Zero-Shot Stance Detection by Infusing Knowledge from Large Language Models
    Guo, Mengzhuo
    Jiang, Xiaorui
    Liao, Yong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XIII, ICIC 2024, 2024, 14874 : 121 - 132
  • [30] A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
    Zhuang, Shengyao
    Zhuang, Honglei
    Koopman, Bevan
    Zuccon, Guido
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 38 - 47