Pre-trained Transformer-based Classification for Automated Patentability Examination

Cited by: 3
Authors
Lo, Hao-Cheng [1]
Chu, Jung-Mei [2]
Affiliations
[1] Natl Taiwan Univ, Dept Psychol, Taipei, Taiwan
[2] Natl Taiwan Univ, Grad Inst Networking & Multimedia, Taipei, Taiwan
Keywords
Patentability; Multi-label Classification; Pre-trained Transformers; Natural Language Processing
DOI
10.1109/CSDE53843.2021.9718474
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Patentability examination, i.e., checking whether the claims of a patent application meet the requirements for patentability, relies heavily on experts' arduous, domain-knowledge-intensive work. Automating this examination is therefore an immediate, though underappreciated, priority. In this work, the first to apply deep learning to automated patentability examination, we formulate the task as a multi-label text classification problem, which is challenging because cross-sectional characteristics of abstract requirements (the labels) must be learned from text replete with inventive terms. To address this problem, we fine-tune downstream multi-label classification models over pre-trained transformer variants (BERT-Base/Large, RoBERTa-Base/Large, and XLNet), given their state-of-the-art results on many tasks. On a large USPTO patent database, we assess the performance of our models and identify the best-performing one according to micro-precision, micro-recall, and micro-F1.
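The abstract describes fine-tuning pre-trained transformers as multi-label classifiers and evaluating them with micro-averaged metrics. The sketch below illustrates, in broad strokes, how such a setup might look using the Hugging Face transformers library; it is not the authors' code, and the label count, example claim text, targets, and 0.5 decision threshold are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementation): multi-label
# fine-tuning of a pre-trained transformer plus micro-averaged metrics.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_LABELS = 4  # hypothetical number of patentability requirements (labels)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # uses BCEWithLogitsLoss internally
)

texts = ["A claim describing an inventive device ..."]  # patent claim text (placeholder)
labels = torch.tensor([[1.0, 0.0, 1.0, 0.0]])           # multi-hot requirement targets

batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # an optimizer step would follow in a real training loop

# Micro-averaged precision, recall, and F1 pool true/false positives
# over all (sample, label) decisions before computing the ratios.
with torch.no_grad():
    preds = (torch.sigmoid(outputs.logits) > 0.5).float()
tp = (preds * labels).sum()
precision = tp / preds.sum().clamp(min=1)
recall = tp / labels.sum().clamp(min=1)
f1 = 2 * precision * recall / (precision + recall).clamp(min=1e-8)
```

The same skeleton applies to the other model variants named in the abstract by swapping the checkpoint name (e.g., "roberta-large" or "xlnet-base-cased").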
Pages: 5
Related Papers
50 records in total
  • [41] Detection of Unstructured Sensitive Data Based on a Pre-Trained Model and Lattice Transformer
    Jin, Feng
    Wu, Shaozhi
    Liu, Xingang
    Su, Han
    Tian, Miao
    2024 7TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND BIG DATA, ICAIBD 2024, 2024, : 180 - 185
  • [42] Pre-trained low-light image enhancement transformer
    Zhang, Jingyao
    Hao, Shijie
    Rao, Yuan
    IET IMAGE PROCESSING, 2024, 18 (08) : 1967 - 1984
  • [43] Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion
    Li, Mengyao
    Wang, Bo
    Jiang, Jing
    NEURAL PROCESSING LETTERS, 2021, 53 (06) : 4143 - 4158
  • [44] Efficient Unsupervised Community Search with Pre-trained Graph Transformer
    Wang, Jianwei
    Wang, Kai
    Lin, Xuemin
    Zhang, Wenjie
    Zhang, Ying
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2024, 17 (09): : 2227 - 2240
  • [45] Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion
    Mengyao Li
    Bo Wang
    Jing Jiang
    Neural Processing Letters, 2021, 53 : 4143 - 4158
  • [46] Offline Pre-trained Multi-agent Decision Transformer
    Meng, Linghui
    Wen, Muning
    Le, Chenyang
    Li, Xiyun
    Xing, Dengpeng
    Zhang, Weinan
    Wen, Ying
    Zhang, Haifeng
    Wang, Jun
    Yang, Yaodong
    Xu, Bo
    MACHINE INTELLIGENCE RESEARCH, 2023, 20 (02) : 233 - 248
  • [47] A PRE-TRAINED AUDIO-VISUAL TRANSFORMER FOR EMOTION RECOGNITION
    Minh Tran
    Soleymani, Mohammad
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4698 - 4702
  • [48] The application of Chat Generative Pre-trained Transformer in nursing education
    Liu, Jialin
    Liu, Fan
    Fang, Jinbo
    Liu, Siru
    NURSING OUTLOOK, 2023, 71 (06)
  • [49] Swin transformer-based fork architecture for automated breast tumor classification
    Uzen, Hueseyin
    Firat, Huseyin
    Atila, Orhan
    Sengur, Abdulkadir
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 256
  • [50] Automated Classification of Urinary Cells: Using Convolutional Neural Network Pre-trained on Lung Cells
    Teramoto, Atsushi
    Michiba, Ayano
    Kiriyama, Yuka
    Sakurai, Eiko
    Shiroki, Ryoichi
    Tsukamoto, Tetsuya
    APPLIED SCIENCES-BASEL, 2023, 13 (03):