Multi-task multi-label multiple instance learning

Cited by: 0
Authors: Yi SHEN, Jianping FAN (Department of Computer Science, University of North Carolina at Charlotte, USA, 28223)
DOI: not available
CLC number: TP391.41
Subject classification number: 080203
Abstract
For automatic object detection tasks, large numbers of training images are usually labeled to achieve more reliable training of the object classifiers; this is expensive because it requires hiring professionals to label large-scale training images. When a large number of object classes come into view, obtaining a sufficient amount of labeled training images becomes even more critical. There are three potential ways to reduce the burden of image labeling: (1) allowing people to provide object labels loosely at the image level rather than at the object level (e.g., loosely-tagged images without identifying the exact object locations in the images); (2) harnessing large-scale collaboratively-tagged images that are available on the Internet; and (3) developing new machine learning algorithms that can directly leverage large-scale collaboratively- or loosely-tagged images to achieve more effective training of a large number of object classifiers. Based on these observations, a multi-task multi-label multiple instance learning (MTML-MIL) algorithm is developed in this paper by leveraging both inter-object correlations and large-scale loosely-labeled images for object classifier training. By seamlessly integrating multi-task learning, multi-label learning, and multiple instance learning, our MTML-MIL algorithm can achieve more accurate training of a large number of inter-related object classifiers, where an object network is constructed to determine the inter-related learning tasks directly in the feature space rather than in the label space. Our experimental results show that our MTML-MIL algorithm achieves higher detection accuracy rates for automatic object detection.
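To make the setting concrete, the following Python sketch illustrates the general idea behind combining a shared representation (multi-task), per-class heads (multi-label), and image-level supervision over candidate regions (multiple instance learning). It is a minimal toy illustration under assumed choices (a max-pooling instance aggregator, a single shared linear backbone, and the names ToyMTMLMIL, feat_dim, bag_size), not the authors' actual MTML-MIL formulation, which additionally models inter-task correlations through an object network built in the feature space.

# Hedged illustration only: a toy MIL-style model, NOT the paper's MTML-MIL algorithm.
# The max-pooling aggregation and the shared backbone are assumptions for this sketch.
import torch
import torch.nn as nn

class ToyMTMLMIL(nn.Module):
    """Shared feature extractor (multi-task sharing) feeding per-class scoring
    heads (multi-label); instance scores within each image ("bag") are pooled
    with a max so that only image-level labels are required (MIL)."""

    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # Representation shared across all object classes (the "multi-task" part).
        self.shared = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        # One linear scoring head per object class (the "multi-label" part).
        self.heads = nn.Linear(hidden, num_classes)

    def forward(self, bags: torch.Tensor) -> torch.Tensor:
        # bags: (batch, instances_per_image, feat_dim) -- candidate regions per image.
        h = self.shared(bags)                       # (batch, instances, hidden)
        instance_scores = self.heads(h)             # (batch, instances, num_classes)
        # MIL aggregation: an image is positive for a class if at least one of its
        # instances is, so take the max over instances.
        bag_scores, _ = instance_scores.max(dim=1)  # (batch, num_classes)
        return bag_scores

# Usage with random data: 8 images, 5 candidate regions each, 32-dim features, 4 classes.
model = ToyMTMLMIL(feat_dim=32, num_classes=4)
bags = torch.randn(8, 5, 32)
labels = torch.randint(0, 2, (8, 4)).float()        # image-level (bag-level) labels only
loss = nn.BCEWithLogitsLoss()(model(bags), labels)
loss.backward()

The key point the sketch shows is that the loss is computed only from image-level labels; instance-level (object-location) supervision is never needed, which is what makes loosely-tagged training images usable.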
Pages: 860-871
Page count: 12
Related papers
50 items in total
  • [1] Multi-task multi-label multiple instance learning
    Shen, Yi
    Fan, Jian-ping
    JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE C-COMPUTERS & ELECTRONICS, 2010, 11(11): 860-871
  • [3] A COMBINED APPROACH TO MULTI-LABEL MULTI-TASK LEARNING
    Motamedvaziri, D.
    Saligrama, V.
    Castanon, D.
    2012 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP (SSP), 2012: 616-619
  • [4] Multi-Label Multi-Task Learning with Dynamic Task Weight Balancing
    Wang, Tianyi
    Chen, Shu-Ching
    2020 IEEE 21ST INTERNATIONAL CONFERENCE ON INFORMATION REUSE AND INTEGRATION FOR DATA SCIENCE (IRI 2020), 2020: 245-252
  • [5] MULTI-TASK DEEP NEURAL NETWORK FOR MULTI-LABEL LEARNING
    Huang, Yan
    Wang, Wei
    Wang, Liang
    Tan, Tieniu
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013: 2897-2900
  • [6] Multi-label Annotation for Visual Multi-Task Learning Models
    Sharma, Gaurang
    Angleraud, Alexandre
    Pieters, Roel
    2023 SEVENTH IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING, IRC 2023, 2023: 31-34
  • [7] Multi-Label Multi-Task Deep Learning for Behavioral Coding
    Gibson, James
    Atkins, David C.
    Creed, Torrey A.
    Imel, Zac
    Georgiou, Panayiotis
    Narayanan, Shrikanth
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2022, 13(01): 508-518
  • [8] Multi-instance multi-label learning
    Zhou, Zhi-Hua
    Zhang, Min-Ling
    Huang, Sheng-Jun
    Li, Yu-Feng
    ARTIFICIAL INTELLIGENCE, 2012, 176(01): 2291-2320
  • [9] Multi-label emotion classification based on adversarial multi-task learning
    Lin, Nankai
    Fu, Sihui
    Lin, Xiaotian
    Wang, Lianxi
    INFORMATION PROCESSING & MANAGEMENT, 2022, 59(06)
  • [10] Multi-Task Music Representation Learning from Multi-Label Embeddings
    Schindler, Alexander
    Knees, Peter
    2019 INTERNATIONAL CONFERENCE ON CONTENT-BASED MULTIMEDIA INDEXING (CBMI), 2019