ConCave-Convex procedure for support vector machines with Huber loss for text classification

Cited by: 0
|
Authors
Borah, Parashjyoti [1 ]
Gupta, Deepak [2 ]
Hazarika, Barenya Bikash [3 ]
Affiliations
[1] Indian Inst Informat Technol Guwahati Bongora, Dept Comp Sci & Engn, Gauhati 781015, Assam, India
[2] Motilal Nehru Natl Inst Technol Allahabad, Dept Comp Sci & Engn, Prayagraj 211004, Uttar Pradesh, India
[3] Assam Town Univ, Fac Comp Technol, Sankar Madhab Path,Gandhinagar, Gauhati 781026, Assam, India
Keywords
Support vector machine; Hinge loss; ConCave-Convex procedure; Ramp loss function; Huber loss functions; REGRESSION; CLASSIFIERS; ALGORITHM;
DOI
10.1016/j.compeleceng.2024.109925
CLC number
TP3 [Computing technology; computer technology];
Subject classification code
0812 ;
Abstract
The classical support vector machine (SVM) adopts the linear Hinge loss, whereas the least squares SVM (LS-SVM) employs the quadratically growing least squares loss function. The robust Ramp loss function, employed in the Ramp loss SVM (RSVM), truncates the Hinge loss so that it becomes flat beyond a specified point, thereby increasing robustness to outliers. The recently proposed SVM with pinball loss (pin-SVM) utilizes the pinball loss function, which maximizes the margin between the class hyperplanes based on quantile distance. The Huber loss function is a generalization of the linear Hinge loss and the quadratic loss; it mitigates the sensitivity of the least squares loss to noise and outliers. In this work, we employ the robust Huber loss function for SVM classification to improve generalization performance. The cost function of the proposed approach consists of one convex and one non-convex part, which may yield a local optimum instead of a global one. We apply the ConCave-Convex Procedure (CCCP) to resolve this issue. Additionally, the proximal cost is scaled for each class sample according to its class size to reduce the effect of the class imbalance problem; thus, the proposed approach incorporates class imbalance learning as well. Extensive experimental analysis establishes the efficacy of the proposed method. Furthermore, a sequential minimal optimization (SMO) procedure for the high-dimensional HSVM is proposed, and its performance is tested on two text classification datasets.
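The abstract's central ingredient is the Huber loss, which is quadratic near zero (like the least squares loss) and linear in the tails (like the Hinge loss), which limits the influence of outliers. A minimal sketch of the classical Huber function, assuming NumPy and a symmetric residual formulation; the paper's exact classification variant may differ:

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Classical Huber loss: quadratic for |r| <= delta, linear beyond.

    Illustrative sketch only; the HSVM formulation in the paper
    may use a shifted/one-sided version adapted to margins y*f(x).
    """
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= delta,
                    0.5 * r ** 2,                    # least-squares-like region
                    delta * (np.abs(r) - 0.5 * delta))  # linear, outlier-robust region

# Small residuals are penalized quadratically, large ones only linearly.
print(huber_loss([0.5, 3.0]))
```

For small `delta` the loss approaches a scaled absolute (Hinge-like) penalty; for large `delta` it behaves like the pure quadratic loss, which is the generalization property the abstract refers to.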
Pages: 22