Aspect-Based Sentiment Analysis with Enhanced Opinion Tree Parsing and Parameter-Efficient Fine-Tuning for Edge AI

Cited: 0
Authors
Liao, Shih-wei [1 ]
Wang, Ching-Shun [1 ]
Yeh, Chun-Chao [2 ]
Lin, Jeng-Wei [3 ]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei 10617, Taiwan
[2] Natl Taiwan Ocean Univ, Dept Comp Sci & Engn, Keelung, Taiwan
[3] Tunghai Univ, Dept Informat Management, Taichung, Taiwan
Source
ELECTRONICS | 2025, Vol. 14, Issue 04
Keywords
social media text mining; aspect-based sentiment analysis; opinion tree parsing; parameter-efficient transfer learning
DOI
10.3390/electronics14040690
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Understanding user opinions from comments or reviews is essential in social media text mining for marketing campaigns and many other applications. However, analyzing social media user comments is challenging because of the complexity of discerning the relationships between opinions and aspects, particularly when comments vary greatly in length. To explore aspects and opinions in sentences effectively, techniques based on mining the opinion sentiment of the referred aspects (implicit or explicit) in user comments with ACOS (aspect-category-opinion-sentiment) quadruple extraction have been proposed. Among these, the opinion tree parsing (OTP) scheme has been shown to be effective and efficient for the ACOS quadruple extraction task in aspect-based sentiment analysis (ABSA). In this study, we continue the effort to design an efficient ABSA scheme. We extend the original OTP scheme with richer context parsing rules that exploit conjunctions and semantic modifiers to capture more context information in the sentence, thereby improving the accuracy of the analysis. Meanwhile, given the limited computation resources of edge devices in edge computing scenarios, we also investigate the trade-off between computation savings (in terms of the percentage of model parameters to be updated) and model performance (in terms of inference accuracy) for the proposed scheme under PEFT (parameter-efficient fine-tuning). We evaluate the proposed scheme on publicly available ACOS datasets. Experimental results show that the proposed enhanced OTP (eOTP) model improves on the OTP scheme in both precision and recall on the public ACOS datasets.
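The ACOS quadruple structure described above can be sketched in a few lines. The sentence, category labels, and quadruples below are hypothetical illustrations (not drawn from the paper's datasets); they show the (aspect, category, opinion, sentiment) shape of the extraction target, including how implicit aspects or opinions can be represented:

```python
# Illustrative sketch of ACOS quadruples; all example values are hypothetical.
from typing import NamedTuple, Optional

class ACOSQuad(NamedTuple):
    aspect: Optional[str]    # None when the aspect is implicit
    category: str            # e.g. a "ENTITY#ATTRIBUTE" style label
    opinion: Optional[str]   # None when the opinion is implicit
    sentiment: str           # "positive" / "negative" / "neutral"

sentence = "The pasta was great but the service was painfully slow."
quads = [
    ACOSQuad(aspect="pasta", category="FOOD#QUALITY",
             opinion="great", sentiment="positive"),
    ACOSQuad(aspect="service", category="SERVICE#GENERAL",
             opinion="painfully slow", sentiment="negative"),
]

# The conjunction "but" marks a contrast between the two opinion spans,
# the kind of context cue richer parsing rules can exploit.
for q in quads:
    print(q.aspect, q.category, q.opinion, q.sentiment)
```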
Meanwhile, in the design trade-off evaluation for resource-constrained devices, the experimental results show that eOTP requires very few parameters (less than 1%) to be retrained during fine-tuning, keeping most of the parameters frozen (unmodified), at the cost of a slight performance drop (around 4%) in F1-score compared with full fine-tuning. These results demonstrate that the proposed scheme is efficient and feasible for resource-constrained scenarios such as mobile edge/fog computing services.
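The PEFT trade-off above can be illustrated with back-of-the-envelope arithmetic: freeze the backbone and update only small adapter modules. The module names and parameter counts below are hypothetical, chosen only to show how the trainable fraction stays below 1%:

```python
# Hypothetical parameter counts per module of a toy transformer backbone.
backbone = {
    "embeddings": 30_000_000,
    "encoder_layers": 85_000_000,
    "decoder_layers": 85_000_000,
    "lm_head": 30_000_000,
}
# Small adapter-style modules inserted for fine-tuning (also hypothetical).
adapters = {
    "encoder_adapters": 900_000,
    "decoder_adapters": 900_000,
}

frozen = sum(backbone.values())      # kept fixed during fine-tuning
trainable = sum(adapters.values())   # the only weights that are updated
fraction = trainable / (frozen + trainable)

print(f"trainable fraction: {fraction:.4%}")  # well under 1%
```

With these example sizes, roughly 1.8 M of ~232 M parameters (about 0.78%) are updated, matching the "less than 1%" regime the abstract describes.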
Pages: 20