MFRLMO: Model-free reinforcement learning for multi-objective optimization of Apache Spark

Cited by: 0
Authors
Ozturk, Muhammed Maruf [1 ]
Affiliations
[1] Suleyman Demirel Univ, Fac Engn & Nat Sci, Dept Comp Engn, West Campus, TR-32040 Isparta, Turkiye
Keywords
Spark; configuration tuning; multi-objective optimization; reinforcement learning; ROBOT;
DOI
10.4108/eetsis.4764
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Hyperparameter optimization (HO) is essential for determining to what extent a specific configuration of hyperparameters contributes to the performance of a machine learning task. The hardware and the MLlib library of Apache Spark have the potential to improve big data processing performance when a tuning operation is combined with the exploitation of hyperparameters. To the best of our knowledge, most existing studies employ a black-box approach that yields misleading results because it ignores the interior dynamics of big data processing. They suffer from one or more drawbacks, including high computational cost, a large search space, and sensitivity to the dimension of the multi-objective functions. To address these issues, this work proposes a new model-free reinforcement learning method for multi-objective optimization of Apache Spark, leveraging reinforcement learning (RL) agents to uncover the internal dynamics of Apache Spark in HO. To bridge the gap between multi-objective optimization and the interior constraints of Apache Spark, our method runs many iterations to update each cell of the RL grid. The proposed model-free learning mechanism achieves a tradeoff between three objective functions comprising time, memory, and accuracy. To this end, optimal values of the hyperparameters are obtained via an ensemble technique that analyzes the individual results yielded by each objective function. The experimental results show that the number of cores does not have a direct effect on speedup. Further, although grid size affects the time elapsed between two adjoining iterations, its contribution to the computational burden is negligible. Dispersion and risk values of model-free RL differ when the size of the data is small. On average, MFRLMO produced a speedup 37% better than those of its competitors. Last, our approach is highly competitive in converging to high accuracy when optimizing convolutional neural networks (CNNs).
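The abstract's core idea, a model-free RL agent that repeatedly updates cells of a grid of configurations while balancing time, memory, and accuracy, can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal tabular, bandit-style Q-learning example over an invented search space (the configuration names, the `evaluate` cost model, and the scalarized reward weights are all assumptions made for illustration):

```python
import random

# Toy search space loosely modeled on Spark executor settings (assumed values).
CORES = [2, 4, 8]        # e.g. spark.executor.cores
MEMORY_GB = [2, 4, 8]    # e.g. spark.executor.memory

def evaluate(cores, mem):
    """Invented stand-in for running a Spark job; returns (time, memory, accuracy)."""
    time_s = 100.0 / cores + 5.0 * mem / cores
    mem_used = mem * 0.8
    accuracy = min(0.95, 0.70 + 0.02 * cores + 0.01 * mem)
    return time_s, mem_used, accuracy

def reward(time_s, mem_used, accuracy):
    # Weighted scalarization: one common way to fold three objectives
    # (time, memory, accuracy) into a single RL reward signal.
    return accuracy - 0.01 * time_s - 0.02 * mem_used

def q_learning(episodes=2000, alpha=0.1, eps=0.5, seed=0):
    rng = random.Random(seed)
    # One Q-value per grid cell, i.e. per (cores, memory) configuration.
    q = {(c, m): 0.0 for c in CORES for m in MEMORY_GB}
    for _ in range(episodes):
        if rng.random() < eps:                 # explore a random cell
            state = rng.choice(list(q))
        else:                                  # exploit the best-known cell
            state = max(q, key=q.get)
        r = reward(*evaluate(*state))
        q[state] += alpha * (r - q[state])     # stateless (bandit-style) update
    return max(q, key=q.get), q

best, q = q_learning()
print("best configuration (cores, memory_gb):", best)
```

With the toy cost model above, the agent converges on the cell whose scalarized reward is highest; in a real tuner, `evaluate` would launch an actual Spark job and measure the three objectives.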
Pages: 1-15 (15 pages)
Related Papers
50 records
  • [41] Special issue on multi-objective reinforcement learning
    Drugan, Madalina
    Wiering, Marco
    Vamplew, Peter
    Chetty, Madhu
    NEUROCOMPUTING, 2017, 263 : 1 - 2
  • [42] A multi-objective deep reinforcement learning framework
    Nguyen, Thanh Thi
    Nguyen, Ngoc Duy
    Vamplew, Peter
    Nahavandi, Saeid
    Dazeley, Richard
    Lim, Chee Peng
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2020, 96
  • [43] A Constrained Multi-Objective Reinforcement Learning Framework
    Huang, Sandy H.
    Abdolmaleki, Abbas
    Vezzani, Giulia
    Brakel, Philemon
    Mankowitz, Daniel J.
    Neunert, Michael
    Bohez, Steven
    Tassa, Yuval
    Heess, Nicolas
    Riedmiller, Martin
    Hadsell, Raia
    CONFERENCE ON ROBOT LEARNING, VOL 164, 2021, 164 : 883 - 893
  • [44] Model-Free Optimal Control Method for Chilled Water Pumps Based on Multi-Objective Optimization: Engineering Application
    Qiu, Shunian
    Li, Zhenhai
    Li, Zhengwei
    ASHRAE TRANSACTIONS 2021, VOL 127, PT 2, 2021, 127 : 409 - 416
  • [45] Multi-objective Reinforcement Learning for Responsive Grids
    Perez, Julien
    Germain-Renaud, Cécile
    Kégl, Balazs
    Loomis, Charles
    Journal of Grid Computing, 2010, 8 : 473 - 492
  • [46] Pedestrian simulation as multi-objective reinforcement learning
    Ravichandran, Naresh Balaji
    Yang, Fangkai
    Peters, Christopher
    Lansner, Anders
    Herman, Pawel
    18TH ACM INTERNATIONAL CONFERENCE ON INTELLIGENT VIRTUAL AGENTS (IVA'18), 2018, : 307 - 312
  • [47] Multi-objective optimization-assisted single-objective differential evolution by reinforcement learning
    Zhang, Haotian
    Guan, Xiaohong
    Wang, Yixin
    Nan, Nan
    SWARM AND EVOLUTIONARY COMPUTATION, 2025, 94
  • [48] Constrained Multi-Objective Optimization With Deep Reinforcement Learning Assisted Operator Selection
    Ming, Fei
    Gong, Wenyin
    Wang, Ling
    Jin, Yaochu
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (04) : 919 - 931
  • [49] Investigating the multi-objective optimization of quality and efficiency using deep reinforcement learning
    Wang, Zhenhui
    Lu, Juan
    Chen, Chaoyi
    Ma, Junyan
    Liao, Xiaoping
    APPLIED INTELLIGENCE, 2022, 52 (11) : 12873 - 12887
  • [50] Superconducting quantum computing optimization based on multi-objective deep reinforcement learning
    Liu, Yangting
    SCIENTIFIC REPORTS, 2025, 15 (01)