Hyper-parameter optimization for improving the performance of localization in an iterative ensemble smoother

Cited: 1
Authors
Luo, Xiaodong [1 ]
Cruz, William C. [2 ]
Zhang, Xin-Lei [3 ,4 ]
Xiao, Heng [5 ]
Affiliations
[1] Norwegian Res Ctr NORCE, Nygardsgaten 112, N-5008 Bergen, Norway
[2] Univ Stavanger, Kjell Arholms Gate 41, N-4021 Stavanger, Norway
[3] Chinese Acad Sci, Inst Mech, State Key Lab Nonlinear Mech, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Sch Engn Sci, Beijing, Peoples R China
[5] Univ Stuttgart, Stuttgart Ctr Simulat Sci SC SimTech, Stuttgart, Germany
Keywords
Ensemble data assimilation; Iterative ensemble smoother (IES); Automatic and adaptive localization (AutoAdaLoc); Parameterized localization; Continuous Hyper-parameter OPtimization (CHOP); Kalman filter; Data assimilation; Adaptive localization; Models
DOI
10.1016/j.geoen.2023.212404
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline Classification Codes
0807; 0820
Abstract
This work aims to improve the performance of an iterative ensemble smoother (IES) in reservoir data assimilation problems by introducing a data-driven procedure to optimize the choice of certain algorithmic hyper-parameters in the IES. Algorithmic hyper-parameters exist in various data assimilation algorithms. Taking the IES as an example, localization is often useful for improving its performance, yet applying localization to an IES also introduces a certain number of algorithmic hyper-parameters, such as localization length scales, into the course of data assimilation. While different methods have been developed in the literature to properly choose localization length scales in various circumstances, many of them are tailored to the specific problems under consideration and may be difficult to extend directly to other problems. In addition, conventional hyper-parameter tuning methods determine the values of localization length scales based on either empirical rules (e.g., experience, domain knowledge, or simply trial and error) or analytic rules (e.g., statistical analyses), but few of them use the information of observations to optimize the choice of hyper-parameters. The current work proposes a generic, data-driven hyper-parameter tuning strategy that has the potential to overcome these issues. With the proposed strategy, hyper-parameter optimization is converted into a conventional parameter estimation problem, in such a way that observations are utilized to guide the choice of hyper-parameters. One noticeable feature of the proposed strategy is that it iteratively estimates an ensemble of hyper-parameters. In doing so, the resulting hyper-parameter tuning procedure inherits some practical benefits of conventional ensemble data assimilation algorithms: it is derivative-free, it provides uncertainty quantification to some extent, and it can handle a large number of hyper-parameters. Through 2D and 3D case studies, it is shown that when the proposed strategy is applied to tune a set of localization length scales (up to the order of 10³) in a parameterized localization scheme, superior data assimilation performance is obtained in comparison to an alternative hyper-parameter tuning strategy that does not utilize the information of observations.
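To make the workflow described in the abstract concrete, the following is a minimal, self-contained Python sketch of the general idea: localization length scales are treated as uncertain parameters, an ensemble of them is drawn, and that ensemble is iteratively updated by a stochastic ensemble-smoother step driven by the observation mismatch. Everything here is a hypothetical illustration under assumed simplifications, not the paper's CHOP implementation: the toy 1-D model, the one-length-scale-per-cell parameterization, the Gaspari-Cohn taper, the single stochastic update (the paper uses an iterative ensemble smoother), and all function names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order taper as a function of normalized distance r."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1, m2 = r <= 1.0, (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    out[m1] = 1 - (5/3)*r1**2 + (5/8)*r1**3 + (1/2)*r1**4 - (1/4)*r1**5
    out[m2] = (4 - 5*r2 + (5/3)*r2**2 + (5/8)*r2**3
               - (1/2)*r2**4 + (1/12)*r2**5 - 2/(3*r2))
    return out

# Toy 1-D "reservoir": a field observed at a few locations (all values assumed).
nm, ne, nd = 60, 40, 6                       # grid cells, field ensemble, observations
x = np.arange(nm, dtype=float)
obs_idx = np.linspace(5, nm - 6, nd).astype(int)
m_true = np.sin(2*np.pi*x/nm) + 0.5*np.sin(6*np.pi*x/nm)
sd_obs = 0.05*np.ones(nd)
d_obs = m_true[obs_idx] + sd_obs*rng.standard_normal(nd)
m_prior = 0.8*rng.standard_normal((nm, ne))  # crude prior field ensemble

def localized_update(m_ens, log_L):
    """One localized stochastic ensemble update of the field, with one
    localization length scale per grid cell (the hyper-parameters)."""
    L = np.exp(log_L)                        # log transform keeps scales positive
    D = m_ens[obs_idx, :]                    # predicted data (linear observations)
    A = m_ens - m_ens.mean(axis=1, keepdims=True)
    Dp = D - D.mean(axis=1, keepdims=True)
    C_dd = Dp @ Dp.T/(ne - 1) + np.diag(sd_obs**2)
    K = (A @ Dp.T/(ne - 1)) @ np.linalg.inv(C_dd)
    dist = np.abs(x[:, None] - x[obs_idx][None, :])
    K *= gaspari_cohn(dist / L[:, None])     # taper the Kalman gain
    pert = d_obs[:, None] + sd_obs[:, None]*rng.standard_normal((nd, ne))
    return m_ens + K @ (pert - D)

def smoother_update(ens, pred, d, sd):
    """Generic stochastic ensemble-smoother step, applied to hyper-parameters."""
    n = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)
    Dp = pred - pred.mean(axis=1, keepdims=True)
    C_dd = Dp @ Dp.T/(n - 1) + np.diag(sd**2)
    C_md = A @ Dp.T/(n - 1)
    pert = d[:, None] + sd[:, None]*rng.standard_normal(pred.shape)
    return ens + C_md @ np.linalg.solve(C_dd, pert - pred)

# Hyper-parameter estimation: iterate an ensemble of log length-scale fields,
# scoring each member by the data fit of the assimilation it induces. The
# procedure is derivative-free and scales to many hyper-parameters (nm here).
n_hp = 20
log_L_ens = np.log(5.0) + 0.5*rng.standard_normal((nm, n_hp))
for _ in range(4):
    pred = np.empty((nd, n_hp))
    for j in range(n_hp):                    # forward map: assimilate with member j's scales
        m_post = localized_update(m_prior, log_L_ens[:, j])
        pred[:, j] = m_post[obs_idx, :].mean(axis=1)
    log_L_ens = smoother_update(log_L_ens, pred, d_obs, sd_obs)

print("posterior mean length scales:", np.exp(log_L_ens).mean(axis=1).round(1))
```

The sketch only mirrors the structure of the loop: an outer ensemble over hyper-parameters whose members are ranked and updated by how well the assimilation they induce matches the observations, which is how observations end up guiding the choice of localization length scales.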
Pages: 20
Related Papers (50 in total)
  • [41] Ensemble Adaptation Networks with low-cost unsupervised hyper-parameter search
    Zhang, Haotian
    Ding, Shifei
    Jia, Weikuan
    Pattern Analysis and Applications, 2020, 23: 1215-1224
  • [42] Hyper-parameter optimization tools comparison for multiple object tracking applications
    Madrigal, Francisco
    Maurice, Camille
    Lerasle, Frédéric
    Machine Vision and Applications, 2019, 30: 269-289
  • [43] An Experimental Study on Hyper-parameter Optimization for Stacked Auto-Encoders
    Sun, Yanan
    Xue, Bing
    Zhang, Mengjie
    Yen, Gary G.
2018 IEEE Congress on Evolutionary Computation (CEC), 2018: 638-645
  • [44] A new hyper-parameter optimization method for machine learning in fault classification
    Ye, Xingchen
    Gao, Liang
    Li, Xinyu
    Wen, Long
Applied Intelligence, 2023, 53(11): 14182-14200
  • [45] Image classification based on KPCA and SVM with randomized hyper-parameter optimization
    Li, Lin
    Lian, Jin
    Wu, Yue
    Ye, Mao
International Journal of Signal Processing, Image Processing and Pattern Recognition, 2014, 7(4): 303-316
  • [46] Facilitating Database Tuning with Hyper-Parameter Optimization: A Comprehensive Experimental Evaluation
    Zhang, Xinyi
    Chang, Zhuo
    Li, Yang
    Wu, Hong
    Tan, Jian
    Li, Feifei
    Cui, Bin
Proceedings of the VLDB Endowment, 2022, 15(9): 1808-1821
  • [47] Particle Swarm Optimization for Hyper-Parameter Selection in Deep Neural Networks
    Lorenzo, Pablo Ribalta
    Nalepa, Jakub
    Kawulok, Michal
    Sanchez Ramos, Luciano
    Ranilla Pastor, Jose
Proceedings of the 2017 Genetic and Evolutionary Computation Conference (GECCO'17), 2017: 481-488
  • [48] Hyper-parameter Optimization in the context of Smart Manufacturing: a Systematic Literature Review
    Chernigovskaya, Maria
    Nahhas, Abdulrahman
    Kharitonov, Andrey
    Turowski, Klaus
5th International Conference on Industry 4.0 and Smart Manufacturing (ISM 2023), 2024, 232: 804-812
  • [49] A new hyper-parameter optimization method for machine learning in fault classification
    Ye, Xingchen
    Gao, Liang
    Li, Xinyu
    Wen, Long
    Applied Intelligence, 2023, 53: 14182-14200
  • [50] Hyper-parameter optimization tools comparison for multiple object tracking applications
    Madrigal, Francisco
    Maurice, Camille
    Lerasle, Frédéric
    Machine Vision and Applications, 2019, 30(2): 269-289