Hyper-parameter optimization for improving the performance of localization in an iterative ensemble smoother

Cited: 1
Authors
Luo, Xiaodong [1 ]
Cruz, William C. [2 ]
Zhang, Xin-Lei [3 ,4 ]
Xiao, Heng [5 ]
Affiliations
[1] Norwegian Res Ctr NORCE, Nygardsgaten 112, N-5008 Bergen, Norway
[2] Univ Stavanger, Kjell Arholms Gate 41, N-4021 Stavanger, Norway
[3] Chinese Acad Sci, Inst Mech, State Key Lab Nonlinear Mech, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Sch Engn Sci, Beijing, Peoples R China
[5] Univ Stuttgart, Stuttgart Ctr Simulat Sci SC SimTech, Stuttgart, Germany
Keywords
Ensemble data assimilation; Iterative ensemble smoother (IES); Automatic and adaptive localization (AutoAdaLoc); Parameterized localization; Continuous Hyper-parameter OPtimization (CHOP); KALMAN FILTER; DATA ASSIMILATION; ADAPTIVE LOCALIZATION; MODELS
DOI
10.1016/j.geoen.2023.212404
Chinese Library Classification (CLC)
TE [petroleum and natural gas industry]; TK [energy and power engineering]
Discipline Classification Codes
0807; 0820
Abstract
This work aims to help improve the performance of an iterative ensemble smoother (IES) in reservoir data assimilation problems, by introducing a data-driven procedure to optimize the choice of certain algorithmic hyper-parameters in the IES. Generally speaking, algorithmic hyper-parameters exist in various data assimilation algorithms. Taking the IES as an example, localization is often useful for improving its performance, yet applying localization to an IES also introduces a certain number of algorithmic hyper-parameters, such as localization length scales, in the course of data assimilation. While different methods have been developed in the literature to address the problem of properly choosing localization length scales in various circumstances, many of them are tailored to the specific problems under consideration and may be difficult to extend directly to other problems. In addition, conventional hyper-parameter tuning methods determine the values of localization length scales by either empirical (e.g., using experience, domain knowledge, or simply trial and error) or analytic (e.g., through statistical analyses) rules, but few of them use the information of observations to optimize the choice of hyper-parameters. The current work proposes a generic, data-driven hyper-parameter tuning strategy that has the potential to overcome these issues. With the proposed strategy, hyper-parameter optimization is converted into a conventional parameter estimation problem, in such a way that observations are utilized to guide the choice of hyper-parameters. One noticeable feature of the proposed strategy is that it iteratively estimates an ensemble of hyper-parameters. In doing so, the resulting hyper-parameter tuning procedure inherits some practical benefits of conventional ensemble data assimilation algorithms, including being derivative-free, the ability to provide a certain degree of uncertainty quantification, and the capacity to handle a large number of hyper-parameters. Through 2D and 3D case studies, it is shown that when the proposed strategy is applied to tune a set of localization length scales (up to the order of 10³) in a parameterized localization scheme, superior data assimilation performance is obtained in comparison to an alternative hyper-parameter tuning strategy that does not utilize the information of observations.
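To make the core idea above concrete, the following Python sketch illustrates how an ensemble of hyper-parameters (e.g., localization length scales) could be updated with a single stochastic ensemble-smoother step, so that observations guide the hyper-parameter choice. This is a minimal illustration of the generic strategy described in the abstract, not the authors' CHOP implementation; all function and variable names (es_update, theta, pred, and so on) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def es_update(theta, pred, obs, obs_err_std):
    """One stochastic ensemble-smoother update of hyper-parameters.

    theta       : (n_theta, n_ens) ensemble of hyper-parameters,
                  e.g. (log-transformed) localization length scales
    pred        : (n_obs, n_ens) predicted data obtained by running the
                  inner data assimilation with each ensemble member
    obs         : (n_obs,) vector of real observations
    obs_err_std : (n_obs,) observation error standard deviations
    """
    n_ens = theta.shape[1]
    # Ensemble anomalies (deviations from the ensemble mean)
    A_t = theta - theta.mean(axis=1, keepdims=True)
    A_d = pred - pred.mean(axis=1, keepdims=True)
    # Cross- and auto-covariances estimated purely from the ensemble,
    # which is what makes the update derivative-free
    C_td = A_t @ A_d.T / (n_ens - 1)
    C_dd = A_d @ A_d.T / (n_ens - 1)
    R = np.diag(obs_err_std ** 2)
    # Perturb the observations per member (stochastic EnKF-style update)
    D = obs[:, None] + obs_err_std[:, None] * rng.standard_normal(pred.shape)
    # Kalman-type correction of the hyper-parameter ensemble
    return theta + C_td @ np.linalg.solve(C_dd + R, D - pred)

In a full workflow this update would be iterated: after each step, the inner IES is re-run with each member's length scales to produce a new pred, and the spread of theta provides a rough measure of uncertainty. Estimating the scales in log space (to keep them positive) is a common choice but, like the rest of this sketch, an assumption rather than a detail taken from the paper.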
Pages: 20
Related Papers
50 records in total
  • [21] Joint state and parameter estimation with an iterative ensemble Kalman smoother
    Bocquet, M.
    Sakov, P.
    NONLINEAR PROCESSES IN GEOPHYSICS, 2013, 20 (05) : 803 - 818
  • [22] CNN hyper-parameter optimization for environmental sound classification
    Inik, Ozkan
    APPLIED ACOUSTICS, 2023, 202
  • [23] AME: Attention and Memory Enhancement in Hyper-Parameter Optimization
    Xu, Nuo
    Chang, Jianlong
    Nie, Xing
    Huo, Chunlei
    Xiang, Shiming
    Pan, Chunhong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 480 - 489
  • [24] An efficient hyper-parameter optimization method for supervised learning
    Shi, Ying
    Qi, Hui
    Qi, Xiaobo
    Mu, Xiaofang
    APPLIED SOFT COMPUTING, 2022, 126
  • [25] RHOASo: An Early Stop Hyper-Parameter Optimization Algorithm
    Munoz Castaneda, Angel Luis
    DeCastro-Garcia, Noemi
    Escudero Garcia, David
    MATHEMATICS, 2021, 9 (18)
  • [26] Improving Machine Learning-based Code Smell Detection via Hyper-parameter Optimization
    Shen, Lei
    Liu, Wangshu
    Chen, Xiang
    Gu, Qing
    Liu, Xuejun
    2020 27TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE (APSEC 2020), 2020, : 276 - 285
  • [27] USING METAHEURISTICS FOR HYPER-PARAMETER OPTIMIZATION OF CONVOLUTIONAL NEURAL NETWORKS
    Bibaeva, Victoria
    2018 IEEE 28TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2018,
  • [28] Hyper-Parameter Optimization for Privacy-Preserving Record Linkage
    Yu, Joyce
    Nabaglo, Jakub
    Vatsalan, Dinusha
    Henecka, Wilko
    Thorne, Brian
    ECML PKDD 2020 WORKSHOPS, 2020, 1323 : 281 - 296
  • [29] HYPER-PARAMETER OPTIMIZATION OF DEEP CONVOLUTIONAL NETWORKS FOR OBJECT RECOGNITION
    Talathi, Sachin S.
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 3982 - 3986
  • [30] Rethinking density ratio estimation based hyper-parameter optimization
    Fan, Zi-En
    Lian, Feng
    Li, Xin-Ran
    NEURAL NETWORKS, 2025, 182