Hyper-parameter optimization for improving the performance of localization in an iterative ensemble smoother

Cited by: 1
Authors
Luo, Xiaodong [1 ]
Cruz, William C. [2 ]
Zhang, Xin-Lei [3 ,4 ]
Xiao, Heng [5 ]
Affiliations
[1] Norwegian Res Ctr NORCE, Nygardsgaten 112, N-5008 Bergen, Norway
[2] Univ Stavanger, Kjell Arholms Gate 41, N-4021 Stavanger, Norway
[3] Chinese Acad Sci, Inst Mech, State Key Lab Nonlinear Mech, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Sch Engn Sci, Beijing, Peoples R China
[5] Univ Stuttgart, Stuttgart Ctr Simulat Sci SC SimTech, Stuttgart, Germany
Keywords
Ensemble data assimilation; Iterative ensemble smoother (IES); Automatic and adaptive localization (AutoAdaLoc); Parameterized localization; Continuous Hyper-parameter OPtimization (CHOP); Kalman filter; Data assimilation; Adaptive localization; Models
DOI
10.1016/j.geoen.2023.212404
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline Classification Code
0807; 0820
Abstract
This work aims to improve the performance of an iterative ensemble smoother (IES) in reservoir data assimilation problems by introducing a data-driven procedure for optimizing the choice of certain algorithmic hyper-parameters in the IES. Algorithmic hyper-parameters arise in many data assimilation algorithms. Taking the IES as an example, localization is often useful for improving its performance, yet applying localization to an IES also introduces a number of algorithmic hyper-parameters, such as localization length scales, into the course of data assimilation. While different methods have been developed in the literature to choose localization length scales properly in various circumstances, many of them are tailored to the specific problems under consideration and may be difficult to extend directly to other problems. In addition, conventional hyper-parameter tuning methods determine the values of localization length scales by either empirical rules (e.g., experience, domain knowledge, or simply trial and error) or analytic rules (e.g., statistical analyses), but few of them use the information in the observations to optimize the choice of hyper-parameters. The current work proposes a generic, data-driven hyper-parameter tuning strategy that has the potential to overcome these issues. Under the proposed strategy, hyper-parameter optimization is converted into a conventional parameter estimation problem, so that observations are utilized to guide the choice of hyper-parameters. One noticeable feature of the proposed strategy is that it iteratively estimates an ensemble of hyper-parameters. In doing so, the resulting tuning procedure inherits some practical benefits of conventional ensemble data assimilation algorithms, including being derivative-free, providing uncertainty quantification to some extent, and handling a large number of hyper-parameters. Through 2D and 3D case studies, it is shown that when the proposed strategy is applied to tune a set of localization length scales (up to the order of 10³) in a parameterized localization scheme, superior data assimilation performance is obtained in comparison to an alternative hyper-parameter tuning strategy that does not utilize the information in the observations.
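To make the core idea of the abstract concrete, the following is a minimal sketch (not the authors' code) of treating localization length scales as uncertain parameters: they are carried as an ensemble and updated with the same derivative-free ensemble machinery that updates the model variables, so that the observations themselves guide the choice of length scales. All specifics here are illustrative assumptions, not taken from the paper: a 1D toy model, a Gaussian-shaped taper in place of the paper's parameterized localization scheme, stochastic ensemble-smoother updates at both levels, and the helper names (es_update, forward) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
Nm, Nd, Ne = 50, 10, 30                          # model size, data size, ensemble size
obs_idx = np.linspace(0, Nm - 1, Nd).astype(int)
dist = np.abs(np.arange(Nm)[:, None] - obs_idx[None, :])   # grid-to-observation distances

def forward(m):
    """Toy forward model: observe the field at obs_idx."""
    return m[..., obs_idx]

def es_update(ens, sim, obs, obs_std, rho=None):
    """One stochastic ensemble-smoother update in Kalman-gain form;
    rho (optional) is a Schur-product localization matrix."""
    ne = ens.shape[0]
    dm = ens - ens.mean(0)
    dd = sim - sim.mean(0)
    Cmd = dm.T @ dd / (ne - 1)                               # cross-covariance
    Cdd = dd.T @ dd / (ne - 1) + (obs_std**2) * np.eye(obs.size)
    K = Cmd @ np.linalg.inv(Cdd)
    if rho is not None:
        K = rho * K                                          # Schur-localized gain
    pert_obs = obs + obs_std * rng.standard_normal((ne, obs.size))
    return ens + (pert_obs - sim) @ K.T

# Synthetic truth and noisy data
truth = np.sin(np.arange(Nm) / 5.0)
obs_std = 0.05
obs = forward(truth) + obs_std * rng.standard_normal(Nd)

# Prior model ensemble, and a prior ensemble of log length scales
# (one length scale per observation, kept positive via the log transform)
m_ens = rng.standard_normal((Ne, Nm))
log_L = np.log(10.0) + 0.5 * rng.standard_normal((Ne, Nd))

# Hyper-parameter loop: each member proposes a localization taper, the data
# mismatch of the resulting trial update serves as its "predicted data", and
# the observations then update the length scales like ordinary parameters.
for it in range(3):
    pred = np.empty((Ne, Nd))
    for e in range(Ne):
        rho = np.exp(-0.5 * (dist / np.exp(log_L[e])) ** 2)  # (Nm, Nd) taper
        m_trial = es_update(m_ens, forward(m_ens), obs, obs_std, rho)
        pred[e] = forward(m_trial).mean(0)                   # data predicted under member e
    log_L = es_update(log_L, pred, obs, obs_std)             # tune the length scales

# Final localized update with the tuned (ensemble-mean) length scales
rho = np.exp(-0.5 * (dist / np.exp(log_L.mean(0))) ** 2)
m_ens = es_update(m_ens, forward(m_ens), obs, obs_std, rho)
print("posterior RMSE:", np.sqrt(np.mean((m_ens.mean(0) - truth) ** 2)))
```

Because the length scales are estimated as an ensemble rather than as a single point value, the procedure stays derivative-free, retains a rough measure of uncertainty in the tuned hyper-parameters, and scales to many length scales at once, which are exactly the practical benefits the abstract attributes to the ensemble formulation.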
Pages: 20