Hyper-parameter optimization for improving the performance of localization in an iterative ensemble smoother

Cited by: 1
Authors
Luo, Xiaodong [1]
Cruz, William C. [2]
Zhang, Xin-Lei [3,4]
Xiao, Heng [5]
Affiliations
[1] Norwegian Res Ctr NORCE, Nygardsgaten 112, N-5008 Bergen, Norway
[2] Univ Stavanger, Kjell Arholms Gate 41, N-4021 Stavanger, Norway
[3] Chinese Acad Sci, Inst Mech, State Key Lab Nonlinear Mech, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Sch Engn Sci, Beijing, Peoples R China
[5] Univ Stuttgart, Stuttgart Ctr Simulat Sci SC SimTech, Stuttgart, Germany
Keywords
Ensemble data assimilation; Iterative ensemble smoother (IES); Automatic and adaptive localization (AutoAdaLoc); Parameterized localization; Continuous Hyper-parameter OPtimization (CHOP); Kalman filter; Data assimilation; Adaptive localization; Models
DOI
10.1016/j.geoen.2023.212404
Chinese Library Classification (CLC)
TE [Petroleum and natural gas industry]; TK [Energy and power engineering]
Discipline codes
0807; 0820
Abstract
This work aims to improve the performance of an iterative ensemble smoother (IES) in reservoir data assimilation problems by introducing a data-driven procedure to optimize the choice of certain algorithmic hyper-parameters in the IES. Generally speaking, algorithmic hyper-parameters exist in various data assimilation algorithms. Taking the IES as an example, localization is often useful for improving its performance, yet applying localization to an IES also introduces a number of algorithmic hyper-parameters, such as localization length scales, in the course of data assimilation. While different methods have been developed in the literature to properly choose localization length scales in various circumstances, many of them are tailored to the specific problems under consideration and may be difficult to extend directly to other problems. In addition, conventional hyper-parameter tuning methods determine the values of localization length scales based on either empirical rules (e.g., using experience, domain knowledge, or simply trial and error) or analytic rules (e.g., through statistical analyses), but few of them use the information of observations to optimize the choice of hyper-parameters. The current work proposes a generic, data-driven hyper-parameter tuning strategy that has the potential to overcome these issues. With the proposed strategy, hyper-parameter optimization is converted into a conventional parameter estimation problem, in such a way that observations are utilized to guide the choice of hyper-parameters. One noticeable feature of the proposed strategy is that it iteratively estimates an ensemble of hyper-parameters. In doing so, the resulting hyper-parameter tuning procedure inherits some practical benefits of conventional ensemble data assimilation algorithms, including being derivative-free, providing uncertainty quantification to some extent, and handling a large number of hyper-parameters. Through 2D and 3D case studies, it is shown that when the proposed strategy is applied to tune a set of localization length scales (up to the order of 10^3) in a parameterized localization scheme, superior data assimilation performance is obtained compared with an alternative hyper-parameter tuning strategy that does not utilize the information of observations.
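The abstract's core idea — treating hyper-parameters as quantities estimated by a derivative-free, iterative, observation-driven ensemble update — can be sketched as follows. Everything here is an illustrative assumption, not the paper's actual CHOP implementation: the toy quadratic forward map `g`, the ensemble size, and the noise levels are made up, and in the paper's setting `g(theta)` would involve running the localized IES with length scales `theta` and the reservoir simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(theta):
    # Toy stand-in forward map from hyper-parameters to predicted data.
    # In the paper this would be: run the localized IES with length
    # scales theta, then evaluate the resulting data mismatch.
    return np.array([theta[0] ** 2 + theta[1], theta[0] - theta[1] ** 2])

d_obs = np.array([2.0, -1.0])          # observed data (illustrative)
R = 0.01 * np.eye(2)                   # observation-error covariance
Ne = 100                               # ensemble size (assumed)

# Prior ensemble of hyper-parameters (e.g. log length scales), shape (2, Ne)
Theta = rng.normal(0.0, 1.0, size=(2, Ne))

def misfit(Th):
    return float(np.linalg.norm(g(Th.mean(axis=1)) - d_obs))

misfit_init = misfit(Theta)

for _ in range(10):                    # iterative, derivative-free updates
    D = np.column_stack([g(Theta[:, j]) for j in range(Ne)])
    dTheta = Theta - Theta.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_td = dTheta @ dD.T / (Ne - 1)    # hyper-parameter/data cross-covariance
    C_dd = dD @ dD.T / (Ne - 1)        # predicted-data covariance
    K = C_td @ np.linalg.inv(C_dd + R)  # Kalman-type gain (no gradients of g)
    # Stochastic update: each member assimilates perturbed observations
    pert = rng.multivariate_normal(np.zeros(2), R, size=Ne).T
    Theta = Theta + K @ (d_obs[:, None] + pert - D)

misfit_final = misfit(Theta)
```

The sketch shows why the strategy is derivative-free (only ensemble covariances of `g` evaluations are used, never gradients) and why it scales to many hyper-parameters (the update is a single linear-algebra step per iteration, regardless of the dimension of `theta`); a production IES would also control step length and stopping, which this sketch omits.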
Pages: 20
Related papers
50 items in total
  • [31] A Hyper-Parameter Optimization Approach to Automated Radiotherapy Treatment Planning
    Haaf, S.
    Kearney, V.
    Interian, Y.
    Valdes, G.
    Solberg, T.
    Perez-Andujar, A.
    MEDICAL PHYSICS, 2017, 44 (06) : 2901 - 2901
  • [32] Hyper-parameter optimization of gradient boosters for flood susceptibility analysis
    Lai, Tuan Anh
    Nguyen, Ngoc-Thach
    Bui, Quang-Thanh
    TRANSACTIONS IN GIS, 2023, 27 (01) : 224 - 238
  • [33] Experienced Optimization with Reusable Directional Model for Hyper-Parameter Search
    Hu, Yi-Qi
    Yu, Yang
    Zhou, Zhi-Hua
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2276 - 2282
  • [34] Quadratic optimization for the hyper-parameter based on maximum entropy search
    Li, Yuqi
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 45 (03) : 4991 - 5006
  • [35] Hyper-parameter optimization in classification: To-do or not-to-do
    Ngoc Tran
    Schneider, Jean-Guy
    Weber, Ingo
    Qin, A. K.
    PATTERN RECOGNITION, 2020, 103
  • [36] Hyper-parameter Comparison on Convolutional Neural Network for Visual Aerial Localization
    Berhold, J. Mark
    Leishman, Robert C.
    Borghetti, Brett
    Venable, Donald
    PROCEEDINGS OF THE ION 2019 PACIFIC PNT MEETING, 2019, : 875 - 885
  • [37] Hyper-Parameter Optimization for Emotion Detection using Physiological Signals
    Albraikan, Amani
    Tobon, Diana P.
    El Saddik, Abdulmotaleb
    2018 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS (PERCOM WORKSHOPS), 2018,
  • [38] Deep Learning Hyper-Parameter Optimization for Video Analytics in Clouds
    Yaseen, Muhammad Usman
    Anjum, Ashiq
    Rana, Omer
    Antonopoulos, Nikolaos
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2019, 49 (01): : 253 - 264
  • [39] Ensemble Adaptation Networks with low-cost unsupervised hyper-parameter search
    Zhang, Haotian
    Ding, Shifei
    Jia, Weikuan
    PATTERN ANALYSIS AND APPLICATIONS, 2020, 23 (03) : 1215 - 1224
  • [40] Learning networks hyper-parameter using multi-objective optimization of statistical performance metrics
    Torres, Guillermo
    Sanchez, Carles
    Gil, Debora
    2022 24TH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING, SYNASC, 2022, : 233 - 238