Multi-step Iterative Automated Domain Modeling with Large Language Models

Cited by: 0
Authors
Yang, Yujing [1 ]
Chen, Boqi [1 ]
Chen, Kua [1 ]
Mussbacher, Gunter [1 ]
Varro, Daniel [1 ,2 ]
Affiliations
[1] McGill Univ, Montreal, PQ, Canada
[2] Linkoping Univ, Linkoping, Sweden
Keywords
domain modeling; large language models; few-shot learning; prompt engineering
DOI
10.1145/3652620.3687807
Chinese Library Classification
TP39 [Computer Applications]
Discipline Classification Code
081203; 0835
Abstract
Domain modeling, which represents the concepts and relationships in a problem domain, is an essential part of software engineering. As large language models (LLMs) have recently exhibited remarkable ability in language understanding and generation, many approaches have been proposed to automate domain modeling with LLMs. However, these approaches usually pass all input information to the LLM in a single step. Our previous single-step approach missed many modeling elements and advanced patterns. This paper introduces a novel framework designed to enhance fully automated domain model generation. The proposed multi-step automated domain modeling approach extracts model elements (e.g., classes, attributes, and relationships) from problem descriptions. The approach includes instructions and human knowledge in each step and uses an iterative process to identify complex patterns, repeatedly extracting the pattern from various instances and then synthesizing these extractions into a summarized overview. Furthermore, the framework incorporates a self-reflection mechanism that assesses each generated model element, offers self-feedback for necessary modifications or removals, and integrates the domain model with the generated self-feedback. The proposed approach is evaluated against a baseline single-step approach from our earlier work. Experiments demonstrate a significant improvement, with a 22.71% increase in F1-score for identifying classes, a 75.18% increase for relationships, and a 10.39% improvement for identifying the player-role pattern, with comparable performance for attributes. Our approach, dataset, and evaluation provide valuable insight for future research in automated LLM-based domain modeling.
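The abstract outlines a pipeline of step-wise extraction followed by self-reflection. The sketch below is a minimal, hypothetical illustration of that flow only; the function names, prompt texts, and the `query_llm` stub are assumptions made for illustration and do not reproduce the paper's actual prompts, steps, or tooling.

```python
# Minimal sketch (assumed, not the authors' implementation) of a multi-step,
# self-reflective domain-modeling pipeline as described in the abstract.

def query_llm(prompt: str) -> str:
    # Placeholder for a call to any chat-completion API; swap in a real client.
    return f"[LLM response to: {prompt[:40]}...]"

def extract_step(description: str, instruction: str, examples: str) -> str:
    # Each step sends the problem description together with step-specific
    # instructions and few-shot examples (encoded human knowledge) to the LLM.
    return query_llm(f"{instruction}\n\nExamples:\n{examples}\n\nDescription:\n{description}")

def self_reflect(elements: str, description: str) -> str:
    # Self-reflection: ask the LLM to assess the generated elements and
    # propose modifications or removals.
    return query_llm(
        "Review these domain model elements against the description; "
        f"flag anything to modify or remove.\n\nElements:\n{elements}\n\nDescription:\n{description}"
    )

def model_domain(description: str) -> dict:
    # Step-wise extraction of classes, attributes, and relationships.
    model = {
        "classes": extract_step(description, "List the domain classes.", "..."),
        "attributes": extract_step(description, "List attributes per class.", "..."),
        "relationships": extract_step(description, "List relationships between classes.", "..."),
    }
    # Fold self-feedback back into the model before producing the final output.
    model["feedback"] = {key: self_reflect(value, description) for key, value in model.items()}
    return model

if __name__ == "__main__":
    print(model_domain("A library lends books to registered members."))
```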
Pages: 587-595
Number of pages: 9