A fine-tuned large language model based molecular dynamics agent for code generation to obtain material thermodynamic parameters

Cited by: 0
Authors
Zhuofan Shi [1 ]
Chunxiao Xin [2 ]
Tong Huo [3 ]
Yuntao Jiang [1 ]
Bowen Wu [2 ]
Xingyue Chen [3 ]
Wei Qin [2 ]
Xinjian Ma [3 ]
Gang Huang [4 ]
Zhenyu Wang [1 ]
Xiang Jing [2 ]
Affiliations
[1] Peking University, School of Software and Microelectronics
[2] National Key Laboratory of Data Space Technology and System, Institute of Information Engineering
[3] Advanced Institute of Big Data
[4] Chinese Academy of Sciences
Keywords
LLM; Agent; Materials science
DOI
10.1038/s41598-025-92337-6
Abstract
In materials science, researchers addressing the complex relationship between material structure and properties have increasingly leveraged the text generation capabilities of AI-generated content (AIGC) models for tasks such as literature mining and data analysis. However, theoretical calculations and code development remain labor-intensive challenges. This paper proposes a novel approach based on text-to-code generation, using large language models to automate the implementation of simulation programs in materials science. The effectiveness of automated code generation and review is validated on thermodynamic simulations built with the LAMMPS software. This study introduces the Molecular Dynamics Agent (MDAgent), a framework designed to guide large language models in automatically generating, executing, and refining simulation code. In addition, a dataset of LAMMPS thermodynamic simulation code was constructed to fine-tune the language model. Expert evaluation scores demonstrate that MDAgent significantly improves code generation and review capabilities, and the proposed approach reduces the average task time by 42.22% compared with traditional models, highlighting its potential applications in materials science.
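The generate-execute-refine loop that the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`generate_script`, `review_script`, `agent_loop`), the retry budget, and the required-command check are all hypothetical stand-ins, and both the fine-tuned model call and the LAMMPS execution/review step are replaced by stubs.

```python
# Illustrative sketch of a generate-execute-refine agent loop for LAMMPS
# input scripts. The LLM call and the LAMMPS run are replaced by stubs;
# every name here is a hypothetical stand-in, not the paper's actual API.

REQUIRED_COMMANDS = ("units", "atom_style", "pair_style", "run")

def generate_script(task: str, feedback: str = "") -> str:
    """Stub for the fine-tuned model: emits a LAMMPS input script.

    The first draft deliberately omits the `run` command so the
    refinement step has something to fix; with feedback, it is added.
    """
    lines = [
        f"# task: {task}",
        "units lj",
        "atom_style atomic",
        "pair_style lj/cut 2.5",
    ]
    if "run" in feedback:
        lines.append("run 1000")
    return "\n".join(lines)

def review_script(script: str) -> list[str]:
    """Stub for execution and review: lists required commands that are missing.

    A real agent would run the script with LAMMPS and parse the log or
    error output; here a static check stands in for that step.
    """
    present = {line.split()[0] for line in script.splitlines() if line.split()}
    return [cmd for cmd in REQUIRED_COMMANDS if cmd not in present]

def agent_loop(task: str, max_iterations: int = 3) -> tuple[str, int]:
    """Generate, review, and refine until the script passes or the budget runs out."""
    feedback = ""
    for attempt in range(1, max_iterations + 1):
        script = generate_script(task, feedback)
        missing = review_script(script)
        if not missing:
            return script, attempt  # accepted script and iterations used
        feedback = "add missing commands: " + ", ".join(missing)
    return script, max_iterations
```

Under these assumptions, the loop converges on its second pass: the review step flags the missing `run` command, the feedback is folded into the next generation call, and the revised script is accepted.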
Related Papers
38 items in total
  • [31] Performance of three commercially available large language models and one locally fine-tuned model at preparing formal letters to appeal medical insurance denials of radiotherapy services.
    Kiser, Kendall
    Waters, Michael
    Reckford, Jocelyn
    Lundeberg, Christopher
    Abraham, Christopher
    JOURNAL OF CLINICAL ONCOLOGY, 2024, 42 (16)
  • [32] Understanding Citizens' Response to Social Activities on Twitter in US Metropolises During the COVID-19 Recovery Phase Using a Fine-Tuned Large Language Model: Application of AI
    Saito, Ryuichi
    Tsugawa, Sho
    JOURNAL OF MEDICAL INTERNET RESEARCH, 2025, 27
  • [33] Toward dynamic rehabilitation management: A novel smart product-service system development approach based on fine-tuned large vision model and Fuzzy-Dematel
    Yuan, Wenyu
    Zhao, Hua
    Yang, Xiongjie
    Han, Ting
    Chang, Danni
    ADVANCED ENGINEERING INFORMATICS, 2024, 62
  • [34] OpenFOAMGPT: A retrieval-augmented large language model (LLM) agent for OpenFOAM-based computational fluid dynamics
    Pandey, Sandeep
    Xu, Ran
    Wang, Wenkang
    Chu, Xu
    PHYSICS OF FLUIDS, 2025, 37 (03)
  • [35] L3i++ at SemEval-2024 Task 8: Can Fine-tuned Large Language Model Detect Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text?
    Hanh Thi Hong Tran
    Tien Nam Nguyen
    Doucet, Antoine
    Pollak, Senja
    PROCEEDINGS OF THE 18TH INTERNATIONAL WORKSHOP ON SEMANTIC EVALUATION, SEMEVAL-2024, 2024, : 13 - 21
  • [36] Artificial intelligence-based data extraction for next generation risk assessment: Is fine-tuning of a large language model worth the effort?
    Sonnenburg, Anna
    van der Lugt, Benthe
    Rehn, Johannes
    Wittkowski, Paul
    Bech, Karsten
    Padberg, Florian
    Eleftheriadou, Dimitra
    Dobrikov, Todor
    Bouwmeester, Hans
    Mereu, Carla
    Graf, Ferdinand
    Kneuer, Carsten
    Kramer, Nynke I.
    Bluemmel, Tilmann
    TOXICOLOGY, 2024, 508
  • [37] 3DSMILES-GPT: 3D molecular pocket-based generation with token-only large language model
    Wang, Jike
    Luo, Hao
    Qin, Rui
    Wang, Mingyang
    Wan, Xiaozhe
    Fang, Meijing
    Zhang, Odin
    Gou, Qiaolin
    Su, Qun
    Shen, Chao
    You, Ziyi
    Liu, Liwei
    Hsieh, Chang-Yu
    Hou, Tingjun
    Kang, Yu
    CHEMICAL SCIENCE, 2025, 16 (02) : 637 - 648
  • [38] EXPERT EVALUATION OF GUIDELINE-BASED RETRIEVAL AUGMENTED GENERATION VS. SUPERVISED FINE-TUNING FOR LARGE LANGUAGE MODEL OUTPUTS: A CASE STUDY IN HEPATITIS C VIRAL INFECTION MANAGEMENT
    Giuffre, Mauro
    Pugliese, Nicola
    Kresevic, Simone
    Negro, Francesco
    Puoti, Massimo
    Forns, Xavier
    Pawlotsky, Jean-Michel
    Aghemo, Alessio
    Shung, Dennis
    HEPATOLOGY, 2024, 80 : S409 - S409