RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model

Cited: 10
Authors
Lu, Yao [1]
Liu, Shang [1]
Zhang, Qijun [1]
Xie, Zhiyao [1]
Affiliation
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/ASP-DAC58780.2024.10473904
CLC classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Inspired by the recent success of large language models (LLMs) such as ChatGPT, researchers have started to explore the adoption of LLMs for agile hardware design, for example generating design RTL from natural-language instructions. However, in existing works the target designs are relatively simple, small in scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works focus only on design correctness, without evaluating the quality of the generated RTL. In this work, we propose an open-source benchmark named RTLLM for generating design RTL from natural-language instructions. To systematically evaluate the auto-generated design RTL, we summarize three progressive goals: the syntax goal, the functionality goal, and the design quality goal. This benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt-engineering technique named self-planning, which significantly boosts the performance of GPT-3.5 on our proposed benchmark.
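The self-planning technique mentioned in the abstract can be illustrated with a minimal sketch: rather than asking the LLM for RTL in one shot, the model is first prompted to produce a design plan, which is then fed back into a second prompt that requests the Verilog. The function names and prompt wording below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical two-stage "self-planning" prompt flow. The plan produced in
# stage 1 conditions the RTL-generation request in stage 2.

def build_plan_prompt(spec: str) -> str:
    """Stage 1: ask the model to reason about the design before coding."""
    return (
        "You are a hardware designer. Given the specification below, "
        "write a numbered plan of the sub-modules, signals, and FSM states "
        "you would use. Do not write any Verilog yet.\n\n"
        f"Specification:\n{spec}"
    )

def build_rtl_prompt(spec: str, plan: str) -> str:
    """Stage 2: ask for synthesizable RTL, conditioned on the model's own plan."""
    return (
        "Following your plan, write complete synthesizable Verilog for the "
        "specification. Output a single module.\n\n"
        f"Specification:\n{spec}\n\nPlan:\n{plan}"
    )

if __name__ == "__main__":
    spec = "An 8-bit synchronous up-counter with active-low reset."
    print(build_plan_prompt(spec))
    # In practice the plan comes from the LLM's stage-1 response; a stub is
    # used here to show how the two prompts chain together.
    plan = "1. One 8-bit register. 2. Reset logic. 3. Increment on clk edge."
    print(build_rtl_prompt(spec, plan))
```

The design choice worth noting is that the plan is generated by the model itself (hence "self-planning"); no human-written decomposition is required between the two stages.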
Pages: 722-727
Page count: 6