RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model

Times Cited: 10
Authors
Lu, Yao [1]
Liu, Shang [1]
Zhang, Qijun [1]
Xie, Zhiyao [1]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/ASP-DAC58780.2024.10473904
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Inspired by the recent success of large language models (LLMs) such as ChatGPT, researchers have started to explore the adoption of LLMs for agile hardware design, such as generating design RTL from natural-language instructions. However, the target designs in existing works are all relatively simple, small in scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works focus only on design correctness, without evaluating the quality of the generated RTL. In this work, we propose an open-source benchmark named RTLLM for generating design RTL from natural-language instructions. To systematically evaluate the auto-generated design RTL, we summarize three progressive goals: the syntax goal, the functionality goal, and the design quality goal. This benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which proves to significantly boost the performance of GPT-3.5 on our proposed benchmark.
Pages: 722-727
Page count: 6
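
The abstract describes two checkable ideas: a staged evaluation (syntax, functionality, design quality) and a two-stage "self-planning" prompt. Below is a minimal sketch of how such a flow could look, assuming an OpenAI-style chat API and Icarus Verilog (iverilog) for the syntax check; the function names, prompts, and tool choices are illustrative assumptions, not the authors' released benchmark code.

```python
# Minimal sketch, assuming the OpenAI Python SDK (>=1.0) and Icarus Verilog
# ("iverilog") on PATH. All names and prompts are illustrative; this is not
# the RTLLM authors' actual benchmark code.
import os
import subprocess
import tempfile

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def self_planning_generate(spec: str, model: str = "gpt-3.5-turbo") -> str:
    """Two-stage prompting: ask for an architecture plan, then the RTL."""
    # Stage 1: have the LLM outline ports, submodules, and control logic.
    plan = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": ("Before writing any code, plan the architecture "
                        "(ports, submodules, FSM states) for this design:\n"
                        + spec),
        }],
    ).choices[0].message.content

    # Stage 2: feed the plan back and request synthesizable Verilog.
    rtl = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": ("Design specification:\n" + spec
                        + "\n\nYour plan:\n" + plan
                        + "\n\nNow write the complete synthesizable Verilog."),
        }],
    ).choices[0].message.content
    return rtl


def passes_syntax_goal(rtl: str) -> bool:
    """Syntax goal: the generated RTL must at least compile."""
    with tempfile.NamedTemporaryFile(mode="w", suffix=".v", delete=False) as f:
        f.write(rtl)
        path = f.name
    result = subprocess.run(["iverilog", "-o", os.devnull, path],
                            capture_output=True, text=True)
    os.unlink(path)
    return result.returncode == 0
```

In the full benchmark, the functionality goal would additionally be checked against a reference testbench and the design quality goal against synthesis-reported metrics; those steps are omitted from this sketch.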
Related Papers (50 records in total)
  • [1] Sharma, Utsav; Wu, Bing-Yue; Kankipati, Sai Rahul Dhanvi; Chhabria, Vidya A.; Rovinski, Austin. OpenROAD-Assistant: An Open-Source Large Language Model for Physical Design Tasks. Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD (MLCAD 2024), 2024.
  • [2] Sharma, Utsav; Wu, Bing-Yue; Kankipati, Sai Rahul Dhanvi; Chhabria, Vidya A.; Rovinski, Austin. OpenROAD-Assistant: An Open-Source Large Language Model for Physical Design Tasks. 2024 ACM/IEEE 6th Symposium on Machine Learning for CAD (MLCAD 2024), 2024.
  • [3] Wu, Zhuanhao; Gokhale, Maya; Lloyd, Scott; Patel, Hiren. SCCL: An open-source SystemC to RTL translator. 2023 IEEE 31st Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2023, pp. 23-33.
  • [4] Kooraki, Soheil; Bedayat, Arash. Re: Open-Source Large Language Models in Radiology. Academic Radiology, 2024, 31(10): 4293.
  • [5] Ray, Partha Pratim. Servicing open-source large language models for oncology. The Oncologist, 2024.
  • [6] Zhang, Yang; Cheng, Huimei; Chen, Dake; Fu, Huayu; Agarwal, Shikhanshu; Lin, Mark; Beerel, Peter A. Challenges in Building an Open-source Flow from RTL to Bundled-Data Design. 2018 24th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC), 2018, pp. 26-27.
  • [7] Liu, Chuanlong; Liao, Wei; Xu, Zhen. Staged Multi-Strategy Framework With Open-Source Large Language Models for Natural Language to SQL Generation. IEEJ Transactions on Electrical and Electronic Engineering, 2025.
  • [8] Gomes, Jorge Costa; Geiger, Sebastian; Arnold, Daniel. The design of an open-source carbonate reservoir model. Petroleum Geoscience, 2022, 28(3).
  • [9] Hussain, Zak; Binz, Marcel; Mata, Rui; Wulff, Dirk U. A tutorial on open-source large language models for behavioral science. Behavior Research Methods, 2024, 56(8): 8214-8237.
  • [10] Ray, Partha Pratim. Upgrading Academic Radiology with Open-Source Large Language Models. Academic Radiology, 2024, 31(10): 4291-4292.