TY - JOUR
T1 - VeriGen: A Large Language Model for Verilog Code Generation
T2 - ACM Transactions on Design Automation of Electronic Systems
AU - Thakur, Shailja
AU - Ahmad, Baleegh
AU - Pearce, Hammond
AU - Tan, Benjamin
AU - Dolan-Gavitt, Brendan
AU - Karri, Ramesh
AU - Garg, Siddharth
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/4/22
Y1 - 2024/4/22
N2 - In this study, we explore the capability of Large Language Models (LLMs) to automate hardware design by automatically completing partial Verilog code, a common language for designing and modeling digital systems. We fine-tune pre-existing LLMs on Verilog datasets compiled from GitHub and Verilog textbooks. We evaluate the functional correctness of the generated Verilog code using a specially designed test suite featuring a custom problem set and test benches. On this suite, our fine-tuned open-source CodeGen-16B model outperforms the commercial state-of-the-art GPT-3.5-turbo model by 1.1% overall. On a more diverse and complex problem set, the fine-tuned model shows competitive performance against GPT-3.5-turbo, excelling in certain scenarios. Notably, it demonstrates a 41% improvement over its pre-trained counterpart in generating syntactically correct Verilog code across various problem categories, highlighting the potential of smaller, in-house LLMs in hardware design automation. We release our training/evaluation scripts and LLM checkpoints as open-source contributions.
AB - In this study, we explore the capability of Large Language Models (LLMs) to automate hardware design by automatically completing partial Verilog code, a common language for designing and modeling digital systems. We fine-tune pre-existing LLMs on Verilog datasets compiled from GitHub and Verilog textbooks. We evaluate the functional correctness of the generated Verilog code using a specially designed test suite featuring a custom problem set and test benches. On this suite, our fine-tuned open-source CodeGen-16B model outperforms the commercial state-of-the-art GPT-3.5-turbo model by 1.1% overall. On a more diverse and complex problem set, the fine-tuned model shows competitive performance against GPT-3.5-turbo, excelling in certain scenarios. Notably, it demonstrates a 41% improvement over its pre-trained counterpart in generating syntactically correct Verilog code across various problem categories, highlighting the potential of smaller, in-house LLMs in hardware design automation. We release our training/evaluation scripts and LLM checkpoints as open-source contributions.
KW - EDA
KW - GPT
KW - Transformers
KW - large language models
KW - Verilog
UR - http://www.scopus.com/inward/record.url?scp=85190854770&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85190854770&partnerID=8YFLogxK
U2 - 10.1145/3643681
DO - 10.1145/3643681
M3 - Article
AN - SCOPUS:85190854770
SN - 1084-4309
VL - 29
JO - ACM Transactions on Design Automation of Electronic Systems
JF - ACM Transactions on Design Automation of Electronic Systems
IS - 3
M1 - 46
ER -