TY - GEN
T1 - Rome was Not Built in a Single Step: Hierarchical Prompting for LLM-based Chip Design
T2 - 6th ACM/IEEE International Symposium on Machine Learning for CAD, MLCAD 2024
AU - Nakkab, Andre
AU - Zhang, Sai Qian
AU - Karri, Ramesh
AU - Garg, Siddharth
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/9/9
Y1 - 2024/9/9
N2 - Large Language Models (LLMs) are effective in computer hardware synthesis via hardware description language (HDL) generation. However, LLM-assisted approaches for HDL generation struggle when handling complex tasks. We introduce a suite of hierarchical prompting techniques which facilitate efficient stepwise design methods, and develop a generalizable automation pipeline for the process. To evaluate these techniques, we present a benchmark set of hardware designs which have solutions with or without architectural hierarchy. Using these benchmarks, we compare various open-source and proprietary LLMs, including our own fine-tuned Code Llama-Verilog model. Our hierarchical methods automatically produce successful designs for complex hardware modules that standard flat prompting methods cannot achieve, allowing smaller open-source LLMs to compete with large proprietary models. Hierarchical prompting reduces HDL generation time and yields savings on LLM costs. Our experiments detail which LLMs are capable of which applications, and how to apply hierarchical methods in various modes. We explore case studies of generating complex cores using automatic scripted hierarchical prompts, including the first-ever LLM-designed processor with no human feedback.
AB - Large Language Models (LLMs) are effective in computer hardware synthesis via hardware description language (HDL) generation. However, LLM-assisted approaches for HDL generation struggle when handling complex tasks. We introduce a suite of hierarchical prompting techniques which facilitate efficient stepwise design methods, and develop a generalizable automation pipeline for the process. To evaluate these techniques, we present a benchmark set of hardware designs which have solutions with or without architectural hierarchy. Using these benchmarks, we compare various open-source and proprietary LLMs, including our own fine-tuned Code Llama-Verilog model. Our hierarchical methods automatically produce successful designs for complex hardware modules that standard flat prompting methods cannot achieve, allowing smaller open-source LLMs to compete with large proprietary models. Hierarchical prompting reduces HDL generation time and yields savings on LLM costs. Our experiments detail which LLMs are capable of which applications, and how to apply hierarchical methods in various modes. We explore case studies of generating complex cores using automatic scripted hierarchical prompts, including the first-ever LLM-designed processor with no human feedback.
KW - Automation
KW - Hardware design
KW - Hierarchy
KW - LLM
UR - http://www.scopus.com/inward/record.url?scp=85205023906&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85205023906&partnerID=8YFLogxK
U2 - 10.1145/3670474.3685964
DO - 10.1145/3670474.3685964
M3 - Conference contribution
AN - SCOPUS:85205023906
T3 - MLCAD 2024 - Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD
BT - MLCAD 2024 - Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD
PB - Association for Computing Machinery, Inc
Y2 - 9 September 2024 through 11 September 2024
ER -