TY - GEN
T1 - Evaluating Large Language Models for G-Code Debugging, Manipulation, and Comprehension
AU - Jignasu, Anushrut
AU - Marshall, Kelly
AU - Ganapathysubramanian, Baskar
AU - Balu, Aditya
AU - Hegde, Chinmay
AU - Krishnamurthy, Adarsh
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - 3D printing is a revolutionary technology that enables the creation of physical objects from digital models. However, the quality and accuracy of 3D printing depend on the correctness and efficiency of the numerical control programming language (specifically, G-code) that instructs 3D printers how to move and extrude material. Debugging G-code, a low-level programming language for 3D printing, is a challenging task that requires manual tuning and geometric reasoning. In this paper, we present the first extensive evaluation of several large language models (LLMs) for debugging G-code files for 3-axis 3D printing. We design effective prompts to enable pre-trained LLMs to understand and manipulate G-code, and we test their performance on various aspects of G-code debugging and manipulation, including the detection and correction of common errors and the ability to perform geometric transformations. We compare different state-of-the-art LLMs and analyze their strengths and weaknesses. We also discuss the implications and limitations of using LLMs for G-code comprehension and suggest directions for future research.
AB - 3D printing is a revolutionary technology that enables the creation of physical objects from digital models. However, the quality and accuracy of 3D printing depend on the correctness and efficiency of the numerical control programming language (specifically, G-code) that instructs 3D printers how to move and extrude material. Debugging G-code, a low-level programming language for 3D printing, is a challenging task that requires manual tuning and geometric reasoning. In this paper, we present the first extensive evaluation of several large language models (LLMs) for debugging G-code files for 3-axis 3D printing. We design effective prompts to enable pre-trained LLMs to understand and manipulate G-code, and we test their performance on various aspects of G-code debugging and manipulation, including the detection and correction of common errors and the ability to perform geometric transformations. We compare different state-of-the-art LLMs and analyze their strengths and weaknesses. We also discuss the implications and limitations of using LLMs for G-code comprehension and suggest directions for future research.
KW - Debugging
KW - G-code
KW - Geometric comprehension
KW - Large language models
KW - Manufacturing 4.0
UR - http://www.scopus.com/inward/record.url?scp=85206606977&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85206606977&partnerID=8YFLogxK
U2 - 10.1109/LAD62341.2024.10691700
DO - 10.1109/LAD62341.2024.10691700
M3 - Conference contribution
AN - SCOPUS:85206606977
T3 - 2024 IEEE LLM Aided Design Workshop, LAD 2024
BT - 2024 IEEE LLM Aided Design Workshop, LAD 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International LLM-Aided Design Workshop, LAD 2024
Y2 - 28 June 2024 through 29 June 2024
ER -