TY - GEN
T1 - Examining Zero-Shot Vulnerability Repair with Large Language Models
AU - Pearce, Hammond
AU - Tan, Benjamin
AU - Ahmad, Baleegh
AU - Karri, Ramesh
AU - Dolan-Gavitt, Brendan
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Human developers can produce code with cybersecurity bugs. Can emerging 'smart' code completion tools help repair those bugs? In this work, we examine the use of large language models (LLMs) for code (such as OpenAI's Codex and AI21's Jurassic J-1) for zero-shot vulnerability repair. We investigate challenges in the design of prompts that coax LLMs into generating repaired versions of insecure code. This is difficult due to the numerous ways to phrase key information - both semantically and syntactically - with natural languages. We perform a large-scale study of five commercially available, black-box, "off-the-shelf" LLMs, as well as an open-source model and our own locally-trained model, on a mix of synthetic, hand-crafted, and real-world security bug scenarios. Our experiments demonstrate that while the approach has promise (the LLMs could collectively repair 100% of our synthetically generated and hand-crafted scenarios), a qualitative evaluation of the model's performance over a corpus of historical real-world examples highlights challenges in generating functionally correct code.
AB - Human developers can produce code with cybersecurity bugs. Can emerging 'smart' code completion tools help repair those bugs? In this work, we examine the use of large language models (LLMs) for code (such as OpenAI's Codex and AI21's Jurassic J-1) for zero-shot vulnerability repair. We investigate challenges in the design of prompts that coax LLMs into generating repaired versions of insecure code. This is difficult due to the numerous ways to phrase key information - both semantically and syntactically - with natural languages. We perform a large-scale study of five commercially available, black-box, "off-the-shelf" LLMs, as well as an open-source model and our own locally-trained model, on a mix of synthetic, hand-crafted, and real-world security bug scenarios. Our experiments demonstrate that while the approach has promise (the LLMs could collectively repair 100% of our synthetically generated and hand-crafted scenarios), a qualitative evaluation of the model's performance over a corpus of historical real-world examples highlights challenges in generating functionally correct code.
KW - AI
KW - code generation
KW - CWE
KW - Cybersecurity
UR - http://www.scopus.com/inward/record.url?scp=85166475888&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85166475888&partnerID=8YFLogxK
U2 - 10.1109/SP46215.2023.10179324
DO - 10.1109/SP46215.2023.10179324
M3 - Conference contribution
AN - SCOPUS:85166475888
T3 - Proceedings - IEEE Symposium on Security and Privacy
SP - 2339
EP - 2356
BT - Proceedings - 44th IEEE Symposium on Security and Privacy, SP 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 44th IEEE Symposium on Security and Privacy, SP 2023
Y2 - 22 May 2023 through 25 May 2023
ER -