TY - GEN
T1 - Code-Based Cryptography for Confidential Inference on FPGAs
T2 - 25th International Symposium on Quality Electronic Design, ISQED 2024
AU - Karn, Rupesh Raj
AU - Knechtel, Johann
AU - Sinanoglu, Ozgur
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Confidential inference (CI) leverages data encryption to safeguard privacy while still allowing inference on the encrypted data. Various cryptographic methods, such as homomorphic encryption or order-preserving encryption (OPE), are commonly employed for CI. In this work, we examine the validity and efficiency of code-based cryptography for CI on FPGAs, focusing on the random forest (RF) machine learning (ML) model, an ensemble of decision trees. FPGAs are an excellent platform for accelerating ML inference because of their low latency, power efficiency, and high reconfigurability. However, creating hardware descriptions for encrypted ML models can be challenging, especially for ML developers unfamiliar with hardware description languages. Thus, we propose an end-to-end methodology that includes high-level synthesis to ease the implementation of ML accelerators on FPGAs. Additionally, we introduce variants of lightweight OPE tailored for CI in RFs. We demonstrate a successful and efficient implementation using the Jet and MNIST datasets on a Xilinx Artix-7 FPGA.
AB - Confidential inference (CI) leverages data encryption to safeguard privacy while still allowing inference on the encrypted data. Various cryptographic methods, such as homomorphic encryption or order-preserving encryption (OPE), are commonly employed for CI. In this work, we examine the validity and efficiency of code-based cryptography for CI on FPGAs, focusing on the random forest (RF) machine learning (ML) model, an ensemble of decision trees. FPGAs are an excellent platform for accelerating ML inference because of their low latency, power efficiency, and high reconfigurability. However, creating hardware descriptions for encrypted ML models can be challenging, especially for ML developers unfamiliar with hardware description languages. Thus, we propose an end-to-end methodology that includes high-level synthesis to ease the implementation of ML accelerators on FPGAs. Additionally, we introduce variants of lightweight OPE tailored for CI in RFs. We demonstrate a successful and efficient implementation using the Jet and MNIST datasets on a Xilinx Artix-7 FPGA.
KW - Confidential Inference
KW - FPGA
KW - HQC
KW - ML Accelerator
KW - McEliece
KW - Order-preserving Cryptography
KW - Random Forest
UR - http://www.scopus.com/inward/record.url?scp=85194077516&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194077516&partnerID=8YFLogxK
U2 - 10.1109/ISQED60706.2024.10528692
DO - 10.1109/ISQED60706.2024.10528692
M3 - Conference contribution
AN - SCOPUS:85194077516
T3 - Proceedings - International Symposium on Quality Electronic Design, ISQED
BT - Proceedings of the 25th International Symposium on Quality Electronic Design, ISQED 2024
PB - IEEE Computer Society
Y2 - 3 April 2024 through 5 April 2024
ER -