TY - GEN
T1 - Selective Forgetting in Task-Progressive Learning Through Machine Unlearning
AU - Karn, Rupesh Raj
AU - Knechtel, Johann
AU - Sinanoglu, Ozgur
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Task-progressive learning (TPL) entails training a model sequentially on multiple tasks while mitigating the phenomenon of 'catastrophic forgetting', where new tasks erase previously learned knowledge. A TPL model can face security challenges, as some tasks may be affected by so-called backdoor and adversarial attacks. These attacks exploit vulnerabilities in machine learning models by implanting malicious triggers or perturbing inputs to induce misclassification. This paper explores the role of machine unlearning in TPL to counter those attacks. We consider two common architectures: static and dynamic networks. In static architectures, each task is learned by applying weight-update penalties through regularization, without any change to the neural architecture. In contrast, dynamic architectures expand the network for each task while freezing previously learned parameters. We explore the potential for machine unlearning in each scenario to counteract the specified attacks. Specifically, we demonstrate how mechanisms for selective forgetting can be adapted to TPL models to efficiently 'unlearn' tasks compromised by backdoor and adversarial attacks while preserving the knowledge of other tasks. We demonstrate our method using the MNIST image dataset.
AB - Task-progressive learning (TPL) entails training a model sequentially on multiple tasks while mitigating the phenomenon of 'catastrophic forgetting', where new tasks erase previously learned knowledge. A TPL model can face security challenges, as some tasks may be affected by so-called backdoor and adversarial attacks. These attacks exploit vulnerabilities in machine learning models by implanting malicious triggers or perturbing inputs to induce misclassification. This paper explores the role of machine unlearning in TPL to counter those attacks. We consider two common architectures: static and dynamic networks. In static architectures, each task is learned by applying weight-update penalties through regularization, without any change to the neural architecture. In contrast, dynamic architectures expand the network for each task while freezing previously learned parameters. We explore the potential for machine unlearning in each scenario to counteract the specified attacks. Specifically, we demonstrate how mechanisms for selective forgetting can be adapted to TPL models to efficiently 'unlearn' tasks compromised by backdoor and adversarial attacks while preserving the knowledge of other tasks. We demonstrate our method using the MNIST image dataset.
KW - Dynamic Networks
KW - Machine Unlearning
KW - Selective Forgetting
KW - Task Progressive Learning
KW - Weight Importance
UR - http://www.scopus.com/inward/record.url?scp=105002271649&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105002271649&partnerID=8YFLogxK
U2 - 10.1109/ICMLC63072.2024.10935063
DO - 10.1109/ICMLC63072.2024.10935063
M3 - Conference contribution
AN - SCOPUS:105002271649
T3 - Proceedings - International Conference on Machine Learning and Cybernetics
SP - 77
EP - 84
BT - Proceedings of 2024 International Conference on Machine Learning and Cybernetics, ICMLC 2024
PB - IEEE Computer Society
T2 - 23rd International Conference on Machine Learning and Cybernetics, ICMLC 2024
Y2 - 20 September 2024 through 23 September 2024
ER -