MEMBERSHIP INFERENCE ATTACKS AND DEFENSES IN NEURAL NETWORK PRUNING


DR. G. KALPANA, A. RENUSRI, AYESHA BEGUM, D. NAVYA TEJA

Abstract

Neural network pruning has become an essential technique for reducing the computation and memory requirements of deploying deep neural networks on resource-constrained devices. Most existing research focuses on balancing the sparsity and accuracy of a pruned network by strategically removing insignificant parameters and retraining the pruned model. Because retraining reuses the original training samples, it can increase memorization and thereby pose serious privacy risks, which have not been investigated so far. In this paper, we conduct the first analysis of privacy risks in neural network pruning. Specifically, we investigate the impact of pruning on training data privacy, namely membership inference attacks. We first show that pruning changes prediction behavior unevenly: the pruning process disproportionately affects the pruned model's predictions for members versus non-members of the training set, and this prediction divergence further varies across classes in a fine-grained manner. Motivated by this divergence, we propose a self-attention membership inference attack against pruned neural networks. Extensive experiments rigorously evaluate the privacy impact of different pruning approaches, sparsity levels, and degrees of adversary knowledge. The proposed attack achieves higher attack performance on pruned models than eight existing membership inference attacks. In addition, we propose a new defense mechanism that protects the pruning process by mitigating the prediction divergence with a KL-divergence-based regularization; experiments demonstrate that it effectively mitigates the privacy risks while maintaining the sparsity and accuracy of the pruned models.
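To make the setting concrete, the following minimal PyTorch sketch shows one common pruning baseline (global magnitude pruning) and a per-sample measure of the prediction divergence between the original and pruned models. The function names, the choice of L1 magnitude pruning, and the use of KL divergence as the gap measure are illustrative assumptions; the paper evaluates several pruning approaches, and its exact measurements may differ.

    import torch
    import torch.nn.functional as F
    import torch.nn.utils.prune as prune

    def magnitude_prune(model, sparsity=0.6):
        # Global unstructured L1 (magnitude) pruning -- one common baseline.
        params = [(m, "weight") for m in model.modules()
                  if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
        prune.global_unstructured(params,
                                  pruning_method=prune.L1Unstructured,
                                  amount=sparsity)
        for module, name in params:
            prune.remove(module, name)  # bake the pruning mask into the weights
        return model

    @torch.no_grad()
    def prediction_divergence(original, pruned, x):
        # Per-sample KL divergence between the two models' posteriors.
        # kl_div expects log-probabilities as input and probabilities as target.
        p = F.softmax(original(x), dim=1)
        log_q = F.log_softmax(pruned(x), dim=1)
        return F.kl_div(log_q, p, reduction="none").sum(dim=1)

Comparing the output of prediction_divergence on known members and non-members is one way to reproduce the fine-grained, per-class divergence effect the abstract describes.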
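The self-attention attack itself is described only at a high level in this abstract. As a hedged illustration, an attack classifier of this kind could treat the victim model's posterior vector as a sequence of per-class tokens and apply self-attention before a member/non-member head, as sketched below. The architecture, dimensions, and the class name SelfAttentionMIA are hypothetical, not taken from the paper.

    import torch

    class SelfAttentionMIA(torch.nn.Module):
        # Hypothetical attack classifier: each class posterior becomes a
        # token, self-attention mixes the tokens, and a linear head
        # predicts member (1) vs. non-member (0).
        def __init__(self, num_classes, d_model=64, nhead=4, num_layers=2):
            super().__init__()
            self.embed = torch.nn.Linear(1, d_model)
            layer = torch.nn.TransformerEncoderLayer(
                d_model=d_model, nhead=nhead, batch_first=True)
            self.encoder = torch.nn.TransformerEncoder(layer, num_layers=num_layers)
            self.head = torch.nn.Linear(d_model, 2)

        def forward(self, posteriors):                     # (B, num_classes)
            tokens = self.embed(posteriors.unsqueeze(-1))  # (B, C, d_model)
            h = self.encoder(tokens).mean(dim=1)           # pool over class tokens
            return self.head(h)                            # (B, 2) membership logits

In the usual shadow-model setup, such a classifier would be trained with cross-entropy on posteriors whose membership labels are known, then applied to the pruned victim model's outputs.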
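The proposed defense mitigates the prediction divergence with a KL-divergence distance during the pruning process, but the abstract does not spell out the exact formulation. A minimal sketch, assuming the KL term is added as a regularizer to the pruned model's fine-tuning loss with the original model's posterior as the reference distribution, is given below; the weight lam, the reference choice, and the function name are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def defended_finetune_step(pruned, original, x, y, optimizer, lam=1.0):
        # One fine-tuning step for the pruned model with a KL regularizer.
        # The KL term pulls the pruned model's posterior toward the original
        # model's, shrinking the member/non-member prediction divergence the
        # attack exploits. `lam` is an illustrative trade-off weight, not a
        # value taken from the paper.
        optimizer.zero_grad()
        logits = pruned(x)
        with torch.no_grad():
            ref = F.softmax(original(x), dim=1)
        kl = F.kl_div(F.log_softmax(logits, dim=1), ref, reduction="batchmean")
        loss = F.cross_entropy(logits, y) + lam * kl
        loss.backward()
        optimizer.step()
        return loss.item()

Raising lam trades attack resistance against task accuracy, which matches the abstract's claim that the defense preserves the sparsity and accuracy of the pruned models while mitigating the privacy risk.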


Article Details

How to Cite
DR. G. KALPANA, A. RENUSRI, AYESHA BEGUM, D. NAVYA TEJA. (2023). MEMBERSHIP INFERENCE ATTACKS AND DEFENSES IN NEURAL NETWORK PRUNING. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 14(2), 313–322. https://doi.org/10.17762/turcomat.v14i2.13657
Section
Research Articles