Complex Artificial Intelligence (AI) models pose significant privacy risks because they can memorize sensitive training data. Knowledge probing has been proposed to quantify the sensitive information memorized by a trained model. However, in large text datasets, Personally Identifiable Information (PII) is often discrete and sparsely distributed. Consequently, probing isolated PII instances with their limited context cannot reliably determine whether the model has learned connections between related PII fragments. To address this limitation, we propose a knowledge probing method designed specifically for scenarios with sparse PII. Our method efficiently identifies and collects the PII in a given dataset, then uses this set for targeted probing to evaluate the model's recall of that information. Experiments demonstrate that our framework effectively reveals a model's capacity to implicitly link related, sparse PII fragments.
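The two steps the abstract describes (collect PII from the corpus, then probe the model's recall of it) could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the regex patterns, the mask-and-complete probing style, and the `model_complete` stub are all assumptions introduced here for illustration, since the abstract does not specify the extraction or probing mechanism.

```python
import re

# Hypothetical regex-based PII collector; a real system would likely use
# NER or dedicated PII recognizers rather than two simple patterns.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}-\d{3,4}-\d{4}\b"),
}

def collect_pii(corpus):
    """Scan each document for sparse PII fragments and record them with context."""
    found = []
    for doc in corpus:
        for kind, pattern in PII_PATTERNS.items():
            for match in pattern.findall(doc):
                found.append((kind, match, doc))
    return found

def probe_recall(pii_records, model_complete):
    """Mask each PII value in its context, ask the model to fill it in,
    and report the fraction of values recalled exactly."""
    hits = 0
    for kind, value, doc in pii_records:
        prompt = doc.replace(value, "[MASK]")
        if model_complete(prompt) == value:
            hits += 1
    return hits / len(pii_records) if pii_records else 0.0

# Toy corpus and a stub "model" that has memorized exactly one value.
corpus = [
    "Contact Alice at alice@example.com for details.",
    "Bob's number is 010-1234-5678.",
]
records = collect_pii(corpus)
recall = probe_recall(records, lambda prompt: "alice@example.com")
print(len(records), recall)
```

Exact string match is the simplest possible recall criterion; a fuller evaluation would also probe whether the model links related fragments (e.g. a name to a phone number found elsewhere in the corpus), which is the gap the proposed method targets.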
Table of Contents
Abstract
1. Introduction
2. Related works
3. Our proposal
4. Evaluation
4.1. Results
5. Conclusion
Acknowledgement
References
Authors
Jinhui Zuo [ Department of Artificial Intelligence, Ajou University ]
Seok-Won Lee [ Department of Software, Ajou University ]
Corresponding Author