
Privacy Risk Assessment via Implicit Linkage Detection: Focusing on Sparse PII Memorization in Language Models
Probing Implicit Linkage: Assessing Privacy Risks from Sparse PII Memorization in Language Models

  • Publication
    Korean Society for Next Generation Computing Conference
  • Volume (Year)
    2025 Korean Society for Next Generation Computing Spring Conference (2025.05)
  • Pages
    pp.262-265
  • Authors
    Jinhui Zuo, Seok-Won Lee
  • Language
    English (ENG)
  • URL
    https://www.earticle.net/Article/A468960

Abstract
Complex Artificial Intelligence (AI) models pose significant privacy risks because they can memorize sensitive training data. Knowledge probing has been proposed to quantify the sensitive information a trained model has memorized. However, in large text datasets, Personally Identifiable Information (PII) is often discrete and sparsely distributed. Consequently, probing isolated PII instances and their limited context fails to determine whether the model has learned connections between related PII fragments. To address this limitation, we propose a knowledge probing method designed specifically for scenarios with sparse PII. Our method efficiently identifies and collects PII in a given dataset, then uses this set for targeted probing to evaluate the model's recall accuracy on that information. Experiments demonstrate that our framework effectively reveals a model's capacity to implicitly link related, sparse PII fragments.
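The pipeline the abstract outlines — identify and collect PII in a dataset, then use that set for targeted probing of the model's recall — could be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the regex patterns, the cloze-style prompt construction, and the `complete` callback standing in for a language model are all assumptions.

```python
import re

# Illustrative PII patterns (emails and Korean-style phone numbers).
# The paper's actual identification step is not specified here.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}-\d{3,4}-\d{4}\b"),
}

def collect_pii(corpus):
    """Step 1: scan documents and collect (prompt, target) probe pairs.

    Each PII match is turned into a cloze-style probe: the text before
    the PII span becomes the prompt, and the PII itself is the target.
    """
    probes = []
    for doc in corpus:
        for kind, pattern in PII_PATTERNS.items():
            for m in pattern.finditer(doc):
                probes.append({
                    "kind": kind,
                    "prompt": doc[:m.start()],  # context preceding the PII
                    "target": m.group(),        # the memorized secret to recall
                })
    return probes

def probe_recall(probes, complete):
    """Step 2: fraction of probes where the model reproduces the PII.

    `complete` is any callable mapping a prompt to a model completion.
    """
    if not probes:
        return 0.0
    hits = sum(1 for p in probes if p["target"] in complete(p["prompt"]))
    return hits / len(probes)
```

As a usage example, a stub model that "memorized" only one record would score 0.5 recall on a two-probe set, exposing which PII fragments the model can reproduce from context alone.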

Table of Contents

Abstract
1. Introduction
2. Related works
3. Our proposal
4. Evaluation
4.1. Results
5. Conclusion
Acknowledgement
References

Authors

  • Jinhui Zuo [ Department of Artificial Intelligence, Ajou University ]
  • Seok-Won Lee (이석원) [ Department of Software, Ajou University ] Corresponding Author


    Publication Information

    • Publication
      Korean Society for Next Generation Computing Conference
    • Frequency
      Semiannual
    • Coverage
      2021~2025
    • Decimal Classification
      KDC 566, DDC 004