How are Korean Neural Language Models ‘surprised’ Layerwisely?

Article Information

Abstract (English)
Since the introduction of BERT, recent work has shown success in detecting when a word is anomalous in its sentence context. Because the likelihood score is not an appropriate tool for identifying the exact type of linguistic anomaly, Li et al. (2021) recently adopted Gaussian models for density estimation at intermediate layers of pretrained language models, finding that different English pretrained language models employ separate mechanisms to recognize different types of linguistic anomaly. Following Li et al.’s methodology, we probe whether Korean counterparts such as KoBERT and KR-BERT are sensitive to different levels of linguistic anomaly, just as English-based language models are. To investigate this issue, we construct a suite of test data involving morphosyntactic, semantic, and commonsense anomalies in Korean and apply the two Korean-based models to the relevant sentences. We find that KoBERT and KR-BERT show higher surprisal gaps across layers when the anomaly is morphosyntactic than when it is semantic. By contrast, commonsense anomalies do not exhibit a surprisal gap at any layer. We thus report that, like their English counterparts, KoBERT and KR-BERT use different mechanisms to track different types of linguistic anomaly.
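
The layerwise probe summarized above can be sketched compactly. Below is a minimal, hedged illustration of a Li et al. (2021)-style surprisal gap, not the authors' actual code: the multilingual BERT checkpoint (a stand-in for KoBERT/KR-BERT), the probed layer index, and the toy Korean sentences are all assumptions introduced here for illustration.

```python
# Minimal sketch of a layerwise Gaussian "surprisal gap" in the spirit of
# Li et al. (2021). The model ID, layer index, and sentences below are
# illustrative assumptions, not the paper's exact setup.
import numpy as np
import torch
from sklearn.mixture import GaussianMixture
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "bert-base-multilingual-cased"  # stand-in for KoBERT / KR-BERT
LAYER = 8                                  # intermediate layer to probe (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, output_hidden_states=True).eval()

def token_vectors(sentences, layer):
    """Stack the token representations that one hidden layer assigns."""
    chunks = []
    with torch.no_grad():
        for s in sentences:
            enc = tokenizer(s, return_tensors="pt")
            hidden = model(**enc).hidden_states[layer][0]  # (seq_len, dim)
            chunks.append(hidden.numpy())
    return np.concatenate(chunks)

# Fit one full-covariance Gaussian on well-formed reference text; in practice
# this would be a large Korean corpus, not two toy sentences.
reference = ["나는 어제 도서관에서 책을 읽었다.", "아이가 공원에서 친구와 놀고 있다."]
gmm = GaussianMixture(n_components=1, covariance_type="full")
gmm.fit(token_vectors(reference, LAYER))

def mean_surprisal(sentences, layer):
    """Average negative log-density of token vectors under the Gaussian."""
    return float(-gmm.score_samples(token_vectors(sentences, layer)).mean())

# Surprisal gap: anomalous minus well-formed, in units of the standard
# deviation of the reference surprisals.
correct = ["고양이가 생선을 먹었다."]
anomalous = ["고양이가 생선을 먹었니다."]  # toy morphosyntactic anomaly
ref_std = (-gmm.score_samples(token_vectors(reference, LAYER))).std()
gap = (mean_surprisal(anomalous, LAYER) - mean_surprisal(correct, LAYER)) / ref_std
print(f"layer {LAYER} surprisal gap: {gap:.2f}")
```

On this definition, a larger gap at a given layer means that layer assigns markedly lower density to the anomalous tokens; the paper reports such gaps for morphosyntactic (and, more weakly, semantic) anomalies but none for commonsense ones.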

Table of Contents

Abstract
1. Introduction
2. Related Works
2.1. Li et al. (2021)
3. Experimental Setup
3.1. Linguistic Datasets in Korean
3.2. Surprisal Gap
3.3. Adopted System
4. Results
5. Conclusion
References

Authors

  • Sunjoo Choi [ Dongguk University/Research Professor ] First Author
  • Myung-Kwan Park [ Dongguk University/Professor ] Co-Corresponding Author
  • Euhee Kim [ Shinhan University/Professor ] Co-Corresponding Author

Journal Information

  • Journal: 언어과학 [Journal of Language Sciences]
  • Frequency: Quarterly
  • pISSN: 1225-2522
  • Coverage: 1994–2025
  • Indexing: KCI-indexed
  • Classification: KDC 705, DDC 405