Hikmat Yar, Amjid Ali, Zulfiqar Ahmad Khan, Noman Khan, Min Je Kim, Su Min Lee, Sung Wook Baik
Language
English (ENG)
URL
https://www.earticle.net/Article/A468796
Full-Text Information
Abstract
English
In recent years, audio-based anomaly recognition has attracted the attention of the research community, owing to the growing number of abnormal situations. In the past, researchers mainly focused on video-based anomaly recognition; however, occlusion is one of the most important factors that can render an anomalous object unidentifiable. Therefore, in this paper, we propose a modified vision transformer that utilizes Shifted Patch Tokenization (SPT) and a Local Self-Attention (LSA) mechanism, and reduces the number of multilayer perceptrons in the head, enabling the model to capture rich spatial information within the spectrograms of anomalous audio. The proposed model is evaluated on the Sound Events for Surveillance Applications (SESA) dataset and obtains 87% testing accuracy. Thus, the proposed model is an efficient and effective solution for audio-based anomaly recognition.
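The Shifted Patch Tokenization step mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: a spectrogram is concatenated with four diagonally shifted copies of itself along the channel axis before being split into non-overlapping patch tokens, so each token sees spatial context beyond its own patch. Here `np.roll` stands in for SPT's zero-padded spatial shifts, and the shift size, patch size, and function name are illustrative choices.

```python
import numpy as np

def shifted_patch_tokenization(spec, patch=4, shift=None):
    """Tokenize a (H, W, C) spectrogram with shifted-copy augmentation.

    Concatenates the input with four diagonally shifted copies along the
    channel axis (np.roll approximates the zero-padded shifts used in SPT),
    then splits the result into flattened non-overlapping patch tokens.
    """
    H, W, C = spec.shape
    s = shift if shift is not None else patch // 2
    copies = [spec]
    for dy, dx in [(-s, -s), (-s, s), (s, -s), (s, s)]:
        copies.append(np.roll(spec, (dy, dx), axis=(0, 1)))
    x = np.concatenate(copies, axis=-1)          # (H, W, 5C)

    # Split into non-overlapping patch tokens and flatten each patch.
    n_h, n_w = H // patch, W // patch
    x = x[: n_h * patch, : n_w * patch]
    x = x.reshape(n_h, patch, n_w, patch, -1)
    tokens = x.transpose(0, 2, 1, 3, 4).reshape(n_h * n_w, -1)
    return tokens                                # (n_h * n_w, patch * patch * 5C)

# Example: a 32x32 single-channel spectrogram with 4x4 patches yields
# 64 tokens, each of dimension 4 * 4 * 5 = 80.
tokens = shifted_patch_tokenization(np.zeros((32, 32, 1)), patch=4)
```

In a full model these tokens would then be linearly projected and fed to the transformer encoder; the shifted copies enlarge each token's receptive field, which is the property SPT exploits on small datasets.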