A Multimodal Network For Residential Areas Semantic Segmentation

  • Publication
    Conference of the Korean Society for Next Generation Computing
  • Volume/Issue (Year)
    The 9th International Conference on Next Generation Computing 2023 (2023.12)
  • Pages
    pp. 107-109
  • Authors
    Lei Yan, Zhiguo Yan
  • Language
    English (ENG)
  • URL
    https://www.earticle.net/Article/A448129


Abstract
Automatic identification of residential areas from remote sensing images benefits tasks such as urban planning and disaster assessment. Current residential area extraction methods are primarily deep learning models trained on single-modal data, but the information a single modality can express is limited. This paper therefore proposes an end-to-end multi-modal semantic segmentation model that extracts features from remote sensing images and mobile phone signaling data through a dual-branch encoder, fuses the two feature streams, and concatenates them with the decoder-stage feature maps. Experimental results show that the proposed method outperforms other models and can effectively identify residential areas.
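The abstract's data flow — two encoder branches (one per modality), feature fusion, then concatenation with a decoder-stage feature map — can be sketched at a toy level. This is a minimal stdlib-only illustration of that wiring, not the paper's implementation: all function names, weights, and the sum-fusion choice are illustrative assumptions.

```python
# Toy sketch of the dual-branch encode -> fuse -> decoder-concat pipeline
# described in the abstract. Features are plain lists of floats; the
# "encoder" stages, weights, and sum fusion are illustrative assumptions.

def encoder_branch(x, weight):
    """Stand-in for one encoder branch: scale each input value."""
    return [v * weight for v in x]

def fuse(img_feat, sig_feat):
    """Fuse the two modalities element-wise (sum fusion here)."""
    return [a + b for a, b in zip(img_feat, sig_feat)]

def decode(fused, decoder_feat):
    """Concatenate fused features with a decoder-stage feature map,
    then threshold to a per-pixel residential (1) / other (0) label."""
    concat = fused + decoder_feat
    return [1 if v > 0 else 0 for v in concat]

# Toy inputs: one "pixel row" of image values and signaling counts.
image = [0.2, -0.5, 0.9]
signaling = [0.1, 0.3, -0.2]

img_feat = encoder_branch(image, 2.0)       # image branch
sig_feat = encoder_branch(signaling, 3.0)   # signaling branch
fused = fuse(img_feat, sig_feat)
mask = decode(fused, decoder_feat=[0.0, -1.0])  # -> [1, 0, 1, 0, 0]
```

A real implementation would replace the scalar branches with convolutional stages and the threshold with a learned segmentation head; the point here is only the two-branch fusion and the skip-style concatenation with decoder features.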

Contents

Abstract
I. INTRODUCTION
II. THE PROPOSED MODEL
A. Overall Architecture
B. Encoder-Feature Extracting
C. Decoder-Resolution Restoring
III. EXPERIMENTAL RESULT
A. Dataset
B. Evaluation Metrics
C. Results
IV. CONCLUSION
REFERENCES

Authors

  • Lei Yan [ Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China ] Corresponding author
  • Zhiguo Yan [ Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China ]


    Journal Information

    • Publication
      Conference of the Korean Society for Next Generation Computing
    • Frequency
      Semiannual
    • Coverage
      2021~2025
    • Decimal Classification
      KDC 566, DDC 004