The 9th International Conference on Next Generation Computing 2023 (2023.12)
Pages
pp.107-109
Authors
Lei Yan, Zhiguo Yan
Language
English (ENG)
URL
https://www.earticle.net/Article/A448129
Full-Text Information
Abstract
English
Automatic identification of residential areas from remote sensing images is beneficial to tasks such as urban planning and disaster assessment. Currently, residential area extraction tasks are primarily based on deep learning methods using single-modal data, and the information that a single modality can express is limited. Therefore, this paper proposes an end-to-end multi-modal semantic segmentation model that extracts features of remote sensing images and mobile phone signaling data through a dual-branch encoder, fuses the two features, and concatenates them with the feature map of the decoder stage. Experimental results show that our proposed method outperforms other models and can effectively identify residential areas.
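The dual-branch design described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the 1×1 convolution used for fusion, and the class names (`DualBranchSegNet`, `img_enc`, `sig_enc`) are all assumptions. It assumes the mobile phone signaling data has been rasterized to the same spatial grid as the imagery, so each modality can pass through its own encoder before the features are fused and concatenated with the decoder feature map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchSegNet(nn.Module):
    """Hypothetical dual-branch encoder-decoder sketch.

    One branch encodes the remote sensing image, the other a
    rasterized mobile phone signaling map; the fused features are
    concatenated with the decoder feature map before classification.
    All layer widths are illustrative, not taken from the paper.
    """
    def __init__(self, img_ch=3, sig_ch=1, base=16, n_classes=2):
        super().__init__()
        def enc_block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2))                 # halve resolution
        self.img_enc = enc_block(img_ch, base)   # image branch
        self.sig_enc = enc_block(sig_ch, base)   # signaling branch
        self.fuse = nn.Conv2d(2 * base, base, 1) # fuse the two modalities
        self.dec = nn.Sequential(                # restore resolution
            nn.ConvTranspose2d(base, base, 2, stride=2), nn.ReLU())
        # decoder features are concatenated with the (upsampled)
        # fused features before the per-pixel classification head
        self.head = nn.Conv2d(2 * base, n_classes, 1)

    def forward(self, img, sig):
        f_img = self.img_enc(img)
        f_sig = self.sig_enc(sig)
        fused = self.fuse(torch.cat([f_img, f_sig], dim=1))
        d = self.dec(fused)
        fused_up = F.interpolate(fused, size=d.shape[-2:])
        return self.head(torch.cat([d, fused_up], dim=1))

# Toy forward pass: a 3-channel image and a 1-channel signaling map
img = torch.randn(1, 3, 64, 64)
sig = torch.randn(1, 1, 64, 64)
out = DualBranchSegNet()(img, sig)
print(out.shape)  # per-pixel class scores at input resolution
```

The output has one score map per class at the input resolution, so a standard per-pixel cross-entropy loss can be applied against a residential-area mask.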
Table of Contents
Abstract
I. INTRODUCTION
II. THE PROPOSED MODEL
  A. Overall Architecture
  B. Encoder - Feature Extracting
  C. Decoder - Resolution Restoring
III. EXPERIMENTAL RESULT
  A. Dataset
  B. Evaluation Metrics
  C. Results
IV. CONCLUSION
REFERENCES
Keywords
feature fusion; residential area extraction; semantic segmentation
Authors
Lei Yan (Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China)
Corresponding author
Zhiguo Yan (Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China)