Automatic identification of residential areas from remote sensing images benefits tasks such as urban planning and disaster assessment. Current residential area extraction methods are primarily deep learning approaches built on single-modal data, but the information a single modality can express is limited. This paper therefore proposes an end-to-end multi-modal semantic segmentation model that extracts features from remote sensing images and mobile phone signaling data through a dual-branch encoder, fuses the two feature streams, and concatenates the fused features with the feature maps of the decoder stage. Experimental results show that the proposed method outperforms other models and can effectively identify residential areas.
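The dual-branch design described in the abstract can be sketched schematically: each modality passes through its own encoder branch, the two feature maps are fused, and the fusion result is concatenated with a decoder-stage feature map. The shapes, the use of channel-wise concatenation for fusion, and the placeholder encoder below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def encode(x, out_channels):
    # Stand-in for one convolutional encoder branch: halve the spatial
    # resolution by average pooling, then apply a random channel
    # projection (a placeholder for learned convolutions).
    b, c, h, w = x.shape
    pooled = x.reshape(b, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5))
    proj = np.random.rand(out_channels, c)
    return np.einsum('oc,bchw->bohw', proj, pooled)

# Two modalities: a remote sensing image and a rasterized grid of
# mobile phone signaling intensity (hypothetical shapes).
image = np.random.rand(1, 3, 64, 64)      # RGB image
signaling = np.random.rand(1, 1, 64, 64)  # single-channel signaling grid

# Dual-branch encoder: one branch per modality.
f_img = encode(image, 16)      # (1, 16, 32, 32)
f_sig = encode(signaling, 16)  # (1, 16, 32, 32)

# Fuse the two feature maps (channel-wise concatenation as one option).
fused = np.concatenate([f_img, f_sig], axis=1)  # (1, 32, 32, 32)

# Decoder stage: concatenate the fused features with a decoder
# feature map of matching spatial size before further upsampling.
decoder_feat = np.random.rand(1, 32, 32, 32)
merged = np.concatenate([fused, decoder_feat], axis=1)  # (1, 64, 32, 32)
print(merged.shape)
```

In a real implementation each branch would be a learned CNN and fusion might involve attention or gating rather than plain concatenation; the sketch only shows how the feature streams flow through the architecture.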
Table of Contents
Abstract
I. INTRODUCTION
II. THE PROPOSED MODEL
  A. Overall Architecture
  B. Encoder-Feature Extracting
  C. Decoder-Resolution Restoring
III. EXPERIMENTAL RESULT
  A. Dataset
  B. Evaluation Metrics
  C. Results
IV. CONCLUSION
REFERENCES
Authors
Lei Yan [Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China]
Corresponding author
Zhiguo Yan [Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China]