Computer Engineering & Science ›› 2020, Vol. 42 ›› Issue (09): 1578-1586.
ZHANG Yong-mei1,HUA Rui-min1,MA Jian-zhe2,HU Lei3
Abstract: To address the "space-time conflict" of remote sensing images (the trade-off between spatial and temporal resolution), a high spatial-temporal fusion algorithm based on an improved STARFM is proposed. SRCNN is used for the super-resolution reconstruction of the low-resolution images. Because the resolution gap between the two groups of fusion images is large, the network is difficult to train directly. Firstly, both groups are resampled to an intermediate resolution, and the low-resolution images are reconstructed by SRCNN with the high-resolution images as prior knowledge. Secondly, the resulting intermediate-resolution images are resampled again and reconstructed by SRCNN with the original high-resolution images as prior knowledge. The reconstructed images achieve higher PSNR and SSIM than images resampled by interpolation, alleviating the systematic error caused by sensor differences. The original STARFM fusion method relies on expert knowledge and hand-crafted features to select "spectrally similar neighbor pixels" and compute their weights. Building on the basic idea of STARFM, an automatic feature extraction method with SRCNN as the basic framework is implemented. Experimental results show that the proposed method achieves a lower MSE than the original STARFM, further improving the quality of spatial-temporal fusion.
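For readers unfamiliar with the baseline, the hand-crafted weighting step that the proposed SRCNN-based feature extractor replaces can be sketched as follows. This is a simplified, single-band, single-pair illustration of the classical STARFM weighting idea, not the paper's improved network; the window sizes, the similarity threshold `2*std/n_classes`, and the spatial-distance constant `A` are common choices in STARFM descriptions, assumed here for illustration.

```python
import numpy as np

def starfm_pixel(fine_k, coarse_k, coarse_0, n_classes=4, A=1.0, eps=1e-6):
    """Predict the fine-resolution value at the window centre for date t0.

    fine_k, coarse_k, coarse_0 : square windows of equal shape holding the
    fine image at base date t_k, the coarse image at t_k, and the coarse
    image at the prediction date t0 (all co-registered and radiometrically
    matched; single band for simplicity).
    """
    h, w = fine_k.shape
    ci, cj = h // 2, w // 2
    # 1. "Spectrally similar neighbor pixels": values close to the centre
    #    pixel, using the common STARFM-style threshold 2*std/n_classes.
    thresh = fine_k.std() * 2.0 / n_classes
    similar = np.abs(fine_k - fine_k[ci, cj]) <= thresh
    # 2. Spectral (fine vs. coarse), temporal (coarse change) and spatial
    #    distance terms; eps avoids division by zero.
    S = np.abs(fine_k - coarse_k) + eps
    T = np.abs(coarse_0 - coarse_k) + eps
    ii, jj = np.indices((h, w))
    D = 1.0 + np.hypot(ii - ci, jj - cj) / A
    C = S * T * D
    # 3. Normalised inverse-distance weights over similar pixels only.
    inv = np.where(similar, 1.0 / C, 0.0)
    W = inv / inv.sum()
    # 4. Weighted prediction: transfer the coarse-scale temporal change
    #    F(t0) ≈ Σ W * (M(t0) + F(t_k) - M(t_k)) to the fine scale.
    return float(np.sum(W * (coarse_0 + fine_k - coarse_k)))

# Toy usage: a uniform coarse-scale change of +3 between t_k and t0 is
# transferred directly to the fine pixel.
fine_k = np.full((5, 5), 10.0)
coarse_k = np.full((5, 5), 9.0)
coarse_0 = np.full((5, 5), 12.0)
pred = starfm_pixel(fine_k, coarse_k, coarse_0)
```

The improved method in the paper learns this neighbor selection and weighting automatically with an SRCNN-style network instead of the fixed `S*T*D` combination above.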
Key words: spatial-temporal fusion, improved STARFM, SRCNN, automatic feature extraction
ZHANG Yong-mei, HUA Rui-min, MA Jian-zhe, HU Lei. A high spatial temporal fusion method based on deep learning and super resolution reconstruction[J]. Computer Engineering & Science, 2020, 42(09): 1578-1586.
URL: http://joces.nudt.edu.cn/EN/Y2020/V42/I09/1578