• Journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2020, Vol. 42 ›› Issue (09): 1578-1586.

• Graphics and Images •

A high spatial-temporal fusion method for remote sensing based on deep learning and super-resolution reconstruction

ZHANG Yong-mei1, HUA Rui-min1, MA Jian-zhe2, HU Lei3


  

1. (1. School of Information Science and Technology, North China University of Technology, Beijing 100144;
     2. Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong 00852;
     3. School of Computer Information Engineering, Jiangxi Normal University, Nanchang 330022, China)

• Received: 2019-11-07 Revised: 2020-02-22 Accepted: 2020-09-25 Online: 2020-09-25 Published: 2020-09-24
  • Supported by:
    National Natural Science Foundation of China (61371143, 61662033); Industry-University Cooperative Education Project of the Department of Higher Education, Ministry of Education (201801121002); 2019 Project of the Computer Education Research Association of Chinese Universities (CERACU2019R05); "Tiancheng Huizhi" Innovation Promotion Fund for Education of the Science and Technology Development Center, Ministry of Education (2018A03029); 2019 Fundamental Research Funds of the Beijing Municipal Education Commission (110052971921/002)

A high spatial-temporal fusion method based on deep learning and super-resolution reconstruction

ZHANG Yong-mei1, HUA Rui-min1, MA Jian-zhe2, HU Lei3

  1. (1.School of Information Science and Technology,North China University of Technology,Beijing 100144;

     2.Department of Electronic & Information Engineering,The Hong Kong Polytechnic University,Hong Kong 00852;

    3.School of Computer Information Engineering,Jiangxi Normal University,Nanchang 330022,China)

  • Received:2019-11-07 Revised:2020-02-22 Accepted:2020-09-25 Online:2020-09-25 Published:2020-09-24

Abstract: To address the "spatial-temporal trade-off" of remote sensing images, a high spatial-temporal fusion method for remote sensing based on an improved STARFM is proposed. SRCNN is used for the super-resolution reconstruction of the low-resolution images. Because the resolution gap between the two groups of images to be fused is too large, the network is difficult to train, so both groups are first resampled to an intermediate resolution, and the low-resolution images are reconstructed by SRCNN with the high-resolution images as prior knowledge. The resulting intermediate-resolution images are then resampled and reconstructed by a second SRCNN pass, this time with the original high-resolution images as prior knowledge. Compared with the images obtained by interpolation-based resampling, the final reconstructed images achieve higher PSNR and SSIM, alleviating the systematic error caused by sensor differences. The STARFM fusion method relies on expert knowledge to extract hand-crafted features both when selecting spectrally similar neighbor pixels and when computing their weights; following the basic idea of STARFM spatial-temporal fusion, features are instead extracted automatically with SRCNN as the basic framework. Experimental results show that the MSE of the proposed method is lower than that of the original method, which further improves the quality of remote sensing spatial-temporal fusion and helps make full use of remote sensing images.
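To make the two-stage reconstruction concrete, the sketch below shows one possible PyTorch realization; it is not the authors' implementation. The network follows the standard three-layer SRCNN design (9-1-5 kernels with 64/32 filters), while the band count, the intermediate and full output sizes, and the helper names (two_stage_reconstruct, srcnn_mid, srcnn_full) are illustrative assumptions.

```python
# A minimal sketch of two-stage SRCNN reconstruction (assumed PyTorch setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Standard three-layer SRCNN: patch extraction, non-linear mapping, reconstruction."""
    def __init__(self, bands=1):
        super().__init__()
        self.conv1 = nn.Conv2d(bands, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)
        self.conv3 = nn.Conv2d(32, bands, kernel_size=5, padding=2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)

def two_stage_reconstruct(low_res, srcnn_mid, srcnn_full, mid_size, full_size):
    """Upsample to an intermediate resolution and refine with a network trained
    against the high-resolution image resampled to that size, then upsample to
    full resolution and refine with a second network trained against the
    original high-resolution image."""
    x = F.interpolate(low_res, size=mid_size, mode='bicubic', align_corners=False)
    x = srcnn_mid(x)
    x = F.interpolate(x, size=full_size, mode='bicubic', align_corners=False)
    return srcnn_full(x)

# Example usage (shapes are placeholders): a 16x16 low-resolution patch
# upsampled to 64x64 via an intermediate 32x32 step.
if __name__ == "__main__":
    lr = torch.rand(1, 1, 16, 16)
    out = two_stage_reconstruct(lr, SRCNN(), SRCNN(), mid_size=(32, 32), full_size=(64, 64))
    print(out.shape)  # torch.Size([1, 1, 64, 64])
```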

Key words: spatial-temporal fusion, improved STARFM, SRCNN, automatic feature extraction

Abstract: Aiming at the "spatial-temporal trade-off" of remote sensing images, a high spatial-temporal fusion method based on an improved STARFM is proposed. SRCNN is used for the super-resolution reconstruction of the low-resolution images. Because the resolution gap between the two groups of images to be fused is large, the network is difficult to train. First, both groups are resampled to an intermediate resolution, and the low-resolution images are reconstructed by SRCNN with the high-resolution images as prior knowledge. The resulting intermediate-resolution images are then resampled and reconstructed by a second SRCNN pass, with the original high-resolution images as prior knowledge. The final reconstructed images achieve higher PSNR and SSIM than the images resampled by interpolation, alleviating the systematic error caused by sensor differences. The STARFM fusion method uses expert knowledge to extract hand-crafted features when selecting spectrally similar neighbor pixels and computing their weights. Based on the basic idea of STARFM spatial-temporal fusion, an automatic feature extraction method with SRCNN as the basic framework is realized. Experimental results show that the proposed method yields a lower MSE than the original STARFM, which further improves the quality of spatial-temporal fusion.
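For reference, the sketch below shows one standard way the reported criteria (MSE, PSNR, SSIM) can be computed when comparing a fused or reconstructed band against a reference image. It follows the usual definitions and uses scikit-image for SSIM; it is not the evaluation code used in the paper, and single-band float images scaled to [0, 1] are assumed.

```python
# Standard image-quality metrics for a fused/reconstructed band (illustrative only).
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(reference, result, data_range=1.0):
    """Return MSE, PSNR, and SSIM between a reference band and a result band."""
    ref = reference.astype(np.float64)
    res = result.astype(np.float64)
    mse = np.mean((ref - res) ** 2)
    psnr = 10.0 * np.log10((data_range ** 2) / mse) if mse > 0 else float("inf")
    ssim = structural_similarity(ref, res, data_range=data_range)
    return {"MSE": mse, "PSNR": psnr, "SSIM": ssim}

# Example: compare a hypothetical fused band against a reference band in [0, 1].
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((256, 256))
    fused = np.clip(reference + rng.normal(0, 0.01, reference.shape), 0.0, 1.0)
    print(evaluate(reference, fused))
```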


Key words: spatial-temporal fusion, improved STARFM, SRCNN, automatic feature extraction