Abstract The spatial and temporal adaptive reflectance fusion model (STARFM) has limited practical application because it enforces the often-invalid assumption that land cover change does not occur between the prior/posterior and target dates. To address this challenge, we propose a spatiotemporal adaptive fusion model for NDVI products (STAFFN) that better blends high-spatial-resolution and high-temporal-resolution information from multiple sensors. Compared with existing spatiotemporal fusion models, the proposed model integrates an initial prediction into a hierarchical strategy for selecting similar pixels, and can therefore capture landscape changes well. Experiments comparing spatial detail and temporal abundance among MODIS, Landsat, and the fusion results show that the predicted data accurately capture temporal changes while preserving fine-spatial-resolution detail. Model comparison also shows that STAFFN produces consistently lower biases than STARFM and the flexible spatiotemporal data fusion model (FSDAF). A synthetic NDVI product (342 scenes in total) was generated with STAFFN at a 16-day revisit frequency and 30-m spatial resolution from 2000 to 2014. With this product, we further produced a 15-year spatiotemporal change-monitoring map of the Poyang Lake wetland. Results show that the dry-season water area shrank by about 38.3 km² yr⁻¹ over the past 15 years, a decrease of 18.24% of the lake area between 2001 and 2014, while wetland vegetation expanded, increasing by 10.08% of the lake area over the same period. Our study indicates that the STAFFN model can be reasonably applied to monitoring wetland dynamics and can be easily adapted for use with other ecosystems.
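To make the fusion idea concrete, the sketch below illustrates the general STARFM-style principle that the abstract builds on: add the coarse-sensor (MODIS-like) temporal NDVI change to a fine-resolution (Landsat-like) base image, averaging the change over spectrally similar neighbors. This is a minimal illustration under assumed inputs (co-registered arrays on a common grid); the function name `fuse_ndvi` and its parameters are hypothetical, and this is not the authors' STAFFN algorithm, which additionally uses an initial prediction and a hierarchical similar-pixel selection.

```python
import numpy as np

def fuse_ndvi(fine_t0, coarse_t0, coarse_tp, win=5, n_similar=10):
    """Predict a fine-resolution NDVI image at the target date.

    Sketch of similar-pixel weighted fusion (hypothetical helper):
    for each pixel, find the spectrally most similar pixels in a
    local window of the fine base image, then add the mean coarse
    temporal increment over those neighbors to the base value.
    Assumes all arrays are co-registered on the same grid.
    """
    h, w = fine_t0.shape
    delta = coarse_tp - coarse_t0          # coarse-sensor temporal change
    out = np.empty_like(fine_t0)
    r = win // 2
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            patch = fine_t0[i0:i1, j0:j1].ravel()
            dpatch = delta[i0:i1, j0:j1].ravel()
            # similar pixels = smallest spectral difference to the center
            order = np.argsort(np.abs(patch - fine_t0[i, j]))[:n_similar]
            out[i, j] = fine_t0[i, j] + dpatch[order].mean()
    return out
```

If the coarse change is spatially uniform, the prediction reduces to the fine base image shifted by that change, which is the sanity check one would expect from any such blending scheme.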