Forest ecosystems play a fundamental role in natural balances and climate mechanisms through their contribution to global carbon storage. Their sustainable management and conservation are crucial in the current context of global warming and biodiversity conservation. To tackle such challenges, earth observation data have been identified as a valuable source of information. While earth observation data constitute an unprecedented opportunity to monitor forest ecosystems, their effective exploitation still poses serious challenges, since multimodal information needs to be combined to describe complex natural phenomena. To address this issue in the context of estimating structural and biophysical variables for forest characterization, we propose a new deep learning-based fusion strategy that combines high-density three-dimensional (3-D) point clouds acquired by airborne laser scanning with high-resolution optical imagery. To manage and fully exploit the available multimodal information, we implement a two-branch late-fusion deep learning architecture that takes advantage of the specificity of each modality: a 2-D CNN branch is devoted to the analysis of Sentinel-2 time series data, while a multilayer perceptron branch is dedicated to the processing of LiDAR-derived information. The performance of our framework is evaluated on two forest variables of interest: total volume and basal area at stand level. The results underline that the availability of multimodal remote sensing data does not directly translate into performance improvements; rather, the way in which the data are combined is of paramount importance.
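The two-branch late-fusion idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: all layer sizes, input shapes (Sentinel-2 series laid out as a bands-by-dates grid, LiDAR summarized as a feature vector per stand), and names are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class TwoBranchLateFusion(nn.Module):
    """Hedged sketch of a two-branch late-fusion regressor."""

    def __init__(self, n_bands=10, n_dates=12, n_lidar_feats=20, n_outputs=2):
        super().__init__()
        # 2-D CNN branch: Sentinel-2 time series treated here as a
        # single-channel (bands x dates) grid per sample (an assumption).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over the grid
            nn.Flatten(),             # -> 32-dim embedding
        )
        # MLP branch: LiDAR-derived per-stand features.
        self.mlp = nn.Sequential(
            nn.Linear(n_lidar_feats, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Late fusion: concatenate the two embeddings, then regress the
        # target variables (e.g. total volume and basal area).
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, s2, lidar):
        f_s2 = self.cnn(s2)      # (B, 32)
        f_li = self.mlp(lidar)   # (B, 32)
        return self.head(torch.cat([f_s2, f_li], dim=1))

model = TwoBranchLateFusion()
s2 = torch.randn(4, 1, 10, 12)   # batch of 4 Sentinel-2 grids
lidar = torch.randn(4, 20)       # batch of 4 LiDAR feature vectors
out = model(s2, lidar)           # (4, 2): two variables per stand
```

Fusing only at the embedding stage ("late" fusion) lets each branch use an architecture suited to its modality, which is the design choice the abstract highlights as decisive for performance.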