
Publication

Adaptive Fusion of Multi-Modal Remote Sensing Data for Optimal Sub-field Crop Yield Prediction

Francisco Mena; Deepak Kumar Pathak; Hiba Najjar; Cristhian Sanchez; Patrick Helber; Benjamin Bischke; Peter Habelitz; Miro Miranda Lorenz; Jayanth Siddamsetty; Marlon Nuske; Marcela Charfuelan Oliva; Diego Arenas; Michaela Vollmer; Andreas Dengel
In: Remote Sensing of Environment (RSE), Vol. 318, Pages 0-20, Elsevier, 3/2025.

Abstract

Accurate crop yield prediction is of utmost importance for informed decision-making in agriculture, aiding farmers, industry stakeholders, and policymakers in optimizing agricultural practices. However, this task is complex and depends on multiple factors, such as environmental conditions, soil properties, and management practices. Leveraging Remote Sensing (RS) technologies, multi-view data from diverse global data sources can be collected to enhance predictive model accuracy. However, combining heterogeneous RS views poses a fusion challenge, such as identifying the specific contribution of each view to the predictive task. In this paper, we present a novel multi-view learning approach to predict crop yield for different crops (soybean, wheat, rapeseed) and regions (Argentina, Uruguay, and Germany). Our multi-view input data includes multi-spectral optical images from Sentinel-2 satellites and weather data as dynamic features during the crop growing season, complemented by static features such as soil properties and topographic information. To effectively fuse the multi-view data, we introduce a Multi-view Gated Fusion (MVGF) model, comprising dedicated view-encoders and a Gated Unit (GU) module. The view-encoders handle the heterogeneity of data sources with varying temporal resolutions by learning a view-specific representation. These representations are adaptively fused via a weighted sum, where the fusion weights are computed for each sample by the GU from a concatenation of all the view-representations. The MVGF model is trained at sub-field level with 10 m resolution pixels. Our evaluations show that the MVGF outperforms conventional models on the same task, achieving the best results by incorporating all the data sources, in contrast to the usual fusion results in the literature.
For Argentina, the MVGF model achieves an R² value of 0.68 at sub-field yield prediction, while at the field-level evaluation (comparing field averages), it reaches around 0.80 across different countries. The GU module learned different weights depending on the country and crop type, aligning with the variable significance of each data source to the prediction task. This novel method has proven its effectiveness in enhancing the accuracy of the challenging sub-field crop yield prediction task. Our investigation indicates that the gated fusion approach promises a significant advancement in the field of agriculture and precision farming.
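The gated fusion step described in the abstract (per-sample fusion weights computed by the GU from the concatenated view-representations, followed by a weighted sum) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the encoder outputs are replaced by random arrays, the gate is a single linear layer, and the softmax normalization of the weights is an assumption made here so the weights sum to one per sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax along the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_views, dim, batch = 4, 16, 8   # hypothetical sizes: e.g. optical, weather, soil, topography

# view-specific representations z_v, assumed already produced by per-view encoders
z = [rng.normal(size=(batch, dim)) for _ in range(n_views)]

# Gated Unit: one fusion weight per view per sample, computed from the
# concatenation of all view-representations (gate parameters are random here)
W_gate = rng.normal(size=(n_views * dim, n_views)) * 0.1
concat = np.concatenate(z, axis=1)          # (batch, n_views * dim)
alpha = softmax(concat @ W_gate)            # (batch, n_views), rows sum to 1

# adaptive fusion: per-sample weighted sum of the view representations
fused = sum(alpha[:, v:v + 1] * z[v] for v in range(n_views))  # (batch, dim)
```

In a trained model, `W_gate` would be learned jointly with the encoders, letting `alpha` vary per sample and thereby reflect the relative importance of each data source, as the abstract reports for different countries and crop types.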
