Early research years

My research journey started in 2016 in Montpellier at the Maison de la Télédétection (House of Remote Sensing), where I was first introduced to research projects during my Master 1. It led to my first internship in satellite image processing, which defined my future path.

The next year, in 2017, I did my Master 2 internship in radar image processing. The goal of this internship was to classify different types of "black bodies" (oil/not oil) present on the sea surface. In radar images, oil can easily be mistaken for other phenomena unrelated to the presence of oil. The developed algorithm can therefore help detect potential petrol sources as well as sea pollution caused by ship and platform oil spills (see the images below). The method can be divided into two steps: first, we characterize the detected black body (texture extraction, geometry analysis, contextual analysis); second, we use machine learning algorithms to classify the detected objects into nine classes: biogenic oil, oil seep (an indicator of a potential petrol source), platform oil spill, ship oil spill, rainfall, current, internal wave, wind shelter, and upwelling. A minimal sketch of this two-step idea is given after the example images.

Oil seep (potential oil source)

Platform oil spill (should be reported to the authorities to save the planet and punish the perpetrators)

Rainfall (has nothing to do with oil)
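
To give a concrete idea of how the two steps fit together, here is a minimal, purely illustrative Python sketch: it computes a few GLCM texture and geometry descriptors for one detected dark patch and feeds them to a random forest. The descriptor set, the classifier choice, and the placeholder training data are simplifications for illustration, not the actual internship implementation.

```python
# Illustrative two-step sketch: characterize a detected dark patch with
# texture/geometry descriptors, then classify it with a machine learning model.
# Features, classifier and training data are simplified placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["biogenic oil", "oil seep", "platform oil spill", "ship oil spill",
           "rainfall", "current", "internal wave", "wind shelter", "upwelling"]

def describe_patch(patch, mask):
    """Texture (GLCM) + basic geometry descriptors for one detected dark patch."""
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    ys, xs = np.nonzero(mask)
    area = mask.sum()
    elongation = (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)
    return np.array(texture + [area, elongation])

# Training on descriptors of labelled patches (random placeholder data here).
X_train = np.random.rand(200, 6)
y_train = np.random.randint(0, len(CLASSES), 200)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

# Classify a new detection: a SAR intensity crop and its dark-body mask.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
mask = np.ones((64, 64), dtype=bool)
print(CLASSES[clf.predict([describe_patch(patch, mask)])[0]])
```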

PhD Thesis - Unsupervised Satellite Image Time Series Analysis

My PhD topic is Unsupervised Satellite Image Time Series Analysis using Deep Learning Techniques. The thesis presents a set of unsupervised algorithms for satellite image time series (SITS) analysis. My methods exploit machine learning algorithms and, in particular, neural networks to detect different spatio-temporal entities and their potential changes over time. In this work, I aim to identify three different types of temporal behavior:

  • no-change areas,

  • seasonal changes (vegetation and other phenomena that have a seasonal recurrence),

  • non-trivial changes (permanent changes such as construction or demolition, crop rotation, etc.).

Therefore, I propose two frameworks: one for the detection and clustering of non-trivial changes, and another for the clustering of “stable” areas (seasonal changes and no-change areas). The first framework is composed of two steps: bi-temporal change detection, followed by the interpretation of the detected changes in a multi-temporal context with graph-based approaches. The bi-temporal change detection is performed for each pair of consecutive images of the SITS and is based on feature translation with autoencoders (see the sketch below).
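
As a rough illustration of the feature-translation idea (not the exact thesis architecture), the sketch below trains a small convolutional encoder-decoder to translate an image acquired at date t into the following date; pixels with a high translation error are then flagged as candidate changes. The layer sizes, number of bands, and threshold are arbitrary assumptions.

```python
# Minimal sketch of bi-temporal change detection by feature translation:
# an autoencoder learns to "translate" img_t into img_t1, and pixels it fails
# to translate well are candidate changes. Sizes and threshold are illustrative.
import torch
import torch.nn as nn

class TranslatorAE(nn.Module):
    def __init__(self, in_ch=4, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(latent, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, in_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Two consecutive 4-band images of the same area (toy tensors here).
img_t, img_t1 = torch.rand(1, 4, 128, 128), torch.rand(1, 4, 128, 128)

model = TranslatorAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                   # fit the translation t -> t+1
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(img_t), img_t1)
    loss.backward()
    opt.step()

# Per-pixel translation error: large errors indicate candidate non-trivial changes.
error = (model(img_t) - img_t1).abs().mean(dim=1).squeeze(0)
change_mask = error > error.mean() + 2 * error.std()
```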

In the next step, the changes from different timestamps that belong to the same geographic area form evolution change graphs. The graphs are then clustered using a recurrent neural network autoencoder (AE) model to identify different types of change behavior. For the second framework, we propose an approach for object-based SITS clustering. First, we encode the SITS into a single image with a multi-view 3D convolutional AE (a simplified sketch follows). Second, we perform a two-step SITS segmentation using the encoded SITS and the original images. Finally, the obtained segments are clustered using their encoded descriptors.
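
The following toy sketch illustrates the spirit of the first step of the second framework: a 3D convolutional autoencoder whose encoder collapses the temporal dimension of a SITS into a single encoded image. It is a simplified, single-view stand-in for the multi-view model, with purely illustrative layer sizes.

```python
# Toy 3D convolutional autoencoder: the encoder collapses the temporal axis of a
# SITS tensor (C x T x H x W) into a single encoded image, the decoder re-expands
# time to reconstruct the series. Single-view simplification, illustrative sizes.
import torch
import torch.nn as nn

class SITS3DAE(nn.Module):
    def __init__(self, channels=4, t_len=12, code_ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, code_ch, kernel_size=(t_len, 1, 1)), nn.ReLU(),  # collapse time
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(code_ch, 16, kernel_size=(t_len, 1, 1)), nn.ReLU(),
            nn.Conv3d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):                      # x: (B, C, T, H, W)
        code = self.encoder(x)                 # (B, code_ch, 1, H, W)
        return self.decoder(code), code.squeeze(2)

sits = torch.rand(2, 4, 12, 64, 64)            # batch of 12-date, 4-band series
model = SITS3DAE()
recon, encoded_image = model(sits)
loss = nn.functional.mse_loss(recon, sits)     # reconstruction objective for training
print(encoded_image.shape)                     # torch.Size([2, 8, 64, 64]): the "single image"
```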

Results. Stadium construction in Montpellier and a corresponding spatio-temporal evolution change graph.

Postdoctoral research - 3D Data Processing for Forestry Analysis

My current research focuses on the processing of 3D point cloud data for vegetation analysis. I deploy different 3D deep learning algorithms to extract structural information from 3D LiDAR point clouds. This information can then be used for various ecological projects, such as biodiversity analysis, pasture land management, and fire management.

The main challenge for the analysis of 3D data is that very little annotated data is available. Therefore, my work can be divided into two parts:

  • weakly-supervised 3D point cloud analysis for vegetation stratum occupancy prediction;

  • supervised 3D point cloud analysis for vegetation structure 3D modeling.

Predicting Vegetation Stratum Occupancy from Airborne LiDAR Data with Deep Learning

For this project, we proposed a new deep learning-based method for estimating the occupancy of vegetation strata from airborne 3D LiDAR point clouds. Our model predicts rasterized occupancy maps for three vegetation strata corresponding to lower, medium, and higher vegetation cover.

The main problem for 3D vegetation data analysis is the lack of open-source datasets with point-wise annotations. In our work, we propose a weakly-supervised deep learning algorithm to avoid this tedious point annotation process.

Our weakly-supervised training scheme allows our network to be supervised with only three vegetation occupancy values aggregated over cylindrical plots containing thousands of points; such plot-level values are typically much easier to produce than pixel-wise or point-wise annotations. We employ a deep neural network operating directly on 3D points, whose predictions are projected onto rasters representing the different vegetation strata (see the sketch below).
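
Below is a small, hypothetical sketch of that weak-supervision step: per-point stratum probabilities (which in practice would come from the 3D network) are max-projected onto a 2D grid and then averaged over the plot, so the loss only compares a few plot-level occupancy values to the ground truth. The grid size, the `plot_occupancy` helper, and the random inputs are placeholders of my own, not the released implementation.

```python
# Sketch of the weak supervision: project per-point stratum probabilities onto a
# raster (max over the points in each pixel), average over the plot, and compare
# the resulting occupancy ratios to the plot-level ground truth values.
import torch

def plot_occupancy(xy, probs, grid=32):
    """xy: (N, 2) positions in [0, 1]; probs: (N, S) per-point stratum probabilities."""
    n_strata = probs.shape[1]
    cell = (xy * grid).long().clamp_(0, grid - 1)          # pixel index of every point
    flat = cell[:, 0] * grid + cell[:, 1]                   # flattened pixel id
    raster = torch.zeros(grid * grid, n_strata)
    raster.scatter_reduce_(0, flat.unsqueeze(1).expand(-1, n_strata),
                           probs, reduce="amax", include_self=True)
    occupied = torch.zeros(grid * grid, dtype=torch.bool)
    occupied[flat] = True                                    # keep only non-empty pixels
    return raster[occupied].mean(dim=0)                      # (S,) plot occupancy ratios

# Fake cylindrical plot: 5000 points with stratum scores from the 3D network.
xy = torch.rand(5000, 2)
probs = torch.softmax(torch.randn(5000, 3), dim=1)
pred = plot_occupancy(xy, probs)
target = torch.tensor([0.60, 0.25, 0.15])                    # the three GT plot values
loss = torch.nn.functional.mse_loss(pred, target)            # the only supervision signal
```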

For this project, we provide an open-source implementation along with a dataset of 199 agricultural plots to train and evaluate weakly supervised occupancy regression algorithms.

Pipeline. Our network performs the semantic segmentation of a 3D point cloud into four classes (bare soil, lower vegetation, medium vegetation, higher vegetation). The resulting probabilities are projected onto rasters corresponding to the different strata. Finally, the occupancy maps are aggregated into per-stratum vegetation ratios.

Results. Using only three aggregated ground truth (GT) plot values, our model is able to create 2D occupancy maps for each vegetation layer.

Multi-Layer Modeling of Dense Vegetation from Aerial LiDAR Scans

My latest research project focuses on the analysis of the multi-layer structure of wild forest vegetation. While modern aerial LiDARs offer geometric information across all vegetation layers, most datasets and methods focus only on the segmentation and reconstruction of the top of the canopy. To tackle this problem, we release WildForest3D, which consists of 29 study plots and over 2,000 individual trees and bushes across 47,000 m² with dense 3D annotations, along with occupancy and height maps for 3 vegetation layers: ground vegetation, understory, and overstory. We also propose a 3D deep network architecture that, for the first time, predicts both 3D point-wise labels and high-resolution layer occupancy rasters simultaneously. This allows us to produce a precise estimation of the thickness of each vegetation layer as well as the corresponding watertight meshes, thereby meeting most forestry needs.

Pipeline. During training, our network performs the semantic segmentation of a 3D point cloud sample into 6 classes. The point probabilities are projected onto rasters to obtain soft occupancy maps for the 3 vegetation layers. The network is supervised using both the 2D and the 3D predictions. During inference, the predictions are computed for an entire plot and are used to derive the minimum and maximum elevation of each layer. Finally, we can produce a watertight 3D mesh representing each vegetation layer.
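
As an illustration of the inference step, the sketch below derives a lower and an upper elevation envelope for each vegetation layer from (placeholder) per-point predictions; a closed mesh can then be extruded between these two surfaces. The `layer_envelopes` helper, the grid resolution, and the label ids are assumptions made for the example, not the released code.

```python
# Illustrative inference step: group predicted points by layer and pixel, and use
# their min/max elevations as the lower/upper envelope of each vegetation layer.
import numpy as np

def layer_envelopes(xyz, labels, layer_ids, grid=64):
    """Return a (len(layer_ids), 2, grid, grid) array of min/max elevation per pixel."""
    cell = np.clip((xyz[:, :2] * grid).astype(int), 0, grid - 1)
    env = np.full((len(layer_ids), 2, grid, grid), np.nan)
    for k, layer in enumerate(layer_ids):
        sel = labels == layer
        for (i, j), z in zip(cell[sel], xyz[sel, 2]):
            env[k, 0, i, j] = np.fmin(env[k, 0, i, j], z)    # lower surface
            env[k, 1, i, j] = np.fmax(env[k, 1, i, j], z)    # upper surface
    return env

# Toy plot: normalized xy, elevation in metres, predicted labels
# (0 = ground, 1 = ground vegetation, 2 = understory, 3 = overstory).
xyz = np.c_[np.random.rand(20000, 2), np.random.rand(20000) * 25]
labels = np.random.randint(0, 4, 20000)
env = layer_envelopes(xyz, labels, layer_ids=(1, 2, 3))
thickness = env[:, 1] - env[:, 0]        # per-pixel thickness estimate of each layer
```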