Glacial Terminus Extraction using HRNet

  • 🔬 Data Science
  • 🥠 Deep Learning and Segmentation

Introduction

As the global climate changes, glaciers all over the world are experiencing increasing mass loss, resulting in changing calving fronts. Delineating these calving fronts is important for monitoring the rate of glacial mass loss. Currently, most calving front delineation is done manually, which is time consuming and under-utilizes the available satellite imagery.

Extracting calving fronts from satellite images of marine-terminating glaciers is a two-step process. The first step segments the front using a segmentation technique, and the second step applies post-processing to extract the terminus line. This notebook presents the use of an HRNet model from the arcgis.learn module to accomplish the first task of segmenting calving fronts. We have used data provided in the CALFIN repository. The training data includes 1600+ images of Greenlandic glaciers and 200+ images of Antarctic glaciers/ice shelves from the Landsat (optical) and Sentinel-1 (SAR) satellites.
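To make the second, post-processing step concrete, below is a minimal pure-Python sketch that traces the boundary pixels of a binary segmentation mask. It is an illustration only, not the CALFIN post-processing; a real pipeline would also vectorize and smooth the extracted line.

```python
def extract_front(mask):
    """Return (row, col) pixels on the boundary of the segmented region.

    mask: 2D list of 0/1 ints. A pixel lies on the front if it is 1 and
    at least one 4-neighbor is 0 or outside the image.
    (Toy sketch, not the CALFIN terminus-extraction algorithm.)
    """
    h, w = len(mask), len(mask[0])
    front = []
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < h and 0 <= nc < w) or not mask[nr][nc]:
                    front.append((r, c))
                    break
    return front
```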

Necessary imports

import os
import glob
import zipfile
from pathlib import Path

from arcgis.gis import GIS
from arcgis.learn import MMSegmentation, prepare_data

Connect to your GIS

# Connect to GIS
gis = GIS("home")

Download training data

training_data = gis.content.get('cc750295180a487aa7af67a67cadff78')
training_data
glacial_terminus_point_segmentation
Sample data for Glacial terminus segmentation using HRNet Image Collection by api_data_owner
Last Modified: August 06, 2021

The data is approximately 6.5 GB in size and may take some time to download.

filepath = training_data.download(file_name=training_data.name)
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
output_path = os.path.splitext(filepath)[0]

Train the model

arcgis.learn provides an HRNet model through the integration of the MMSegmentation class. For more in-depth information on MMSegmentation, see this guide - Using MMSegmentation with arcgis.learn.

Prepare data

Next, we will specify the path to our training data and a few hyperparameters.

  • path: path of the folder/list of folders containing the training data.
  • batch_size: The number of images your model will train on for each step of an epoch. This will directly depend on the memory of your graphics card.
data = prepare_data(path=output_path, dataset_type='Classified_Tiles', batch_size=24)

Visualize training data

To get a sense of what the training data looks like, the show_batch() method randomly selects training chips and visualizes them.

  • rows: Number of rows to visualize
data.show_batch(5, alpha=0.7)
<Figure size 1080x1800 with 15 Axes>

Load model architecture

model = MMSegmentation(data, 'hrnet')

Find an optimal learning rate

The learning rate is one of the most important hyperparameters in model training. The ArcGIS API for Python provides a learning rate finder that suggests a suitable learning rate automatically.

lr = model.lr_find()
<Figure size 432x288 with 1 Axes>
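Conceptually, the learning rate finder sweeps the learning rate upward over a series of mini-batches, records the loss at each rate, and suggests a rate at which the loss is falling fastest. The selection rule can be sketched as follows (a toy illustration, not the actual arcgis.learn implementation):

```python
def suggest_lr(lrs, losses):
    """Pick the learning rate at which the loss drops most steeply.

    lrs and losses are parallel lists recorded during the sweep.
    (Illustrative only; arcgis.learn's lr_find uses its own heuristic.)
    """
    slope = lambda i: (losses[i] - losses[i - 1]) / (lrs[i] - lrs[i - 1])
    best = min(range(1, len(lrs)), key=slope)  # most negative slope
    return lrs[best]
```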

Fit the model

Next, we will train the model for a few epochs with the learning rate found in the previous step. For the sake of time, we will start with 30 epochs.

model.fit(30, lr)
epoch  train_loss  valid_loss  accuracy  dice      time
0      0.610420    0.560476    0.724492  0.302636  03:44
1      0.557933    0.496885    0.757505  0.353984  03:45
2      0.489416    0.445652    0.789769  0.501799  03:46
3      0.447124    0.496963    0.756810  0.567582  03:47
4      0.427683    0.378213    0.833140  0.528805  03:51
5      0.399223    0.365552    0.842118  0.531487  03:53
6      0.381643    0.401336    0.811905  0.534826  03:54
7      0.378017    0.605405    0.757742  0.177460  03:52
8      0.367449    0.422544    0.815683  0.612659  03:50
9      0.358794    0.401540    0.804915  0.574714  03:50
10     0.342343    0.404026    0.843801  0.464923  03:52
11     0.344887    0.333360    0.859139  0.634893  03:48
12     0.334531    0.433300    0.843079  0.530823  03:50
13     0.312203    0.342381    0.851830  0.672622  03:47
14     0.318387    0.365772    0.850246  0.479703  03:44
15     0.304587    0.272239    0.889210  0.672367  03:44
16     0.299469    0.255955    0.889639  0.727745  03:43
17     0.290696    0.281389    0.882833  0.684749  03:43
18     0.279878    0.255700    0.896031  0.719328  03:43
19     0.272493    0.221865    0.912843  0.722819  03:43
20     0.254924    0.228055    0.907706  0.736925  03:45
21     0.253474    0.230905    0.901044  0.754613  03:47
22     0.248331    0.214303    0.918499  0.752050  03:49
23     0.234423    0.199226    0.926057  0.770178  03:48
24     0.238598    0.198984    0.923163  0.778105  03:49
25     0.228168    0.200712    0.920581  0.771879  03:48
26     0.235029    0.190604    0.925844  0.777296  03:47
27     0.221403    0.194644    0.925728  0.784709  03:47
28     0.230099    0.192264    0.927934  0.786932  03:49
29     0.219289    0.190728    0.926994  0.781577  03:46

As we can see, the training and validation losses are continuing to decrease, indicating that the model is still learning. This suggests there is room for further training, so we trained the model for a total of 170 epochs to achieve better results.

Visualize results in validation set

It is good practice to inspect the results of the model vis-à-vis the ground truth. The code below picks random samples and visualizes the ground truth and model predictions side by side. This lets us preview, within the notebook, the results of the model trained for 170 epochs.

model.show_results(5, thresh=0.1, alpha=0.1)
<Figure size 720x1800 with 10 Axes>

Accuracy assessment

arcgis.learn provides the mIOU() method that computes the mean IOU (Intersection over Union) on the validation set for each class.

model.mIOU()
100.00% [8/8 00:19<00:00]
{'0': 0.9026798871510977, 'Masked': 0.7716700616812664}
model.per_class_metrics()
           NoData    Masked
precision  0.937634  0.899602
recall     0.960069  0.848553
f1         0.948719  0.873332
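For intuition, each of these metrics can be derived from per-class pixel counts of true positives (tp), false positives (fp), and false negatives (fn). The sketch below shows the arithmetic (an illustration, not the arcgis.learn implementation):

```python
def pixel_metrics(tp, fp, fn):
    """IoU, precision, recall, and F1 from per-class pixel counts.

    Illustrative helper, not part of arcgis.learn.
    """
    iou = tp / (tp + fp + fn)              # overlap / union
    precision = tp / (tp + fp)             # how many predicted pixels are right
    recall = tp / (tp + fn)                # how many true pixels were found
    f1 = 2 * precision * recall / (precision + recall)
    return {"iou": iou, "precision": precision, "recall": recall, "f1": f1}
```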

Save the model

We will save the model which we trained as a 'Deep Learning Package' ('.dlpk' format). The Deep Learning package is the standard format used to deploy deep learning models on the ArcGIS platform.

We will use the save() method to save the trained model. By default, it will be saved to the 'models' sub-folder within our training data folder.

model.save("Glaciertips_hrnet_30e", publish=True)
Published DLPK Item Id: 2f4454094f974f74b1e67432bcaf564d
WindowsPath('D:/Glacier Tips/data/data for notebook/glacial_terminus_point_segmentation/models/Glaciertips_hrnet_30e')

The saved model in this notebook can be downloaded from this link.

Model inference

In this step, we will generate a classified raster using the 'Classify Pixels Using Deep Learning' tool available in both ArcGIS Pro and ArcGIS Enterprise.

  • Input Raster: The raster layer you want to classify.
  • Model Definition: Located inside the saved model in the 'models' folder in '.emd' format.
  • Padding: The 'Input Raster' is tiled, and the deep learning model classifies each individual tile separately before producing the final 'Output Classified Raster'. This may lead to unwanted artifacts along the edges of each tile, as the model has little context to predict accurately. Padding allows us to supply extra information along the tile edges, thus helping the model to make better predictions.
  • Cell Size: Should be close to the size used to train the model.
  • Processor Type: Allows you to control whether the system's 'GPU' or 'CPU' will be used to classify pixels. By default, 'GPU' will be used if available.

It is advised to zoom to the extent of the area of interest in order to avoid or reduce noise in the results, as the model is not trained to generalize across the globe.
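To illustrate why padding helps, the hypothetical helper below computes tile start offsets along one axis so that adjacent tiles overlap by twice the padding; each tile's central region, where the model has the most context, then covers the raster without gaps (a sketch of the idea, not the tool's internals):

```python
def tile_origins(size, tile, padding):
    """Start offsets along one axis for overlapping tiles.

    Each tile is `tile` pixels wide; consecutive tiles step by the tile
    'core' (tile - 2 * padding), so tile borders overlap and edge
    predictions can be discarded in favour of a neighbour's interior.
    (Illustrative only, not the Classify Pixels tool's implementation.)
    """
    core = tile - 2 * padding
    starts = list(range(0, max(size - tile, 0) + 1, core))
    if starts[-1] + tile < size:  # ensure the final tile reaches the edge
        starts.append(size - tile)
    return starts
```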

Results

The gif below was achieved with the model trained in this notebook and visualizes the segmented calving front for Rink Isbrae, a major West Greenland outlet glacier.

Conclusion

In this notebook, we demonstrated how to use models supported by the MMSegmentation class in arcgis.learn to perform segmentation tasks. We trained an HRNet model to segment calving fronts for the Rink Isbrae glacier. With this trained model, the segmentation task can now be performed at regular intervals to monitor glacial mass loss.
