Working with Multispectral Data

Introduction

We can use multispectral imagery to train any arcgis.learn model that works with imagery. Apart from the standard workflow for training an arcgis.learn model, there are a few additional parameters that can be used while working with multispectral imagery. This guide discusses those additional parameters.

Prerequisites

  • To work with multispectral data, gdal needs to be installed in addition to fastai and pytorch. Please refer to the section "Install deep learning dependencies of arcgis.learn module" on this page for detailed documentation on installing these dependencies.

Imports

import arcgis
from arcgis.learn import prepare_data, UnetClassifier

Data preparation

While working with multispectral data, we can use the following keyword arguments in addition to the standard parameters of the prepare_data() function.

  • Currently, the multispectral workflow is used for a dataset in any of the following cases:

    • The imagery source does not have exactly three bands.
    • The imagery source contains a band other than red, green, and blue.
    • In the case of three-band imagery, all bands in the imagery source have well-known names.
    • Any of these keyword arguments is specified: imagery_type, bands, rgb_bands.
  • imagery_type: The type of imagery used to export the training data. We can use any of the well-known imagery types:

    • 'sentinel2'
    • 'naip'
    • 'landsat8'
    • 'ms' - any other type of imagery

    If the imagery used to export the training data is not one of the well-known types, you can specify 'ms' as the imagery_type. In that case, we need to specify either the rgb_bands or the bands parameter to preserve pretrained weights for the RGB bands; otherwise all the bands are treated as unknown.

  • bands: If the training data was not exported using one of the well-known imagery types, we can specify the bands contained in our imagery. For example, ['r', 'g', 'b', 'nir', 'u'], where 'nir' is the near-infrared band and 'u' is a miscellaneous band.

  • rgb_bands: We can specify the indices of the red, green, and blue bands in the imagery, or None if a band does not exist in the imagery. This is also used as the default band combination for visualization with the {data}.show_batch() and {model}.show_results() methods. This is an optional parameter. For example, [2, 1, 0] or [2, 1, None].
  • extract_bands: By default the model is trained on all bands available in the imagery of our training data. We can use this parameter to select the bands on which we want to train the model. For example, [4, 2, 1, 0] if we do not want to train on the band at index 3 of the imagery.
  • norm_pct: The percentage of training data used to calculate imagery statistics, which are then used to normalize the data while training the model. This is an optional parameter; by default it is set to 0.3, i.e. 30% of the data.
data = prepare_data(
    r'C:\Workspace\Data\LULC\traindata_sentinel2_ms_400px', 
    batch_size=4,    
    imagery_type='sentinel2',
    norm_pct=1
)
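To illustrate what norm_pct controls, here is a minimal, hypothetical sketch of computing per-band statistics from only a fraction of the training chips. This is not arcgis.learn's actual implementation; the function name and chip layout are invented for illustration.

```python
import random

def band_statistics(chips, norm_pct=0.3, seed=0):
    """Compute per-band means from a random norm_pct fraction of chips.

    Each chip is a list of bands; each band is a flat list of pixel values.
    """
    random.seed(seed)
    n = max(1, int(len(chips) * norm_pct))
    sample = random.sample(chips, n)
    num_bands = len(sample[0])
    means = []
    for b in range(num_bands):
        # Pool the pixels of band b across every sampled chip
        pixels = [p for chip in sample for p in chip[b]]
        means.append(sum(pixels) / len(pixels))
    return means

# Two tiny chips, two bands each
chips = [
    [[0.2, 0.4], [0.6, 0.8]],
    [[0.3, 0.5], [0.7, 0.9]],
]
# norm_pct=1 uses every chip for the statistics, as in the call above
print(band_statistics(chips, norm_pct=1))
```

A smaller norm_pct trades accuracy of the statistics for faster data preparation; norm_pct=1, as in the prepare_data() call above, uses the whole dataset.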

Visualize Training data

We can use the {data}.show_batch() method to visualize a few samples of the training data. The following parameters can be used with multispectral imagery to control the visualization.

  • rgb_bands: The band combination in which we want to visualize our training data. For example, [2, 1, 0] or ['nir', 'green', 'blue'].
  • stretch_type: The type of stretch to apply to the imagery in our training data for visualization.
    • 'minmax' - Default. Stretches each image chip by its min-max values.
    • 'percentclip' - Stretches each image chip by clipping the histogram by 0.25%.
  • statistics_type: The statistics used to stretch the imagery in our training data for visualization.
    • 'dataset' - Default. Stretches each image chip using global statistics.
    • 'DRA' - Stands for Dynamic Range Adjustment. Stretches each image chip using its individual statistics.
data.show_batch(statistics_type='DRA', alpha=0.5)
<Figure size 1080x1080 with 9 Axes>
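The two stretch types, 'minmax' and 'percentclip', can be sketched as simple pixel transforms. This is a simplified illustration with invented function names, not the exact stretch arcgis.learn applies.

```python
def minmax_stretch(pixels):
    """Scale pixel values linearly to 0-1 using the chip's min and max."""
    lo, hi = min(pixels), max(pixels)
    span = hi - lo or 1  # guard against a constant-valued chip
    return [(p - lo) / span for p in pixels]

def percent_clip_stretch(pixels, clip=0.0025):
    """Clip the darkest/brightest `clip` fraction of the histogram, then scale."""
    ordered = sorted(pixels)
    k = int(len(ordered) * clip)
    lo, hi = ordered[k], ordered[len(ordered) - 1 - k]
    span = hi - lo or 1
    return [(min(max(p, lo), hi) - lo) / span for p in pixels]

pixels = [10, 20, 30, 40, 50]
print(minmax_stretch(pixels))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Percent clipping mainly helps when a few extreme pixel values would otherwise compress the useful part of the dynamic range.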

Different Band Combinations

False Color Composite
red -> nir
green -> green
blue -> blue

data.show_batch(rgb_bands=[7, 2, 1], statistics_type='DRA', alpha=0.5)
<Figure size 1080x1080 with 9 Axes>
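In Sentinel-2 imagery the near-infrared band sits at index 7 of the band list, which is why [7, 2, 1] produces the nir/green/blue composite above. A hypothetical helper for translating band names to indices (the band ordering below is an illustrative assumption, not necessarily the exact list arcgis.learn uses):

```python
# Assumed, illustrative Sentinel-2 band ordering
SENTINEL2_BANDS = [
    'coastal', 'blue', 'green', 'red', 'rededge1',
    'rededge2', 'rededge3', 'nir', 'narrownir', 'watervapour',
    'cirrus', 'swir1', 'swir2',
]

def band_indices(names, band_list=SENTINEL2_BANDS):
    """Map band names to their positions in the band list."""
    return [band_list.index(name) for name in names]

print(band_indices(['nir', 'green', 'blue']))  # [7, 2, 1]
```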

Train Model

Model Initialization options

arcgis.learn uses transfer learning to enhance the model training experience. To train these models with multispectral data, the model needs to accommodate the various types of bands available in multispectral imagery.

This is done by re-initializing the first layer of the model. The ArcGIS environment variable arcgis.env.type_init_tail_parameters can be used to specify the scheme in which the weights of this layer are initialized. Valid weight initialization schemes are:

  • 'random' - default: Random weights are initialized for Non-RGB bands while preserving pretrained weights for RGB bands.
  • 'red_band': Weights corresponding to the Red band from the pretrained model's layer are cloned for Non-RGB bands while preserving pretrained weights for RGB bands.
  • 'all_random': Random weights are initialized for RGB bands as well as Non-RGB bands.
arcgis.env.type_init_tail_parameters = 'red_band'
# Create the model
model = UnetClassifier(data)
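As a rough illustration of the 'red_band' scheme, the pretrained red-channel weights of the first convolution can be cloned into each extra input channel. This is a simplified sketch on plain nested lists with an invented function name; arcgis.learn's actual re-initialization operates on PyTorch tensors.

```python
import copy

def red_band_init(first_layer_weights, rgb_indices=(0, 1, 2), num_bands=5):
    """Expand first-conv weights from 3 input channels to num_bands.

    first_layer_weights: list over output filters; each filter is a list of
    per-input-channel kernels (nested lists standing in for tensors).
    """
    red = rgb_indices[0]
    expanded = []
    for filt in first_layer_weights:
        new_filt = [copy.deepcopy(k) for k in filt]    # keep pretrained RGB weights
        for _ in range(num_bands - len(filt)):
            new_filt.append(copy.deepcopy(filt[red]))  # clone red-band weights
        expanded.append(new_filt)
    return expanded

# One output filter with 1x1 kernels for r, g, b
pretrained = [[[0.9], [0.5], [0.1]]]
print(red_band_init(pretrained, num_bands=5))
# [[[0.9], [0.5], [0.1], [0.9], [0.9]]]
```

The intuition behind 'red_band' is that red-band responses often transfer better to bands like near-infrared than random weights do.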

Learning Rate

# Find a learning rate
model.lr_find()
<Figure size 432x288 with 1 Axes>
9.120108393559096e-05

We can use the {model}.lr_find() method to find an appropriate learning rate. Because the first layer of the model has been re-initialized, it is trainable and should be trained at a lower learning rate than the remaining trainable part of the model. To do that we can use the slice(low_lr, high_lr) notation, specifying a lower learning rate for the first layer and a higher learning rate for the remaining trainable part of the model.

Because the first layer in our model has just been initialized, we might need to train the model a bit longer to get the best results.
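The slice notation can be made concrete with the learning rate that lr_find() suggested. A hedged sketch follows; the divide-by-ten ratio is a common heuristic, not a fixed rule, and the value below is the lr_find() output rounded for readability.

```python
# Rounded value in the spirit of the lr_find() output above
suggested_lr = 9.12e-05

# The lower rate goes to the earliest layer group (our freshly
# re-initialized first layer), the higher rate to the later layers.
lr_range = slice(suggested_lr / 10, suggested_lr)
print(lr_range.start, lr_range.stop)
```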

model.fit(50, lr=slice(0.00001, 0.001), checkpoint=False)
epoch  train_loss  valid_loss  accuracy  time
0      3.136030    1.573169    0.399334  00:17
1      2.458116    1.229420    0.572054  00:07
2      2.122384    1.320223    0.538070  00:07
3      1.906248    1.412342    0.399163  00:07
4      1.742509    1.021113    0.633614  00:07
5      1.603904    1.058293    0.613756  00:08
6      1.514009    0.939518    0.722907  00:10
7      1.488245    0.993276    0.712157  00:09
8      1.429352    0.977122    0.694898  00:09
9      1.355745    0.995815    0.675849  00:09
10     1.319153    1.073880    0.624833  00:08
11     1.288624    0.920015    0.652575  00:08
12     1.240515    0.861558    0.736591  00:08
13     1.227835    1.010466    0.656342  00:09
14     1.254720    1.209687    0.691024  00:09
15     1.254659    0.977635    0.745141  00:08
16     1.226321    0.899011    0.752523  00:08
17     1.198589    1.078376    0.677117  00:08
18     1.177784    0.797890    0.752160  00:08
19     1.139314    0.822118    0.771377  00:08
20     1.121551    0.847090    0.747106  00:08
21     1.099947    0.789873    0.755979  00:08
22     1.088178    0.891662    0.758869  00:08
23     1.066239    0.835028    0.748131  00:09
24     1.047243    0.778206    0.768240  00:08
25     1.007829    0.716469    0.786926  00:08
26     0.978055    0.757787    0.763919  00:08
27     0.961286    0.798902    0.763716  00:08
28     0.948301    0.718838    0.779050  00:08
29     0.938797    0.725577    0.793834  00:08
30     0.929100    0.760337    0.757442  00:08
31     0.912958    0.719378    0.786783  00:08
32     0.899271    0.670808    0.807665  00:08
33     0.893611    0.710234    0.793507  00:08
34     0.880450    0.681021    0.802778  00:08
35     0.871665    0.666591    0.805692  00:08
36     0.867548    0.683682    0.800163  00:08
37     0.869506    0.682281    0.808909  00:08
38     0.862610    0.699594    0.796847  00:08
39     0.851020    0.676894    0.799326  00:08
40     0.836587    0.662309    0.804034  00:08
41     0.823378    0.657619    0.807681  00:08
42     0.806154    0.651746    0.812133  00:08
43     0.820973    0.655077    0.810647  00:08
44     0.815987    0.658856    0.806657  00:08
45     0.815662    0.656755    0.807761  00:08
46     0.812138    0.650601    0.809953  00:08
47     0.808693    0.647900    0.811683  00:08
48     0.810618    0.656571    0.807753  00:08
49     0.795867    0.657882    0.806724  00:08

Validate results

We can use the {model}.show_results() method to validate a few predictions on the validation dataset and compare them with the ground truth. The following parameters can be used with multispectral imagery to control the visualization.

  • rgb_bands: The band combination in which we want to visualize the imagery. For example, [2, 1, 0] or ['nir', 'green', 'blue'].
  • stretch_type: The type of stretch to apply to the imagery for visualization.
    • 'minmax' - Default. Stretches each image chip by its min-max values.
    • 'percentclip' - Stretches each image chip by clipping the histogram by 0.25%.
  • statistics_type: The statistics used to stretch the imagery for visualization.
    • 'dataset' - Default. Stretches each image chip using global statistics.
    • 'DRA' - Stands for Dynamic Range Adjustment. Stretches each image chip using its individual statistics.
model.show_results()
<Figure size 720x1800 with 10 Axes>

Inferencing

We can save the model using the {model}.save() method; the output of this method is a saved model file in the '.dlpk' format. The model can then be deployed using ArcGIS Pro or ArcGIS Image Server, where different deep learning geoprocessing tools work with different types of models.

In this example we have trained a UnetClassifier, which is a pixel classification model, so the Classify Pixels Using Deep Learning tool would work with our saved model.

model.save('50e')
WindowsPath('C:/Workspace/Data/LULC/traindata_sentinel2_ms_400px/models/50e')
