Generating RGB imagery from a digital surface model using Pix2Pix
- 🔬 Data Science
- 🥠 Deep Learning and image translation
Table of Contents
- Necessary imports
- Connect to your GIS
- Export image domain data
- Model training
- Model inferencing
- Results visualization
In this notebook, we will focus on using Pix2Pix, one of the most well-known and successful deep learning models for paired image-to-image translation. In the geospatial sciences, this approach enables a wide range of applications that were traditionally not possible, where we want to go from one domain of images to another.
The aim of this notebook is to use the
arcgis.learn Pix2Pix model to translate, or convert, a grayscale DSM into RGB imagery. For more details about the model and how it works, refer to How Pix2Pix works? in the guide section.
import os, zipfile
from pathlib import Path
from os import listdir
from os.path import isfile, join
from arcgis import GIS
from arcgis.learn import Pix2Pix, prepare_data
# gis = GIS('home')
ent_gis = GIS('https://pythonapi.playground.esri.com/portal', 'arcgis_python', 'amazing_arcgis_123')
For this use case, we have high-resolution NAIP airborne imagery in the form of IR-G-B tiles and lidar data converted into a DSM, both collected over St. George, Utah by the State of Utah and partners at the same spatial resolution of 0.5 m. We will export the data using the 'Export_Tiles' metadata format available in the
Export Training Data For Deep Learning tool. This tool is available in ArcGIS Pro as well as ArcGIS Image Server. The various inputs required by the tool are described below, followed by a sample arcpy call.
Input Raster: DSM
Additional Input Raster: NAIP airborne imagery
Tile Size X & Tile Size Y: 256
Stride X & Stride Y: 128
Meta Data Format: 'Export_Tiles', as we are training a Pix2Pix model
Environments: Set the optimum Cell Size and Processing Extent
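For reference, the export step can also be scripted with arcpy; the following is a minimal sketch, assuming ArcGIS Pro with the Image Analyst extension. The input rasters and output folder below are hypothetical placeholders, and parameter names follow the tool's documentation (they may vary slightly by version).
import arcpy
arcpy.CheckOutExtension("ImageAnalyst")

# Hypothetical paths - replace with your own DSM, NAIP raster, and output folder
arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster="dsm.tif",             # Input Raster: DSM (domain A)
    out_folder=r"C:\pix2pix_chips",  # receives the 'Images' and 'Images2' folders
    image_chip_format="TIFF",
    tile_size_x=256,
    tile_size_y=256,
    stride_x=128,
    stride_y=128,
    metadata_format="Export_Tiles",  # paired-tile format for image translation
    in_raster2="naip.tif",           # Additional Input Raster: NAIP imagery (domain B)
)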
The rasters used for exporting the training dataset are provided below:
naip_domain_b_raster = ent_gis.content.get('a55890fcd6424b5bb4edddfc5a4bdc4b')
naip_domain_b_raster
dsm_domain_a_raster = ent_gis.content.get('aa31a374f889487d951e15063944b921')
dsm_domain_a_raster
Inside the exported data folder, the 'Images' and 'Images2' folders contain the image tiles of the two domains, exported from the DSM and the NAIP imagery respectively. Now we are ready to train the Pix2Pix model.
Alternatively, we have provided a subset of the training data, containing a few samples that follow the same directory structure mentioned above, along with the rasters used for exporting it. You can use this data directly to run the experiments.
training_data = ent_gis.content.get('2a3dad36569b48ed99858e8579611a80')
training_data
filepath = training_data.download(file_name=training_data.name)
# Extract the data from the zipped image collection
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
# Path to the extracted training data folder
output_path = Path(os.path.splitext(filepath)[0])
data = prepare_data(output_path, dataset_type="Pix2Pix", batch_size=5)
To get a sense of what the training data looks like, the show_batch() method randomly picks a few training chips and visualizes them. On the left are DSM (digital surface model) chips, with the corresponding RGB imagery of various locations on the right.
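A minimal call, assuming data is the object returned by prepare_data above:
# Visualize random pairs of DSM chips (left) and RGB chips (right)
data.show_batch()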
model = Pix2Pix(data)
Learning rate is one of the most important hyperparameters in model training. The ArcGIS API for Python provides a learning rate finder that automatically suggests an optimal learning rate for you.
lr = model.lr_find()
The model is trained for 30 epochs with the suggested learning rate.
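A sketch of the training call, assuming 30 epochs and the learning rate suggested by lr_find() above:
# Train the Pix2Pix model for 30 epochs at the suggested learning rate
model.fit(30, lr=lr)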
Here, with 30 epochs, we can see reasonable results: both training and validation losses have gone down considerably, indicating that the model is learning to translate between the two imagery domains.
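To inspect those loss curves, a minimal sketch using the plot_losses() method available on arcgis.learn models:
# Plot the training and validation losses recorded during model.fit()
model.plot_losses()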
We will save the trained model as a 'Deep Learning Package' ('.dlpk' format). The Deep Learning Package is the standard format used to deploy deep learning models on the ArcGIS platform.
We will use the save() method to save the trained model. By default, it will be saved to the 'models' sub-folder within our training data folder.
model.save("pix2pix_model_e30", publish=True)
It is good practice to compare the results of the model vis-à-vis the ground truth. The code below picks random samples and shows the ground truth and model predictions side by side. This enables us to preview the results of the model within the notebook.
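A minimal sketch of that call, using the model's show_results() method:
# Show ground truth RGB chips and the model's predictions side by side
model.show_results()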