- Please refer to the prerequisites section in our guide for more information. This sample demonstrates how to export training data and perform model inference using ArcGIS Pro. Alternatively, these steps can be performed using ArcGIS Image Server.
- If you have already exported training samples using ArcGIS Pro, you can jump straight to the training section. The saved model can also be imported into ArcGIS Pro directly.
High resolution imagery is desirable for both visualization and image interpretation, but it is expensive to procure. This sample notebook demonstrates how the
SuperResolution model in the
arcgis.learn module can be used to increase image resolution. This model uses deep learning to add texture and detail to low resolution satellite imagery and turn it into higher resolution imagery.
We first start with high resolution aerial imagery to train the model. The data preparation step first downsamples the higher resolution imagery to create lower resolution, blurred imagery. The
SuperResolution model uses this training data and learns how to upsample the lower resolution imagery and produce realistic high resolution images that closely resemble the higher quality images that we started with. We then use the trained
SuperResolution model to produce simulated high resolution aerial imagery from relatively lower resolution satellite imagery.
We will be using ArcGIS Pro to find an area where high resolution imagery is available in Esri World Imagery. To simplify the job, we have already created a polygon representing the extent of the high resolution imagery. We can add the polygon from here.
- Input Raster: ESRI World Imagery
- Image Format: JPEG Format
- Tile Size X & Tile Size Y: 512
- Meta Data Format: Export Tiles
- In the 'Environments' tab, set an optimum 'Cell Size' (0.1 in our case).
- Set the extent to match the polygon layer we added.
arcpy.ia.ExportTrainingDataForDeepLearning("ESRI World Imagery", r"C:\sample\Data\Hi_res_superres_data", "JPEG", 512, 512, 0, 0, "Export Tiles", 0, "ecode", 75, None, 0)
After filling in all the details and running the
Export Training Data For Deep Learning tool, code like the above is generated and executed. It creates all the files needed for the next step in the 'Output Folder'; we will refer to these files as our training data.
We will train our model using the
arcgis.learn module within the ArcGIS API for Python.
arcgis.learn contains the tools and deep learning capabilities required for this study. Detailed documentation on installing and setting up the environment is available here.
import os
from pathlib import Path
from arcgis.gis import GIS
from arcgis.learn import SuperResolution, prepare_data
We will now use the
prepare_data() function to apply various types of transformations and augmentations on the training data. These augmentations enable us to train a better model with limited data and also prevent the model from overfitting.
prepare_data() takes four parameters:
path: Path of folder containing training data.
batch_size: The number of images the model trains on in each step within an epoch. It depends directly on the memory of your graphics card, the size of the images you are training on, and the type of model you are working with. For this sample, a batch size of 8 worked for us on a GPU with 8 GB of memory.
dataset_type: The type of dataset being prepared, 'superres' in our case.
downsample_factor: Factor to degrade the quality of image by resizing and adding compression artifacts in order to create labels.
Note: The quality of the degraded images should be similar to that of the imagery we plan to run inference on.
This function returns a data object which can be fed into a model for training.
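The idea behind downsample_factor can be sketched outside of arcgis.learn. The snippet below is a hypothetical illustration using Pillow (it is not how prepare_data is implemented): it shrinks a chip by the factor and round-trips it through JPEG, mimicking the resize-and-compress degradation that produces the low resolution training labels.

```python
from io import BytesIO
from PIL import Image

# Hypothetical illustration only -- a stand-in 512x512 chip, degraded the
# way downsample_factor=8 conceptually degrades the training imagery.
downsample_factor = 8
chip = Image.new('RGB', (512, 512), color=(120, 160, 90))

# Shrink by the downsample factor...
low_res = chip.resize((512 // downsample_factor, 512 // downsample_factor),
                      Image.BILINEAR)

# ...and re-encode as JPEG to introduce compression artifacts.
buffer = BytesIO()
low_res.save(buffer, format='JPEG', quality=75)
degraded = Image.open(BytesIO(buffer.getvalue()))

print(degraded.size)  # (64, 64): a 512x512 chip becomes 64x64
```

With a factor of 8, each 512 × 512 chip yields a 64 × 64 degraded counterpart, and the model learns to reverse exactly this kind of degradation.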
gis = GIS('home')
training_data = gis.content.get('abc0812aa82c4fe681662e5ba495b6b8')
training_data
filepath = training_data.download(file_name=training_data.name)
import zipfile

with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
data_path = Path(os.path.splitext(filepath)[0])
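As a side note on the path handling above: os.path.splitext() returns a (root, extension) tuple, so the extracted folder path comes from its first element. A minimal sketch with a hypothetical file name standing in for the downloaded archive:

```python
import os
from pathlib import Path

# Hypothetical file name, standing in for the downloaded zip archive.
filepath = 'training/superres_sample.zip'

# splitext() returns ('training/superres_sample', '.zip'); taking [0]
# gives the folder the archive was extracted into.
data_path = Path(os.path.splitext(filepath)[0])
print(data_path)  # training/superres_sample
```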
data = prepare_data(data_path, batch_size=8, dataset_type="superres", downsample_factor=8)
To make sense of the training data, we will use the
show_batch() method.
show_batch() randomly picks a few samples from the training data and visualizes them.
The imagery chips above show the images that were downsampled in
prepare_data, along with the corresponding high resolution images.
data.show_batch() shows a batch of images from our training data. We can visualize the low resolution training data generated by the
prepare_data function on the left, along with the original data on the right. You can degrade the image quality further by increasing the downsample_factor.
arcgis.learn provides the
SuperResolution model for increasing image resolution. It is based on a pretrained convnet, such as
ResNet, that acts as the 'backbone'.
superres_model = SuperResolution(data)
We will use the
lr_find() method to find an optimum learning rate. Choosing a suitable learning rate is important for training a model with good accuracy and speed.
We will now train the
SuperResolution model using the suggested learning rate from the previous step. We can specify how many epochs we want to train for. Let's train the model for 10 epochs.
After the training is complete, we can view the plot with training and validation losses.
The show_results() method displays chips from the validation dataset: downsampled chips (left), predicted chips (middle), and ground truth (right). This visual analysis helps in assessing the qualitative results of the trained model.