ArcGIS API for Python

Increase Image Resolution using SuperResolution

  • Please refer to the prerequisites section in our guide for more information. This sample demonstrates how to export training data and perform model inference using ArcGIS Pro. Alternatively, both steps can be done using ArcGIS Image Server.
  • If you have already exported training samples using ArcGIS Pro, you can jump straight to the training section. The saved model can also be imported into ArcGIS Pro directly.


High resolution imagery is desirable for both visualization and image interpretation. However, high resolution imagery is expensive to procure. This sample notebook demonstrates how the SuperResolution model in arcgis.learn module can be used to increase image resolution. This model uses deep learning to add texture and detail to low resolution satellite imagery and turn it into higher resolution imagery.

We first start with high resolution aerial imagery to train the model. The data preparation step first downsamples the higher resolution imagery to create lower resolution, blurred imagery. The SuperResolution model uses this training data and learns how to upsample the lower resolution imagery and produce realistic high resolution images that closely resemble the higher quality images that we started with. We then use the trained SuperResolution model to produce simulated high resolution aerial imagery from relatively lower resolution satellite imagery.
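The downsample-then-upsample idea behind the data preparation step can be sketched with plain NumPy (a simplified illustration, not the actual prepare_data implementation; block averaging stands in for the real resampling and compression artifacts):

```python
import numpy as np

factor = 4
rng = np.random.default_rng(0)
hi_res = rng.random((512, 512))  # stand-in for one high-resolution chip

# Downsample: average each factor x factor block into a single pixel.
low = hi_res.reshape(512 // factor, factor, 512 // factor, factor).mean(axis=(1, 3))

# Upsample back with nearest-neighbour repetition: the chip regains its
# original dimensions but the fine detail is gone. This blurred version is
# what the model sees as input, with hi_res as the training target.
low_res = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)

print(hi_res.shape, low.shape, low_res.shape)
```

The model's job is then to learn the inverse mapping, from `low_res` back to something resembling `hi_res`.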

Export Training Data

We will be using ArcGIS Pro to find an area covered by high resolution imagery. To simplify our job, we have already created a polygon representing the extent of the high resolution imagery. We can add the polygon from here.

Training data can be exported by using the Export Training Data For Deep Learning tool available in ArcGIS Pro as well as ArcGIS Image Server.

  • Input Raster: Imagery
  • Image Format: JPEG Format
  • Tile Size X & Tile Size Y: 512
  • Meta Data Format: Export Tiles
  • In the 'Environments' tab, set an optimum 'Cell Size' (0.1 in our case).
  • Set the extent to be the same as the polygon layer we have added.
arcpy.ia.ExportTrainingDataForDeepLearning("Imagery", r"C:\sample\Data\Hi_res_superres_data", "JPEG", 512, 512, 0, 0, "Export Tiles", 0, "ecode", 75, None, 0)

After filling in all the details and running the Export Training Data For Deep Learning tool, code like the above will be generated and executed. This creates all the files needed for the next step in the 'Output Folder', which we will now refer to as our training data.

Model Training

We will train our model using the arcgis.learn module within the ArcGIS API for Python. arcgis.learn contains the tools and deep learning capabilities required for this study. Detailed documentation on how to install and set up the environment is available here.

Necessary Imports

In [1]:
import os
from pathlib import Path

from arcgis.gis import GIS
from arcgis.learn import SuperResolution, prepare_data

We will now use the prepare_data() function to apply various types of transformations and augmentations to the training data. These augmentations enable us to train a better model with limited data and also prevent the model from overfitting.
prepare_data() takes 4 parameters:

  • path: Path of the folder containing the training data.
  • batch_size: Number of images your model will train on during each step of an epoch. It depends directly on the memory of your graphics card, the size of the images you are training on, and the type of model you are working with. For this sample, a batch size of 8 worked for us on a GPU with 8 GB of memory.
  • dataset_type: The type of dataset being prepared, in our case 'superres'.
  • downsample_factor: Factor by which to degrade the quality of the image, by resizing it and adding compression artifacts, in order to create training labels.
    Note: The quality of the degraded image should be similar to that of the imagery on which we are going to run inference.

This function returns a data object which can be fed into a model for training.
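As a toy illustration of what an augmentation does (a hypothetical example; the transforms prepare_data actually applies are configured internally), a horizontal flip yields an extra training chip without collecting any new data:

```python
import numpy as np

rng = np.random.default_rng(42)
chip = rng.random((4, 4))   # stand-in for one training chip

# Horizontal flip: a "free" additional sample for the model to learn from.
flipped = chip[:, ::-1]

# The augmented chip has the same content, mirrored; flipping twice
# recovers the original.
assert np.array_equal(flipped[:, ::-1], chip)
print(chip.shape, flipped.shape)
```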

In [2]:
gis = GIS('home')
In [3]:
training_data = gis.content.get('abc0812aa82c4fe681662e5ba495b6b8')
Image Collection by api_data_owner
Last Modified: August 28, 2020

In [4]:
filepath = training_data.download(file_name=training_data.name)
In [5]:
import zipfile
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
In [6]:
data_path = Path(os.path.join(os.path.splitext(filepath)[0]))
In [ ]:
data = prepare_data(data_path, batch_size=8, dataset_type='superres', downsample_factor=4)

Visualize training data

To make sense of the training data, we will use the show_batch() method in arcgis.learn. show_batch() randomly picks a few samples from the training data and visualizes them.

In [ ]:
data.show_batch()

The imagery chips above show the images that were downsampled in prepare_data alongside the corresponding high resolution images. data.show_batch() shows a batch of images from our training data: the low resolution training data generated by the prepare_data function is on the left, and the original data is on the right. You can degrade the image quality further by increasing downsample_factor in prepare_data.

arcgis.learn provides the SuperResolution model for increasing image resolution, which is based on a pretrained convnet, such as ResNet, that acts as the 'backbone'.

In [ ]:
superres_model = SuperResolution(data)

We will use the lr_find() method to find an optimum learning rate. It is important to set a learning rate at which we can train a model with good accuracy and speed.

In [ ]:
superres_model.lr_find()
Out[ ]:
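The idea behind a learning-rate finder can be sketched as follows (a toy loss curve, not the actual lr_find internals): sweep the rate over several orders of magnitude, record the loss at each rate, and pick the rate where the loss is falling fastest.

```python
import numpy as np

# Candidate learning rates spanning several orders of magnitude.
lrs = np.logspace(-6, -1, 50)

# Toy loss curve: flat at tiny rates, dropping sharply around 1e-4,
# then levelling off (a real sweep would eventually diverge).
x = np.log10(lrs)
losses = 1.0 + 1.0 / (1.0 + np.exp((x + 4.0) * 3.0))

# Suggested rate: where the loss decreases most steeply.
suggested = lrs[np.argmin(np.gradient(losses))]
print(f"{suggested:.2e}")
```

Picking the steepest-descent point rather than the minimum-loss point leaves headroom before the loss starts climbing again, which is why suggested rates tend to look conservative.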

Train a model

We will now train the SuperResolution model using the suggested learning rate from the previous step. We can specify how many epochs we want to train for. Let's train the model for 10 epochs.

In [ ]:
superres_model.fit(10, lr=0.0001584893192461114)
epoch train_loss valid_loss pixel time
0 1.764578 1.667279 0.262905 12:04
1 1.460097 1.422421 0.224943 11:17
2 1.359061 1.346903 0.212538 11:17
3 1.313709 1.309428 0.208903 11:16
4 1.283329 1.291093 0.208715 11:16
5 1.280456 1.274125 0.205538 11:17
6 1.245067 1.257469 0.202738 11:17
7 1.239518 1.248023 0.202647 11:17
8 1.230550 1.243982 0.203220 11:17
9 1.232157 1.245707 0.204075 11:17

After the training is complete, we can view the plot with training and validation losses.

In [ ]:
superres_model.plot_losses()

Visualize results on validation set

The show_results() method displays chips from the validation dataset: the downsampled chips (left), the predicted chips (middle), and the ground truth (right). This visual analysis helps in assessing the qualitative results of the trained model.

In [ ]:
superres_model.show_results()
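Beyond the visual check above, super-resolution output is often scored quantitatively with PSNR (peak signal-to-noise ratio) against the ground truth. A minimal sketch, assuming 8-bit imagery held in NumPy arrays (this helper is an illustration, not part of arcgis.learn):

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
truth = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)    # ground-truth chip
noisy = np.clip(truth.astype(np.int16) + rng.integers(-10, 11, truth.shape),
                0, 255).astype(np.uint8)                     # imperfect prediction

print(round(psnr(truth, noisy), 1), "dB")
```

A higher PSNR on held-out chips indicates predictions closer to the true high resolution imagery, complementing the side-by-side visual comparison.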