Image scene classification using FeatureClassifier

  • 🔬 Data Science
  • 🥠 Deep Learning and Object classification

Introduction

In this sample notebook, we will be using the ArcGIS API for Python for training an object classification model on image data from an external source and using that model for inferencing in ArcGIS Pro.

For this example, we will be using the RESISC45 Dataset, which is a publicly available benchmark for Remote Sensing Image Scene Classification (RESISC) created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images covering 45 scene classes, with 700 images in each class.

We will be using this dataset to train a FeatureClassifier model that will classify satellite image tiles into the 45 scene classes specified in the dataset.

Necessary imports

import os, json
from arcgis.learn import prepare_data, FeatureClassifier

Download & setting up training data

Since the RESISC45 dataset is publicly available, we will download the data from the TensorFlow website. The file we will be downloading is NWPU-RESISC45.rar

After the data has been downloaded, follow the steps below to prepare it for the FeatureClassifier.

  • Extract the .rar file
  • Create a folder named images and move all 45 folders (corresponding to each class in the dataset) into the images folder, as sketched in the snippet below
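If you prefer to script this folder setup, the snippet below is a minimal sketch. It assumes the archive has already been extracted into a NWPU-RESISC45 folder in the current working directory; the paths are illustrative, so adjust them to your own download location.

import os
import shutil

# Assumes NWPU-RESISC45.rar has already been extracted into this folder
extracted_dir = os.path.join(os.getcwd(), "NWPU-RESISC45")
images_dir = os.path.join(extracted_dir, "images")
os.makedirs(images_dir, exist_ok=True)

# Move each of the 45 class folders into the "images" folder
for class_folder in os.listdir(extracted_dir):
    src = os.path.join(extracted_dir, class_folder)
    if os.path.isdir(src) and class_folder != "images":
        shutil.move(src, os.path.join(images_dir, class_folder))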

Next, we will create a data_path variable containing the path to the folder that holds our images folder.

data_path = os.path.join(os.getcwd(), "NWPU-RESISC45")

Train the model

arcgis.learn allows us to determine the class of each feature through its FeatureClassifier model. To learn more about how it works and its potential use cases, see the guide "How feature classifier works?".

Prepare data

Here, we will specify the path to our training data and a few hyperparameters.

  • path: path of the folder/list of folders containing training data.
  • dataset_type: The type of dataset being passed to the FeatureClassifier.
  • batch_size: Number of images the model trains on in each step within an epoch. This depends directly on the memory of your graphics card; a batch size of 128 worked for us on a 32 GB GPU.

Since we are using a dataset from an external source to train our FeatureClassifier, we will use Imagenet as the dataset_type.

data = prepare_data(
    path=data_path, dataset_type="Imagenet", batch_size=128, val_split_pct=0.2
)
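As a quick sanity check, we can confirm that all 45 classes were picked up. This assumes the fastai-style classes attribute exposed by the data object returned by prepare_data:

# Number of classes detected in the training data; should be 45 for NWPU-RESISC45
len(data.classes)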

Visualize training data

To get a sense of what the training data looks like, the show_batch() method randomly picks a few training chips and visualizes them.

  • rows: Number of rows to visualize
data.show_batch(rows=5)

Load model architecture

model = FeatureClassifier(data, oversample=True)

Find an optimal learning rate

The learning rate is one of the most important hyperparameters in model training. The ArcGIS API for Python provides a learning rate finder that suggests an optimal learning rate for you.

lr = model.lr_find()

Fit the model

We will train the model for a few epochs with the learning rate we have found. For the sake of time, we can start with 20 epochs.

model.fit(20, lr=lr)
epoch  train_loss  valid_loss  accuracy  time
0      3.849639    2.303129    0.393016  05:32
1      1.891675    0.922773    0.741905  04:23
2      1.015606    0.534618    0.840635  04:05
3      0.686829    0.421678    0.870952  03:57
4      0.508045    0.351383    0.891746  03:53
5      0.418271    0.315520    0.900635  03:51
6      0.376608    0.286548    0.912063  03:52
7      0.319486    0.281573    0.911270  03:53
8      0.295100    0.260070    0.919048  03:51
9      0.278306    0.244136    0.922381  03:51
10     0.263577    0.235160    0.922381  03:51
11     0.227998    0.231522    0.925238  03:52
12     0.205606    0.223483    0.929048  03:51
13     0.209035    0.222402    0.929524  03:51
14     0.195011    0.215549    0.930159  03:51
15     0.188278    0.213108    0.930317  03:52
16     0.178093    0.209075    0.930794  03:51
17     0.176601    0.208589    0.932540  03:51
18     0.188212    0.205964    0.933492  03:51
19     0.175853    0.204608    0.933016  03:51

After only 20 epochs, both the training and validation losses have decreased considerably, indicating that the model is learning to classify image scenes.

Visualize results in the validation set

It is good practice to review the model's results vis-à-vis the ground truth. The code below picks random samples and shows the ground truth and model predictions side by side, enabling us to preview the results of the model within the notebook.

model.show_results(rows=4)

Here, with only 20 epochs, we can see reasonable results.

Accuracy assessment

arcgis.learn provides the plot_confusion_matrix() function that plots a confusion matrix of the model predictions to evaluate the model's accuracy.

model.plot_confusion_matrix()

The confusion matrix validates that the trained model is learning to classify scenes to different classes. The diagonal numbers show the number of scenes correctly classified as their respective categories.

Save the model

Now, we will save the model that we trained as a 'Deep Learning Package' ('.dlpk' format). A Deep Learning package is the standard format used to deploy deep learning models on the ArcGIS platform.

We will use the save() method to save the trained model. By default, it will be saved to the 'models' sub-folder within our training data folder.

model_name = "Nwpu_model1"
model.save(model_name)
WindowsPath('D:/NWPU/NWPU-RESISC45/models/Nwpu_model1')

Model inference

Before using the model for inference, we need to make some changes to the model's .emd file (Nwpu_model1.emd in our case). You can learn more about this file here.

By default, CropSizeFixed is set to 1 in the EMD file. We need to change CropSizeFixed to 0 so that the size of the tiles cropped around each feature is not fixed.

emd_path = os.path.join(data_path, "models", model_name, model_name + ".emd")
with open(emd_path, "r+") as emd_file:
    emd_json = json.load(emd_file)  # read the existing model definition
    emd_json["CropSizeFixed"] = 0  # allow variable-size crops around features
    emd_file.seek(0)  # rewind before overwriting the file contents
    json.dump(emd_json, emd_file, indent=4)
    emd_file.truncate()  # drop any leftover bytes from the old content

To perform inferencing in ArcGIS Pro, we need to create a feature class on the map using either the Create Feature Class tool or the Create Fishnet tool, for an area that has not already been seen by the model.
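If you prefer to script this step, a fishnet of polygon tiles can also be generated with arcpy. The sketch below is illustrative only: the output path, extent coordinates, and cell dimensions are hypothetical placeholders that you would replace with values covering your own area of interest.

import arcpy

# Hypothetical output location and extent; replace with your own values
arcpy.management.CreateFishnet(
    out_feature_class=r"D:\NWPU\inference.gdb\fishnet",
    origin_coord="500000 3600000",  # lower-left corner of the extent (x y)
    y_axis_coord="500000 3600010",  # point above the origin that orients the grid
    cell_width=256,  # cell size in map units
    cell_height=256,
    number_rows=0,  # 0 lets the tool derive rows/columns from the corner coordinates
    number_columns=0,
    corner_coord="510000 3610000",  # upper-right corner of the extent (x y)
    labels="NO_LABELS",
    geometry_type="POLYGON",
)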

We have also provided the Feature Class and the Model trained on the NWPU Dataset for reference. You can directly download these to run your own experiments from the links below.

Now, we will use the Classify Objects Using Deep Learning tool to perform inference. The parameters required to run the tool are:

  • Input Raster: High_Resolution_Imagery
  • Input Features: Output from the Create Feature Class or Create Fishnet tool.
  • Output Classified Objects Feature Class: Output feature class.
  • Model Definition: The EMD file of the model that we trained.
  • Class Label Field: Field name that will contain the predicted class label.
  • Environments: Set optimum Cell Size, Processing Extent and Processor Type.

In our experiments, a cell size of 1 m/pixel worked best for this model. A scripted version of this inference step is sketched below.
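The same inference can also be scripted with the Image Analyst module. The following is a minimal sketch rather than the exact workflow used here: the raster name, geodatabase paths, and label field are hypothetical, and running it requires an Image Analyst license.

import arcpy

arcpy.CheckOutExtension("ImageAnalyst")

# Environment settings (illustrative values)
arcpy.env.cellSize = 1  # roughly 1 m/pixel worked best in our tests
arcpy.env.processorType = "GPU"

# Hypothetical inputs; substitute your imagery, feature class, and model path
arcpy.ia.ClassifyObjectsUsingDeepLearning(
    in_raster="High_Resolution_Imagery",
    out_feature_class=r"D:\NWPU\inference.gdb\classified_scenes",
    in_model_definition=r"D:\NWPU\NWPU-RESISC45\models\Nwpu_model1\Nwpu_model1.emd",
    in_features=r"D:\NWPU\inference.gdb\fishnet",
    class_label_field="ClassLabel",
)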

Results

We selected an area that had not been seen by the model and generated the features in it using the Create Feature Class tool. We then used our model for classification. Below are the results.

We also created a fishnet using the Create Fishnet tool and fed it to our model for classification. This technique can be used to create preliminary data about an image. Based on the output, we can make inferences about the image, such as the total residential area, the total industrial area, and so on. Below is the map that we created from the results.

Conclusion

In this notebook, we demonstrated how to use the FeatureClassifier model from the ArcGIS API for Python to classify image scenes using training data from an external source.

References

  • Cheng, Gong, Junwei Han, and Xiaoqiang Lu. "Remote sensing image scene classification: Benchmark and state of the art." Proceedings of the IEEE 105, no. 10 (2017): 1865-1883.
