How the feature classifier works


The goal of feature classification is to determine the class of each feature (e.g. building). For instance, it could be used to determine whether a building is damaged after a natural disaster. Feature classification requires two inputs:

  • An input raster that contains the spectral bands,
  • A feature class that defines the location (e.g. outline or bounding box) of each feature.

There are two major steps in feature classification. First, we export training samples based on the geographical extent of each feature. Once the training samples are exported, they can be used as the training input for a deep learning based classification algorithm to train a feature classifier.

Export training samples

The process of exporting training samples is slightly different from pixel-based classification and object detection. In feature classification, we extract training samples based on the extent of each individual feature defined by the feature class. For each training sample, there is an associated class label that comes from the feature class attribute table. Optionally, we can also define a buffer size to extract a larger neighbourhood around the feature so that more spatial context is available to the classification model, which makes distinguishing different classes easier.

Figure 1. An example of exporting training data for a feature classifier

In the example above, there are three buildings in the original data. The one at the top is damaged and the other two are undamaged. The export therefore produces three training samples with the corresponding labels. Here we used a buffer size of 50 meters, so more surrounding context is available to feed to the model in the next step.
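The buffered-extent logic above can be sketched in a few lines of plain Python. Note this is an illustrative toy, not the actual export tool: the helper name buffered_extent and the hard-coded numbers are assumptions made for this example.

```python
def buffered_extent(bounds, buffer_size, cell_size):
    """Expand a feature's bounding box (xmin, ymin, xmax, ymax) by a
    buffer distance in map units, then report the chip size in pixels."""
    xmin, ymin, xmax, ymax = bounds
    # Grow the box outward by the buffer on every side.
    xmin -= buffer_size
    ymin -= buffer_size
    xmax += buffer_size
    ymax += buffer_size
    # Convert map units to raster rows/columns at the given cell size.
    cols = int(round((xmax - xmin) / cell_size))
    rows = int(round((ymax - ymin) / cell_size))
    return (xmin, ymin, xmax, ymax), (rows, cols)

# A building footprint 20 m on a side, a 50 m buffer, and a 1 m raster:
extent, shape = buffered_extent((100, 100, 120, 120), 50, 1.0)
# The exported training sample covers a 120 x 120 pixel neighbourhood
# around the building, and its label comes from the attribute table.
```

Each resulting chip is paired with the class label ("damaged" or "undamaged") read from the feature class attribute table, which is exactly what Figure 1 depicts.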

Deep learning based classification algorithm

Once the training samples are ready, this becomes a standard multi-class image classification problem in computer vision: taking an input image and outputting a class. Image classification can be solved with convolutional neural networks (CNNs), and there are many CNN-based image classification algorithms. Most of them consist of a backbone CNN architecture (e.g. ResNet, LeNet-5, AlexNet, VGG-16) followed by a softmax layer. You can refresh your CNN knowledge by going through the short paper “A guide to convolution arithmetic for deep learning”.
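The "backbone followed by a softmax layer" idea can be sketched with NumPy. This is a toy illustration, not arcgis.learn internals: the backbone output is simulated as a random feature vector, and the weights are random rather than trained.

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1."""
    shifted = logits - logits.max()  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

rng = np.random.default_rng(0)

# Pretend the CNN backbone (e.g. a ResNet) has already reduced an input
# chip to a 512-dimensional feature vector.
features = rng.standard_normal(512)

# A final fully connected layer maps the features to one score per class,
# here "damaged" vs "undamaged"; softmax turns the scores into probabilities.
weights = rng.standard_normal((2, 512)) * 0.01
bias = np.zeros(2)
probs = softmax(weights @ features + bias)

predicted_class = ["damaged", "undamaged"][int(probs.argmax())]
```

Training adjusts the backbone and the fully connected layer so that, for each exported chip, the probability of the correct label from the attribute table is maximized.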

Implementation in arcgis.learn

arcgis.learn allows us to define a feature classifier architecture with a single line of code. For example:

feature_classifier = arcgis.learn.FeatureClassifier(data, backbone=None, pretrained_path=None)

data is the data object returned by the prepare_data function. backbone creates the base of the classifier (resnet34 by default), while pretrained_path points to where a pre-trained model is saved.
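A typical end-to-end workflow might look like the following sketch. The path, batch size, epoch count, and learning rate are illustrative assumptions, not recommended values:

```python
from arcgis.learn import prepare_data, FeatureClassifier

# Folder produced by the export step (path is illustrative).
data = prepare_data(r"C:\data\building_chips", batch_size=16)

# resnet34 backbone by default; pass a different backbone to change it.
model = FeatureClassifier(data)

# Train for a few epochs (hyperparameters are illustrative).
model.fit(epochs=10, lr=0.001)

# Save the trained model for later inference.
model.save("building_damage_classifier")
```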

For more information about the API, please go to the API reference.
