Detecting Palm Trees using Deep Learning

  • 🔬 Data Science
  • 🥠 Deep Learning and Object Detection

Coconuts and coconut products are an important commodity in the Tongan economy. Plantations, such as one in the town of Kolovai, have thousands of trees. Inventorying each of these trees by hand would require a lot of time and resources. Alternatively, tree health and location can be surveyed using remote sensing and deep learning.

In this notebook, we will train a deep learning model to detect palm trees in high-resolution imagery of the Kolovai region using the arcgis.learn module of the ArcGIS API for Python.

Export training data

The first step is to find imagery that shows Kolovai, Tonga, and has a fine enough spatial and spectral resolution to identify trees. Once we have the imagery, we'll export training samples to a format that can be used by a deep learning model.

Download the imagery

Accurate, high-resolution imagery is essential when extracting features. The model will only be able to identify palm trees if the pixel size is small enough to distinguish palm canopies. Additionally, to calculate tree health, we'll need an image with spectral bands that enable us to generate a vegetation health index. We'll find and download the imagery for this study from OpenAerialMap, an open-source repository of high-resolution, multispectral imagery.

  • Go to the OpenAerialMap website.

    In the interactive map view, you can zoom, pan, and search for imagery available anywhere on the planet. The map is broken up into grids. When you point to a grid box, a number appears. This number indicates the number of available images for that box.

  • In the search box, type Kolovai and press Enter. In the list of results, click Kolovai. The map zooms to Kolovai. This is a town on the main island of Tongatapu with a coconut plantation.
  • If necessary, zoom out until you see the label for Kolovai on the map. Click the grid box directly over Kolovai.
  • In the side pane, click Kolovai UAV4R Subset (OSM-Fit) by Cristiano Giovando.
  • Click the download button to download the raw .tif file. Save the image to a location of your choice.

Because of the file size, the download may take a few minutes. The default name of the file is 5b1b6fb2-5024-4681-a175-9b667174f48c.

The spatial resolution of the Kolovai imagery is 9 cm, and it contains three bands: Red, Green, and Blue. It is used as the 'Input Raster' for exporting the training data.

Get palm labels

# Connect to GIS
from arcgis.gis import GIS
gis = GIS("home")

The following feature layer collection contains two layers: labeled palm trees for a part of the Kolovai region and a mask that delineates the area where image chips will be created.

palm_label = gis.content.get('1bc2daa8960340ee92ea68ddb35ab4d4')
palm_label
kolovai_labels
data for exporting Feature Layer Collection by api_data_owner
Last Modified: June 13, 2022
0 comments, 1 views

Training data can be exported using the Export Training Data For Deep Learning tool available in ArcGIS Pro. For this example, we prepared the training data in ArcGIS Pro in the 'PASCAL Visual Object Classes' format, using a 'chip_size' of 448 px and a 'cell_size' of 0.085. The 'Input Raster' and the 'Input Feature Class' described above have been made available for exporting the required training data. We have also provided the exported training data in the next section, if you wish to skip this step.
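For reference, below is a minimal sketch of how the same export could be scripted with arcpy in an ArcGIS Pro Python environment. The raster path, output folder, and layer names are placeholders for the downloaded imagery, the palm labels, and the mask layer; adjust them to your own data before running.

import arcpy

arcpy.CheckOutExtension("ImageAnalyst")
arcpy.env.cellSize = 0.085  # cell size used for this example

arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster="kolovai_uav.tif",               # placeholder: downloaded OpenAerialMap image
    out_folder=r"C:\data\palm_chips",          # placeholder: output folder for image chips
    in_class_data="palm_labels",               # placeholder: labeled palm trees layer
    image_chip_format="TIFF",
    tile_size_x=448,                           # matches the chip_size used later in prepare_data
    tile_size_y=448,
    metadata_format="PASCAL_VOC_rectangles",   # 'PASCAL Visual Object Classes' format
    in_mask_polygons="chip_area_mask"          # placeholder: mask delineating the chip area
)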

Train DETReg model

Necessary imports

import os
import zipfile
from pathlib import Path
from arcgis.learn import prepare_data, DETReg

Get training data

training_data = gis.content.get('e7878a3fab0a400f9665d800972395f1')
training_data
detecting_palm_trees_using_deep_learning
Image Collection by api_data_owner
Last Modified: June 13, 2022
0 comments, 0 views
filepath = training_data.download(file_name=training_data.name)  # download the zipped training data
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)  # extract the chips next to the downloaded zip
data_path = Path(os.path.splitext(filepath)[0])  # folder containing the extracted training data
data_path
WindowsPath('C:/Users/pri10421/AppData/Local/Temp/detecting_palm_trees_using_deep_learning')
  • path: path of the folder (or list of folders) containing the training data.
  • batch_size: number of images the model trains on in each step within an epoch; depends on the memory of your graphics card.
  • chip_size: the same as the tile size used while exporting the dataset.
  • class_mapping: dictionary that maps class values to class names.
data = prepare_data(data_path, batch_size=8, chip_size=448, class_mapping={'1': 'palm'})

Visualize training data

To get a sense of what the training data looks like, the show_batch() method will randomly pick a few training chips and visualize them.

data.show_batch()
<Figure size 576x576 with 4 Axes>
data.classes
['background', 'palm']

Load model architecture

The DETReg model pretrains the entire object detection network, including the object localization and embedding components. During pretraining, DETReg predicts object localizations that match the localizations from an unsupervised region proposal generator and simultaneously aligns the corresponding feature embeddings with embeddings from a self-supervised image encoder. With DETReg integrated into arcgis.learn, we can train a deep learning model with a small amount of training data (~50 images).

detreg_model = DETReg(data)
detreg_model.lr_find()
<Figure size 432x288 with 1 Axes>
0.0001584893192461114

We will train the model for 100 epochs using the learning rate suggested by lr_find().

detreg_model.fit(epochs=100, lr=0.0001584893192461114)
epoch  train_loss  valid_loss  time
0      27.228205   21.371485   00:05
1      25.111946   19.466047   00:05
2      24.023035   19.080370   00:05
3      23.042629   18.077059   00:05
4      22.433546   17.609043   00:05
5      21.971066   17.477543   00:05
6      21.543530   17.539631   00:05
7      21.194422   16.902733   00:05
8      20.830856   16.439713   00:05
9      20.541531   16.078609   00:05
10     20.211823   15.211130   00:06
11     19.793268   15.069654   00:06
12     19.236372   17.090498   00:05
13     18.841959   17.125479   00:05
14     18.371445   14.950435   00:06
15     18.024601   15.850225   00:05
16     17.745529   15.506917   00:05
17     17.320305   14.082644   00:05
18     16.989925   14.379087   00:05
19     16.624977   14.170962   00:05
20     16.306162   14.255770   00:05
21     16.005445   14.173002   00:05
22     15.673724   13.711618   00:05
23     15.444551   13.366743   00:05
24     15.234297   12.996747   00:05
25     14.945394   12.927656   00:05
26     14.677604   11.008638   00:05
27     14.339445   11.804110   00:05
28     13.932788   10.150434   00:05
29     13.616297   10.496364   00:05
30     13.328455   9.226864    00:05
31     12.993720   9.059042    00:05
32     12.675693   8.921451    00:05
33     12.361510   8.618605    00:05
34     12.109262   8.187807    00:05
35     11.843448   8.006143    00:05
36     11.580905   7.839478    00:05
37     11.396687   8.260223    00:05
38     11.183165   8.255926    00:05
39     11.038754   8.601174    00:05
40     10.867735   8.106578    00:05
41     10.633966   7.477275    00:05
42     10.513922   7.425682    00:05
43     10.337910   7.741408    00:05
44     10.160110   7.209971    00:05
45     10.002899   7.223466    00:05
46     9.876152    6.967088    00:05
47     9.810612    6.901792    00:05
48     9.686474    7.057906    00:05
49     9.444827    6.781811    00:05
50     9.302439    6.902074    00:05
51     9.241193    6.639427    00:05
52     9.113286    6.572421    00:06
53     8.969366    6.207922    00:05
54     8.828690    6.044482    00:05
55     8.696461    6.046604    00:05
56     8.519960    5.802986    00:06
57     8.325386    5.654331    00:05
58     8.196836    5.668161    00:05
59     8.051723    5.606136    00:05
60     7.943853    5.424738    00:05
61     7.860847    5.230558    00:05
62     7.759712    5.601700    00:05
63     7.606007    5.102762    00:05
64     7.475172    5.111871    00:05
65     7.379089    4.985932    00:06
66     7.253640    5.120059    00:05
67     7.168691    4.926324    00:06
68     7.050176    4.991344    00:05
69     6.943080    4.863623    00:06
70     6.799272    4.917822    00:05
71     6.720453    4.866461    00:05
72     6.641360    5.344783    00:05
73     6.518205    5.011573    00:05
74     6.420632    4.989506    00:05
75     6.319081    4.769677    00:05
76     6.254847    4.674143    00:05
77     6.178284    4.656699    00:05
78     6.088454    5.013444    00:05
79     6.066946    4.740726    00:05
80     6.027186    4.595988    00:05
81     5.943243    4.629683    00:05
82     5.904345    4.553323    00:05
83     5.871964    4.530498    00:05
84     5.818938    4.541087    00:05
85     5.801239    4.449792    00:05
86     5.812011    4.436050    00:05
87     5.764916    4.414875    00:05
88     5.715053    4.432138    00:05
89     5.690810    4.457049    00:05
90     5.647766    4.383811    00:05
91     5.622376    4.369289    00:05
92     5.582300    4.385396    00:06
93     5.614410    4.412127    00:06
94     5.540917    4.408393    00:05
95     5.503606    4.390845    00:05
96     5.456139    4.370176    00:05
97     5.457549    4.359400    00:05
98     5.518940    4.356925    00:05
99     5.493828    4.356635    00:05

As we can see, the training and validation losses were still decreasing at the 99th epoch, so there could be room for further training (a sketch of continuing training follows the loss plot below).

detreg_model.plot_losses()
<Figure size 432x288 with 1 Axes>
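Because the losses are still trending downward, the model could optionally be trained for additional epochs before evaluation. A minimal sketch, reusing the learning rate found earlier (the extra epoch count here is arbitrary):

detreg_model.fit(epochs=20, lr=0.0001584893192461114)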

Visualize results in validation set

It is good practice to review the results of the model vis-à-vis the ground truth. The code below picks random samples and shows the ground truth and model predictions side by side. This lets us preview the results of the model we trained.

detreg_model.show_results(thresh=0.4)
<Figure size 576x1440 with 10 Axes>

Accuracy assessment

arcgis.learn provides the average_precision_score() method that computes the average precision of the model on the validation set for each class.

detreg_model.average_precision_score()
100.00% [1/1 00:00<00:00]
{'palm': 0.9461714471808662}

Save the model

We will save the trained model as a 'Deep Learning Package' ('.dlpk' format). The Deep Learning package is the standard format used to deploy deep learning models on the ArcGIS platform.

We will use the save() method to save the trained model. By default, it will be saved to the 'models' sub-folder within our training data folder.

detreg_model.save('palm_e100', publish=True)
Computing model metrics...
Published DLPK Item Id: 9406080ccb6b499b9e2651c7b36f969d
WindowsPath('C:/Users/pri10421/AppData/Local/Temp/detecting_palm_trees_using_deep_learning/models/palm_e100')
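If needed, the saved model can be loaded back later for inference or further training. A minimal sketch, assuming DETReg supports the from_model() method like other arcgis.learn models and that the model was saved to the location returned above:

emd_path = data_path / 'models' / 'palm_e100' / 'palm_e100.emd'  # path mirrors the save() output above
detreg_model = DETReg.from_model(emd_path, data)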

Detect palm trees with the trained deep learning model

The bulk of the work in extracting features from imagery is preparing the data, creating training samples, and training the model. Now that these steps are complete, we'll use the trained model to detect palm trees in the desired imagery. Object detection typically requires multiple tests to achieve the best results, and there are several parameters you can adjust to get the best performance from your model. To test these parameters quickly, we can first run detection on a small section of the image and, once satisfied with the results, extend the detection to the full image. A sketch of running the detection tool follows, and the palm trees detected for this study are provided as a feature layer collection below.
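Below is a minimal sketch of running the Detect Objects Using Deep Learning tool via arcpy in an ArcGIS Pro Python environment. The raster path, output feature class, and .dlpk path are placeholders; model arguments such as the confidence threshold and padding can be tuned in the tool.

import arcpy

arcpy.CheckOutExtension("ImageAnalyst")

arcpy.ia.DetectObjectsUsingDeepLearning(
    in_raster="kolovai_uav.tif",                                # placeholder: imagery to run detection on
    out_detected_objects=r"C:\data\palms.gdb\detected_palms",   # placeholder: output feature class
    in_model_definition=r"C:\models\palm_e100\palm_e100.dlpk"   # placeholder: .dlpk saved above
)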

fc = gis.content.get('4ca014288f834385a7b97a2c4534b57d')
fc
kolovai_palm_detected
kolovai_palm_detected Feature Layer Collection by api_data_owner
Last Modified: June 13, 2022
0 comments, 1 views
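As a quick sanity check, the published results can be queried to count how many palm trees were detected. A minimal sketch, assuming the detections are stored in the first layer of the collection:

palm_layer = fc.layers[0]
palm_layer.query(return_count_only=True)  # number of detected palm trees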

Conclusion

In this notebook, we saw how the DETReg deep learning model and high-resolution aerial imagery can be used to detect palm trees, an important task for monitoring and conservation purposes. We used only a handful of image chips as training data and trained a good model with DETReg, available in arcgis.learn. We trained the deep learning model for 100 epochs and then deployed it to detect all the palm trees in the Kolovai imagery. The results are highly accurate, and almost all the palm trees in the region have been detected.
