Feature Categorization using Satellite Imagery and Deep Learning¶
- 🔬 Data Science
- 🥠 Deep Learning and Feature Classifier
Introduction and methodology¶
This sample notebook demonstrates the use of deep learning capabilities in ArcGIS to perform feature categorization. Specifically, we are going to perform automated damage assessment of homes after the devastating Woolsey Fire. This is a critical task in damage claim processing, and using deep learning can speed up the process and make it more efficient. The workflow consists of three major steps: (1) extract training data, (2) train a deep learning feature classifier model, (3) perform inference using the model.
Methodology¶
Part 1 - Export training data for deep learning¶
To export training data for feature categorization, we need two inputs:
- An input raster that contains the spectral bands,
- A feature class that defines the location (e.g. outline or bounding box) and label of each building.
Import ArcGIS API for Python and get connected to your GIS¶
import os
from pathlib import Path
import arcgis
from arcgis import GIS
from arcgis import learn
from arcgis.raster import analytics
from arcgis.features.analysis import create_buffers, join_features
from arcgis.learn import prepare_data, FeatureClassifier, classify_objects, Model, list_models
arcgis.env.verbose = True
gis = GIS('home')  # GIS hosting the sample data
gis_ent = GIS('https://pythonapi.playground.esri.com/portal', 'arcgis_python', 'amazing_arcgis_123')  # GIS with raster analytics configured
Prepare data that will be used for training data export¶
A building footprints feature layer will be used to define the location and label of each building.
building_footprints = gis.content.search('buildings_woolsey', item_type='Feature Layer Collection')[0]
building_footprints
We will buffer the building footprints layer by 150 m using create_buffers. With the 150 m buffer, the exported training chips will cover the surroundings of the houses, which helps the model learn the difference between damaged and undamaged houses.
# buffer each footprint by 150 m so exported chips include the surroundings
building_buffer = create_buffers(building_footprints,
                                 distances=[150],
                                 units='Meters',
                                 dissolve_type='None',
                                 ring_type='Disks',
                                 side_type='Full',
                                 end_type='Round',
                                 output_name='buildings_buffer',
                                 gis=gis)
building_buffer
Aerial imagery of West Malibu will be used as the input raster that contains the spectral bands. This raster will be used both for exporting the training data and for inferencing the results.
gis2 = GIS("https://ivt.maps.arcgis.com")
input_raster = gis2.content.search("111318_USAA_W_Malibu")[0]
input_raster
Specify a folder name in the raster store that will be used to store our training data¶
# get the datastores registered with the raster analytics server
ds = analytics.get_datastores(gis=gis_ent)
ds
ds.search()
# reference the raster store that will hold the exported image chips
rasterstore = ds.get("/rasterStores/LocalRasterStore")
rasterstore
samplefolder = "feature_classifier_sample"
samplefolder
Export training data using arcgis.learn¶
We are now ready to export training data using the export_training_data() method in the arcgis.learn module. In addition to the feature class, raster layer, and output folder, we also need to specify a few other parameters, such as tile_size (size of the image chips), stride_size (distance to move each time when creating the next image chip), chip_format (TIFF, PNG, or JPEG), and metadata_format (how we are going to store those training labels). Note that unlike Unet and object detection workflows, the metadata format is set to Labeled_Tiles here. More detail can be found here.
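For illustration, such a call might look like the sketch below. The tile and stride sizes, the classvalue_field name, and the cellSize are assumptions chosen for the sketch, not values prescribed by this sample; the export extent can additionally be restricted through the context parameter, as described next.
# Illustrative sketch only: tile/stride sizes, classvalue_field, and cellSize
# are assumed values, not settings prescribed by this sample.
export = learn.export_training_data(input_raster=input_raster,
                                    input_class_data=building_buffer.layers[0],
                                    chip_format='TIFF',
                                    tile_size={'x': 600, 'y': 600},   # assumed chip size
                                    stride_size={'x': 0, 'y': 0},     # assumed: no chip overlap
                                    metadata_format='Labeled_Tiles',
                                    classvalue_field='class_labels',  # assumed label field
                                    output_location=samplefolder,
                                    context={'startIndex': 0, 'exportAllTiles': False, 'cellSize': 0.1},
                                    gis=gis_ent)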
Depending on the size of your data, the tile and stride size, and your computing resources, this operation can take a while. In our experiments, it took between 15 minutes and 2 hours. Also, do not re-run it if you have already run it once, unless you would like to update the settings.
We will export the training data for a small sub-region of our study area, and the whole study area will be used for inferencing the results. We will create a map widget, zoom in to the western corner of our study area, and get the extent of the zoomed-in map. We will use this extent in the Export training data using deep learning function.
# add the building_buffer layer to the web map
m1 = gis.map('Malibu')
m1.add_layer(building_buffer)
m1
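Once the widget has been zoomed to the area of interest, its extent can be read off and passed to the export through the context parameter. This is a minimal sketch, assuming the widget's extent property and the context keys shown here:
# read the current extent of the zoomed-in map widget
zoom_extent = m1.extent

# sketch: restrict the export to this sub-region by adding the extent to the
# context dictionary used by export_training_data() (assumed wiring)
export_context = {'extent': zoom_extent,
                  'startIndex': 0,
                  'exportAllTiles': False,
                  'cellSize': 0.1}  # assumed cell size, as above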