
ArcGIS API for Python


Shipwrecks detection using bathymetric data

  • 🔬 Data Science
  • 🥠 Deep Learning and Instance Segmentation


In this notebook, we will use bathymetry data provided by NOAA to detect shipwrecks in the Shell Bank Basin area near New York City in the United States. A Bathymetric Attributed Grid (BAG) is a two-band imagery format in which one band stores elevation and the other stores uncertainty (the uncertainty of the elevation value). We applied deep learning methods for the detection after pre-processing the data, as explained in Preprocess bathymetric data.

One important pre-processing step is applying the Shaded Relief function provided in ArcGIS, which NOAA also uses in one of its BAG visualizations here. Shaded Relief is a 3D representation of the terrain that distinctly differentiates the shipwrecks from the background and reveals them. It is created by merging elevation-coded imagery with the Hillshade method, returning a 3-band imagery that is much easier to interpret than the raw bathymetry image. Subsequently, the images are exported as "RCNN Masks" to train a MaskRCNN model, provided by the ArcGIS API for Python, to detect the shipwrecks.
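The Hillshade component of shaded relief can be illustrated with a short NumPy sketch of the standard hillshade formula (sun azimuth and altitude combined with surface slope and aspect). This is a minimal illustration of the general technique, not the exact implementation ArcGIS applies:

```python
import numpy as np

def hillshade(elevation, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Standard hillshade: illumination of a surface by a light source.

    Illustrative sketch only; the ArcGIS Hillshade function may differ
    in edge handling and z-factor treatment.
    """
    az = np.radians(360.0 - azimuth_deg + 90.0)   # convert to math convention
    alt = np.radians(altitude_deg)
    # Surface gradients (per-axis finite differences of elevation).
    dy, dx = np.gradient(elevation, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1) * 255

# A flat surface is uniformly lit at sin(altitude) of full brightness.
flat = np.zeros((5, 5))
print(hillshade(flat)[0, 0])
```

In the actual workflow this computation is wrapped inside the raster function template, so there is no need to run it by hand; the sketch only shows where the 3D appearance comes from.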

This notebook presents the use of deep learning methods to automate the identification of submerged shipwrecks, which could be useful for hydrographic offices, archaeologists, and historians who would otherwise spend a lot of time doing this manually.

Necessary imports

In [2]:
from datetime import datetime as dt

from arcgis.gis import GIS
from arcgis.raster.functions import RFT  
from arcgis.learn import prepare_data, MaskRCNN

Connect to your GIS

In [3]:
gis = GIS(url='', username='arcgis_python', password='amazing_arcgis_123')

Get the data for analysis

In [4]:
bathymetry_img = gis.content.search("title: Bathymetrydata owner:api_data_owner",
                                    "Imagery Layer")[0]
bathymetry_img
Bathymetrydata (Imagery Layer) by api_data_owner
Last Modified: March 18, 2020
0 comments, 3 views
In [5]:
training_data_wrecks = gis.content.search('title:training_data_wrecks owner:api_data_owner',
                                          "Map Image Layer")[0]
training_data_wrecks
training_data_wrecks (Map Image Layer) by api_data_owner
Last Modified: March 16, 2020
0 comments, 6 views

Preprocess bathymetric data

We are applying some preprocessing to the bathymetry data so that we can export it for training a deep learning model. The preprocessing steps include mapping 'No Data' pixel values to '-1' and applying the Shaded Relief function to the output raster. The resulting raster will be a 3-band imagery that we can use with the Export Training Data For Deep Learning tool in ArcGIS Pro 2.5 to export training data for our deep learning model.

All the preprocessing steps are recorded in the form of a Raster function template which you can use in ArcGIS Pro to generate the processed raster.
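The 'No Data' remapping step can be sketched in NumPy. The array below is a hypothetical elevation band with NaN marking 'No Data' pixels; the raster function template performs the equivalent remap on the actual BAG data:

```python
import numpy as np

# Hypothetical elevation band (metres below sea level); NaN = 'No Data'.
elevation = np.array([[-12.4, np.nan],
                      [-15.1, -13.7]])

# Map 'No Data' pixels to -1, as in the preprocessing step above.
filled = np.where(np.isnan(elevation), -1.0, elevation)
print(filled)
```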

In [6]:
shaded_relief_rft = gis.content.search("title: shaded_Relief owner:api_data_owner",
                                       "Raster Function Template")[0]
shaded_relief_rft
RFT_Shaded_Relief-RasterFunction (Raster function template) by api_data_owner
Last Modified: March 18, 2020
0 comments, 16 views
In [7]:
shaded_relief_ob = RFT(shaded_relief_rft)
In [8]:
# ! conda install -c anaconda graphviz -y

We need to add this custom raster function to ArcGIS Pro using the Import functions option in the 'Custom' tab of 'Raster Functions'.

Once we apply the raster function template to the bathymetry data, we will get the output image below. We will use this image to export training data for our deep learning model.

In [10]:
shaded_relief = gis.content.search('title:shaded_Relief owner:api_data_owner',
                                   "Map Image Layer")[2]
shaded_relief
shaded_Relief_CopyRaster (Map Image Layer) by api_data_owner
Last Modified: February 28, 2020
0 comments, 63 views

Export training data

Export the training data using the 'Export Training Data For Deep Learning' tool; click here for detailed documentation:

  • Set 'shaded_relief' as the Input Raster.
  • Set a location where you want to export the training data in the Output Folder parameter; it can be an existing folder, or the tool will create it for you.
  • Set 'training_data_wrecks' as the input to the Input Feature Class Or Classified Raster parameter.
  • Set the Class Field Value as 'ecode'.
  • Set the Image Format as 'TIFF format'.
  • Tile Size X & Tile Size Y can be set to 256.
  • Stride X & Stride Y can be set to 50.
  • Select 'RCNN Masks' as the Meta Data Format because we are training a 'MaskRCNN' model.
  • In the 'Environments' tab, set an optimum Cell Size. For this example, as we are performing the analysis on bathymetry data with a 50 cm resolution, we used '0.5' as the cell size.
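Tile size and stride together determine how many overlapping image chips get exported. As a rough back-of-the-envelope sketch (the tool's exact edge handling may differ), the chip count along one raster axis can be estimated as:

```python
import math

def chips_along_axis(extent_px, tile=256, stride=50):
    """Approximate number of chips along one raster axis.

    Illustrative only; the Export Training Data For Deep Learning tool's
    actual edge and partial-tile handling may differ.
    """
    if extent_px < tile:
        return 0
    return math.floor((extent_px - tile) / stride) + 1

# e.g. a hypothetical 2000 x 2000 px raster with 256 px tiles, 50 px stride:
n = chips_along_axis(2000)
print(n, "chips per axis ->", n * n, "total chips")  # 35 per axis, 1225 total
```

A small stride relative to the tile size produces heavily overlapping chips, which increases the effective training set size at the cost of export time and disk space.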


Train the model

Now that we have exported our training data, we will train our model using the ArcGIS API for Python. We will be using the arcgis.learn module, which contains tools and deep learning capabilities. Documentation is available here to install and set up the environment.

Prepare data

We can always apply multiple transformations to our training data when training a model, which can help the model generalize better. Though we apply some standard data augmentations, we can enhance them further based on the data at hand to increase the data size and avoid overfitting.

Let us look at how we can do this using fastai's image transformation library.

In [11]:
from fastai.vision.transform import crop, rotate, brightness, contrast, rand_zoom
In [12]:
train_tfms = [rotate(degrees=30,                              # rotate the image, with degrees fixed to a value,
                     p=0.5),                                  # applied with probability p=0.5.
              crop(size=224,                                  # crop the image to return an image of size 224. The position
                   p=1.,                                      # is given by (col_pct, row_pct), with col_pct and row_pct
                   row_pct=(0, 1),                            # normalized between 0 and 1.
                   col_pct=(0, 1)),
              brightness(change=(0.4, 0.6)),                  # apply a change in brightness of the image.
              contrast(scale=(1.0, 1.5)),                     # apply scaling to the contrast of the image.
              rand_zoom(scale=(1., 1.2))]                     # randomized version of zoom.

val_tfms = [crop(size=224,                                    # crop the images to the same size for the validation
                 p=1.0)]                                      # dataset as for the training dataset.

transforms = (train_tfms, val_tfms)                           # tuple containing transformations for data augmentation 
                                                              # of training and validation datasets respectively.

We will specify the path to our training data and a few hyperparameters.

  • path: path of the folder containing the training data.
  • batch_size: the number of images your model will train on in each step inside an epoch; it depends directly on the memory of your graphics card.
  • transforms: tuple containing the transforms for data augmentation of the training and validation datasets respectively.

This function will return a databunch, which we will use in the next step to train the model.
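As a rough illustration of how batch_size relates to the number of training steps per epoch (the chip count below is a hypothetical value; prepare_data reads the real one from the exported folder):

```python
import math

n_training_chips = 1000   # hypothetical number of exported image chips
batch_size = 8            # same value we pass to prepare_data below

# Each epoch visits every chip once, batch_size chips per step.
steps_per_epoch = math.ceil(n_training_chips / batch_size)
print(steps_per_epoch)    # -> 125
```

Halving the batch size doubles the steps per epoch but roughly halves the GPU memory needed, which is the usual knob to turn when training runs out of memory.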

In [13]:
path = r'data\training\data\shipwrecks'
In [ ]:
data = prepare_data(path=path, batch_size=8, transforms=transforms)

Visualize a few samples from your training data

To make sense of the training data, we will use the show_batch() method in arcgis.learn. This method randomly picks a few samples from the training data and visualizes them.

rows: number of rows we want to see the results for.

In [8]: