Train Deep Learning Model

URL: https://<rasteranalysistools-url>/TrainDeepLearningModel
Methods: GET
Version Introduced: 10.8

Description


The TrainDeepLearningModel task is used to train a deep learning model using the output from the ExportTrainingDataforDeepLearning operation. It generates the deep learning model package (.dlpk) and adds it to an enterprise portal. You can also use this task to write the deep learning model package to a file share data store location.

New at 11.2

Cloud store and cloud raster store support was added for the in_folder and output_name parameters.

Portal item URLs are also supported as input for the pretrained_model parameter.

Request parameters


in_folder

(Required)

The input location for the training sample data. It can be the path of the output location in the file share data store, file share raster store, cloud data store, or cloud raster store, or a shared file system path. The training sample data folder must be the output from the ExportTrainingDataforDeepLearning operation, containing image and label folders, as well as the JSON model definition file written by the tool.

The following are examples of supported paths:

Examples

//File share data store path examples
in_folder=/fileShares/yourFileShareFolderName/trainingSampleData
in_folder={"uri":"/fileShares/yourFileShareFolderName/trainingSampleData"}

//File share raster store path example
in_folder=/rasterStores/yourRasterStoreFolderName/trainingSampleData

//Cloud data store path example
in_folder=/cloudStores/yourCloudDatastoreName/trainingSampleData

//Cloud raster store path example
in_folder=/rasterStores/yourCloudRasterStoreName/trainingSampleData

//Shared file system path example
in_folder=\\serverName\deepLearning\trainingSampleData

//Multiple input folders example
in_folder=/fileShares/yourFileShareFolderName/trainingSampleDataA,/fileShares/yourFileShareFolderName/trainingSampleDataB
in_folder={"uris":["/fileShares/yourFileShareFolderName/trainingSampleDataA","/fileShares/yourFileShareFolderName/trainingSampleDataB"]}

output_name

(Required)

This is the output location for the trained deep learning model package (.dlpk). It can be a JSON object representing the output .dlpk name that will be added as a portal item, or a string of the folder path in the file share data store, file share raster store, cloud data store, or cloud raster store. The data store must be registered on the server.

Example:

//Output dlpk name
output_name={"name": "trainedModel"}
output_name={"name": "trainedModel","folderId":"dfwerfbd3ec25584d0d8f4"}

//File share data store path
output_name=/fileShares/filesharename/folder

//Cloud data store path
output_name=/cloudStores/yourCloudStoreName/folder

//Raster store path
output_name=/rasterStores/yourFileShareRasterStoreName/folder
output_name=/rasterStores/yourCloudRasterStoreName/folder

//File share data store path as a JSON object
output_name={"uri":"/fileShares/yourFileShareFolderName/trainedModel"}

model_type

(Required)

The model type to use for training the deep learning model. This parameter supports model types for image translation, object classification, object detection, object tracking, panoptic segmentation, and pixel classification. The supported values for each type of processing are listed below.

Image translation values: PIX2PIX | CYCLEGAN | SUPERRESOLUTION | PIX2PIXHD

Object classification values: FEATURE_CLASSIFIER | IMAGECAPTIONER

Object detection values: SSD | RETINANET | MASKRCNN | YOLOV3 | FASTERRCNN | MMDETECTION | DETREG

Object tracker values: SIAMMASK | DEEPSORT

Panoptic segmentation values: MAXDEEPLAB

Pixel classification values: UNET | PSPNET | DEEPLAB | BDCN_EDGEDETECTOR | HED_EDGEDETECTOR | MULTITASK_ROADEXTRACTOR | CONNECTNET | CHANGEDETECTOR | MMSEGMENTATION
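
For example, a U-Net pixel classification model can be requested as follows:

model_type=UNET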

arguments

(Optional)

This is where you list additional deep learning parameters and arguments for experiments and refinement, such as a confidence threshold for adjusting sensitivity. The names of the arguments are populated from reading the Python module.

When you set model_type to SSD, the following arguments will be used:

  • grids —The number of grids the image will be divided into for processing. Setting this argument to 4 means the image will be divided into 4 x 4 or 16 grid cells. If no value is specified, the optimal grid value will be calculated based on the input imagery.
  • zooms —The number of zoom levels each grid cell will be scaled up or down. Setting this argument to 1 means all the grid cells will remain at the same size or zoom level. A zoom level of 2 means all the grid cells will become twice as large (zoomed in 100 percent). Providing a list of zoom levels means all the grid cells will be scaled using all the numbers in the list. The default is 1.0.
  • ratios —The list of aspect ratios to use for the anchor boxes. In object detection, an anchor box represents the ideal location, shape, and size of the object being predicted. Setting this argument to [1.0,1.0], [1.0, 0.5] means the anchor box is a square (1:1) or a rectangle in which the horizontal side is half the size of the vertical side (1:0.5). The default is [1.0, 1.0].

When you set model_type to any of the pixel classification models (PSPNET, UNET, or DEEPLAB), the following arguments will be used:

  • USE_UNET —The U-Net decoder will be used to recover data once the pyramid pooling is complete. The default is True. This argument is specific to the PSPNET model.
  • PYRAMID_SIZES —The number and size of convolution layers to be applied to the different subregions. The default is [1,2,3,6]. This argument is specific to the PSPNET model.
  • MIXUP —Specifies whether mixup augmentation and mixup loss will be used. The default is False.
  • CLASS_BALANCING —Specifies whether the cross-entropy loss will be balanced inversely to the frequency of pixels per class. The default is False.
  • FOCAL_LOSS —Specifies whether focal loss will be used. The default is False.
  • IGNORE_CLASSES —The list of class values on which the model will not incur loss.

When you set model_type to RETINANET, the following arguments will be used:

  • SCALES —The number of scale levels each cell will be scaled up or down. The default is [1, 0.8, 0.63].
  • RATIOS —The aspect ratio of the anchor box. The default is [0.5,1,2].
  • MONITOR —Specifies the metric to monitor for checkpointing and early stopping during training. Available metrics are valid_loss, accuracy, miou, and dice. The default is valid_loss.

All model types support the chip_size argument, which is the chip size of the tiles in the training samples. The image chip size is extracted from the .emd file in the input folder.

Syntax: The name-value pairs of arguments, formatted as a JSON object.

Example

arguments={"name1": "value1", "name2": "value2"}

batch_size

(Optional)

The number of training samples to be processed for training at one time. If the server has a powerful GPU, this number can be increased to 16, 36, 64, and so on.

Example

batch_size=4

max_epochs

(Optional)

The maximum number of epochs for training the model. One epoch means the whole training dataset will be passed forward and backward through the deep neural network once.

Example

max_epochs=20

learning_rate

(Optional)

The rate at which the weights are updated during training. It is a small positive value between 0.0 and 1.0. If the learning rate is set to 0, the optimal learning rate will be extracted from the learning curve during the training process.

Example

learning_rate=0

backbone_model

(Optional)

Specifies the preconfigured neural network to be used as an architecture for training the new model. See the Backbone model values section below for more information.

Values: DARKNET53 | DENSENET121 | DENSENET161 | DENSENET169 | DENSENET201 | MOBILENET_V2 | REID_V1 | REID_V2 | RESNET18 | RESNET34 | RESNET50 | RESNET101 | RESNET152 | VGG11 | VGG11_BN | VGG13 | VGG13_BN | VGG16 | VGG16_BN | VGG19 | VGG19_BN

Example

backbone_model=RESNET34

validation_percent

(Optional)

The percentage of training sample data that will be used for validating the model.

Example

validation_percent=10

pretrained_model

(Optional)

The pretrained model to be used for fine-tuning the new model. The value is a .dlpk portal item, referenced by item ID or URL.

Example

pretrained_model={"itemId": "8cfbd3ec25584d0d8fed23b8ff7c43b"}
pretrained_model={"url":"https://www.arcgis.com/sharing/rest/content/items/916e02960d9e495baeb4d1d2ff4055d0"}

stop_training

(Optional)

Specifies whether early stopping will be implemented. If true, the model training will stop when the model is no longer improving, regardless of the maximum epochs specified. This is the default. If false, the model training will continue until the maximum epochs is reached.

Values: true | false
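
For example, to continue training until the maximum epochs is reached:

stop_training=false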

overwriteModel

(Optional)

Specifies whether an existing deep learning model package (.dlpk) portal item with the same name will be overwritten.

If the output_name parameter uses the file share data store path, the overwriteModel parameter is not applied.

  • True —The portal .dlpk item will be overwritten.
  • False —The portal .dlpk item will not be overwritten. This is the default.
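
For example, to overwrite an existing .dlpk portal item with the same name:

overwriteModel=True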

context

(Optional)

Environment settings that affect task operation. This parameter has the following settings:

  • extent —A bounding box that defines the analysis area.
  • cellSize —The output raster will have the resolution specified by cell size.
  • processorType —The specified processor (CPU or GPU) will be used for the analysis.
  • parallelProcessingFactor —The number of logical processes across which a tool will operate.

Example

context={"cellSize": "20","processorType": "GPU"}

freeze_Model

(Optional)

Specifies whether the backbone layers in the pretrained model will be frozen so that the weights and biases in the backbone layers remain unaltered. If true, the predefined weights and biases will not be altered in the backbone_model value. This is the default. If false, the weights and biases of the backbone_model value may be altered to better fit the training samples. This may take more time to process but typically produces better results.

Values: true | false
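
For example, to allow the backbone weights and biases to be adjusted to better fit the training samples:

freeze_Model=false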

f

The response format. The default response format is html .

Values: html | json | pjson
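
For example, to request a JSON response:

f=json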

Backbone model values

The accepted preconfigured neural network values that can be submitted with the backbone_model parameter are described below.


DARKNET53

The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images and is 53 layers deep.

DENSENET121

The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 121 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.

DENSENET161

The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 161 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.

DENSENET169

The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 169 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.

DENSENET201

The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 201 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.

MOBILENET_V2

The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that is 54 layers deep. It is geared toward edge device computing, since it uses less memory.

RESNET18

The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 18 layers deep.

RESNET34

The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 34 layers deep. This is the default.

RESNET50

The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 50 layers deep.

RESNET101

The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 101 layers deep.

RESNET152

The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 152 layers deep.

VGG11

The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 11 layers deep.

VGG11_BN

The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and is 11 layers deep.

VGG13

The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 13 layers deep.

VGG13_BN

The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and is 13 layers deep.

VGG16

The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 16 layers deep.

VGG16_BN

The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and is 16 layers deep.

VGG19

The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 19 layers deep.

VGG19_BN

The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and is 19 layers deep.

Example usage

The following is a sample request URL for TrainDeepLearningModel:

https://services.myserver.com/arcgis/rest/services/System/RasterAnalysisTools/GPServer/TrainDeepLearningModel
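
As an illustration, a job is typically submitted by appending submitJob and the request parameters to this URL. The parameter values below are placeholders:

https://services.myserver.com/arcgis/rest/services/System/RasterAnalysisTools/GPServer/TrainDeepLearningModel/submitJob?in_folder=/fileShares/yourFileShareFolderName/trainingSampleData&output_name={"name":"trainedModel"}&model_type=UNET&f=json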

Response

When you submit a request, the task assigns a unique job ID for the transaction.

Syntax:

{ "jobId": "<unique job identifier>", "jobStatus": "<job status>" }

After the initial request is submitted, you can use the jobId to periodically check the status of the job and messages, as described in Check job status. Once the job has successfully completed, use the jobId to retrieve the results. To track the status, you can make a request of the following form:

https://<rasterAnalysisTools-url>/TrainDeepLearningModel/jobs/<jobId>
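
A status response for a completed job, following the syntax above, would resemble the following (the job ID shown is a placeholder):

{ "jobId": "j2e8caad2f4d14b8cb0d9e4bc4d46b4b5", "jobStatus": "esriJobSucceeded" }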

When the status of the job request is esriJobSucceeded, you can access the results of the analysis by making a request of the following form:

https://<rasterAnalysisTools-url>/TrainDeepLearningModel/jobs/<jobId>/results/out_item

JSON Response example

The response returns the .dlpk portal item, which has title, type, filename, file, id, and folderId properties.

{
  "title": "dlpk_name",
  "type": "Deep Learning Package",
  "multipart": true,
  "tags": "imagery",
  "typeKeywords": "Deep Learning, Raster",
  "filename": "dlpk_name",
  "file": "\\\\servername\\rasterstore\\mytrainedmodel.dlpk",
  "id": "f121390b85ef419790479fc75b493efd",
  "folderId": "dfwerfbd3ec25584d0d8f4"
}

However, if a data store path is specified as the value for output_name, the output will be the data store location.

{
  "paramName": "out_item",
  "dataType": "GPString",
  "value": {"uri": "/fileShares/yourFileShareFolderName/trainedModel/trainedModel.dlpk"}"value": {"uri": "/fileShares/yourFileShareFolderName/trainedModel/trainedModel.dlpk"}
}
