Export Training Data For Deep Learning

URL:
https://<rasteranalysis-url>/ExportTrainingDataforDeepLearning
Methods:
GET, POST
Version Introduced:
10.7

Description

Export Training Data For Deep Learning service illustration

The ExportTrainingDataforDeepLearning service generates training sample image chips from input imagery, using either labeled vector data or a classified image. The output of this service is the data store path where the image chips, labels, and metadata files are stored.

Request parameters


inputRaster

(Required)

The image that will be classified. This can be specified as the portal item ID, image service URL, cloud raster dataset, shared raster dataset, a feature service with image attachments, or a raster dataset or image collection in the data store. At least one type of input must be provided in the JSON object. If multiple inputs are provided, itemId takes priority.

Syntax: A JSON object that describes the inputRaster.

Example:

//Portal Item ID
inputRaster={"itemId": <portal item id>}

//Image Service URL
inputRaster={"url": <image service url>}

//Service Properties
inputRaster={"serviceProperties":{"name":"testrasteranalysis","serviceUrl":"https://<server name>/server/rest/services/Hosted/testrasteranalysis/ImageServer"},"itemProperties":{"itemId":"8cfbd3ec25584d0d8fed23b8ff7c43b","folderId":"sdfwerfbd3ec25584d0d8f4"}}

outputLocation

(Required)

The output location for training sample data. This can be specified as the output folder name, a file share raster data store path, a file share data store path, or a shared file system path.

Example:

//Output folder name
outputLocation=rooftoptrainingsamples

//File share data store path
outputLocation=/fileShares/temp/exported_data

//File share raster data store path
outputLocation=/rasterStores/myrasterstore/rooftoptrainingsamples

//File share path
outputLocation=\\servername\deeplearning\rooftoptrainingsamples

inputClassData

(Required)

The labeled data, either in a feature service or an image service. Vector inputs should follow a training sample format as generated by the ArcGIS Pro Training Sample Manager; raster inputs should follow a classified raster format as generated by the Classify Raster tool.

Syntax: A JSON object that describes the inputClassData.

Example:

//Portal Item ID
{"itemId": <portal item id>}

//Service URL
{"url": <image or feature service url>}

//Service Properties
{"serviceProperties":{"name":"testrasteranalysis","serviceUrl":"https://<server name>/server/rest/services/Hosted/testrasteranalysis/ImageServer"},"itemProperties":{"itemId":"8cfbd3ec25584d0d8fed23b8ff7c43b", "folderId":"sdfwerfbd3ec25584d0d8f4"}}

chipFormat

Specifies the raster format that will be used for the image chip outputs.

Values: TIFF | PNG | JPEG | MRF (Meta Raster Format)

Example:

chipFormat=TIFF

tileSize

The size of the image chips. This is specified as a name value pair for x and y dimension values.

Syntax: A JSON object that describes the tileSize.

Example:

tileSize={"x":256,"y":256}

strideSize

The distance to move in the x and y directions when creating the next image chip. This is specified as a name value pair for x and y dimension values. When stride is equal to the tile size, there will be no overlap. When stride is equal to half the tile size, there will be 50 percent overlap.

Syntax: A JSON object that describes the strideSize.

Example:

strideSize={"x":128,"y":128}
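As a rough illustration of how these two parameters interact, the following Python sketch estimates the number of chips produced by simple sliding-window tiling. It is not part of the service; the service's exact edge handling may differ, and the image dimensions are illustrative.

# Sketch: estimate the chip grid produced by tileSize and strideSize,
# assuming simple sliding-window tiling with no rotation.
def chip_grid(image_w, image_h, tile=256, stride=128):
    cols = max(0, (image_w - tile) // stride + 1)
    rows = max(0, (image_h - tile) // stride + 1)
    overlap = 1 - stride / tile  # 0.5 means 50 percent overlap
    return rows * cols, overlap

count, overlap = chip_grid(2048, 2048)
print(count, overlap)  # 225 chips with 50 percent overlap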

metadataFormat

Specifies the format of the output metadata labels.

If the input training sample data is a feature class layer, such as a building layer or a standard classification training sample file, use the KITTI_rectangles or PASCAL_VOC_rectangles option. The output metadata is a .txt or .xml file that lists the training sample data within the minimum bounding rectangle. The name of the metadata file matches the input source image name. If the input training sample data is a class map, use the Classified_Tiles option as the output metadata format.

Options:

  • KITTI_rectangles —The metadata follows the same format as the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Object Detection Evaluation dataset. The KITTI dataset is a vision benchmark suite. The label files are plain text files. All values, both numerical and strings, are separated by spaces, and each row corresponds to one object. For more information, see KITTI metadata format. This format is used for object detection.
  • PASCAL_VOC_rectangles —The metadata follows the same format as the Pattern Analysis, Statistical Modeling and Computational Learning, Visual Object Classes (PASCAL VOC) dataset. The PASCAL VOC dataset is a standardized image dataset for object class recognition. The label files are .xml files that contain information about image name, class value, and bounding boxes. For more information, see PASCAL Visual Object Classes. This format is used for object detection. This is the default.
  • Classified_Tiles —This option outputs one classified image chip per input image chip. No other metadata for each image chip is used. Only the statistics output has more information about the classes, such as class names, class values, and output statistics. This format is used for pixel classification.
  • RCNN_Masks —This option outputs image chips that have a mask on the areas where the sample exists. The model generates bounding boxes and segmentation masks for each instance of an object in the image. It is based on a Feature Pyramid Network (FPN) with a ResNet101 backbone. This format is used for object detection.
  • Labeled_Tiles —Each output tile will be labeled with a specific class. This format is used for object classification.
  • MultiLabeled_Tiles —Each output tile will be labeled with one or more classes. For example, a residence may be labeled as containing a pool and also solar panels. This format is used for object classification.
  • Export_Tiles —The output will be image chips with no label. This format is used for image enhancement techniques such as super resolution.
  • CycleGAN —The output will be image chips with no label. This format is used for image translation technique CycleGAN, which is used to train images that do not overlap.
  • Imagenet —Each output tile will be labeled with a specific class. This format is used for object classification; however, it can also be used for object tracking when the Deep Sort model type is used during training.
  • Panoptic_Segmentation —The output will be one classified image chip and one instance image chip per input image chip. The output will also have image chips that mask the areas where the sample exists; these image chips will be stored in a different folder. This format is used for both pixel classification and instance segmentation, each with its own output labels folder.

PASCAL_VOC_rectangles example

<?xml version="1.0"?>
<layout>
  <image>000000000</image>
  <object>1</object>
  <part>
    <class>1</class>
    <bndbox>
      <xmin>31.85</xmin>
      <ymin>101.52</ymin>
      <xmax>256.00</xmax>
      <ymax>256.00</ymax>
    </bndbox>
  </part>
</layout>
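For reference, the bounding boxes in a PASCAL VOC label file can be read with the Python standard library. This is a minimal sketch, assuming the example above is saved as 000000000.xml (a hypothetical file name):

# Sketch: read class values and bounding boxes from a PASCAL VOC label file.
import xml.etree.ElementTree as ET

root = ET.parse("000000000.xml").getroot()  # hypothetical file name
for part in root.iter("part"):
    class_value = part.findtext("class")
    box = part.find("bndbox")
    xmin, ymin, xmax, ymax = (float(box.findtext(tag))
                              for tag in ("xmin", "ymin", "xmax", "ymax"))
    print(class_value, xmin, ymin, xmax, ymax)  # 1 31.85 101.52 256.0 256.0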

classValueField

The field that contains the class values. If no field is specified, the system searches for a value or classvalue field. If the feature does not contain a class field, it is assumed that all records belong to one class.

Example:

classValueField=Classvalue

bufferRadius

The radius for a buffer around each training sample to delineate a training sample area. This allows you to create circular polygon training samples from points.

Example:

bufferRadius=1

inputMaskPolygons

A polygon feature class that delineates the area where image chips will be created. Only image chips that fall completely within the polygons will be created.

Example:

inputMaskPolygons={"itemId": <portal item id>}
inputMaskPolygons={"url": <feature service url>}

rotationAngle

The rotation angle that will be used to generate additional image chips. An image chip will be generated with a rotation angle of 0, which means no rotation. It will then be rotated at the specified angle to create an additional image chip. The same training samples will be captured at multiple angles in multiple image chips for data augmentation. The default rotation angle is 0.

Example:

rotationAngle=60
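Assuming the tool continues rotating by the specified angle until a full revolution is reached (an interpretation of the description above, not a documented guarantee), the number of chips captured per location can be estimated:

# Sketch, assuming rotation repeats in equal steps through 360 degrees.
angle = 60
chips_per_location = 360 // angle if angle else 1
print(chips_per_location)  # 6 chips: 0, 60, 120, 180, 240, and 300 degrees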

referenceSystem

Specifies the type of reference system that will be used to export the image tiles, either MAP_SPACE or PIXEL_SPACE. Choose MAP_SPACE when the input image is in a map-based coordinate system; this is the default. Use PIXEL_SPACE when the input image is in image space with no rotation and no distortion.

Values: MAP_SPACE | PIXEL_SPACE

processAllRasterItems

Specifies how raster items in an image service will be processed. When false, all raster items in the image service will be mosaicked together and processed. This is the default. When true, each raster item in the image service will be processed as a separate image.

Values: true | false

blackenAroundFeature

Specifies whether the pixels around each object or feature in each image tile will be darkened. This parameter applies only when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified. When false , pixels surrounding objects or features will not be darkened. This is the default. When true , pixels surrounding objects or features will be darkened.

Values: true | false

fixChipSize

Specifies whether the exported tiles will be cropped so that they are all the same size. This parameter applies only when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified. When true , exported tiles will be the same size and will center on the feature. This is the default. When false , exported tiles will be cropped so that the bounding geometry surrounds only the feature in the image tile.

Values: true | false

additionalInputRaster(Optional)

An additional input imagery source that will be used for image translation methods. This parameter is valid when the metadataFormat parameter is set to Classified_Tiles, Export_Tiles, or CycleGAN. The value can be specified as a portal item ID, an image service URL, a cloud raster dataset, or a shared raster dataset. At least one type of input must be provided in the JSON object. If multiple inputs are provided, itemId takes priority.

Syntax: A JSON object that describes the additionalInputRaster.

Example:

{"itemId": <portal item id>}
{"url": <image service url>}
{"serviceProperties":{"name":"testrasteranalysis","serviceUrl":"https://<server name>/server/rest/services/Hosted/testrasteranalysis/ImageServer"},"itemProperties":{"itemId":"8cfbd3ec25584d0d8fed23b8ff7c43b","folderId":"sdfwerfbd3ec25584d0d8f4"}}

inputInstanceData(Optional)

The collected training sample data that contains classes for instance segmentation. The input can also be a point feature class without a class value field, or an integer raster without any class information. This parameter is only valid when the metadataFormat parameter is set to Panoptic_Segmentation.

Example:

{"itemId": <portal item id>}
{"url": <image or feature service url>}
{"serviceProperties":{"name":"testrasteranalysis","serviceUrl":"https://<server name>/server/rest/services/Hosted/testrasteranalysis/ImageServer"},"itemProperties":{"itemId":"8cfbd3ec25584d0d8fed23b8ff7c43b", "folderId":"sdfwerfbd3ec25584d0d8f4"}}

minPolygonOverlapRatio(Optional)

The minimum overlap percentage for a feature to be included in the training data. If the percentage overlap is less than the value specified, the feature is excluded from the training chip and is not added to the label file. The percent value is expressed as a decimal; for example, an overlap of 20 percent is specified as 0.2. The default value is 0, which means that all features are included. This parameter can improve performance, because fewer training chips are created, and can improve inferencing, because the model is trained to detect only large patches of objects and to ignore small corners of features. This parameter is only honored when the inputClassData parameter value is a feature service.

Example:

minPolygonOverlapRatio=0.2

context

Contains settings that affect task processing. This parameter has the following settings; an example value follows the list:

  • Cell Size (cellSize)—The output raster will have the resolution specified by cell size.
  • Extent (extent)—A bounding box that defines the analysis area.
  • Parallel Processing Factor (parallelProcessingFactor)—The specified number or percentage of processes will be used for the analysis.
  • exportAllTiles—Specifies whether all training sample image chips will be exported, including those that do not overlap labeled data. If true, all image chips will be exported. This is the default. If false, only the image chips that overlap the labeled data will be exported.
  • startIndex—The start index for the sequence of image chips. Use this to append additional image chips to an existing sequence. The default value is 0.
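
Example (all values are illustrative; the extent uses standard Esri extent JSON):

context={"cellSize": 10, "extent": {"xmin": -13160566, "ymin": 4034903, "xmax": -13132466, "ymax": 4054077, "spatialReference": {"wkid": 3857}}, "parallelProcessingFactor": "50%", "exportAllTiles": true, "startIndex": 0}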

f

The response format. The default response format is html.

Values: html | json | pjson

Additional KITTI metadata format information

The table below describes the 15 values in the KITTI metadata format. Only 5 of the possible 15 values are used in the tool: the class name (in column 1) and the minimum bounding rectangle composed of four image coordinate locations (columns 5–8). The minimum bounding rectangle encompasses the training chip used in the deep learning classifier.

Column | Name | Description
1 | Class value | The class value of the object, as listed in the stats.txt file.
2–4 | Unused | Not used by this tool.
5–8 | Bbox | The two-dimensional bounding box of the object in the image, based on a 0-based image space coordinate index. The bounding box contains the four coordinates for the left, top, right, and bottom pixels.
9–15 | Unused | Not used by this tool.
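
Given this column layout, a label line can be parsed with a minimal Python sketch (the line shown is illustrative, not real tool output):

# Sketch: extract the class value (column 1) and bounding box (columns 5-8)
# from one space-separated KITTI label line.
line = "1 0 0 0 31.85 101.52 256.00 256.00 0 0 0 0 0 0 0"  # illustrative
fields = line.split()
class_value = fields[0]
left, top, right, bottom = map(float, fields[4:8])  # table columns 5-8
print(class_value, left, top, right, bottom)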

Example usage

The following is a sample GET request URL for ExportTrainingDataforDeepLearning:

https://machine.domain.com/webadaptor/rest/services/System/RasterAnalysisTools/GPServer/ExportTrainingDataforDeepLearning?inputRaster={"itemId":"89964029c5354407a4f817187144be42"}&outputLocation=/rasterStores/myrasterstore/rooftoptrainingsamples&inputClassData={"itemId":"66b1f5fa24b14217a1129f8ab688386a"}&chipFormat=TIFF&tileSize={"x":256,"y":256}&strideSize={"x":128,"y":128}&metadataFormat=KITTI_rectangles&classValueField=&bufferRadius=1&inputMaskPolygons=&rotationAngle=0&referenceSystem=MAP_SPACE&processAllRasterItems=false&blackenAroundFeature=false&fixChipSize=true&f=pjson

The following is a sample POST request for ExportTrainingDataforDeepLearning:

POST /webadaptor/rest/services/System/RasterAnalysisTools/GPServer/ExportTrainingDataforDeepLearning HTTP/1.1
HOST: machine.domain.com
Content-Type: application/x-www-form-urlencoded
Content-Length: []

inputRaster={"itemId":"89964029c5354407a4f817187144be42"}&outputLocation=/rasterStores/myrasterstore/rooftoptrainingsamples&inputClassData={"itemId":"66b1f5fa24b14217a1129f8ab688386a"}&chipFormat=TIFF&tileSize={"x":256,"y":256}&strideSize={"x":128,"y":128}&metadataFormat=KITTI_rectangles&classValueField=&bufferRadius=1&inputMaskPolygons=&rotationAngle=0&referenceSystem=MAP_SPACE&processAllRasterItems=false&blackenAroundFeature=false&fixChipSize=true&f=pjson
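
The same request can be scripted. The following is a minimal sketch using the Python requests library, with the endpoint and parameter values taken from the sample above; authentication (token handling) is omitted and will be required in a real deployment:

# Sketch: submit the request with the Python requests library.
# A real request also needs an authentication token.
import json
import requests

url = ("https://machine.domain.com/webadaptor/rest/services/System/"
       "RasterAnalysisTools/GPServer/ExportTrainingDataforDeepLearning")
params = {
    "inputRaster": json.dumps({"itemId": "89964029c5354407a4f817187144be42"}),
    "outputLocation": "/rasterStores/myrasterstore/rooftoptrainingsamples",
    "inputClassData": json.dumps({"itemId": "66b1f5fa24b14217a1129f8ab688386a"}),
    "chipFormat": "TIFF",
    "tileSize": json.dumps({"x": 256, "y": 256}),
    "strideSize": json.dumps({"x": 128, "y": 128}),
    "metadataFormat": "KITTI_rectangles",
    "f": "json",
}
response = requests.post(url, data=params).json()
print(response["jobId"], response["jobStatus"])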

Response

When you submit a request, the task assigns a unique job ID for the transaction.

Syntax:

{
  "jobId": "<unique job identifier>",
  "jobStatus": "<job status>"
}

After the initial request is submitted, you can use jobId to periodically review the status of the job and messages as described in Checking job status. Once the job has successfully completed, use jobId to retrieve the results. To track the status, you can make a request of the following form:

https://<raster analysis tools url>/ExportTrainingDataforDeepLearning/jobs/<jobId>

When the status of the job request is esriJobSucceeded , you can access the results of the analysis by making a request of the following form:

https://<raster analysis tools url>/ExportTrainingDataforDeepLearning/jobs/<jobId>/results/outLocation
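
Combining the two URLs above, job submission can be followed by a simple polling loop. This is a minimal sketch, assuming the jobId returned earlier and JSON responses requested with f=json:

# Sketch: poll the job until it finishes, then fetch the outLocation result.
import time
import requests

jobs_url = ("https://machine.domain.com/webadaptor/rest/services/System/"
            "RasterAnalysisTools/GPServer/ExportTrainingDataforDeepLearning/jobs")
job_id = "<jobId>"  # returned by the initial request

while True:
    status = requests.get(f"{jobs_url}/{job_id}", params={"f": "json"}).json()
    if status["jobStatus"] not in ("esriJobSubmitted", "esriJobExecuting"):
        break
    time.sleep(10)

if status["jobStatus"] == "esriJobSucceeded":
    result = requests.get(f"{jobs_url}/{job_id}/results/outLocation",
                          params={"f": "json"}).json()
    print(result["value"])  # e.g., {"uri": "/rasterStores/..."}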

JSON Response example

The response returns the outLocation parameter, which provides the output location of the training data and has properties for parameter name, data type, and value. The content of the value is always the output data store item's itemId value or URL.

{
  "paramName": "outLocation",
  "dataType": "GPString",
  "value": {
    "uri": "/rasterStores/myrasterstore/rooftops"
  }
}
