Calculate Field

The Calculate Field task works with a layer to create and populate a new field or edit an existing field. The output is a new feature service that contains the same features as the input, with the newly calculated values.


Calculate Field was introduced in ArcGIS Enterprise 10.6.

Request URL

https://<analysis url>/CalculateField/submitJob

Request parameters




inputLayer

The input features that will have a field added and calculated.

Syntax: As described in Feature input, this parameter can be one of the following:

  • A URL to a feature service layer with an optional filter to select specific features
  • A URL to a big data catalog service layer with an optional filter to select specific features
  • A feature collection

REST web example:

  • {"url" : "", "filter": "Month = 'September'"}

REST scripting example:

  • "inputLayer" : {"url": "", "filter": "Month = 'September'"}



fieldName

A string representing the name of the new field. If the name already exists in the dataset, a numeric value will be appended to the field name.

REST web example: MyNewField

REST scripting example: "fieldName" : "AccumulatedValues"



dataType

The type for the new field.

Values: Date | Double | Integer | String

REST web example: Double

REST scripting example: "dataType" : "Integer"



expression

An Arcade expression used to calculate the new field values. You can use any of the Date, Logical, Mathematical, or Text functions available with Arcade expressions.

REST web example:

  • $feature["Field1"] + abs($feature["Field2"])

REST scripting example:

  • "expression" : "$feature[\"Field1\"] + abs($feature[\"Field2\"])"


trackAware

A Boolean value denoting whether the expression is track-aware.

REST web example: true

REST scripting example: "trackAware" : false


trackFields

(Required if trackAware is true)

The fields used to identify distinct tracks. There can be multiple trackFields.



timeBoundarySplit

A time boundary allows you to analyze values within a defined time span. For example, if you use a time boundary of 1 day starting on January 1, 1980, tracks will be analyzed 1 day at a time. The time boundary parameter was introduced in ArcGIS Enterprise 10.7.

The time boundary parameters are only applicable if the analysis is trackAware.

The timeBoundarySplit parameter defines the scale of the time boundary. In the example above, this would be 1. See the portal documentation for this tool to learn more.

REST scripting example: "timeBoundarySplit" : 1

REST web example: 2



timeBoundarySplitUnit

The unit applied to the time boundary. timeBoundarySplitUnit is required if a timeBoundarySplit is provided.

REST scripting example: "timeBoundarySplitUnit" : "Days"

REST web example: Weeks



timeBoundaryReference

A date that specifies the reference time to align the time boundary to, represented in milliseconds from epoch. The default is January 1, 1970, at 12:00 a.m. (epoch time stamp 0). This option is only available if the timeBoundarySplit and timeBoundarySplitUnit are set.

REST scripting example: "timeBoundaryReference" : 946684800000

REST web example: 9466835800000
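Epoch-millisecond values like those above can be computed from a calendar date. A Python sketch that reproduces the scripting example's value, which corresponds to January 1, 2000, at 12:00 a.m. UTC:

```python
from datetime import datetime, timezone

# January 1, 2000, at 12:00 a.m. UTC as the reference date.
reference_date = datetime(2000, 1, 1, tzinfo=timezone.utc)

# Milliseconds from epoch, as expected by timeBoundaryReference.
time_boundary_reference = int(reference_date.timestamp() * 1000)
print(time_boundary_reference)  # 946684800000
```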



outputName

The task will create a feature service of the results. You define the name of the service.

REST web example: myOutput

REST scripting example: "outputName" : "myOutput"


context

The context parameter contains additional settings that affect task execution. For this task, there are four settings:

  • Extent (extent)—A bounding box that defines the analysis area. Only those features that intersect the bounding box will be analyzed.
  • Processing spatial reference (processSR)—The features will be projected into this coordinate system for analysis.
  • Output spatial reference (outSR)—The features will be projected into this coordinate system after the analysis to be saved. The output spatial reference for the spatiotemporal big data store is always WGS84.
  • Data store (dataStore)—Results will be saved to the specified data store. The default is the spatiotemporal big data store.

"extent" : {extent},
"processSR" : {spatial reference},
"outSR" : {spatial reference},
"dataStore":{data store}


f

The response format. The default response format is html.

Values: html | json


Response

When you submit a request, the service assigns a unique job ID for the transaction.

"jobId": "<unique job identifier>",
"jobStatus": "<job status>"

After the initial request is submitted, you can use jobId to periodically check the status of the job and messages as described in Checking job status. Once the job has successfully completed, use jobId to retrieve the results. To track the status, you can make a request of the following form:

https://<analysis url>/CalculateField/jobs/<jobId>
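The status check described above can be sketched as a small polling loop. Here fetch_status is a stand-in (an assumption, not part of the API) for any callable that performs the HTTP GET of the job URL with f=json and returns the parsed response:

```python
import time

# Terminal job states in the ArcGIS REST asynchronous job model.
TERMINAL_STATES = {"esriJobSucceeded", "esriJobFailed", "esriJobCancelled"}

def wait_for_job(fetch_status, interval_seconds=5):
    """Poll until the job reaches a terminal state; return that state.

    fetch_status: callable returning the parsed job-status JSON, e.g. the
    body of GET https://<analysis url>/CalculateField/jobs/<jobId>?f=json
    """
    while True:
        status = fetch_status()["jobStatus"]
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval_seconds)
```

For example, with a stub that reports esriJobExecuting once and then esriJobSucceeded, wait_for_job returns "esriJobSucceeded".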

Accessing results

When the status of the job request is esriJobSucceeded, you can access the results of the analysis by making a request of the following form:

https://<analysis url>/CalculateField/jobs/<jobId>/results/output?token=<your token>&f=json



output will always contain a feature service.

Request example:
"http://<analysis url>/CalculateField/jobs/<jobId>/results/output"}

The result has properties for parameter name, data type, and value. The contents of value depend on the outputName provided in the initial request. The value parameter contains the URL of the output feature service layer.

"value":{"url":"<hosted feature service layer url>"}