Aggregate points

Aggregate Points summarizes points into polygons or bins. The boundaries of the polygons or bins are used to collect the points within each area and calculate statistics on them. The result always contains the count of points within each area.

Usage notes

  • Aggregate Points is designed to collect and summarize points within a set of boundaries. The input DataFrame must contain point geometries to be aggregated.

  • There are two ways to specify the boundaries:

    • Use a polygon DataFrame by specifying setPolygons().
    • Use a square or hexagonal bin of a specified size that is generated when the analysis is run by specifying setBins().
  • Specify bin shape and size using setBins(). The bin size specifies how large the bins are. If you are aggregating into hexagons, the size is the height of each hexagon, and the width of the resulting hexagon is two times the height divided by the square root of three. If you are aggregating into squares, the bin size is the height of the square, which is equal to its width.

  • Analysis with binning requires that your input DataFrame's geometry has a projected coordinate system. If your data is not in a projected coordinate system, the tool will transform your input to a World Cylindrical Equal Area (SRID: 54034) projection. You can transform your data to a projected coordinate system by using ST_Transform.

    Learn more about coordinate systems and transformations

  • The output is always a DataFrame with polygon geometries. Only polygons that contain points will be returned.

    The input point and polygons (left) and the result (right) from Aggregate Points are shown.

  • If time is enabled on the input DataFrame, you can apply time stepping to your analysis. Each time step is analyzed independently of points outside that time step. To use time stepping, your input data must be time enabled and represent an instant in time. When time stepping is applied, rows in the output DataFrame will have a time type of interval, represented by the step_start and step_end fields.

  • The most basic aggregation calculates a count of the number of points in each polygon. Statistics (count, sum, minimum, maximum, range, mean, standard deviation, and variance) can also be calculated on numeric fields. Count and any can be calculated on string fields. Statistics are calculated on each area separately. If you specify a statistic that is not valid for a field (for example, the mean of a string field), it will be skipped.

  • When the count statistic is applied to a field, it returns a count of the nonnull values present in the field. When the any statistic is applied to a string field, it returns a single string present in the field.
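
As a quick check of the bin geometry described in the notes above, the hexagon width relationship can be computed directly in plain Python:

```python
import math

def hex_bin_width(height):
    # Width of a hexagonal bin of the given size (height), using
    # width = 2 * height / sqrt(3), as described in the usage notes.
    return 2 * height / math.sqrt(3)

# For example, an 800 km hexagonal bin is about 923.76 km wide.
print(round(hex_bin_width(800), 2))  # 923.76
```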

Limitations

  • Input DataFrames must contain point geometries. The polygons to aggregate into must be a polygon DataFrame if bins are not used. Linestrings and polygons cannot be used as the input.
  • Only the polygons that contain points are returned.

Results

The tool outputs polygons that include the following fields:

  • count: The count of input points within each polygon.
  • <statistic>_<fieldname>: Each specified statistic creates an attribute field named in the format statistic_fieldname. For example, the maximum and standard deviation of the id field are MAX_id and SD_id, respectively.
  • step_start: When time stepping is specified, output polygons have a time interval. This field represents the start time.
  • step_end: When time stepping is specified, output polygons have a time interval. This field represents the end time.
  • bin_geometry: The geometry of the result bins, if aggregating into bins.

Performance notes

Improve the performance of Aggregate Points by doing one or more of the following:

  • Only analyze the records in your area of interest. You can pick the records of interest by using one of the following SQL functions:

    • ST_Intersection—Clip to an area of interest represented by a polygon. This will modify your input records.
    • ST_EnvIntersects—Select records that intersect an envelope.
    • ST_Intersects—Select records that intersect another dataset or an area of interest represented by a polygon.
  • Larger bins will perform better than smaller bins. If you are unsure about which size to use, start with a larger bin to prototype.
  • Similar to bins, larger time steps will perform better than smaller time steps.
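
The idea behind the envelope test used by ST_EnvIntersects can be sketched in plain Python (an illustration of the concept, not the engine's implementation): records can be discarded cheaply when their bounding rectangles do not overlap, with no exact geometry comparison needed.

```python
def envelopes_intersect(a, b):
    # Each envelope is (xmin, ymin, xmax, ymax). Two envelopes overlap
    # exactly when neither lies fully to one side of the other; this
    # cheap rectangle test is what makes envelope filtering fast.
    axmin, aymin, axmax, aymax = a
    bxmin, bymin, bxmax, bymax = b
    return axmin <= bxmax and bxmin <= axmax and aymin <= bymax and bymin <= aymax

print(envelopes_intersect((0, 0, 10, 10), (5, 5, 20, 20)))    # True
print(envelopes_intersect((0, 0, 10, 10), (11, 11, 20, 20)))  # False
```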

Similar capabilities

Similar tools:

How Aggregate Points works

Equations

Variance is calculated using the following equation:

    variance = Σ(xᵢ − x̄)² / (N − 1)

where xᵢ is a value in the field, x̄ is the mean of the values, and N is the count of nonnull values in the field.

Standard deviation is calculated as the square root of the variance.
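
As a plain-Python sketch of these formulas, the sample variance and standard deviation of the District A population values from the worked example below can be computed directly:

```python
import math

def sample_variance(values):
    # Sample variance: squared deviations from the mean, summed over
    # the nonnull values and divided by N - 1.
    xs = [v for v in values if v is not None]
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

population = [280, 408, 356, 361, 450, 713]  # District A values
variance = sample_variance(population)
print(round(variance, 1))             # 22737.2
print(round(math.sqrt(variance), 4))  # 150.7886
```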

Calculations

  • Points are summarized using only the points that intersect the input boundary.

  • The following figure and table illustrate the statistical calculations of a point DataFrame within district boundaries. The Population field was used to calculate the statistics (Count, Sum, Minimum, Maximum, Range, Mean, Standard Deviation, and Variance) for the input DataFrame. The Type field was used to calculate the statistics (Count and Any) for the input DataFrame.

    An example attribute table provides the values used in this hypothetical statistical calculation.

    Numeric statistic: Result for District A
    Count: count of [280, 408, 356, 361, 450, 713] = 6
    Sum: 280 + 408 + 356 + 361 + 450 + 713 = 2568
    Minimum: minimum of [280, 408, 356, 361, 450, 713] = 280
    Maximum: maximum of [280, 408, 356, 361, 450, 713] = 713
    Average: 2568 / 6 = 428
    Variance: ((280 − 428)² + (408 − 428)² + (356 − 428)² + (361 − 428)² + (450 − 428)² + (713 − 428)²) / (6 − 1) = 22737.2
    Standard Deviation: √22737.2 = 150.7886

    String statistic: Result for District A
    Count: 6
    Any: Secondary School
  • The count statistic (for string and numeric fields) counts the number of nonnull values. For example, the count of [0, 1, 10, 5, null, 6] is 5, and the count of [Primary, Primary, Secondary, null] is 3.
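
The nonnull counting rule can be reproduced in plain Python (a sketch of the behavior, not the engine's code):

```python
def count_stat(values):
    # Count statistic: number of nonnull values in a field, for both
    # numeric and string fields.
    return sum(1 for v in values if v is not None)

print(count_stat([0, 1, 10, 5, None, 6]))                     # 5
print(count_stat(["Primary", "Primary", "Secondary", None]))  # 3
```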

Syntax

For more details, go to the GeoAnalytics Engine API reference for aggregate points.

  • run(dataframe): Runs the Aggregate Points tool using the provided DataFrame. Required.
  • addSummaryField(summary_field, statistic, alias=None): Adds a summary statistic of a field in the input DataFrame to the result DataFrame. Optional.
  • setBins(bin_size, bin_size_unit, bin_type='square'): Sets the size and shape of the bins used to aggregate points. Either setBins() or setPolygons() is required.
  • setPolygons(polygons): Sets the polygons into which points will be aggregated. Either setBins() or setPolygons() is required.
  • setTimeStep(interval_duration, interval_unit, repeat_duration=None, repeat_unit=None, reference_time=None): Sets the time step interval, time step repeat, and reference time. If set, points will be aggregated into each bin for each time step. Optional.

Examples

Run Aggregate Points

Python
# Log in
import geoanalytics
geoanalytics.auth(username="myusername", password="mypassword")

# Imports
from geoanalytics.tools import AggregatePoints
from geoanalytics.sql import functions as ST

# Path to the earthquakes data
data_path = r"https://sampleserver6.arcgisonline.com/arcgis/rest/services/" \
    "Earthquakes_Since1970/FeatureServer/0"

# Create an earthquakes DataFrame and transform the spatial reference
# to World Cylindrical Equal Area (54034)
df = spark.read.format("feature-service").load(data_path) \
                    .withColumn("shape", ST.transform("shape", 54034))

# Use Aggregate Points to summarize the count of world earthquakes into 10 year intervals
result = AggregatePoints() \
            .setBins(bin_size=800, bin_size_unit="Kilometers", bin_type="Hexagon") \
            .setTimeStep(interval_duration=10, interval_unit="years") \
            .addSummaryField(summary_field="magnitude", statistic="Min") \
            .addSummaryField(summary_field="magnitude", statistic="Max") \
            .run(df)

# Show the 5 rows of the result DataFrame with the highest count
result.sort("COUNT", ascending=False).show(5)
Result
+-----+-------------+-------------+--------------------+-------------------+-------------------+
|COUNT|MIN_magnitude|MAX_magnitude|        bin_geometry|         step_start|           step_end|
+-----+-------------+-------------+--------------------+-------------------+-------------------+
| 18.0|          4.7|          7.4|{"rings":[[[78519...|1999-12-31 16:00:00|2009-12-31 16:00:00|
| 15.0|          4.6|          7.5|{"rings":[[[85447...|1989-12-31 16:00:00|1999-12-31 16:00:00|
| 15.0|          4.5|          6.1|{"rings":[[[57735...|1999-12-31 16:00:00|2009-12-31 16:00:00|
| 14.0|          4.0|          7.5|{"rings":[[[30022...|1979-12-31 16:00:00|1989-12-31 16:00:00|
| 13.0|          4.1|          6.6|{"rings":[[[78519...|1989-12-31 16:00:00|1999-12-31 16:00:00|
+-----+-------------+-------------+--------------------+-------------------+-------------------+
only showing top 5 rows

Plot results

Python
# Create a continents DataFrame and transform geometry to World Cylindrical
# Equal Area (54034) for plotting
continents_path = "https://services.arcgis.com/P3ePLMYs2RVChkJx/ArcGIS/rest/" \
    "services/World_Continents/FeatureServer/0"
continents_df = spark.read.format("feature-service").load(continents_path) \
                            .withColumn("shape", ST.transform("shape", 54034))

# Plot the Aggregate Points result DataFrame for the time step interval
# of 12-31-1969 to 12-31-1979 with the world continents
continents_plot = continents_df.st.plot(facecolor="none",
                                        edgecolors="lightblue",
                                        figsize=(16,10))
result_plot = result.where("step_start = '1969-12-31 16:00:00'") \
                        .st.plot(cmap_values="COUNT",
                                 cmap="YlOrRd",
                                 legend=True,
                                 ax=continents_plot,
                                 vmin=0,
                                 vmax=14)
result_plot.set_title("Aggregate count of world earthquakes from 12-31-1969 to 12-31-1979")
result_plot.set_xlabel("X (Meters)")
result_plot.set_ylabel("Y (Meters)");

# Plot the Aggregate Points result DataFrame for the time step interval
# of 12-31-1979 to 12-31-1989 with the world continents
continents_plot = continents_df.st.plot(facecolor="none",
                                        edgecolors="lightblue",
                                        figsize=(16,10))
result_plot = result.where("step_start = '1979-12-31 16:00:00'") \
                        .st.plot(cmap_values="COUNT",
                                 cmap="YlOrRd",
                                 legend=True,
                                 ax=continents_plot,
                                 vmin=0,
                                 vmax=14)
result_plot.set_title("Aggregate count of world earthquakes from 12-31-1979 to 12-31-1989")
result_plot.set_xlabel("X (Meters)")
result_plot.set_ylabel("Y (Meters)");

Plotting example for an Aggregate Points result. Count of world earthquakes is shown.

Version table

  • 1.0.0: Tool introduced
