TRK_SplitByDistance

TRK_SplitByDistance takes a track column and a distance and returns an array of tracks. The result array contains the input track split into segments, each no longer than the specified distance.

The distance can be defined using ST_CreateDistance or with a tuple containing a number and a unit string (e.g., (10, "kilometers")).
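
For example, both of the following calls split a track into segments of at most 10 kilometers. This is a minimal sketch: it assumes the Python binding for ST_CreateDistance follows the usual snake_case naming (ST.create_distance) with a (value, unit) signature, and that df contains a track column named "track".

from geoanalytics_fabric.sql import functions as ST
from geoanalytics_fabric.tracks import functions as TRK

# Distance given as a (number, unit string) tuple
split_tuple = df.withColumn("segments", TRK.split_by_distance("track", (10, "kilometers")))

# Distance created with ST_CreateDistance (assumed Python name and (value, unit) signature)
split_dist = df.withColumn("segments", TRK.split_by_distance("track", ST.create_distance(10, "kilometers")))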

Splitting tracks that span large distances between vertices may result in segments that are slightly longer or shorter than the specified distance. In these cases, use ST_GeodesicDensify prior to splitting to obtain the most accurate results.
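
The sketch below densifies a track before splitting it. It assumes the Python binding for ST_GeodesicDensify is named ST.geodesic_densify and that it accepts a maximum segment length expressed the same way as the split distance; check the API reference for the exact signature.

# Add vertices so no span between consecutive vertices exceeds 1 mile
# (assumed name and signature for ST_GeodesicDensify), then split into 5-mile segments
densified = df.withColumn("track_dense", ST.geodesic_densify("track", (1, "miles")))
result = densified.withColumn("split_by_distance",
                              TRK.split_by_distance("track_dense", (5, "miles")))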

Tracks are linestrings that represent the change in an entity's location over time. Each vertex in the linestring has a timestamp (stored as the M-value) and the vertices are ordered sequentially.
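
For illustration, the sketch below builds a track from LINESTRING M well-known text; the third value at each vertex is the M-value, here a Unix epoch timestamp. The full examples below follow the same pattern.

from geoanalytics_fabric.sql import functions as ST

# Each vertex is "x y m", where m is the observation's Unix epoch timestamp
wkt = "LINESTRING M (-117.27 34.05 1633455010, -117.22 33.91 1633456062)"
track_df = spark.createDataFrame([(wkt,)], ["wkt"]) \
                .withColumn("track", ST.line_from_text("wkt", srid=4326))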

For more information on using tracks in GeoAnalytics for Microsoft Fabric, see the core concept topic on tracks.

Function    Syntax
Python      split_by_distance(track, distance)
SQL         TRK_SplitByDistance(track, distance)
Scala       splitByDistance(track, distance)

For more details, go to the GeoAnalytics for Microsoft Fabric API reference for split_by_distance.

Python and SQL Examples

Python
from geoanalytics_fabric.sql import functions as ST
from geoanalytics_fabric.tracks import functions as TRK
from pyspark.sql import functions as F

data = [
    ("LINESTRING M (-117.27 34.05 1633455010, -117.22 33.91 1633456062, -116.96 33.64 1633457132)",),
    ("LINESTRING M (-116.89 33.96 1633575895, -116.71 34.01 1633576982, -116.66 34.08 1633577061)",),
    ("LINESTRING M (-116.24 33.88 1633575234, -116.33 34.02 1633576336)",)
]

df = spark.createDataFrame(data, ["wkt"]).withColumn("track", ST.line_from_text("wkt", srid=4326))

result = df.withColumn("split_by_distance", TRK.split_by_distance("track", (5, "miles")))

result.select(F.explode("split_by_distance"), F.monotonically_increasing_id().alias("id")) \
      .st.plot(is_categorical=True, cmap_values="id", cmap="prism", linewidths=10, figsize=(15, 8))
Plotted result for TRK_SplitByDistance.

Scala Example

import com.esri.geoanalytics.sql.{functions => ST}
import com.esri.geoanalytics.sql.{trackFunctions => TRK}
import org.apache.spark.sql.{functions => F}

case class lineRow(lineWkt: String)
val data = Seq(lineRow("LINESTRING M (-117.27 34.05 1633455010, -117.22 33.91 1633456062, -116.96 33.64 1633457132)"),
               lineRow("LINESTRING M (-116.89 33.96 1633575895, -116.71 34.01 1633576982, -116.66 34.08 1633577061)"),
               lineRow("LINESTRING M (-116.24 33.88 1633575234, -116.33 34.02 1633576336)"))

val df = spark.createDataFrame(data)
              .withColumn("track", ST.lineFromText($"lineWkt", F.lit(4326)))
              .withColumn("split_by_distance", TRK.splitByDistance($"track",  F.lit(struct(F.lit(5), F.lit("miles")))))
              .withColumn("result_tracks", F.explode($"split_by_distance"))

df.select("result_tracks").show(5, truncate = false)
Result
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result_tracks                                                                                                                                                                    |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|{"hasM":true,"paths":[[[-117.27,34.05,1.63345501e9],[-117.24516726585477,33.980468344393365,1.6334555324807265e9]]]}                                                             |
|{"hasM":true,"paths":[[[-117.24516726585477,33.980468344393365,1.6334555324807265e9],[-117.22033453170955,33.91093668878674,1.6334560549614527e9]]]}                             |
|{"hasM":true,"paths":[[[-117.22033453170955,33.91093668878674,1.6334560549614527e9],[-117.22,33.91,1.633456062e9],[-117.16629305642225,33.85422740474618,1.6334562830247293e9]]]}|
|{"hasM":true,"paths":[[[-117.16629305642225,33.85422740474618,1.6334562830247293e9],[-117.1118527253359,33.79769321477189,1.6334565070676303e9]]]}                               |
|{"hasM":true,"paths":[[[-117.1118527253359,33.79769321477189,1.6334565070676303e9],[-117.05741239424954,33.7411590247976,1.6334567311105313e9]]]}                                |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
only showing top 5 rows

Version table

Release       Notes
1.0.0-beta    Python, SQL, and Scala functions introduced
