ST_GeohashBins takes a geometry column and a numeric precision and returns an array column. The result array contains Geohash bins that cover the spatial extent of each record in the input column. The specified precision determines the size of each bin. You can optionally specify a numeric value for padding, which conceptually applies a buffer of the specified distance to the input geometry before creating the Geohash bins.
Function | Syntax |
---|---|
Python | geohash_bins(geometry, precision, padding=None) |
SQL | ST_GeohashBins(geometry, precision, padding) |
Scala | geohashBins(geometry, precision, padding) |
For more details, go to the GeoAnalytics for Microsoft Fabric API reference for geohash_bins.
Python and SQL Examples
from geoanalytics_fabric.sql import functions as ST
from pyspark.sql import functions as F

# Sample geometries in WKT
data = [
    ('POINT (1 1)',),
    ('LINESTRING (1 0.2, 1 0.4, 1 0.6, 1 0.8)',),
    ('POLYGON ((0.5 0.5, 0.5 0.8, 0.8 0.8, 0.8 0.5, 0.5 0.5))',)
]

# Create a geometry column from the WKT strings
df = spark.createDataFrame(data, ["wkt"]).withColumn("geometry", ST.geom_from_text("wkt", srid=4326))

# Generate Geohash bins at precision 5 with a padding of 20
bins = df.select(ST.geohash_bins("geometry", 5, 20).alias("bins"))

# Convert each bin ID into its polygon geometry
bin_geometries = bins.select(F.explode("bins").alias("bin")).select(ST.bin_geometry("bin"))

# Plot the input geometries (red) and the covering bins (blue)
ax = df.st.plot("geometry", facecolor="none", edgecolor="red")
bin_geometries.st.plot(ax=ax, facecolor="none", edgecolor="blue")
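The same operation can be expressed directly in Spark SQL, assuming the GeoAnalytics SQL functions are registered in your session; the table name `geometries` below is a hypothetical placeholder for a table with a geometry column:

```sql
-- Precision 5 with a padding of 20; 'geometries' is a placeholder table
-- containing a geometry column named 'geometry'.
SELECT ST_GeohashBins(geometry, 5, 20) AS bins
FROM geometries
```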
Scala Example
import com.esri.geoanalytics.sql.{functions => ST}
import org.apache.spark.sql.{functions => F}

// Sample geometries in WKT
case class GeometryRow(wkt: String)

val data = Seq(GeometryRow("POINT (1 1)"),
               GeometryRow("LINESTRING (1 0.2, 1 0.4, 1 0.6, 1 0.8)"),
               GeometryRow("POLYGON ((0.5 0.5, 0.5 0.8, 0.8 0.8, 0.8 0.5, 0.5 0.5))"))

// Create a geometry column from the WKT strings, then generate
// Geohash bins at precision 5 with a padding of 20
val df = spark.createDataFrame(data)
  .withColumn("geometry", ST.geomFromText($"wkt", F.lit(4326)))
  .withColumn("geohash_bins", ST.geohashBins($"geometry", F.lit(5), F.lit(20)))

df.select("geohash_bins").show()
+--------------------+
| geohash_bins|
+--------------------+
|[bin#70746701332837]|
|[bin#707467013325...|
|[bin#705577227716...|
+--------------------+
Version table
Release | Notes |
---|---|
1.0.0-beta | Python, SQL, and Scala functions introduced |