GeoJSON data can be stored in a distributed file system such as HDFS, in cloud storage such as S3, in a local directory, or in any other location that is accessible through Spark.
When loading GeoJSON data, a geometry column is automatically created in the result DataFrame and its spatial reference is set. GeoJSON supports point, line, and polygon geometries, as well as multipart collections of those geometries. After loading GeoJSON files into a Spark DataFrame, you can analyze and visualize the data using the SQL functions and tools available in GeoAnalytics Engine, in addition to the functions offered in Spark. Once you save a DataFrame as GeoJSON, you can store the files or access and visualize them through other systems.
The following table shows examples of the Python syntax for loading and saving GeoJSON with GeoAnalytics Engine, where
path is a path to a directory of GeoJSON files or a single GeoJSON file.
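As a minimal sketch of that syntax, the snippet below loads a directory of GeoJSON files through the geojson data source and writes the result back out. It assumes an authorized GeoAnalytics Engine installation and an active SparkSession named spark; the paths are placeholders.

```python
# Hypothetical input and output locations; replace with your own paths.
path = "s3://my-bucket/geojson-data"
out_path = "s3://my-bucket/geojson-output"

# Load GeoJSON into a Spark DataFrame; the geometry column is created
# automatically and its spatial reference is set.
df = spark.read.format("geojson").load(path)

# Save the DataFrame back to GeoJSON files at the output location.
df.write.format("geojson").save(out_path)
```

The same format string works for a single GeoJSON file or a directory of files.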
By default, GeoJSON is saved in World Geodetic System 1984 (SRID:4326) with coordinates in decimal degrees. If the DataFrame geometry is in a different spatial reference, it is automatically transformed to World Geodetic System 1984 when saving. In addition, GeoAnalytics Engine supports the option to save the DataFrame with a custom spatial reference. To learn more about spatial references, see Coordinate systems and transformations. Additionally, the Spark DataFrameReader and DataFrameWriter classes provide other options that can be used when loading and saving GeoJSON files, as shown below.
|Description|
|---|
|Specify the number of records to sample when inferring the schema.|
|Merge the schemas of a collection of GeoJSON datasets in the input directory.|
|When set to |
|Partition the output by the given column name. This example will partition the output GeoJSON files by values in the |
|Overwrite existing data in the specified path. Other available options are |
|Write the GeoJSON in multiline format.|
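The options above are passed through the standard Spark reader and writer interfaces. In the sketch below, partitionBy() and mode() are standard DataFrameWriter methods; the option names mergeSchema and multiLine are assumptions modeled on Spark's built-in data source options, and the paths and column name are hypothetical.

```python
# Load GeoJSON with a reader option (option name is an assumption
# based on Spark's standard data source options).
df = (spark.read.format("geojson")
      .option("mergeSchema", "true")   # assumed: merge schemas across input files
      .load("/data/geojson-input"))    # hypothetical path

# Save GeoJSON with writer options.
(df.write.format("geojson")
   .partitionBy("district")           # hypothetical column; one subdirectory per value
   .mode("overwrite")                 # overwrite existing data at the path
   .option("multiLine", "true")       # assumed: write multiline GeoJSON
   .save("/data/geojson-output"))     # hypothetical path
```

Check the options table in this documentation against your installed version for the exact option names supported by the geojson data source.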
GeoJSON doesn't support the generic geometry data type. A column of a specific geometry type, such as point, line, or polygon, is required when writing to GeoJSON. Writing to GeoJSON requires exactly one geometry field.
When loading GeoJSON, if there is no spatial reference defined it will be assumed to be World Geodetic System 1984 (SRID:4326).
Spark will read GeoJSON files from multiple directories if the directory names start with column=. For example, the following example directory contains GeoJSON data that is partitioned by district. Spark can infer district as a column name in the DataFrame by reading the subdirectory names starting with district=.
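The column=value convention can be illustrated with plain Python: both the partition column name and its values are recoverable from the subdirectory names alone, which is what Spark's partition discovery relies on. The directory layout below is hypothetical.

```python
import os
import tempfile

# Build a hypothetical partitioned layout: one subdirectory per
# district value, named with the column=value convention.
root = tempfile.mkdtemp()
for district in ["North", "South"]:
    part_dir = os.path.join(root, f"district={district}")
    os.makedirs(part_dir)
    with open(os.path.join(part_dir, "part-0000.geojson"), "w") as f:
        f.write('{"type": "FeatureCollection", "features": []}')

# Recover the partition column and its values from the directory
# names, mirroring what Spark's partition discovery infers.
parts = [d.split("=", 1) for d in sorted(os.listdir(root)) if "=" in d]
column = parts[0][0]
values = [v for _, v in parts]
print(column, values)  # → district ['North', 'South']
```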
Writing DataFrames to GeoJSON doesn't require a spatial reference to be set on the geometry column. However, if the data is not in World Geodetic System 1984, it is recommended to set and check the spatial reference of the DataFrame before writing to GeoJSON.
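A minimal sketch of that pre-write check is shown below. It assumes an active SparkSession, a DataFrame df with a column named geometry, and GeoAnalytics Engine's ST_SRID and ST_Transform functions; verify the exact function signatures against your installed version.

```python
from geoanalytics.sql import functions as ST

# Check the spatial reference of the geometry column before writing.
srid = df.select(ST.srid("geometry")).first()[0]

# If the data is not already in World Geodetic System 1984 (SRID:4326),
# transform it explicitly rather than relying on the automatic
# transformation at write time.
if srid != 4326:
    df = df.withColumn("geometry", ST.transform("geometry", 4326))

df.write.format("geojson").save("/data/geojson-output")  # hypothetical path
```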