@echeipesh · Last active March 18, 2022
GeoParquet

GeoMesa Parquet Support

GeoMesa implements Parquet support as part of its FileSystem DataStore. The interesting part is that, in addition to the expected WKB encoding, it adds an X/Y geometry encoding that uses Parquet repetition and definition levels.

Code links: Parquet Writer Test and WriteSupport
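To make the repetition-level mechanics concrete, here is a minimal, hypothetical sketch (plain Python, not GeoMesa code) of Dremel-style record shredding for a `list<list<double>>` coordinate column like the `geom.x` shown below. Definition levels are omitted for brevity, since every value in this example is non-null.

```python
def shred(records):
    """Flatten list<list<double>> records into (value, repetition_level)
    pairs, mimicking Parquet's Dremel-style record shredding.

    Repetition level: 0 starts a new record, 1 starts a new inner list
    within the current record, 2 continues the current inner list.
    """
    out = []
    for record in records:
        first_in_record = True
        for inner in record:
            first_in_inner = True
            for v in inner:
                if first_in_record:
                    r = 0
                elif first_in_inner:
                    r = 1
                else:
                    r = 2
                out.append((v, r))
                first_in_record = False
                first_in_inner = False
    return out
```

With this scheme the column stores only flat doubles plus small level integers, which is what lets the writer treat coordinates as an ordinary numeric column.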

Writing GADM36 level 2 data produces the following schema:

❯ pqrs schema geomesa.parquet
Metadata for file: geomesa.parquet

version: 1
num of rows: 45962
created by: parquet-mr version 1.9.0 (build 38262e2c80015d0935dad20f8e18f2d6f9fbd03c)
metadata:
  geomesa.fs.sft.name: level2
  geomesa.parquet.version: 1
  writer.model.name: SimpleFeatureWriteSupport
  geomesa.fs.sft.spec: *geom:MultiPolygon:org.geotools.jdbc.nativeTypeName=MULTIPOLYGON:org.geotools.jdbc.nativeType=12:hasGeopkgSpatialIndex=true:nativeSRID=4326:COORDINATE_DIMENSION=2,GID_0:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,NAME_0:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,GID_1:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,NAME_1:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,NL_NAME_1:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,GID_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,NAME_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,VARNAME_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,NL_NAME_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,TYPE_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,ENGTYPE_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,CC_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT,HASC_2:String:org.geotools.jdbc.nativeType=12:org.geotools.jdbc.nativeTypeName=TEXT
message level2 {
  OPTIONAL group geom {
    REQUIRED group x (LIST) {
      REPEATED group list {
        REQUIRED group element (LIST) {
          REPEATED group list {
            REPEATED DOUBLE element;
          }
        }
      }
    }
    REQUIRED group y (LIST) {
      REPEATED group list {
        REQUIRED group element (LIST) {
          REPEATED group list {
            REPEATED DOUBLE element;
          }
        }
      }
    }
  }
  OPTIONAL BYTE_ARRAY GID_5f0 (UTF8);
  OPTIONAL BYTE_ARRAY NAME_5f0 (UTF8);
  OPTIONAL BYTE_ARRAY GID_5f1 (UTF8);
  OPTIONAL BYTE_ARRAY NAME_5f1 (UTF8);
  OPTIONAL BYTE_ARRAY NL_5fNAME_5f1 (UTF8);
  OPTIONAL BYTE_ARRAY GID_5f2 (UTF8);
  OPTIONAL BYTE_ARRAY NAME_5f2 (UTF8);
  OPTIONAL BYTE_ARRAY VARNAME_5f2 (UTF8);
  OPTIONAL BYTE_ARRAY NL_5fNAME_5f2 (UTF8);
  OPTIONAL BYTE_ARRAY TYPE_5f2 (UTF8);
  OPTIONAL BYTE_ARRAY ENGTYPE_5f2 (UTF8);
  OPTIONAL BYTE_ARRAY CC_5f2 (UTF8);
  OPTIONAL BYTE_ARRAY HASC_5f2 (UTF8);
  REQUIRED BYTE_ARRAY __fid__ (UTF8);
}

By contrast, this is the schema produced by the current Spark method, which encodes the MBR as 4 columns and the geometry as WKB:

❯ pqrs schema part-00002-eab76752-130b-41b0-bd51-fb66490ef540-c000.snappy.parquet
Metadata for file: part-00002-eab76752-130b-41b0-bd51-fb66490ef540-c000.snappy.parquet

version: 1
num of rows: 741
created by: parquet-mr version 1.13.0-SNAPSHOT (build 300200eb72b9f16df36d9a68cf762683234aeb08)
metadata:
  org.apache.spark.version: 3.2.0
  org.apache.spark.sql.parquet.row.metadata: {"type":"struct","fields":[{"name":"geom","type":"binary","nullable":true,"metadata":{}},{"name":"fid","type":"string","nullable":true,"metadata":{}},{"name":"GID_0","type":"string","nullable":true,"metadata":{}},{"name":"NAME_0","type":"string","nullable":true,"metadata":{}},{"name":"GID_1","type":"string","nullable":true,"metadata":{}},{"name":"NAME_1","type":"string","nullable":true,"metadata":{}},{"name":"GID_2","type":"string","nullable":true,"metadata":{}},{"name":"NAME_2","type":"string","nullable":true,"metadata":{}},{"name":"bbox","type":{"type":"struct","fields":[{"name":"xmin","type":"double","nullable":false,"metadata":{}},{"name":"xmax","type":"double","nullable":false,"metadata":{}},{"name":"ymin","type":"double","nullable":false,"metadata":{}},{"name":"ymax","type":"double","nullable":false,"metadata":{}}]},"nullable":true,"metadata":{}}]}
message spark_schema {
  OPTIONAL BYTE_ARRAY geom;
  OPTIONAL BYTE_ARRAY fid (STRING);
  OPTIONAL BYTE_ARRAY GID_0 (STRING);
  OPTIONAL BYTE_ARRAY NAME_0 (STRING);
  OPTIONAL BYTE_ARRAY GID_1 (STRING);
  OPTIONAL BYTE_ARRAY NAME_1 (STRING);
  OPTIONAL BYTE_ARRAY GID_2 (STRING);
  OPTIONAL BYTE_ARRAY NAME_2 (STRING);
  OPTIONAL group bbox {
    REQUIRED DOUBLE xmin;
    REQUIRED DOUBLE xmax;
    REQUIRED DOUBLE ymin;
    REQUIRED DOUBLE ymax;
  }
}

Why is this interesting?

Automatic Row-Group Stats

Because the produced schema uses separate x and y columns, the Parquet writer gathers row-group min/max statistics automatically. Note how the stats columns match the schema and implicitly define the bounding box. This produces the same metadata required to define an MBR for each page and file, allowing efficient filtering with Parquet predicate push-down.

scala> ParquetReaderUtils.getParquetDataIndex("/tmp/geomesa.parquet")
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Metadata:
Column: [geom, x, list, element, list, element]
	Stats: min: -171.090271, max: 159.10922241210938, num_nulls: 0
Column: [geom, y, list, element, list, element]
	Stats: min: -55.11694335937494, max: 42.6607132, num_nulls: 0
Column: [GID_5f0]
	Stats: num_nulls: 0, min/max not defined
Column: [NAME_5f0]
	Stats: num_nulls: 0, min/max not defined
Column: [GID_5f1]
	Stats: num_nulls: 0, min/max not defined
Column: [NAME_5f1]
	Stats: num_nulls: 0, min/max not defined
Column: [NL_5fNAME_5f1]
	Stats: num_nulls: 1394, min/max not defined
Column: [GID_5f2]
	Stats: num_nulls: 0, min/max not defined
Column: [NAME_5f2]
	Stats: num_nulls: 0, min/max not defined
Column: [VARNAME_5f2]
	Stats: num_nulls: 1183, min/max not defined
Column: [NL_5fNAME_5f2]
	Stats: num_nulls: 1544, min/max not defined
Column: [TYPE_5f2]
	Stats: num_nulls: 44, min/max not defined
Column: [ENGTYPE_5f2]
	Stats: num_nulls: 1, min/max not defined
Column: [CC_5f2]
	Stats: num_nulls: 1043, min/max not defined
Column: [HASC_5f2]
	Stats: num_nulls: 284, min/max not defined
Column: [__fid__]
	Stats: num_nulls: 0, min/max not defined
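As a sketch of how these statistics enable push-down: the per-row-group min/max on x and y is itself an MBR, and a reader can skip any row group whose MBR does not intersect the query window. A minimal illustration with hypothetical helper names (not GeoMesa's API):

```python
def stats_mbr(xs, ys):
    """Derive a bounding box (xmin, ymin, xmax, ymax) from the values of
    the x and y columns -- exactly the min/max Parquet records per chunk."""
    return (min(xs), min(ys), max(xs), max(ys))

def row_group_may_match(mbr, query):
    """True when the row group's stats MBR intersects the query MBR,
    i.e. the row group cannot be skipped by predicate push-down."""
    xmin, ymin, xmax, ymax = mbr
    qxmin, qymin, qxmax, qymax = query
    return not (xmax < qxmin or qxmax < xmin or ymax < qymin or qymax < ymin)
```

A row group whose stats MBR lies entirely outside the query window is never decoded, which is the payoff of getting these stats for free from the x/y columns.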

This has several positive implications:

Significantly more efficient POINT encoding.

If the geometry is encoded as WKB, we must store xmin/ymin and xmax/ymax as their own columns.
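The point-encoding savings are easy to quantify: a WKB point carries a byte-order flag and a geometry-type tag on top of its two doubles, and a WKB layout still needs the 4 bbox columns on the side, while the x/y encoding stores just the two doubles and gets stats for free. A quick sketch of the WKB overhead (standard WKB layout, not GeoMesa code):

```python
import struct

def wkb_point(x, y):
    # Little-endian WKB: byte-order flag (1 byte), geometry type 1 = Point
    # (4 bytes), then the x and y doubles (8 bytes each) -- 21 bytes total.
    return struct.pack('<BIdd', 1, 1, x, y)
```

That is 21 bytes per point versus 16 bytes for the two raw doubles, before counting the extra xmin/xmax/ymin/ymax columns.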

Concerns:

Filter efficiency

We gain storage efficiency by using x/y as both the storage columns and the stats columns. However, now that all x/y values are exposed to Parquet, does that mean it has to scan every geometry coordinate before evaluating an MBR filter? What is the effect of this on query times?

Compatibility

Producing a coordinate-list encoding is a non-standard way to encode geometries compared to WKB. Client implementations will have to be a little more complex.
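Concretely, a client that today just hands a WKB blob to a geometry parser would instead have to reassemble nested coordinate lists from flattened values and repetition levels. A hypothetical sketch of that reassembly for a `list<list<double>>` column:

```python
def assemble(pairs):
    """Rebuild list<list<double>> records from (value, repetition_level)
    pairs -- the extra decoding work this encoding asks of clients.

    Repetition level: 0 starts a new record, 1 starts a new inner list
    within the current record, 2 continues the current inner list.
    """
    records = []
    for v, r in pairs:
        if r == 0:
            records.append([[v]])
        elif r == 1:
            records[-1].append([v])
        else:
            records[-1][-1].append(v)
    return records
```

The logic is simple for one column, but a real client must do it for x and y in lockstep, handle definition levels for nulls, and then rebuild geometry objects, none of which is needed with WKB.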

WKB Fallback

As can be seen in the implementation, some cases still fall back to WKB. These would presumably be GeometryCollections.
