@1ambda · Created December 25, 2021 07:05
== Physical Plan ==
* Filter (2)
+- InMemoryTableScan (1)
      +- InMemoryRelation (2)
            +- * ColumnarToRow (4)
               +- Scan parquet (3)

(1) InMemoryTableScan
Output [1]: [listing_id#10]
Arguments: [listing_id#10], [isnotnull(listing_id#10), (listing_id#10 >= 20000000)]

(2) InMemoryRelation
Arguments: [listing_id#10, listing_url#11, listing_name#12, listing_summary#13, listing_desc#14], CachedRDDBuilder(org.apache.spark.sql.execution.columnar.DefaultCachedBatchSerializer@df15d03,StorageLevel(disk, memory, deserialized, 1 replicas),*(1) ColumnarToRow
+- FileScan parquet [listing_id#10,listing_url#11,listing_name#12,listing_summary#13,listing_desc#14] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/home/1ambda/airbnb_listings_parquet], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<listing_id:int,listing_url:string,listing_name:string,listing_summary:string,listing_desc:...
,None)
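
For context, a minimal sketch (Scala, assuming Spark 3.x in local mode) of a query that produces a plan of this shape: the parquet listings are read and cached, and a filter on listing_id is then applied on top of the cached data. The parquet path and column names are taken from the plan above; the session setup and the DataFrame names are assumptions made only for illustration.

import org.apache.spark.sql.SparkSession

// Assumption: a throwaway local session just for the example.
val spark = SparkSession.builder()
  .appName("airbnb-listings-cache-plan")
  .master("local[*]")
  .getOrCreate()

// Reading and caching the listings produces the
// InMemoryRelation -> ColumnarToRow -> Scan parquet subtree.
val dfListing = spark.read
  .parquet("/home/1ambda/airbnb_listings_parquet")
  .cache()

// Filtering the cached DataFrame yields Filter over InMemoryTableScan:
// the predicate is evaluated against the cached in-memory batches rather than
// being pushed down to the parquet scan (note the empty PushedFilters above).
val dfFiltered = dfListing
  .select("listing_id")
  .where("listing_id >= 20000000")

// Spark 3.x: print the formatted physical plan shown above.
dfFiltered.explain("formatted")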