Jia Yu jiayuasu

jiayuasu / example.scala
val inputFile = "/Users/jiayu/Downloads/Apache_Sedona_Wherobots/gemeinde_de.geojson"
val featuresFile = "/Users/jiayu/Downloads/Apache_Sedona_Wherobots/gemeinde_de_features.geojson"
val geoJson =
// Sedona requires that the GeoJSON schema be Feature, not FeatureCollection,
// because a FeatureCollection GeoJSON does NOT hold one geometry per record.
// See
// So we need to separate the metadata from the features
val metaData = Seq("crs", "source", "type")
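A hedged sketch of the separation step described in the comments above, assuming an existing SparkSession named `spark` and the usual GeoJSON FeatureCollection layout (a top-level `features` array next to metadata fields; none of these column names are confirmed by the gist beyond the comment above):

```scala
import org.apache.spark.sql.functions.explode

// Read the whole FeatureCollection as a single multi-line JSON record.
val collection = spark.read.option("multiLine", true).json(inputFile)

// Keep the top-level metadata columns separately (assumed names).
val meta = collection.select("crs", "type")

// Flatten the features array so each output record holds exactly one
// Feature (and hence one geometry), which is what Sedona expects.
val features = collection.select(explode(collection("features")).alias("feature"))
features.select("feature.*").write.json(featuresFile)
```

This writes one Feature per line to `featuresFile`, which Sedona can then read record by record.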
jiayuasu / gpg-public-key.gpg
Last active January 7, 2021 09:06
jiayuasu / Publish Sedona
- Publish snapshots
- Publish Sedona for Spark 3.0 and Scala 2.12
python3 spark3
mvn clean -Darguments="-DskipTests" release:prepare -DdryRun=true -DautoVersionSubmodules=true -Dresume=false
jiayuasu / README.markdown
Created July 26, 2018 20:22 — forked from alloy/README.markdown
Learn the LLVM C++ API by example.

The easiest way to start learning the LLVM C++ API is to have LLVM generate the API usage for a given code sample. In this example it emits the C++ code required to rebuild the test.c sample through the LLVM API:

$ clang -c -emit-llvm test.c -o test.ll
$ llc -march=cpp test.ll -o test.cpp
jiayuasu / jit.cpp
Created July 9, 2018 17:51 — forked from tomas789/jit.cpp
LLVM JIT Example - Example of very simple JIT using LLVM. It compiles function with prototype `int64_t()` returning value `765`. Build: clang++ `llvm-config --cppflags --ldflags --libs core jit X86` jit.cpp
#include <iostream>
#include <cstdint>
#include <string>
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Analysis/Verifier.h"
jiayuasu /
Created November 27, 2017 23:54 — forked from marmbrus/
Example of injecting custom planning strategies into Spark SQL.

First a disclaimer: This is an experimental API that exposes internals that are likely to change in between different Spark releases. As a result, most datasources should be written against the stable public API in org.apache.spark.sql.sources. We expose this mostly to get feedback on what optimizations we should add to the stable API in order to get the best performance out of data sources.

We'll start with a simple artificial data source that just returns ranges of consecutive integers.

/** A data source that returns ranges of consecutive integers in a column named `a`. */
case class SimpleRelation(
    start: Int, 
    end: Int)(
    @transient val sqlContext: SQLContext) 
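To actually inject a planning strategy for a relation like this, the experimental hook looks roughly as follows. This is a sketch against the SQLContext-era API the gist targets; the strategy name `SimpleStrategy` and the commented-out match arm are made up here, and the physical scan node is omitted because it depends on Spark internals:

```scala
import org.apache.spark.sql.Strategy
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.SparkPlan

// Hypothetical strategy that would plan scans over SimpleRelation.
object SimpleStrategy extends Strategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan] = plan match {
    // case LogicalRelation(r: SimpleRelation, _, _) => ... build a scan node ...
    case _ => Nil // fall through to Spark's built-in strategies
  }
}

// Register it through the experimental hook:
// sqlContext.experimental.extraStrategies = SimpleStrategy :: Nil
```

Because `extraStrategies` is consulted before the built-in strategies, returning `Nil` for unmatched plans leaves all other queries unaffected.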
jiayuasu / How to sign GPG on Mac OSX
For some reason I had to set the GPG_TTY variable in my bash when I tried to sign my commit (i.e. the -S flag) because of the passphrase prompt. This is new to me, since I usually get the passphrase prompt within my terminal, so who knows, it might be a new GPG version.
Anyway, fixed it with:
export GPG_TTY=$(tty)
Might be worth adding it under common issues and/or whatnot.
jiayuasu / install mac vim - gvim
Created September 7, 2017 00:43 — forked from hectorperez/install mac vim - gvim
install mac vim / gvim
Step 1. Install homebrew from here:
Step 1.1. Run export PATH=/usr/local/bin:$PATH
Step 2. Run brew update
Step 3. Run brew install vim && brew install macvim
Step 4. Run brew link macvim
- copy & paste between tabs
jiayuasu / Babylon-Scala-Example-0.1.1-later.scala
/*---------------------------- Babylon 0.1.1 (or later) Scala API usage ----------------------------
 * If you are writing a Babylon program in the Spark Scala shell, there is no need to declare the SparkContext yourself.
 * If you are writing a self-contained Babylon Scala program, please declare the SparkContext as follows and
 * stop it at the end of the entire program.
 */
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
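A minimal sketch of the self-contained setup described above, using the two imports from the snippet; the object name, app name, and master URL are placeholders, not part of the original gist:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object BabylonExample {
  def main(args: Array[String]): Unit = {
    // Placeholder app name and local master for illustration only.
    val conf = new SparkConf().setAppName("BabylonExample").setMaster("local[*]")
    val sc = new SparkContext(conf)
    try {
      // ... Babylon visualization code goes here ...
    } finally {
      sc.stop() // stop the context at the end of the entire program
    }
  }
}
```

Wrapping the body in try/finally guarantees the context is stopped even if the job throws.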