Kiollpt /
Last active Jul 14, 2021


  • description: monitor the AWS RDS schema for changes

  • to add a new column to Summary or Detail:

    1. add the column to the Google Sheet (filling in a default value reduces update time)
    2. organize the SQL in the Summary or Detail function (fetch the table schema from the information_schema tables)
    3. increment NUM by 1 in `col = a.columns[:NUM]` inside the check_cell function
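The core of the monitor is diffing two schema snapshots. A minimal sketch of that diff step, assuming each snapshot is a mapping of table name to column set (as fetched from `information_schema.columns`); the snapshot values below are hypothetical:

```python
def diff_columns(previous, current):
    """Return the columns added per table between two schema snapshots.

    Each snapshot maps a table name to the set of its column names,
    e.g. as read from information_schema.columns.
    """
    added = {}
    for table, cols in current.items():
        new = cols - previous.get(table, set())
        if new:
            added[table] = sorted(new)
    return added

# hypothetical snapshots standing in for two information_schema reads
before = {"orders": {"id", "amount"}}
after = {"orders": {"id", "amount", "pickup_zone"}}
print(diff_columns(before, after))  # {'orders': ['pickup_zone']}
```

Tables present only in `previous` (dropped tables) are ignored here; the real monitor would handle those and dropped columns as well.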
bash
# color directories magenta in ls output (di = directory)
LS_COLORS=$LS_COLORS:'di=0;35:'; export LS_COLORS; ls
export LS_OPTIONS='--color=auto'
#eval "$(dircolors -b)"
alias ls='ls $LS_OPTIONS'
# tmux
start
set mouse=a                     # note: this is a Vim option; the tmux equivalent is `set -g mouse on`
watch tail logfile1 logfile2    # re-runs tail periodically; `tail -f logfile1 logfile2` follows continuously
shortcut: tmux a -t <num>       # attach to session <num> (a = attach-session)
Kiollpt / Event-stream based GraphQL
Created Dec 13, 2020 — forked from OlegIlyenko/Event-stream based GraphQL
Event-stream based GraphQL subscriptions for real-time updates
Event-stream based GraphQL

In this gist I would like to describe an idea for GraphQL subscriptions. It was inspired by conversations about subscriptions in the GraphQL slack channel and different GH issues, like #89 and #411.

Conceptual Model

At the moment GraphQL defines two operation types:

  • query
  • mutation

The reference implementation also adds a third type: subscription. It does not have any semantics yet, so here I would like to propose one possible semantic interpretation and the reasoning behind it.
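The event-stream model can be sketched independently of GraphQL: a subscription registers interest in an event type, and every matching event published afterwards is pushed to it. A minimal illustration (the event names and payloads are made up for this example, not part of the proposal):

```python
from collections import defaultdict

class EventBus:
    """Toy event-stream model: subscriptions register a callback for an
    event type and receive every matching event as it is published."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, payload):
        # push the event to every subscription interested in this type
        for callback in self._subscribers[event_type]:
            callback(payload)

bus = EventBus()
received = []
bus.subscribe("commentAdded", received.append)
bus.publish("commentAdded", {"id": 1, "text": "hi"})
bus.publish("postLiked", {"id": 2})  # no subscriber for this type, dropped
print(received)  # [{'id': 1, 'text': 'hi'}]
```

In the GraphQL setting, the callback would re-execute the subscription's selection set against each event payload before delivering the result to the client.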

Stateful.scala
val output = ds.groupByKey(_.`pickup zone`)
  .mapGroupsWithState[State, Result](GroupStateTimeout.NoTimeout()) {
    case (key: String, values: Iterator[order], state: GroupState[State]) => {
      val data = values.toSeq
      val size = data.size
      val updateState = if (state.exists) {
Kiollpt /
Last active Oct 26, 2020


  • * 27 Remove Element (video walkthrough)
  • * 26 Remove Duplicates from Sorted Array (video walkthrough) L1: match condition | L2
  • ? 80 Remove Duplicates from Sorted Array II (video walkthrough)
  • 277 Find the Celebrity (video walkthrough)
  • * 189 Rotate Array (video walkthrough)
  • * 41 First Missing Positive (video walkthrough)
  • *? 299 Bulls and Cows (video walkthrough)
  • 134 Gas Station (video walkthrough)
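Most of the problems above are in-place array manipulations. A minimal two-pointer sketch for the first one, 27 Remove Element (my own illustration, not taken from the linked walkthrough):

```python
def remove_element(nums, val):
    """LeetCode 27: remove all occurrences of val in-place and return
    the new logical length. A write pointer overwrites kept elements."""
    write = 0
    for x in nums:
        if x != val:
            nums[write] = x
            write += 1
    return write

nums = [3, 2, 2, 3]
k = remove_element(nums, 3)
print(k, nums[:k])  # 2 [2, 2]
```

The same read/write two-pointer pattern, with a different "keep" condition, solves 26 and 80 as well.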
csv to kafka.scala
import org.apache.spark.sql.{Encoders, SparkSession}
import org.apache.spark.sql.functions.{lit, struct, to_json}

def main(args: Array[String]): Unit = {
  val ss = SparkSession.builder().master("local[*]").appName("app1").getOrCreate()
  import ss.implicits._
  val filePath = "src/resource"
  val schema = Encoders.product[order].schema
  val ds = ss.readStream.option("header", "true").schema(schema).csv(filePath)
  val ds1 = ds.select(lit("1") as "key", to_json(struct("*")) as "value")