
@tgruben
tgruben / day02.rkt
Created December 2, 2022 16:03
aoc day 2
#lang racket
;; Rock Paper Scissors: key is first-move + second-move,
;; value is the outcome score for the first move (win 6, draw 3, loss 0).
(define rock "A")
(define paper "B")
(define scissor "C")
(define rules (make-hash))
(hash-set! rules (string-append rock scissor) 6)
(hash-set! rules (string-append rock paper) 0)
(hash-set! rules (string-append rock rock) 3)
(hash-set! rules (string-append paper rock) 6)
;; remaining combinations follow the same win/draw/loss pattern
(hash-set! rules (string-append paper paper) 3)
(hash-set! rules (string-append paper scissor) 0)
(hash-set! rules (string-append scissor paper) 6)
(hash-set! rules (string-append scissor rock) 0)
(hash-set! rules (string-append scissor scissor) 3)
@tgruben
tgruben / day1.rkt
Last active December 1, 2022 21:30
aoc2022 day1
#lang racket
;; Total the calories in one elf's record (one number per line).
(define (max-calories rec)
  (foldl + 0 (map string->number (string-split rec "\n"))))
;; Split input into per-elf records (blank-line separated) and
;; sort the totals in descending order.
(define (process in)
  (sort (foldl cons '() (map max-calories (string-split in "\n\n"))) >))
;; Part 1 is the largest total; part 2 is the sum of the top three.
(define (report l)
  (printf "part1:~a ~n" (car l))
  (printf "part2:~a ~n" (foldl + 0 (take l 3))))
;; part 1

VDS

Import/Export To Cloud or Local

Cloud must support a filesystem-like interface

Format

Looks like a filesystem on cloud, a tarfile on local

See https://github.com/usmanhalalit/gost for an idea of the implementation

For Pilosa, each node will talk directly to the cloud/local collector (possibly behind NAT)

Initiated via vdsm; each node spawns asynchronously

VDS-only data would just export

Option 1 (most scalable)

package main

import (
	"crypto/tls"
	"flag"
	"log"
	"net"
	"net/textproto"
)
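The "each node spawns asynchronously" step from the notes might look like the following fan-out sketch, where the coordinator (vdsm) launches one export per node and waits for all of them. The node addresses and the `exportNode` call are assumptions for illustration, not the actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// exportNode stands in for one node pushing its share of the VDS to
// the cloud/local collector; here it simply succeeds.
func exportNode(addr string) error {
	// a real implementation would dial the collector (possibly behind NAT)
	return nil
}

// exportAll spawns one goroutine per node, as the notes describe,
// waits for every node to finish, and reports the first error.
func exportAll(nodes []string) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(nodes))
	for _, n := range nodes {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			if err := exportNode(addr); err != nil {
				errs <- err
			}
		}(n)
	}
	wg.Wait()
	close(errs)
	return <-errs // nil when every node succeeded
}

func main() {
	fmt.Println(exportAll([]string{"node0:10101", "node1:10101"})) // prints "<nil>"
}
```

Because every node streams directly to the collector rather than through the coordinator, the export bandwidth scales with the number of nodes, which is presumably why this is labeled the most scalable option.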
Script started on 2019-10-14 13:37:14+0000
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
[ubuntu@experiment-5d816f2592f715a1 ~] 2019-10-14 13:37:14
$exit
exit
Script done on 2019-10-14 13:47:58+0000
# Configure the Google Cloud provider
provider "google" {
  credentials = "${file("credentials.json")}"
  project     = "REPLACEMEWITHPROJECTID"
  region      = "us-central1"
}

# Terraform plugin for creating random ids
resource "random_id" "instance_id" {
@tgruben
tgruben / main.tf
Created August 27, 2019 03:15
upgrade main.tf for gcp
// Configure the Google Cloud provider
provider "google" {
  credentials = "${file("credentials.json")}"
  project     = "pilosa-sandbox"
  region      = "us-central1"
}

// Terraform plugin for creating random ids
resource "random_id" "instance_id" {
  byte_length = 8
import org.apache.log4j.{Level, Logger}
import util.Random
import org.apache.spark.sql.functions._
import scala.reflect.ClassTag
import org.apache.spark.{Partition, TaskContext,SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.Partitioner
import org.apache.spark.sql.types._
import org.apache.spark.SparkContext._