
View check_apiserver.sh
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

APISERVER_VIP=xxx.xxx.xx.xx
APISERVER_DEST_PORT=6443
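
The gist preview stops at the variable definitions; a plausible completion, following the standard kubeadm keepalived health-check pattern (the curl probes below are an assumption, not the gist's verbatim code):

# Assumed completion: probe the local API server, and the VIP if this node currently holds it.
curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null \
    || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null \
        || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi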
View keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
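
The preview omits the vrrp_instance that consumes this check; a typical companion block (interface, router id, priority, and VIP are illustrative assumptions):

vrrp_instance VI_1 {
    state MASTER
    interface eth0            # assumption: the NIC that should carry the VIP
    virtual_router_id 51
    priority 101              # set higher on the intended MASTER
    virtual_ipaddress {
        xxx.xxx.xx.xx         # the APISERVER_VIP from check_apiserver.sh
    }
    track_script {
        check_apiserver       # a failing check lowers priority by 2 (weight -2 above)
    }
}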
View personal.yaml
distributed:
  version: 2
  scheduler:
    bandwidth: 1000000000    # 1 GB/s estimated worker-worker bandwidth
  worker:
    memory:
      target: 0.90     # target fraction of memory to stay below
      spill: False     # spilling to disk disabled
      pause: 0.80      # fraction at which we pause worker threads
      terminate: 0.95  # fraction at which we terminate the worker
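
To confirm the file is picked up, the merged values can be inspected with dask.config (a minimal check, assuming the file sits on Dask's config search path, e.g. ~/.config/dask/):

import dask

dask.config.refresh()  # re-read config files from the search path
print(dask.config.get("distributed.worker.memory.target"))  # expected: 0.9
print(dask.config.get("distributed.scheduler.bandwidth"))   # expected: 1000000000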
View update_dask_k8s.py
#!/usr/bin/env python3
"""
Update Dask configuration based on the configuration
of the running Pod.
To be run at startup.
"""
import os
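
The preview ends at the first import; a minimal sketch of how such a startup hook might continue (the config path, environment variable, and cgroup location are assumptions, not the gist's actual code):

import yaml

# Assumption: the pod's Dask config file lives here unless overridden.
CONFIG_PATH = os.environ.get("DASK_CONFIG", "/etc/dask/personal.yaml")

def pod_memory_limit_bytes():
    # Kubernetes exposes the container memory limit through cgroups (v1 path shown).
    with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
        return int(f.read().strip())

def main():
    with open(CONFIG_PATH) as f:
        config = yaml.safe_load(f) or {}
    # Record the detected limit so workers size themselves to the pod.
    worker = config.setdefault("distributed", {}).setdefault("worker", {})
    worker["memory-limit"] = pod_memory_limit_bytes()
    with open(CONFIG_PATH, "w") as f:
        yaml.safe_dump(config, f)

if __name__ == "__main__":
    main()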
View image-manipulation-executed.ipynb
eddienko / Makefile
Created May 2, 2018 — forked from maartenbreddels/Makefile
Makefile for converting GaiaDR2 csv files to a single hdf5 file
View Makefile
# Makefile for converting the CSV files from http://cdn.gea.esac.esa.int/Gaia/gdr2/gaia_source/csv/
# to a single (vaex) hdf5 file
# * https://docs.vaex.io
# * https://github.com/maartenbreddels/vaex/
# It is multistage to work around opening 60 000 files at once.
# Strategy is
# * stage1: decompress each csv.gz to csv, then convert the csv to hdf5
# * do this via xargs and calling make again, since gmake has trouble matching 60 000 rules
# * stage2: Create part-<NUMBER>.txt files containing max FILES_PER_PART per file
# * stage3: convert the list of hdf5 files to single hdf5 files (part-<NUMBER>.hdf5)
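
Sketched in Python rather than make (filenames are placeholders; the real Makefile drives vaex from the shell), the per-stage work is roughly:

import vaex

# Stage 1 (per file): one decompressed CSV chunk -> one HDF5 file.
df = vaex.from_csv("GaiaSource_000-000-000.csv")
df.export_hdf5("GaiaSource_000-000-000.hdf5")

# Stage 3 (per part list): vaex.open() accepts a glob and concatenates the
# matches, so a batch of HDF5 files exports to a single part-<NUMBER>.hdf5.
batch = vaex.open("GaiaSource_000-*.hdf5")
batch.export_hdf5("part-000.hdf5")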
eddienko / Image pipeline (delayed).ipynb
Created Apr 1, 2018
Dask/Image pipeline (delayed).ipynb
View Image pipeline (delayed).ipynb
View confluent-kafka-tornado.ipynb
View pcap.py
import pandas as pd

def parse(line):
    # Parse one line of tcpdump text output into its fields.
    words = line.split()
    time = words[0]
    protocol = words[1]
    if protocol == 'IP':
        src_ip, src_port = words[2].rsplit('.', 1)
        dst_ip, dst_port = words[4].strip(':').rsplit('.', 1)
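        # Assumed continuation: the gist preview cuts off here. A parse() like
        # this would plausibly return the extracted fields, e.g. for
        # pd.DataFrame(filter(None, map(parse, lines))) downstream.
        return {'time': time, 'protocol': protocol,
                'src_ip': src_ip, 'src_port': src_port,
                'dst_ip': dst_ip, 'dst_port': dst_port}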
eddienko / Spark_Jupyter_OS_X.md
Created Jan 27, 2018 — forked from flopezlasanta/Spark_Jupyter_OS_X.md
Steps to configure Jupyter (IPython Notebook) with Python (3.5.1) and Spark (1.6.0) kernel on Mac OS X (El Capitan)
View Spark_Jupyter_OS_X.md

Install Python3, Scala and Apache Spark via Brew (http://brew.sh/)

brew update
brew install python3
brew install scala
brew install apache-spark

Set environment variables
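
The preview ends here; typical values (the exact Spark path depends on the Brew-installed version, so treat these as assumptions):

export SPARK_HOME=/usr/local/Cellar/apache-spark/1.6.0/libexec
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'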