git clone https://gist.github.com/dd6f95398c1bdc9f1038.git vault
cd vault
docker-compose up -d
export VAULT_ADDR=http://192.168.99.100:8200
Initializing a vault:
vault init
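vault init prints a set of unseal keys and an initial root token. Before the vault can be used, it must be unsealed with a quorum of those keys (three of five by default) and the client authenticated. A minimal sketch using the same legacy CLI as above; KEY1, KEY2, KEY3 and ROOT_TOKEN are placeholders for values from the init output:
vault unseal KEY1
vault unseal KEY2
vault unseal KEY3
vault auth ROOT_TOKEN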
Git for Windows comes bundled with the "Git Bash" terminal, which is incredibly handy for Unix-like commands on a Windows machine. It is missing a few standard Linux utilities, but it is easy to add ones that have a Windows binary available.

The basic idea is that C:\Program Files\Git\mingw64\ is your / directory according to Git Bash. (Note: depending on how you installed it, the directory might be different. From the Start menu, right-click the Git Bash icon and open the file location. It might be something like C:\Users\name\AppData\Local\Programs\Git; the mingw64 folder in that directory is your root. Find it by using pwd -W.)

If you go to that directory, you will find the typical Linux root folder structure (bin, etc, lib and so on).

If you are missing a utility, such as wget, track down a Windows binary and copy its files to the corresponding directories. Sometimes the Windows binaries have odd prefixes, so rename the .exe file to the standard command name, as in the sketch below.
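A hedged example with wget (assuming you downloaded a Windows wget.exe to your Downloads folder, and that your Git Bash root is C:\Program Files\Git):
cp /c/Users/name/Downloads/wget.exe "/c/Program Files/Git/mingw64/bin/wget.exe"
wget --version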
// Problem: creating a Spark UDF that takes extra parameters at invocation time.
// Solution: use currying.
// http://stackoverflow.com/questions/35546576/how-can-i-pass-extra-parameters-to-udfs-in-sparksql
// We want to create hideTabooValues, a Spark UDF that sets to -1 every field that contains any of the given taboo values.
// E.g. forbiddenValues = [1, 2, 3]
//      dataframe = [1, 2, 3, 4, 5, 6]
//      dataframe.select(hideTabooValues(forbiddenValues)) :> [-1, -1, -1, 4, 5, 6]
//
// Implementing this in Spark, we find two major issues:
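// A minimal sketch of the currying approach from the Stack Overflow answer linked above.
// Assumptions not in the original: a DataFrame named df with an integer column "value".
import org.apache.spark.sql.functions.{col, udf}

// A plain Scala function that, given the taboo values, returns a UDF.
// The taboo list is captured in the closure, so it can be supplied at invocation time.
val hideTabooValues = (taboo: Seq[Int]) =>
  udf((n: Int) => if (taboo.contains(n)) -1 else n)

val forbiddenValues = Seq(1, 2, 3)
df.select(hideTabooValues(forbiddenValues)(col("value")))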
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="description" content="SOMA 01 - Hello World">
  <title>SOMA 01 - Hello World</title>
</head>
<body>
<script src="https://d3js.org/d3.v4.min.js" charset="utf-8"></script>
<script>
// The management token and getSpace call below assume the contentful-management SDK.
const contentful = require('contentful-management')

const createBlogPosts = async (posts, assets, categories, managementToken, spaceId, simpleLog = console.log) => {
  const client = contentful.createClient({
    accessToken: managementToken,
    logHandler: (level, data) => simpleLog(`${level} | ${data}`)
  })
  const space = await client.getSpace(spaceId)
  // Map each WordPress asset link to the URL of its uploaded Contentful asset.
  const linkMap = new Map()
  assets.forEach(asset => linkMap.set(asset.wpAsset.link, asset.fields.file['en-US'].url))
import streamlit as st
import pandas as pd


@st.cache
def load_metadata():
    DATA_URL = "https://streamlit-self-driving.s3-us-west-2.amazonaws.com/labels.csv.gz"
    return pd.read_csv(DATA_URL, nrows=1000)


@st.cache
def create_summary(metadata, summary_type):
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.*;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.data.GenericRecord;
import org.apache.iceberg.data.IcebergGenerics;
import org.apache.iceberg.data.Record;
import org.apache.iceberg.data.parquet.GenericParquetWriter;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.io.CloseableIterable;