@fyaconiello
fyaconiello / django-lion-setup.md
Created February 9, 2012 17:22
Django Lion Setup
Step 1 - download and install macOS packages
  • download and install Xcode
  • open Xcode and enable Command Line Tools
  • download and install mysql-server
Step 2 - update your shell profile
nano ~/.bash_profile

add the following
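The snippet is truncated here; a typical addition for this MySQL-backed setup (an assumption, not the original gist's contents) puts the MySQL tools on your PATH:

# assumed example; adjust to your MySQL install location
export PATH=/usr/local/mysql/bin:$PATH
export DYLD_LIBRARY_PATH=/usr/local/mysql/lib:$DYLD_LIBRARY_PATH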

@eclubb
eclubb / sqlite2pg.sh
Created March 30, 2012 17:20
Script to import SQLite3 database into PostgreSQL
#!/bin/sh
# This script will migrate schema and data from a SQLite3 database to PostgreSQL.
# Schema translation based on http://stackoverflow.com/a/4581921/1303625.
# Some column types are not handled (e.g. blobs).
SQLITE_DB_PATH=$1
PG_DB_NAME=$2
PG_USER_NAME=$3
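# Usage sketch (illustrative names; assumes the sqlite3 and psql clients
# are installed and the target PostgreSQL database already exists):
#
#   ./sqlite2pg.sh /path/to/app.sqlite3 my_pg_db my_pg_user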
@bittner
bittner / sqlite2pg.sh
Last active January 23, 2024 08:35 — forked from eclubb/sqlite2pg.sh
#!/bin/bash
# This script will migrate schema and data from a SQLite3 database to PostgreSQL.
# Schema translation based on http://stackoverflow.com/a/4581921/1303625.
# Some column types are not handled (e.g. blobs).
#
# See also:
# - http://stackoverflow.com/questions/4581727/convert-sqlite-sql-dump-file-to-postgresql
# - https://gist.github.com/bittner/7368128
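#
# The script body is not shown in this listing. A minimal sketch of the
# approach the header describes (assumed from the linked Stack Overflow
# answer, not the fork's exact code): dump the SQLite database, rewrite
# SQLite-specific SQL with sed, and pipe the result into psql.

SQLITE_DB_PATH=$1
PG_DB_NAME=$2
PG_USER_NAME=$3

sqlite3 "$SQLITE_DB_PATH" .dump \
  | sed -e '/PRAGMA foreign_keys/d' \
        -e 's/INTEGER PRIMARY KEY AUTOINCREMENT/SERIAL PRIMARY KEY/g' \
        -e "s/'t'/true/g" \
        -e "s/'f'/false/g" \
  | psql -U "$PG_USER_NAME" "$PG_DB_NAME"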
Homebrew - Scala 2.10.4
Install the tapped version of Scala 2.10.4 (an older release)
> brew install homebrew/versions/scala210
Unlink the current Scala 2.11 (if installed)
> brew unlink scala
Link the tapped Scala 2.10.4
> brew link scala210 --force
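Verify that the linked version is now active (a quick check using the standard -version flag):
> scala -version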
@andrearota
andrearota / example.scala
Created October 18, 2016 08:40
Creating Spark UDF with extra parameters via currying
// Problem: creating a Spark UDF that takes extra parameters at invocation time.
// Solution: use currying.
// http://stackoverflow.com/questions/35546576/how-can-i-pass-extra-parameters-to-udfs-in-sparksql
// We want to create hideTabooValues, a Spark UDF that sets to -1 every field that contains any of the given taboo values.
// E.g. forbiddenValues = [1, 2, 3]
// dataframe = [1, 2, 3, 4, 5, 6]
// dataframe.select(hideTabooValues(forbiddenValues)) :> [-1, -1, -1, 4, 5, 6]
//
// Implementing this in Spark, we find two major issues:
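// (The gist is truncated here. The issues, as assumed from the linked Stack
// Overflow thread, are that udf() arguments must be Columns, so a plain Scala
// collection cannot be passed at call time. A minimal sketch of the curried
// solution, with illustrative names, follows.)

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

val spark = SparkSession.builder.appName("taboo-example").master("local[*]").getOrCreate()
import spark.implicits._

// Currying: the taboo list is bound in the first parameter list when the
// UDF is created; only the Column argument remains at invocation time.
def hideTabooValues(taboo: Seq[Int]) = udf { value: Int =>
  if (taboo.contains(value)) -1 else value
}

val forbiddenValues = Seq(1, 2, 3)
val df = Seq(1, 2, 3, 4, 5, 6).toDF("value")

df.select(hideTabooValues(forbiddenValues)($"value")).show()
// Expected result: -1, -1, -1, 4, 5, 6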