ilovefood2 / redis_import_csv.txt
Created September 20, 2019 06:32 — forked from arsperger/redis_import_csv.txt
Import a CSV file into Redis with a single command
awk -F',' '{print "SET \""$1"\" \""$2"\""}' data.csv | redis-cli --pipe
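For example, given a data.csv of key,value rows (the sample rows below are made up for illustration), each row becomes one Redis string key, which you can spot-check after the import:
# data.csv:
#   apple,1.25
#   bread,3.40
redis-cli GET "apple"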
ilovefood2 / data_loading_utils.py
Created September 14, 2019 18:09 — forked from iyvinjose/data_loading_utils.py
Read large files line by line without loading the entire file into memory. Supports files of GB size.
def read_lines_from_file_as_data_chunks(file_name, chunk_size, callback, return_whole_chunk=False):
    """
    read file line by line regardless of its size
    :param file_name: absolute path of file to read
    :param chunk_size: size of data to be read at a time
    :param callback: callback method, prototype ----> def callback(data, eof, file_name)
    """
    def read_in_chunks(file_obj, size=5000):
        # lazy generator: holds only one chunk in memory at a time
        while True:
            data = file_obj.read(size)
            if not data:
                break
            yield data

    with open(file_name) as fp:
        left_over = ''
        for chunk in read_in_chunks(fp, chunk_size):
            if return_whole_chunk:
                callback(data=chunk, eof=False, file_name=file_name)
                continue
            lines = (left_over + chunk).splitlines(True)
            # a partial trailing line waits for the next chunk to complete it
            left_over = '' if lines[-1].endswith('\n') else lines.pop()
            for line in lines:
                callback(data=line, eof=False, file_name=file_name)
        callback(data=left_over, eof=True, file_name=file_name)
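A minimal usage sketch (the file path and the printing callback are hypothetical, for illustration only):
def print_line(data, eof, file_name):
    # hypothetical callback: data is one line, or the final partial line when eof is True
    if data:
        print(data, end='')

read_lines_from_file_as_data_chunks('/tmp/big.log', chunk_size=100000, callback=print_line)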
ilovefood2 / m3u8download.sh
Last active July 4, 2019 04:19 — forked from patrickgill/m3u8download.sh
Download m3u8 TS segments, then decrypt, join, and remux them (HTTP Live Streaming TS files)
# download the playlist, then fetch all segments in parallel (aria2c expands the [000-286] range)
wget http://xxx.com/upload/20180419/79bf8642d29b9d51a5bebb8ddd0ea926/79bf8642d29b9d51a5bebb8ddd0ea926.m3u8
aria2c -x 4 -j 4 -Z -P http://xxx.com/upload/20180419/79bf8642d29b9d51a5bebb8ddd0ea926/79bf8642d29b9d51a5bebb8ddd0ea926[000-286].ts
# decrypt (example; the key and IV come from the playlist's #EXT-X-KEY entry)
#openssl aes-128-cbc -d -K 15D0F46608409DA364E3F5D92BDE9F61 -iv 00000000000000000000000000000000 -nosalt -in G00000000.ts -out G00000000.d.ts
# join all ts files
cat *.ts > out.ts
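The description mentions a final remux step that the preview cuts off; a hedged sketch of it with ffmpeg (stream copy into an MP4 container, no re-encode; the output name is illustrative):
# remux the joined TS into MP4 without re-encoding
ffmpeg -i out.ts -c copy out.mp4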
ilovefood2 / global-protect.sh
Created April 21, 2019 08:11 — forked from kaleksandrov/global-protect.sh
Simple script that starts and stops GlobalProtect.app on Mac OS X.
#!/bin/bash
case $# in
  0)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
  1)
    case $1 in
      # GlobalProtect ships launchd agents; loading/unloading them starts/stops the app
      # (plist paths assumed from a standard install; adjust if your version differs)
      start)
        launchctl load /Library/LaunchAgents/com.paloaltonetworks.gp.pangpa.plist
        launchctl load /Library/LaunchAgents/com.paloaltonetworks.gp.pangps.plist
        ;;
      stop)
        launchctl unload /Library/LaunchAgents/com.paloaltonetworks.gp.pangpa.plist
        launchctl unload /Library/LaunchAgents/com.paloaltonetworks.gp.pangps.plist
        ;;
      *) echo "Unknown command: $1"; exit 1 ;;
    esac
    ;;
  *) echo "Too many arguments!"; exit 1 ;;
esac
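Typical usage, assuming the script is saved under the gist's filename:
chmod +x global-protect.sh
./global-protect.sh stop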
ilovefood2 / hadoop_spark_osx
Created March 24, 2019 07:20 — forked from cjzamora/hadoop_spark_osx
Hadoop + Spark installation (OSX)
Source: http://datahugger.org/datascience/setting-up-hadoop-v2-with-spark-v1-on-osx-using-homebrew/
This post builds on the previous Hadoop (v1) setup guide to explain how to set up a single-node Hadoop (v2) cluster with Spark (v1) on OS X (10.9.5).
Apache Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. The Apache Hadoop framework is composed of the following core modules:
HDFS (Hadoop Distributed File System): a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
YARN (Yet Another Resource Negotiator): a resource-management platform responsible for managing compute resources in clusters and using them to schedule users' applications.
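Since the linked guide installs everything through Homebrew, the core install step amounts to something like the following (formula names as in current Homebrew; the guide's exact Hadoop v2 / Spark v1 versions may require pinned formulas):
# install Hadoop and Spark with Homebrew, then verify the installs
brew install hadoop
brew install apache-spark
hadoop version
spark-submit --version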