Charles Zamora cjzamora

🏠
Working from home
View GitHub Profile
<?php
// Read and decode the raw JSON request body
$data = json_decode(file_get_contents('php://input'), true);
error_log(print_r($_POST, true));

if (isset($_GET['access_token'])) {
    $subscribers = file_get_contents('subscribers.txt');
    try {
        // ... (rest of the gist preview truncated)
    } catch (Exception $e) {
        error_log($e->getMessage());
    }
}
(Gist preview truncated: minified jQuery source.)
@cjzamora
cjzamora / hadoop_spark_osx
Last active March 25, 2024 07:23
Hadoop + Spark installation (OSX)
Source: http://datahugger.org/datascience/setting-up-hadoop-v2-with-spark-v1-on-osx-using-homebrew/
This post builds on the previous Hadoop (v1) setup guide, explaining how to set up a single-node Hadoop (v2) cluster with Spark (v1) on OSX (10.9.5).
Apache Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. The Apache Hadoop framework is composed of the following core modules:
HDFS (Hadoop Distributed File System): a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
YARN (Yet Another Resource Negotiator): a resource-management platform responsible for managing compute resources in the cluster and scheduling users' applications.
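The installation the gist describes can be sketched with Homebrew. This is a minimal outline, not the gist's exact steps; the formula names and daemon scripts below are assumptions based on the standard Homebrew and Hadoop distributions:

```shell
# Install Hadoop and Spark via Homebrew (formula names assumed)
brew install hadoop
brew install apache-spark

# Format HDFS once before the first start, then bring up the daemons
hdfs namenode -format
start-dfs.sh    # starts the HDFS NameNode/DataNode
start-yarn.sh   # starts the YARN ResourceManager/NodeManager
```

After the daemons are up, `jps` should list the NameNode, DataNode, ResourceManager, and NodeManager processes.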
@cjzamora
cjzamora / .bash_profile
Created January 5, 2016 05:04
Environment Variables for Hadoop, Spark, Java etc.
# export homebrew bin folder
export PATH="/usr/local/bin:$PATH"
# export homebrew sbin home
export PATH="/usr/local/sbin:$PATH"
# export java home
export JAVA_HOME=$(/usr/libexec/java_home)
# export scala home
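Each export above prepends a directory to `PATH`, so Homebrew's binaries take precedence over the system ones. A quick way to verify the ordering (a minimal sketch, works in any POSIX shell):

```shell
# Prepend Homebrew's bin directory, as the profile does
export PATH="/usr/local/bin:$PATH"

# The first PATH entry should now be Homebrew's bin folder
echo "$PATH" | cut -d: -f1
```

This prints `/usr/local/bin`, confirming it is searched first.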
@cjzamora
cjzamora / gists-sample
Created October 27, 2014 06:40
This is a sample Gist!
Sample Gist for the Eden JS GitHub API