drocsid (Seattle, Washington)
drocsid / gist:0ed6d76d9ea5c804e5a7163c993cad98
Created Apr 9, 2018
AWS SDK Java 2 Scala getObjectFile
// Package prefixes below are restored by the editor for the AWS SDK for Java 2.x
// preview current in early 2018 (the scrape stripped them); verify against your SDK version.
import software.amazon.awssdk.services.s3.model.{GetObjectRequest, GetObjectResponse, ListObjectsV2Request, S3Object}
import software.amazon.awssdk.auth.credentials.{AwsCredentialsProvider, EnvironmentVariableCredentialsProvider, InstanceProfileCredentialsProvider, ProfileCredentialsProvider}
import software.amazon.awssdk.core.sync.{ResponseInputStream, StreamingResponseHandler}
import java.nio.file.Paths
drocsid / gist:79153b1ae228fa6a0f58b9958f552bbb
create temp table event_shp
as (
  select cust_key,
         case
           when delivery_channel <> 'JOIN'
                and product_type = 'General'
                and sale_dt in (select event_dt
                                where anniversary_public_event = 1) then 1
           else 0
drocsid / gist:ee5803d7995631abdfc06125b5e739a4
Created Jan 15, 2018
Elasticsearch SocketTimeoutException
Caused by: UncategorizedExecutionException[Failed execution]; nested: ExecutionException[]; nested: SocketTimeoutException;
    at org.elasticsearch.action.bulk.Retry.withSyncBackoff(...)
    at org.elasticsearch.action.bulk.BulkRequestHandler$SyncBulkRequestHandler.execute(...)
    at org.elasticsearch.action.bulk.BulkProcessor.execute(...)
    at org.elasticsearch.action.bulk.BulkProcessor.executeIfNeeded(...)
    at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(...)
    at org.elasticsearch.action.bulk.BulkProcessor.add(...)
    at org.elasticsearch.action.bulk.BulkProcessor.add(...)
drocsid / gist:43180f1227c8f282b2b9b73351c59eff
java.lang.NoSuchMethodError: org.apache.http.conn.ssl.SSLConnectionSocketFactory.<init>(Ljavax/net/ssl/SSLContext;Ljavax/net/ssl/HostnameVerifier;)V
    at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.<init>(...)
    at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.getPreferredSocketFactory(...)
    at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(...)
    at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(...)
    at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(...)
    at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(...)
    at com.amazonaws.http.AmazonHttpClient.<init>(...)
    at com.amazonaws.AmazonWebServiceClient.<init>(...)
drocsid / gist:b0efa4ff6ff4a7c3c8bb56767d0b6877
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.Logging;
import org.apache.spark.SparkConf;
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
drocsid / gist:b0da92eb313b1bf71912
Last active Jan 17, 2016
Running out of memory locally launching multiple spark jobs using spark yarn / submit from shell.
I launch around 30-60 of these jobs, defined like the one below, in the background from a wrapper script. I wait about 30 seconds between launches, and the wrapper then monitors YARN to decide when to launch more. The limit is set at around 60 jobs, but even when I lower it to 30, the host submitting the jobs runs out of memory. Why does this use of spark-submit exhaust memory? I have about 6G free, and merely submitting jobs shouldn't consume that much.
export HADOOP_CONF_DIR=/etc/hadoop/conf
spark-submit \
--class sap.whcounter.WarehouseCounter \
--master yarn-cluster \
--num-executors 1 \
--driver-memory 1024m \
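The preview above cuts off, but the launch pattern described can be sketched as a concurrency-capped loop. This is a hedged sketch, not the author's wrapper: `MAX_CONCURRENT`, `TOTAL_JOBS`, and `launch_job` are illustrative stand-ins, and `launch_job` here just runs `sleep` where the real script would run `spark-submit`.

```shell
#!/usr/bin/env bash
# Sketch: launch many background jobs, but cap how many launcher processes
# run at once so the submitting host's memory stays bounded.
MAX_CONCURRENT=4   # illustrative cap; the original used ~30-60
TOTAL_JOBS=10      # illustrative total

launch_job() {
  # Stand-in for the real invocation, e.g.:
  #   spark-submit --class sap.whcounter.WarehouseCounter --master yarn-cluster ... &
  sleep 0.2 &
}

for i in $(seq 1 "$TOTAL_JOBS"); do
  # Block until a slot frees up instead of stacking launcher JVMs in memory.
  while [ "$(jobs -rp | wc -l)" -ge "$MAX_CONCURRENT" ]; do
    sleep 0.1
  done
  launch_job
done
wait
echo "launched $TOTAL_JOBS jobs"
```

A likely culprit for the memory exhaustion: each `spark-submit` starts its own client JVM on the submitting host, and dozens of those at the default heap add up even in `yarn-cluster` mode. Capping concurrency as above, or shrinking the launcher heap (e.g. `export SPARK_SUBMIT_OPTS="-Xmx256m"`, if your Spark version honors it), keeps the host within bounds.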
drocsid / gist:9741e847ad7dd0c7b16d
Created Oct 15, 2015
etcd2 keeping state, has hostname not defined as an option.
core@coreos003 ~ $ sudo rm -rf /var/lib/etcd/*
core@coreos003 ~ $ sudo rm -rf /var/lib/etcd2/*
core@coreos003 ~ $ sudo systemctl stop etcd2
core@coreos003 ~ $ sudo systemctl disable etcd2
core@coreos003 ~ $ sudo systemctl stop etcd
core@coreos003 ~ $ sudo systemctl disable etcd
etcd2 -name coreos002 -initial-advertise-peer-urls -listen-peer-urls -listen-client-urls, -advertise-client-urls -initial-cluster-token etcd-core-42 -initial-cluster coreos002=,coreos003=,coreos004= -initial-cluster-state new
etcd2 -name coreos003 -initial-advertise-peer-urls -listen-peer-urls -listen-client-urls, -advertise-client-urls -initial-cluster-token etcd-core-42 -initial-cluster coreos002=http://10.5
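The URL arguments in the two commands above were stripped by the page scrape and are left as-is. For orientation only, here is a hedged reconstruction of what one node's flags typically look like, using hypothetical 10.0.0.x addresses and etcd2's conventional ports (2379 for clients, 2380 for peers); substitute each host's real addresses.

```shell
# Hypothetical addresses throughout -- not the original cluster's values.
etcd2 -name coreos003 \
  -initial-advertise-peer-urls http://10.0.0.3:2380 \
  -listen-peer-urls http://10.0.0.3:2380 \
  -listen-client-urls http://10.0.0.3:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://10.0.0.3:2379 \
  -initial-cluster-token etcd-core-42 \
  -initial-cluster coreos002=http://10.0.0.2:2380,coreos003=http://10.0.0.3:2380,coreos004=http://10.0.0.4:2380 \
  -initial-cluster-state new
```

Note that `-initial-cluster` maps member names to their *peer* URLs, which is why each entry needs the `name=http://host:2380` form.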
drocsid / gist:04fe63f4bb7a5c5a24bf
{
  "persistent": {
    "action": {
      "destructive_requires_name": "true"
    },
    "indices": {
      "store": {
        "throttle": {
          "max_bytes_per_sec": "60mb"
        }
      }
    }
  }
}
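This fragment looks like a body for the cluster update-settings API. A hedged sketch of applying it, assuming an Elasticsearch 1.x/2.x node on `localhost:9200` (the `indices.store.throttle.*` settings were removed in Elasticsearch 5+):

```shell
# Apply the persistent settings shown above to a local cluster (ES 1.x/2.x era).
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "action": { "destructive_requires_name": "true" },
    "indices": { "store": { "throttle": { "max_bytes_per_sec": "60mb" } } }
  }
}'
```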
drocsid / gist:1438eead63651112dcdc
coreos-test ~ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d4:ae:52:67:58:4b brd ff:ff:ff:ff:ff:ff
inet brd scope global dynamic eno1