
running "/xxxxxxx-data/bin/terraform0.12.8 apply -input=false -no-color \"/xxxxxxx-data/repos/xxxxxxx/xxxxxxx/77/default/xxxxxxx/xxxxxxx-default.tfplan\"" in "/xxxxxxx-data/repos/xxxxxxx/xxxxxxx/77/default/xxxxxxx": exit status 1
2019/09/17 08:08:53 [INFO] Terraform version: 0.12.8
2019/09/17 08:08:53 [INFO] Go runtime version: go1.12.9
2019/09/17 08:08:53 [INFO] CLI args: []string{"/xxxxxxx-data/bin/terraform0.12.8", "apply", "-input=false", "-no-color", "/xxxxxxx-data/repos/xxxxxxx/xxxxxxx/77/default/xxxxxxx/xxxxxxx-default.tfplan"}
2019/09/17 08:08:53 [DEBUG] Attempting to open CLI config file: /home/xxxxxxx/.terraformrc
2019/09/17 08:08:53 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/09/17 08:08:53 [DEBUG] checking for credentials in "/home/xxxxxxx/.terraform.d/plugins"
2019/09/17 08:08:53 [DEBUG] checking for credentials in "/home/xxxxxxx/.terraform.d/plugins/linux_amd64"
2019/09/17 08:08:53 [INFO] CLI command args: []string{"apply", "-input=false", "-no-color", "/xxxxxxx-data/repos/xx
nysthee / error.log
Created September 13, 2019 14:00
google_bigquery_data_transfer_config.query_config error
Error: Error creating Config: googleapi: Error 400: P4 service account needs iam.serviceAccounts.getAccessToken permission. Running the following command may resolve this error: gcloud projects add-iam-policy-binding <PROJECT_ID> --member='serviceAccount:service-<PROJECT_NUMBER>@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com' --role='roles/iam.serviceAccountShortTermTokenMinter'
on bq-test.tf line 16, in resource "google_bigquery_data_transfer_config" "query_config":
16: resource "google_bigquery_data_transfer_config" "query_config" {
#!/usr/bin/env python
import sys

import boto.utils
from boto import cloudformation, ec2


def get_group_info(ec2conn, instance_id):
    """Return the autoscaling group name for an instance, or None."""
    reservations = ec2conn.get_all_instances(instance_ids=[instance_id])
    instance = [i for r in reservations for i in r.instances][0]
    if 'aws:autoscaling:groupName' in instance.tags:
        return instance.tags['aws:autoscaling:groupName']
    return None
#!/bin/bash
# No sudo, as init scripts run as root
BUCKET_NAME=$1
STACK_NAME=$2
EC2_AVAIL_ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
# Strip the trailing zone letter, e.g. eu-west-1a -> eu-west-1
EC2_REGION=$(echo "$EC2_AVAIL_ZONE" | sed -e 's:\([0-9][0-9]*\)[a-z]*$:\1:')
# Drop the numeric suffix, e.g. eu-west-1 -> eu-west (bash substring expansion)
REGION_PREFIX=${EC2_REGION::-2}
BIND_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
echo "Bucket name for fetching configuration is $BUCKET_NAME"
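The availability-zone parsing above can be sketched in pure Python; the zone value is a hypothetical example, and the regex mirrors the sed expression in the script:

```python
import re

availability_zone = "eu-west-1a"  # hypothetical value from instance metadata

# Strip the trailing zone letter to get the region, e.g. eu-west-1a -> eu-west-1
region = re.sub(r"([0-9]+)[a-z]*$", r"\1", availability_zone)

# Drop the last two characters to get the region prefix, e.g. eu-west-1 -> eu-west
region_prefix = region[:-2]
```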
nysthee / Spark+ipython_on_MacOS.md
Last active January 19, 2018 16:01 — forked from ololobus/Spark+ipython_on_MacOS.md
Apache Spark installation + ipython/jupyter notebook integration guide for macOS

Tested with Apache Spark 2.1.0, Python 2.7.13 and Java 1.8.0_112

For older versions of Spark and IPython, see an earlier revision of this text.

Install Java Development Kit

Keybase proof

I hereby claim:

  • I am nysthee on github.
  • I am nysthee (https://keybase.io/nysthee) on keybase.
  • I have a public key ASCu0W4n-XcjDCFc4eByc9zSmmoBajVeIICkTyjO9JCtnwo

To claim this, I am signing this object:

nysthee / r.rb
Last active August 22, 2017 23:28
class R < Formula
  desc "Software environment for statistical computing"
  homepage "https://www.r-project.org/"
  url "https://cran.rstudio.com/src/base/R-3/R-3.2.2.tar.gz"
  sha256 "9c9152e74134b68b0f3a1c7083764adc1cb56fd8336bec003fd0ca550cd2461d"

  bottle do
    sha256 "d0254993416c177d7fa49b9cde95eb8bd262e3a801408b21951cc0f7755e0a0e" => :sierra
    sha256 "2098376a2d552573a1b0e2ff29c076b05a0161ec276260b5b76a80e87d5cd6c1" => :el_capitan
    sha256 "be31e78c3df77a46e91500b4809cb7f89bceacabc0c38d1bc3e56beab31bff6e" => :yosemite
nysthee / README.md
Created July 10, 2017 11:08 — forked from rantav/README.md
Find slow queries in MongoDB

A few quick tricks to find slow queries in MongoDB

Enable profiling

First, you have to enable profiling

> db.setProfilingLevel(1)

Now let it run for a while. Slow queries (> 100 ms) are collected into a capped collection, so as new entries come in, old ones are dropped; don't be surprised that it's a moving target...
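To inspect what the profiler has collected, query the system.profile capped collection directly; for example, the five slowest recent operations over the 100 ms threshold:

> db.system.profile.find({ millis: { $gt: 100 } }).sort({ ts: -1 }).limit(5).pretty()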

# List unique values in a DataFrame column
df['column_name'].unique()
# Convert Series datatype to numeric, coercing any non-numeric values to NaN
# (convert_objects was removed from pandas; use pd.to_numeric instead)
df['col'] = pd.to_numeric(df['col'], errors='coerce')
# Grab DataFrame rows where column has certain values
valuelist = ['value1', 'value2', 'value3']
df = df[df.column.isin(valuelist)]
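A small self-contained run of these patterns with modern pandas; the column names and data values here are made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({"col": ["1", "2", "x"],
                   "column": ["value1", "value4", "value2"]})

# Unique values in a column
unique_vals = df["column"].unique()

# Convert to numeric; non-numeric entries like "x" become NaN
df["col"] = pd.to_numeric(df["col"], errors="coerce")

# Keep only rows whose column matches a list of values
valuelist = ["value1", "value2", "value3"]
filtered = df[df["column"].isin(valuelist)]
```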
nysthee / postmortem.md
Created September 26, 2016 08:44 — forked from mlafeldt/postmortem.md
Example Postmortem from SRE book, pp. 487-491

Shakespeare Sonnet++ Postmortem (incident #465)

Date

2015-10-21

Authors

  • jennifer
  • martym