Richard Hightower (RichardHightower)

@RichardHightower
RichardHightower / setup_for_logback_logstash.md
Created April 20, 2016 02:15
Logstash Logback setup for Mesosphere for debugging distributed microservices

Logstash and Logback setup for running in Mesosphere.

.../salt/logstash/files/conf.d/input-json.conf

```
input {
  udp {
    port => 5000
    codec => "json"
  }
}
```
```java
import java.util.Map;

interface Config {
  String getString(String path);
  int getInt(String path);
  float getFloat(String path);
  double getDouble(String path);
  long getLong(String path);
  Map<String, Object> getMap(String path);
  <T> T getObject(String path, Class<T> type);
}
```
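For illustration, here is a minimal Map-backed sketch of how such a path-based config accessor could be implemented. This is a hypothetical example, not QBit's actual implementation; the keys and class name are made up.

```java
import java.util.Map;

// Hypothetical sketch: a Map-backed config with path-based, typed accessors.
// Not QBit's real Config implementation; for illustration only.
class MapConfig {
    private final Map<String, Object> values;

    MapConfig(Map<String, Object> values) {
        this.values = values;
    }

    String getString(String path) {
        return (String) values.get(path);
    }

    int getInt(String path) {
        return ((Number) values.get(path)).intValue();
    }

    long getLong(String path) {
        return ((Number) values.get(path)).longValue();
    }

    public static void main(String[] args) {
        MapConfig config = new MapConfig(Map.<String, Object>of(
                "server.host", "localhost",
                "server.port", 5001));
        System.out.println(config.getString("server.host")); // prints localhost
        System.out.println(config.getInt("server.port"));    // prints 5001
    }
}
```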
@RichardHightower
RichardHightower / TwoAsyncCalls.java
Last active July 7, 2016 19:20
Make two async calls and do not return until both come back or fail
```kotlin
override fun removeArtistFromSystem(artistId: Long): Promise<Boolean> {
    return Promises.invokablePromise { promise ->
        val saveSystemDataPromise = Promises.promiseBoolean()
            .catchError { e -> logger.info("removeArtistFromSystem:: unable to save system data for $artistId", e) }
        val removeArtistFromSystemPromise = Promises.promiseBoolean()
            .catchError { e -> logger.info("removeArtistFromSystem:: unable to remove $artistId from repo", e) }
        Promises.all(saveSystemDataPromise, removeArtistFromSystemPromise)
```
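The same "wait for two async calls" pattern can be sketched with plain `java.util.concurrent` instead of QBit's `Promises.all`; the combined future completes only when both inner futures complete, and fails if either fails. The stand-in suppliers below are assumptions for illustration.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the two-async-calls pattern using CompletableFuture.allOf.
// The two supplyAsync calls stand in for the save and remove operations.
class TwoAsyncCallsDemo {
    public static void main(String[] args) {
        CompletableFuture<Boolean> saveSystemData =
                CompletableFuture.supplyAsync(() -> true);   // stand-in for the save call
        CompletableFuture<Boolean> removeArtist =
                CompletableFuture.supplyAsync(() -> true);   // stand-in for the remove call

        // allOf completes when both complete; thenApply combines their results.
        Boolean both = CompletableFuture.allOf(saveSystemData, removeArtist)
                .thenApply(v -> saveSystemData.join() && removeArtist.join())
                .join();

        System.out.println(both); // prints true
    }
}
```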

/etc/logstash/conf.d

The key point is that it is fine to run many Logstash processes, each with different input and output filters. We need JSON in and JSON out over UDP: the non-UDP/JSON variants did not seem to work with extra fields/MDC. The output encoder between Logstash and Kibana was misconfigured, so it black-holed all of our Logback logs.

```
# cat 50-udp.conf
input {
    udp {
        port => 5001
        codec => "json"
    }
}
```
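Since the notes above call for JSON out as well as JSON in, a matching output section might look like the following sketch (the Elasticsearch host is an assumption, not from the original config):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # assumed host; adjust for your cluster
  }
  stdout {
    codec => "json"               # JSON to stdout for debugging
  }
}
```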
@RichardHightower
RichardHightower / foo.md
Created August 15, 2016 23:11
Installing exhibitor....
```
$ dcos package install exhibitor
We recommend a minimum of three nodes with 3GB of RAM available.
Continue installing? [yes/no] yes
Installing Marathon app for package [exhibitor] version [1.0.0]
Object is not valid
got 1.0, expected 0.5 or less
got 1.0, expected 0.0
must be false
```
@RichardHightower
RichardHightower / install_docker.sh
Created August 21, 2016 00:37
Installing lab env
```sh
## Set up the Docker repo
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
# Note: `sudo echo ... > file` fails because the redirection runs as the
# unprivileged user; pipe through `sudo tee` instead.
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list

## Install Docker
sudo apt-get update
sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
```
@RichardHightower
RichardHightower / logback.xml
Last active August 24, 2016 17:55
Java setup for Mesos/Gradle/QBit zip that uses logback/sl4j and virtual hosting
```xml
<configuration>
  <appender name="STASH-UDP" class="net.logstash.logback.appender.LogstashSocketAppender">
    <host>${LOGSTASH_HOST:-192.168.99.100}</host>
    <port>${LOGSTASH_PORT:-5001}</port>
    <customFields>{"serviceName":"sample-dcos-qbit","serviceHost":"${HOST}","servicePort":"${PORT0}","serviceId":"sample-dcos-qbit-${HOST}-${PORT0}","serviceAdminPort":"${PORT1}"}</customFields>
  </appender>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
</configuration>
```

Installing the AWS CLI on a box (logged in as root)

```sh
mkdir /tmp/awscli; cd /tmp/awscli
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -i /usr/lib/aws -b /usr/bin/aws
```
 
@RichardHightower
RichardHightower / aaa-packer.md
Last active November 3, 2016 05:03
Immutable DC/OS CentOS AMI image creator using docker
@RichardHightower
RichardHightower / a_read_me_cloudformation_dcos.md
Last active November 11, 2016 05:19
Using CloudFormation, Packr, etc. for Immutable Infrastructure to build DC/OS and deploy it to Amazon Web Services

We set up a dev environment for DC/OS in AWS (subnets, multi-AZ, auto-scaling groups, AMI images, etc.), tagged everything as dcos-dev, and then used CloudFormer to generate a starter AWS CloudFormation template. CloudFormer lets you reverse-engineer your AWS environment into CloudFormation scripts. We then modified what CloudFormer produced (it only gets you about 90% of the way there), and added mappings, parameters, and outputs to our CloudFormation template.
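For illustration, the mappings/parameters/outputs sections we added might look something like this sketch (the resource names, keys, and AMI id here are hypothetical, not taken from the actual template):

```json
{
  "Parameters": {
    "DcosAmi": {
      "Type": "AWS::EC2::Image::Id",
      "Description": "Immutable DC/OS AMI built by Packer (hypothetical parameter)"
    }
  },
  "Mappings": {
    "RegionToAmi": {
      "us-west-2": { "Ami": "ami-00000000" }
    }
  },
  "Outputs": {
    "MasterElbDns": {
      "Value": { "Fn::GetAtt": ["MasterElb", "DNSName"] },
      "Description": "DNS name of the master load balancer (hypothetical resource)"
    }
  }
}
```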

Included are the CloudFormation and Packer scripts. I hope they help you get set up. Feedback is welcome.

"We" in this case means my client, DC/OS support, Amazon support, and me. We did this instead of using the canned Amazon setup because we needed to run masters and agents in [multiple AZs](http://docs.aws.amazon.co