I am writing this down so I don't forget. I started this task, had to stop a few times, and each time had to remember where I left off; at some point, others will need to know how to get started.
$ brew install ansible
package main

import (
	"fmt"
	"os"

	"github.com/coreos/go-systemd/sdjournal"
)
We set up a dev environment for DC/OS in AWS (subnets, multi-AZ, auto scaling groups, AMI images, etc.), tagged everything as dcos-dev, and then used CloudFormer to generate a starter AWS CloudFormation script. CloudFormer lets you reverse engineer your AWS environment into CloudFormation scripts. We then modified what CloudFormer produced (it only gets you about 90% of the way there), and added mappings, parameters, and outputs to our CloudFormation script.
The CloudFormation and Packer scripts are included. I hope they help you get set up. Feedback is welcome.
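For orientation, a CloudFormation template fleshed out with mappings, parameters, and outputs has roughly this shape. This is a minimal sketch, not our actual script: the map name, AMI ID, instance type, and resource names below are placeholder assumptions.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "DC/OS dev cluster skeleton (placeholder names and AMI ID)",
  "Mappings": {
    "RegionToAmi": {
      "us-west-2": { "centos7": "ami-00000000" }
    }
  },
  "Parameters": {
    "KeyName": {
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Description": "EC2 key pair for SSH access to the nodes"
    }
  },
  "Resources": {
    "MasterInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": { "Fn::FindInMap": ["RegionToAmi", { "Ref": "AWS::Region" }, "centos7"] },
        "InstanceType": "m4.xlarge",
        "KeyName": { "Ref": "KeyName" }
      }
    }
  },
  "Outputs": {
    "MasterInstanceId": {
      "Description": "Instance id of the master node",
      "Value": { "Ref": "MasterInstance" }
    }
  }
}
```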
"We" in this case means my client, DC/OS support, Amazon support, and me. We did this instead of using the canned Amazon setup because we needed to run masters and agents in [multiple AZs](http://docs.aws.amazon.co
We created a Packer AMI builder based on the advanced DC/OS install guide's support for CentOS 7, using the official CentOS 7 AMIs as the base.
Download and install Packer:
$ brew install packer
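A Packer template for this kind of AMI builder looks roughly like the following. This is a sketch under stated assumptions: the region, source AMI ID, instance type, output AMI name, and provisioning script name are placeholders, not our actual values. The real `source_ami` should be the official CentOS 7 AMI for your region, and `centos` is the default SSH user on those AMIs.

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "source_ami": "ami-00000000",
      "instance_type": "m4.large",
      "ssh_username": "centos",
      "ami_name": "dcos-centos7-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "install-dcos-prereqs.sh"
    }
  ]
}
```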
Installing the AWS CLI on a box (logged in as root):
mkdir /tmp/awscli; cd /tmp/awscli
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -i /usr/lib/aws -b /usr/bin/aws
<configuration>
  <appender name="STASH-UDP" class="net.logstash.logback.appender.LogstashSocketAppender">
    <host>${LOGSTASH_HOST:-192.168.99.100}</host>
    <port>${LOGSTASH_PORT:-5001}</port>
    <customFields>{"serviceName":"sample-dcos-qbit","serviceHost":"${HOST}","servicePort":"${PORT0}","serviceId":"sample-dcos-qbit-${HOST}-${PORT0}","serviceAdminPort":"${PORT1}"}</customFields>
  </appender>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
## Setup docker repo

sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list

## Install Docker

sudo apt-get update
sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
$ dcos package install exhibitor
We recommend a minimum of three nodes with 3GB of RAM available.
Continue installing? [yes/no] yes
Installing Marathon app for package [exhibitor] version [1.0.0]
The install then failed with validation errors:

Object is not valid
got 1.0, expected 0.5 or less
got 1.0, expected 0.0
must be false
The key is that it is fine to have many Logstash processes running with different input and output filters. We need JSON in and JSON out for UDP; the non-UDP/JSON variants do not seem to work with extra fields/MDC. The output encoder between Logstash and Kibana was wrong, so it black-holed all of our logs from Logback.
# cat 50-udp.conf
input {
  udp {
    port  => 5001
    codec => json
  }
}
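The wire format behind the UDP/JSON pairing can be sketched in Python: the Logback appender sends one JSON document per UDP datagram, and a JSON-codec UDP input parses each datagram back into fields. The event fields below are made-up examples matching the custom fields above, and the local socket stands in for Logstash; this is an illustration of the wire format, not part of the actual setup.

```python
import json
import socket

# Hypothetical event in the shape the Logback LogstashSocketAppender emits:
# one JSON object per UDP datagram, with custom fields merged in.
event = {
    "message": "artist removed",
    "serviceName": "sample-dcos-qbit",
    "servicePort": "8080",
}

# Listener standing in for the Logstash udp input (an ephemeral port is used
# here so the example is self-contained; the real config listens on 5001).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender side: serialize the event to UTF-8 JSON, one document per datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(json.dumps(event).encode("utf-8"), ("127.0.0.1", port))

# Receiver side: each datagram decodes back into the same field structure.
decoded = json.loads(receiver.recv(65535).decode("utf-8"))
print(decoded["serviceName"])  # → sample-dcos-qbit
sender.close()
receiver.close()
```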
override fun removeArtistFromSystem(artistId: Long): Promise<Boolean> {
    return Promises.invokablePromise { promise ->
        val saveSystemDataPromise = Promises.promiseBoolean()
                .catchError { e -> logger.info("removeArtistFromSystem:: unable to save system data for $artistId", e) }
        val removeArtistFromSystemPromise = Promises.promiseBoolean()
                .catchError { e -> logger.info("removeArtistFromSystem:: unable to remove $artistId from repo", e) }
        // Resolve the outer promise once both child promises complete.
        Promises.all(saveSystemDataPromise, removeArtistFromSystemPromise)
                .then { promise.resolve(true) }
                .catchError { e -> promise.reject(e) }
                .invoke()
    }
}