- "And so it was in the days before the reign of the unikernel that the hypervisors fought the containers. Then the containers turned inward"
- "And the one that went by water battled the one that went by air"
- "And the demon and the daystar waited for the chance to strike"
- "All the while the operators feasted on the discarded table scraps of the one called G"
- "And while the bits were lost in the hydra-like pit of abstraction, a voice cried out across the land"
- "The prophet climbed the mountain and raised his keyboard and said but one word - systemd"
- "But the people were betrayed by the nspawn saying 'lennart, lennart. lama sabachthani'"
- "And then did the great moon come forth in the sky"
- "And whispered not a word. Only drawing symbols in the air...."
- "In the end all were tested in the distributed fires of the bound one and found lacking"
I've been thinking for a while about a strongly typed, compiled-binary CM system built on Go (Rust would probably work as well).
The model depends on a language that compiles to a single binary and compiles very fast. Go fits this bill nicely. I've not played with Rust just yet, but if it compiles fast it would work too.
The workflow goes something like this:
- Upload source to the "server" component.
- Server component compiles a binary for all registered hosts that the code would apply to (e.g. role:webserver)
- Optionally, for unknown clients, the binary is compiled on the fly when the host checks in and provides at least its platform information. The coolest part is that Go is easily cross-compiled, so this works for any platform that checks in.
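As a minimal sketch of that last step, the server just needs to map whatever platform string a client reports into the GOOS/GOARCH environment it passes to `go build`. The `platform` string format and the function name here are assumptions for illustration, not part of any real implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// buildEnv maps a client-reported platform string (e.g. "linux/amd64")
// to the GOOS/GOARCH environment variables the server would append to
// the environment of an `exec.Command("go", "build", ...)` call.
func buildEnv(platform string) ([]string, error) {
	parts := strings.SplitN(platform, "/", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf("bad platform %q, want os/arch", platform)
	}
	return []string{"GOOS=" + parts[0], "GOARCH=" + parts[1]}, nil
}

func main() {
	// A host checks in and reports its platform; the server cross-compiles for it.
	env, err := buildEnv("linux/amd64")
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(env, " "))
}
```

Because the Go toolchain cross-compiles with nothing more than those two environment variables, the server never needs per-platform build machines.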
$ModLoad omrelp
$RepeatedMsgReduction off
$template ls_json,"{\"@version\":1,\"es_environment\":\"dmz\",\"@timestamp\":\"%timestamp:1:19:date-rfc3339%.%timestamp:1:3:date-subseconds%+00:00\",%HOSTNAME:::jsonf:source_host%,\"message\":\"%timestamp% %app-name%:%msg:::json%\",%syslogfacility-text:::jsonf:facility%,%syslogseverity-text:::jsonf:severity%,%app-name:::jsonf:program%,%procid:::jsonf:processid%}"
*.* :omrelp:X.X.X.X:21514;ls_json
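With hypothetical field values, an event rendered by the ls_json template above would look roughly like this (pretty-printed here for readability; rsyslog emits it on a single line):

```json
{
  "@version": 1,
  "es_environment": "dmz",
  "@timestamp": "2014-10-01T12:00:00.123+00:00",
  "source_host": "web01",
  "message": "Oct  1 12:00:00 sshd: Accepted publickey for deploy",
  "facility": "auth",
  "severity": "info",
  "program": "sshd",
  "processid": "1234"
}
```

The jsonf property options render each field as a complete `"key":"value"` pair, which is why the template only quotes the literal keys itself.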
Folks know I'm pretty gung-ho about slack as a communication tool, and several people have brought up the following ValleyWag article:
http://valleywag.gawker.com/slack-is-letting-anyone-peek-at-their-competitors-1643790919/+barrett
Obviously we're concerned about it as well, but as with most security issues, things aren't so clear-cut. Many of the comments I've gotten are from people who don't use slack, so there's understandable confusion. Note that I don't work for slack (obviously) and I don't speak for them (I didn't think that would ever need to be said but...there you go).
To be clear, this is a real issue, and we've already reported to slack that it's unacceptable for us.
- When you create a slack team (the top-level construct), you are given the option of specifying which email domains you want to allow people to "automatically" be able to register with. This is really handy. So someone at apple might say "we want all apple employees to be able to join this slack team".
In rebuilding and testing our omnibus "megapackage" on centos7, we discovered a weirdness where our post install scripts kept bombing out.
Note that this same script in our omnibus package worked with no modification on ubuntu 14.04, as well as on our previous centos6 and ubuntu 12.04 setups.
It's possible the centos image we were using to test for centos6 already had selinux disabled (I haven't gone back and retested).
The really WEIRD part is that this works when I run these cookbooks manually, logged in as root (the same solo run we do in the post-install).
Something about running chef-solo inside the context of a post-install script on centos7 feels like the issue, but it could just be an old-fashioned bug.
slack "iptables-rules-changed-#{node.name}" do
  message "iptables.sav was updated on #{node.name}! This should be investigated"
  icon_url node['chef_client']['handler']['slack']['icon_url']
  channel node['chef_client']['handler']['slack']['channel']
  username "Chef iptables notifier"
  action :nothing
end

template "/etc/iptables.sav" do
  owner "root"
  group "root"
  mode "0600"
  source "iptables.sav.erb"
  # Fire the slack notification above when the rendered file changes.
  # (group/mode/source and the notify action name here are assumed;
  # the action depends on the slack resource in use.)
  notifies :post, "slack[iptables-rules-changed-#{node.name}]", :delayed
end
This is a pretty opinionated solution that we use internally. It's strictly designed to post to slack via the API and it uses our notion of wrapping EVERYTHING with a role. All of our plugins automatically use brain storage as well. To be able to execute anything with hubot, you have to be a rundeck_admin role user.
You should be able to tease out the rundeck API stuff specifically.
It depends on a common format for your job defs in rundeck. We have two types of jobs in rundeck that we use via this plugin:
- ad-hoc
- predefined
ALL of our jobs have a common parameter called slack_channel. Hubot will automatically set this for you based on where/who it was talking to.
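As a sketch of what that convention looks like, here is a hypothetical rundeck job definition in YAML format with the common slack_channel option (the job name, group, and command are made up for illustration, not our actual job defs):

```yaml
- name: restart-nginx
  group: adhoc
  description: Restart nginx on the target nodes
  options:
    - name: slack_channel
      description: Channel hubot reports results to (hubot fills this in)
      required: true
  sequence:
    keepgoing: false
    commands:
      - exec: sudo service nginx restart
```

Because every job carries the same option, the plugin can inject the channel uniformly without knowing anything else about the job.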
A modified version of the IRC adapter for Hubot that responds via the Slack webhook API.
Not at all. This was the first time I ever touched coffeescript.
- Enable the slack IRC gateway
- Create a dedicated account for Hubot
- Create a new incoming webhook for hubot to use. Set the HUBOT_SLACK_WEBHOOK_URL env var to that url.
Basic port of my old rabbitmq+ruby global log tailer to redis+golang.
Still a go newb but it works. Probably lots of edge cases. Managed to port the bits to publish to a websocket instead of stdout. Plan on cleaning all that up and publishing it as two bits - the service and the client.
This is just a basic client. Totally insecure.
Note you would need to customize this. We have our own keys in our logstash events that are pointless to you.
This tool is primarily for our developers to be able to work with our production logs. They have access to our Kibana install, but sometimes just hitting the command-line and using natural tools makes more sense. There's also an option to display stacktraces, which is handy to be able to turn on or off.
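As a sketch of the client-side formatting with that stacktrace toggle, the key names here (`@timestamp`, `message`, `stacktrace`) are assumptions; as noted above, our real logstash events carry site-specific keys you would swap in:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// formatEvent renders one logstash-style JSON event as a log line,
// optionally appending the stacktrace field when showTrace is set.
func formatEvent(raw []byte, showTrace bool) (string, error) {
	var ev map[string]interface{}
	if err := json.Unmarshal(raw, &ev); err != nil {
		return "", err
	}
	line := fmt.Sprintf("%v %v", ev["@timestamp"], ev["message"])
	if trace, ok := ev["stacktrace"].(string); ok && showTrace {
		line += "\n" + trace
	}
	return line, nil
}

func main() {
	raw := []byte(`{"@timestamp":"2014-10-01T12:00:00Z","message":"boom","stacktrace":"at main.go:42"}`)
	line, _ := formatEvent(raw, true)
	fmt.Println(line)
}
```

In the real tool the raw events would arrive over a redis subscription (or the websocket) rather than a literal, but the filtering/formatting step is the same shape.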