
zlanich / gist:4dfdb1a020a12ff788a8f34a51928317
Created October 24, 2016 16:08
saltstack-lxd-formula failed init - Full output
app_runtime_1:
----------
          ID: pip
    Function: pkg.installed
        Name: python-pip
      Result: True
     Comment: Package python-pip is already installed
     Started: 01:25:24.290411
    Duration: 524.518 ms
     Changes:
zlanich / gist:3fd07f4418836889f8361115d43baada
Created October 24, 2016 01:20
Saltstack-LXD-Formula Ubuntu 16.04 Initial Failure Output
app_runtime_1:
----------
          ID: pip
    Function: pkg.installed
        Name: python-pip
      Result: True
     Comment: The following packages were installed/updated: python-pip
     Started: 01:09:18.306701
    Duration: 25342.397 ms
     Changes:
zlanich / gist:c67190a14610f70412965f3a91792925
Created October 24, 2016 00:58
Saltstack-LXD-Formula lxd state failed - Output
app_runtime_2:
----------
          ID: pip
    Function: pkg.installed
        Name: python-pip
      Result: True
     Comment: The following packages were installed/updated: python-pip
     Started: 00:45:25.314551
    Duration: 21431.701 ms
     Changes:
zlanich / saltstack-lxd-formula-simple-pillar.sls
Last active October 24, 2016 01:01
Saltstack-LXD-Formula Simple Pillar
lxd:
  lxd:
    run_init: True
    init:
      trust_password: BooseGumps21
      network_address: "[::]"
      network_port: 8443
  python:
    use_pip: True
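For reference, the init values in this pillar should map roughly onto LXD 2.x's non-interactive init flags. A sketch of the equivalent manual invocation, assuming the formula passes these values through unchanged:

```shell
# Rough manual equivalent of the pillar's init settings
# (LXD 2.x-era flags; assumes the formula passes them through as-is):
lxd init --auto \
  --network-address '[::]' \
  --network-port 8443 \
  --trust-password 'BooseGumps21'
```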
Hey guys, I’m having a real hard time figuring out how to handle my Gluster situation for the web hosting setup I’m working on. Here’s the rundown of what I’m trying to accomplish:
- Load-balanced web nodes (2 nodes right now), each running multiple LXD containers (1 container per website)
- Gluster volumes mounted into the containers (I probably need site-specific volumes, rather than mounting the same volume into all of them)
Here are 3 scenarios I’ve come up with for a replica 3 (possibly w/ arbiter):
Option 1. 3 Gluster nodes, one large volume divided into subdirs (1 per website), mounting the respective subdirs into their containers and using ACLs plus LXD’s uid/gid maps (mixed feelings about security here)
Option 2. 3 Gluster nodes, website-specific bricks on each, creating website-specific volumes, then mounting those respective volumes into their containers. Example:
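A sketch of what Option 2 might look like on the command line; the volume, brick, host, and container names below are all made up for illustration:

```shell
# Create a replica-3 volume for one site (run on any Gluster node;
# hostnames and brick paths are placeholders):
gluster volume create somesite-com replica 3 \
  gluster1:/bricks/somesite-com \
  gluster2:/bricks/somesite-com \
  gluster3:/bricks/somesite-com
gluster volume start somesite-com

# On the web node: mount the volume, then pass it into that site's
# container as an LXD disk device:
mount -t glusterfs gluster1:/somesite-com /mnt/somesite-com
lxc config device add somesite-container webroot disk \
  source=/mnt/somesite-com path=/var/www/somesite.com
```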
# Infrastructure Data Example:
infrastructure:
  name: Some Infrastructure
  id: 123
  enabled_environments:
    - staging
    - development
  backups: true
  size_resources:
db.pillar.insert({
  _id: 'inf-123-webserver-1',
  mongo_pillar: {
    sites: {
      'somesite.com': {
        multisite: False,
        other_values: etc
      },
      'someothersite.com': {
        multisite: True,
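Assuming the goal is to serve documents like this to minions as pillar (keyed by minion ID via `_id`), Salt's mongo ext_pillar could be pointed at the collection with master config along these lines; connection details here are placeholders:

```yaml
# /etc/salt/master (sketch; host/db values are placeholders)
mongo.host: localhost
mongo.port: 27017
mongo.db: salt

ext_pillar:
  - mongo:
      collection: pillar
      id_field: _id
```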
# Preface
#
# Customers will be able to log into our GUI and create any number of "Infrastructures".
# They can then add any number of "Sites" to those infrastructures. We will be using the
# "Infrastructure ID" as part of the naming convention for the minion IDs to keep track of
# which servers belong to which customer account.
# Example: A customer creates an Infrastructure called "My Infrastructure" that gets a unique
# ID assigned in my GUI as "123". There will also be another Infrastructure below (Inf ID: 456)
# as an example of how the Pillar Data is mapped.
reactor:
  - 'salt/queue/bigjob/process':
    - /srv/reactor/salt-queue.sls
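The referenced /srv/reactor/salt-queue.sls would then react to that event tag by running something on matching minions. A minimal sketch; the state ID, target, and state name are hypothetical:

```yaml
# /srv/reactor/salt-queue.sls (sketch; target and state name are placeholders)
process_queue_item:
  local.state.apply:
    - tgt: 'inf-*'
    - arg:
      - queue.process
```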
- A profile for both my Staging and Production servers (for now, just single webservers, but later optional H/A)
- Environment Map for my Staging server(s)
- Environment Map specifically for my Production servers that each host a "Site"
- Pillars that apply to all servers, plus "Site"-specific pillars, each in their own SLS file
- States that apply to their relevant servers (Web, Database, etc.)
- Orchestration file that basically drives the "applying" of my entire infrastructure state when things change, like adding a new site, or re-syncing my connection between Staging/Prod sites for database migrations, etc.
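The orchestration file in the last bullet might start out as something like this, run with `salt-run state.orchestrate`; all targets and state names here are hypothetical:

```yaml
# /srv/salt/orch/apply_infrastructure.sls (sketch; targets are hypothetical)
refresh_pillar:
  salt.function:
    - name: saltutil.refresh_pillar
    - tgt: 'inf-123-*'

apply_webservers:
  salt.state:
    - tgt: 'inf-123-webserver-*'
    - highstate: True
    - require:
      - salt: refresh_pillar
```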
For now, I would write these SLS and Map files by hand, but once I write my own GUI for managing these "Sites", I can create the objects, run them through a YAML encoder, and write the YAML files programmatically.
The other option down the road would be to write my own connector that delivers Salt Pillar Data, etc. in the same way "Reclass" does.
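The YAML-encoder step is small; a minimal sketch of generating one site's pillar SLS text (PyYAML; the data shape mirrors the mongo_pillar example above, and the function name and output path are made up):

```python
# Sketch: render a per-site pillar SLS programmatically with PyYAML.
# Data shape mirrors the mongo_pillar example above; names are hypothetical.
import yaml


def render_site_pillar(site: str, multisite: bool = False) -> str:
    """Return YAML text for one site's pillar .sls file."""
    data = {"sites": {site: {"multisite": multisite}}}
    return yaml.safe_dump(data, default_flow_style=False)


if __name__ == "__main__":
    # A GUI backend would write this to e.g. /srv/pillar/sites/somesite_com.sls
    print(render_site_pillar("somesite.com"))
```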