GeMa Spec
---
so in the video you'll notice that the vagrant box is built when a presence-check fails
Jose Raffucci (jose@raffucci.net)
right
the VM inventory is based on the salt roster, and the vm-image name is pulled from the roster's value for the basebox_image grain
for each of the defined VMs
finally, the VMs are created w/ a bunch of hostnames and configured to use a local DNS resolver
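for example, a salt-ssh roster entry along these lines would drive that inventory (just a sketch ... only the basebox_image grain name and the salt.vagrant.dev fqdn are from what I described, the box value and minion names are made up):

salt:
  host: salt.vagrant.dev
  grains:
    basebox_image: centos/7   # vm-image name is pulled from here
minion1:
  host: minion1.vagrant.dev
  grains:
    basebox_image: centos/7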
Jose Raffucci (jose@raffucci.net)
this is very cool, its an instant dev sys
vagrant is configured to recognize when a master is created and assign it to the salt.vagrant.dev fqdn
Jose Raffucci (jose@raffucci.net)
its super sweet for sure
Jose Raffucci (jose@raffucci.net)
because the minions will reach out to salt and the fallback domain suffix for the local resolver is vagrant.dev, this means that minions will supplicate to that VM on start-up w/o needing to configure networking options
Jose Raffucci (jose@raffucci.net)
magic!
no really that is so nice
i mean you could deploy pre-configured systems
like hadoop
or eucalyptus
etc..
yeah, it lowers the bar and lets people focus on their designs
yup ☺
...
Jose Raffucci
Jose Raffucci (jose@raffucci.net)
yeah in me youth
lol
So would you be OK with the idea of an internet accessible secret server?
The benefit is that it makes collab easier because secrets dont need to move, only the references to them.
but I also understand having reservations.
Jose Raffucci (jose@raffucci.net)
ya i would,
i mean my db is exposed to the internet
soo
yeah, the nice thing about vault too is that it's distributed, so I'll see if I can secure it behind network/nat barriers while still allowing replication
Jose Raffucci (jose@raffucci.net)
yeah thats cool too, i think though if you see the same possibility for the overseer-agents to talk to vault for their config it might be cool, then again using salt to set system variables is also an option
Jose • Mon, 10:18 AM
oh yeah of course
once it's set up it can be used by multiple systems.
Jose Raffucci (jose@raffucci.net)
right
yeah i dont mind being the guinea pig
private sector gaming
bleeding edge usually
can I show you what the machine config looks like?
Jose Raffucci (jose@raffucci.net)
sure!
https://github.com/genesis-matrix/gema-plr-core/blob/master/lookup/minion_id/example-minion.sls
This has all the per-minion control info in one place.
Jose Raffucci (jose@raffucci.net)
wow indeed
i love it
50 lines with spaces
It gives you the ability to add labels (that may trigger other actions), to call states (and provide per-state variables), and to add other organizing info for the minion.
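roughly this shape (a made-up sketch, not the real file ... only the label idea and the per-state variables idea come from what I just described):

example-minion:
  label:
    role: webserver            # a label that may trigger other actions
  sls:
    apps.nginx:                # a state to call...
      listen_port: 8080        # ...with a per-state variable
  meta:
    owner: ops-team            # other organizing info for the minion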
Jose Raffucci (jose@raffucci.net)
was gonna ask what label was
perfect
im sold
im easy though
lol
so when you get like downtime, prob never, call me and we can go over genesis and how to start using it now
so i can apply more uniformity and prepare for my next version release of overseer
thats what I borrowed from Kubernetes ... labels are the simplified entry points for describing minions, and lookups are the data/actions that use the labels to expand
Jose Raffucci (jose@raffucci.net)
ah
and labels are just a dictionary
Jose Raffucci (jose@raffucci.net)
so labels are imports?
im just simplifying
nope, just tags, but they are dereferenced from within the pillar to expand into lookups which may add metadata, state files and sls-specific variables and more
Jose Raffucci (jose@raffucci.net)
k so tag gets filled in
so for many systems, it'll only be necessary to add labels
Jose Raffucci (jose@raffucci.net)
?
i mean labels
get filled in?
im confused as to what you mean by entrypoint
from what i see
the lookup gets the labels?
yeah, so backing up a little ... to simplify things, there are only two ways to affix info to a target and these are: minion_id and nodegroup.
Jose Raffucci (jose@raffucci.net)
k
i see the minion id
and i see node data
both of these are guaranteed to be correct by salt
so the file you're looking at is a lookup file
the label (represented as json) would look like this {label: {minion_id: example-minion}}
that label is applied implicitly
nodegroup works similarly
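so in pillar terms the implicit labels come out something like this (the nodegroup value is an invented example):

label:
  minion_id: example-minion   # applied implicitly by salt
  nodegroup: dev-cluster      # works the same way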
Jose Raffucci (jose@raffucci.net)
ok
i think i follow
so those are the "entrypoints" because they affix the initial labels
Jose Raffucci (jose@raffucci.net)
ok
so when this lookup generates the config
it spits everything out as labels?
scratch that
in the lookup file, you'll notice there are also secondary_labels and secondary_lookups ... that's where an entrypoint lookup can throw out other references to use the same label lookup convention.
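roughly like this (a sketch ... only the secondary_labels / secondary_lookups key names are from the file, the values are invented):

example-minion:
  secondary_labels:
    role: webserver            # fed back through the same label convention
  secondary_lookups:
    - role_webserver           # resolved just like an entrypoint lookup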
Jose Raffucci (jose@raffucci.net)
it spits out the labels that it aggregated?
ah
Exactly!
Jose Raffucci (jose@raffucci.net)
kk
so you can start simply, and tersely but expand to add machine description in the data structure that you see before you.
Jose Raffucci (jose@raffucci.net)
so the lookup can get similar structured data and stuff it back as label, to let you extend configurations?
So a label is really like a property
in programming terms i guess
I think thats right
in terms of how to describe it
Jose Raffucci (jose@raffucci.net)
and a lookup is like a dynamic way of aggregating entry points which are of different types. and an entry point is another set of generated configs
is that right?
I describe it as a data-structure because thats how the operator would work on it
Jose Raffucci (jose@raffucci.net)
ya makes sense
properties can be structured
hmmm
we need a chat, salt seems to be so powerful, sometimes makes me think it could do overseer's job
yeah ... and because the properties are described as embedded dictionaries, every label can produce a chain of actions
Jose Raffucci (jose@raffucci.net)
ah
so that was the key
So under label, the meaning of the dictionaries flows like this: question ... answer ... detail
so label:minion_id is the question
Jose Raffucci (jose@raffucci.net)
ya
label:minion_id:example is the answer
in the example of the minion_id question, the answer is supplied by the system
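as nested keys the flow looks like this (the detail value is an invented example):

label:
  minion_id:                   # the question
    example-minion:            # the answer, supplied by the system
      datacenter: east         # detail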
Jose Raffucci (jose@raffucci.net)
so this is like a template
sls ninja
so label:minion_id uses the default value, which uses salt internally to fill in the answer as label:minion_id:example, and uses that as the label
Jose Raffucci (jose@raffucci.net)
jinjs
ugh
typing fail
njinja
Jose Raffucci (jose@raffucci.net)
ty for the support
note how I use anchored references to move the minion_id metadata back up into the tree
it makes it possible to make the data expandable from a single tree
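anchored references in the plain YAML-anchor sense, i.e. (a generic sketch of the mechanism, not the actual file):

example-minion: &minion_meta       # anchor the per-minion metadata...
  datacenter: east
label:
  minion_id:
    example-minion: *minion_meta   # ...and reference it back up in the tree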
Jose Raffucci (jose@raffucci.net)
ah
and that means, as an additional bonus, that properties that propagate through the system in this way can be traced back to their source, because the source data is still in the tree
similar trees are merged, but the entrypoint labels are always unique so are preserved as-is
non-unique labels are merged recursively to allow them to accrete
Jose Raffucci (jose@raffucci.net)
very cool, im starting to see why kub did it like this, so they can split off each logical part of the config, especially if needing different accesses
In case you can't tell I'm very happy with this solution. I finally feel like the data-management story is complete.
Jose Raffucci (jose@raffucci.net)
dynamically
ya
i can see
i'll still need some more help understanding the fine bits but it make sense what u explained
this might go further than kube, but was very much inspired by it
Jose Raffucci (jose@raffucci.net)
yeah no doubt
and it's something I just put together in nights and weekends because I saw a need
Jose Raffucci (jose@raffucci.net)
wow cool
it does seem like a logical solution
and less messy
down right understandable 😉
yup, because there is a lot that doesn't need to be stated or can be stated tersely
Jose Raffucci (jose@raffucci.net)
right
the tree can fill in to become useful, but the order of operations and ability to audit isn't lost
... and (almost most importantly), it supports templating to a much higher level because default state file variables can be overridden on a per-minion basis
Jose Raffucci (jose@raffucci.net)
like this example
Yup!
On that last point, see lines 38 and 39
Jose Raffucci (jose@raffucci.net)
ah ha
and that data was gathered by a pillar?
This will be expanded into its own lookup as lookup:sls_path:{{sls path name}} with an embedded dict of custom variables
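so roughly (a sketch ... the sls path and variable here are invented):

lookup:
  sls_path:
    apps.nginx:                # the {{sls path name}} fill-in
      listen_port: 8080        # embedded dict of custom variables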
Jose Raffucci (jose@raffucci.net)
ah
so you use a template to stuff in the path and name
Yes!
see this in action here: https://github.com/genesis-matrix/gema-sls-core/blob/b32879d2dbb8de794a9c5d3354e6556550cbfb3e/state/machine/_spec/minset-configs.sls
Jose Raffucci (jose@raffucci.net)
got it
line 13
nice
it's generic and can be included as boilerplate in each state
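the boilerplate amounts to a Jinja line like this at the top of a state (my guess at the pattern, not the literal line from the file ... listen_port is an invented variable):

{% set cfg = salt['pillar.get']('lookup:sls_path:' ~ sls, {}) %}
{% set listen_port = cfg.get('listen_port', 80) %}   {# sane default, per-minion override #}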
Jose Raffucci (jose@raffucci.net)
so the template uses the pillar
got it
looks cool asf
yeah I think it's a winner
the state top.sls and pillar top.sls are simple but fully templated so dont need to change unless new entrypoint labels are added (but minion_id and nodegroup are pretty comprehensive)
under normal circumstances they never change
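e.g. a top file in this style can stay static because the entrypoint is templated (a sketch of the pattern, not the actual top.sls):

base:
  '*':
    - lookup.minion_id.{{ grains['id'] }}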
Jose Raffucci (jose@raffucci.net)
gotcha
since what you were talking about, labels, is a way to add entrypoints dynamically, or at least without modifying the root sls files
I'll add more templating to the minion_id lookup I showed you, replacing the literal example-minion with a fill-in
this will make it possible to symlink example-minion2 to example-minion1 where you want them to have the same configs
Jose Raffucci (jose@raffucci.net)
ah
for like clustering etc..
clustering is just one example, lots of times it's just for uniformity
so with highly generic states with sane default values that can be overridden it becomes possible to describe an entire new system in a single file
Yup
Jose Raffucci (jose@raffucci.net)
because of the way this is structured labels can be added by external systems like mongodb or consul
labels also provide good simple interfaces for integration with other systems, like Active Directory
Jose Raffucci (jose@raffucci.net)
ya
or vault
or is vault more for pillar
if a machine is a member of a certain ADgroup, that can be a new entrypoint (or part of an existing nodegroup) and it gets states assigned to it
All of a sudden AD becomes a way to manage machine labels
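e.g. a nodegroup in the master config keyed off an AD-group grain (the ad_groups grain name is an assumption):

nodegroups:
  ad_web: 'G@ad_groups:WebServers'   # members of this AD group become a salt target group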
Jose Raffucci (jose@raffucci.net)
nice
so thats about it, but I think you can see why I'm excited. This really all only came together for the first time last night
Jose Raffucci (jose@raffucci.net)
Ya I'm sold
but for the moment it feels like the solution I want to use, with a simple interface, built to be troubleshot and extended with reasonable effort, and able to be stood up quickly with re-usable and sharable states.
This is the Genesis-Matrix
The rest is polish and extending functionality.
Jose Raffucci (jose@raffucci.net)
Ya looks good
Jose • Mon, 11:28 AM
oh ya know I was thinking about this, setting up a secret sharing system isn't really required if we can both access a private git repo
that's a lower bootstrapping cost
Jose Raffucci (jose@raffucci.net)
im fine with that also
Jose • Mon, 1:20 PM
Jose Raffucci (jose@raffucci.net)
hey homie
how your day been?
ok except that I've been stuck on this weird problem
Jose Raffucci (jose@raffucci.net)
weird ones are what make us...
I've been using emacs/org-mode/org-babel/bash/salt in an unholy combination because I didn't have the flexibility I needed w/ Salt ready to go yet.
Jose Raffucci (jose@raffucci.net)
oh
Can't wait to start rolling out this usage convention, it'll really make things so much easier
Jose Raffucci (jose@raffucci.net)
ya sounds like it really will, then you can stop your blasphemous ways
srsl
y
e
srsl-e
Jose Raffucci (jose@raffucci.net)
😉
I only started doing it this way b/c my boss was scared of salt, but then I realized there were things I was doing that were hard to reproduce back in salt land.
... until now hahah
Jose Raffucci (jose@raffucci.net)
lol
i think showing your boss that kub
uses it
that it prefers it
is prob strong enough evidence
that its superior
I think the next steps for me will be consul, vault and kubernetes
hahah
i like that word
Jose Raffucci (jose@raffucci.net)
ya
Jose • Mon, 4:58 PM
enterprise:
- low-cost, adaptable, dynamic, provable, portable systems design
Machines, Services, Accounts
- Machines will be configured as: <project>_<role>, corresponding to their names: <project>-<role>-<deployenv><iterator>[<descriptor>..]
- this reflects a resource allocation mapping: a resource of this role type is assigned to this project.
- machine roles are expected to use a common role configuration, even among projects
- machine role names are pointers to a UUID-labelled resource. This can appear as salt://discrete/machine_role/<descriptive_text>
- Services will be configured as: <project>_<role>, which will look similar but with different implications.
- this reflects a service customization/specification. This service type is configured to the requirements of this project.
- it is natural to assume that service:<project>_<role> will correspond to machine:<project>_<role>, and that is intended to facilitate quick recognition of relationships, even in a wildly heterogeneous environment. It's the beginning of understanding, by providing context, but not the end.
- the ability to assign a service:<project>_<role> to a machine with a matching role, even in a different project is a design goal
- the ability to run multiple different service:<project>_<role>'s for different projects is explicitly not guaranteed. In other words, similar services from different projects are not required or expected to co-exist on the same machine.
- services should present the following state options
- init (by
- services should present the following metadata (pillar-backed), sketched in the pillar example below
- machines_available, defines the list of all machines this service may (is permitted to) be applied to
- machines_assigned, defines the list of all machines where this service is or is set to be active
- machines_provisioned, defines the list of all machines where the service state is applied and have not been unapplied
- machines_onduty, defines the list of all machines where this service is provisioned and onduty
- service venn diagram:
- available:
- assigned
- provisioned
- where a system is assigned AND provisioned, it may be onduty (aka active)
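- pillar sketch of the metadata above (the machine names are invented):

service:
  myproj_web:
    machines_available: [myproj-web-tst1, myproj-web-tst2, myproj-web-prd1]
    machines_assigned: [myproj-web-tst1, myproj-web-prd1]
    machines_provisioned: [myproj-web-tst1, myproj-web-prd1]
    machines_onduty: [myproj-web-tst1]   # assigned AND provisioned, so may be onduty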
Q: how to handle refcounting in infrastructure design
- if a configuration element is no longer needed and is not being used it should be garbage-collected i.e. throw-away
- but within a deep network of interdependencies, how to know when this is safe to do
- this bears resemblance to the problem of ref-counting for garbage-collecting in memory-managed runtime environments
Notes:
- discrete resources MUST reference only other discrete resources using only UUIDs
- when delivering a resource, or tracking requirements delivery, the following should be noted: Common Language Name, top-level UUID (where possible/available), the genesis-ctl SHA1 (which should provide the SHA1s of all submodules as well) and a date (if desired)
pkg-install-salt-master:
  pkg.installed:
    - pkgs:
      - salt-master
      - python-pygit2
    - require:
      - sls: config.apps.pkgrepo.epel-add
##
##
##
lookup:
  issue_orgid:
    52a6d8c9-c1fe-4657-bf6c-473cbad0225e:
      credset: []
      configmap: []
      tgttags: []
      uris: []
## EOF
Title: GeMa, Introduction and Applicability
Purpose: To introduce GeMa, answer questions and evaluate applicability for adoption and investment w/i DevOps @ 14w.
Agenda:
- Overview:
- Q, what is it (design)
- Q, what does it do (enablement)
- Q, why does it matter to us now (implications)
- Design:
- objective: low-cost, adaptable, dynamic, provable, portable systems design
- components:
- GeMa Controller: is a portable and scriptable workstation for modeling, reasoning, and communicating about computer technology topics.
- GeMa Paradigm: is a powerful and disciplined approach for safely managing runtime, operational, and other system design data sets.
- Demo:
- Snapper (for OS-level snapshot and rollback)
- Artifactory
- etc, etc
- Workflow:
- enhancements and fixes are requested via Jira
- (opt.) test-driven acceptance criteria is defined using testinfra
- a branch for this work is created in the gema {{ your_org }} repo
- work underway on the feature/fix using GeMa workstation (as needed/appropriate)
- the work can be tested by colleagues during dev by pulling the branch
- a PR is submitted to merge the work branch
- the PR is peer-reviewed, and if appropriate, others may be brought in for demo a/o consultation
- when adopted, the PR is merged to live staging branch
- when successful, the PR is promoted to live master
- Uses:
- Current 14West-related Work:
- Extensively used for work to: develop Snapper VM config
- Aided in development, testing, learning of: Splunk, Artifactory,
- integration with: docker, Artifactory, FreeIPA, Atlassian
- Future {{ your_org }}-related Work:
- integrations with: AD, PMP, AWS, MongoDB, Splunk Event Logging
- eval of Saltstack Enterprise Support, which includes the Web Mgmt GUI
- Asks:
- [ ] innovation project: to finish/update GeMa paradigm implementation on our salt infrastructure in TST and PRD
- [ ] innovation project: to produce VMware utilization reports to aid in billing
- [ ] innovation project: to port GeMa workstation to native Linux
- [ ] innovation project: on VM creation, create Jira issues with system details
- [ ] Discovery/PoC: creating VMs on vSphere from Templates using Salt
- [ ] Discovery/PoC: executing and wrapping the PubSvs-style provisioning process with Salt/GeMa
- [ ] Discovery/PoC: investigate workflow improvements for VM life-cycle mgmt, (particularly HW Provisioning)
- [ ] funds for SSCE training and exam w/ goal of becoming Saltstack certified this year
- Q, under what circumstances would Saltstack Enterprise Support be considered warranted
- Q, what GeMa capability areas are the most meaningful at 14west, and that you'd like to see the most investment in
- Q, what specific and general goals would you like to see GeMa adopt
- Q, any suggestions on the best case to make for adopting GeMa
- Next Steps:
- Q, is GeMa suitable for adoption, if not, why not
- Q, identify continuing objectives, hurdles, criteria and the like
--- # related ideas
- practical tooling for engineering discipline as applied to system design and operational management
- Reproducible Research
- DRY and other programming best practices
- Literate Programming
- Cattle not Pets
- Immutable Infrastructure
- Event-based Infrastructure
- Low-cost, Consistent Implementation of Best Practice
- Mean Time To Recovery, (MTTR)
- Test-Driven Infrastructure Development
...
--- # Intro GeMa Workstation is DONE, now intro GeMa Paradigm
I covered enough of the GeMa workstation in the first meeting to provide some sense of what it is.
For the second meeting I'll use the workstation to introduce the idea of the GeMa paradigm, which is a disciplined operations-focused approach to organizing and expressing system configs.
As part of the paradigm discussion I'll show a couple demos. Any thoughts on what would make a good demo or good use-case example?
(1) demo of state to stand up an Artifactory server, (2) demo of state to apply OS-level rollback configuration, (3) demo of simple in-system requirements documentation and checking
We'll discuss existing and potential use cases, and some of the ways this could be integrated into new and existing workflows.
Finally, what's the best route to designing a successful innovation project?
I've included some questions that I think will help to answer that question, and hopefully will allow us to identify some next steps along the route of creating a proposal and associated LOEs for wider review.
Cheers and thanks for your involvement!
Observation:
- I found that the pillar doesn't do recursive symbol resolution, which is something I assumed it did and which becomes necessary for highly complex data modelling.
Background:
- There are systems in Salt that _do_ do this, but they're not available using the gitfs pillar, meaning that a local checkout of the pillar is needed.
- the pillar is a highly valuable, dynamic and arbitrarily large set of data and needs to be seriously protected if it's going to be distributed
- happily it's only on the master, even if there's only one master it's still "distributed" in my view
- I want data designers to have 100% confidence in using the pillar with their most business critical secrets
- especially for a bootstrapping project I want to make it easy for folks
Questions/Problems:
- does salt have like a vault or something analogous?
Predictions:
- there is always the possibility that it'd be desirable to set up multiple masters
- the pillar is most useful when we think about how to use it to model data and I dont want data designers to be able to assume that anything and everything can safely go in the pillar
Evidences:
- so to support this I wrote a state that will mount /srv/pillar as a tmpfs volume and changes the swapfs to use a one-time encryption key
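- a sketch of that state (mount options/sizes here are placeholders, not the real values):

srv-pillar-tmpfs:
  mount.mounted:
    - name: /srv/pillar
    - device: tmpfs
    - fstype: tmpfs
    - opts: size=256m,mode=0700   # the pillar checkout lives in memory only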
right but those masters are under your control right?
so this is nice because it prevents offline attacks
gotcha
there is integration with Hashicorp's vault: https://docs.saltstack.com/en/develop/ref/pillar/all/salt.pillar.vault.html
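wired into the master config along these lines (the path value is just the doc's example):

ext_pillar:
  - vault: path=secret/salt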
ah
so a way to possibly move beyond the local share
which is a good solution, but to maintain the change controlled git-backed workflow this is simpler to get started and easier to understand
because it's still git
right, just pull and get the latest pillars
just has to be done on master, can be gateway controlled that way
the state will set up the secure in-memory filesystem then check out the pillar to that dir
the vault integration could be a nice-to-have for down the line but having a well composed solution for bootstrapping securely makes me happy, it should be sufficient as a basis
and I can then use it with other pillar systems, like https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.stack.html (which does provide recursive symbol resolution), and https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.file_tree.html (which makes managing binary secrets super easy)
yeah, it's nice because the changes I made fix a problem and dont require an infrastructure/data designer to change anything about their workflow
I need to polish the solution up a bit to set the pillar to refresh on a scheduler but thats no biggie
yeah, either webhooks, or schedule-based or both, both are supported, schedule-based is more resource intensive for sure but simpler so I may seek that out first
oh the third option is in response to imperative command
and even then, a git pull is hardly gonna bring anything down to its knees
so if there is a single controller that the masters report to, a command to update could be issued to affect all of them at once
yup, it's pretty efficient w/ the sha1 comparisons and such
also, continued cool: this will support gpg encrypted contents so while I wouldn't expect it to be a common pattern, it is possible to have an additional layer of security - enough that a public git repo could be used for a lot of confidential efforts
for now I'm going to stay away from gpg in the pillar, it's really cool, but is very likely to confuse things imho
a big part of this design is leveraging kubernetes but I'm thinking of re-doing the schedule to move things around to support a smaller MVP
k8s can come later, but this change really helps close the gap
it allows a person to say, "just add more masters"
spin up a master on a cloud service, etc w/o too much worry about data leakage
so tmpfs means that the data is fast and not persistent on disk
and per-boot encrypted swap means that if there's memory pressure and the tmpfs needs to be paged out, even if there is an unplanned power outage, the data shouldn't be recoverable
yup
it's not perfect but it's an effective mitigation that enables running multiple parallel masters, even isolating them by networks or according to their flocks
because it's still git sync'd
I actually opened a feature request to have pillar stack support gitfs and the maintainer closed it
I was like you SOB, but now I'm not stuck behind that either haha
so the pillar is a secured access yaml database
ok, final version: it's a hierarchical key value store
I know you're talking about dynamically generating data for a state request, correct?
well the end goal of that process is to feed data to do an action, correct?
yeah
i guess it's mostly that, but (maybe) it'd also be a useful reference for other systems
for example, if I wanted to store an inventory list of servers for access from another system I could put that in the pillar and access it via API
sure, all data can be useful if easily accessible
for now I'm trying to think about how to order these things so that it's comfortable to manipulate in a powerful way that doesn't violate the principle of least surprise for an operator
the more I think about this the more I realize that the pillar needs to be able to create a fairly dynamic and complete model of the infrastructure details in order to be more useful
so you're talking hierarchy in the pillar as well as the order in which they inherit
to accomplish this there are a few subsystems that I'll combine: the pillarstack system (which provides recursive symbol resolution), the consul system (which uses the consul key value store to update the pillar with runtime info), and the default pillar which uses the top file
yeah
it's a data assembly line
each of these subsystems provide slightly different things and work slightly differently
but they are stackable in any order because they'll inherit the work from the underlying system
ok i got it!
I'll do pillarstack first, this will be more static but it will help to setup some usage patterns about how data is associated with attributes in a fairly rich way
you def work more abstractly more of the time than I do, im kinda jelly
that sounds like good idea
then the normal pillar because that is CM'd in git, it will make controlling the flow of that complete data model more visible as a control point
then the runtime pillar: consul
because if a piece of data needs to be added it should be possible without needing the layers below it to change
so pillarstack -> gitfs/vanilla -> consul
and consul is optional
pillarstack ships with salt so it's OK to bake it in
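in master-config terms the ext_pillar part looks something like this (paths/names are placeholders; where the vanilla/gitfs pillar interleaves is governed by a separate master option):

ext_pillar:
  - stack: /srv/stack/stack.cfg   # pillarstack: recursive symbol resolution
  - consul: my_consul_config      # consul: the optional runtime layer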
I'll just write the association in pillarstack to generically create complete data for vanilla/gitfs to use, it will be context-oriented ...
so if a minion has a certain domain-suffix (for example) it will get access to the data for that domains DNS controllers
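pillarstack's config file is built for exactly that kind of context selection (a sketch, the file names are invented):

# stack.cfg ... each line is a Jinja-rendered path to a yaml file
core.yml
domains/{{ __grains__['domain'] }}.yml   # per-domain data, e.g. that domain's DNS controllers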
pillarstack will be kept generic, and custom work will go into the vanilla/gitfs pillar because it's more likely to change over time
pillarstack will be focused on building a system of data, vanilla will be focused on customizing it to a site, and consul will be focused on making it dynamic and responsive ☺
because pillarstack goes first, the values it resolves will be available to vanilla, so although it cant make internal recursive references, it can make references to the data that it's inherited, so it's not a fully-free axis of movement but it's an improvement
i see, so there are restrictions, and since pillar has the widest scope of generic data then it should go first
it means the operator will need to know a little about this system to use it (which I try to avoid) but as many assumptions as possible can be built into pillarstack so that eventually the vanilla can be simple, expressive and sane
It seems to me this is about automation and I don't see many people going in and committing to git, at least not in the standard way?
yeah, because as the vanilla system evaluates its pillar it will be able to fulfill references to pillar data that's already in the model provided by pillarstack, but not any new pillar data generated by itself earlier in its evaluation. it's an odd restriction but I'll seek to manage it with good pillarstack policies
but if a person is going to use it, I have to be able to point them at a control point and that may as well be git ☺
kk, or a web page that creates a commit?
the other almost funny wrinkle is that pillarstack will operate on the exact same directories that are provided by the gitfs/vanilla subsystem
just saying, how many people need to have git access and is it a good idea...
sure it could be bitbucket or stash or whatev
so there will always be gui's
but in this system design it's a requirement that every part of it be change-controlled
at least up to the runtime stuff that we dont necessarily want to be change controlled
yeah, it's kinda true
consul, the runtime pillar will kinda help with this as well: https://demo.consul.io/ui/
right but the interface to that, I just would not trust many people with git access to a repo
yeah, I think there will be a need for a well-informed integrator as change requests emerge
the nice thing is the whole stack should support making it all testable
it's neat, it's service discovery, service health monitoring, and key-value all in one
since the inventory of a site doesn't need to be committed to git, I'm figuring that stuff can live here, in what I'm calling the runtime layer so that likely, most management will take place here and can even be queried and manipulated via API
all of these pillars have slightly different interfaces but I think this order makes the most sense
i have to admit
that data problem does seem like it can get out of control
I can see where your concern was