Although this might look similar to other configuration management systems, its philosophy is a bit different. Some of the key aspects of that philosophy are:
- Data is key. Data should be easy to gather, process and make available to potential consumers.
- YAML is not a programming language; it's just a means to define simple data and glue different building blocks together.
- It's easier to write, troubleshoot and debug simple Python than complex YAML.
- It's easier to write, troubleshoot and debug simple Python than complex Jinja2 templates.
The inventory is just a replaceable python script which adheres to some standards. It's supposed to be simple and provide generic information like groups/roles, devices and a mapping between the two. It also provides connectivity information (hostname, username, password, OS, etc). More specific data is gathered later on by modules (see Modules section).
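As an illustration only (the exact contract for inventory scripts is not specified here), a minimal inventory script might emit something like the following. All hostnames, credentials and group names are made up for the example:

```python
# Hypothetical sketch of a minimal inventory script; the real interface
# may differ. It returns groups, devices, the mapping between the two,
# and basic connectivity information.
import json

def inventory():
    return {
        "groups": {"spine": ["spine00"], "leaf": ["leaf00", "leaf01"]},
        "hosts": {
            "spine00": {"hostname": "10.0.0.1", "username": "admin",
                        "password": "secret", "os": "eos"},
            "leaf00": {"hostname": "10.0.0.2", "username": "admin",
                       "password": "secret", "os": "junos"},
            "leaf01": {"hostname": "10.0.0.3", "username": "admin",
                       "password": "secret", "os": "junos"},
        },
    }

if __name__ == "__main__":
    # emit the data for whoever consumes the inventory
    print(json.dumps(inventory(), indent=2))
```

Note that there is no service-specific data here; anything beyond generic connectivity and group membership is left to modules.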
Modules provide reusable functionality and form the basic building blocks.
To write a module, a base class will be provided with some basic functionality. The module can then implement either or both of the methods `run_scoped` and `run_global` to provide functionality.
Modules can run in two modes:
- global - Tasks run in global mode are run once only and their data is made available to everybody. Tasks in the `pre` and `post` sections are global. To provide `global` functionality, a module has to implement the `run_global` method.
- scoped - Tasks run in scoped mode are run per device and their data is only made available to other tasks within the same scope or to global tasks. Tasks defined in the `tasks` section are scoped. To provide `scoped` functionality, a module has to implement the `run_scoped` method.
A module can combine `run_global` and `run_scoped` to easily provide both functionalities. For example:

```python
class MyModule(BaseClassYetToProvide):
    def run_scoped(self, host):
        do_something(host)

    def run_global(self):
        for group in self.inventory.groups():
            for host in self.inventory.hosts_in_group(group):
                self.run_scoped(host)
```
A module doesn't need to override both methods. It might make sense for some modules to only run in `scoped` mode (for example, a module that loads configuration on a device) or in `global` mode (a module that processes data).
- facts_yaml - Reads a directory structure with host files and group files and provides data. Provides similar functionality to other configuration management systems.
- napalm_facts - Gather facts from live devices using napalm getters.
- network_services - Based on a directory structure `$service/$vendor/template.j2`, defines a set of services that can be mapped to devices and groups.
- napalm_configure - Provides configuration capabilities based on napalm.
- ip_fabric - Reads a file defining a topology, correlates the definition with data from the inventory and generates all the necessary data for the deployment.
- template - Provides generic jinja2 functionality.
- napalm_validate - Provides the validate functionality of napalm.
Brigade doesn't have the notion of `commit` or `dry-run`. However, modules that perform changes should provide a `commit` argument to dictate whether you want to perform the change or not.
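As a hedged sketch, a change-making module honoring such a `commit` argument might look like this; the class name, method signature and return shape are assumptions, not an agreed API:

```python
# Illustrative only: a scoped module that performs a change when
# commit=True and acts as a dry-run otherwise.
class NapalmConfigure:
    def run_scoped(self, host, config, commit=False):
        diff = "+ interface Ethernet1"  # stand-in for a real device diff
        if not commit:
            # dry-run: report what would change without applying it
            return {"changed": False, "diff": diff}
        # commit=True: actually push the change to the device
        return {"changed": True, "diff": diff}
```

Defaulting `commit` to `False` makes the safe behavior (dry-run) the default one.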
All the modules have access to all the data collected and generated previously. More interestingly, due to the global/scoped nature of tasks, data availability follows these rules:
- All data provided by the inventory is available to everybody.
- Data gathered by pre tasks is available to all subsequent tasks.
- Data gathered by scoped tasks defined in the tasks section is available:
  - to subsequent tasks for that device.
  - to all tasks in the post section.
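A tiny illustrative sketch of these visibility rules; the data structures below are assumptions chosen for the example, not brigade internals:

```python
# Global data: inventory plus results of pre/post (global) tasks,
# visible to everybody.
global_data = {}
# Scoped data: per-device results, visible to that device's later
# tasks and to post tasks.
scoped_data = {"leaf00": {}, "leaf01": {}}

def run_pre_task():
    # a global task publishes data for all subsequent tasks
    global_data["topology"] = {"leaf00": "spine00", "leaf01": "spine00"}

def run_scoped_task(host):
    # a scoped task sees global_data plus its own device's earlier
    # results, and writes only into its own scope
    scoped_data[host]["os_version"] = "4.20.1F"

def run_post_task():
    # a post (global) task sees everything: global data and every
    # device's scoped results
    return {h: d.get("os_version") for h, d in scoped_data.items()}
```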
Example of data flow:
- The inventory could provide some basic information: devices, groups, parameters to connect to the devices, etc... Something simple, generic and very fast to run.
- A pre task could read specialized information to deploy an IP fabric or a WAN network or something else and make it consumable to everybody. Some sanity checks could be performed here as well. In addition, further data could be gathered from live devices and made available throughout the entirety of the runbook. For example, you could read a prescriptive topology file, compare it with LLDP information and compute a fabric configuration (interfaces, IPs, BGP sessions, etc). The idea is to generate/munge data and make it consumable.
- During the tasks execution phase, modules defined there could use all the data gathered so far. Individual devices could expand and collect more data if they need it, things like OS version, interface names, etc... Things that might be useful to pick the right configuration.
- Finally, a post task could process the results, log them somewhere, validate the deployment, etc...
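The pre-task step described above (correlating a prescriptive topology with LLDP data and computing fabric addressing) could look roughly like this. All names, the stubbed LLDP data and the addressing scheme are hypothetical:

```python
# Hypothetical pre-task sketch: correlate a prescriptive topology with
# (stubbed) LLDP neighbor data and derive per-link /31 addressing.
import ipaddress

# prescriptive topology: (device_a, port_a, device_b, port_b)
topology = [("leaf00", "Ethernet1", "spine00", "Ethernet1")]
# stand-in for live LLDP neighbors gathered from devices
lldp = {("leaf00", "Ethernet1"): ("spine00", "Ethernet1")}

def compute_fabric(topology, lldp, base="10.1.0.0/16"):
    links = []
    # carve the base range into point-to-point /31 subnets
    subnets = ipaddress.ip_network(base).subnets(new_prefix=31)
    for (a, a_if, b, b_if), subnet in zip(topology, subnets):
        # sanity check: actual cabling must match the prescriptive topology
        if lldp.get((a, a_if)) != (b, b_if):
            raise ValueError(f"cabling mismatch on {a} {a_if}")
        ip_a, ip_b = list(subnet)  # a /31 holds exactly two addresses
        links.append({a: (a_if, str(ip_a)), b: (b_if, str(ip_b))})
    return links
```

The output of such a task would then be consumable by every subsequent task in the runbook.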
Runbooks are `yaml` files that glue all the building blocks together. Runbooks are not code, and thus there are no `if` statements or `for` loops. Because all modules have access to all the data, there is no need for dynamic variables in the runbook. Modules can still register data, but that's just useful so other modules know where to find that data.
The only exception is the `when` clause. This accepts a string, and the task will only be executed if the string `eval`uates to `True`.
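A minimal sketch of how such a `when` clause could be evaluated, assuming the collected data is passed as the evaluation context (the function name and context shape are assumptions):

```python
# Illustrative only: evaluate a `when` string against collected data;
# a truthy result means the task runs.
def should_run(when, data):
    if when is None:
        return True  # no `when` clause: always run
    # restrict builtins so the expression only sees the task data
    return bool(eval(when, {"__builtins__": {}}, data))
```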
In addition, modules can also tweak their behavior via CLI options and/or the environment.
That's because you haven't used ansible ;) I am mostly sticking to ansible naming and using ansible as a reference to have a common language, but what we should discuss here is the "core" of brigade: how data is shared throughout the runbook, the `scoped` vs `global` types of tasks... those kinds of things. The rest are "modules" that are easily replaceable : )

Following the design I proposed, what you are describing could easily be implemented as a module to be run in the `pre` section. That's why I want to keep data out of the inventory (unless it's very generic, like username, password and stuff like that), so people can lay out data in their favorite way and potentially break the data down depending on the use case of the runbook.

That's fine, but right now, to have a "design" discussion it's easier to stick with known names ;)
As I mentioned, I am fine with changing names. I was just sticking to known names. Most of the time, when I say facts I really mean live state (that's actually what napalm retrieves: state).
That's fine again, that could be another module for `pre`, or an enhancement later on to provide the "caching" functionality to modules that might want to use it. That could be exposed via the class the modules have to inherit from. I know ansible has a similar capability and that people use it, although I have never used it myself; my operations are data driven, so caching is pointless for me. In any case, it's probably a nice feature to add later on, though probably not part of the MVP.

Why? It's just an extra line,

`class MyModule(BaseModule)`

compared to a standalone function, and the benefits are massive. For the core developers it means you have a simple way of discovering user modules: just crawl the path looking for subclasses of `BaseModule`. Without it, it's harder to discover them, or you might require the user to manually specify them. It's also great to verify whether the module supports the `scoped` and/or the `global` modes by having a `raise NotImplementedError` on the parent class. In addition, there is functionality for logging, for the caching you were proposing earlier, exposing data via class attributes, etc... I think the benefits outweigh having to add an extra line, because the alternative is having a function that needs to receive a million parameters and that might break existing user modules if those parameters change for some reason.

Not sure what you mean in that paragraph; those were supposed to be examples of modules we could write to begin with. They are not supposed to be part of the "core", but modules.