Although this might look similar to other configuration management systems, its philosophy is a bit different. Some of the key aspects of its philosophy are:
- Data is key. Data should be easy to gather, process and make available to potential consumers.
- YAML is not a programming language; it's just a means to define simple data and glue different building blocks together.
- It's easier to write, troubleshoot and debug simple Python than complex YAML.
- It's easier to write, troubleshoot and debug simple Python than complex Jinja2 templates.
The inventory is just a replaceable Python script which adheres to some standards. It's supposed to be simple and provide generic information like groups/roles, devices and a mapping between the two. It also provides connectivity information (hostname, username, password, OS, etc.). More specific data is gathered later on by modules (see the Modules section).
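As a rough illustration of the kind of script described above, here is a minimal inventory sketch. The class and method names below are illustrative assumptions, not Brigade's actual API; a real inventory would likely read its data from a file or an external system instead of hardcoding it.

```python
# Hypothetical minimal inventory: groups, devices, a mapping between the
# two, and per-device connectivity information. Names are assumptions.

class SimpleInventory:
    """Maps groups to devices and stores per-device connection details."""

    def __init__(self):
        # Static data for illustration; a real inventory might query a CMDB.
        self._hosts = {
            "spine00": {"group": "spine", "hostname": "10.0.0.1",
                        "username": "admin", "password": "secret", "os": "eos"},
            "leaf00": {"group": "leaf", "hostname": "10.0.0.10",
                       "username": "admin", "password": "secret", "os": "junos"},
        }

    def groups(self):
        return sorted({h["group"] for h in self._hosts.values()})

    def hosts_in_group(self, group):
        return [name for name, h in self._hosts.items() if h["group"] == group]

    def host(self, name):
        return self._hosts[name]
```

The point is that the inventory stays generic and fast; anything device-specific beyond connectivity is left to modules.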
Modules provide reusable functionality and are the basic building blocks.
To write modules, a base class will be provided with some basic functionality. The module can then implement either of the following two methods, `run_scoped` and `run_global`, to provide functionality.
Modules can run in two modes:
- global - Tasks run in global mode are run once only and data is made available to everybody. Tasks in the `pre` and `post` sections are global. A module, to provide `global` functionality, has to implement the `run_global` method.
- scoped - Tasks run in scoped mode are run per device and data is only made available to other tasks within the same scope or to other global tasks. Tasks defined in the `tasks` section are scoped. A module, to provide `scoped` functionality, has to implement the `run_scoped` method.
A module can combine `run_global` and `run_scoped` to easily provide both functionalities. For example:
```python
class MyModule(BaseClassYetToProvide):
    def run_scoped(self, host):
        do_something(host)

    def run_global(self):
        for group in self.inventory.groups():
            for host in self.inventory.hosts_in_group(group):
                self.run_scoped(host)
```
A module doesn't need to overload both methods. It might make sense for some modules to only run in `scoped` mode (for example, a module that loads configuration on a device) or in `global` mode (a module that processes data).
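To make the distinction concrete, here is a hedged sketch of one module per mode. The class names are hypothetical, and the real base class is not shown since it hasn't been specified yet.

```python
# Hypothetical examples; class names and data shapes are assumptions.

class LoadConfig:
    """Scoped-only: loading configuration only makes sense per device,
    so only run_scoped is implemented."""

    def run_scoped(self, host):
        # A real module would push configuration to the device here.
        return "configured {}".format(host)


class SummarizeFacts:
    """Global-only: post-processing data gathered by earlier tasks,
    so only run_global is implemented."""

    def __init__(self, facts):
        # e.g. per-host interface lists registered by previous scoped tasks
        self.facts = facts

    def run_global(self):
        # Reduce per-host data into a single report.
        return {host: len(ifaces) for host, ifaces in self.facts.items()}
```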
- facts_yaml - Reads a directory structure with host files and group files and provides data. Provides similar functionality to other configuration management systems.
- napalm_facts - Gather facts from live devices using napalm getters.
- network_services - Based on a directory structure `$service/$vendor/template.j2`, defines a set of services that can be mapped to devices and groups.
- napalm_configure - Provides configuration capabilities based on napalm.
- ip_fabric - Reads a file defining a topology, correlates the definition with data from the inventory and generates all the necessary data for the deployment.
- template - Provides generic jinja2 functionality.
- napalm_validate - Provides the validate functionality of napalm.
Brigade doesn't have the notion of `commit` or `dry-run`. However, modules that perform changes should provide a `commit` argument to dictate whether you want to perform the change or not.
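A sketch of that convention might look like the following. The function name and the device-state stand-in are illustrative assumptions, not napalm's or Brigade's actual API; the point is only that `commit=False` behaves like a dry run that returns the diff without applying it.

```python
# Hypothetical change-making module following the commit-argument convention.

def configure_device(host, candidate_config, commit=False):
    """Always compute the diff; apply the change only when commit is True."""
    # Stand-in for fetching the device's running configuration.
    running_config = "hostname old-{}".format(host)
    diff = "- {}\n+ {}".format(running_config, candidate_config)
    if commit:
        # A real module would push candidate_config to the device here.
        applied = True
    else:
        applied = False
    return {"diff": diff, "applied": applied}
```

Callers can inspect the diff from a dry run first, then re-run with `commit=True` once it looks right.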
All the modules have access to all the data collected or generated previously. More interestingly, due to the global/scoped nature of tasks, data availability follows these rules:
- All data provided by the inventory is available to everybody.
- Data gathered by `pre` tasks is available to all subsequent tasks.
- Data gathered by scoped tasks defined in the `tasks` section is available:
  - To the subsequent tasks for that device.
  - To all the tasks in the `post` section.
Example of data flow:
- The inventory could provide some basic information: devices, groups, parameters to connect to the devices, etc. Something simple, generic and very fast to run.
- A pre task could read specialized information to deploy an IP fabric or a WAN network or something else and make it consumable to everybody. Some sanity checks could be performed here as well. In addition, further data could be gathered from live devices and made available throughout the entirety of the runbook. For example, you could read a prescriptive topology file, compare it with LLDP information and compute a fabric configuration (interfaces, IPs, BGP sessions, etc). The idea is to generate/munge data and make it consumable.
- During the tasks execution phase, modules defined there could use all the data gathered so far. Individual devices could expand and collect more data if they need it; things like OS version, interface names, etc. Things that might be useful to pick the right configuration.
- Finally, a post task could process the results, log them somewhere, validate the deployment, etc.
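The data flow above can be sketched with plain dictionaries. The phase comments mirror the `pre`/`tasks`/`post` sections; all the variable names and data shapes are illustrative assumptions.

```python
# Hypothetical sketch of the pre -> tasks -> post data flow.

data = {}

# pre (global): munge topology data once and make it available to everybody.
data["fabric"] = {"leaf00": {"uplink": "spine00"}}

# tasks (scoped): per-device tasks see the global data plus their own scope,
# and can gather extra per-device details (OS version, interface names, ...).
scoped = {}
for host in ["leaf00"]:
    scoped[host] = {
        "os_version": "4.20",  # would come from the live device
        "uplink": data["fabric"][host]["uplink"],
    }

# post (global): can read everything gathered by the pre and scoped tasks.
report = {host: facts["uplink"] for host, facts in scoped.items()}
```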
Runbooks are `yaml` files that glue all the building blocks together. Runbooks are not code and thus there are no `if` statements or `for` loops. Because all modules have access to all the data, there is no need for dynamic variables in the runbook. Modules can still register data but that's just useful so other methods know where to find that data.
The only exception is the `when` clause. This accepts a string and the task will only be executed if the string `eval`s to `True`.
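A runbook might look something like the following. This is a hypothetical sketch: the section names (`pre`/`tasks`/`post`) come from the text above, but the task/module option syntax and the variables available to `when` are assumptions.

```yaml
# Hypothetical runbook; option names and the `when` expression are assumptions.
pre:
  - name: gather topology data
    module: ip_fabric
    topology_file: topology.yaml

tasks:
  - name: render configuration
    module: network_services

  - name: push configuration
    module: napalm_configure
    commit: true
    when: "host.os == 'eos'"   # task runs only if this string evals to True

post:
  - name: verify deployment
    module: napalm_validate
```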
In addition, modules can also tweak their behavior via CLI options and/or the environment.
Is the `run_global` a mock example or would `run_global` generally be run in serial?