Containerizing Contrail Services

Changes in provisioning/deployment in the container world

There are two stages in the deployment of containerized Contrail services.

  • Build stage – packages are installed, reasonable default configurations are written, and a Docker image is produced. This happens during the artifact build on our servers, with the Docker image produced as the artifact. It is a ready-to-deploy container image that only requires customer-environment-specific configuration.
  • Provisioning stage – the artifact produced in the previous stage is deployed in the customer environment, custom configuration and other setup operations are applied, and the services are started on the appropriate nodes (cfgm nodes in the case of contrail-config).

NOTE: Only configuration is done during the provisioning stage; no packages are installed at this point. The images should therefore be specific to a Contrail version and an OpenStack build (e.g. contrail-config-liberty-3.1-50, contrail-config-mitaka-3.1.50).
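As a rough illustration of the build stage, the sketch below tags one image per Contrail version and OpenStack release. The build_image helper, the Dockerfile context path, and the version values are hypothetical assumptions; only the docker build -t invocation is standard Docker CLI.

```python
import subprocess

def build_image(role, openstack_release, contrail_version, context="."):
    """Build a role image tagged per Contrail version and OpenStack release.

    Hypothetical helper: assumes a Dockerfile for the role exists in `context`.
    """
    tag = "{}-{}-{}".format(role, openstack_release, contrail_version)
    # Standard Docker CLI: build the context and tag the resulting image.
    subprocess.check_call(["docker", "build", "-t", tag, context])
    return tag

# e.g. produces an image named contrail-config-liberty-3.1-50
build_image("contrail-config", "liberty", "3.1-50")
```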

Design approach

Here are the approaches we have taken:

  • There should be one container per Contrail role, e.g. individual containers for contrail-config, control, and analytics (and possibly others for db, rabbitmq, etc.) – these will be multi-process containers
  • Build and provisioning code for the containers should be as simple as possible
  • Build stage should handle all package installation and any other tasks that are common to every environment (like writing reasonable default configurations). This should happen on our servers as part of the Contrail build process
  • Provisioning stage should handle all configuration that varies between environments (like writing the various configuration files)
  • Provisioning should handle individual container configuration; any higher-level setup actions should be done by a separate orchestration tool (either one we build, existing tools, or a combination of the two)
    • Provisioning code is included in the container image and runs when the container starts – the configuration code should be idempotent to avoid issues during container restarts
    • All configuration inputs and any parameters needed to customize the system should be accepted as environment variables passed to the Docker container when it is initialized – so applying or changing the configuration is as easy as starting/restarting the container with the appropriate set of environment variables. The provisioning code runs when the container initializes/starts, parses these inputs, writes the configuration files, and starts the services using supervisord (see the entrypoint sketch after this list).
    • No external systems/scripts are involved in provisioning or configuring the system – the container is self-contained and simply accepts parameters and configuration inputs when started. The orchestration tool would need to handle orchestrating the various containers/services during deployment and possibly during maintenance, but individual containers handle all their own tasks.
  • Networking – I used the host network type, which reuses the host's network stack; this avoids creating another network layer for these services, so no extra network configuration is required.
  • Orchestration system – it has to handle two sets of tasks: orchestrating the containers themselves, and orchestrating the various tasks during provisioning/upgrade. We can write our own orchestration system (maybe a fab- or Ansible-based one) or use existing toolsets like Kubernetes or Mesos – in either case some code is required to orchestrate the provisioning/upgrade tasks that are specific to Contrail.
  • The need to connect to (log in to) a container should be minimal – all common operations should be possible without logging into the container.
    • All logs and crash dumps should be available on the host machine itself
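A minimal sketch of the kind of entrypoint described above, assuming a hypothetical CONFIG_API_PORT environment variable and a hypothetical /etc/contrail/contrail-api.conf path; only the env-var-driven, idempotent write-config-then-start-supervisord pattern is taken from the design, not the concrete names. The docker run flags in the comment (--net host, -v, -e) are standard Docker CLI and match the host-networking and host-visible-logs points above.

```python
#!/usr/bin/env python
# Hypothetical container entrypoint. A container built this way might be
# started with something like:
#   docker run --net host \
#       -v /var/log/contrail:/var/log/contrail \
#       -e CONFIG_API_PORT=8082 \
#       contrail-config-liberty-3.1-50
import os

CONF_PATH = "/etc/contrail/contrail-api.conf"  # assumed path

def write_config():
    """Render configuration purely from environment variables.

    Rewriting the file from scratch on every start keeps this idempotent:
    restarting the container with the same environment yields the same file.
    """
    port = os.environ.get("CONFIG_API_PORT", "8082")  # hypothetical input
    with open(CONF_PATH, "w") as f:
        f.write("[DEFAULTS]\n")
        f.write("listen_port = {}\n".format(port))

if __name__ == "__main__":
    write_config()
    # Hand over PID 1 to supervisord in foreground mode; it then manages
    # the individual contrail processes inside this multi-process container.
    os.execvp("supervisord", ["supervisord", "-n"])
```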

Since this approach produces multi-process containers, detecting and handling single-process failures needs more complex approaches; we may need to research tools that provide such facilities. Also, while keeping logins to containers minimal, a process that fails repeatedly (supervisord inside the container handles most process failures) would require restarting the whole container, which may take multiple processes down.
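One possible way to detect single-process failures from the host without logging into the container is supervisord's XML-RPC interface. This is a sketch under assumptions: it presumes the image enables supervisord's inet_http_server on port 9001, which the design above does not specify; getAllProcessInfo is part of supervisord's documented XML-RPC API.

```python
from xmlrpc.client import ServerProxy

# With host networking, supervisord's HTTP interface (if enabled in the
# image's supervisord.conf) is reachable directly from the host.
server = ServerProxy("http://localhost:9001/RPC2")  # assumed port

def failed_processes():
    """Return the names of processes supervisord has given up restarting."""
    return [
        p["name"]
        for p in server.supervisor.getAllProcessInfo()
        if p["statename"] == "FATAL"
    ]

failed = failed_processes()
if failed:
    # An external watchdog/orchestrator could restart the whole container here.
    print("failed:", failed)
```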

Alternative approaches

  • Make single-process containers, abstracting that complexity into the orchestration tool
  • The orchestration tool composes those individual containers into logical groups corresponding to Contrail roles (see the sketch below).
  • Advantages of this approach are:
    • Single-process containers are easy to manage; they can be managed the way we manage a process/service.
    • A process failure causes the entire container to fail, which makes the Docker service restart the container automatically
    • Easy to scale individual processes as required
    • Various orchestration operations are easy – e.g. upgrade, scale, and operations like rolling restarts
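To make the alternative concrete, here is a sketch of how an orchestration tool might compose single-process containers into a role. The role-to-process mapping and the image naming scheme are illustrative assumptions; only docker run with --net host and --restart is standard Docker CLI.

```python
import subprocess

# Hypothetical mapping of a Contrail role to its single-process containers.
ROLES = {
    "contrail-config": [
        "contrail-api",
        "contrail-schema",
        "contrail-svc-monitor",
    ],
}

def start_role(role, tag="liberty-3.1-50"):
    """Start one container per process belonging to the role.

    --restart=always makes Docker itself restart a container whose single
    process dies, which is the failure-handling advantage noted above.
    """
    for process in ROLES[role]:
        image = "{}-{}".format(process, tag)  # assumed image naming scheme
        subprocess.check_call([
            "docker", "run", "-d", "--net", "host",
            "--restart", "always", "--name", process, image,
        ])

start_role("contrail-config")
```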