Using Ansible to automate the installation of Web servers on Incus system containers
tags
m1, Devnet, incus, linux, lab15

DevNet Lab 15 -- Using Ansible to automate the installation of Web servers on Incus system containers

[toc]


Background / Scenario

In this lab, you will first configure Ansible to communicate with a virtual machine hosting web servers in Incus system containers. You will create playbooks that automate the process of installing Incus on the Web server VM and build a dynamic inventory with a Python script. You will also create a custom playbook that installs Apache with specific instructions on each container.

[Figure: lab topology (Devnet VM and Web server VM hosting Incus containers)]

Part 1: Launch the Web server VM

  1. Copy the master server image

    At the Hypervisor console shell, we make a copy of the server image files from the $HOME/masters directory and rename them.

    cp $HOME/masters/debian-testing-amd64.{qcow2,qcow2_OVMF_VARS.fd} .
    '/home/etudianttest/masters/debian-testing-amd64.qcow2' -> './debian-testing-amd64.qcow2'
    '/home/etudianttest/masters/debian-testing-amd64.qcow2_OVMF_VARS.fd' -> './debian-testing-amd64.qcow2_OVMF_VARS.fd'
    
    rename.ul debian-testing-amd64 webserver-host debian-testing-amd64.qcow2*
    ls webserver-host*
    webserver-host.qcow2  webserver-host.qcow2_OVMF_VARS.fd
    
  2. Launch the Web server VM.

    Do not forget to set $tapnum to the tap interface number allocated to your VM.

    $HOME/masters/scripts/ovs-startup.sh webserver-host.qcow2 4096 $tapnum
    ~> Virtual machine filename   : webserver-host.qcow2
    ~> RAM size                   : 4096MB
    ~> SPICE VDI port number      : 59XX
    ~> telnet console port number : 23XX
    ~> MAC address                : b8:ad:ca:fe:00:XX
    ~> Switch port interface      : tapXX, access mode
    ~> IPv6 LL address            : fe80::baad:caff:fefe:XX%vlanYYY
    
  3. Open an SSH connection to the Web server VM.

    Once again, do not forget to change the tap interface number at the right end of the link-local IPv6 address.

    ssh etu@fe80::baad:caff:fefe:XX%vlanYYY

Part 2: Configure Ansible on the Devnet VM

The web server hosting VM is now ready for Ansible automation. First, we need to configure Ansible and check that we have access to the web server VM from the Devnet VM via SSH.

Step 1: Create the Ansible directory and configuration file

  1. Create the ~/labs/lab15 directory, for example, and navigate to it

    mkdir -p ~/labs/lab15 && cd ~/labs/lab15
  2. Check that the ansible package is installed

    If the ansible package is not already installed on your Devnet VM, now is the time to install it.

    apt show ansible | head -n 10
    Package: ansible
    Version: 7.7.0+dfsg-3
    Priority: optional
    Section: universe/admin
    Origin: Ubuntu
    Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
    Original-Maintainer: Lee Garrett <debian@rocketjump.eu>
    Bugs: https://bugs.launchpad.net/ubuntu/+filebug
    Installed-Size: 263 MB
    Depends: ansible-core (>= 2.11.5-1~), python3:any, openssh-client | python3-paramiko (>= 2.6.0), python3-distutils, python3-dnspython, python3-httplib2, python3-jinja2, python3-netaddr, python3-yaml
    
  3. Create a new ansible.cfg file in the lab15 directory from the shell prompt

    cat << 'EOF' > ansible.cfg
    # config file for Lab 15 Web Servers management
    [defaults]
    # Use inventory/ folder files as source
    inventory=inventory/
    host_key_checking = False # Don't worry about RSA Fingerprints
    retry_files_enabled = False # Do not create them
    deprecation_warnings = False # Do not show warnings
    interpreter_python = /usr/bin/python3
    [inventory]
    enable_plugins = auto, host_list, yaml, ini, toml, script
    [persistent_connection]
    command_timeout=100
    connect_timeout=100
    connect_retry_timeout=100
    ssh_type = libssh
    EOF
    
  4. Create the inventory directory

    mkdir ~/labs/lab15/inventory

Step 2: Check SSH access from Devnet VM to Web server VM

We start with a plain SSH connection test from the shell before setting up the Ansible configuration.

One more time, be sure to change the tap interface number to match your resource allocation.

ssh etu@fe80::baad:caff:fefe:XXX%enp0s1
The authenticity of host 'fe80::baad:caff:fefe:XXX%enp0s1 (fe80::baad:caff:fefe:XXX%enp0s1)' can't     be established.
ED25519 key fingerprint is SHA256:yFLaZk+OfY7z7bHyHPXgjowRS4KMHjfoMQxracRdG9M.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
Linux webserver-host 6.6.13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.13-1 (2024-01-20) x86_64

Step 3: Create a new vault file

Create a new vault file called lab15_passwd.yml and enter the vault password that will protect all the user passwords stored in it.

ansible-vault create $HOME/lab15_passwd.yml
New Vault password:
Confirm New Vault password:

This opens the default editor defined by the $EDITOR environment variable. There we define the variable that holds the password of the Web server VM user account.

webserver_user_pass: XXXXXXXXXX
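
If you need to check this entry later, the decrypted content of the vault can be displayed at any time with:

ansible-vault view $HOME/lab15_passwd.yml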

Step 4: Create a new inventory file

Create the inventory file inventory/hosts.yml with the IP address of your Web server VM.

cat << 'EOF' > inventory/hosts.yml
---
vms:
  hosts:
    webserver:
      ansible_host: 'fe80::baad:caff:fefe:XXX%enp0s1'
  vars:
    ansible_ssh_user: etu
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_become_pass: '{{ webserver_user_pass }}'

all:
  children:
    vms:
    containers:
EOF

Step 5: Verify Ansible communication with the Web server VM

Now we are able to use the Ansible ping module to communicate with the webserver entry defined in the inventory file.

ansible webserver -m ping --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
Vault password:
webserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Since the Ansible ping is successful, we can go on with the container management setup on the Web server VM.

Part 3: Initialize Incus container management with Ansible

In order to be able to launch system containers and configure Web services in these containers, we first must initialize the Incus manager with an Ansible playbook.

Step 1: Create the incus_init.yml playbook

Create the incus_init.yml file and add the following content. Make sure you use proper YAML indentation: every space and dash is significant, and you may lose some formatting if you copy and paste.

cat << 'EOF' > incus_init.yml
---
- name: INCUS INSTALLATION AND INITIALIZATION
  hosts: webserver
  tasks:
    - name: INSTALL INCUS PACKAGE
      ansible.builtin.apt:
        name: incus
        state: present
        update_cache: true
      become: true

    - name: ADD USER TO INCUS SYSTEM GROUPS
      ansible.builtin.user:
        name: '{{ ansible_ssh_user }}'
        groups:
          - incus
          - incus-admin
        append: true
      become: true

    - name: RESET SSH CONNECTION TO ALLOW USER CHANGES
      ansible.builtin.meta:
        reset_connection

    - name: INITIALIZE INCUS
      ansible.builtin.shell: |
        set -o pipefail
        cat << EOT | incus admin init --preseed
        config:
          core.https_address: '[::]:8443'
        networks: []
        storage_pools:
        - config: {}
          description: ""
          name: default
          driver: dir
        profiles:
        - config: {}
          description: ""
          devices:
            eth0:
              name: eth0
              nictype: macvlan
              parent: enp0s1
              type: nic
            root:
              path: /
              pool: default
              type: disk
          name: default
        projects: []
        cluster: null
        EOT
        touch $HOME/incus_init_done
      args:
        chdir: $HOME
        creates: incus_init_done
EOF

The incus_init.yml playbook contains four tasks:

  1. Install the incus package if necessary
  2. Add the normal user etu to the incus and incus-admin system groups; the user name is taken from the {{ ansible_ssh_user }} variable
  3. Reset the SSH connection between the Devnet VM and the webserver VM so that the new group assignments take effect
  4. Initialize the Incus container manager from a preseeded YAML setup

The incus initialization instructions fall into two main categories:

  1. Storage: a default storage pool is defined with the dir driver, here backed by a btrfs subvolume on the Web server VM.
  2. Networking: we choose macvlan to connect any number of containers. This means that the containers are connected to the exact same VLAN as the Web server VM, and their IPv4 and IPv6 addressing follows the same rules. In our context, both the Devnet and Web server VMs are connected to the hypervisor default VLAN, with DHCP addressing for IPv4 and SLAAC for IPv6. The verification commands below show the resulting profile.
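
Once the playbook in Step 2 has been applied, you can double-check the preseed result directly on the Web server VM. These commands are only a manual verification and are not part of the lab playbooks; the profile should show the eth0 macvlan NIC attached to enp0s1 and the root disk on the default pool.

incus profile show default
incus storage show default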

Step 2: Run the incus_init.yml playbook

ansible-playbook incus_init.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
Vault password:
PLAY [webserver] **************************************************

TASK [Gathering Facts] ********************************************
ok: [webserver]

TASK [INSTALL INCUS PACKAGE] **************************************
ok: [webserver]

TASK [ADD USER TO INCUS SYSTEM GROUPS] ****************************
ok: [webserver]

TASK [RESET SSH CONNECTION TO ALLOW USER CHANGES] *****************

TASK [INITIALIZE INCUS] *******************************************
changed: [webserver]

PLAY RECAP ********************************************************
webserver  : ok=4    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Part 4: Instantiate containers with Ansible

In this part, we start managing web services on demand with system container instantiation based on an Ansible playbook.

Step 1: Create a lab inventory template

In Part 2 Step 4, we created the inventory/hosts.yml file that defines all the parameters needed to run Ansible playbooks on the web server hosting VM.

Now, we have to create a new inventory file named inventory/lab.yml which defines the parameters of the system containers. The purpose is to be able to run Ansible playbooks inside these containers.

cat << 'EOF' > inventory/lab.yml
---
containers:
  hosts:
    web[01:04]:
    
  vars:
    ansible_ssh_user: webuser
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_become_pass: '{{ webuser_pass }}'
EOF

Note: This inventory file is incomplete, as it does not define the ansible_host variable for each container. These addresses will be generated dynamically in Part 5.

Step 2: Add a new entry in Ansible vault for container access

In the previous step, we defined a user named webuser whose password is stored in the webuser_pass variable.

We must add the corresponding entry to the Ansible vault file $HOME/lab15_passwd.yml.

ansible-vault edit $HOME/lab15_passwd.yml
Vault password:

There we add a variable that holds the password for the container user accounts.

webuser_pass: XXXXXXXXXX

Step 3: Create an Ansible playbook to launch and configure access to containers

cat << 'EOF' > incus_launch.yml
---
- name: LAUNCH INCUS CONTAINERS, SETUP USER ACCOUNT AND SSH SERVICE
  hosts: webserver
  tasks:
    - name: LAUNCH INCUS CONTAINERS
      ansible.builtin.shell: |
        set -o pipefail
        if  ! incus ls -c n | grep -q "{{ item }}"
        then
          incus launch images:debian/trixie "{{ item }}"
          touch $HOME/"{{ item }}_launched"
        fi
      args:
        chdir: $HOME
        creates: "{{ item }}_launched"
      with_inventory_hostnames:
        - all:!webserver

    - name: SETUP USER ACCOUNT AND SSH SERVICE
      ansible.builtin.shell: |
        set -o pipefail
        incus exec "{{ item }}" -- bash -c "if ! grep -q webuser /etc/passwd; then adduser --quiet --gecos \"\" --disabled-password webuser; fi"
        incus exec "{{ item }}" -- bash -c "chpasswd <<<\"webuser:{{ webuser_pass }}\""
        incus exec "{{ item }}" -- bash -c "if ! id webuser | grep -qo sudo; then adduser --quiet webuser sudo; fi"
        incus exec "{{ item }}" -- apt update
        incus exec "{{ item }}" -- apt install -y openssh-server python3 python3-apt
        incus exec "{{ item }}" -- apt clean
        touch $HOME/"{{ item }}_configured"
      args:
        chdir: $HOME
        creates: "{{ item }}_configured"
      with_inventory_hostnames:
        - all:!webserver
EOF

This playbook has two different tasks.

  1. We first create and launch the containers, but only if they do not already exist. We use the shell Ansible module to run commands on the Web server VM.
  2. Once the containers are running, we create a new user account and install the openssh-server package in each container.

For both of these tasks, we use the creates argument to ensure the commands are run only once. When all the shell commands have been executed, we finish with a touch command that creates an empty file; this file then serves as a marker showing that the job has already been done.
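
As a minimal sketch of this pattern (the task and the long-running command are hypothetical, not part of the lab playbooks), the shell module only runs when the marker file named in creates is absent:

- name: RUN A JOB EXACTLY ONCE
  ansible.builtin.shell: |
    /usr/local/bin/some_long_job.sh   # placeholder for any command that must not run twice
    touch $HOME/job_done
  args:
    chdir: $HOME
    creates: job_done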

Step 4: Run the incus_launch.yml playbook

ansible-playbook incus_launch.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
Vault password:
PLAY [webserver] *************************************************************************

TASK [Gathering Facts] *******************************************************************
ok: [webserver]

TASK [LAUNCH INCUS CONTAINERS] ***********************************************************
ok: [webserver] => (item=web01)
ok: [webserver] => (item=web02)
ok: [webserver] => (item=web03)
ok: [webserver] => (item=web04)

TASK [SETUP USER ACCOUNT AND SSH SERVICE] ************************************************
ok: [webserver] => (item=web01)
ok: [webserver] => (item=web02)
ok: [webserver] => (item=web03)
ok: [webserver] => (item=web04)

PLAY RECAP *******************************************************************************
webserver  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Part 5: Complete a dynamic inventory

Now that the containers are launched, it is time to get their network addresses to build a new inventory file which will allow Ansible to run playbooks in each of these containers.

Here we switch to Python development to build the new YAML inventory file, based on the information provided by Incus on the Web server VM.

Step 1: Fetch containers configuration

Here is a short new playbook named incus_fetch.yml which copies the container configuration from the Web server VM to the Devnet VM.

cat << 'EOF' > incus_fetch.yml
---
- name: BUILD CONTAINERS DYNAMIC INVENTORY
  hosts: webserver
  tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
      ansible.builtin.shell: incus --format yaml ls > container_config.yml
      args:
        chdir: $HOME
        creates: container_config.yml

    - name: FETCH INCUS CONTAINERS CONFIGURATION
      ansible.builtin.fetch:
        src: $HOME/container_config.yml
        dest: container_config.yml
        flat: true
EOF

When we run this playbook, we get a copy of the Incus containers configuration from the Web server VM.

ansible-playbook incus_fetch.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
Vault password:
PLAY [webserver] ******************************************************

TASK [Gathering Facts] ************************************************
ok: [webserver]

TASK [GET INCUS CONTAINERS CONFIGURATION] *****************************
ok: [webserver]

TASK [FETCH INCUS CONTAINERS CONFIGURATION] ***************************
[WARNING]: sftp transfer mechanism failed on [fe80::baad:caff:fefe:XXX%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer mechanism failed on [fe80::baad:caff:fefe:XXX%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
changed: [webserver]

PLAY RECAP ************************************************************
webserver  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

From the Devnet VM shell, we can check the presence of the container_config.yml file.

ls -lh container_config.yml
-rw-rw-r-- 1 etu etu 18K févr. 11 09:23 container_config.yml

Step 2: Build a Python script for containers inventory

The main purpose here is to build a dynamic inventory with the containers' actual network addresses. With our macvlan network setup and randomly generated MAC addresses, the containers are addressed completely dynamically.

This is why we need to extract network addresses from the YAML configuration file and build a new inventory file.

  1. First attempt: parse the YAML file produced by the incus ls command.

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    
    import yaml
    
    with open('container_config.yml','r') as yaml_file:
        containers = yaml.safe_load(yaml_file)
    
    # look for the container 'name' key and then the network 'addresses' for each
    # container
    for container in containers:
        print(f"Container: {container['name']}")
        for addresses in container['state']['network']['eth0']['addresses']:
            print(f"  Addresses: {addresses}")
    
    /bin/python3 /home/etu/labs/lab15/build_inventory.py
    Container: web01
      Addresses: {'family': 'inet', 'address': '198.18.28.122', 'netmask': '23', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'link'}
    Container: web02
      Addresses: {'family': 'inet', 'address': '198.18.28.70', 'netmask': '23', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'link'}
    Container: web03
      Addresses: {'family': 'inet', 'address': '198.18.28.69', 'netmask': '23', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe40:705', 'netmask': '64', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe40:705', 'netmask': '64', 'scope': 'link'}
    Container: web04
      Addresses: {'family': 'inet', 'address': '198.18.28.193', 'netmask': '23', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'global'}
      Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'link'}
    
  2. Second attempt: format the output as a YAML inventory

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    
    import yaml
    
    with open('container_config.yml','r') as yaml_file:
        containers = yaml.safe_load(yaml_file)
    
    print('containers:')
    print('  hosts:')
    for container in containers:
        print(f"    {container['name']}:")
        for addresses in container['state']['network']['eth0']['addresses']:
            # print the IPv6 link-local address
            if addresses['family'] == 'inet6' and addresses['scope'] == 'link':
                print(f"      ansible_host: '{addresses['address']}%enp0s1'")
    
    /bin/python3 /home/etu/labs/lab15/build_inventory.py
    containers:
      hosts:
        web01:
          ansible_host: 'fe80::216:3eff:fea4:95b7%enp0s1'
        web02:
          ansible_host: 'fe80::216:3eff:fe6e:7a91%enp0s1'
        web03:
          ansible_host: 'fe80::216:3eff:fe40:705%enp0s1'
        web04:
          ansible_host: 'fe80::216:3eff:fe1b:b041%enp0s1'
    
  3. Run the Python script from the incus_fetch.yml Ansible playbook

    Here's a new version of the playbook with an additional task that creates the containers.yml file in the inventory/ directory.

    cat << 'EOF' > incus_fetch.yml
    ---
    - name: BUILD CONTAINERS DYNAMIC INVENTORY
      hosts: webserver
      tasks:
        - name: GET INCUS CONTAINERS CONFIGURATION
          ansible.builtin.shell: incus --format yaml ls > container_config.yml
          args:
            chdir: $HOME
            creates: container_config.yml

        - name: FETCH INCUS CONTAINERS CONFIGURATION
          ansible.builtin.fetch:
            src: $HOME/container_config.yml
            dest: container_config.yml
            flat: true

        - name: ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
          ansible.builtin.script:
            build_inventory.py > inventory/containers.yml
          delegate_to: localhost
    EOF
    

    When we run this new version, the containers.yml file is added to the inventory/ directory.

    ansible-playbook incus_fetch.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    PLAY [webserver] ******************************************************
    
    TASK [Gathering Facts] ************************************************
    ok: [webserver]
    
    TASK [GET INCUS CONTAINERS CONFIGURATION] *****************************
    ok: [webserver]
    
    TASK [FETCH INCUS CONTAINERS CONFIGURATION] ***************************
    ok: [webserver]
    
    TASK [ADD INCUS CONTAINERS ADDRESSES TO INVENTORY] ********************
    changed: [webserver -> localhost]
    
    PLAY RECAP ************************************************************
    webserver  : ok=4    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    
    ls -lh inventory/
    total 12K
    -rw-rw-r-- 1 etu etu 244 févr. 11 10:13 containers.yml
    -rw-rw-r-- 1 etu etu 265 févr.  9 17:05 hosts.yml
    -rw-rw-r-- 1 etu etu 173 févr. 10 08:32 lab.yml

Step 3: Check Ansible inventory

We are now able to run the ansible-inventory command to check that the Web server VM and its containers are properly addressed.

ansible-inventory --yaml --list
all:
  children:
    containers:
      hosts:
        web01:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fea4:95b7%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web02:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe6e:7a91%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web03:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe40:705%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web04:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe1b:b041%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
    ungrouped: {}
    vms:
      hosts:
        webserver:
          ansible_become_pass: '{{ webserver_user_pass }}'
          ansible_host: fe80::baad:caff:fefe:XXX%enp0s1
          ansible_ssh_pass: '{{ webserver_user_pass }}'
          ansible_ssh_user: etu

Step 4: Check Ansible SSH access to the containers

We are now also able to run the ansible command with its ping module to check SSH access to all the containers.

ansible containers -m ping --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
Vault password:
web02 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web04 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web03 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Another way to check SSH access to the containers is to use the command module instead of ping.

ansible containers -m command -a "/bin/echo Hello, World!" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
Vault password:
web01 | CHANGED | rc=0 >>
Hello, World!
web04 | CHANGED | rc=0 >>
Hello, World!
web02 | CHANGED | rc=0 >>
Hello, World!
web03 | CHANGED | rc=0 >>
Hello, World!

Part 6: Create an Ansible playbook to automate Web service installation

In this part, you will create a playbook that automates the installation of the Apache web server software.

Step 1: Create the install_apache.yml playbook

cat << 'EOF' > install_apache.yml
---
- name: INSTALL APACHE2, ENABLE MOD_REWRITE, SET LISTEN PORT 8081, AND CHECK HTTP STATUS CODE
  hosts: containers
  become: true
  tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
      ansible.builtin.apt:
        update_cache: true
        upgrade: 'full'

    - name: INSTALL APACHE2
      ansible.builtin.apt:
        name: apache2
        state: present

    - name: ENABLE APACHE2 MOD_REWRITE MODULE
      community.general.apache2_module:
        name: rewrite
        state: present
      notify: RESTART APACHE2

    - name: CLEAN UNWANTED APT OLDER STUFF
      ansible.builtin.apt:
        autoclean: true
        autoremove: true

  handlers:
    - name: RESTART APACHE2
      ansible.builtin.service:
        name: apache2
        state: restarted
EOF

Explanation of some of the significant lines in your playbook:

  • hosts: containers - This references the containers group defined in your inventory. The playbook will be run on all the hosts in this group.
  • become: true - The become keyword activates sudo command execution, which will allow tasks such as installing applications.
  • apt: - The apt module is used to manage packages and application installations on Linux.
  • handlers: - Handlers are similar to tasks but are not run automatically; they are triggered by a notify. Notice that the ENABLE APACHE2 MOD_REWRITE MODULE task notifies the RESTART APACHE2 handler.

Step 2: Run the install_apache.yml playbook

ansible-playbook install_apache.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
Vault password:
PLAY [containers] *****************************************************

TASK [Gathering Facts] ************************************************
ok: [web01]
ok: [web03]
ok: [web02]
ok: [web04]

TASK [UPDATE AND UPGRADE APT PACKAGES] ********************************
changed: [web02]
changed: [web04]
changed: [web03]
changed: [web01]

TASK [INSTALL APACHE2] ************************************************
changed: [web01]
changed: [web03]
changed: [web02]
changed: [web04]

TASK [ENABLE APACHE2 MOD_REWRITE MODULE] ******************************
changed: [web02]
changed: [web03]
changed: [web01]
changed: [web04]

TASK [CLEAN UNWANTED APT OLDER STUFF] *************************************
ok: [web02]
ok: [web01]
ok: [web04]
ok: [web03]

RUNNING HANDLER [RESTART APACHE2] *************************************
changed: [web04]
changed: [web03]
changed: [web02]
changed: [web01]

PLAY RECAP ************************************************************
web01      : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
web02      : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
web03      : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
web04      : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Compared to playbooks in previous tasks, we can see that each task is run on each of the four Incus system containers we have added to the inventory.
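
As an optional extra check that the rewrite module is really loaded, an ad-hoc command can list the active Apache modules in each container (apache2ctl ships with the Debian apache2 package; we run it with privilege escalation):

ansible containers -m command -a "apache2ctl -M" --become --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml

The output should include a rewrite_module (shared) line for each container.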

Step 3: Add a task to verify Apache2 service status

We now want to verify that the apache2 web server is active. Therefore, we add tasks to the install_apache.yml playbook.

cat << 'EOF' > install_apache.yml
---
- hosts: containers
  become: true
  tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
      ansible.builtin.apt:
        update_cache: true
        upgrade: 'full'

    - name: INSTALL APACHE2
      ansible.builtin.apt:
        name: apache2
        state: present

    - name: ENABLE APACHE2 MOD_REWRITE MODULE
      community.general.apache2_module:
        name: rewrite
        state: present
      notify: RESTART APACHE2

    - name: CLEAN UNWANTED OLDER STUFF
      ansible.builtin.apt:
        autoclean: true
        autoremove: true

    - name: GET APACHE2 SERVICE STATUS
      ansible.builtin.systemd:
        name: apache2
      register: apache2_status
    
    - name: PRINT APACHE2 SERVICE STATUS
      ansible.builtin.debug:
        var: apache2_status.status.ActiveState

  handlers:
    - name: RESTART APACHE2
      ansible.builtin.service:
        name: apache2
        state: restarted
EOF

Here we introduce the systemd module and the ability to debug within a playbook by displaying the value of a variable after registering the status of a systemd service.

When running the playbook, the relevant part of the output is:

TASK [GET APACHE2 SERVICE STATUS] *************************************
ok: [web02]
ok: [web01]
ok: [web04]
ok: [web03]

TASK [PRINT APACHE2 SERVICE STATUS] ***********************************
ok: [web01] => {
    "apache2_status.status.ActiveState": "active"
}
ok: [web02] => {
    "apache2_status.status.ActiveState": "active"
}
ok: [web03] => {
    "apache2_status.status.ActiveState": "active"
}
ok: [web04] => {
    "apache2_status.status.ActiveState": "active"
}

Step 4: Reconfigure Apache server to listen on port 8081

In this step we add two tasks using the lineinfile Ansible module to edit configuration files.

Here is a copy of the new tasks to add in the install_apache.yml file playbook.

	- name: SET APACHE2 LISTEN ON PORT 8081
	  ansible.builtin.lineinfile:
	    dest: /etc/apache2/ports.conf
	    regexp: '^Listen 80'
	    line: 'Listen 8081'
	    state: present
	  notify:
	    - RESTART APACHE2
	
	- name: SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
	  ansible.builtin.lineinfile:
	    dest: /etc/apache2/sites-available/000-default.conf
	    regexp: '^<VirtualHost \*:80>'
	    line: '<VirtualHost *:8081>'
	    state: present
	  notify:
	    - RESTART APACHE2

The lineinfile module is used to replace existing lines in the /etc/apache2/ports.conf and /etc/apache2/sites-available/000-default.conf files. You can search the Ansible documentation for more information on the lineinfile module.
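
Before applying the changes, you can preview what lineinfile would modify by running the playbook in check mode with diff output; nothing is changed on the containers in this mode:

ansible-playbook install_apache.yml --check --diff --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml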

Once the playbook is run again, we can check the results through the ansible command module.

  1. Check the /etc/apache2/ports.conf file

    ansible containers -m command -a "grep ^Listen /etc/apache2/ports.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    web01 | CHANGED | rc=0 >>
    Listen 8081
    web04 | CHANGED | rc=0 >>
    Listen 8081
    web02 | CHANGED | rc=0 >>
    Listen 8081
    web03 | CHANGED | rc=0 >>
    Listen 8081
    
  2. Check the /etc/apache2/sites-available/000-default.conf file

    ansible containers -m command -a "grep ^<VirtualHost /etc/apache2/sites-available/000-default.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    web02 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web04 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web01 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web03 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    
  3. Finally, we can list the TCP sockets in the listening state

    ansible containers -m command -a "ss -ltn" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    web02 | CHANGED | rc=0 >>
    State  Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0      4096   127.0.0.53%lo:53        0.0.0.0:*
    LISTEN 0      128          0.0.0.0:22        0.0.0.0:*
    LISTEN 0      4096         0.0.0.0:5355      0.0.0.0:*
    LISTEN 0      4096      127.0.0.54:53        0.0.0.0:*
    LISTEN 0      128             [::]:22           [::]:*
    LISTEN 0      4096            [::]:5355         [::]:*
    LISTEN 0      511                *:8081            *:*
    web04 | CHANGED | rc=0 >>
    State  Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0      4096      127.0.0.54:53        0.0.0.0:*
    LISTEN 0      4096   127.0.0.53%lo:53        0.0.0.0:*
    LISTEN 0      128          0.0.0.0:22        0.0.0.0:*
    LISTEN 0      4096         0.0.0.0:5355      0.0.0.0:*
    LISTEN 0      128             [::]:22           [::]:*
    LISTEN 0      511                *:8081            *:*
    LISTEN 0      4096            [::]:5355         [::]:*
    web01 | CHANGED | rc=0 >>
    State  Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0      4096   127.0.0.53%lo:53        0.0.0.0:*
    LISTEN 0      128          0.0.0.0:22        0.0.0.0:*
    LISTEN 0      4096      127.0.0.54:53        0.0.0.0:*
    LISTEN 0      4096         0.0.0.0:5355      0.0.0.0:*
    LISTEN 0      128             [::]:22           [::]:*
    LISTEN 0      511                *:8081            *:*
    LISTEN 0      4096            [::]:5355         [::]:*
    web03 | CHANGED | rc=0 >>
    State  Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0      128          0.0.0.0:22        0.0.0.0:*
    LISTEN 0      4096   127.0.0.53%lo:53        0.0.0.0:*
    LISTEN 0      4096      127.0.0.54:53        0.0.0.0:*
    LISTEN 0      4096         0.0.0.0:5355      0.0.0.0:*
    LISTEN 0      128             [::]:22           [::]:*
    LISTEN 0      511                *:8081            *:*
    LISTEN 0      4096            [::]:5355         [::]:*
    

    In the results above, each container shows a TCP socket listening on port 8081 (the *:8081 lines).
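
    A shorter check is to filter the output with the shell module, which, unlike command, allows a pipe:

    ansible containers -m shell -a "ss -ltn | grep 8081" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml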

Step 5: Add a task to verify access to the web services

When deploying new services, it is important to check that they are actually reachable at the application layer.

To do so, we add a last task to the install_apache.yml playbook that sends an HTTP request from the Devnet VM with the uri module and verifies that the HTTP status code is 200 OK.

    - name: CHECK HTTP STATUS CODE
      ansible.builtin.uri:
        url: 'http://[{{ ansible_default_ipv6.address }}]:8081'
        status_code: 200
      when: "'containers' in group_names"
      delegate_to: localhost
      become: false

As our playbook starts with the fact-gathering phase, many Ansible variables are populated at that point.

In the example above, we use the IPv6 address of each container (the ansible_default_ipv6 fact) in the HTTP URL and expect code 200 as a successful result; this fact can be inspected with the optional check below.

  • delegate_to: localhost instructs Ansible to run the task from the Devnet VM.
  • become: false tells Ansible to run the task as the normal, unprivileged user.
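
As an optional check, the fact used in the URL can be displayed beforehand with the setup module and a filter:

ansible containers -m setup -a "filter=ansible_default_ipv6" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml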

If the playbook runs successfully, we only get ok results. Here is a sample:

TASK [CHECK HTTP STATUS CODE] *******************************************
ok: [web04 -> localhost]
ok: [web01 -> localhost]
ok: [web02 -> localhost]
ok: [web03 -> localhost]

If we run the same playbook with the very verbose option -vvv, we get the detailed result of each HTTP request. Here is a sample for one of the four containers tested:

    "last_modified": "Sun, 11 Feb 2024 10:07:56 GMT",
    "msg": "OK (10701 bytes)",
    "redirected": false,
    "server": "Apache/2.4.58 (Debian)",
    "status": 200,
    "url": "http://[2001:678:3fc:1c:216:3eff:fe6e:7a91]:8081",
    "vary": "Accept-Encoding"

The status field confirms the value 200.

Conclusion

This lab is mainly an introduction to Ansible modules for students with no prior experience in the world of automation. It covers many Ansible modules through basic tasks for installing and configuring containers and services.

If you've reached these lines, I hope you've enjoyed the trip 😃
