
@kvietmeier
Last active June 24, 2024 15:23

Ansible Ad Hoc Commands

Ansible can seem complex and difficult when you first get started, especially if you are trying to decipher complex playbooks you find on GitHub or in blogs. Fortunately, you can ease yourself into learning Ansible and add complexity as you go. On my own journey to learn Ansible I decided I had set up enough Linux servers by hand and was tired of cutting and pasting the same commands, repeatedly. Ansible seemed like the perfect solution, but after wading through complex or broken examples on GitHub and trivial use cases in the docs, I stepped back and decided to start from the beginning with the "fundamental principles", which in Ansible are the Ad Hoc commands.

I based this on a HowTo I created for setting up a basic Linux server by hand, which walks through the steps you do on every server to make it useful:

  • Kill NetworkManager if it is running
  • Register with Red Hat (if required) and subscribe to channels
  • Configure additional users and sudo access
  • Configure yum/apt repos (add EPEL for example)
  • Install the additional packages you need to have a useful server.
  • Run “yum/apt update.”
  • Set up NTP
  • Configure NFS client to mount a share (less important in the days of GitHub)
  • Compile software like iperf and FIO.
  • Enable Serial over LAN (another thing usually and easily done through kickstart)

This can take a fair bit of time if you have more than one server to set up. You could do all of this in a complex playbook, which would take a fair bit of time as well. Or you could use some Ad Hoc commands, save some time, and learn Ansible in a scalable way.

Note that in the cloud and with hypervisors most of the above could and should be done through a cloud-init file.

Ad hoc commands are also a great way to do some basic system management, like checking the date/time on a group of servers or verifying the number of CPUs in your VMs. They can also be a handy way to update things like authorized_keys or /etc/hosts if you aren't using DNS in a lab setting.


Prerequisites

To get started, create an "ansible" directory, but keep it simple. At first this directory will hold only your inventory file and ansible.cfg. You will add the other directories for roles, vars, etc. later when you start creating playbooks. You will probably also want at least one directory for any files you want to copy over.

  • SSH keys shared with target hosts (ssh-copy-id) – add keys to .ssh/authorized_keys
  • Ansible installed.
  • Populated Ansible inventory file.
  • Basic ansible.cfg
  • Source files to copy if needed (/etc/hosts, .ssh/config, .bashrc etc...).

This will be covered in the following sections.
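The prerequisites above can be sketched in a few commands; the directory and file names here are just the conventions this doc uses, not requirements:

```shell
# Minimal Ansible working directory (names are conventions, not requirements)
mkdir -p "$HOME/ansible/files"        # "files" holds sources to copy to targets
touch "$HOME/ansible/ansible.cfg"     # basic config; an example follows below
touch "$HOME/ansible/inventory"       # inventory file; an example follows below
ls -A "$HOME/ansible"
```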

Configure SSH

A complete explanation is outside the scope of this document, but I provide the commands you need to set it up. This is another configuration step best done in the initial kickstart as setting up SSH key auth can be a bit of a “chicken or egg” situation and it is hard to avoid some level of manual work. With just a few systems it is manageable though, especially if you use 2 very useful SSH scripts.

  • ssh-keyscan
  • ssh-copy-id

These two scripts will save you much pain and suffering if key auth isn't set up already. Unfortunately, they are not installed by default on most systems and not available on Windows at all:

Install -
  • Linux: openssh-clients
  • OS X: ssh-keyscan comes with Xcode; install ssh-copy-id with brew

ssh-keyscan:
This will let you add hosts to the .ssh/known_hosts file automatically, avoiding the irritating prompt the first time you log in.

ssh-copy-id:
Copies the id_rsa.pub key into the authorized_keys file on the targeted host


Examples

Both can be used in simple for loops:
  • First grab all of the keys for .ssh/known_hosts:

    for i in $(awk '{print $2}' hosts.txt) ; do ssh-keyscan ${i} >> ~/.ssh/known_hosts ; done
  • Use '{print $1}' if you want to add them by IP instead

    for i in $(awk '{print $1}' hosts.txt) ; do ssh-keyscan ${i} >> ~/.ssh/known_hosts ; done

    Where the file "hosts.txt" has this format:

    10.23.229.236   inst-flm2-server-05-01
    10.23.229.237   inst-flm2-server-05-02
    10.23.229.238   inst-flm2-server-05-03
    10.23.229.239   inst-flm2-server-05-04

    Or use /etc/hosts directly and just grep out the hosts you need

    for i in $(grep <some_string> /etc/hosts | awk '{print $2}') ; do ssh-keyscan ${i} >> ~/.ssh/known_hosts ; done
  • Then copy the id_rsa.pub keys (don’t forget the correct user)

    for i in $(grep <some_string> /etc/hosts | awk '{print $2}'); do ssh-copy-id -i .ssh/id_rsa.pub <user>@${i}; done
  • Another thing to consider is disabling updating known_hosts on the client/ansible host.

    • This avoids getting prompted about an unknown key every time you repave a host

    Add these 2 lines to the .ssh/config file:

    Host *
      # Effect is to not populate the known_hosts file every time you connect to a new server
      UserKnownHostsFile /dev/null
      StrictHostKeyChecking no
You should now be able to SSH into the target hosts without being prompted.
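A quick way to confirm key auth works end to end is a non-interactive loop over the same hosts file; BatchMode makes ssh fail instead of prompting (hosts.txt and <user> are the placeholders from the examples above):

```shell
for i in $(awk '{print $2}' hosts.txt); do
    ssh -o BatchMode=yes <user>@${i} hostname || echo "FAILED: ${i}"
done
```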

However - a better way to propagate SSH keys to all the VMs you create would be to use cloud-init during the initial bringup. This is really handy in Azure because Azure only allows you to add one key.

#cloud-config
# vim: syntax=yaml

###--- Users
users:
  # Configure Azure added user - With additional SSH keys
  - name: ubuntu
    ssh-authorized-keys:
    - ssh-rsa  <key1 - ansible server>
    - ssh-rsa  <key2 - DB admin>
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash

Installing and configuring Ansible for basic first-time use

  • Install Ansible

  • Configure Ansible

    Choose the method appropriate for your Ansible host. Ansible will install just fine on Mac OS X using brew or pip and it will install/run under WSL - but the control node will not run on native Windows.
    It is a standard best practice to create a $HOME/ansible directory to work out of.
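For reference, the common install methods per platform look like this (pick the one that matches your control node; these are the standard package names):

```shell
# Ubuntu / Debian (including WSL)
sudo apt update && sudo apt install -y ansible

# RHEL / Fedora family
sudo dnf install -y ansible

# macOS
brew install ansible

# Any platform with Python (per-user install)
python3 -m pip install --user ansible
```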

Set up a basic ansible.cfg

If you install Ansible with dnf, apt, etc., a stock ansible.cfg file should be present in /etc/ansible (after an update it may show up as a .rpmnew file or similar).

  • Copy it to your $HOME/ansible directory
  • Keep it really simple - point to the inventory file and configure sudo.

This example covers the basics:

azureuser@linuxtools:~/ansible$ cat ansible.cfg
# Basic config file for ansible
# Ansible Config -- https://ansible.com/
# ===============================================

### Some basic default values...
[defaults]
inventory            = ~/ansible/inventory
remote_tmp           = ~/.ansible/tmp
local_tmp            = ~/.ansible/tmp
remote_port          = 22
retry_files_enabled  = False
deprecation_warnings = False

# Uncomment this to disable SSH key host checking
host_key_checking    = False

[privilege_escalation]
#sudo_user      = root
become          = True
become_method   = sudo
become_user     = root
become_ask_pass = False

# END

Ansible inventory file and project directories

The default location is /etc/ansible/hosts and that is where it will be on Linux, but on the Mac you won’t have one. It can be either YAML or INI format, whichever you prefer.

Rather than use the system wide inventory file you should create a directory that you will work from and copy or create an inventory file there. The convention is to just call it “inventory”. Add this location/file to the ansible.cfg file.

My example uses group variables per group of target hosts to set the Ansible user for that group, plus a line that you might need to work around Python versioning issues.

azureuser@linuxtools:~/ansible$ cat inventory
###=====================================================================###
###   Ansible Inventory
###=====================================================================###

### Database Testing
[dbnodes]
db-02
db-03
db-04

[dbnodes:vars]
ansible_ssh_user=ubuntu
ansible_python_interpreter="/usr/bin/python3"

[dbmgmt]
dbmgmt-01

[dbmgmt:vars]
ansible_ssh_user=ubuntu
ansible_python_interpreter="/usr/bin/python3"

# Linux Testing
[linuxnodes]
linux-01
linux-02
linux-03

[linuxnodes:vars]
ansible_ssh_user=azureuser

###=====================================================================### 
### For all hosts - per group overrides this
###=====================================================================### 

[all:vars]
ansible_ssh_user=root
ansible_python_interpreter="/usr/bin/python3"

Once you create the project directory, set up the inventory file, and add any entries you need to ansible.cfg, you are ready to start running Ad Hoc commands.

Don’t worry when it doesn’t work the first time, it never does.

**Using different inventory files:** You can maintain multiple inventory files and limit the systems you run playbooks/commands against. Example:

ansible -i inventory_3node voltnodes,voltmgmt -a "date"
ansible -i inventory_3node voltnodes -a "date"
ansible -i inventory_3node db-01,db-02 -a "date"
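You can also keep a single inventory and narrow a run with --limit, which accepts the same group/host patterns (a standard ansible flag; the group names here are from the examples above):

```shell
ansible all -i inventory_3node --limit voltnodes -a "date"
ansible all -i inventory_3node --limit 'all:!voltmgmt' -a "date"   # everything except the voltmgmt group
```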

Running simple Ad Hoc Commands

Basic test - ping the hosts with Ansible's ping module and see if they respond. Note that this is an SSH connectivity check, not ICMP (which is blocked in the Azure fabric anyway):

azureuser@linuxtools:~/ansible$ ansible linuxnodes -m ping
linux-01 | success >> {
"changed": false,
"ping": "pong"
}
linux-03 | success >> {
"changed": false,
"ping": "pong"
}
linux-02 | success >> {
"changed": false,
"ping": "pong"
}

Get Information:

Basic Info
ansible dbnodes -a "date"
ansible dbnodes -s -a "uptime"
azureuser@linuxtools:~/ansible$ ansible dbnodes -a "date"
db-02 | CHANGED | rc=0 >>
Thu Jul 20 15:39:07 PDT 2023
db-04 | CHANGED | rc=0 >>
Thu Jul 20 15:39:07 PDT 2023
db-03 | CHANGED | rc=0 >>
Thu Jul 20 15:39:07 PDT 2023

azureuser@linuxtools:~/ansible$ ansible dbnodes -a "uptime"
db-03 | CHANGED | rc=0 >>
 15:41:02 up 33 min,  1 user,  load average: 0.00, 0.00, 0.00
db-02 | CHANGED | rc=0 >>
 15:41:02 up 33 min,  1 user,  load average: 0.01, 0.01, 0.00
db-04 | CHANGED | rc=0 >>
 15:41:02 up 33 min,  1 user,  load average: 0.00, 0.00, 0.00

Do all of the VMs have the correct NVMe drives?
azureuser@linuxtools:~/ansible$ ansible dbasenodes -a "nvme list"
db-03 | CHANGED | rc=0 >>
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     SN: 000001           MSFT NVMe Accelerator v1.0               1          32.21  GB /  32.21  GB    512   B +  0 B   v1.00000
/dev/nvme0n2     SN: 000001           MSFT NVMe Accelerator v1.0               12        274.88  GB / 274.88  GB    512   B +  0 B   v1.00000
/dev/nvme0n3     SN: 000001           MSFT NVMe Accelerator v1.0               13        274.88  GB / 274.88  GB    512   B +  0 B   v1.00000
db-02 | CHANGED | rc=0 >>
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     SN: 000001           MSFT NVMe Accelerator v1.0               1          32.21  GB /  32.21  GB    512   B +  0 B   v1.00000
/dev/nvme0n2     SN: 000001           MSFT NVMe Accelerator v1.0               12        284.54  GB / 284.54  GB    512   B +  0 B   v1.00000
/dev/nvme0n3     SN: 000001           MSFT NVMe Accelerator v1.0               13        274.88  GB / 274.88  GB    512   B +  0 B   v1.00000
db-04 | CHANGED | rc=0 >>
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     SN: 000001           MSFT NVMe Accelerator v1.0               1          32.21  GB /  32.21  GB    512   B +  0 B   v1.00000
/dev/nvme0n2     SN: 000001           MSFT NVMe Accelerator v1.0               12        274.88  GB / 274.88  GB    512   B +  0 B   v1.00000
/dev/nvme0n3     SN: 000001           MSFT NVMe Accelerator v1.0               13        274.88  GB / 274.88  GB    512   B +  0 B   v1.00000

Did the VMs get configured with the correct number of vCPUs?
azureuser@linuxtools:~/ansible$ ansible dbasenodes,management -m shell -a "lscpu | grep -iw cpu\(s\) | egrep -v 'node|line'"
dbase01 | CHANGED | rc=0 >>
CPU(s):                             4
dbase02 | CHANGED | rc=0 >>
CPU(s):                             32
dbase03 | CHANGED | rc=0 >>
CPU(s):                             4
mgmt01 | CHANGED | rc=0 >>
CPU(s):                             2
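The same question can be answered from Ansible facts with the setup module, which avoids parsing lscpu output (the filter argument is standard; fact names can vary slightly across Ansible versions):

```shell
ansible dbasenodes,management -m setup -a "filter=ansible_processor_vcpus"
```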

Configuring Systems:

Set up NTP (could do this in cloud-init)
ansible dbnodes -s -m yum -a "name=ntp state=installed"
ansible dbnodes -s -m service -a "name=ntpd state=stopped enabled=no"
ansible dbnodes -m copy -a "src=/export/labshare/configs/ntp.conf dest=/etc/ntp.conf"
ansible dbnodes -a "date"
ansible dbnodes -s -a "sed -i'' 's/SYNC_HWCLOCK=no/SYNC_HWCLOCK=yes/g' /etc/sysconfig/ntpdate"
ansible dbnodes -s -a "ntpdate 1.ntp.foo.bar.com"
ansible dbnodes -s -a "hwclock --systohc"
ansible dbnodes -a "date"
ansible dbnodes -s -m service -a "name=ntpd state=started enabled=yes"

Copy over a hosts file
ansible dbnodes -m copy -a "src=/mnt/share/configs/hosts.txt dest=/etc/hosts"
ansible dbnodes -s -a "cat /etc/hosts"

Clean up home directory after kickstart install (move/delete files):
ansible linux-01 -s -a "mkdir ~/.install"
ansible linux-01 -s -a "mv ~/cobbler.ks ~/.install"
ansible linux-01 -s -a "mv ~/anaconda-ks.cfg ~/.install"
ansible linux-01 -s -a "mv ~/ks-post-nochroot.log ~/ks-post.log ~/ks-pre.log ~/cobbler.ks ~/.install"

Edit /etc/fstab on DB servers
ansible dbnodes -m lineinfile -a "dest=/etc/fstab line='### Mount LabShare'"
ansible dbnodes -m lineinfile -a "dest=/etc/fstab line='nfs01:/export/share /mnt/share nfs defaults 1 1'"

Remove these lines later (wanted to convert to autofs from static NFS):
ansible vdb-01 -s -a "sed -i '/[Ll]ab[Ss]hare/d' /etc/fstab"
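Alternatively, the mount module can add the fstab entry and mount the share in one idempotent step, instead of lineinfile plus manual cleanup (in newer Ansible releases this module lives in the ansible.posix collection; the paths match the example above):

```shell
ansible dbnodes -s -m mount -a "src=nfs01:/export/share path=/mnt/share fstype=nfs opts=defaults state=mounted"
```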

Run a script on the target hosts
ansible dbnodes -m shell -a "/home/ubuntu/mydb/setup.sh"
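If the script lives on the control node rather than on the targets, the script module copies it over, runs it, and removes it afterward (the path here is hypothetical):

```shell
ansible dbnodes -m script -a "~/ansible/files/setup.sh"
```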

Subscription Manager - RHEL/OSP:

Register -
ansible db-01 -s -a "subscription-manager register --username auser@adomain.com --password f00bar"
Attach -
ansible db-01 -s -a "subscription-manager attach --pool=<key>"
ansible db-01 -s -a "subscription-manager repos --enable rhel-7-server-optional-rpms"

Package Management:

yum update and install packages:
azureuser@linuxtools:~/ansible$ ansible linux-01 -m yum -a "name=* state=latest"

Use apt and update all packages:
azureuser@linuxtools:~/ansible$ ansible dbnodes -m apt -a "name=* state=latest"
db-02 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"All packages up to date"
]
}
db-03 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"All packages up to date"
]
}
db-04 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"All packages up to date"
]
}
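Installing a specific list of packages works the same way; for example the iperf/FIO tools mentioned earlier (Ubuntu package names; adjust for your distro):

```shell
ansible dbnodes -m apt -a "name=iperf3,fio state=present update_cache=yes"
```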
