@dmccuk
Last active July 9, 2024 10:38

The issue you're encountering is likely due to the MBR (Master Boot Record) partitioning scheme. MBR records partition sizes as 32-bit sector counts, which caps a partition at 2 TiB with 512-byte sectors. If you have disks larger than 2 TB and want to use the entire space in a single partition, use the GPT (GUID Partition Table) partitioning scheme instead of MBR.
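For intuition, the 2 TB ceiling falls straight out of the arithmetic: a 32-bit sector count times the traditional 512-byte sector size lands just under 2 TiB.

```shell
# MBR stores a partition's start and length as 32-bit LBA sector counts.
max_sectors=$(( (1 << 32) - 1 ))   # 4294967295 sectors
sector_size=512                    # bytes, the traditional sector size
max_bytes=$(( max_sectors * sector_size ))
echo "$max_bytes"                  # 2199023255040 bytes, just under 2 TiB
```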

To use the full capacity of a 3TB (or larger) disk on RHEL6, you'd want to use the GPT format. Here's how you can create a GPT partition:

  1. Backup any data: Before making any changes to the disk, ensure you've backed up any data on it.

  2. Install gdisk: The gdisk utility is the GPT-aware counterpart of fdisk. It might not be installed by default:

    sudo yum install gdisk
  3. Start gdisk:

    sudo gdisk /dev/sdx

    Replace /dev/sdx with the appropriate disk name.

  4. Create a new partition:

    • Press o to create a new, empty GUID partition table (this destroys the existing partition table).
    • Press n to create a new partition.
    • Follow the prompts to set the start and end of the partition.
    • Press w to write changes to disk.
  5. Format the partition: Once you've created the partition, you'll need to format it with a filesystem, for example:

    sudo mkfs.ext4 /dev/sdx1

    Replace /dev/sdx1 with the appropriate partition name.

  6. Mount & use: Now you can mount this partition and start using it.

By moving to GPT, you can utilize the full size of disks larger than 2TB. Also, keep in mind that while GPT is widely supported, ensure any other systems that might access this disk also support GPT. RHEL6 and most modern systems support it without any issues.

The message you're seeing is typical when you start gdisk on a disk that currently has an MBR partition table. gdisk is letting you know that you can convert the MBR partition table to GPT if you wish. However, converting from MBR to GPT is not a simple switch, and any data on the disk would be at risk.

To answer your question:

  1. If you are initializing a new disk: If the 3TB disk is new and doesn't have data on it, then you can safely create a GPT partition table on it without concerns. Once you've done that, you can move data to it from other disks without any issues.

  2. If you're trying to convert an existing disk with data: It's riskier. Before attempting to convert from MBR to GPT, you should make a complete backup of any data on the disk. Conversion tools are designed to change the partition table without data loss, but things can go wrong, and a backup is crucial.

  3. Interoperability: As for moving data onto the new GPT-partitioned disk, RHEL6 will have no trouble reading and writing to GPT partitions. GPT is well-supported in modern Linux distributions. The primary consideration is if you ever expect this disk to be used on older systems or certain other devices which might not support GPT. But for a standard RHEL6 (and newer) environment, you should be good to go.

Lastly, always remember to backup any important data before making major changes to disk structures.

# Retrieve the newest certificate from the local machine's personal store
$newestCert = Get-ChildItem -Path Cert:\LocalMachine\My | Sort-Object NotAfter -Descending | Select-Object -First 1

if ($newestCert) {
    Write-Output "Newest Certificate Thumbprint: $($newestCert.Thumbprint)"
    
    # Update the existing WinRM HTTPS listener with the newest certificate's thumbprint
    Set-WSManInstance -ResourceURI winrm/config/listener -SelectorSet @{ Address = "*"; Transport = "HTTPS" } -ValueSet @{ CertificateThumbprint = $newestCert.Thumbprint }

    Write-Output "HTTPS listener updated with newest certificate."
}
else {
    Write-Error "No certificates found in the local machine's personal store."
}

https://aws.amazon.com/marketplace/pp/prodview-fxjjedym32gky
https://repost.aws/questions/QU7bw453xcRUyYfO5BBnJnqg/oracle-linux-8-uek-availability

When you want to convert a Linux server in VMware into a template for cloning, you need to ensure that all system-specific information and configurations are cleaned up to prevent conflicts or unintended configurations on the cloned systems. Here's a checklist you can follow before converting a VM to a template:

  1. Hostname: Reset the hostname to a generic name.

    echo "localhost" > /etc/hostname
  2. Network Configuration: Remove or clear the network configuration files.

    • For systems using ifcfg scripts (like RHEL/CentOS):

      rm -f /etc/sysconfig/network-scripts/ifcfg-ens*
    • For systems using Netplan (like newer versions of Ubuntu):

      rm -f /etc/netplan/*.yaml
  3. SSH Keys: Delete SSH server keys. New keys will be generated on the first boot of the cloned system.

    rm -f /etc/ssh/ssh_host_*
  4. Log Files: Clear system logs to start fresh on the cloned VMs.

    find /var/log -type f -exec truncate -s 0 {} \;
  5. Command History: Clear the command history of all users, especially root.

    rm -f /root/.bash_history
    rm -f /home/*/.bash_history
  6. Temporary Files: Delete any temporary files.

    rm -rf /tmp/*
    rm -rf /var/tmp/*
  7. Machine ID: Clear the machine ID. This ID will be regenerated on the next boot.

    truncate -s 0 /etc/machine-id
  8. UUIDs & MAC Address Config: Ensure that any system-specific UUIDs or MAC addresses are not hardcoded in configuration files.

  9. Packages & Software: Consider removing or generalizing software to fit the intended use of the template.

  10. Users & Passwords: Ensure you remove or reset passwords, especially if you’ve set custom passwords for applications or services.

  11. Custom Services: If there are any custom services or applications, ensure they're configured to start fresh for new clones.

  12. Unmount Drives/Devices: Ensure you've unmounted any temporary devices or drives.

  13. Package Database: You might want to clean the package manager cache.

  • For yum (RHEL/CentOS):

    yum clean all
  • For apt (Debian/Ubuntu):

    apt clean
  14. VMware-specific Configurations:
  • Uninstall open-vm-tools or VMware Tools if you've installed them. You can reinstall them once the VM is deployed from the template.
  • Remove any cron jobs or scheduled tasks that are specific to the current VM.
  15. Shutdown the VM:
shutdown -h now

Once the VM is powered off, you can convert it to a template in the vSphere/VMware client. When deploying new VMs from the template, ensure to customize settings, such as CPU, memory, and disk size, according to the requirements of the specific VM.
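For convenience, the file-level items of the checklist (steps 1 and 3-7) can be bundled into a small script. This is a sketch; prep_template is a hypothetical helper that takes the filesystem root as a parameter so you can rehearse it against a scratch directory before pointing it at /:

```shell
# Hypothetical helper bundling the template-cleanup checklist (steps 1, 3-7).
# Pass "/" for a real run, or a scratch tree to rehearse safely.
prep_template() {
  root="$1"
  echo "localhost" > "$root/etc/hostname"                        # reset hostname
  rm -f "$root"/etc/ssh/ssh_host_*                               # drop SSH host keys
  find "$root/var/log" -type f -exec truncate -s 0 {} \;         # empty log files
  rm -f "$root/root/.bash_history" "$root"/home/*/.bash_history  # clear history
  rm -rf "$root"/tmp/* "$root"/var/tmp/*                         # temp files
  : > "$root/etc/machine-id"                                     # regenerated on boot
}
```

Run it as root on the VM (`prep_template /`) immediately before the final shutdown, so nothing regenerates the files you just cleaned.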

Ansible: add an SSH key for an AD user

---
- name: Setup SSH key for AD user
  hosts: your_target_hosts
  become: yes  # Become root to manage user and home directory
  vars:
    ad_username: "AD_user"
    ssh_public_key: "ssh-rsa AAAAB3N..."

  tasks:
    - name: Ensure AD user's home directory exists
      ansible.builtin.user:
        name: "{{ ad_username }}"
        create_home: yes
        home: "/home/{{ ad_username }}"

    - name: Create .ssh directory for AD user
      ansible.builtin.file:
        path: "/home/{{ ad_username }}/.ssh"
        state: directory
        owner: "{{ ad_username }}"
        group: "{{ ad_username }}"
        mode: '0700'

    - name: Add SSH key to authorized_keys
      ansible.builtin.lineinfile:
        path: "/home/{{ ad_username }}/.ssh/authorized_keys"
        line: "{{ ssh_public_key }}"
        create: yes
        owner: "{{ ad_username }}"
        group: "{{ ad_username }}"
        mode: '0600'

Packer setup

Step 1: Download Oracle Linux 7 ISO

Download the OEL7 ISO:

  • Visit the Oracle Linux download page.
  • Download the ISO for Oracle Linux 7. This file will be used by Packer to install the OS in the VM.

Step 2: Create a Packer Template

Packer Template:

Create a file named oel7.json (or any other name you prefer) for your Packer template. Here's a basic template to get you started:

{
  "builders": [{
    "type": "virtualbox-iso",
    "guest_os_type": "Oracle_64",
    "iso_url": "path_to_your_downloaded_oel7_iso",
    "iso_checksum": "checksum_of_iso",
    "iso_checksum_type": "md5",
    "headless": true,
    "ssh_username": "your_username",
    "ssh_password": "your_password",
    "vm_name": "OEL7_VM",
    "shutdown_command": "shutdown -P now",
    "boot_wait": "2m",
    "disk_size": 20480,
    "memory": 2048,
    "cpus": 2,
    "format": "ova",
    "output_directory": "output-virtualbox-ova",
    "boot_command": [
      "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
    ],
    "http_directory": ".",
    "http_port_min": 8000,
    "http_port_max": 9000
  }]
}

  • Replace "path_to_your_downloaded_oel7_iso" with the actual path to the ISO file.
  • Replace "checksum_of_iso" with the actual checksum of the ISO (you can usually find this on the download page or calculate it using a tool like md5sum).
  • Update ssh_username and ssh_password with credentials you want to set for your VM.
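To get the checksum value, you can hash the ISO yourself; md5sum ships with coreutils (the file path below is the placeholder from the template):

```shell
# Print just the hash so it can be pasted into "iso_checksum".
iso="path_to_your_downloaded_oel7_iso"
md5sum "$iso" | awk '{print $1}'
```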

Step 3: Run Packer

Build the VM:

  • Open a terminal on your RHEL8 server.
  • Navigate to the directory where your oel7.json file is located.
  • Run the following command:
packer build oel7.json
  • Packer will create a VM based on this template.

Step 4: Export the VM as an OVA File

Manual Export Using VMware Workstation Player:

  • After Packer completes the build, open VMware Workstation Player.
  • Find the VM that Packer created. It should be listed in VMware Workstation Player's library.
  • Right-click on the VM and choose Export to OVF or a similar option.
  • Follow the prompts to export the VM as an OVA file.

Additional Notes

  • Network Configuration: This basic template does not include network configuration or provisioning scripts. You might need to modify it to suit your specific requirements.

  • Manual Steps: The export to OVA is a manual step in VMware Workstation Player.

  • VMware Compatibility: Ensure that the guest_os_type in the Packer template is compatible with VMware Workstation Player.

By following these steps, you should be able to create a basic OVA file for OEL7 using Packer and VMware Workstation Player. Remember, this is a basic template and might need further customization based on your specific requirements.

#version=RHEL7
# System authorization information
auth --enableshadow --passalgo=sha512

# Use CDROM installation media
cdrom

# Run the Setup Agent on first boot
firstboot --enable

# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'

# System language
lang en_US.UTF-8

# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on --ipv6=auto --no-activate
network  --hostname=localhost.localdomain

# Root password
rootpw --plaintext your_root_password
# Create a user (with --iscrypted the password must be a crypt hash, e.g. from "openssl passwd -6")
user --name=your_username --password=your_encrypted_password --iscrypted --gecos="User"

# System services
services --enabled="chronyd"

# System timezone
timezone America/New_York --isUtc

# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda

# Clear the Master Boot Record
zerombr

# Partition clearing information
clearpart --all --initlabel

# Disk partitioning information
autopart --type=lvm

# Enable installation repo
url --url="http://mirror.centos.org/centos/7/os/x86_64/"
repo --name="updates" --baseurl="http://mirror.centos.org/centos/7/updates/x86_64/"

%packages
@base
%end

%addon com_redhat_kdump --enable --reserve-mb='auto'
%end

Ansible for python2

Option 1: Set in the Inventory File

Edit Your Inventory File: Open your Ansible inventory file (typically hosts or inventory).

Specify Python Interpreter:

For each host or group of hosts that need to run Python 2 scripts, add the ansible_python_interpreter variable and set it to the path of the Python 2 interpreter. Example:

[python2_hosts]
host1 ansible_python_interpreter=/usr/bin/python2
host2 ansible_python_interpreter=/usr/bin/python2

Run Ansible Playbook: Execute your Ansible playbook as usual. Ansible will use Python 2 for the specified hosts.

Option 2: Set in Playbooks

Define Variable in Playbook: In your Ansible playbook, you can set the ansible_python_interpreter variable at the play level or for specific tasks.

Example Playbook:

- hosts: python2_hosts
  vars:
    ansible_python_interpreter: "/usr/bin/python2"
  tasks:
    - name: Run a Python 2 script
      script: path_to_your_python2_script.py

Run Ansible Playbook: When you run this playbook, Ansible will use Python 2 for the hosts under python2_hosts.

Option 3: Set in Role Variables

Create or Edit Role: If you're using roles in Ansible, you can define the ansible_python_interpreter variable within the role's variable file (vars/main.yml).

Define Variable:

ansible_python_interpreter: "/usr/bin/python2"

Use Role in Playbook: Include this role in your playbook. Ansible will apply the specified Python interpreter when executing tasks from this role.

Additional Notes

  • Path to Python Interpreter: Ensure that the path to the Python 2 interpreter (/usr/bin/python2 in the examples) matches the actual path on the target hosts.
  • Python 2 End of Life: Python 2 has reached its end of life and no longer receives updates or security fixes; plan to migrate to Python 3 where possible.
  • Testing: Test your configuration in a non-production environment first to ensure it works as expected.

By configuring the ansible_python_interpreter variable, you can control which Python interpreter Ansible uses for specific hosts, plays, or roles, allowing compatibility with scripts that require Python 2.
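Whichever option you choose, it's worth confirming the interpreter path actually exists on the targets before Ansible fails mid-play. A small sketch (check_interpreter is a hypothetical helper; you could run the same test remotely, e.g. with Ansible's raw module):

```shell
# Report whether a candidate interpreter path is executable on this host.
check_interpreter() {
  if [ -x "$1" ]; then
    echo "found"
  else
    echo "missing"
  fi
}

check_interpreter /usr/bin/python2   # prints "found" or "missing" depending on the host
```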


Kickstart stuff

#version=RHEL8

# Use graphical install
graphical

# Use CDROM installation media
cdrom

# Run the Setup Agent on first boot
firstboot --enable

# Keyboard layouts
keyboard --xlayouts='us'

# System language
lang en_US.UTF-8

# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on --activate
network  --hostname=localhost.localdomain

# Root password (use a strong password here)
rootpw --plaintext your_root_password

# System timezone
timezone America/New_York --isUtc

# System bootloader configuration
bootloader --location=mbr --boot-drive=sda

# Clear the Master Boot Record
zerombr

# Disk partitioning information
autopart --type=lvm
# For manual partitioning, use something like:
# part /boot --fstype=xfs --size=1024
# part pv.100 --size=1 --grow
# volgroup vg_system pv.100
# logvol / --fstype=xfs --name=lv_root --vgname=vg_system --size=10240
# logvol swap --name=lv_swap --vgname=vg_system --size=2048

# Enable firewall and disable SELinux
firewall --enabled
selinux --disabled

# System services
services --enabled="chronyd"

# Do not configure the X Window System
skipx

# Package selection (minimal installation)
%packages
@^minimal-environment
%end

%post
# Post-installation script
# You can put your custom post-installation scripts here
%end

HCL:

source "virtualbox-iso" "ubuntu-example" {
  vm_name         = "packer-ubuntu-vm"
  iso_url         = "<iso-url>"
  iso_checksum    = "sha256:<iso-checksum>"
  guest_os_type   = "Ubuntu_64"
  ssh_username    = "ubuntu"
  ssh_password    = "password"
  boot_wait       = "10s"

  disk_size       = 10240 // Size of the primary disk in MB

  // VirtualBox-specific settings
  vboxmanage = [
    ["modifyvm", "{{.Name}}", "--memory", "4096"],
    ["modifyvm", "{{.Name}}", "--cpus", "2"]
  ]
}

build {
  sources = ["source.virtualbox-iso.ubuntu-example"]

  // shell-local provisioner to add a second disk (runs on the Packer host)
  provisioner "shell-local" {
    inline = [
      "VBoxManage createhd --filename output-{{build_name}}/additional_disk.vdi --size 10240", // Size of the additional disk in MB (10 GB)
      "VBoxManage storageattach '{{build_name}}' --storagectl 'SATA Controller' --port 1 --type hdd --medium 'output-{{build_name}}/additional_disk.vdi'"
    ]
  }

  // ... other provisioners if any ...
}

- hosts: all
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: Example task
      <task-module>: <task-arguments>
[target-hosts]
your_host_or_group ansible_python_interpreter=/usr/bin/python3

- name: Get hostname
  ansible.builtin.command: hostname
  register: result_hostname

- name: Assert that hostname has 3 parts
  assert:
    that:
      - result_hostname.stdout.split('.') | length == 3
    fail_msg: "Hostname does not have 3 parts"
    success_msg: "Hostname has 3 parts"
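The same three-label check can be rehearsed from a shell prompt before wiring it into the playbook; this mirrors the split('.') | length test:

```shell
# Count dot-separated labels in a hostname.
count_parts() { printf '%s\n' "$1" | awk -F. '{print NF}'; }

count_parts "web01.example.com"   # 3
count_parts "web01"               # 1
```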

Testing different hostname formats

---
- name: Test server name format logic
  hosts: localhost
  gather_facts: no
  vars:
    test_server_names:
      - "ukdc1-9k-abc01"
      - "ab-t1-b-9k-abc01"
      - "another-format-server"

  tasks:
    - name: Determine format and set facts accordingly for test server names
      vars:
        server_name: "{{ item }}"
      set_fact:
        env: "{{ (server_name[6] if server_name | length > 10 and server_name[3] == '-' and server_name[5] == '-' else server_name[8]) }}"
        lhp: "{{ (server_name[9:12] if server_name | length > 10 and server_name[3] == '-' and server_name[5] == '-' else server_name[11:14]) }}"
        zone: "{{ (server_name[7] if server_name | length > 10 and server_name[3] == '-' and server_name[5] == '-' else server_name[4]) }}"
        use_format: "{{ ('first' if server_name | length > 10 and server_name[3] == '-' and server_name[5] == '-' else 'new') }}"
      loop: "{{ test_server_names }}"
      loop_control:
        loop_var: item

    - name: Display server name and extracted facts
      debug:
        msg: "Server: {{ item }}, Format: {{ use_format }}, Env: {{ env }}, LHP: {{ lhp }}, Zone: {{ zone }}"
      loop: "{{ test_server_names }}"
      loop_control:
        loop_var: item

Change a password:

Changing the root password across multiple Linux systems is a common use case for Ansible, which can automate the process efficiently and securely. The best practice is to use the `user` module with a pre-hashed password generated by a tool like `openssl` or `mkpasswd`. This ensures the new password is never exposed in plain text in your playbook or logs.

### Step 1: Generate an Encrypted Password

First, generate an encrypted password. You can use `openssl` for this:

```bash
openssl passwd -6 -salt xyz yourpassword
```

Replace yourpassword with the desired new password. The -6 flag selects the SHA-512 crypt scheme, and -salt xyz supplies the salt. Remember to replace xyz with a random salt value.

Alternatively, use mkpasswd (you might need to install the whois package to get this command):

mkpasswd --method=sha-512
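Either way, the result should be a crypt-format string; SHA-512 hashes start with $6$<salt>$. A quick sanity check (assumes openssl is installed):

```shell
# Generate a hash and verify its SHA-512 crypt prefix.
hash=$(openssl passwd -6 -salt xyz examplepassword)
case "$hash" in
  '$6$xyz$'*) echo "ok: SHA-512 crypt hash" ;;
  *)          echo "unexpected format: $hash" >&2 ;;
esac
```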

Step 2: Create an Ansible Playbook

Next, create an Ansible playbook to change the root password. Here's a simple playbook example:

---
- name: Change root password
  hosts: all
  become: yes

  tasks:
    - name: Update root password
      user:
        name: root
        password: "{{ encrypted_password }}"

Replace {{ encrypted_password }} with the encrypted password string you generated earlier. For better security practices, you should use Ansible Vault to encrypt the password variable or the entire file containing the password.

Using Ansible Vault

To avoid storing the encrypted password directly in the playbook:

  1. Create a file with the encrypted password variable:
ansible-vault create secret.yml

When prompted, enter the password for the vault and add the following content:

encrypted_password: 'encrypted-password-here'

Replace encrypted-password-here with your encrypted password.

  2. Include the vault file in your playbook using vars_files:
---
- name: Change root password
  hosts: all
  become: yes
  vars_files:
    - secret.yml

  tasks:
    - name: Update root password
      user:
        name: root
        password: "{{ encrypted_password }}"
  3. Run the playbook, providing the vault password:
ansible-playbook playbook.yml --ask-vault-pass

This method keeps the encrypted password secured and avoids exposing sensitive information directly in your playbook or source control.

Remember, changing the root password is a critical operation that can affect system access. Ensure you have proper backups and recovery processes in place before making such changes across your infrastructure.

One-line password check (note: su usually reads the password from the controlling TTY, so piping it in like this won't work on every system):

echo "<password>" | su - root -c 'echo "Success"' 2>/dev/null && echo "Yes, the password works." || echo "No, the password doesn't work."
- name: Check Root Access
  hosts: all
  become: yes
  become_method: su
  tasks:
    - name: Attempt to read a file only root can access
      ansible.builtin.command: cat /root/.ssh/authorized_keys
      register: result
      ignore_errors: yes

    - name: Check if access was successful
      ansible.builtin.debug:
        msg: "Root access verified."
      when: result.rc == 0

    - name: Report failure to access root
      ansible.builtin.debug:
        msg: "Root access denied."
      when: result.rc != 0

Possible steps to install OpenJDK 17 on RHEL 7

To install OpenJDK 17 on RHEL 7, you can follow these more detailed steps. This guide assumes you have access to the terminal and appropriate permissions to execute commands (typically as the root user or via sudo).

Step 1: Download OpenJDK 17

First, you'll need to download the OpenJDK 17 binaries. While RHEL's default repositories may not provide the latest JDK version, you can download it from the official OpenJDK website or AdoptOpenJDK, which is now part of the Eclipse Foundation (Eclipse Temurin™).

  • Visit Adoptium and choose the appropriate OpenJDK 17 version for Linux/x64.

Step 2: Extract the JDK Archive

Once downloaded, upload the tar.gz file to your RHEL 7 system, if you downloaded it from another machine. Use scp or similar tools if necessary. Then, extract the JDK archive to an appropriate directory, such as /usr/lib/jvm. This is a common directory for Java installations but may not exist by default.

sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm
sudo tar -xzf /path/to/downloaded/openjdk-17_linux-x64_bin.tar.gz

Replace /path/to/downloaded/ with the actual path where your OpenJDK tar.gz file is located.

Step 3: Set Up Environment Variables

For the system and users to recognize the newly installed Java version as the default, set up environment variables. Edit or create the /etc/profile.d/jdk.sh file:

sudo vi /etc/profile.d/jdk.sh

Add the following lines, replacing /usr/lib/jvm/jdk-17 with the actual path to your JDK if different:

export JAVA_HOME=/usr/lib/jvm/jdk-17
export PATH=$JAVA_HOME/bin:$PATH

Save and exit the editor. Make the script executable:

sudo chmod +x /etc/profile.d/jdk.sh

Apply the changes:

source /etc/profile.d/jdk.sh

Step 4: Verify the Installation

To ensure OpenJDK 17 is correctly installed and set as the default Java version, use:

java -version

You should see the version of OpenJDK being displayed, indicating that OpenJDK 17 is now the default Java version.

Step 5: Update Alternatives

For systems with multiple Java installations, use the update-alternatives command to manage the default version:

sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk-17/bin/java 1
sudo update-alternatives --config java

Follow the prompts to select OpenJDK 17 if it's not already the default.
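After updating alternatives, a small check confirms that JAVA_HOME and the java binary line up (check_java_home is a hypothetical helper, not part of the JDK):

```shell
# Verify that a candidate JAVA_HOME actually contains an executable bin/java.
check_java_home() {
  if [ -n "$1" ] && [ -x "$1/bin/java" ]; then
    echo "ok"
  else
    echo "bad JAVA_HOME: '$1'"
  fi
}

check_java_home "$JAVA_HOME"
```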

Additional Notes

  • Be aware of the RHEL 7 lifecycle and consider updating to a newer RHEL version for better compatibility and security in the long term.
  • Always perform such installations and configurations in a testing environment before applying them to production systems.
  • Keep track of any custom repository or installation paths you use for easier maintenance and updates.

lineinfile update:

- name: Replace line in /etc/fstab
  ansible.builtin.lineinfile:
    path: /etc/fstab
    regexp: '^/dev/shm\s+/dev/shm\s+tmpfs\s+defaults\s+0\s+0$'
    line: 'tmpfs  /dev/shm  tmpfs  defaults 0 0'
    backrefs: yes
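You can rehearse the pattern against a sample line with grep -E before letting lineinfile touch /etc/fstab (note the \s escapes take a single backslash in a single-quoted YAML or shell string):

```shell
# Dry-run the fstab regexp against the line it is meant to match.
line='/dev/shm  /dev/shm  tmpfs  defaults 0 0'
if printf '%s\n' "$line" | grep -Eq '^/dev/shm\s+/dev/shm\s+tmpfs\s+defaults\s+0\s+0$'; then
  echo "match"
else
  echo "no match"
fi
```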

check with extra-vars:

- name: Test patch_id variable
  hosts: localhost
  tasks:
    - name: Display patch_id if provided
      debug:
        msg: "The provided patch_id is {{ patch_id }}"
      when: patch_id is defined and patch_id != ''

    - name: Display a default message if patch_id is not provided
      debug:
        msg: "No patch_id was provided."
      when: patch_id is not defined or patch_id == ''

Some kernel parameters

Here's a brief description of each specified kernel.sched_* parameter:

kernel.sched_wakeup_granularity_ns: Determines how much longer the current task will run before another task is woken up. A lower value can improve system responsiveness by allowing newly awakened tasks to preempt the current task more quickly.

kernel.sched_tunable_scaling: Controls how the scheduler's tunable values are scaled across CPUs. It adjusts the scheduler's behavior based on the system's workload and CPU characteristics, aiming for a balance between performance and power efficiency.

kernel.sched_schedstats: Enables or disables the collection of detailed scheduling statistics. Useful for debugging or optimizing system performance but may incur overhead.

kernel.sched_rt_runtime_us: Specifies the time slice, in microseconds, allocated to real-time tasks within each sched_rt_period_us. It controls how much CPU time is guaranteed to real-time tasks.

kernel.sched_rt_period_us: Defines the period, in microseconds, over which real-time tasks are allowed to run. It sets the time frame for real-time scheduling.

kernel.sched_rr_timeslice_ms: Sets the time slice, in milliseconds, allocated to each task in a round-robin scheduling scheme. It determines how long a task will run before the scheduler switches to the next task in the round-robin queue.

kernel.sched_nr_migrate: Controls the maximum number of active tasks that can be migrated from one CPU to another during load balancing. It affects how tasks are distributed across CPUs.

kernel.sched_min_granularity_ns: Sets the minimum granularity of the scheduler, in nanoseconds. This value helps prevent too frequent preemptions, ensuring that tasks have a minimum execution time before being rescheduled.

kernel.sched_migration_cost_ns: Represents the typical cost, in nanoseconds, of migrating a task from one CPU to another. A higher value suggests that task migrations are more costly, potentially influencing the scheduler's decisions on moving tasks.

kernel.sched_latency_ns: The total period over which the scheduler aims to run all runnable tasks at least once, in nanoseconds. It's a key parameter in defining the scheduler's behavior, influencing how long tasks may wait before getting CPU time.

kernel.sched_energy_aware, kernel.sched_domain: kernel.sched_energy_aware toggles Energy Aware Scheduling (mainly relevant on asymmetric CPU topologies); the sched_domain settings are exposed per scheduling domain, are less commonly documented, and vary with kernel version and configuration.

kernel.sched_deadline_period_min_us and kernel.sched_deadline_period_max_us: Define the minimum and maximum allowable period, in microseconds, for tasks scheduled under the SCHED_DEADLINE policy, which is used for tasks with specific timing requirements.

kernel.sched_child_runs_first: Determines whether a child process runs first after being created with fork() before the parent process. This can influence the performance of certain applications.

kernel.sched_cfs_bandwidth_slice_us: Specifies the time slice, in microseconds, for bandwidth control under the Completely Fair Scheduler (CFS), impacting how bandwidth is allocated among competing tasks.

kernel.sched_autogroup_enabled: Enables automatic grouping of tasks, which can improve the responsiveness of interactive tasks by effectively grouping and scheduling related tasks together.

Please note, the exact behavior and availability of some parameters may vary depending on the kernel version and configuration
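These values can be inspected (and temporarily changed) with sysctl; a hedged example, since not every parameter exists on every kernel:

```shell
# Read one scheduler tunable; fall back gracefully when the key (or sysctl
# itself) is absent, e.g. inside a minimal container.
value=$(sysctl -n kernel.sched_rr_timeslice_ms 2>/dev/null || echo "unavailable")
echo "kernel.sched_rr_timeslice_ms = $value"
```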


---
- name: Append a line to all .bashrc files in /home and list them
  hosts: all
  become: yes  # Use with caution, elevates permissions to root
  tasks:
    - name: Find all .bashrc files in /home
      ansible.builtin.find:
        paths: "/home"
        patterns: ".bashrc"
        hidden: yes  # .bashrc is a dotfile; the find module skips hidden files by default
        recurse: yes
        file_type: file
      register: bashrc_files

    - name: Debug found .bashrc files
      ansible.builtin.debug:
        msg: "Found .bashrc files: {{ bashrc_files.files | map(attribute='path') | list }}"
      when: bashrc_files.files | length > 0

    - name: Warn if no .bashrc files found
      ansible.builtin.debug:
        msg: "No .bashrc files found in /home."
      when: bashrc_files.files | length == 0
      
    - name: Append line to each .bashrc file in /home
      ansible.builtin.lineinfile:
        path: "{{ item.path }}"
        line: "export ANSIBLE='ansible line'"
        create: no
      loop: "{{ bashrc_files.files }}"
      when: bashrc_files.files | length > 0


---
- name: Append a line to all .bashrc files in /home using command
  hosts: all
  become: yes  # Necessary for access to all home directories
  tasks:
    - name: Find all .bashrc files in /home using shell command
      ansible.builtin.command: "find /home -type f -name .bashrc"
      register: find_result
      changed_when: false
      ignore_errors: yes

    - name: Debug found .bashrc files
      ansible.builtin.debug:
        msg: "{{ find_result.stdout_lines }}"
      when: find_result.stdout_lines is defined and find_result.stdout_lines | length > 0

    - name: Warn if no .bashrc files found
      ansible.builtin.debug:
        msg: "No .bashrc files found in /home."
      when: find_result.stdout_lines is undefined or find_result.stdout_lines | length == 0

    - name: Append line to each .bashrc file found
      ansible.builtin.lineinfile:
        path: "{{ item }}"
        line: "export ANSIBLE='ansible line'"
        create: no
      loop: "{{ find_result.stdout_lines }}"
      when: find_result.stdout_lines is defined and find_result.stdout_lines | length > 0

Ansible to delete files per server based on a list

if the data looks like this:

server1,/path/to/files/1
server1,/path/to/files/2
server2,/path/to/files/1
server2,/path/to/files/2
server3,/path/to/files/1
server4,/path/to/files/2
server4,/path/to/files/1
server5,/path/to/files/2

run this sed command to create the file_paths.yml file

sed -e '1i file_paths:' -e 's/\([^,]*\),\(.*\)/  - { host: '\''\1'\'', path: '\''\2'\'' }/' server_paths.txt > file_paths.yml

or, equivalently, with awk (plus the file_paths: header line):

awk -F, '{print "  - { host: '\''"$1"'\'', path: '\''"$2"'\'' }"}' server_paths.txt > file_paths.yml
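Dry-running the conversion on a couple of sample rows shows the YAML it emits:

```shell
# Convert two sample "host,path" rows into YAML list items.
printf 'server1,/path/to/files/1\nserver2,/path/to/files/2\n' |
  awk -F, '{print "  - { host: '\''"$1"'\'', path: '\''"$2"'\'' }"}'
```

which prints:

  - { host: 'server1', path: '/path/to/files/1' }
  - { host: 'server2', path: '/path/to/files/2' }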

Once you have the list, create the playbook:

- hosts: all
  gather_facts: no

  tasks:
    - name: Include variable file with file paths
      include_vars:
        file: file_paths.yml

    - name: Delete specified files on each server
      file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ file_paths }}"
      when: inventory_hostname == item.host
      become: yes  # Use sudo to delete the files

run the playbook: ansible-playbook -i hosts.ini delete_files.yml


Collect information about remote files

The point of this playbook is to collect the hostname, owner, group, and permissions of a file or list of files, and to generate a variable file that can be used to restore the data to its original state should we need to.

The Ansible code.

It will need some tweaking. This version only handles one listed file. When you do this type of work, the point is to start slow and get each task working as expected BEFORE you create the whole thing and then try to work out why it doesn't work. You also get to understand what you're doing and see real-time output showing whether it's going in the right direction.

---
- name: Collect file information and generate variable file
  hosts: all
  gather_facts: false
  vars_files:
    - files_info.yml
  tasks:
    - name: Gather file information
      stat:
        path: "{{ item.path }}"
      register: file_stat
      loop: "{{ files_info }}"
      when: inventory_hostname == item.host

    - name: Build list of collected file info
      set_fact:
        all_file_info: "{{ all_file_info | default([]) + [ { 'host': item.item.host, 'path': item.item.path, 'user': item.stat.pw_name, 'group': item.stat.gr_name, 'perms': item.stat.mode } ] }}"
      loop: "{{ file_stat.results }}"
      when: item.stat is defined and item.stat.exists

    - name: Initialize variable file if not present
      delegate_to: localhost
      run_once: true
      copy:
        content: "file_info:\n"
        dest: "{{ playbook_dir }}/variable_file.yml"
        force: no

    - name: Append file info to variable file
      delegate_to: localhost
      lineinfile:
        path: "{{ playbook_dir }}/variable_file.yml"
        line: "  - { host: '{{ item.host }}', path: '{{ item.path }}', user: '{{ item.user }}', group: '{{ item.group }}', perms: '{{ item.perms }}' }"
        insertafter: "file_info:"
      loop: "{{ all_file_info }}"


---
- name: Collect file information via shell and generate variable file
  hosts: all
  gather_facts: false
  vars_files:
    - files_info.yml
  tasks:
    - name: Gather file information via shell
      shell: |
        file_path="{{ item.path }}"
        stat_output=$(stat -c '%U %G %a' "$file_path")
        user=$(echo $stat_output | cut -d' ' -f1)
        group=$(echo $stat_output | cut -d' ' -f2)
        perms=$(echo $stat_output | cut -d' ' -f3)
        echo "{{ inventory_hostname }}:$file_path:$user:$group:$perms"
      register: file_info_shell
      loop: "{{ files_info }}"
      when: inventory_hostname == item.host

    - name: Append results to file
      lineinfile:
        path: "/tmp/collected_file_info.txt"
        create: yes
        line: "{{ item.stdout }}"
      loop: "{{ file_info_shell.results }}"
      when: item.stdout is defined

    - name: Convert shell results to structured data
      set_fact:
        structured_file_info: "{{ structured_file_info | default([]) + [ { 'hosts': item.split(':')[0], 'path': item.split(':')[1], 'user': item.split(':')[2], 'group': item.split(':')[3], 'perms': item.split(':')[4] } ] }}"
      loop: "{{ lookup('file', '/tmp/collected_file_info.txt').splitlines() }}"

    - name: Create final YAML output
      template:
        src: "file_info_template.j2"
        dest: "/tmp/variable_file.yml"
      delegate_to: localhost

The file_info_template.j2 template:

file_info:
{% for item in structured_file_info %}
  - { hosts: '{{ item.hosts }}', path: '{{ item.path }}', user: '{{ item.user }}', group: '{{ item.group }}', perms: '{{ item.perms }}' }
{% endfor %}

An alternative that skips the .j2 template and appends lines with lineinfile instead:

    - name: Initialize variable file with header
      delegate_to: localhost
      copy:
        content: "file_info:\n"
        dest: "/tmp/variable_file.yml"
      when: not lookup('file', '/tmp/variable_file.yml', errors='ignore')
      
    - name: Append results to variable file
      delegate_to: localhost
      lineinfile:
        path: "/tmp/variable_file.yml"
        line: "  - { hosts: '{{ item.stdout.split(':')[0] }}', path: '{{ item.stdout.split(':')[1] }}', user: '{{ item.stdout.split(':')[2] }}', group: '{{ item.stdout.split(':')[3] }}', perms: '{{ item.stdout.split(':')[4] }}' }"
        insertafter: "file_info:"
      loop: "{{ file_info_shell.results }}"
      when: "'stdout' in item and item.stdout != ''"
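The stat/cut parsing used in the "Gather file information via shell" task can be tried locally first — the demo file path below is made up:

```shell
# Same parsing as the task above, run against a throwaway file
f=/tmp/stat_demo.txt
touch "$f"
chmod 640 "$f"
stat_output=$(stat -c '%U %G %a' "$f")      # e.g. "dmccuk users 640"
user=$(echo $stat_output | cut -d' ' -f1)
group=$(echo $stat_output | cut -d' ' -f2)
perms=$(echo $stat_output | cut -d' ' -f3)
echo "$(hostname):$f:$user:$group:$perms"
```

Note that `stat -c` is the GNU coreutils syntax, which is what you get on RHEL.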
      

---
- name: Convert file paths to structured YAML
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Initialize YAML file with header
      copy:
        content: "---\nfile_list:\n"
        dest: /tmp/structured_file_list.yml
        force: no

    - name: Read file paths from local file
      slurp:
        src: file_paths.txt
      register: file_content

    - name: Append file paths to structured YAML
      lineinfile:
        path: /tmp/structured_file_list.yml
        line: "  - { host: '{{ item.split()[0] }}', path: '{{ item.split()[1] }}' }"
      loop: "{{ (file_content.content | b64decode).splitlines() }}"
      when: item | trim | length > 0

---
- name: Check if files exist on remote servers
  hosts: all
  vars:
    file_list:
      - { host: 'server1', path: '/path/to/file1' }
      - { host: 'server1', path: '/path/to/file12' }
      - { host: 'server2', path: '/path/to/another/file1' }

  tasks:
    - name: Check file existence using shell
      shell: test -f "{{ item.path }}" && echo "exists" || echo "does not exist"
      register: shell_output
      loop: "{{ file_list }}"
      when: inventory_hostname == item.host
      ignore_errors: true  # Prevents the task from failing and allows the playbook to continue

    - name: Display file existence results
      debug:
        msg: "File {{ item.item.path }} {{ item.stdout | trim }}"
      loop: "{{ shell_output.results }}"
      when: item.stdout is defined
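The `test -f ... && ... || ...` idiom that task relies on, run locally:

```shell
# "exists" when the -f test passes, "does not exist" otherwise
touch /tmp/exists_demo
test -f /tmp/exists_demo && echo "exists" || echo "does not exist"
rm -f /tmp/missing_demo
test -f /tmp/missing_demo && echo "exists" || echo "does not exist"
```

The first check prints "exists" and the second "does not exist"; because both branches are plain echos, the `&& ... || ...` chain is safe here.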

Command to check if a file exists, then pull out more info

ssh $server 'if [ ! -f /etc/opt/quest/qpm4u/pm.settings ]; then echo "$HOSTNAME Not Under QPM Management"; else case $(grep masters /etc/opt/quest/qpm4u/pm.settings) in *"ukserver01"*) echo "$HOSTNAME Managed by Old QPM Master";; *"ukserver001"*) echo "$HOSTNAME Managed by New QPM Master";; *) echo "$HOSTNAME Not Under QPM Management";; esac; fi'
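The case logic inside that one-liner can be exercised locally against a fake pm.settings before trusting it over ssh (file path and messages are from the command above; the sample "masters" line is invented):

```shell
# Fake settings file to drive the same case statement
settings=/tmp/pm.settings
echo 'masters ukserver01' > "$settings"
case $(grep masters "$settings") in
  *"ukserver01"*)  echo "Managed by Old QPM Master";;
  *"ukserver001"*) echo "Managed by New QPM Master";;
  *)               echo "Not Under QPM Management";;
esac
```

The branch order is safe: "ukserver01" is not a substring of "ukserver001", so a settings file naming the new master never falls into the old-master branch.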

---
- name: Set up Python alternatives
  hosts: all
  become: yes

  tasks:
    - name: Ensure Python 2.7 is installed
      yum:
        name: python2
        state: present

    - name: Ensure Python 3.6 is installed
      yum:
        name: python36
        state: present

    - name: Install alternatives for python
      alternatives:
        name: python
        link: /usr/bin/python
        path: /usr/bin/python3.6
        priority: 50

    - name: Install alternatives for python2
      alternatives:
        name: python2
        link: /usr/bin/python2
        path: /usr/bin/python2.7
        priority: 30

    - name: Install alternatives for python3
      alternatives:
        name: python3
        link: /usr/bin/python3
        path: /usr/bin/python3.6
        priority: 40

    - name: Set default Python version
      alternatives:
        name: python
        path: /usr/bin/python3.6


---
- name: Ensure Apache is configured correctly
  hosts: webservers
  become: yes

  vars:
    config_file_path: /etc/httpd/conf/httpd.conf

  tasks:
    - name: Copy Apache configuration file
      copy:
        src: files/httpd.conf
        dest: "{{ config_file_path }}"
      notify: Restart Apache

  handlers:
    - name: Restart Apache
      service:
        name: httpd
        state: restarted

grep -oP '^\w+.*?(\(.*?\))' input.txt | sed 's/.*(\(.*\))/\1/' | awk '{print $1, $NF}'
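This pipeline pulls the contents of the first parenthesised field on each line and prints its first and last words. The input format below is an assumption for illustration (the real input.txt isn't shown):

```shell
# Assumed input shape: "name ... (field1 field2 ... fieldN)"
printf 'vg01 some text (alpha beta gamma)\n' > /tmp/pipeline_in.txt
# grep keeps everything up to the first (...), sed keeps only the bracket
# contents, awk prints the first and last words of what is left
grep -oP '^\w+.*?(\(.*?\))' /tmp/pipeline_in.txt | sed 's/.*(\(.*\))/\1/' | awk '{print $1, $NF}'
```

For the sample line this prints "alpha gamma". Note `grep -P` needs GNU grep built with PCRE support.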

---
- name: Test new user login
  hosts: target_host
  gather_facts: no
  tasks:
    - name: Ensure the expect module is present
      package:
        name: expect
        state: present
      become: yes

    - name: Test SSH login
      expect:
        command: ssh -o StrictHostKeyChecking=no testuser@localhost whoami
        responses:
          password: "testpassword"
      register: result
      failed_when: "'testuser' not in result.stdout"

    - name: Debug login test result
      debug:
        msg: "SSH login test passed: {{ result.stdout }}"

    - name: Test sudo privileges
      expect:
        command: ssh -o StrictHostKeyChecking=no testuser@localhost 'sudo whoami'
        responses:
          password: "testpassword"
      register: sudo_result
      failed_when: "'root' not in sudo_result.stdout"

    - name: Debug sudo test result
      debug:
        msg: "Sudo test passed: {{ sudo_result.stdout }}"

Check if a file exists

---
- name: Check if files exist on specified hosts
  hosts: all
  vars_files:
    - file_list.yml
  tasks:
    - name: Check if file exists
      stat:
        path: "{{ item.path }}"
      register: file_status
      loop: "{{ file_list }}"
      when: inventory_hostname == item.host

    - name: Output file path if it exists
      debug:
        msg: "File {{ item.item.path }} exists on {{ item.item.host }}"
      loop: "{{ file_status.results }}"
      loop_control:
        label: "{{ item.item.path }}"
      when: item.stat is defined and item.stat.exists