Keybase proof
I hereby claim:
- I am rulerof on github.
- I am abobulsky (https://keybase.io/abobulsky) on keybase.
- I have a public key ASCjqjneTXV6VZmalCQiZ4Crnvsd6dAqjlhF2KH71wHElAo
To claim this, I am signing this object:
We download the latest release of pfSense as a gzipped ISO, then extract it and pass it to virt-install to get the VM up and running. Interactive portions of setup are handled with a VNC viewer because the pfSense installer doesn't seem to be able to work with virt-install's native console redirection, at least not out of the box. I'd love a tip from anyone if it's possible to fix that somehow.
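Sketched out, the fetch-and-boot steps above might look like the following. The release filename, mirror URL, VM sizing, and bridge name are all assumptions for illustration, so substitute the current pfSense release and your own network setup:

```shell
# Hypothetical release filename and mirror URL -- check pfsense.org
# for the current version and a real download link
iso="pfSense-CE-2.4.4-RELEASE-amd64.iso"
iso_url="https://example-mirror.invalid/${iso}.gz"

# Download the gzipped ISO and extract it
curl -fLo "${iso}.gz" "$iso_url"
gunzip "${iso}.gz"

# Boot the installer; interactive setup happens over VNC
virt-install \
  --name pfsense \
  --memory 2048 \
  --vcpus 2 \
  --disk pool=default,size=16 \
  --cdrom "$iso" \
  --os-variant freebsd11.1 \
  --network bridge=br0 \
  --graphics vnc,listen=0.0.0.0 \
  --noautoconsole
```

Once the VM is created, point a VNC viewer at the host to walk through the installer.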
CentOS 8 instructions are this way.
Find the latest release here.
This process is an outline of the steps I followed to get my two first-gen 640GB ioDrive Duo cards working on Oracle Linux 7. After we install the driver, we're going to perform the optional step of formatting the cards to use native 4k sectors, since it offers marginally better throughput and a decent reduction to memory usage.
Go to the SanDisk support site and download the packages that correspond to your device and kernel. I'm using 64-bit Oracle Linux 7.
On the support site, we pick out our device and grab the driver and utility packages that match our kernel.
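With the driver and utility packages installed, the optional 4k reformat described above can be done with the fio-* utilities that ship alongside the driver. This is a sketch: the device node is an assumption (repeat for each ioDrive half), and the format step destroys all data on the card:

```shell
# Confirm the driver sees the cards and note their /dev/fct device nodes
fio-status

# Hypothetical device node; repeat for each half of each ioDrive Duo
dev="/dev/fct0"

# The card must be detached before it can be formatted
fio-detach "$dev"

# Reformat with a native 4k sector size (DESTROYS ALL DATA on the card)
fio-format -b 4k "$dev"

# Reattach so the block device comes back
fio-attach "$dev"
```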
We're going to add ZFS support to our Oracle Linux installation. We'll just add the ZFS on Linux Repo, verify the binary signature from GitHub, install the files, ensure the driver loads properly, and verify that it's functional. We'll save things like array creation for another document.
This is mostly a transcription of the process from the CentOS/RHEL ZoL installation manual.
This will install ZFS v0.7 release on OEL 7.7 and earlier, and ZFS 0.8 on OEL 7.8 and later.
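Following the ZoL RHEL/CentOS instructions, the steps above boil down to something like this. The release RPM filename tracks the point release, so adjust it to match your OEL version:

```shell
# EL 7.7 release package shown; adjust the point release for your system
release_rpm="http://download.zfsonlinux.org/epel/zfs-release.el7_7.noarch.rpm"
sudo yum install -y "$release_rpm"

# Check the repo signing key fingerprint against the one published
# in the zfsonlinux GitHub documentation before trusting it
gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

# Install ZFS and make sure the kernel module loads
sudo yum install -y zfs
sudo modprobe zfs

# Verify it's functional -- "no datasets available" is a healthy answer
sudo zfs list
```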
Using the following hardware:
The plan is to create a RAIDz3 zpool with 20 disks in the array, 3 hot spares, a mirrored log device containing one drive from each of the ioDrives, and two cache devices made out of the remaining ioDrives. We'll also underprovision the ioDrives to help with wear leveling, since 320GB ZIL and 640GB cache are excessive for this setup.
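For reference, the layout described above translates to a zpool create invocation shaped roughly like this. Every device name below is a placeholder; in practice you'd want /dev/disk/by-id paths for the spinning disks, and the ioDrive halves typically appear as their own block devices:

```shell
# Hypothetical pool name and device names, matching the plan:
# one 20-disk RAIDz3 vdev, 3 hot spares, a mirrored log built from
# one half of each ioDrive, and the remaining two halves as cache
pool="tank"
zpool create "$pool" \
  raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
         sdk sdl sdm sdn sdo sdp sdq sdr sds sdt \
  spare sdu sdv sdw \
  log mirror fioa fioc \
  cache fiob fiod
```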
If you want to install a package like Docker Community Edition on OEL, you'll have to add the CentOS Extras repo, which as of release 7 is built into CentOS rather than added on later like EPEL is. As a result, instructions for adding it to EL7 are hard to find. This should work for any build of Enterprise Linux that does not already include the CentOS Extras repo. I have absolutely no idea if it's appropriate to use this repo on a build of Linux other than CentOS, but it should be.
This was tested on Oracle Enterprise Linux 7, but should be copy-pastable on any version or build of EL, assuming things don't change too much.
# Get OS Release number
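The comment above is the start of that process; a fuller sketch might look like the following. The repo ID, file path, and gpgkey URL are my assumptions modeled on the stock CentOS 7 extras definition, so double-check them against a real CentOS box:

```shell
# Get OS Release number (major version only, e.g. "7")
. /etc/os-release
osrel="${VERSION_ID%%.*}"

# Write a CentOS Extras repo definition modeled on the stock CentOS one
repofile="/etc/yum.repos.d/centos-extras.repo"
mkdir -p "$(dirname "$repofile")"
cat > "$repofile" <<EOF
[centos-extras]
name=CentOS-${osrel} - Extras
baseurl=http://mirror.centos.org/centos/${osrel}/extras/\$basearch/
gpgcheck=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-${osrel}
EOF
```

After writing the file, a `yum makecache` will confirm the repo resolves before you try installing anything from it.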
virt-install
Cisco has a few different guides for installing their vWLC on KVM, but most of them focus on oVirt-style installations that are heavy on hand-crafted XML and [what appears to be] the use of OpenStack. If you're just using a plain single-host KVM setup and want to install the vWLC in a VM, this guide is for you.
First, download the vWLC KVM installation image appropriate for your setup. I'm going to use version 8.5.171.0 (you'll have to create an account to download it), and then transfer it to your KVM server:
AndrewBobulskys-MacBook-Pro:~ andrewbobulsky$ scp ~/Downloads/MFG_CTVM_LARGE_8.5.171.0.iso 10.0.25.2:/tmp 100% 367MB 40.7MB/s 00:09
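With the image on the server, the VM itself can be created with virt-install. The resource sizing below is my guess for the LARGE image and the bridge names are placeholders, so verify both against Cisco's published requirements; note that the vWLC expects two NICs (service port and management):

```shell
# Sizing and bridge names are assumptions -- check Cisco's vWLC
# requirements for the image you downloaded
vm_name="vwlc"
virt-install \
  --name "$vm_name" \
  --memory 8192 \
  --vcpus 2 \
  --disk pool=default,size=8 \
  --cdrom /tmp/MFG_CTVM_LARGE_8.5.171.0.iso \
  --os-variant generic \
  --network bridge=br0,model=virtio \
  --network bridge=br1,model=virtio \
  --graphics vnc \
  --noautoconsole
```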
# Set a user to delete
targetUser="ThisGuy"
# Set the name of our awscli profile
aws_profile="prod"
# Get all the keys that the user has on its account
userKeys=$(aws iam list-access-keys --user-name "$targetUser" --profile "$aws_profile" | jq -r '.AccessKeyMetadata[].AccessKeyId')
# Delete the keys
for key in $userKeys; do
  aws iam delete-access-key --access-key-id "$key" --user-name "$targetUser" --profile "$aws_profile"
done
#!/usr/bin/env bash
# shellcheck disable=SC2046
# We're catching errors manually here
set +e
# Start off the output formatting for this whole thing
echo "----"
if command -v VBoxManage >/dev/null 2>&1 ; then
#!/bin/bash
# content-engine-update.sh pulls new container images and starts those images
# if there are any updates to be found. The "proper" way to do this is with
# the v2tec/watchtower docker image, but I prefer this method because of the
# logging output I get with this single-host setup.
# Get the script name
scriptName="$(basename "$0")"
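The body of the script isn't shown here, but the pull-and-restart logic it describes could be sketched like this; the image list and compose file path are stand-ins for whatever the content engine actually runs:

```shell
# Hypothetical image list and compose file path
images="nginx:latest ghost:latest"
compose_file="/opt/content-engine/docker-compose.yml"

for image in $images; do
  # Record the image ID before and after the pull; a change means an update
  before="$(docker image inspect --format '{{.Id}}' "$image" 2>/dev/null)"
  docker pull "$image"
  after="$(docker image inspect --format '{{.Id}}' "$image" 2>/dev/null)"

  if [ "$before" != "$after" ]; then
    echo "$image updated, restarting containers"
    docker-compose -f "$compose_file" up -d
  fi
done
```

Running it from cron gives the plain log output mentioned above, which is the whole reason for preferring it over watchtower on a single host.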