A primitive script for starting, stopping, and checking the status of the filebrowser Docker container.
mkdir $HOME/labs/docker/filebrowser/
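A minimal sketch of such a control script. The container name `filebrowser`, the `filebrowser/filebrowser` image, and the port/volume mapping are assumptions, not taken from the original.

```shell
#!/bin/sh
# Primitive start/stop/status control for the filebrowser container.
# Container name, image, data dir, and port mapping are assumptions.
NAME=filebrowser
DATA="$HOME/labs/docker/filebrowser"

fb_start() {
  # Restart an existing container, or create one on first run.
  docker start "$NAME" 2>/dev/null || \
    docker run -d --name "$NAME" -v "$DATA:/srv" -p 8080:80 filebrowser/filebrowser
}
fb_stop()   { docker stop "$NAME"; }
fb_status() { docker ps --filter "name=$NAME" --format '{{.Names}}: {{.Status}}'; }
fb_usage()  { echo "Usage: $0 {start|stop|status}"; }

case "$1" in
  start)  fb_start ;;
  stop)   fb_stop ;;
  status) fb_status ;;
  *)      fb_usage ;;
esac
```

Invoke it as e.g. `./fb.sh status`; any other argument prints the usage line.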
# Install QEMU-6.1.0
wget https://download.qemu.org/qemu-6.1.0.tar.xz
tar xvJf qemu-6.1.0.tar.xz
cd qemu-6.1.0
./configure
make
sudo make install
# Download Armbian (Ubuntu Focal 20.04) for OrangePi PC
#wget https://mirrors.netix.net/armbian/dl/orangepipc/archive/Armbian_21.08.1_Orangepipc_focal_current_5.10.60.img.xz
In Linux KVM we use sparse file formats called qcow and qcow2 for disk images.
Sometimes we need to mount a guest's qcow2 disk image on the host server.
We need to do this to salvage a file, reset the root user's password, or troubleshoot a faulty disk image.
We can use this documentation to mount a Linux KVM guest's qcow2 disk.
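The usual route is the kernel's NBD (network block device) module plus `qemu-nbd`. A sketch, assuming `/dev/nbd0` is free, the image's root filesystem is on the first partition, and the paths passed in exist; run as root.

```shell
# Sketch: mount a qcow2 guest image on the host via qemu-nbd.
# /dev/nbd0 and the "first partition" layout are assumptions.
mount_qcow2() {
  img=$1; mnt=$2
  modprobe nbd max_part=8               # load NBD with partition support
  qemu-nbd --connect=/dev/nbd0 "$img"   # expose the qcow2 as /dev/nbd0
  mkdir -p "$mnt"
  mount /dev/nbd0p1 "$mnt"              # mount the guest's first partition
}

umount_qcow2() {
  mnt=$1
  umount "$mnt"
  qemu-nbd --disconnect /dev/nbd0       # release the block device
  rmmod nbd                             # optional: unload the module
}

# Example (hypothetical paths):
#   mount_qcow2 /var/lib/libvirt/images/guest.qcow2 /mnt/guest
#   ... salvage files, edit /mnt/guest/etc/shadow, fsck, etc. ...
#   umount_qcow2 /mnt/guest
```

Always disconnect the NBD device before starting the guest again; writing to a qcow2 image while the guest runs will corrupt it.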
Executing networkmanager-1.22.10-r0.pre-install
*
* To setup system connections, regular users must be member of 'plugdev' group.
*
*
* To control WiFi devices, enable wpa_supplicant service: 'rc-update add wpa_supplicant default'
* then reboot the system or restart 'wpa_supplicant' and 'networkmanager' services respectively.
*
(25/80) Installing networkmanager-openrc (1.22.10-r0)
#!ipxe
kernel http://ftp.sh.cvut.cz/slax/Slax-9.x/ipxe/9.6.0/64bit/vmlinuz vga=normal load_ramdisk=1 prompt_ramdisk=0 rw printk.time=0 from=http://ftp.sh.cvut.cz/slax/Slax-9.x/slax-64bit-9.6.0.iso
initrd http://ftp.sh.cvut.cz/slax/Slax-9.x/ipxe/9.6.0/64bit/initrfs.img
boot
1969 - Led Zeppelin - Whole Lotta Love
1970 - Led Zeppelin - Immigrant Song
1971 - Led Zeppelin - Stairway To Heaven
1971 - Led Zeppelin - Black Dog
1971 - Led Zeppelin - Misty Mountain Hop
1975 - Led Zeppelin - Kashmir
1982 - Run to the hills
1982 - The number of the beast
1982 - Hallowed be thy name
1983 - The Trooper
1984 - Aces High
1986 - Wasted Years
1992 - Fear of the dark
I've been playing with jq, and I've had a hard time finding examples of how it works with output from a service like AWS (which I use a lot).
Here is one I use a lot with vagrant-ec2.
When we're launching and killing a lot of instances, the AWS API is the only way to track down which instances are live, ready, dead, etc.
To find instances that are tagged with e.g. {"Key": "Name", "Value": "Web-00"} in the middle of a vagrant dev cycle, or a prod launch/replace cycle, you can do something like this:
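A sketch of such a filter. The sample JSON below is a minimal, assumed subset of the real `describe-instances` response shape (the instance IDs and states are made up); in real use you would pipe `aws ec2 describe-instances` straight into the same jq program.

```shell
# Filter describe-instances output for instances tagged Name=Web-00.
# The sample payload is a hand-written stand-in for the AWS response.
sample='{
  "Reservations": [
    {"Instances": [
      {"InstanceId": "i-0abc", "State": {"Name": "running"},
       "Tags": [{"Key": "Name", "Value": "Web-00"}]},
      {"InstanceId": "i-0def", "State": {"Name": "terminated"},
       "Tags": [{"Key": "Name", "Value": "Db-01"}]}
    ]}
  ]
}'

echo "$sample" | jq -r '
  .Reservations[].Instances[]
  | select(.Tags[]? | .Key == "Name" and .Value == "Web-00")
  | "\(.InstanceId) \(.State.Name)"'
# → i-0abc running

# Real use:  aws ec2 describe-instances | jq -r '...same filter...'
```

The `.Tags[]?` form tolerates instances with no Tags array at all, which the API does return for untagged instances.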