wsl --shutdown
diskpart
# a Diskpart window opens; run the remaining commands there
select vdisk file="C:\WSL-Distros\…\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
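The same steps can be run unattended: save the four vdisk commands above to a file, e.g. compact-wsl.txt (a hypothetical name), and run diskpart /s compact-wsl.txt after wsl --shutdown.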
## USAGE
# $ nix-build kexec-installer.nix
# the result can be deployed to a remote host like this
# $ rsync -aL -e ssh result/ root@host:
# $ ssh root@host ./kexec-installer
## Customize it like this
# # custom-installer.nix
# import ./kexec-installer.nix {
#   extraConfig = { pkgs, ... }: {
#     users.extraUsers.root.openssh.authorizedKeys.keys = [ "<your-key>" ];
#   };
# }
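With that file in place, the customized installer builds the same way (custom-installer.nix is the file name from the comment above) and deploys with the same rsync/ssh steps:

$ nix-build custom-installer.nix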
[network]
# stop WSL from regenerating /etc/resolv.conf on startup
generateResolvConf = false

[automount]
# permission mask applied to Windows drives mounted under /mnt
options = "umask=22"
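Both sections belong in /etc/wsl.conf inside the distro; the settings only take effect after the distro restarts, e.g. from Windows:

wsl --shutdown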
name: linting
on: pull_request
jobs:
  cloverage:
    runs-on: ubuntu-18.04
    steps:
      - name: Install Java
        # the original snippet is cut off here; one way to finish the step:
        uses: actions/setup-java@v1
        with:
          java-version: '11'
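The job name suggests the remaining steps check out the repository and run the cloverage coverage tool, which you could also reproduce locally with lein cloverage if the project uses the Leiningen plugin.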
(ns prime-factors.core
  (:require [clojure.core.logic :refer :all])
  (:require [clojure.core.logic.fd :refer [in interval eq]]))

(defn factor-pairs [number]
  (run* [pair]
    (fresh [factor1 factor2]
      (in factor1 factor2 (interval 2 number))
      (eq (= number (* factor1 factor2)))
      (== pair [factor1 factor2]))))
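A quick check at the REPL, assuming the namespace is loaded: (factor-pairs 12) should yield the four pairs [2 6], [3 4], [4 3] and [6 2] (result order may vary), since those are the pairs in [2, 12] whose product is 12.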
#!/bin/bash
# usage: <script> <start> <end>
START=$1                        # first positional argument
END=$2                          # second positional argument
REGIONS="us-east-1 us-west-2"   # space-separated list of AWS regions
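The snippet ends after the variable declarations; a hypothetical continuation, only to show how such a region list is usually consumed:

for region in $REGIONS; do
  echo "processing $region from $START to $END"   # placeholder for the real work
done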
pipeline {
    agent any
    stages {
        stage('usernamePassword') {
            steps {
                script {
                    withCredentials([
                        // cut off in the original; the binding also needs passwordVariable:
                        usernamePassword(credentialsId: 'gitlab',
                                         usernameVariable: 'username',
                                         passwordVariable: 'password')
                    ]) { // $username and $password are available in this block
                    }
                }
            }
        }
    }
}
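Inside the withCredentials block, Jenkins exposes the two bindings as environment variables and masks their values in the build log; outside the block they are undefined.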
lfs hsm_archive /mnt/lustre/<path>/<filename>   Copies the file to the archive.
lfs hsm_release /mnt/lustre/<path>/<filename>   Removes the file from the Lustre file system; does not affect the archived copy.
lfs hsm_restore /mnt/lustre/<path>/<filename>   Restores the archived file back to the Lustre file system. This is an asynchronous, non-blocking restore. A client's request to access a released file will also restore it, but that restore is synchronous and blocking.
lfs hsm_cancel  /mnt/lustre/<path>/<filename>   Cancels an lfs hsm command that is underway.
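A typical round trip, assuming HSM is configured and a copytool is running (/mnt/lustre/data/bigfile is a hypothetical path):

lfs hsm_archive /mnt/lustre/data/bigfile    # copy the file to the archive
lfs hsm_state   /mnt/lustre/data/bigfile    # inspect its HSM state
lfs hsm_release /mnt/lustre/data/bigfile    # free the space on Lustre
lfs hsm_restore /mnt/lustre/data/bigfile    # stage it back in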
Uploading big files to Amazon S3 can be a bit of a pain when you're on an unstable network connection. If an error occurs, your transfer is cancelled and you have to start the upload process all over again.

To check the integrity of a file that was uploaded in multiple parts, you can calculate the checksum of the local file and compare it with the checksum on S3. The problem is that Amazon doesn't use a regular MD5 hash for multipart uploads. In this post we'll take a look at how you can compute the correct checksum on your computer so you can compare it to the checksum calculated by Amazon.

The solution: if you want to check whether your files were transferred correctly, you have to compute the ETag hash in the same way that Amazon does. Luckily there is a bash script which splits up your file (like the multipart upload) and calculates the correct ETag hash.
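For a multipart upload, S3's ETag is the MD5 of the concatenated binary MD5 digests of each part, followed by a dash and the part count. A minimal sketch of that calculation, assuming an 8 MB part size (the awscli default), GNU stat and openssl:

#!/usr/bin/env bash
# sketch: compute an S3-style multipart ETag for a local file
file=$1
partsize=$((8 * 1024 * 1024))   # must match the part size used for the upload
size=$(stat -c%s "$file")       # GNU stat; macOS needs stat -f%z
parts=$(( (size + partsize - 1) / partsize ))

# md5 each part as a raw binary digest, concatenate, then md5 the concatenation
digest=$(
  for i in $(seq 0 $((parts - 1))); do
    dd if="$file" bs=$partsize skip=$i count=1 2>/dev/null | openssl md5 -binary
  done | openssl md5 | awk '{print $NF}'
)
echo "${digest}-${parts}"

Compare the result with the ETag that aws s3api head-object --bucket <bucket> --key <key> reports for the uploaded object.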
#!/usr/bin/env bash
# lists all unused AWS security groups.
# a group is considered unused if it's not attached to any network interface.
# requires aws-cli and jq.

# all groups
aws ec2 describe-security-groups \
  | jq --raw-output '.SecurityGroups[] | [.GroupName, .GroupId] | @tsv' \
  | sort > /tmp/sg.all
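The script stops after collecting the full list; a sketch of the remaining comparison, assuming the in-use set is derived from the groups attached to network interfaces:

# groups attached to at least one network interface
aws ec2 describe-network-interfaces \
  | jq --raw-output '.NetworkInterfaces[].Groups[] | [.GroupName, .GroupId] | @tsv' \
  | sort -u > /tmp/sg.used

# unused = listed in sg.all but not in sg.used
comm -23 /tmp/sg.all /tmp/sg.used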