I couldn't find instructions anywhere on the internet for how to craft a UEFI Secure Boot-able VMware ESXi installer ISO. I worked it out using the rEFInd OSS project. It's not perfect or optimal, but it does the job.
The JFrog documentation is lacking on how to do this properly: it expects you to follow the normal interactive docker login procedure rather than authenticate in an automated way. It gets even worse if you try to configure your task to authenticate to a private registry, which does not seem to be possible.
The solution is to adjust your user-data, preferably storing your config and key in Secrets Manager.
Putting the config into Secrets Manager:
aws secretsmanager update-secret --secret-id artifactory --region us-west-1 --secret-string '{"https://companyname-repo-virtual.jfrog.io": {"auth": "AUTHKEY_FROM_SETMEUP","email": "EMAIL_FROM_SETMEUP"}}'
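The instance user-data can then pull that secret at boot and hand it to the container agent. This is a minimal sketch, assuming this is for ECS container instances (hence the "task" above), that the instance role is allowed to read the secret, and that the agent config lives at /etc/ecs/ecs.config:

#!/bin/bash
# Fetch the dockercfg-style JSON stored in Secrets Manager above
AUTH_DATA=$(aws secretsmanager get-secret-value \
  --secret-id artifactory --region us-west-1 \
  --query SecretString --output text)

# Tell the ECS agent to use it when pulling from the private registry
cat >> /etc/ecs/ecs.config <<EOF
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA=${AUTH_DATA}
EOF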
# WARNING: DO NOT USE the tls_private_key resource like I have done in this example.
# Doing so will result in the private key being stored in state. You do not want that.
# Instead use an existing key pair and use the file interpolation function to source
# the private key from disk for use in the rsadecrypt interpolation function.
resource "tls_private_key" "key" {
  algorithm = "RSA"
}

resource "aws_key_pair" "key_pair" {
  # the key pair name is arbitrary; the public half of the generated key is uploaded
  key_name   = "example-key"
  public_key = "${tls_private_key.key.public_key_openssh}"
}
When you're decommissioning a machine that has been managed by Puppet, you may want to programmatically clean up the node. There are two parts to this:
- revoking and deleting the certificate of the node in Puppet's CA
- deactivating the node in PuppetDB
The following should work for Puppet 4.x and PuppetDB 4.x (including Puppet Enterprise 2016.4.x, 2017.1.x and 2017.2.x).
I've used certificate-based auth, and the examples are run from the Puppet master, so they make use of the existing certificates for authentication. When run remotely, the CA cert, a certificate and the corresponding private key will need to be present for authentication.
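As a rough sketch of what this looks like with curl (the node name is a placeholder, PuppetDB is assumed to be co-located on the master on port 8081, and the client certificate used must be authorised/whitelisted for the CA and PuppetDB endpoints):

MASTER=$(puppet config print certname)
NODE="node01.example.com"            # certname of the node being decommissioned
SSLDIR=/etc/puppetlabs/puppet/ssl

# 1. Revoke the node's certificate via the Puppet CA API, then delete it
curl -s --cert $SSLDIR/certs/$MASTER.pem --key $SSLDIR/private_keys/$MASTER.pem \
  --cacert $SSLDIR/certs/ca.pem -X PUT -H "Content-Type: text/pson" \
  --data '{"desired_state":"revoked"}' \
  "https://$MASTER:8140/puppet-ca/v1/certificate_status/$NODE"

curl -s --cert $SSLDIR/certs/$MASTER.pem --key $SSLDIR/private_keys/$MASTER.pem \
  --cacert $SSLDIR/certs/ca.pem -X DELETE \
  "https://$MASTER:8140/puppet-ca/v1/certificate_status/$NODE"

# 2. Deactivate the node in PuppetDB via the commands API
curl -s --cert $SSLDIR/certs/$MASTER.pem --key $SSLDIR/private_keys/$MASTER.pem \
  --cacert $SSLDIR/certs/ca.pem -X POST -H "Content-Type: application/json" \
  --data "{\"command\":\"deactivate node\",\"version\":3,\"payload\":{\"certname\":\"$NODE\",\"producer_timestamp\":\"$(date -u +%FT%TZ)\"}}" \
  "https://$MASTER:8081/pdb/cmd/v1"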
$FOLDER_NAME = "DownloadedFiles"
$TEAM_CITY = 'http://TC'
$BUILD_TYPE_ID = "Trunk_Ci_FastCi_Build"
$FILES_TO_DOWNLOAD = ".zip"

# Start from a clean download folder
$folderName = (".\" + $FOLDER_NAME)
If (Test-Path $folderName){
    Remove-Item $folderName -Recurse -Force
}
- no upfront installation or agents on remote/slave machines; SSH should be enough
- application components should be able to use third-party software that is deployed separately, e.g. HDFS or a Spark cluster
- configuration templating
- environment requirements/assertions, e.g. we need a JVM of a given version before deploying (see the sketch after this list)
- deployment process run from Jenkins
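A tiny illustration of the environment-assertion idea, as a hedged sketch (the host, user and required version are placeholders, and the real check would live in whatever deployment tooling is chosen):

REQUIRED_JAVA="1.8"
HOST="app01.example.com"

# Fail fast if the target host does not have the required JVM
ssh "deploy@$HOST" 'java -version 2>&1 | head -n1' | grep -q "\"$REQUIRED_JAVA" \
  || { echo "Java $REQUIRED_JAVA not found on $HOST - aborting" >&2; exit 1; }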
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define :local do |local|
    local.vm.box = "ubuntu/trusty64"
    local.vm.network "private_network", ip: "192.168.50.13"
ssh admin@nas | |
/etc/init.d/services.sh stop | |
/etc/init.d/xdove.sh stop | |
# See if there are still files open on the disk: | |
lsof | grep /share/MD0_DATA
# Kill the open processes, if any, e.g.:
kill $(lsof | grep /share/MD0_DATA | awk '{print $2}' | sort -u)
#!/usr/bin/awk -f
#
# Author: Matt Pascoe - matt@opennetadmin.com
#
# This awk script is used to extract relevant information from a dhcpd.conf
# config file and build dcm.pl output with appropriate fields. This can be
# used to bootstrap a new database from existing site data.
# As usual, inspect the output for accuracy.
# Also, you will get three types of output: subnet, pool and host. You must
# add the subnet information first, then the pools, then the hosts.
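#
# Example usage (a rough sketch; the script and output file names below are
# assumptions, not part of the original):
#   ./dhcpd_to_dcm.awk /etc/dhcp/dhcpd.conf > dcm_commands.sh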
def prefetch(resources = {})
  # generate hash of {provider_name => provider}
  providers = instances.inject({}) do |hash, instance|
    hash[instance.name] = instance
    hash
  end

  # Identify the namevar(s) for the type
  nv_properties = resource_type.properties.select(&:isnamevar?).map(&:name)
  nv_params = resource_type.parameters.select do |param|