Cluster Creation Scripts

Scripts to use as inspiration for your own OpenShift clusters.

Setup

  1. Create a directory with the name of the cluster you want to create.
  2. Generate an install-config.yaml (for example, using openshift-install create install-config).
  3. Copy your install-config.yaml to install-config.yaml.save. This ensures you still have a copy after the installer consumes and deletes install-config.yaml.
  4. Add a bin/ subdirectory and add it to the beginning of your PATH. I recommend you use direnv to manage this (sudo dnf install direnv).
  5. Download an installer using download-openshift-install.sh. Run with no arguments for a text UI, or pass the latest parameter to download the latest good nightly build of the version currently in development. See the sketch after this list.
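
For example, the whole setup might look like the following sketch (the cluster name mycluster is hypothetical, these scripts are assumed to be on your PATH, and PATH_add is a direnv stdlib helper that prepends a directory to PATH):

mkdir mycluster && cd mycluster
openshift-install create install-config
cp install-config.yaml install-config.yaml.save
mkdir bin
echo "PATH_add bin" > .envrc && direnv allow
download-openshift-install.sh latest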

Run

Run the rest of the scripts:

  • destroy-cluster.sh: cleans up most AWS resources.
  • cleanup-records-sets.sh: cleans up leftover AWS Route53 (DNS) records.
  • reset-config.sh: cleans up files from the previous cluster and copies install-config.yaml.save to install-config.yaml.
  • create-cluster.sh: runs the installer.
  • create-admin-user.sh: creates and configures an admin user. Does not remove kubeadmin.
  • generate-or-renew-certs.sh: generates or renews SSL certificates using Let's Encrypt.
  • install-certs.sh: configures the cluster to use the SSL certificates.
  • deploy-user-grafana.sh: deploys a Grafana instance with access to the cluster metrics.

Example

destroy-cluster.sh ; cleanup-records-sets.sh && reset-config.sh && create-cluster.sh && create-admin-user.sh && generate-or-renew-certs.sh && install-certs.sh

The leading ; (rather than &&) lets the chain continue even if there is no existing cluster to destroy; every later step stops the chain on failure.
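
Once the chain completes, you can log in with the new admin user (a sketch, assuming the standard api.<cluster-name>.<base-domain>:6443 API URL and the credentials hard-coded in create-admin-user.sh):

oc login "https://api.$(basename "$PWD").group-b.devcluster.openshift.com:6443" \
  --username sanchezl --password 'dAGxvX#8'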
#!/usr/bin/env bash
# certbot-auth-hook.sh: certbot manual auth/cleanup hook used by
# generate-or-renew-certs.sh. $1 is the Route53 change action (UPSERT or
# DELETE) and $2 is the hosted zone DNS name; certbot exports CERTBOT_DOMAIN
# and CERTBOT_VALIDATION before invoking the hook.
aws route53 wait resource-record-sets-changed --id \
  "$(aws route53 change-resource-record-sets --hosted-zone-id \
    "$(aws route53 list-hosted-zones-by-name --dns-name "$2." \
      --query HostedZones[0].Id --output text)" \
    --query ChangeInfo.Id \
    --output text \
    --change-batch "{ \
      \"Changes\": [{ \
        \"Action\": \"$1\", \
        \"ResourceRecordSet\": { \
          \"Name\": \"_acme-challenge.${CERTBOT_DOMAIN}.\", \
          \"ResourceRecords\": [{\"Value\": \"\\\"${CERTBOT_VALIDATION}\\\"\"}], \
          \"Type\": \"TXT\", \
          \"TTL\": 30 \
        } \
      }] \
    }" \
  )"
#!/usr/bin/env bash
# cleanup-records-sets.sh: deletes any Route53 (DNS) record sets left over
# for this cluster in the given hosted zone.
hosted_zone_dns_name=group-b.devcluster.openshift.com.
cluster_name=$(basename "$PWD")
hosted_zone_id=$(
  aws route53 list-hosted-zones-by-name \
    --dns-name "${hosted_zone_dns_name}" \
    --query HostedZones[0].Id \
    --output text
)
resource_record_sets=$(
  aws route53 list-resource-record-sets \
    --hosted-zone-id "${hosted_zone_id}" \
    --query "ResourceRecordSets[?contains(Name,'${cluster_name}')]" \
    --output json
)
if [ "${resource_record_sets}" = "[]" ] ; then
  printf "No resource record sets found for cluster %s\n" "${cluster_name}"
  exit
fi
# wrap each record set in a DELETE change for a single change-batch request
batch=$(
  jq '{ "Changes":[ .[] | {"Action":"DELETE", "ResourceRecordSet": . }]}' <<< "${resource_record_sets}"
)
printf "Requesting deletion of resource record sets..."
change_info_id=$(
  aws route53 change-resource-record-sets \
    --hosted-zone-id "${hosted_zone_id}" \
    --query ChangeInfo.Id \
    --output text \
    --change-batch "${batch}"
)
printf "\n"
printf "Waiting for resource record sets to be deleted..."
aws route53 wait resource-record-sets-changed --id "${change_info_id}"
printf "\n"
#!/usr/bin/env bash
# create-admin-user.sh: creates an htpasswd identity provider and an admin
# user. Does not remove the default kubeadmin user.
user_name="sanchezl"
user_password="dAGxvX#8"
user_full_name="Luis Sanchez"
oc -n openshift-config create secret generic htpasswd \
  --from-file htpasswd=<(htpasswd -B -b -n "${user_name}" "${user_password}")
identity_provider_name="htpasswd"
oc patch oauths.config.openshift.io cluster --type merge --patch "---
spec:
  identityProviders:
  - htpasswd:
      fileData:
        name: htpasswd
    mappingMethod: claim
    name: ${identity_provider_name}
    type: HTPasswd
"
oc create user "${user_name}" --full-name "${user_full_name}"
user_identity="${identity_provider_name}:${user_name}"
oc create identity "${user_identity}"
oc create useridentitymapping "${user_identity}" "${user_name}"
oc adm policy add-cluster-role-to-user cluster-admin "${user_name}"
#!/usr/bin/env bash
# create-cluster.sh: runs the installer and sends a notification when done.
set -e
openshift-install create cluster
name="$(basename "$PWD")"
version="$(openshift-install version | awk '/^openshift-install/{print $2}')"
printf "Create cluster %s finished\n%s" "$name" "$version" | notify-me
#!/usr/bin/env bash
# deploy-user-grafana.sh: deploys a Grafana instance, via the community
# Grafana operator, with access to the cluster metrics.
targetNS="openshift-user-workload-monitoring"
set -e
# co prints the wrapped command's output in cyan; reset restores the
# terminal colors on exit.
function co { tput setaf 6; "$@"; tput sgr0; }
function reset { tput sgr0; }
trap reset EXIT
echo "Creating OperatorGroup..."
# add target namespace to operator group
co oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-user-workload-monitoring
  namespace: ${targetNS}
spec:
  targetNamespaces:
  - ${targetNS}
  upgradeStrategy: Default
EOF
# install operator
echo "Installing Grafana operator..."
co oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: ${targetNS}
spec:
  channel: v4
  installPlanApproval: Automatic
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: grafana-operator.v4.3.0
EOF
# wait for operator to be installed
echo "Waiting for Grafana operator install plan to complete..."
co oc -n ${targetNS} wait installplans.operators.coreos.com --all --for condition=Installed=True
# create grafana resource
echo "Creating a Grafana instance..."
co oc apply -f - <<EOF
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: grafana-user-workload
  namespace: ${targetNS}
spec:
  config: {}
  ingress:
    enabled: true
EOF
echo "Waiting for Grafana instance to come up..."
co oc -n ${targetNS} wait grafana.integreatly.org/grafana-user-workload --all --for jsonpath='{status.phase}'=reconciling
# Add grafana-serviceaccount service account to the cluster-monitoring-view cluster role.
echo "Adding Grafana service account to instance to cluster-monitoring-view role..."
co oc -n "${targetNS}" adm policy add-cluster-role-to-user cluster-monitoring-view -z grafana-serviceaccount
# setup data source
echo "Adding Grafana datasource to cluster metrics..."
host=$(oc -n openshift-monitoring get route thanos-querier -o go-template='{{.spec.host}}')
token=$(oc -n "${targetNS}" serviceaccounts get-token grafana-serviceaccount)
co oc apply -f - <<EOF
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prometheus-grafanadatasource
  namespace: ${targetNS}
spec:
  datasources:
  - access: proxy
    editable: true
    isDefault: true
    jsonData:
      httpHeaderName1: 'Authorization'
      timeInterval: 5s
      tlsSkipVerify: true
    name: Prometheus
    secureJsonData:
      httpHeaderValue1: 'Bearer ${token}'
    type: prometheus
    url: 'https://${host}'
  name: prometheus-grafanadatasource.yaml
EOF
echo "Setting Grafana admin password..."
co oc -n "${targetNS}" patch secret grafana-admin-credentials --patch "$(printf "data:\n GF_SECURITY_ADMIN_PASSWORD: %s" "$(base64 <(printf "password"))")"
printf "Grafana available at: %s\n" "$(oc -n openshift-user-workload-monitoring get routes.route.openshift.io grafana-route -o go-template='https://{{.spec.host}}{{.spec.path}}')"
#!/usr/bin/env bash
# destroy-cluster.sh: destroys the cluster and sends a notification.
openshift-install destroy cluster && echo "Cluster $(basename "$PWD") destroyed." | notify-me
#!/usr/bin/env bash
# Downloads the oc client from a selected release image into bin/.
releaseControllerUrl="https://amd64.ocp.releases.ci.openshift.org"
# scrape the release controller's page for the available image streams
mapfile -t menu_items < \
  <(curl -L -s ${releaseControllerUrl} \
    | grep "<h2" \
    | sort -V \
    | sed -E 's/.*title="([^"]*)".*id="([^"]*)".*/\2\n\1/' \
  )
latest_nightly=$(printf "%s\n" "${menu_items[@]}" | grep ".nightly" | tail -n 1)
if [ "$1" == "latest" ] ; then
  release="${latest_nightly}"
elif [ -n "$1" ] ; then
  release="$1"
else
  release=$( \
    whiptail --title "Download OpenShift Client" \
      --menu "Select an image stream to download oc from:" \
      20 80 12 "${menu_items[@]}" \
      3>&1 1>&2 2>&3
  )
fi
release_pullSpec=$(curl -L -s "${releaseControllerUrl}/api/v1/releasestream/${release}/latest?format=pullSpec")
if [ -n "$release" ] ; then
  OC=oc
  # if the oc on the PATH is the bin/oc we are about to overwrite, run a
  # temporary copy of it instead
  if [ "$(readlink -f "$(which oc)")" == "$(readlink -f bin/oc)" ]; then
    OC="$(mktemp --suffix .oc)"
    trap 'rm -vf "$OC"' EXIT
    cp bin/oc "$OC"
    chmod +x "$OC"
  fi
  echo "Extracting oc to bin/..."
  "$OC" adm release extract --command=oc --to=bin/ "${release_pullSpec}"
fi
#!/usr/bin/env bash
# download-openshift-install.sh: downloads the openshift-install binary from
# a selected release image into bin/.
releaseControllerUrl="https://amd64.ocp.releases.ci.openshift.org"
# scrape the release controller's page for the available image streams
mapfile -t menu_items < \
  <(curl -L -s ${releaseControllerUrl} \
    | grep "<h2" \
    | sort -V \
    | sed -E 's/.*title="([^"]*)".*id="([^"]*)".*/\2\n\1/' \
  )
latest_nightly=$(printf "%s\n" "${menu_items[@]}" | grep ".nightly" | tail -n 1)
if [ "$1" == "latest" ] ; then
  release="${latest_nightly}"
elif [ -n "$1" ] ; then
  release="$1"
else
  release=$( \
    whiptail --title "Download OpenShift Installer" \
      --menu "Select an image stream to download openshift-install from:" \
      20 80 12 "${menu_items[@]}" \
      3>&1 1>&2 2>&3
  )
fi
# the 4-stable stream requires picking a specific accepted release
if [ "${release}" == "4-stable" ] ; then
  mapfile -t menu_items < \
    <(curl -L -s "${releaseControllerUrl}/api/v1/releasestream/${release}/tags?phase=Accepted&format=json" \
      | jq -r '.tags[]|select(.name|contains("-")|not)|select(.pullSpec|contains("@sha")|not)|.pullSpec,""' \
    )
  release_pullSpec=$( \
    whiptail --title "Download Stable OpenShift Installer" \
      --menu "Select a release to download openshift-install from:" \
      20 80 12 "${menu_items[@]}" \
      3>&1 1>&2 2>&3
  )
fi
if [ -z "$release_pullSpec" ] ; then
  release_pullSpec=$(curl -L -s "${releaseControllerUrl}/api/v1/releasestream/${release}/latest?format=pullSpec")
fi
if [ -n "$release" ] ; then
  echo "Extracting openshift-install (${release_pullSpec}) to bin/..."
  oc adm release extract --command=openshift-install --to=bin/ "${release_pullSpec}"
fi
#!/usr/bin/env bash
# generate-or-renew-certs.sh: generates or renews SSL certificates using
# Let's Encrypt, answering the DNS challenge via Route53.
cluster_name="$(basename "$PWD")"
base_domain="group-b.devcluster.openshift.com"
#domains="api.${cluster_name}.${base_domain},*.apps.${cluster_name}.${base_domain}"
domains="*.apps.${cluster_name}.${base_domain}"
# authenticator script should be in the same directory as this script
# TODO: look into using the certbot-dns-route53 plugin (via the --dns-route53 option) instead
auth_hook="$(cd "$(dirname "${BASH_SOURCE[0]}")" || exit; pwd)/certbot-auth-hook.sh"
# use Let's Encrypt certbot to order a free certificate
certbot certonly --non-interactive --manual \
  --manual-auth-hook "${auth_hook} UPSERT ${base_domain}" \
  --manual-cleanup-hook "${auth_hook} DELETE ${base_domain}" \
  --preferred-challenges dns \
  --config-dir "$PWD/letsencrypt" \
  --work-dir "$PWD/letsencrypt" \
  --logs-dir "$PWD/letsencrypt" \
  --agree-tos \
  --domains "${domains}" \
  --email sanchezl@redhat.com
#!/usr/bin/env bash
# install-certs.sh: configures the cluster's default ingress controller to
# serve the Let's Encrypt certificate.
cluster_name="$(basename "$PWD")"
base_domain="group-b.devcluster.openshift.com"
cluster_domain="${cluster_name}.${base_domain}"
cert_path="$PWD/letsencrypt/live/apps.${cluster_domain}/fullchain.pem"
key_path="$PWD/letsencrypt/live/apps.${cluster_domain}/privkey.pem"
oc --namespace openshift-ingress create secret tls letsencrypt --cert "${cert_path}" --key "${key_path}"
oc --namespace openshift-ingress-operator \
  patch ingresscontrollers.operator.openshift.io default --type=merge --patch "---
spec:
  defaultCertificate:
    name: letsencrypt
"
#!/usr/bin/env bash
# Recreates the cluster from scratch: destroy, clean up DNS records, reset
# the config, reinstall, then recreate the admin user and certificates.
destroy-cluster.sh ; \
cleanup-records-sets.sh && \
reset-config.sh && \
create-cluster.sh && \
create-admin-user.sh && \
generate-or-renew-certs.sh && \
install-certs.sh
#!/usr/bin/env bash
# reset-config.sh: removes files from the previous cluster and restores
# install-config.yaml from the saved copy.
backup_config="install-config.yaml.save"
if [ ! -f "${backup_config}" ]; then
  echo "${backup_config} file not found"
  exit 1
fi
rm -vRf \
  metadata.json \
  .openshift_install.log \
  .openshift_install_state.json \
  terraform.* \
  auth \
  tls
cp "${backup_config}" install-config.yaml
#!/usr/bin/env bash
# Refreshes the registry.ci.openshift.org auth token in every
# install-config.yaml.save under the current directory, copying it from
# ~/.docker/config.json.
auth=$(< ~/.docker/config.json jq '.auths["registry.ci.openshift.org"].auth' -r)
find . -name install-config.yaml.save -print0 | \
  xargs --null \
    sed --in-place 's/\("registry.ci.openshift.org":{"auth":"\)[^"]*\("}\)/\1'"$auth"'\2/'