- create the placement that references the set of clustersets
- create managedclustersetbinding to allow placement access to the clusterset
- create the gitopscluster resource to bind the argocd cluster to the placement
- enable argocd apps in the klusterletaddonconfig manifests
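A minimal sketch of the first three steps, assuming a clusterset named dev-clusters and Argo CD running in the default openshift-gitops namespace; the API versions shown are the v1beta1 ones and may vary by release. Enabling Argo CD apps in the klusterletaddonconfig is a per-cluster edit and is not shown here.

# hypothetical names: dev-clusters (clusterset), dev-placement, argo-acm-clusters
oc apply -f - <<EOF
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ManagedClusterSetBinding
metadata:
  name: dev-clusters
  namespace: openshift-gitops
spec:
  clusterSet: dev-clusters
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: dev-placement
  namespace: openshift-gitops
spec:
  clusterSets:
    - dev-clusters
---
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-acm-clusters
  namespace: openshift-gitops
spec:
  argoServer:
    cluster: local-cluster
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: dev-placement
EOF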
datestring() {
  # return the elapsed time, in minutes, between two "%Y-%m-%dT%H:%M:%SZ" timestamps
  # (BSD/macOS date syntax: -j -f parses the given format without setting the clock)
  sdate=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" $1 "+%s")
  edate=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" $2 "+%s")
  delta=$(( (edate - sdate) / 60 ))
  echo "$delta"
}
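# example usage of the helper above, with made-up timestamps:
#   datestring "2021-05-21T10:00:00Z" "2021-05-21T10:42:00Z"   # prints 42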
extract() {
  appsub_name_local=$1
Start with the custom registry, which is an index image containing multiple release packages. Find the bundle image name associated with release 2.3.0.
podman run --name custom-registry --rm -p 50051:50051 \
quay.io/acm-d/acm-custom-registry:2.0.11-DOWNSTREAM-2021-05-21-21-35-58
grpcurl -plaintext localhost:50051 api.Registry/ListBundles | jq '.bundlePath' | grep 2.3.0
Create an index image from the bundle.
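A sketch of that step using opm; the bundle path should come from the grpcurl query above, and the target tag is a placeholder:

# build a custom index image containing just the 2.3.0 bundle
opm index add \
  --bundles <bundle-image-from-grpcurl-output> \
  --tag quay.io/<your-org>/acm-custom-index:2.3.0 \
  --build-tool podman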
In order to mirror the downstream images, we need to get the image list from the index image. There are a number of ways to do this, but probably the simplest is to generate a mapping.txt file from the index image and feed it to oc image mirror.
- Generate the mapping.txt file (see the sketch below).
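A minimal sketch of that step, assuming the custom index image built above; the index image and mirror registry names are placeholders, and the generated manifests directory name will vary:

# generate the manifests (including mapping.txt) without mirroring any images
oc adm catalog mirror quay.io/<your-org>/acm-custom-index:2.3.0 <mirror-registry>/<namespace> --manifests-only
# mirror the images listed in the generated mapping.txt
oc image mirror -f manifests-<index>/mapping.txt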
Is it possible to test / verify moving managed cluster data from one hub to another? Or, create a cluster, then "remove" the data on the hub, and recreate it in place?
- have a hub with a managed cluster created through cli/manifests
- remove the data from the hub
- put the data back on the hub
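A rough, untested sketch of those three steps; mycluster is a placeholder name, deleting the ManagedCluster normally triggers a detach, and the exported YAML would likely need status and runtime metadata stripped before re-applying:

CLUSTER=mycluster
# capture the hub-side data for the managed cluster
oc get managedcluster $CLUSTER -o yaml > managedcluster.yaml
oc get klusterletaddonconfig $CLUSTER -n $CLUSTER -o yaml > klusterletaddonconfig.yaml
oc get secrets -n $CLUSTER -o yaml > cluster-secrets.yaml
# "remove" the data from the hub
oc delete managedcluster $CLUSTER
# put the data back on the hub
oc create namespace $CLUSTER
oc apply -f cluster-secrets.yaml
oc apply -f managedcluster.yaml
oc apply -f klusterletaddonconfig.yaml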
#!/bin/bash
# collect a record of all the running pods
for NS in `oc get namespace | grep open-cluster | awk '{print $1}'`
do
  oc get pods -n $NS >> temp.txt
done
# report the total number of pods in ACM
number_pods=$( cat temp.txt | grep -v NAME | wc -l | awk '{print $1}' )
I am writing this article to document sizing classification data for RHACM 2.2 deployments.
We define three generic size classifications based on the number of managed clusters.
We then deploy RHACM 2.2 at each size classification and measure the system under the workload.
For now, we will keep the workload very generic.
#!/bin/bash
NS=${NS:-open-cluster-management-observability}
# step 1
oc create namespace $NS
# step 2
# NOTE: the warning for export is just a warning, not an error
oc get secret multiclusterhub-operator-pull-secret -n open-cluster-management --export -o yaml | oc apply -n $NS -f -
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  creationTimestamp: null
  labels:
    hive.openshift.io/hiveutil-created: "true"
  name: fgt
spec:
  baseDomain: demo.red-chesterfield.com
  clusterName: fgt