Install GRUB modules for BIOS and EFI:
yum install grub2-pc-modules grub2-efi-x64-modules
Install xorriso and mtools rpms that are used by grub2-mkrescue while creating the ISO image:
yum install xorriso mtools
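With the packages in place, a bootable ISO can be produced with grub2-mkrescue. The output name and source directory below are placeholders, not from the original note:

```shell
# Build a BIOS+EFI bootable ISO from the contents of iso_root/
# (iso_root/ is a placeholder for your staging directory)
grub2-mkrescue -o bootable.iso iso_root/
```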
The CDROM ISO image is in LOCKED status. SSH to the RHEV Engine VM and unlock the particular disk:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk 6032270c-1cf7-4050-af4a-e903dd94b727
Unlock all objects:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all
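Before unlocking, you may want to see which entities are currently locked. This is a sketch assuming the query mode of oVirt's dbutils script; verify the flags with `-h` on your version:

```shell
# Show the script's options for your RHV version
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -h
# Query mode (assumption: -q lists locked entities without changing anything)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t disk
```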
After RHVH restart, the RHEV Engine VM doesn't come up. SSH to RHVH:
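A typical first step on the host (an assumption, not spelled out in the original note) is to check and start the Engine VM with the hosted-engine tool:

```shell
# Check the status of the self-hosted Engine VM
hosted-engine --vm-status
# If the Engine VM is reported as down, try starting it
hosted-engine --vm-start
```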
# fetch etcd client credentials
FILES='
/etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt
/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt
/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key
'
APISERVER_POD=$(oc get pod --namespace openshift-kube-apiserver --selector apiserver=true --output jsonpath='{.items[0].metadata.name}')
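One plausible continuation (an assumption, not part of the original snippet) is to copy each credential file out of the apiserver pod with `oc cp`:

```shell
# Copy each etcd client credential file from the apiserver pod
# into the current directory (filenames flattened with basename)
for FILE in $FILES; do
  oc cp openshift-kube-apiserver/$APISERVER_POD:$FILE $(basename $FILE)
done
```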
The corporate DNS server that is outside of our control doesn't handle AAAA queries properly. When sent an AAAA query, the DNS server doesn't respond at all. A properly working DNS server returns NOERROR, ANSWER: 0 if there is no AAAA record for a given name; a misconfigured DNS server doesn't send any response.
In an IPv6-enabled environment, the client tries to resolve both A and AAAA addresses. If the DNS server doesn't send any reply, the client repeats the query and eventually times out. Only after the AAAA query times out does the client fall back to the A address. Waiting for these timeouts renders utilities like curl, kubectl, oc, ... and others unusable.
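You can confirm the behavior by comparing A and AAAA lookups directly with dig. The server address and hostname below are placeholders:

```shell
# A query: a working server answers promptly
dig @192.0.2.53 +time=2 +tries=1 example.com A
# AAAA query against the broken server: hangs until the timeout
# instead of returning NOERROR with an empty answer section
dig @192.0.2.53 +time=2 +tries=1 example.com AAAA
```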
# dump all forwarded input to stdout
<source>
  @type forward
  bind 0.0.0.0
  port 24224
</source>
These instructions can be run from a command line using oc authenticated to the target cluster. They were derived from the OpenShift documentation for installing the Cluster Logging Operator using the CLI. Here is an Ansible playbook that was also used during testing.
Create a file named logging_namespace.yml with the following content:
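A minimal sketch of creating that file, assuming the openshift-logging namespace manifest from the OpenShift Cluster Logging documentation (verify the annotations and labels against your OpenShift version):

```shell
cat > logging_namespace.yml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
```

Apply it with `oc apply -f logging_namespace.yml`.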
Install the operator using the Manual approval strategy (see the attached screenshot).
An install plan has been created but not executed as it has not been approved:
oc get installplan -n openshift-logging
NAME CSV APPROVAL APPROVED
install-dq68d clusterlogging.4.5.0-202007012112.p0 Manual false
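To approve the install plan from the CLI, you can patch its `spec.approved` field, following the approach in the OLM documentation (the plan name is taken from the listing above):

```shell
oc patch installplan install-dq68d \
  --namespace openshift-logging \
  --type merge \
  --patch '{"spec":{"approved":true}}'
```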
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
  - image: quay.io/noseka1/openshift-toolbox:basic
    name: openshift-toolbox
    volumeMounts:
    - name: nfs-volume
      mountPath: /mnt  # adjust the mount path as needed
  volumes:
  - name: nfs-volume
    nfs:
      server: nfs.example.com  # replace with your NFS server address
      path: /exports           # replace with the exported path
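Once the manifest is complete (with a mountPath and a volumes section pointing at your NFS export), the pod can be created and the mount inspected. The filename below is an assumption:

```shell
oc apply -f nfs-client-pod.yml
oc rsh nfs-client
# inside the pod, check the mount (path depends on your mountPath):
df -h /mnt
```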