jeesmon / istio-fips-build.sh: Istio FIPS Build
Last active April 3, 2024 18:45
#!/bin/bash -ex
# yum install -y docker git patch jq
# systemctl start docker
# docker info
ISTIO_VERSION=${ISTIO_VERSION:-1.19.3}
MAJOR_ISTIO_VERSION=$(cut -f1-2 -d. <<< ${ISTIO_VERSION})
# Need a custom build-tools-proxy image for 1.18.3+

From: kube-object-storage/lib-bucket-provisioner#132 (comment)

The best versioning story IMO to avoid the challenges with CRD versioning is the following:

  • Newer versions of the library should be backward compatible.
    • Deprecated properties are ignored.
    • Added properties have defaults that preserve the behavior of previous CRD versions when used with the latest library.
  • Properties added to the CRD are ignored by older versions of the library.

If this is done, there isn't a need to change the CRD version every time a property changes.

# dump all apiserver metrics as JSON
kubectl get --raw /metrics | prom2json | jq '.'

# show which deprecated APIs are being requested
kubectl get --raw /metrics | prom2json | jq '
  .[] | select(.name=="apiserver_requested_deprecated_apis").metrics[]
'
# get the cluster's ingress domain
oc -n openshift-ingress get ingress.config.openshift.io cluster -o jsonpath='{.spec.domain}'
//+kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
//+kubebuilder:printcolumn:name="Ready",type="string",JSONPath=".status.conditions[?(@.type==\"Available\")].status"
// decUnstructured decodes YAML/JSON bytes into unstructured objects
// (yaml here is k8s.io/apimachinery/pkg/runtime/serializer/yaml).
var decUnstructured = yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)

// DecodePodSpec wraps a PodSpec in a placeholder Pod and converts it
// to an Unstructured object.
func DecodePodSpec(spec corev1.PodSpec) (*unstructured.Unstructured, error) {
	pod := &corev1.Pod{
		ObjectMeta: v1.ObjectMeta{
			Name:      "name",
			Namespace: "namespace",
		},
		Spec: *spec.DeepCopy(),
	}
	obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(pod)
	if err != nil {
		return nil, err
	}
	return &unstructured.Unstructured{Object: obj}, nil
}

Did you know that, with the proper admin permissions, you can connect to and debug cluster nodes in OCP (even when SSH is disabled, e.g. in ROKS on Satellite)?

In OCP Web Console:

Navigate to Compute -> Nodes -> Click on a Node Name -> Terminal

On command line:

# find node names
oc get nodes
# open a debug shell on a node
oc debug node/<node-name>