This is a rough outline of how we successfully did an in-place control + data plane upgrade from Istio 1.4.7 -> 1.5.4 via the official Helm charts.
The upgrade was:
- applied via scripting/automation
- on a mesh using
  - mTLS
  - Istio RBAC via `AuthorizationPolicy`
  - telemetry v1
  - tracing enabled, but with Jaeger not deployed via the istio chart
  - an istio ingress gateway plus a secondary istio ingress gateway
- done with active traffic flowing through, without any observed increase in error rates
Gotchas we hit, ignoring anything already specifically mentioned in the upgrade notes:
- Bug in RBAC backward compatibility with 1.4 in 1.5.0 -> 1.5.2, fixed in 1.5.3
- Issue with visibility of `ServiceEntry`s being scoped using the `Sidecar` resource - istio/istio#24251, subsequently added to the upgrade notes
- All traffic ports are now captured by default; this caused our non-mTLS metrics ports to start enforcing mTLS, which they previously did not do on 1.4.7
  - Fix: exclude the metrics ports via sidecar annotations, e.g. `traffic.sidecar.istio.io/excludeInboundPorts: "9080, 15090"`
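For illustration, one way to get that annotation onto an existing workload with `kubectl patch`; `my-namespace` and `my-service` are placeholders, not names from our setup:

```bash
# Merge the exclusion annotation into the pod template; changing the template
# triggers a rolling restart, after which the sidecar's iptables rules stop
# capturing these plaintext metrics ports
kubectl -n my-namespace patch deployment my-service --type merge -p '
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "9080, 15090"'
```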
To prepare for the Helm upgrade, we first dealt with the Galley-managed validating webhook:

```bash
#!/usr/bin/env bash
set -eu

# In 1.4 Galley manages the webhook configuration; in 1.5 Helm manages it and it is
# patched by Galley dynamically without `ownerReferences`, so the presence of
# `ownerReferences` tells us whether we have upgraded Galley already.
if kubectl get validatingwebhookconfiguration/istio-galley -o yaml | grep -q ownerReferences; then
  echo "Detected 1.4 installation - preparing Helm upgrade to 1.5.x by deleting galley-managed webhook..."
  # Disable webhook reconciliation so Galley does not immediately re-create the webhook
  kubectl get deployment/istio-galley -n istio-system -o yaml | \
    sed 's/enable-reconcileWebhookConfiguration=true/enable-reconcileWebhookConfiguration=false/' | \
    kubectl apply -f -
  # Wait for Galley to come back up
  kubectl rollout status deployment/istio-galley -n istio-system --timeout 60s
  # Delete the webhook (ValidatingWebhookConfigurations are cluster-scoped, hence no namespace)
  kubectl delete validatingwebhookconfiguration istio-galley
  # Now we can proceed to `helm upgrade` to 1.5, which will recreate the webhook
fi
```
Not to be taken literally - this is pseudo-script...

```bash
helm upgrade --install --wait --atomic --cleanup-on-fail istio-init istio-init-1.5.4.tgz
# scripting to wait for jobs to complete goes here
helm upgrade --install --wait --atomic --cleanup-on-fail istio istio-1.5.4.tgz
# scripting to bounce `Deployment`s for injected services goes here
```
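For illustration, a minimal sketch of the glue scripting hinted at by those comments; the `istio-injection=enabled` namespace label is an assumption about a fairly standard setup, and this is not our literal automation:

```bash
# Wait for the istio-init CRD jobs to finish before upgrading the main chart
kubectl wait jobs --all -n istio-system --for=condition=complete --timeout=300s

# Bounce every Deployment in sidecar-injected namespaces so pods pick up the
# 1.5.4 sidecar (kubectl >= 1.15 for `rollout restart`)
for ns in $(kubectl get namespaces -l istio-injection=enabled -o jsonpath='{.items[*].metadata.name}'); do
  for deploy in $(kubectl get deployments -n "$ns" -o name); do
    kubectl rollout restart "$deploy" -n "$ns"
    kubectl rollout status "$deploy" -n "$ns" --timeout 300s
  done
done
```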
We noticed issues with ingress gateways coming up during the control plane upgrade. There appeared to be some kind of race condition when starting new 1.5.4 `ingressgateway` instances while parts of the 1.4.7 control plane were still running - perhaps a problem with a new-version ingressgateway talking to an old-version Pilot?
Symptom
- lots of weird errors in the new-version ingressgateway logs about invalid configuration being received from Pilot, relating to tracing
- a subset of the new-version ingress gateways would not become ready, which could cause the `helm upgrade --wait` to get stuck
Fix
- delete the pods that fail to become ready (manual intervention in our case, although technically possible to automate - see the sketch below)
- the automatically re-created pods always came up ready, and the `helm upgrade` then ran to completion
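A minimal sketch of automating that deletion, assuming the gateways carry the standard `app=istio-ingressgateway` label (a secondary gateway would need its own label):

```bash
# Find ingressgateway pods whose Ready condition is not True and delete them;
# the Deployment re-creates them, and the replacements always came up ready for us
kubectl get pods -n istio-system -l app=istio-ingressgateway \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
  | awk '$2 != "True" {print $1}' \
  | xargs -r -n1 kubectl delete pod -n istio-system
```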
@chadlwilson Thanks so much for getting back to me. Yeah, like you, the Helm abandonment by Istio has led to a lot of grief for us; we're also in a place where we don't have the luxury of downtime in production. The script you created was helpful, and I was able to perform a clean Helm upgrade in a test cluster from 1.4.9 to 1.5.10. In order to get traffic flowing we also had to remove this env var from pilot: `PILOT_DISABLE_XDS_MARSHALING_TO_ANY: true`.

When I attempted this in our staging cluster, I still ran into trouble going from 1.4.9 -> 1.5.10: Helm timed out trying to patch a bunch of resources, and after several failed attempts to move on I reverted everything to the previous 1.4.9 state.

In a previous attempt I did try to jump from 1.4 to a canary of 1.6 like you described, matching the 1.4 config. This is the odd piece about that whole situation: same `istio-system` namespace, updated all the namespaces with the canary label, bounced the workloads, and everything was beautiful and confirmed running. It wasn't until the next day, after a Helm deployment of one of our applications, that traffic across one gateway stopped being able to reach the services, due to certificate issues between the gateway and all the services it provided ingress to. Not sure about the `istio-ca-secret` and whether it changed, but they were in the same namespace. This is my 2nd failed attempt to move forward off of 1.4 in this cluster - lots of fun; apparently we picked the wrong time to start using Istio.
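For reference, one way to remove that env var - assuming the 1.4-era pilot Deployment is named `istio-pilot` as in the official charts - is:

```bash
# The trailing '-' after the variable name unsets it on pilot's containers
kubectl -n istio-system set env deployment/istio-pilot PILOT_DISABLE_XDS_MARSHALING_TO_ANY-
```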