@bfleming-ciena
Last active June 5, 2018 22:56
Rancher 1.6.13, Kubernetes 1.9.5, Istio 0.8.0 Configuration (working)
Deployment
Update kube-apiserver (it runs under the kubernetes service) in Rancher via the UI.
Click Upgrade.
Replace the command line with the following. The IP range below is my cluster IP range; yours may differ, so reuse whatever value was already there.
kube-apiserver --storage-backend=etcd2 --storage-media-type=application/json --service-cluster-ip-range=10.43.0.0/16 --etcd-servers=http://etcd.kubernetes.rancher.internal:2379 --insecure-bind-address=0.0.0.0 --insecure-port=0 --cloud-provider=rancher --allow_privileged=true --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota,MutatingAdmissionWebhook,ValidatingAdmissionWebhook --client-ca-file=/etc/kubernetes/ssl/ca.pem --tls-cert-file=/etc/kubernetes/ssl/cert.pem --tls-private-key-file=/etc/kubernetes/ssl/key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/cert.pem --kubelet-client-key=/etc/kubernetes/ssl/key.pem --runtime-config=batch/v2alpha1 --anonymous-auth=false --authentication-token-webhook-config-file=/etc/kubernetes/authconfig --runtime-config=authentication.k8s.io/v1beta1=true --external-hostname=kubernetes.kubernetes.rancher.internal
Clone the Istio repo
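The gist doesn't spell out the clone step. A sketch, assuming you want the 0.8.0 tag of the upstream repo (the official 0.8.0 release archive from istio.io contains the same chart):

```shell
# Clone the Istio repo and check out the 0.8.0 release tag, which contains
# the Helm chart used below under install/kubernetes/helm/istio.
git clone https://github.com/istio/istio.git
cd istio
git checkout 0.8.0
```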
Deploy Istio 0.8.0
kubectl create namespace istio-system
Note: I use the includeIPRanges option. It limits the Istio proxy to intercepting traffic destined for the cluster IP range, so egress rules are ignored for external resources. This forfeits the security benefits of egress control, but I want it for testing because I don't want to create a pile of egress rules right now to open every port we use.
The tracing option is an extra; Istio requires your app to propagate tracing data in HTTP headers, so it's not useful until the app does that.
# I used the helm template generation option.
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set tracing.enabled=true --set global.proxy.includeIPRanges="10.43.0.0/16" | kubectl apply -f -
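Before moving on, it's worth confirming the control-plane pods actually come up (a quick check against the cluster; pod names will vary):

```shell
# All istio-system pods should eventually report Running (or Completed for jobs).
kubectl get pods -n istio-system
```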
kubectl label namespace meshtest istio-injection=enabled
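A quick way to verify the label took effect (the -L flag adds a label column to the output):

```shell
# The istio-injection column should show "enabled" for meshtest.
kubectl get namespace meshtest -L istio-injection
```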
Switch to meshtest namespace
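One way to switch the default namespace for subsequent kubectl commands, assuming a single current context (adjust if you juggle multiple clusters):

```shell
# Point the current context at the meshtest namespace.
kubectl config set-context "$(kubectl config current-context)" --namespace=meshtest
```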
Delete all pods (anything running) so they are recreated with the sidecar injected.
kubectl delete pod $(kubectl get pods -o jsonpath='{.items[*].metadata.name}')
They will come back up with an extra container (2/2). Round-robin is the default load-balancing policy, so nothing else is needed!
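To confirm the sidecar was actually injected (the pod name is a placeholder, substitute one of yours):

```shell
# READY should show 2/2 for each pod: the app container plus istio-proxy.
kubectl get pods
# Listing the container names in one pod should include "istio-proxy".
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
```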
Here is a useful test to check whether requests to a service are being balanced round-robin (someservice is a placeholder for your service name):
for i in `seq 1 20`; do curl -s someservice:8080; echo; done | sort | uniq -c
Round-robin will show an even distribution of responses across the backend pods.
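To see what an even distribution looks like without a cluster, you can simulate the sort | uniq -c pipeline locally. This assumes two hypothetical backends, pod-a and pod-b, answering alternately, as ideal round-robin would:

```shell
# Simulate 20 requests served alternately by two backends, then count
# responses per backend the same way the curl loop above does.
for i in $(seq 1 20); do
  if [ $((i % 2)) -eq 0 ]; then echo "pod-a"; else echo "pod-b"; fi
done | sort | uniq -c
# Prints 10 for pod-a and 10 for pod-b: a perfectly even split.
```

A skewed count here (e.g. 18 vs 2) would suggest the service is not round-robin balanced.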