Q & A from Traefik Online Meetup: HolidayCheck Cloud Platform Using Traefik

Question: Did you experience configuration drift between the multiple Traefik pods (behind the Traefik Service's VIP)? If so, how did you manage it?

Answer: No, not that we noticed. We are very happy with how the ingress controller works.

Question: Is your Traefik image a custom one or the upstream one?

Answer: Yes, we are still maintaining an internal fork with a small set of patches. Previously the set included patches from pending upstream PRs; since we were part of the maintainer team, we felt a strong sense of urgency to move our PRs forward and close the gap to upstream. Currently the only remaining patch adds OpenCensus support for our needs. However, we believe it will be eliminated in the near future thanks to the merger of OpenCensus and OpenTracing; the latter has been supported by Traefik for a long time.
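For context, Traefik's long-standing OpenTracing support is enabled in the static configuration. A minimal sketch using the built-in Jaeger backend, assuming a Traefik v2-style YAML static configuration with placeholder agent addresses:

```yaml
# traefik.yml (static configuration): enable OpenTracing via the Jaeger backend.
# The agent host and ports below are placeholders, not HolidayCheck's setup.
tracing:
  serviceName: traefik
  jaeger:
    samplingServerURL: http://jaeger-agent:5778/sampling
    localAgentHostPort: jaeger-agent:6831
```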

Question: What did the language diversity look like prior to using Traefik?

Answer: Prior to Traefik, the components were written in Scala (Marathon), C++ (Mesos), Go (Bamboo), C (HAProxy), and Java (ZooKeeper). This increased the barrier to contributing and developing across the entire stack.

Question: Are you running Traefik as the root user inside the custom image?

Answer: No, our Traefik deployments consist of simple non-root container images.
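Running as non-root is typically also enforced at the pod level. Here is a minimal sketch of such a Deployment; the image tag, UID, and labels are illustrative placeholders rather than HolidayCheck's actual values:

```yaml
# Minimal Traefik Deployment that refuses to run as root.
# Image tag, UID, and labels are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - name: traefik
          image: traefik:v2.10
          securityContext:
            runAsNonRoot: true   # kubelet rejects the pod if the image resolves to UID 0
            runAsUser: 65532     # arbitrary unprivileged UID
```

Because the container is unprivileged, Traefik's entry points should bind to ports above 1024; the Service in front can still expose 80/443.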

Question: Nope we are using deployments. Back then when we started DeamonSets did not provide a good rolling restart functionality. As stated in the talk, we run traefik on a separate GKE NodePool. Mainly to make sure to have a proper network resource isolation. To upgrade k8s pools we use a A/B deployment strategy (not the official GKE rolling one) to make rollbacks faster. With the DaemonSet we discovered some rare issues with traefik. This was with earlier k8s and traefik versions, so not sure if it's still the case but when k8s nodes where started, the daemon sets where started as early as possible. Sometimes before the whole network stack was entirely ready which has caused problems with the mesos/marathon discovery in traefik. That's why we switched to deployments, because there it's guaranteed that on startup of a pod, the node is entirely ready. We don't have a latency problem with kube proxy, but we also use "externalTrafficPolicy: Local" in the traefik service objects that exposes traefik to the cloud loadbalancer which eliminates unnecessary network hops.

View the full video here: https://www.youtube.com/watch?v=3QTwu14sLVc&t=163s
