@vfarcic
Last active January 27, 2024 02:29
# Source: https://gist.github.com/b9b963ba540eaa3110b32b914ad22fe4
###########################################################
# Gateway API - Ingress And Service Mesh Spec Replacement #
# https://youtu.be/YAtXTI3NKtI #
###########################################################
# Additional Info:
# - Gateway API: https://gateway-api.sigs.k8s.io
# - How To Do Canary Deployments In Kubernetes Using Flagger And Linkerd?: https://youtu.be/NrytqS43dgw
# - Argo Rollouts - Canary Deployments Made Easy In Kubernetes: https://youtu.be/84Ky0aPbHvY
#########
# Setup #
#########
# The demo is based on Google Kubernetes Engine (GKE).
# The commands and the manifests might differ if using a different Kubernetes distribution.
git clone https://github.com/vfarcic/gateway-api-demo
cd gateway-api-demo
kubectl create namespace production
kubectl create namespace staging
# Install `yq` from https://github.com/mikefarah/yq if you do not have it already
###############################
# Gateway API Gateway Classes #
###############################
kubectl apply \
--kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.5.0"
kubectl get gatewayclasses
########################
# Gateway API Gateways #
########################
cat gateway.yaml
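# For reference, the Gateway manifest might look roughly like the sketch
# below. The class name depends on the provider, so treat
# `gke-l7-global-external-managed` as an assumption for a GKE cluster; the
# actual gateway.yaml in the repo may differ.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: http
spec:
  # Provider-specific class; this GKE class name is an assumption.
  gatewayClassName: gke-l7-global-external-managed
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```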
kubectl --namespace production apply \
--filename gateway.yaml
kubectl --namespace production get gateways
export INGRESS_HOST=$(kubectl \
--namespace production get gateway http \
--output jsonpath="{.status.addresses[0].value}")
######################
# Gateway API Routes #
######################
yq --inplace \
".spec.hostnames[0] = \"silly-demo.$INGRESS_HOST.nip.io\"" \
kustomize/overlays/simple/route.yaml
cat kustomize/overlays/simple/route.yaml
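# The route might look roughly like the sketch below. The hostname is the
# value patched in by the `yq` command above; the backend Service name and
# port are assumptions for this sketch.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: silly-demo
spec:
  parentRefs:
  - name: http                    # the Gateway created earlier
  hostnames:
  - silly-demo.1.2.3.4.nip.io     # patched by the yq command above
  rules:
  - backendRefs:
    - name: silly-demo            # backend Service name (assumed)
      port: 8080                  # Service port (assumed)
```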
kubectl --namespace production apply \
--kustomize kustomize/overlays/simple
# Like, subscribe, join, and SPONSOR!
curl "http://silly-demo.$INGRESS_HOST.nip.io"
kubectl --namespace production get httproutes
#######################################
# Canary Deployments With Gateway API #
#######################################
yq --inplace \
".spec.hostnames[0] = \"silly-demo.$INGRESS_HOST.nip.io\"" \
kustomize/overlays/canary/route.yaml
kubectl --namespace production apply \
--kustomize kustomize/overlays/canary
curl "http://silly-demo.$INGRESS_HOST.nip.io"
curl -H "type: canary" \
"http://silly-demo.$INGRESS_HOST.nip.io"
cat kustomize/overlays/canary/route.yaml
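# The `curl -H "type: canary"` call above suggests the canary route matches
# on a request header. A sketch of what such a route might look like follows;
# the Service names and ports are assumptions.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: silly-demo
spec:
  parentRefs:
  - name: http
  hostnames:
  - silly-demo.1.2.3.4.nip.io
  rules:
  # Requests carrying the `type: canary` header go to the new release.
  - matches:
    - headers:
      - name: type
        value: canary
    backendRefs:
    - name: silly-demo-canary     # canary Service name (assumed)
      port: 8080
  # Everything else goes to the stable release.
  - backendRefs:
    - name: silly-demo            # stable Service name (assumed)
      port: 8080
```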
###########
# Destroy #
###########
# Reset or destroy the cluster.
@Jonneal3

I've just tried running these commands and I cannot get kubectl --namespace production get gateways to show an assigned external IP. Any ideas why?

@vfarcic

vfarcic commented Jan 15, 2024

My best guess is that it could not create an external LB. Where are you running it (e.g., EKS, GKE, AKS, etc.)? What do events say (e.g., kubectl describe...)?

@Jonneal3

Jonneal3 commented Jan 15, 2024

@vfarcic
[Screenshot attached]

I was hoping to test with KinD or k3d locally, but I'm happy to set up EKS. What is the best way to proceed with setting up this example? I'm using AWS Route 53 and had the Amazon load balancer set up as well, but it looks like my k3d cluster is not associating with the AWS load balancer right now. If you have a "prerequisite guide" for this video, I'm happy to follow that as well; just let me know what's easiest and I'll implement it that way :)

@vfarcic

vfarcic commented Jan 15, 2024

It should work with KinD. You might need to tweak the KinD config. Also, if it does create the LoadBalancer Service, you can just ignore the message that it cannot assign an external IP since there is no external LB. You should still be able to access it through the NodePort (LoadBalancer Services are also NodePort Services).
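For reference, the KinD tweak usually means mapping a host port to the node so the gateway's NodePort is reachable from the host. A minimal sketch (the container port must match whichever NodePort the gateway's Service actually gets, so 30080 here is an assumption):

```yaml
# kind-config.yaml -- create the cluster with:
#   kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # NodePort assigned to the gateway Service (assumed)
    hostPort: 80
    protocol: TCP
```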

As for EKS, I'm not 100% sure whether Gateway API is now enabled by default (probably not). If it's not, just follow the instructions in the docs on how to set it up in EKS. Once you do set it up, everything else should be the same no matter which Kubernetes cluster you're using.

@Jonneal3

@vfarcic OK awesome! Thanks so much for the help! Great content too, btw! I'll post back here soon if I run into anything, but hopefully things will go smoothly from here. I am going to start fresh on KinD, and I'll post back with how I configured things and let you know if I run into any trouble.

@Jonneal3

@vfarcic So, at the beginning of setting this up via the YouTube video, I was able to see the CRDs. Now I am not able to. The very first attempt showed me the classes, so I know it's possible :) but now I cannot figure out why they no longer show up. I've tried to wipe my machine of all things Istio and still I am not able to; I've tried a few times now...

[Screenshot attached]

They are being created, but I'm not able to locate them, as seen in the image above. Is there something I'm missing that I need to configure before this? Do I need to install anything from here before running the above commands? -- https://istio.io/latest/docs/tasks/traffic-management/ingress/gateway-api/

If you don't mind shedding some light on the prerequisite installs that come before running Gateway API, that would be great! Perhaps it's an error as well.

@vfarcic

vfarcic commented Jan 17, 2024

I'm not sure what might be the problem.

It's very strange for CRDs to just disappear. Most of the time it's the other way around; they stay even when you don't want them anymore.

I never installed it through Istio but always went through the "official" setup documented in the Gateway API docs. Most of the time I use it in GKE clusters which require only a flag to enable Gateway API.

@Jonneal3

Jonneal3 commented Jan 17, 2024

@vfarcic Got it! Thanks for the help!

I "think" I've figured out the CRD issue, but now on to another issue, lol!

I have an EKS cluster that is not assigning an external LB to the gateway. I've seen many posts about this, but no solution I've found has worked yet. See picture below:

[Two screenshots attached]

Anyway, I've consulted a few friends who are Kubernetes pros, and I've been told that, essentially, the ingress pod has a listener port that istiod does health checks on, and I need to ensure that istiod is properly syncing with the Istio ingress and/or port forwarding.

Given that the cluster seems to be configured correctly to assign the external LB, how can I best fix this port-forwarding/istiod-to-ingress communication issue to achieve the goal of provisioning my external IP?

I'm running on:

Mac M2
VSCode
Istio Gateway API
EKS (Used Terraform to Configure This)
AWS LB
Any help is MUCH appreciated! Thanks!

I've tried everything from re-initiating the port forwarding to killing the processes in place, and it has not worked. Every time we re-initiate port forwarding, we get this error on port 15000.

If you have any thoughts, I'd love to hear them... Appreciate the help a bunch, mate! Jon

@vfarcic

vfarcic commented Jan 24, 2024

Sorry for not responding earlier. KubeCon is coming so my free time disappeared almost completely.

Unfortunately, I never tried to set up Gateway API through Istio, so I'm not sure what might be failing. I can take a closer look at it, but probably not before KubeCon.

@samy-soliman

Hi vfarcic, can you help me? I am a little confused about apiVersions in the manifests. Here they say GKE supports gateway.networking.k8s.io/v1beta1, and they link the apiVersion for the Kubernetes Gateway API here.
What confuses me is that here they say there is gateway.networking.k8s.io/v1, which I could not use, and I found it confusing in terms of what is where and what I should use. I also tried installing the CRD of this, and that also did not work.

@vfarcic

vfarcic commented Jan 26, 2024

@samy-soliman
The ultimate source of truth is what's in the cluster. You can execute something like kubectl get crd ... --output yaml to find the details of any CRD, including those coming from Gateway API. In the output, you'll see an array of spec.versions. Those are all the versions supported by that CRD. Look at spec.versions[].name. If there is more than one entry, the active one, the one you should use, should have spec.versions[].served set to true.
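For example, the relevant part of that output might look like the excerpt below (the version names shown are illustrative; what you actually see depends on which Gateway API release your provider installed):

```yaml
spec:
  versions:
  - name: v1beta1
    served: true     # active; use this apiVersion in your manifests
    storage: true
  - name: v1alpha2
    served: false    # not accepted by the API server
```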

Besides that, the docs of the provider through which you're installing it (in your case GKE) should be more accurate. The docs of Gateway API itself are generic and, when using baked-in implementations from a provider (like GKE), there might be some variations introduced by that provider (including which version they currently support).

@samy-soliman

@vfarcic Thank you for that pointer. I inspected the CRD; its name is v1alpha2. I also tried installing the new CRD from GitHub, but when I delete the old CRD, it gets deployed again by the GKE controller for Gateway API, so I think I may try using v1alpha2 for now; I inspected the GKE namespaces and did not find a way to determine where the controller is so I could delete it. Another question: can we install other Gateway API versions besides the GKE-installed version, or are we tied to their implementation?

@vfarcic

vfarcic commented Jan 26, 2024

@samy-soliman If you're using GKE, I recommend using whichever Gateway API version they bundle. If you prefer using a different one, you should disable the GKE Gateway API and install it yourself (I haven't tried that option myself). Otherwise, GKE and your installation will conflict with each other, since you cannot have the same CRD twice, nor the same controllers duplicated.

@samy-soliman

@vfarcic Thank you very much; your points cleared up a lot for me.
