- Load balancer fronting monolithic application
- Begin to decouple services into microservices and place another load balancer in front (kubernetes service)
- Soon multiple services in cluster each with own load balancer
- Each has its own rate limiting, logging, authorization
- Leads to expensive maintenance (many individual load balancers etc)
- Maintenance more difficult (more things to maintain)
- Replace all individual load balancers w/ this abstraction layer
- This is called Ingress Controller
- One port of entry, easier for maintenance and auditing, certificate management, less $$
- Ingress is not something new: it's similar to the DMZ networks from data-center days
- Vendor neutral spec defining EXTERNAL access to services inside k8s
- some implementations may have specific annotations to get it to work
- some policies may not be shared or implemented in the same way
- purpose is to allow easily switching between cloud providers
- L7 metadata based (host headers, path, etc.)
- TLS termination
- (Thursday talk about Ingress API v2 from Google)
Ingress
- Kubernetes object type (`networking.v1beta1`)
- Currently the way you can express routing is very limited
- Host header has a `paths` array (`http`)
- gRPC also supported in addition to HTTP
- TLS section: which certificates to expose
- Can have a default service when no rules apply
- There is no specification for service health checks (coming in v2)
- Ingress does respect health checks from the service level (when readiness and liveness probes are set up correctly)
- `path` value supports POSIX-based regexes
- `hostname` supports a wildcard as the first entry (`*.example.com`)
Example spec:

```yaml
…
http:
  paths:
  - path: /bills
    backend:
      serviceName: bills
      servicePort: 80
  - path: /orders
    backend:
      …
```
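The notes above also mention a TLS section, wildcard hostnames, and a default backend; a fuller sketch of a `networking.k8s.io/v1beta1` Ingress tying those together might look like the following (resource name, Secret name, and service names are illustrative, not from the talk):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  tls:
  - hosts:
    - "*.example.com"
    secretName: example-tls    # assumed Secret holding the certificate
  backend:                     # default service when no rules apply
    serviceName: default-svc
    servicePort: 80
  rules:
  - host: "*.example.com"      # wildcard allowed in the first entry
    http:
      paths:
      - path: /bills
        backend:
          serviceName: bills
          servicePort: 80
```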
- Nothing will happen until you have an ingress controller running
- Kubernetes does not ship with a default ingress controller
- Also, no proxy (e.g. Envoy) is installed in Kubernetes by default to make use of these policies
- Kubernetes controller paradigm (watches created objects and takes some actions)
- Infinite `while` loop listening for changes, which then synchronizes configuration
- Eventually consistent, declarative configuration
- Ingress layer is constructed from previously-mentioned abstraction layer
- Logic can be moved from services, to the ingress layer (request logging, auditing, authentication, rate-limiting)
- Plugins for auth, security, logging, request transformations, load balancing, etc.
- Requests and Responses BOTH flow through configured plugins (add CORS headers to server responses)
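As a concrete sketch of the point above about responses flowing through plugins, a KongPlugin resource that adds CORS headers could look roughly like this (resource name and config values are made up; check the plugin reference for your Kong version):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-cors              # hypothetical name
plugin: cors                  # Kong's bundled CORS plugin
config:
  origins:
  - "https://example.com"     # illustrative allowed origin
  methods:
  - GET
  - POST
```

The plugin is then attached to an Ingress or Service via an annotation (the exact annotation key depends on the controller version).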
- handles HTTP and gRPC, plus some other protocols (TLS)
- For unrecognized protocols, functionality is limited to L4 (no request/response transformations)
- Can setup iptables rules to allow Kong to serve as a transparent proxy
- Alerting on failed pods
- Integration with prometheus and grafana
- Traffic types:
- East/west traffic: internal service-to-service, Kuma
- North/south traffic: external traffic coming into cluster
- Combine the proxy and controller, deploy them together as Kong
- Can store configuration in-memory or in database
- KongPlugins
- KongIngress
- KongConsumer & KongCredential
- per user/service customization
- authentication
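A sketch of per-consumer authentication using the KongConsumer and KongCredential types mentioned above (names and the key value are made up; field shapes may differ across controller versions):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: alice               # hypothetical consumer
username: alice
---
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: alice-key
consumerRef: alice          # links the credential to the consumer above
type: key-auth
config:
  key: super-secret-key     # illustrative API key
```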
- Load Balancing
- Round-robin, weight-based, least connections
- sticky session and hash based
- active and passive health-checks (TCP/HTTP)
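The load-balancing and health-check options above can be expressed on a KongIngress resource; a rough sketch (resource name, algorithm choice, and health-check fields are assumptions, so verify against the Kong docs for your version):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: bills-lb                   # hypothetical name
upstream:
  algorithm: least-connections     # or round-robin / consistent-hashing
  healthchecks:
    active:
      http_path: /healthz          # assumed health endpoint
      healthy:
        interval: 5
      unhealthy:
        interval: 5
```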
- Plugins
- Prometheus, Jaeger, Zipkin, OpenTracing
- Routing
- Natively Integrates with k8s:
- cert-manager
- external-dns
- https://github.com/Kong/kuma
- Transition from Monolith to Microservices (docker, kubernetes)
- Build, deploy, testing, monitoring are all done differently now. It's a new ERA
- Data in use -> Data at rest -> Data in MOTION
- defines new era of software
- reliability of the CPU replaced by unreliability of the network
- with service mesh, network reliability is offloaded to the mesh
- KUMA: control plane for service mesh built atop Envoy
- Also works on VMs and baremetal servers
- Can operate in Universal mode or K8s mode (use during migration to k8s)
- basically kuma-dp run either as a sidecar (k8s) or another process (VMs)
- Istio: much more complicated service mesh offering
- Traffic Permissions: use for blue/green or canary deploys
- Similar to k8s NetworkPolicy but can work for N/S traffic also
- Only works with Services
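A minimal TrafficPermission sketch in Kubernetes mode, allowing one service to call another (service tag values are placeholders, and the exact tag key depends on the Kuma version):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-orders-to-bills   # hypothetical name
spec:
  sources:
  - match:
      service: orders            # placeholder source service tag
  destinations:
  - match:
      service: bills             # placeholder destination service tag
```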
- ./bin/kuma-cp.conf
- ./bin/kuma-dp
- ./bin/kumactl
- ./bin/kuma-tcp-echo
- ./bin/envoy
- local access: use port forwarding
- `kumactl config control-planes`