@yaocw2020
Last active August 20, 2023 14:43
Keep Load Balancer IP When Upgrading Harvester Cloud Provider

With the release of Harvester v1.2.0, a new Harvester cloud provider, version 0.2.2, is integrated into RKE2 v1.27.3+rke2r1, v1.26.6+rke2r1, v1.25.11+rke2r1, and v1.24.15+rke2r1.

With Harvester v1.2.0, the new Harvester cloud provider provides more powerful load balancing for Kubernetes services. It modifies the network architecture, allowing the load balancer IP to be directly exposed in the RKE2 network. Refer to the design document for more details.

If you upgrade RKE2 to a version with the new Harvester cloud provider using Rancher v2.7.6, Rancher passes the cluster name to the Harvester cloud provider (refer to issue 4332). The Harvester cloud provider uses the cluster name as part of the Harvester load balancer name, so the name change causes it to create a new load balancer for the original load balancer service. This article details how to keep the original load balancer IP for the services, using the following example.

Environment Prerequisites

  • An RKE2 cluster named testing (v1.24.10+rke2r1) in the VM network default/mgmt-untagged, with two load balancer services

    • default/lb0: DHCP mode, with load balancer IP 172.19.105.215
    • default/lb1: Pool mode, with load balancer IP 192.168.100.2
  • Harvester v1.2.0

    • IP Pool default
      apiVersion: loadbalancer.harvesterhci.io/v1beta1
      kind: IPPool
      metadata:
        labels:
          loadbalancer.harvesterhci.io/global-ip-pool: 'false'
          loadbalancer.harvesterhci.io/vid: '0'
        name: default
      spec:
        ranges:
        - subnet: 192.168.100.0/24
        selector:
          scope:
          - guestCluster: '*'
            namespace: default
            project: '*'
      status:
        allocated:
          192.168.100.2: default/kubernetes-default-lb1-cd8c89e7
        available: 252
        lastAllocated: 192.168.100.2
        total: 253
    • Two Harvester load balancers
      • default/kubernetes-default-lb0-6df182ca corresponds to the service default/lb0 in the RKE2 cluster
      • default/kubernetes-default-lb1-cd8c89e7 corresponds to the service default/lb1 in the RKE2 cluster
  • Rancher v2.7.6 to manage the Harvester and RKE2

Steps to Keep Load Balancer IP

  1. Scale the Harvester cloud provider deployment to 0 in the RKE2 cluster testing

    kubectl -n kube-system scale deploy harvester-cloud-provider --replicas=0
  2. For the service with DHCP mode, copy the kube-vip.io/hwaddr and kube-vip.io/requestedIP annotations from the Harvester service that has the same name as the load balancer to the corresponding service in the RKE2 cluster. In our example, copy the annotations from default/kubernetes-default-lb0-6df182ca in the Harvester cluster to the service default/lb0 in the RKE2 cluster.

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kube-vip.io/hwaddr: 00:00:6c:4f:18:68
        kube-vip.io/requestedIP: 172.19.105.215
      name: kubernetes-default-lb0-6df182ca
      namespace: default
    
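    The copy can also be scripted with kubectl. The following is a sketch, assuming kubeconfig contexts named harvester and rke2 for the two clusters (both context names are assumptions; adjust them, and the service names, to your environment):

```shell
# Read the kube-vip annotations from the Harvester-side service.
# The context name "harvester" is an assumption; adjust to your kubeconfig.
HWADDR=$(kubectl --context harvester -n default get svc kubernetes-default-lb0-6df182ca \
  -o jsonpath='{.metadata.annotations.kube-vip\.io/hwaddr}')
REQIP=$(kubectl --context harvester -n default get svc kubernetes-default-lb0-6df182ca \
  -o jsonpath='{.metadata.annotations.kube-vip\.io/requestedIP}')

# Apply them to the corresponding service in the RKE2 cluster.
# The context name "rke2" is also an assumption.
kubectl --context rke2 -n default annotate svc lb0 \
  "kube-vip.io/hwaddr=${HWADDR}" \
  "kube-vip.io/requestedIP=${REQIP}" --overwrite
```

    Note that dots inside the annotation keys are escaped with a backslash in the jsonpath expressions, since a bare dot is the jsonpath field separator.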
  3. For the service with Pool mode, use the following Go code to get the load balancer name after upgrading. Remember to modify the values of the variables clusterName, serviceNamespace, serviceName, and serviceUID.

    package main

    import (
        "fmt"
        "hash/crc32"

        "k8s.io/apimachinery/pkg/util/validation"
    )

    const (
        maxNameLength = 63
        lenOfSuffix   = 8
    )

    func main() {
        clusterName, serviceNamespace, serviceName, serviceUID := "testing", "default", "lb1", "7861edaf-9b0e-47cd-a93a-551931fa64b1"
        name := loadBalancerName(clusterName, serviceNamespace, serviceName, serviceUID)
        fmt.Println(name)
    }

    // loadBalancerName builds the Harvester load balancer name:
    // <clusterName>-<serviceNamespace>-<serviceName>-<crc32 suffix>.
    func loadBalancerName(clusterName, serviceNamespace, serviceName, serviceUID string) string {
        // Prefix with "a" if the cluster name is not a valid DNS 1035 label.
        if len(validation.IsDNS1035Label(clusterName)) > 0 {
            clusterName = "a" + clusterName
        }
        base := clusterName + "-" + serviceNamespace + "-" + serviceName + "-"
        digest := crc32.ChecksumIEEE([]byte(base + serviceUID))
        suffix := fmt.Sprintf("%08x", digest) // 8 hex characters, zero-padded

        // The name contains no more than 63 characters.
        if len(base) > maxNameLength-lenOfSuffix {
            base = base[:maxNameLength-lenOfSuffix]
        }

        return base + suffix
    }

    In our example, the variable values are as follows:

     clusterName = "testing"
     serviceNamespace = "default"
     serviceName = "lb1"
     serviceUID = "7861edaf-9b0e-47cd-a93a-551931fa64b1"
    

    The output is testing-default-lb1-ddc13071

  4. Delete the Harvester load balancers in the Harvester cluster.

    kubectl delete lb kubernetes-default-lb0-6df182ca kubernetes-default-lb1-cd8c89e7 -n default
    
  5. Add a network selector to the pool.

    apiVersion: loadbalancer.harvesterhci.io/v1beta1
    kind: IPPool
    metadata:
      ......
      name: default
    spec:
      ranges:
      - subnet: 192.168.100.0/24
      selector:
        network: default/mgmt-untagged # add network selector
        scope:
        - guestCluster: '*'
          namespace: default
          project: '*'
  6. Update the allocatedHistory of the pool, replacing the load balancer name with the new name obtained in step 3.

    apiVersion: loadbalancer.harvesterhci.io/v1beta1
    kind: IPPool
    metadata:
      labels:
        loadbalancer.harvesterhci.io/global-ip-pool: 'false'
        loadbalancer.harvesterhci.io/vid: '0'
      name: default
    spec:
      ranges:
      - subnet: 192.168.100.0/24
      selector:
        scope:
        - guestCluster: '*'
          namespace: default
          project: '*'
    status:
      allocatedHistory:
        192.168.100.2: default/testing-default-lb1-ddc13071 # replace the load balancer name
      available: 253
      lastAllocated: 192.168.100.2
      total: 253
  7. Upgrade the RKE2 cluster.
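    After the upgrade finishes, you can check that the services kept their original IPs. A sketch, assuming a kubectl context named rke2 for the guest cluster (the context name is an assumption; adjust it to your kubeconfig):

```shell
# Print the external IP assigned to each load balancer service;
# in this example they should still be 172.19.105.215 (lb0) and 192.168.100.2 (lb1).
kubectl --context rke2 -n default get svc lb0 lb1 \
  -o custom-columns='NAME:.metadata.name,EXTERNAL-IP:.status.loadBalancer.ingress[0].ip'
```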
