@macroramesh6
Last active November 28, 2023 12:21
# Install and manage MySQL Operator on a Kubernetes cluster

Installing and Verifying MySQL Operator on a Kubernetes Cluster

If you have an existing Kubernetes cluster with multiple nodes, each running a MySQL database server in its own namespace, you can install the MySQL Operator on that cluster and verify it. The MySQL Operator manages MySQL InnoDB Cluster setups inside a Kubernetes cluster, automating their setup and maintenance, including upgrades and backups [Source 0].

To install and verify the MySQL Operator, follow these steps:

  1. Install the MySQL Operator

    Use kubectl to install the MySQL Operator by applying its manifest files, which contain the required Custom Resource Definitions (CRDs) and the operator's deployment configuration.

    Apply the CRDs:

    kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml

    Then apply the operator:

    kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml

    [Source 0]

  2. Verify the Operator Deployment

    After applying the operator manifest, verify the deployment by checking the status of the deployed resources:

    kubectl get deployment -n mysql-operator mysql-operator

    This command should show you the MySQL operator deployment status. If the operator is running correctly, it should display 1/1 under the READY column [Source 0].

  3. Create a MySQL InnoDB Cluster

    Once the MySQL Operator is running, you can create a MySQL InnoDB Cluster. First, create a secret with the credentials for a MySQL root user:

    kubectl create secret generic mypwds \
        --from-literal=rootUser=root \
        --from-literal=rootHost=% \
        --from-literal=rootPassword="sakila"

    Then, define your MySQL InnoDB Cluster in a YAML file, for example, mycluster.yaml:

    apiVersion: mysql.oracle.com/v2
    kind: InnoDBCluster
    metadata:
      name: mycluster
    spec:
      secretName: mypwds
      tlsUseSelfSigned: true
      instances: 3
      router:
        instances: 1

    Apply the cluster configuration:

    kubectl apply -f mycluster.yaml

    [Source 0]

  4. Monitor the Cluster Creation

    After applying the cluster configuration, you can monitor the cluster creation process using the following command:

    kubectl get innodbcluster --watch

    Once the cluster is successfully created, the STATUS should change to ONLINE [Source 0].

  5. Connect to the MySQL InnoDB Cluster

    You can connect to the MySQL InnoDB Cluster using the MySQL Shell. The following command creates a new container named myshell using the container-registry.oracle.com/mysql/community-operator image and immediately executes MySQL Shell:

    kubectl run --rm -it myshell --image=container-registry.oracle.com/mysql/community-operator -- mysqlsh

    Then connect to the cluster from within MySQL Shell with the command \connect root@mycluster. This assumes the default namespace is used; the long form of the host is {innodbclustername}.{namespace}.svc.cluster.local, for example mycluster.default.svc.cluster.local [Source 0].
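A side note on the secret created in step 3: Kubernetes stores secret values base64-encoded, which is an encoding, not encryption. A quick local sanity check of what `--from-literal=rootPassword="sakila"` actually stores (pure shell, no cluster required):

```shell
# Encode the example password the way Kubernetes stores it in the secret
printf '%s' 'sakila' | base64
# → c2FraWxh

# Decode it back, as you would when reading the secret via kubectl
printf '%s' 'c2FraWxh' | base64 --decode
# → sakila
```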

Note that the MySQL Operator itself is deployed once (into the mysql-operator namespace) and by default watches InnoDBCluster resources in every namespace. If each node in your cluster has its own namespace and MySQL database, create a separate InnoDBCluster resource (and its credentials secret) in each namespace rather than reinstalling the operator.
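The per-namespace database resources can be generated in a loop. A minimal sketch, assuming hypothetical namespaces `team-a` and `team-b`, that renders one InnoDBCluster manifest (mirroring the example above) per namespace; the `kubectl apply` is left commented out so the script only writes files for review:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical namespaces; replace with your own
for ns in team-a team-b; do
  cat > "mycluster-${ns}.yaml" <<EOF
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
  namespace: ${ns}
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1
EOF
  # kubectl apply -f "mycluster-${ns}.yaml"   # apply once reviewed
done
```

Each namespace also needs its own `mypwds` secret, since secrets are namespace-scoped.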

Setting up Backups for MySQL Operator

To set up backups for MySQL Operator, you can use the integrated backup system provided by the operator. This system can export your data to storage backends such as S3-compatible object storage, Azure Blob Storage, Oracle OCI Object Storage, or local persistent volumes within your Kubernetes cluster.

To configure backups, you need to define a backup schedule and backup profile within your InnoDBCluster object. The backup schedule specifies when the backup should run using a cron expression, and the backup profile configures the storage location and MySQL export options.

Here's an example of how to set up hourly backups to an Amazon S3-compatible object storage service:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
  - name: hourly
    enabled: true
    schedule: "0 * * * *"
    backupProfileName: hourly-backup
  backupProfiles:
  - name: hourly-backup
    dumpInstance:
      storage:
        s3:
          bucketName: backups
          prefix: /mysql
          config: s3-secret
          profile: default

In the above example, an hourly backup is scheduled and stored in an S3 bucket named "backups" under the /mysql prefix. The "config" field names a Kubernetes secret containing the AWS credentials used to access the bucket.
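The exact layout of that secret is version-dependent; as a sketch (the key name `credentials` and the AWS-CLI file format are assumptions here — check the operator documentation for your release), it can be created from an AWS-CLI-style credentials file:

```shell
# Render an AWS-CLI-style credentials file with placeholder values
cat > credentials <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF

# Then create the secret that the backup profile's "config" field points at:
# kubectl create secret generic s3-secret --from-file=credentials
```

The `[default]` section name must match the `profile` field of the backup profile.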

Adding Multiple Users with RBAC Permissions

To add multiple users with Role-Based Access Control (RBAC) permissions in your Kubernetes cluster, you need to create Kubernetes Role and RoleBinding resources (or ClusterRole and ClusterRoleBinding for cluster-wide access) [Source 0].

Here's a simple example of how to create a Role and RoleBinding for a user to access a specific namespace:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

The above Role allows a user bound to it to get, watch, and list Pods in the "default" namespace.

Next, create a RoleBinding that binds the Role to the user:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

In this example, the Role "pod-reader" is bound to the user "jane". With these resources in place, the user "jane" will have the permissions to get, watch, and list Pods in the "default" namespace.

You can create similar Role and RoleBinding resources for each user and customize the permissions according to your needs. Remember to replace "jane" with the actual name of your user, and "pod-reader" and "read-pods" with the actual names of your Role and RoleBinding, respectively. Also, replace the "default" namespace with the actual namespace where your MySQL databases are running.
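For more than a couple of users this gets repetitive, so the manifests can be generated. A minimal sketch, assuming hypothetical users `jane` and `bob`, that writes one Role/RoleBinding pair per user (review the files and apply them yourself):

```shell
#!/usr/bin/env bash
set -euo pipefail

namespace=default         # namespace the permissions apply to
for user in jane bob; do  # hypothetical user names
  cat > "rbac-${user}.yaml" <<EOF
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: ${namespace}
  name: pod-reader-${user}
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods-${user}
  namespace: ${namespace}
subjects:
- kind: User
  name: ${user}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader-${user}
  apiGroup: rbac.authorization.k8s.io
EOF
done
# kubectl apply -f rbac-jane.yaml -f rbac-bob.yaml   # apply once reviewed
```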

Please note that the RBAC permissions mentioned here are for accessing Kubernetes resources, not MySQL resources. Managing users and their permissions in MySQL databases is a separate process and should be handled within the MySQL databases themselves.
