*Gist `pascalandy/post.md` by @pascalandy, created Feb 26, 2019.*
# Building Percona PXC Cluster on Swarm Mode

by: https://xinity.github.io/Percona-PXC-Swarm-mode/

As part of a personal project, I had to build a MySQL Galera cluster. Having been a Percona Server fan for several years, I decided to use PXC, which stands for Percona XtraDB Cluster (Percona's Galera cluster implementation).

## Introduction

This blog post describes how to build a PXC cluster on top of Docker Swarm mode (1.13+).

## Requirements

Docker tools:

- Docker Engine: 1.13+ (17.04-ce recommended)
- Docker Compose: 1.11+ (1.12 recommended)
- Docker Machine: 0.9+ (0.10 recommended)

Docker images:

- PXC
- Etcd
- ProxySQL

OS:

- RancherOS

## Deploying a Swarm Cluster

First step, let's deploy a Swarm cluster. This setup uses VirtualBox, but DigitalOcean, GCE, or AWS work fine too.

The very simple shell script below will do the job. Feel free to hack on it by forking my repo: github-pxc-swarm (pull requests highly appreciated, by the way).

```shell
#!/bin/bash

# Deploy RancherOS virtual machines,
# switch to the latest Docker Engine available,
# then switch to the Debian console.
for i in pxcm1 pxcw1 pxcw2 pxcw3; do
    docker-machine create -d virtualbox \
        --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso $i
    docker-machine ssh $i "sudo ros engine switch docker-17.04.0-ce"
    docker-machine ssh $i "sudo ros console switch debian -f"
    sleep 15
    docker-machine ssh $i "sudo apt update -qq && sudo apt install -qqy ca-certificates"
done
```
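Before moving on, it's worth checking that all four VMs actually came up. A minimal sketch (not in the original walkthrough) using `docker-machine ls` with a name filter, guarded so it degrades to a no-op on a machine without docker-machine:

```shell
#!/bin/sh
# Sketch: confirm the four pxc* VMs are provisioned and running.
if command -v docker-machine >/dev/null 2>&1; then
  # Filter on the shared "pxc" name prefix used above.
  docker-machine ls --filter name=pxc
  status="listed"
else
  status="skipped (docker-machine missing)"
fi
echo "$status"
```

All four machines (pxcm1, pxcw1, pxcw2, pxcw3) should show up with state `Running` before you initialize the swarm.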

### Initialize Swarm Manager and tokens

```shell
docker-machine ssh pxcm1 "docker swarm init \
    --listen-addr $(docker-machine ip pxcm1) \
    --advertise-addr $(docker-machine ip pxcm1)"

export worker_token=$(docker-machine ssh pxcm1 "docker swarm join-token worker -q")

export manager_token=$(docker-machine ssh pxcm1 "docker swarm join-token manager -q")
```
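Since the three worker joins that follow all repeat the same pattern, they can also be wrapped in one small loop. This is a sketch under the same assumptions as above (worker_token exported, manager named pxcm1, workers pxcw1..pxcw3), guarded so it exits cleanly where docker-machine isn't installed:

```shell
#!/bin/sh
# Sketch: join every worker in one loop instead of repeating the command.
# Assumes worker_token was exported as above and the manager is pxcm1.
if command -v docker-machine >/dev/null 2>&1; then
  manager_ip=$(docker-machine ip pxcm1)
  for w in pxcw1 pxcw2 pxcw3; do
    docker-machine ssh "$w" "docker swarm join \
      --token=${worker_token} \
      --listen-addr $(docker-machine ip "$w") \
      --advertise-addr $(docker-machine ip "$w") \
      ${manager_ip}"
  done
  status="joined"
else
  status="skipped (docker-machine missing)"
fi
echo "$status"
```

Note that the substitutions inside the double quotes (`${worker_token}`, `$(docker-machine ip ...)`) are expanded locally before the command is sent over SSH, which is exactly what we want here.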

### Initialize Swarm Workers and add them to the cluster

```shell
docker-machine ssh pxcw1 "docker swarm join \
    --token=${worker_token} \
    --listen-addr $(docker-machine ip pxcw1) \
    --advertise-addr $(docker-machine ip pxcw1) \
    $(docker-machine ip pxcm1)"

docker-machine ssh pxcw2 "docker swarm join \
    --token=${worker_token} \
    --listen-addr $(docker-machine ip pxcw2) \
    --advertise-addr $(docker-machine ip pxcw2) \
    $(docker-machine ip pxcm1)"

docker-machine ssh pxcw3 "docker swarm join \
    --token=${worker_token} \
    --listen-addr $(docker-machine ip pxcw3) \
    --advertise-addr $(docker-machine ip pxcw3) \
    $(docker-machine ip pxcm1)"
```

Let's see how our cluster is doing :)

```shell
eval "$(docker-machine env pxcm1)"
docker node ls
```

Output example:

```
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
bd1oyur0ia0nkw2ru6mrrqvi3    pxcw1     Ready   Active
jgymwuqmxlyl2g9ig6pgxkp1p    pxcw2     Ready   Active
n783lei0zipbcryj9r75hmu2k *  pxcm1     Ready   Active        Leader
zmjv99aho5cv0nysfferqy6qf    pxcw3     Ready   Active
```
## Deploying the PXC Cluster

This PXC cluster setup uses ProxySQL and Etcd:

- ProxySQL, as its name suggests, will act as a proxy for your SQL queries (Galera doesn't come with a VIP mechanism built in).
- Etcd will be used for node discovery; each Galera node will register itself in your Etcd instance.

As we are deploying our cluster on top of Docker Swarm, PXC instances will be hosted on the workers, whereas ProxySQL and Etcd will be hosted on the manager. This is done using the placement constraints feature of docker-compose. We'd like to use the Docker secrets management feature, but the current images don't support it for now.
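Later, once the stack is deployed, you can watch this discovery mechanism at work by listing what the Galera nodes wrote into Etcd. This sketch is not part of the original walkthrough: the container name filter and the presence of etcdctl inside the coreos/etcd image are assumptions, and the exact key layout depends on the PXC image version, so browse recursively rather than expecting a fixed path:

```shell
#!/bin/sh
# Sketch: inspect what each Galera node registered in Etcd (v2 key space).
# The name filter "galera_etcd" and in-image etcdctl are assumptions.
if command -v docker-machine >/dev/null 2>&1; then
  eval "$(docker-machine env pxcm1)"
  etcd_ctr=$(docker ps --filter name=galera_etcd --format '{{.Names}}' | head -n1)
  docker exec "$etcd_ctr" etcdctl ls / --recursive
  status="queried"
else
  status="skipped (docker-machine missing)"
fi
echo "$status"
```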

All services need their own environment variables, so let's put them in separate files.
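The files below hardcode literal passwords for readability. In practice you may prefer to generate them; a minimal sketch (the 18-byte length is an arbitrary choice) using openssl:

```shell
#!/bin/sh
# Sketch: generate secrets instead of committing literal passwords to *.env files.
# 18 random bytes encode to 24 base64 characters; adjust length to taste.
root_pw=$(openssl rand -base64 18)
proxy_pw=$(openssl rand -base64 18)
printf 'MYSQL_ROOT_PASSWORD=%s\n' "$root_pw"
printf 'MYSQL_PROXY_PASSWORD=%s\n' "$proxy_pw"
```

Redirect the output into the env files (and keep those files out of version control).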

`galera.env`:

```
DISCOVERY_SERVICE=galera_etcd:2379
CLUSTER_NAME=galera-15
MYSQL_ROOT_PASSWORD=s3cr3TL33tP@ssw0rd
```

`etcd.env`:

```
ETCD_DATA_DIR=/opt/etcd/data
ETCD_NAME=etcd-node-01
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001
ETCD_ADVERTISE_CLIENT_URLS=http://galera_etcd:2379,http://galera_etcd:4001
ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://galera_etcd:2380
ETCD_INITIAL_CLUSTER=etcd0=http://galera_etcd:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
```

`proxysql.env`:

```
CLUSTER_NAME=galera-15
ETCD_HOST=galera_etcd
DISCOVERY_SERVICE=galera_etcd:2379
MYSQL_ROOT_PASSWORD=s3cr3TL33tP@ssw0rd
MYSQL_PROXY_USER=proxyuser
MYSQL_PROXY_PASSWORD=s3cr3TL33tPr0xyP@ssw0rd
```

We want to use docker-compose, right?

`docker-compose.yml`:

```yaml
version: '3.1'

services:
  proxy:
    image: perconalab/proxysql
    networks:
      - galera
    ports:
      - "3306:3306"
      - "6032:6032"
    env_file: proxysql.env
    deploy:
      mode: replicated
      replicas: 1
      labels: [APP=proxysql]
      # service restart policy
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      # service update configuration
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: continue
        monitor: 60s
        max_failure_ratio: 0.3
      # placement constraint - in this case on 'manager' nodes only
      placement:
        constraints: [node.role == manager]

  etcd:
    image: quay.io/coreos/etcd
    command: etcd
    volumes:
      - /usr/share/ca-certificates/:/etc/ssl/certs
    env_file: etcd.env
    networks:
      - galera
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

  percona-xtradb-cluster:
    image: percona/percona-xtradb-cluster:5.7
    networks:
      - galera
    env_file: galera.env
    deploy:
      mode: global
      labels: [APP=pxc]
      # service restart policy
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      # service update configuration
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: continue
        monitor: 60s
        max_failure_ratio: 0.3
      # placement constraint - in this case on 'worker' nodes only
      placement:
        constraints: [node.role == worker]

networks:
  galera:
    # Use a custom driver
    driver: overlay
    internal: true
    ipam:
      driver: default
      config:
        - subnet: 10.20.1.0/24
```

Let's start the real thing:

```shell
docker stack deploy -c docker-compose.yml galera
```

Output example:

```
Creating network galera_galera
Creating service galera_percona-xtradb-cluster
Creating service galera_proxy
Creating service galera_etcd
```

Check how things are going:

```shell
docker stack ps galera
```

Output example (a few minutes later):

```
ID            NAME                                                     IMAGE                               NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
wxiavo62e7j5  galera_percona-xtradb-cluster.zmjv99aho5cv0nysfferqy6qf  percona/percona-xtradb-cluster:5.7  pxcw3  Running        Running 25 minutes ago
f7fn4zxpxzn3  galera_percona-xtradb-cluster.jgymwuqmxlyl2g9ig6pgxkp1p  percona/percona-xtradb-cluster:5.7  pxcw2  Running        Running 25 minutes ago
5yk2kvfnaipj  galera_percona-xtradb-cluster.bd1oyur0ia0nkw2ru6mrrqvi3  percona/percona-xtradb-cluster:5.7  pxcw1  Running        Running 25 minutes ago
tvt76rukcml6  galera_etcd.1                                            quay.io/coreos/etcd:latest          pxcm1  Running        Running 25 minutes ago
1bo8rf1s088z  galera_proxy.1                                           perconalab/proxysql:latest          pxcm1  Running        Running 25 minutes ago
```
One last thing to do is to register our Galera nodes in ProxySQL. Easy as one, two, three :)

– One:

```shell
eval "$(docker-machine env pxcm1)"
docker ps
```

Output example:

```
CONTAINER ID  IMAGE                COMMAND            CREATED      STATUS         PORTS               NAMES
ade1183337e5  quay.io/coreos/etcd  "etcd"             2 hours ago  Up 31 minutes                      galera_etcd.1.tvt76rukcml6h7h24vaef79cz
34129b98cd75  perconalab/proxysql  "/entrypoint.sh "  2 hours ago  Up 31 minutes  3306/tcp, 6032/tcp  galera_proxy.1.1bo8rf1s088zgmsl9ho16wtk7
```

– Two:

```shell
docker exec -i [name of the proxySQL container] add_cluster_nodes.sh
```

Output example:

```
$ docker exec -i galera_proxy.1.1bo8rf1s088zgmsl9ho16wtk7 add_cluster_nodes.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   551  100   551    0     0  68328      0 --:--:-- --:--:-- --:--:-- 78714
10.20.1.9
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
10.20.1.7
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
10.20.1.8
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
```

– Three:

```shell
mysql -h$(docker-machine ip pxcm1) -uproxyuser -p
```

Output example:

```
$ mysql -h$(docker-machine ip pxcm1) -uproxyuser -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.30 (ProxySQL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like '%host%';
+-------------------------------+--------------+
| Variable_name                 | Value        |
+-------------------------------+--------------+
| host_cache_size               | 279          |
| hostname                      | 148ba588c919 |
| performance_schema_hosts_size | -1           |
| report_host                   |              |
+-------------------------------+--------------+
4 rows in set (0,00 sec)
```

Done!
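Beyond checking the hostname, Galera exposes its own health counters: with all three PXC nodes joined, `wsrep_cluster_size` should report 3. A sketch (credentials come from proxysql.env; passing the password on the command line is for illustration only, and the guard makes it a no-op where the mysql client or the cluster isn't reachable):

```shell
#!/bin/sh
# Sketch: ask a PXC node, through ProxySQL, how many nodes the cluster sees.
if command -v mysql >/dev/null 2>&1 && command -v docker-machine >/dev/null 2>&1; then
  mysql -h"$(docker-machine ip pxcm1)" -uproxyuser -p"s3cr3TL33tPr0xyP@ssw0rd" \
    -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
  status="queried"
else
  status="skipped (mysql client or docker-machine missing)"
fi
echo "$status"
```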

You can now enjoy a fresh PXC cluster running on top of Swarm, fronted by ProxySQL :)

Have fun :)

R.
