Workshop: YugabyteDB Multi-Region Deployments

The gist includes exercises for the "Mastering Multi-Region Deployments With YugabyteDB" workshop. Attend the summit and follow the steps below to get hands-on experience.

Prerequisites

  • Java Development Kit (JDK), version 11 or later
  • Apache Maven 3.0 or later
  • Docker 20 or later
  • psql 14.4 or later
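
To confirm the tools are installed, you can run the version checks below (the exact output format varies by platform):

java -version    # expect 11 or later
mvn -version     # expect Apache Maven 3.0 or later
docker --version # expect 20 or later
psql --version   # expect 14.4 or later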

1. Start Cluster With Default Configuration

  1. Create a Docker network for the workshop:
docker network create yugabytedb_network
  2. Start a three-node cluster:
rm -r ~/yb_docker_data
mkdir ~/yb_docker_data

docker run -d --name yugabytedb_node1 --net yugabytedb_network \
  -p 7001:7000 -p 9000:9000 -p 5433:5433 \
  -v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --listen=yugabytedb_node1 \
  --base_dir=/home/yugabyte/yb_data --daemon=false

docker run -d --name yugabytedb_node2 --net yugabytedb_network \
  -v ~/yb_docker_data/node2:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --join=yugabytedb_node1 --listen=yugabytedb_node2 \
  --base_dir=/home/yugabyte/yb_data --daemon=false

docker run -d --name yugabytedb_node3 --net yugabytedb_network \
  -v ~/yb_docker_data/node3:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --join=yugabytedb_node1 --listen=yugabytedb_node3 \
  --base_dir=/home/yugabyte/yb_data --daemon=false
  3. Connect to the first node with psql:
psql -h 127.0.0.1 -p 5433 yugabyte -U yugabyte -w
  4. Request information about the default regions the nodes are assigned to:
select * from yb_servers();

#The output should be as follows:

       host       | port | num_connections | node_type | cloud  |   region    | zone  |    public_ip     
------------------+------+-----------------+-----------+--------+-------------+-------+------------------
 yugabytedb_node2 | 5433 |               0 | primary   | cloud1 | datacenter1 | rack1 | yugabytedb_node2
 yugabytedb_node3 | 5433 |               0 | primary   | cloud1 | datacenter1 | rack1 | yugabytedb_node3
 yugabytedb_node1 | 5433 |               0 | primary   | cloud1 | datacenter1 | rack1 | yugabytedb_node1
  5. Find detailed information in the UI by connecting to the YB-Master web interface: http://127.0.0.1:7001/tablet-servers
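
If a node is missing from the tablet-servers page, you can troubleshoot from the command line with standard Docker commands:

#List the node containers and their state
docker ps --filter "name=yugabytedb_node"

#Inspect a node's logs if it failed to start or join
docker logs yugabytedb_node2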

2. Start Cluster With Custom Regions

Start another multi-region cluster, this time defining the region each node is placed in.

  1. Remove the previous cluster:
docker kill yugabytedb_node1
docker container rm yugabytedb_node1

docker kill yugabytedb_node2
docker container rm yugabytedb_node2

docker kill yugabytedb_node3
docker container rm yugabytedb_node3

rm -r ~/yb_docker_data
mkdir ~/yb_docker_data
  2. Start a three-node cluster, placing the nodes in different cloud regions:
 docker run -d --name yugabytedb_node1 --net yugabytedb_network \
  -p 7001:7000 -p 9000:9000 -p 5433:5433 \
  -v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --listen=yugabytedb_node1 \
  --base_dir=/home/yugabyte/yb_data --daemon=false \
  --master_flags="placement_cloud=aws,placement_region=us_west,placement_zone=zone-a" \
  --tserver_flags="placement_cloud=aws,placement_region=us_west,placement_zone=zone-a"

docker run -d --name yugabytedb_node2 --net yugabytedb_network \
  -v ~/yb_docker_data/node2:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --join=yugabytedb_node1 --listen=yugabytedb_node2 \
  --base_dir=/home/yugabyte/yb_data --daemon=false \
  --master_flags="placement_cloud=aws,placement_region=us_central,placement_zone=zone-b" \
  --tserver_flags="placement_cloud=aws,placement_region=us_central,placement_zone=zone-b"

docker run -d --name yugabytedb_node3 --net yugabytedb_network \
  -v ~/yb_docker_data/node3:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --join=yugabytedb_node1 --listen=yugabytedb_node3 \
  --base_dir=/home/yugabyte/yb_data --daemon=false \
  --master_flags="placement_cloud=aws,placement_region=us_east,placement_zone=zone-c" \
  --tserver_flags="placement_cloud=aws,placement_region=us_east,placement_zone=zone-c"
  3. Update the placement information cluster-wide:
docker exec -i yugabytedb_node1 \
yb-admin -master_addresses yugabytedb_node1:7100,yugabytedb_node2:7100,yugabytedb_node3:7100 \
modify_placement_info aws.us_west.zone-a:1,aws.us_central.zone-b:1,aws.us_east.zone-c:1 3

where:

  • placement_name:1 - at least 1 replica of each tablet needs to be in that placement block.
  • 3 - replication factor, the number of replicas for each tablet.
  4. Confirm the placement is updated in a terminal:
psql -h 127.0.0.1 -p 5433 yugabyte -U yugabyte -w

select * from yb_servers();

# The output should be as follows
       host       | port | num_connections | node_type | cloud |   region   |  zone  |    public_ip     
------------------+------+-----------------+-----------+-------+------------+--------+------------------
 yugabytedb_node3 | 5433 |               0 | primary   | aws   | us_east    | zone-c | yugabytedb_node3
 yugabytedb_node2 | 5433 |               0 | primary   | aws   | us_central | zone-b | yugabytedb_node2
 yugabytedb_node1 | 5433 |               0 | primary   | aws   | us_west    | zone-a | yugabytedb_node1
  5. And check via the UI: http://127.0.0.1:7001/tablet-servers
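
If you prefer the command line, the same placement information is available from the master's cluster-config endpoint (this workshop uses the same endpoint with curl later on):

curl -s http://127.0.0.1:7001/cluster-config | grep placement_region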

Start Market Orders App

Start the market orders app and observe that data placement and load are in fact balanced across the cluster.

  1. Clone the app and enter its directory:
git clone https://github.com/YugabyteDB-Samples/market-orders-app.git
cd market-orders-app
  2. Build the app:
    mvn clean package 
  3. Create an image:
    docker build -t market-orders-app .
  4. Start the app inside a container:
    docker run --name market-orders-instance --net yugabytedb_network \
    market-orders-app:latest \
    java -jar /home/target/market-orders-app.jar \
    connectionProps=/home/yugabyte-docker.properties \
    loadScript=/home/schema_postgres.sql \
    refreshView=false \
    tradeStatsInterval=5000
  5. Open the UI and confirm the reads and writes are balanced across the nodes: http://127.0.0.1:7001/tablet-servers

3. Deploy Cluster With Preferred Region

Continue running the previous multi-region cluster, but now specify a preferred region. This sets the preferred availability zones (AZs) and regions in order of preference: tablet leaders are placed on alive and healthy nodes in the most-preferred AZs first.

  1. Check the current tablets distribution for the Trade table (each node will be a leader of one of the tablets/shards): http://127.0.0.1:7001/tables

  2. Also, check the current cluster's config: http://127.0.0.1:7001/cluster-config

  3. Change the preferred regions order:

    docker exec -i yugabytedb_node1 \
    yb-admin -master_addresses yugabytedb_node1:7100,yugabytedb_node2:7100,yugabytedb_node3:7100 \
    set_preferred_zones \
    aws.us_central.zone-b:1 \
    aws.us_west.zone-a:2 \
    aws.us_east.zone-c:3
  4. Check the config once again: http://127.0.0.1:7001/cluster-config

    multi_affinitized_leaders {
      zones {
        placement_cloud: "aws"
        placement_region: "us_central"
        placement_zone: "zone-b"
      }
    }
    multi_affinitized_leaders {
      zones {
        placement_cloud: "aws"
        placement_region: "us_west"
        placement_zone: "zone-a"
      }
    }
    multi_affinitized_leaders {
      zones {
        placement_cloud: "aws"
        placement_region: "us_east"
        placement_zone: "zone-c"
      }
    }
  5. And check the tablet distribution for the Trade table (yugabytedb_node2 will become the leader for all tablets; this can take time to update):

Simulate Regional Outage

Simulate a region-level outage by stopping the node from the preferred region aws.us_central.zone-b.

  1. Stop the node from the preferred region:
docker stop yugabytedb_node2
  2. Check the application logs and the Trade table's tablet distribution. The app will keep running, and eventually yugabytedb_node1 from the aws.us_west.zone-a region will start serving all the traffic because its region is the next preferred one.

  3. Return the node to the cluster and stop the application:
docker start yugabytedb_node2

docker stop market-orders-instance

4. Start Cluster With Read Replica

Read replica nodes let you boost performance for reads in distant locations. Follow this guide to add a replica node in another cloud region.

  1. Add placement information for a replica node from the European Union:
docker exec -i yugabytedb_node1 \
yb-admin -master_addresses yugabytedb_node1:7100,yugabytedb_node2:7100,yugabytedb_node3:7100 \
add_read_replica_placement_info aws.europe_west.zone-a:1 1 replica-node-europe

where:

  • aws.europe_west.zone-a:1 - at least one copy of the data (:1) needs to be in that region.
  • 1 - replication factor
  • replica-node-europe - the replica placement UUID
  2. Deploy a replica node in the EU region:
docker run -d --name yugabytedb_replica1 --net yugabytedb_network \
  -v ~/yb_docker_data/replica1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yb-tserver --tserver_master_addrs yugabytedb_node1:7100,yugabytedb_node2:7100,yugabytedb_node3:7100 \
  --rpc_bind_addresses yugabytedb_replica1:9100 \
  --pgsql_proxy_bind_address yugabytedb_replica1:5433 --cql_proxy_bind_address yugabytedb_replica1:9042 \
  --webserver_interface yugabytedb_replica1 --fs_data_dirs /home/yugabyte/yb_data \
  --placement_cloud aws --placement_region europe_west --placement_zone zone-a --placement_uuid replica-node-europe 
  3. Confirm the replica is running: http://127.0.0.1:7001/tablet-servers

Query Through Replica Node

  1. Connect to the replica node:
#Connect to the container
docker exec -it yugabytedb_replica1 bash

#Open a ysqlsh session
./bin/ysqlsh -h yugabytedb_replica1 -p 5433 -U yugabyte
  2. Allow reads through the replica node (see the staleness note after this list):
set session characteristics as transaction read only;
set yb_read_from_followers = true;
  3. Check the current number of trades (all changes should already have been replicated from the primary cluster):
select count(*) from trade;

#The output might be as follows
 count 
-------
   299
(1 row)
  4. Start the market orders app:
docker start market-orders-instance
docker logs market-orders-instance --follow
  5. Execute the query from step 3 several times; the number of trades will keep changing:
select count(*) from trade;

#The output might be as follows
 count 
-------
   372
(1 row)
  6. Stop the application:
docker stop market-orders-instance
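
A note on staleness: follower reads return slightly stale data, with a staleness window that defaults to roughly 30 seconds. To experiment with a tighter window, here is a sketch using the yb_follower_read_staleness_ms session variable (check your version's documentation for the exact default), run as a one-off session from the host:

docker exec -i yugabytedb_replica1 bin/ysqlsh -h yugabytedb_replica1 -p 5433 -U yugabyte \
  -c "set session characteristics as transaction read only" \
  -c "set yb_read_from_followers = true" \
  -c "set yb_follower_read_staleness_ms = 5000" \
  -c "select count(*) from trade"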

Change Data Via Replica Node

Your application can change data through a replica node; the replica automatically forwards the update to the primary cluster. (A verification query from the host follows the steps below.)

  1. Within your ysqlsh session, get all the buyers:
select * from buyer order by id;
  2. Try to add a new buyer via the replica:
insert into buyer(first_name,last_name,age,goverment_id) values ('Jim','Mayson',56,'dnsldfnsdfk_22');

#you'll get this exception
ERROR:  cannot execute INSERT in a read-only transaction
  3. Allow writes through the replica:
set session characteristics as transaction read write;
  4. Execute the insert from step 2 again, then the query from step 1, to confirm the buyer got added.

  5. Restore the session characteristics:

set session characteristics as transaction read only;
  6. Quit the replica connection:
\q
exit
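
To double-check that the write really landed on the primary cluster, you can query one of the primary nodes directly from the host (same connection string as before):

psql -h 127.0.0.1 -p 5433 yugabyte -U yugabyte -w \
  -c "select * from buyer where first_name = 'Jim'"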

5. Start Two Standalone Clusters

One of the options to deliver low latency for both reads and writes is by provisioning separate standalone clusters in distant locations. Follow this section to configure two clusters and set up unidirectional replication between them.

  1. Remove the previous cluster:
docker kill yugabytedb_node1
docker container rm yugabytedb_node1

docker kill yugabytedb_node2
docker container rm yugabytedb_node2

docker kill yugabytedb_node3
docker container rm yugabytedb_node3

docker kill yugabytedb_replica1
docker container rm yugabytedb_replica1

rm -r ~/yb_docker_data
mkdir ~/yb_docker_data
  2. Start two single-node clusters, placing the nodes in distant cloud regions:
 docker run -d --name yugabytedb_node1 --net yugabytedb_network \
  -p 7001:7000 -p 9000:9000 -p 5433:5433 \
  -v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --listen=yugabytedb_node1 \
  --base_dir=/home/yugabyte/yb_data --daemon=false \
  --master_flags="placement_cloud=aws,placement_region=us_west,placement_zone=zone-a" \
  --tserver_flags="placement_cloud=aws,placement_region=us_west,placement_zone=zone-a"

docker run -d --name yugabytedb_node2 --net yugabytedb_network \
  -p 7002:7000 -p 9001:9000 -p 5434:5433 \
  -v ~/yb_docker_data/node2:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --listen=yugabytedb_node2 \
  --base_dir=/home/yugabyte/yb_data --daemon=false \
  --master_flags="placement_cloud=aws,placement_region=europe_west,placement_zone=zone-a" \
  --tserver_flags="placement_cloud=aws,placement_region=europe_west,placement_zone=zone-a"
  3. Modify the placement info for both clusters:
#Modify placement for the US-based cluster
docker exec -i yugabytedb_node1 \
yb-admin -master_addresses yugabytedb_node1:7100 \
modify_placement_info aws.us_west.zone-a:1 1

#Modify placement for the EU-based cluster
docker exec -i yugabytedb_node2 \
yb-admin -master_addresses yugabytedb_node2:7100 \
modify_placement_info aws.europe_west.zone-a:1 1
  4. Confirm the yugabytedb_node1 node is in the first cluster: http://127.0.0.1:7001
  5. And make sure the yugabytedb_node2 node is in the second cluster: http://127.0.0.1:7002

Configure Unidirectional Replication

You can replicate changes between clusters unidirectionally or bidirectionally. The steps below show how to set up unidirectional replication; a sketch of the reverse direction for a bidirectional setup follows at the end of this subsection.

Let's set up replication from the US-based to the EU-based cluster. The changes from the EU-based cluster won't be replicated to the USA.

  1. Apply the market orders schema on the first cluster (USA):
#Connect to the first cluster (listens on 5433 on your host machine)
psql -h 127.0.0.1 -p 5433 yugabyte -U yugabyte -w

#Create Table
CREATE TABLE Trade(
  id integer PRIMARY KEY,
  buyer_id integer NOT NULL,
  symbol text,
  order_quantity integer,
  bid_price float,
  trade_type text,
  order_time timestamp(0) DEFAULT NOW()
);
  2. Repeat the same for the second cluster (Europe):
#Connect to the second cluster (listens on 5434 on your host machine)
psql -h 127.0.0.1 -p 5434 yugabyte -U yugabyte -w

#Create Table
CREATE TABLE Trade(
  id integer PRIMARY KEY,
  buyer_id integer NOT NULL,
  symbol text,
  order_quantity integer,
  bid_price float,
  trade_type text,
  order_time timestamp(0) DEFAULT NOW()
);

Keep both psql sessions open, and in another terminal window perform the following:

  1. Connect to the container of the US-based cluster:
docker exec -it yugabytedb_node1 bash
  2. Find the Trade table's ID:
yb-admin -master_addresses yugabytedb_node1:7100 list_tables include_table_id | grep trade
  3. Find the US-based cluster's UUID:
curl -s http://yugabytedb_node1:7000/cluster-config | grep cluster_uuid
  4. Set up the replication from the US cluster to the European one:
#The command looks as follows
yb-admin -master_addresses yugabytedb_node2:7100 \
  setup_universe_replication <source_cluster_uuid> \
  yugabytedb_node1:7100 \
  <trade_table_id>
  
#Replace <source_cluster_uuid> and <trade_table_id> to get a complete command like this:
yb-admin -master_addresses yugabytedb_node2:7100 \
  setup_universe_replication 051ba5f6-c869-4292-aae8-75bb0d975f66 \
  yugabytedb_node1:7100 \
  000033e8000030008000000000004000
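
For reference, the bidirectional setup mentioned earlier is the same command run in the opposite direction once the forward link works. A hedged sketch; the placeholders are the EU cluster's UUID and its copy of the Trade table's ID, gathered with the same list_tables and curl commands pointed at yugabytedb_node2:

#Run against the US cluster's master, pulling changes from the EU cluster
yb-admin -master_addresses yugabytedb_node1:7100 \
  setup_universe_replication <eu_cluster_uuid> \
  yugabytedb_node2:7100 \
  <trade_table_id_on_eu_cluster>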

Test Replication

  1. Open a terminal window with a psql connection to the first (US-based) cluster:
psql -h 127.0.0.1 -p 5433 yugabyte -U yugabyte -w
  2. Insert a trade:
INSERT INTO Trade VALUES(
1, 1, 'Google', '5', '108','Buy',now());
  3. Switch to a terminal window with a psql connection to the second cluster (EU-based):
psql -h 127.0.0.1 -p 5434 yugabyte -U yugabyte -w
  4. Confirm the trade was replicated to Europe:
select * from trade;

 id | buyer_id | symbol | order_quantity | bid_price | trade_type |     order_time      
----+----------+--------+----------------+-----------+------------+---------------------
  1 |        1 | Google |              5 |       108 | Buy        | 2022-09-07 13:56:50
  5. Now, add another trade, but to the European cluster:
INSERT INTO Trade VALUES(
2, 4, 'Tesla', '100', '270','Sell',now());
  6. Open the US-based session and confirm the last trade was not replicated from Europe:
select * from trade;

#you'll see only the first trade's data in the US cluster
 id | buyer_id | symbol | order_quantity | bid_price | trade_type |     order_time      
----+----------+--------+----------------+-----------+------------+---------------------
  1 |        1 | Google |              5 |       108 | Buy        | 2022-09-07 13:56:50

6. Start Geo-Partitioned Cluster

With a geo-partitioned cluster you can deliver low latency for both reads and writes in distant locations and comply with data regulatory requirements. You can also run requests spanning multiple distant regions (which is not possible with standalone clusters), although at a higher latency.

Let's set up a cluster with nodes in the USA and Europe:

  1. Remove the previous cluster:
docker kill yugabytedb_node1
docker container rm yugabytedb_node1

docker kill yugabytedb_node2
docker container rm yugabytedb_node2

docker kill yugabytedb_node3
docker container rm yugabytedb_node3

docker kill yugabytedb_replica1
docker container rm yugabytedb_replica1

rm -r ~/yb_docker_data
mkdir ~/yb_docker_data
  2. Start a two-node cluster with nodes in distant locations:
 docker run -d --name yugabytedb_node1 --net yugabytedb_network \
  -p 7001:7000 -p 9000:9000 -p 5433:5433 \
  -v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --listen=yugabytedb_node1 \
  --base_dir=/home/yugabyte/yb_data --daemon=false \
  --master_flags="placement_cloud=aws,placement_region=us_west,placement_zone=zone-a" \
  --tserver_flags="placement_cloud=aws,placement_region=us_west,placement_zone=zone-a"

docker run -d --name yugabytedb_node2 --net yugabytedb_network \
  -v ~/yb_docker_data/node2:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --join=yugabytedb_node1 --listen=yugabytedb_node2 \
  --base_dir=/home/yugabyte/yb_data --daemon=false \
  --master_flags="placement_cloud=aws,placement_region=europe_west,placement_zone=zone-a" \
  --tserver_flags="placement_cloud=aws,placement_region=europe_west,placement_zone=zone-a"
  3. Modify the placement info:
docker exec -i yugabytedb_node1 \
yb-admin -master_addresses yugabytedb_node1:7100,yugabytedb_node2:7100 \
modify_placement_info aws.us_west.zone-a:1,aws.europe_west.zone-a:1 2
  4. Confirm the cluster is running: http://127.0.0.1:7001

Create Geo-Partitioned Schema for Trades

  1. Connect to the cluster via psql:
psql -h 127.0.0.1 -p 5433 yugabyte -U yugabyte -w
  2. Create tablespaces:
CREATE TABLESPACE usa_tablespace WITH (
replica_placement='{"num_replicas": 1, "placement_blocks":
[{"cloud":"aws","region":"us_west","zone":"zone-a","min_num_replicas":1}]}'
);

CREATE TABLESPACE europe_tablespace WITH (
replica_placement='{"num_replicas": 1, "placement_blocks":
[{"cloud":"aws","region":"europe_west","zone":"zone-a","min_num_replicas":1}]}'
);
  3. Create a geo-partitioned version of the Trade table:
CREATE TABLE Trade(
  id integer NOT NULL,
  buyer_id integer NOT NULL,
  symbol text,
  order_quantity integer,
  bid_price float,
  trade_type text,
  country text,
  order_time timestamp(0) DEFAULT NOW(),
  PRIMARY KEY(id,country)
) PARTITION BY LIST(country);
  4. Create table partitions for the USA and a few European countries:
CREATE TABLE Trade_USA
  PARTITION OF Trade
  FOR VALUES IN ('USA') TABLESPACE usa_tablespace;
  
CREATE TABLE Trade_Europe
  PARTITION OF Trade
  FOR VALUES IN ('Germany','France','Italy') TABLESPACE europe_tablespace;
  5. Confirm the Trade table now comes with two partitions:
\d+ Trade

Partitioned table "public.trade"
# ...column details omitted...

Partition key: LIST (country)
Indexes:
    "trade_pkey" PRIMARY KEY, lsm (id HASH, country ASC)
Partitions: trade_europe FOR VALUES IN ('Germany', 'France', 'Italy'),
            trade_usa FOR VALUES IN ('USA')

Test Geo-Partitioned Cluster

  1. Insert a trade that took place in Europe:
INSERT INTO Trade VALUES(
2, 4, 'Tesla', '100', '270','Sell','Germany',now());
  2. Confirm the trade was placed on the node in Europe (you can do this by querying the trade_europe partition directly):
 select * from trade_europe;
 
 id | buyer_id | symbol | order_quantity | bid_price | trade_type | country |     order_time      
----+----------+--------+----------------+-----------+------------+---------+---------------------
  2 |        4 | Tesla  |            100 |       270 | Sell       | Germany | 2022-09-07 14:34:22
(1 row)
  3. Double-check there is no trade copy in the US-based partition:
select * from trade_usa;

 id | buyer_id | symbol | order_quantity | bid_price | trade_type | country | order_time 
----+----------+--------+----------------+-----------+------------+---------+------------
(0 rows)
  4. But US-based customers can still get data from Europe by querying the top-level Trade table (a note on partition pruning follows this list):
select * from trade;

 id | buyer_id | symbol | order_quantity | bid_price | trade_type | country |     order_time      
----+----------+--------+----------------+-----------+------------+---------+---------------------
  2 |        4 | Tesla  |            100 |       270 | Sell       | Germany | 2022-09-07 14:34:22
(1 row)
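
The cross-partition query above works because the planner prunes partitions only when the partition key is constrained. To see pruning in action, inspect a query plan from the host; filtering on country should touch only the matching partition (standard PostgreSQL EXPLAIN behavior, which YSQL inherits):

psql -h 127.0.0.1 -p 5433 yugabyte -U yugabyte -w \
  -c "explain select * from trade where country = 'Germany'"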

Useful Commands

This section lists commands that you might need during the workshop.

Market Orders App

Restart the application/container:

docker restart market-orders-instance

Remove the app's container:

docker kill market-orders-instance
docker container rm market-orders-instance

Start an app container without preloading the database:

docker run --name market-orders-instance --net yugabytedb_network \
market-orders-app:latest \
java -jar /home/target/market-orders-app.jar \
connectionProps=/home/yugabyte-docker.properties \
tradeStatsInterval=5000

Cleanup

  docker kill yugabytedb_node1
  docker container rm yugabytedb_node1
  
  docker kill yugabytedb_node2
  docker container rm yugabytedb_node2
  
  docker kill yugabytedb_node3
  docker container rm yugabytedb_node3
  
  docker kill yugabytedb_replica1
  docker container rm yugabytedb_replica1
  
  docker kill market-orders-instance
  docker container rm market-orders-instance
  
  rm -r ~/yb_docker_data
  mkdir ~/yb_docker_data
  
  docker network rm yugabytedb_network
  docker volume prune
Comment from @dmagda (Oct 20, 2022) - a compact recipe for starting a single-node cluster with a read replica:

rm -r ~/yb_docker_data
mkdir ~/yb_docker_data

docker run -d --name yugabytedb_node1 --net yugabytedb_network \
  -p 7001:7000 -p 9000:9000 -p 5433:5433 \
  -v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yugabyted start --listen=yugabytedb_node1 \
  --base_dir=/home/yugabyte/yb_data --daemon=false

docker exec -i yugabytedb_node1 \
yb-admin -master_addresses yugabytedb_node1:7100 \
add_read_replica_placement_info aws.europe_west.zone-a:1 1 replica-node-europe

docker run -d --name yugabytedb_replica1 --net yugabytedb_network -p 5434:5433 \
  -v ~/yb_docker_data/replica1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:2.15.1.0-b175 \
  bin/yb-tserver --tserver_master_addrs yugabytedb_node1:7100 \
  --rpc_bind_addresses yugabytedb_replica1:9100 \
  --pgsql_proxy_bind_address yugabytedb_replica1:5433 --cql_proxy_bind_address yugabytedb_replica1:9042 \
  --webserver_interface yugabytedb_replica1 --fs_data_dirs /home/yugabyte/yb_data \
  --placement_cloud aws --placement_region europe_west --placement_zone zone-a --placement_uuid replica-node-europe 
