Cluster 1

start the cluster using mstart.sh:

MON=1 OSD=1 MDS=0 MGR=0 RGW=1 ../src/mstart.sh cluster1 -n -d
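
optionally, verify the cluster came up before continuing (this assumes the usual vstart-style layout where the conf lives under run/cluster1):

bin/ceph -c run/cluster1/ceph.conf -s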

create a default realm:

bin/radosgw-admin -c run/cluster1/ceph.conf realm create --rgw-realm=myrealm --default
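
optionally, check that the realm exists and is the default (exact output fields may differ between Ceph versions):

bin/radosgw-admin -c run/cluster1/ceph.conf realm list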

create a default master zonegroup:

bin/radosgw-admin -c run/cluster1/ceph.conf zonegroup create --rgw-zonegroup=mygroup --endpoints=http://localhost:8001 --master --default

create a default master zone:

bin/radosgw-admin -c run/cluster1/ceph.conf zone create --rgw-zone=zone1 --rgw-zonegroup=mygroup --endpoints=http://localhost:8001 --master --default
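
optionally, confirm that zone1 is registered as the master zone of mygroup (field names may vary slightly by version):

bin/radosgw-admin -c run/cluster1/ceph.conf zonegroup get --rgw-zonegroup=mygroup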

create a system user:

bin/radosgw-admin -c run/cluster1/ceph.conf user create --uid="synchronization-user" --display-name="Synchronization User" --system

and fetch the access_key and secret_key:

access_key=$(bin/radosgw-admin -c run/cluster1/ceph.conf user info --uid=synchronization-user | jq -r ".keys[0].access_key")

secret_key=$(bin/radosgw-admin -c run/cluster1/ceph.conf user info --uid=synchronization-user | jq -r ".keys[0].secret_key")
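
as a quick sanity check, make sure both variables are non-empty (an empty value usually means jq is not installed or the user was not created):

[ -n "$access_key" ] && [ -n "$secret_key" ] && echo "keys OK" || echo "missing keys"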

add the system user's keys to the master zone, and commit the period:

bin/radosgw-admin -c run/cluster1/ceph.conf zone modify --rgw-zone=zone1 --access-key="$access_key" --secret="$secret_key"

bin/radosgw-admin -c run/cluster1/ceph.conf period update --commit
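
optionally, inspect the committed period and check that zone1 shows up as the master zone (the output is a JSON document whose exact fields may vary by version):

bin/radosgw-admin -c run/cluster1/ceph.conf period get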

update the conf file run/cluster1/ceph.conf with the zone name:

[client.rgw.8001]
    rgw frontends = beast port=8001
    admin socket = /root/projects/ceph/build/run/cluster1/out/radosgw.8001.asok
    rgw_zone = zone1

restart the cluster without deleting it (don't use the "-n" flag):

../src/mstop.sh cluster1

MON=1 OSD=1 MDS=0 MGR=0 RGW=1 ../src/mstart.sh cluster1 -d
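
optionally, confirm the gateway is serving on port 8001 again (an anonymous request should get an S3 XML response back; assumes curl is available):

curl http://localhost:8001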

Cluster 2

start the cluster using mstart.sh:

MON=1 OSD=1 MDS=0 MGR=0 RGW=1 ../src/mstart.sh cluster2 -d -n

pull the realm (use the keys from cluster1):

bin/radosgw-admin -c run/cluster2/ceph.conf realm pull --url=http://localhost:8001 --access-key="$access_key" --secret="$secret_key"
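
optionally, confirm the pulled realm is now visible on cluster2 (output fields may vary by version):

bin/radosgw-admin -c run/cluster2/ceph.conf realm list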

create a secondary zone:

bin/radosgw-admin -c run/cluster2/ceph.conf zone create --rgw-zonegroup=mygroup --rgw-zone=zone2 --access-key="$access_key" --secret="$secret_key" --endpoints=http://localhost:8002
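
optionally, confirm that zone2 was created on cluster2:

bin/radosgw-admin -c run/cluster2/ceph.conf zone list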

update the conf file run/cluster2/ceph.conf with the zone name:

[client.rgw.8002]
    rgw frontends = beast port=8002
    admin socket = /root/projects/ceph/build/run/cluster2/out/radosgw.8002.asok
    rgw_zone = zone2

restart the cluster without deleting it (don't use the "-n" flag):

../src/mstop.sh cluster2

MON=1 OSD=1 MDS=0 MGR=0 RGW=1 ../src/mstart.sh cluster2 -d

update the period:

bin/radosgw-admin -c run/cluster2/ceph.conf period update --commit

check sync status in both clusters:

bin/radosgw-admin -c run/cluster1/ceph.conf sync status
bin/radosgw-admin -c run/cluster2/ceph.conf sync status
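
syncing usually takes a few seconds; instead of rerunning the command by hand, a small polling loop can be used (a rough sketch that assumes recent versions print "caught up" once metadata/data sync is done):

for i in $(seq 1 30); do
  bin/radosgw-admin -c run/cluster2/ceph.conf sync status | grep -q "caught up" && break
  sleep 2
done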

create a bucket in zone1 (using s3cmd), and upload an object:

s3cmd --host=localhost:8001 --region="mygroup" --access_key="$access_key" --secret_key="$secret_key" mb s3://mybucket
head -c 1K </dev/urandom > myfile
s3cmd --host=localhost:8001 --region="mygroup" --access_key="$access_key" --secret_key="$secret_key" put myfile s3://mybucket --force
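
after a short while the bucket and object should be visible in zone2 as well; listing through the zone2 endpoint is a quick check before downloading:

s3cmd --host=localhost:8002 --region="mygroup" --access_key="$access_key" --secret_key="$secret_key" ls s3://mybucket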

fetch the file from zone2:

s3cmd --host=localhost:8002 --region="mygroup" --access_key="$access_key" --secret_key="$secret_key" get s3://mybucket/myfile --force
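
as a final check, the copy fetched from zone2 can be compared against a fresh download from zone1 (myfile.zone1 is just a hypothetical local file name; assumes cmp is available):

s3cmd --host=localhost:8001 --region="mygroup" --access_key="$access_key" --secret_key="$secret_key" get s3://mybucket/myfile myfile.zone1
cmp myfile myfile.zone1 && echo "zones match"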