
@kalaspuffar
Last active April 24, 2024 14:00
Simple installation of a Ceph RADOS gateway
Install the gateway package:

sudo apt install radosgw

Create a data directory and a cephx keyring for the gateway instance:

sudo mkdir -p /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`
sudo ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/keyring

Add the following section to /etc/ceph/ceph.conf on the gateway node, adjusting the host, keyring path, and endpoint address for your setup:

[client.rgw.n1]
host = n1
keyring = /var/lib/ceph/radosgw/ceph-rgw.n1/keyring
log file = /var/log/ceph/ceph-rgw-n1.log
rgw frontends = "beast endpoint=192.168.6.44:8080"
rgw thread pool size = 512

Start, check, and enable the gateway service:

sudo systemctl start ceph-radosgw@rgw.`hostname -s`
sudo systemctl status ceph-radosgw@rgw.`hostname -s`
sudo systemctl enable ceph-radosgw@rgw.`hostname -s`


Create a realm, a master zonegroup, and a master zone:

sudo radosgw-admin realm create --rgw-realm=eu-east --default
sudo radosgw-admin zonegroup create --rgw-zonegroup=eu --endpoints=http://n1:8080,http://n2:8080,http://n3:8080 --rgw-realm=eu-east --master --default

sudo radosgw-admin zone create --rgw-zonegroup=eu --endpoints=http://n1:8080,http://n2:8080,http://n3:8080 --rgw-zone=eu-east --master --default

Remove the default zonegroup and zone, committing the period after each change:

sudo radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default
sudo radosgw-admin period update --commit
sudo radosgw-admin zone delete --rgw-zone=default
sudo radosgw-admin period update --commit
sudo radosgw-admin zonegroup delete --rgw-zonegroup=default
sudo radosgw-admin period update --commit

If pool creation later fails with "Numerical result out of range", check and raise the monitors' PG-per-OSD limit:

sudo ceph config get mon mon_max_pg_per_osd
sudo ceph config set mon mon_max_pg_per_osd 500
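To see why a small cluster trips the default limit of 250, here is a rough back-of-the-envelope check; the pool count, pg_num, and replication size below are illustrative assumptions, not values from this gist:

```shell
# Rough check: replicated PGs per OSD must stay below mon_max_pg_per_osd.
pools=7         # RGW creates several pools at startup (.rgw.root, zone pools, ...)
pg_per_pool=32  # assumed pg_num per pool
size=3          # assumed replication factor
osds=1          # a single-OSD test cluster
echo $(( pools * pg_per_pool * size / osds ))   # prints 672 with these numbers
```

672 is far above the default of 250, which matches the "pool_create returned (34)" failure reported in the comments below.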


Delete the pools left over from the default zone:

sudo ceph osd pool delete default.rgw.control default.rgw.control --yes-i-really-really-mean-it
sudo ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it
sudo ceph osd pool delete default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it


Create a system user for zone synchronization; its generated access key and secret are used in the next step:

sudo radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system

sudo radosgw-admin zone modify --rgw-zone=eu-east --access-key=9NNB6GYTK5Z8GDUORORH --secret=r7NR3YwciVzdlrS4eNuHwAfkLQ0cMFjj4LEJuBbv
sudo radosgw-admin period update --commit
Point the gateway at the new zone by adding this line to its [client.rgw.*] section in /etc/ceph/ceph.conf, then restart:

rgw_zone = eu-east
sudo systemctl restart ceph-radosgw@rgw.`hostname -s`

sudo ceph dashboard set-rgw-credentials


To tear the gateway down again, stop and disable the service, remove its data directory, and delete the pools it created:

sudo systemctl stop ceph-radosgw@rgw.`hostname -s`
sudo systemctl disable ceph-radosgw@rgw.`hostname -s`
sudo rm -rf /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`


sudo ceph osd pool delete eu-east.rgw.buckets.data eu-east.rgw.buckets.data --yes-i-really-really-mean-it
sudo ceph osd pool delete eu-east.rgw.buckets.index eu-east.rgw.buckets.index --yes-i-really-really-mean-it
sudo ceph osd pool delete eu-east.rgw.control eu-east.rgw.control --yes-i-really-really-mean-it
sudo ceph osd pool delete eu-east.rgw.log eu-east.rgw.log --yes-i-really-really-mean-it
sudo ceph osd pool delete eu-east.rgw.meta eu-east.rgw.meta --yes-i-really-really-mean-it
sudo ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
@zentavr

zentavr commented Mar 21, 2023

I have an issue with that configuration:

Mar 22 00:16:53 rgw-slow-dev01 systemd[1]: Starting LSB: radosgw RESTful rados gateway...
Mar 22 00:16:53 rgw-slow-dev01 radosgw[85035]: Starting client.radosgw.rgw-slow-dev01...
Mar 22 00:16:53 rgw-slow-dev01 radosgw[85058]: failed to fetch mon config (--no-mon-config to skip)
Mar 22 00:16:53 rgw-slow-dev01 systemd[1]: radosgw.service: Control process exited, code=exited, status=1/FAILURE

@zentavr

zentavr commented Mar 22, 2023

Many manuals give ceph auth get-or-create client.rgw..... as the example for creating the credentials. When installing
Ceph from the deb repo https://download.ceph.com/debian-octopus/, radosgw uses the /etc/init.d/radosgw script, which looks for
client.radosgw. prefixes in /etc/ceph/ceph.conf. In that case the keyring must be created for client.radosgw.* instead of client.rgw.*.
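For illustration, the two naming schemes differ only in the prefix (host n1 is hypothetical here):

```shell
host=n1
echo "client.rgw.${host}"      # the name this gist's ceph auth command creates
echo "client.radosgw.${host}"  # the name the Debian /etc/init.d/radosgw script looks for
```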

@fpiraneo

fpiraneo commented Mar 23, 2023

I'm pretty sure that with the release of Quincy something changed in the setup; after starting radosgw on the first monitor (called mon1 in my case) the daemon crashed with the following in the logs:

2023-03-20T16:04:36.414+0100  0 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process radosgw, pid 2134
2023-03-20T16:04:36  0 framework: beast
2023-03-20T16:04:36  0 framework conf key: endpoint, val: mon1.anonicloud.intra:8080
2023-03-20T16:04:36  1 radosgw_Main not setting numa affinity
2023-03-20T16:04:36  1 rgw_d3n: rgw_d3n_l1_local_datacache_enabled=0
2023-03-20T16:04:36  1 D3N datacache enabled: 0
2023-03-20T16:04:37  0 rgw main: rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2023-03-20T16:04:37  0 rgw main: failed reading realm info: ret -34 (34) Numerical result out of range
2023-03-20T16:04:37  0 rgw main: ERROR: failed to start notify service ((34) Numerical result out of range
2023-03-20T16:04:37  0 rgw main: ERROR: failed to init services (ret=(34) Numerical result out of range)
2023-03-20T16:04:37 -1 Couldn't init storage provider (RADOS)

So it seems it is not able to create the default pools you get after starting radosgw for the first time; I started with a clean cluster.

One solution I found is to set osd_pool_default_pgp_num to 1; a second solution (not tested) is to run:

# ceph config get mon mon_max_pg_per_osd
250
# ceph config set mon mon_max_pg_per_osd 500

Set it to 500 just before starting radosgw for the first time.

Any feedback about this is appreciated.

@kalaspuffar
Author

Hi @fpiraneo and @zentavr

I have never seen these errors myself, but the solutions you've found seem reasonable. On my end it worked just fine with Pacific, and then I got access denied when I upgraded to Quincy, without any real explanation. Still investigating. The API seems a bit strange: we are switching to the S3 API more and more because of its stable interface, but if an upgrade can change fundamentals like this, that might not be a good idea.

Best regards
Daniel

@icepic

icepic commented Dec 27, 2023

> (quoting @fpiraneo's comment of Mar 23, 2023, above)

https://tracker.ceph.com/issues/62770
I've seen this also.

@kalaspuffar
Author

Hi @icepic

It all has to do with the size of the cluster; a large enough cluster should not have this problem, but for a single-node cluster there are probably tweaks you need for all kinds of services.
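For reference, overrides along these lines in ceph.conf can keep a tiny test cluster under the limit (the values are illustrative only, not something tested in this thread):

```ini
; Illustrative small-cluster overrides in /etc/ceph/ceph.conf
[global]
osd_pool_default_pg_num = 8    ; smaller default pg_num for new pools
osd_pool_default_pgp_num = 8
mon_max_pg_per_osd = 500       ; raise the per-OSD PG ceiling
```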

Best regards
Daniel

@plumbery

Hello @icepic, thanks for the video and repo. It's very helpful.
I ran into trouble at the credentials step. I ran this command but got an error:

root@pm01:/etc/ceph# ceph dashboard set-rgw-credentials
Error EINVAL: No RGW credentials found, please consult the documentation on how to enable RGW for the dashboard.

I think this command looks for a keyring file in some directory. I checked my /etc/ceph directory and there is no client.admin keyring there, and I don't know why or how to create it.

I tried to import the credentials manually with the commands below, using the synchronization-user key and secret:

ceph dashboard set-rgw-api-secret-key -i ./secretkey.txt
ceph dashboard set-rgw-api-access-key -i ./accesskey.txt

After importing the API secret and key I can see the object gateway in the dashboard, but I cannot list any objects; it seems I still have a credentials problem and cannot fetch anything from RGW.

What do you think, how can I resolve this issue?
