@ti-ka
Created July 18, 2023 22:38
This file has been truncated, but you can view the full file.
sa@r0:~$ cat mon.log
cluster 2023-07-18T19:58:12.429674+0000 mgr.b (mgr.12834102) 26035 : cluster [DBG] pgmap v26600: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:12.609555+0000 osd.54 (osd.54) 51250 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:13.625523+0000 osd.54 (osd.54) 51251 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:14.430644+0000 mgr.b (mgr.12834102) 26036 : cluster [DBG] pgmap v26601: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:14.623314+0000 osd.54 (osd.54) 51252 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:14.848416+0000 mon.l (mon.2) 15413 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:14.848704+0000 mon.l (mon.2) 15414 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:58:15.764+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
debug 2023-07-18T19:58:16.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:58:16.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:15.643346+0000 osd.54 (osd.54) 51253 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:16.306415+0000 mon.k (mon.1) 18682 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:16.306690+0000 mon.k (mon.1) 18683 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:16.497040+0000 mon.j (mon.0) 21381 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:16.497354+0000 mon.j (mon.0) 21382 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:58:17.396+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:58:17.396+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3900625861' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:16.431632+0000 mgr.b (mgr.12834102) 26037 : cluster [DBG] pgmap v26602: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:16.654023+0000 osd.54 (osd.54) 51254 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:17.400038+0000 mon.j (mon.0) 21383 : audit [DBG] from='client.? 10.1.182.12:0/3900625861' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:17.659282+0000 osd.54 (osd.54) 51255 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:18.432641+0000 mgr.b (mgr.12834102) 26038 : cluster [DBG] pgmap v26603: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:18.638723+0000 osd.54 (osd.54) 51256 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:19.598989+0000 osd.54 (osd.54) 51257 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:20.764+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:58:20.433619+0000 mgr.b (mgr.12834102) 26039 : cluster [DBG] pgmap v26604: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:20.643118+0000 osd.54 (osd.54) 51258 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:21.686658+0000 osd.54 (osd.54) 51259 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:22.434596+0000 mgr.b (mgr.12834102) 26040 : cluster [DBG] pgmap v26605: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:22.659690+0000 osd.54 (osd.54) 51260 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:23.651260+0000 osd.54 (osd.54) 51261 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:24.435636+0000 mgr.b (mgr.12834102) 26041 : cluster [DBG] pgmap v26606: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:24.658475+0000 osd.54 (osd.54) 51262 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:24.855445+0000 mon.l (mon.2) 15415 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:24.855708+0000 mon.l (mon.2) 15416 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:58:25.768+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
debug 2023-07-18T19:58:26.480+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:58:26.480+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:25.670645+0000 osd.54 (osd.54) 51263 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:25.707056+0000 mon.k (mon.1) 18684 : audit [DBG] from='client.? 10.1.207.132:0/3707239498' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T19:58:26.285469+0000 mon.k (mon.1) 18685 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:26.285770+0000 mon.k (mon.1) 18686 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:26.481614+0000 mon.j (mon.0) 21384 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:26.481939+0000 mon.j (mon.0) 21385 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:58:26.876+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T19:58:26.876+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:26.436638+0000 mgr.b (mgr.12834102) 26042 : cluster [DBG] pgmap v26607: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:26.701279+0000 osd.54 (osd.54) 51264 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:26.881045+0000 mon.j (mon.0) 21386 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:27.698316+0000 osd.54 (osd.54) 51265 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:28.437610+0000 mgr.b (mgr.12834102) 26043 : cluster [DBG] pgmap v26608: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:28.666173+0000 osd.54 (osd.54) 51266 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:29.684745+0000 osd.54 (osd.54) 51267 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:30.768+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:58:30.438673+0000 mgr.b (mgr.12834102) 26044 : cluster [DBG] pgmap v26609: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:30.713154+0000 osd.54 (osd.54) 51268 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:32.872+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:58:32.872+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2587075786' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:31.736555+0000 osd.54 (osd.54) 51269 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:32.439650+0000 mgr.b (mgr.12834102) 26045 : cluster [DBG] pgmap v26610: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:32.722318+0000 osd.54 (osd.54) 51270 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:32.876933+0000 mon.j (mon.0) 21387 : audit [DBG] from='client.? 10.1.182.12:0/2587075786' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:33.685948+0000 osd.54 (osd.54) 51271 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:34.440671+0000 mgr.b (mgr.12834102) 26046 : cluster [DBG] pgmap v26611: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:34.679889+0000 osd.54 (osd.54) 51272 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:34.874608+0000 mon.l (mon.2) 15417 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:34.874887+0000 mon.l (mon.2) 15418 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:35.190653+0000 mon.l (mon.2) 15419 : audit [DBG] from='client.? 10.1.222.242:0/3776822071' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T19:58:35.621424+0000 mon.l (mon.2) 15420 : audit [DBG] from='client.? 10.1.222.242:0/401213622' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T19:58:35.772+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
debug 2023-07-18T19:58:36.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:58:36.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:35.668748+0000 osd.54 (osd.54) 51273 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:36.284796+0000 mon.k (mon.1) 18687 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:36.285068+0000 mon.k (mon.1) 18688 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:36.487125+0000 mon.j (mon.0) 21388 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:36.487452+0000 mon.j (mon.0) 21389 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:36.441751+0000 mgr.b (mgr.12834102) 26047 : cluster [DBG] pgmap v26612: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:36.669873+0000 osd.54 (osd.54) 51274 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:37.688202+0000 osd.54 (osd.54) 51275 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:38.442582+0000 mgr.b (mgr.12834102) 26048 : cluster [DBG] pgmap v26613: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:38.684338+0000 osd.54 (osd.54) 51276 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:39.647912+0000 osd.54 (osd.54) 51277 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:40.772+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:58:40.443547+0000 mgr.b (mgr.12834102) 26049 : cluster [DBG] pgmap v26614: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:40.685298+0000 osd.54 (osd.54) 51278 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:41.561448+0000 mon.k (mon.1) 18689 : audit [DBG] from='client.? 10.1.222.242:0/3251096258' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
debug 2023-07-18T19:58:41.876+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T19:58:41.876+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:41.676461+0000 osd.54 (osd.54) 51279 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:41.879307+0000 mon.j (mon.0) 21390 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T19:58:41.978598+0000 mon.k (mon.1) 18690 : audit [DBG] from='client.? 10.1.222.242:0/1679869685' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:42.444574+0000 mgr.b (mgr.12834102) 26050 : cluster [DBG] pgmap v26615: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:42.685501+0000 osd.54 (osd.54) 51280 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:43.640437+0000 osd.54 (osd.54) 51281 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:44.675421+0000 osd.54 (osd.54) 51282 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:44.445535+0000 mgr.b (mgr.12834102) 26051 : cluster [DBG] pgmap v26616: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:58:44.854336+0000 mon.l (mon.2) 15421 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:44.854600+0000 mon.l (mon.2) 15422 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:45.662637+0000 osd.54 (osd.54) 51283 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:45.772+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
debug 2023-07-18T19:58:45.928+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:58:45.928+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/582392387' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T19:58:46.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:58:46.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:45.932087+0000 mon.j (mon.0) 21391 : audit [DBG] from='client.? 10.1.207.132:0/582392387' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T19:58:46.306379+0000 mon.k (mon.1) 18691 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:46.306651+0000 mon.k (mon.1) 18692 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:46.496594+0000 mon.j (mon.0) 21392 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:46.496887+0000 mon.j (mon.0) 21393 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:46.699404+0000 osd.54 (osd.54) 51284 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:46.446327+0000 mgr.b (mgr.12834102) 26052 : cluster [DBG] pgmap v26617: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:47.682943+0000 osd.54 (osd.54) 51285 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:48.360+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:58:48.360+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/52427899' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T19:58:48.361530+0000 mon.j (mon.0) 21394 : audit [DBG] from='client.? 10.1.182.12:0/52427899' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:48.634041+0000 osd.54 (osd.54) 51286 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:48.447129+0000 mgr.b (mgr.12834102) 26053 : cluster [DBG] pgmap v26618: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:49.605703+0000 osd.54 (osd.54) 51287 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:50.776+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
audit 2023-07-18T19:58:49.992513+0000 mon.k (mon.1) 18693 : audit [DBG] from='client.? 10.1.222.242:0/3200420585' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
audit 2023-07-18T19:58:50.425171+0000 mon.k (mon.1) 18694 : audit [DBG] from='client.? 10.1.222.242:0/3593397573' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile get", "name": "default", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:50.614084+0000 osd.54 (osd.54) 51288 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:50.920+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"} v 0) v1
debug 2023-07-18T19:58:50.920+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"}]: dispatch
debug 2023-07-18T19:58:51.304+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"} v 0) v1
debug 2023-07-18T19:58:51.304+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:50.447959+0000 mgr.b (mgr.12834102) 26054 : cluster [DBG] pgmap v26619: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:58:50.924414+0000 mon.k (mon.1) 18695 : audit [INF] from='client.? 10.1.222.242:0/4093058818' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"}]: dispatch
audit 2023-07-18T19:58:50.925401+0000 mon.j (mon.0) 21395 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"}]: dispatch
audit 2023-07-18T19:58:51.308323+0000 mon.k (mon.1) 18696 : audit [INF] from='client.? 10.1.222.242:0/56921057' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"}]: dispatch
audit 2023-07-18T19:58:51.309190+0000 mon.j (mon.0) 21396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:51.620056+0000 osd.54 (osd.54) 51289 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:51.840+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"} v 0) v1
debug 2023-07-18T19:58:51.840+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
debug 2023-07-18T19:58:52.816+0000 7f7fb651d700 1 mon.j@0(leader).osd e20717 do_prune osdmap full prune enabled
audit 2023-07-18T19:58:51.841666+0000 mon.k (mon.1) 18697 : audit [INF] from='client.? 10.1.222.242:0/2513229431' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
audit 2023-07-18T19:58:51.842381+0000 mon.j (mon.0) 21397 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:52.573004+0000 osd.54 (osd.54) 51290 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:52.828+0000 7f7fb1513700 1 mon.j@0(leader).osd e20718 e20718: 57 total, 3 up, 41 in
debug 2023-07-18T19:58:52.832+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
debug 2023-07-18T19:58:52.832+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20718: 57 total, 3 up, 41 in
debug 2023-07-18T19:58:53.311+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"} v 0) v1
debug 2023-07-18T19:58:53.311+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T19:58:53.827+0000 7f7fb651d700 1 mon.j@0(leader).osd e20718 do_prune osdmap full prune enabled
debug 2023-07-18T19:58:53.835+0000 7f7fb1513700 1 mon.j@0(leader).osd e20719 e20719: 57 total, 3 up, 41 in
debug 2023-07-18T19:58:53.835+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]': finished
debug 2023-07-18T19:58:53.835+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20719: 57 total, 3 up, 41 in
cluster 2023-07-18T19:58:52.448746+0000 mgr.b (mgr.12834102) 26055 : cluster [DBG] pgmap v26620: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:58:52.833573+0000 mon.j (mon.0) 21398 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
cluster 2023-07-18T19:58:52.833608+0000 mon.j (mon.0) 21399 : cluster [DBG] osdmap e20718: 57 total, 3 up, 41 in
audit 2023-07-18T19:58:53.255868+0000 mon.k (mon.1) 18698 : audit [DBG] from='client.? 10.1.222.242:0/2351227178' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-ssd-erasure-default-data", "format": "json"}]: dispatch
audit 2023-07-18T19:58:53.313609+0000 mon.l (mon.2) 15423 : audit [INF] from='client.? 10.1.222.242:0/1513814697' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]: dispatch
audit 2023-07-18T19:58:53.314334+0000 mon.j (mon.0) 21400 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]: dispatch
cluster 2023-07-18T19:58:53.542762+0000 osd.54 (osd.54) 51291 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:58:53.841298+0000 mon.j (mon.0) 21401 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]': finished
cluster 2023-07-18T19:58:53.841387+0000 mon.j (mon.0) 21402 : cluster [DBG] osdmap e20719: 57 total, 3 up, 41 in
cluster 2023-07-18T19:58:54.537080+0000 osd.54 (osd.54) 51292 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:55.775+0000 7f7fb651d700 1 mon.j@0(leader).osd e20719 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:58:54.449494+0000 mgr.b (mgr.12834102) 26056 : cluster [DBG] pgmap v26623: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:58:54.866491+0000 mon.l (mon.2) 15424 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:54.866751+0000 mon.l (mon.2) 15425 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:55.573458+0000 osd.54 (osd.54) 51293 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:56.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:58:56.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:56.281944+0000 mon.k (mon.1) 18699 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:56.282225+0000 mon.k (mon.1) 18700 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:58:56.486181+0000 mon.j (mon.0) 21403 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:58:56.486446+0000 mon.j (mon.0) 21404 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:58:56.621696+0000 osd.54 (osd.54) 51294 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:58:56.875+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T19:58:56.875+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:56.450275+0000 mgr.b (mgr.12834102) 26057 : cluster [DBG] pgmap v26624: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:58:56.880114+0000 mon.j (mon.0) 21405 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:58:57.589204+0000 osd.54 (osd.54) 51295 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:58.628651+0000 osd.54 (osd.54) 51296 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:58:58.451080+0000 mgr.b (mgr.12834102) 26058 : cluster [DBG] pgmap v26625: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:58:59.647421+0000 osd.54 (osd.54) 51297 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:00.779+0000 7f7fb651d700 1 mon.j@0(leader).osd e20719 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:59:00.616044+0000 osd.54 (osd.54) 51298 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:00.451882+0000 mgr.b (mgr.12834102) 26059 : cluster [DBG] pgmap v26626: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:01.602611+0000 osd.54 (osd.54) 51299 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:02.611640+0000 osd.54 (osd.54) 51300 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:03.835+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:59:03.835+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/537212232' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:02.452657+0000 mgr.b (mgr.12834102) 26060 : cluster [DBG] pgmap v26627: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:03.621452+0000 osd.54 (osd.54) 51301 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:03.840977+0000 mon.j (mon.0) 21406 : audit [DBG] from='client.? 10.1.182.12:0/537212232' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:04.620605+0000 osd.54 (osd.54) 51302 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:04.885862+0000 mon.l (mon.2) 15426 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:04.886132+0000 mon.l (mon.2) 15427 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:05.779+0000 7f7fb651d700 1 mon.j@0(leader).osd e20719 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:59:04.453427+0000 mgr.b (mgr.12834102) 26061 : cluster [DBG] pgmap v26628: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:05.619153+0000 osd.54 (osd.54) 51303 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:06.155+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:59:06.155+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/3348460150' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T19:59:06.475+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:59:06.475+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:06.160838+0000 mon.j (mon.0) 21407 : audit [DBG] from='client.? 10.1.207.132:0/3348460150' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T19:59:06.181519+0000 mon.k (mon.1) 18701 : audit [DBG] from='client.? 10.1.222.242:0/230509413' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T19:59:06.300631+0000 mon.k (mon.1) 18702 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:06.300907+0000 mon.k (mon.1) 18703 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:06.477706+0000 mon.j (mon.0) 21408 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:06.477866+0000 mon.j (mon.0) 21409 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:06.582939+0000 mon.k (mon.1) 18704 : audit [DBG] from='client.? 10.1.222.242:0/3748411915' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:06.590262+0000 osd.54 (osd.54) 51304 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:06.454176+0000 mgr.b (mgr.12834102) 26062 : cluster [DBG] pgmap v26629: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:07.589867+0000 osd.54 (osd.54) 51305 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:08.563683+0000 osd.54 (osd.54) 51306 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:08.690368+0000 mon.k (mon.1) 18705 : audit [DBG] from='client.? 10.1.222.242:0/1172773421' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
debug 2023-07-18T19:59:09.111+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"} v 0) v1
debug 2023-07-18T19:59:09.111+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:08.454929+0000 mgr.b (mgr.12834102) 26063 : cluster [DBG] pgmap v26630: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:09.113480+0000 mon.l (mon.2) 15428 : audit [INF] from='client.? 10.1.222.242:0/2923762631' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"}]: dispatch
audit 2023-07-18T19:59:09.114233+0000 mon.j (mon.0) 21410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"}]: dispatch
audit 2023-07-18T19:59:09.592234+0000 mon.l (mon.2) 15429 : audit [DBG] from='client.? 10.1.222.242:0/1179315556' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-ssd-replica-default", "var": "all", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:09.612877+0000 osd.54 (osd.54) 51307 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:10.783+0000 7f7fb651d700 1 mon.j@0(leader).osd e20719 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
debug 2023-07-18T19:59:10.891+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"} v 0) v1
debug 2023-07-18T19:59:10.891+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]: dispatch
audit 2023-07-18T19:59:09.973649+0000 mon.k (mon.1) 18706 : audit [DBG] from='client.? 10.1.222.242:0/420612821' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-ssd-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T19:59:10.418943+0000 mon.k (mon.1) 18707 : audit [DBG] from='client.? 10.1.222.242:0/2308787060' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-ssd-replica-default", "var": "all", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:10.578565+0000 osd.54 (osd.54) 51308 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:10.834138+0000 mon.k (mon.1) 18708 : audit [DBG] from='client.? 10.1.222.242:0/2071522469' entity='client.admin' cmd=[{"prefix": "osd crush rule dump", "name": "ceph-ssd-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T19:59:10.894229+0000 mon.k (mon.1) 18709 : audit [INF] from='client.? 10.1.222.242:0/1158930582' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]: dispatch
audit 2023-07-18T19:59:10.895073+0000 mon.j (mon.0) 21411 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T19:59:11.783+0000 7f7fb651d700 1 mon.j@0(leader).osd e20719 do_prune osdmap full prune enabled
debug 2023-07-18T19:59:11.791+0000 7f7fb1513700 1 mon.j@0(leader).osd e20720 e20720: 57 total, 3 up, 41 in
debug 2023-07-18T19:59:11.791+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]': finished
debug 2023-07-18T19:59:11.791+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20720: 57 total, 3 up, 41 in
debug 2023-07-18T19:59:11.875+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T19:59:11.875+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:10.455694+0000 mgr.b (mgr.12834102) 26064 : cluster [DBG] pgmap v26631: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:11.569139+0000 osd.54 (osd.54) 51309 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:11.796708+0000 mon.j (mon.0) 21412 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]': finished
cluster 2023-07-18T19:59:11.796737+0000 mon.j (mon.0) 21413 : cluster [DBG] osdmap e20720: 57 total, 3 up, 41 in
audit 2023-07-18T19:59:11.879701+0000 mon.j (mon.0) 21414 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T19:59:12.569650+0000 mon.k (mon.1) 18710 : audit [DBG] from='client.? 10.1.222.242:0/3910590819' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:12.604560+0000 osd.54 (osd.54) 51310 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:12.456495+0000 mgr.b (mgr.12834102) 26065 : cluster [DBG] pgmap v26633: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:13.018995+0000 mon.k (mon.1) 18711 : audit [DBG] from='client.? 10.1.222.242:0/2883492251' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:13.573260+0000 osd.54 (osd.54) 51311 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:14.564311+0000 osd.54 (osd.54) 51312 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:14.866020+0000 mon.l (mon.2) 15430 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:14.866298+0000 mon.l (mon.2) 15431 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:15.783+0000 7f7fb651d700 1 mon.j@0(leader).osd e20720 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:59:14.457230+0000 mgr.b (mgr.12834102) 26066 : cluster [DBG] pgmap v26634: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:15.572953+0000 osd.54 (osd.54) 51313 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:16.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:59:16.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:16.282185+0000 mon.k (mon.1) 18712 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:16.282465+0000 mon.k (mon.1) 18713 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:16.483944+0000 mon.j (mon.0) 21415 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:16.484209+0000 mon.j (mon.0) 21416 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:59:16.586300+0000 osd.54 (osd.54) 51314 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:16.458000+0000 mgr.b (mgr.12834102) 26067 : cluster [DBG] pgmap v26635: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:17.603146+0000 osd.54 (osd.54) 51315 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:18.027+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21689. Immutable memtables: 0.
debug 2023-07-18T19:59:18.027+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.029541) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T19:59:18.027+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1647] Flushing memtable with next log file: 21689
debug 2023-07-18T19:59:18.027+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358029589, "job": 1647, "event": "flush_started", "num_memtables": 1, "num_entries": 2019, "num_deletes": 548, "total_data_size": 3094166, "memory_usage": 3130208, "flush_reason": "Manual Compaction"}
debug 2023-07-18T19:59:18.027+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1647] Level-0 flush table #21690: started
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358041916, "cf_name": "default", "job": 1647, "event": "table_file_creation", "file_number": 21690, "file_size": 2534276, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 2525820, "index_size": 4455, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 29056, "raw_average_key_size": 24, "raw_value_size": 2505252, "raw_average_value_size": 2117, "num_data_blocks": 174, "num_entries": 1183, "num_deletions": 548, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710260, "oldest_key_time": 1689710260, "file_creation_time": 1689710358, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1647] Level-0 flush table #21690: 2534276 bytes OK
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.042118) [db/memtable_list.cc:449] [default] Level-0 commit table #21690 started
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.042398) [db/memtable_list.cc:628] [default] Level-0 commit table #21690: memtable #1 done
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.042410) EVENT_LOG_v1 {"time_micros": 1689710358042406, "job": 1647, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.042422) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1647] Try to delete WAL files size 3083901, prev total WAL file size 3083901, number of live WAL files 2.
debug 2023-07-18T19:59:18.039+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021684.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T19:59:18.039+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T19:59:18.039+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.043102) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323534343138' seq:72057594037927935, type:20 .. '7061786F730036323534363730' seq:0, type:0; will stop at (end)
debug 2023-07-18T19:59:18.039+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1648] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T19:59:18.039+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1648 Base level 0, inputs: [21690(2474KB)], [21686(64MB) 21687(64MB) 21688(4912KB)]
debug 2023-07-18T19:59:18.039+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358043139, "job": 1648, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21690], "files_L6": [21686, 21687, 21688], "score": -1, "input_data_size": 142115763}
debug 2023-07-18T19:59:18.239+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1648] Generated table #21691: 21848 keys, 67274204 bytes
debug 2023-07-18T19:59:18.239+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358245352, "cf_name": "default", "job": 1648, "event": "table_file_creation", "file_number": 21691, "file_size": 67274204, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67159717, "index_size": 58775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54725, "raw_key_size": 592381, "raw_average_key_size": 27, "raw_value_size": 66804885, "raw_average_value_size": 3057, "num_data_blocks": 2173, "num_entries": 21848, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710358, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T19:59:18.443+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1648] Generated table #21692: 13069 keys, 67276463 bytes
debug 2023-07-18T19:59:18.443+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358448768, "cf_name": "default", "job": 1648, "event": "table_file_creation", "file_number": 21692, "file_size": 67276463, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67148975, "index_size": 93793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 289499, "raw_average_key_size": 22, "raw_value_size": 66871152, "raw_average_value_size": 5116, "num_data_blocks": 3478, "num_entries": 13069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710358, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T19:59:18.459+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1648] Generated table #21693: 538 keys, 5170040 bytes
debug 2023-07-18T19:59:18.459+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358465142, "cf_name": "default", "job": 1648, "event": "table_file_creation", "file_number": 21693, "file_size": 5170040, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 5161511, "index_size": 6070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11430, "raw_average_key_size": 21, "raw_value_size": 5148200, "raw_average_value_size": 9569, "num_data_blocks": 238, "num_entries": 538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710358, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T19:59:18.463+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1648] Compacted 1@0 + 3@6 files to L6 => 139720707 bytes
debug 2023-07-18T19:59:18.463+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.467003) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 336.8 rd, 331.1 wr, level 6, files in(1, 3) out(3) MB in(2.4, 133.1) out(133.2), read-write-amplify(111.2) write-amplify(55.1) OK, records in: 36570, records dropped: 1115 output_compression: NoCompression
debug 2023-07-18T19:59:18.463+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-19:59:18.467024) EVENT_LOG_v1 {"time_micros": 1689710358467015, "job": 1648, "event": "compaction_finished", "compaction_time_micros": 422021, "compaction_time_cpu_micros": 213533, "output_level": 6, "num_output_files": 3, "total_output_size": 139720707, "num_input_records": 36570, "num_output_records": 35455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T19:59:18.463+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021690.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T19:59:18.463+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358467621, "job": 1648, "event": "table_file_deletion", "file_number": 21690}
debug 2023-07-18T19:59:18.463+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021688.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T19:59:18.463+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358468480, "job": 1648, "event": "table_file_deletion", "file_number": 21688}
debug 2023-07-18T19:59:18.475+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021687.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T19:59:18.475+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358480773, "job": 1648, "event": "table_file_deletion", "file_number": 21687}
debug 2023-07-18T19:59:18.491+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021686.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T19:59:18.491+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710358493591, "job": 1648, "event": "table_file_deletion", "file_number": 21686}
debug 2023-07-18T19:59:18.491+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T19:59:18.491+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T19:59:18.491+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T19:59:18.491+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T19:59:18.491+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
cluster 2023-07-18T19:59:18.634888+0000 osd.54 (osd.54) 51316 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:19.311+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:59:19.311+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2899579170' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:18.458785+0000 mgr.b (mgr.12834102) 26068 : cluster [DBG] pgmap v26636: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:19.315247+0000 mon.j (mon.0) 21417 : audit [DBG] from='client.? 10.1.182.12:0/2899579170' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:19.622795+0000 osd.54 (osd.54) 51317 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:20.787+0000 7f7fb651d700 1 mon.j@0(leader).osd e20720 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:59:20.663204+0000 osd.54 (osd.54) 51318 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:20.459734+0000 mgr.b (mgr.12834102) 26069 : cluster [DBG] pgmap v26637: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:21.664284+0000 osd.54 (osd.54) 51319 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:22.697628+0000 osd.54 (osd.54) 51320 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:22.460501+0000 mgr.b (mgr.12834102) 26070 : cluster [DBG] pgmap v26638: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:23.711967+0000 osd.54 (osd.54) 51321 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:24.677971+0000 osd.54 (osd.54) 51322 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:24.847454+0000 mon.l (mon.2) 15432 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:24.847777+0000 mon.l (mon.2) 15433 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:25.787+0000 7f7fb651d700 1 mon.j@0(leader).osd e20720 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
cluster 2023-07-18T19:59:24.461423+0000 mgr.b (mgr.12834102) 26071 : cluster [DBG] pgmap v26639: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:25.709503+0000 osd.54 (osd.54) 51323 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:26.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:59:26.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:26.875+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T19:59:26.875+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T19:59:26.281508+0000 mon.k (mon.1) 18714 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:26.281802+0000 mon.k (mon.1) 18715 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:26.344330+0000 mon.l (mon.2) 15434 : audit [DBG] from='client.? 10.1.207.132:0/1269784163' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T19:59:26.483909+0000 mon.j (mon.0) 21418 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:26.484078+0000 mon.j (mon.0) 21419 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:26.879305+0000 mon.j (mon.0) 21420 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:26.462193+0000 mgr.b (mgr.12834102) 26072 : cluster [DBG] pgmap v26640: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:26.742279+0000 osd.54 (osd.54) 51324 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:27.756700+0000 osd.54 (osd.54) 51325 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:28.462936+0000 mgr.b (mgr.12834102) 26073 : cluster [DBG] pgmap v26641: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:28.793224+0000 osd.54 (osd.54) 51326 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:29.945755+0000 mon.l (mon.2) 15435 : audit [DBG] from='client.? 10.1.222.242:0/593129556' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
debug 2023-07-18T19:59:30.791+0000 7f7fb651d700 1 mon.j@0(leader).osd e20720 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 314572800
debug 2023-07-18T19:59:30.891+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"} v 0) v1
debug 2023-07-18T19:59:30.891+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"}]: dispatch
cluster 2023-07-18T19:59:29.817906+0000 osd.54 (osd.54) 51327 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:30.449016+0000 mon.k (mon.1) 18716 : audit [DBG] from='client.? 10.1.222.242:0/4253381240' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile get", "name": "default", "format": "json"}]: dispatch
audit 2023-07-18T19:59:30.895221+0000 mon.k (mon.1) 18717 : audit [INF] from='client.? 10.1.222.242:0/2337494033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"}]: dispatch
audit 2023-07-18T19:59:30.896185+0000 mon.j (mon.0) 21421 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"}]: dispatch
debug 2023-07-18T19:59:31.339+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"} v 0) v1
debug 2023-07-18T19:59:31.339+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"}]: dispatch
debug 2023-07-18T19:59:31.739+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"} v 0) v1
debug 2023-07-18T19:59:31.739+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
debug 2023-07-18T19:59:32.119+0000 7f7fb651d700 1 mon.j@0(leader).osd e20720 do_prune osdmap full prune enabled
debug 2023-07-18T19:59:32.123+0000 7f7fb1513700 1 mon.j@0(leader).osd e20721 e20721: 57 total, 3 up, 41 in
debug 2023-07-18T19:59:32.127+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
debug 2023-07-18T19:59:32.127+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20721: 57 total, 3 up, 41 in
cluster 2023-07-18T19:59:30.463751+0000 mgr.b (mgr.12834102) 26074 : cluster [DBG] pgmap v26642: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:30.790938+0000 osd.54 (osd.54) 51328 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:31.341597+0000 mon.k (mon.1) 18718 : audit [INF] from='client.? 10.1.222.242:0/341233941' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"}]: dispatch
audit 2023-07-18T19:59:31.342472+0000 mon.j (mon.0) 21422 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"}]: dispatch
audit 2023-07-18T19:59:31.744523+0000 mon.k (mon.1) 18719 : audit [INF] from='client.? 10.1.222.242:0/2184614141' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
audit 2023-07-18T19:59:31.745292+0000 mon.j (mon.0) 21423 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
debug 2023-07-18T19:59:32.607+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"} v 0) v1
debug 2023-07-18T19:59:32.607+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T19:59:33.131+0000 7f7fb651d700 1 mon.j@0(leader).osd e20721 do_prune osdmap full prune enabled
cluster 2023-07-18T19:59:31.831156+0000 osd.54 (osd.54) 51329 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:32.130669+0000 mon.j (mon.0) 21424 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
cluster 2023-07-18T19:59:32.130719+0000 mon.j (mon.0) 21425 : cluster [DBG] osdmap e20721: 57 total, 3 up, 41 in
audit 2023-07-18T19:59:32.547305+0000 mon.k (mon.1) 18720 : audit [DBG] from='client.? 10.1.222.242:0/1974671199' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-nvme-erasure-default-data", "format": "json"}]: dispatch
audit 2023-07-18T19:59:32.610409+0000 mon.k (mon.1) 18721 : audit [INF] from='client.? 10.1.222.242:0/1587526358' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]: dispatch
audit 2023-07-18T19:59:32.611170+0000 mon.j (mon.0) 21426 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T19:59:33.143+0000 7f7fb1513700 1 mon.j@0(leader).osd e20722 e20722: 57 total, 3 up, 41 in
debug 2023-07-18T19:59:33.147+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]': finished
debug 2023-07-18T19:59:33.147+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20722: 57 total, 3 up, 41 in
cluster 2023-07-18T19:59:32.464533+0000 mgr.b (mgr.12834102) 26075 : cluster [DBG] pgmap v26644: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:32.822870+0000 osd.54 (osd.54) 51330 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:33.149418+0000 mon.j (mon.0) 21427 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]': finished
cluster 2023-07-18T19:59:33.149449+0000 mon.j (mon.0) 21428 : cluster [DBG] osdmap e20722: 57 total, 3 up, 41 in
debug 2023-07-18T19:59:34.787+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:59:34.787+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1423633984' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:33.834779+0000 osd.54 (osd.54) 51331 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:34.791728+0000 mon.j (mon.0) 21429 : audit [DBG] from='client.? 10.1.182.12:0/1423633984' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T19:59:34.876178+0000 mon.l (mon.2) 15436 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:34.876440+0000 mon.l (mon.2) 15437 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:35.791+0000 7f7fb651d700 1 mon.j@0(leader).osd e20722 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T19:59:34.465252+0000 mgr.b (mgr.12834102) 26076 : cluster [DBG] pgmap v26646: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:34.841431+0000 osd.54 (osd.54) 51332 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:36.475+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:59:36.475+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:59:35.829357+0000 osd.54 (osd.54) 51333 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:36.288090+0000 mon.k (mon.1) 18722 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:36.288382+0000 mon.k (mon.1) 18723 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:36.478265+0000 mon.j (mon.0) 21430 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:36.478425+0000 mon.j (mon.0) 21431 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:37.148373+0000 mon.k (mon.1) 18724 : audit [DBG] from='client.? 10.1.222.242:0/748550596' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:36.465994+0000 mgr.b (mgr.12834102) 26077 : cluster [DBG] pgmap v26647: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:36.781268+0000 osd.54 (osd.54) 51334 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:37.540228+0000 mon.l (mon.2) 15438 : audit [DBG] from='client.? 10.1.222.242:0/468726098' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:37.829239+0000 osd.54 (osd.54) 51335 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:38.466745+0000 mgr.b (mgr.12834102) 26078 : cluster [DBG] pgmap v26648: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:38.799364+0000 osd.54 (osd.54) 51336 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:40.795+0000 7f7fb651d700 1 mon.j@0(leader).osd e20722 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T19:59:39.807059+0000 osd.54 (osd.54) 51337 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:41.875+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T19:59:41.875+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:40.467508+0000 mgr.b (mgr.12834102) 26079 : cluster [DBG] pgmap v26649: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:40.775795+0000 osd.54 (osd.54) 51338 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:41.879489+0000 mon.j (mon.0) 21432 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:41.785999+0000 osd.54 (osd.54) 51339 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:42.468143+0000 mgr.b (mgr.12834102) 26080 : cluster [DBG] pgmap v26650: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:42.741357+0000 osd.54 (osd.54) 51340 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:43.574095+0000 mon.k (mon.1) 18725 : audit [DBG] from='client.? 10.1.222.242:0/3337487079' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:43.695326+0000 osd.54 (osd.54) 51341 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:43.972698+0000 mon.k (mon.1) 18726 : audit [DBG] from='client.? 10.1.222.242:0/3718095771' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:44.468913+0000 mgr.b (mgr.12834102) 26081 : cluster [DBG] pgmap v26651: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:44.848491+0000 mon.l (mon.2) 15439 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:44.848776+0000 mon.l (mon.2) 15440 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:45.795+0000 7f7fb651d700 1 mon.j@0(leader).osd e20722 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T19:59:44.745384+0000 osd.54 (osd.54) 51342 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:45.729596+0000 osd.54 (osd.54) 51343 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T19:59:46.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:59:46.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:46.286505+0000 mon.k (mon.1) 18727 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:46.286687+0000 mon.k (mon.1) 18728 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:59:46.469848+0000 mgr.b (mgr.12834102) 26082 : cluster [DBG] pgmap v26652: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:46.482186+0000 mon.j (mon.0) 21433 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:46.482449+0000 mon.j (mon.0) 21434 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T19:59:46.576649+0000 mon.k (mon.1) 18729 : audit [DBG] from='client.? 10.1.207.132:0/257074170' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:46.702054+0000 osd.54 (osd.54) 51344 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:47.733030+0000 osd.54 (osd.54) 51345 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:47.984206+0000 mon.k (mon.1) 18730 : audit [DBG] from='client.? 10.1.222.242:0/3340367674' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
debug 2023-07-18T19:59:48.523+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"} v 0) v1
debug 2023-07-18T19:59:48.523+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:48.470634+0000 mgr.b (mgr.12834102) 26083 : cluster [DBG] pgmap v26653: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:48.526967+0000 mon.k (mon.1) 18731 : audit [INF] from='client.? 10.1.222.242:0/3123016245' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
audit 2023-07-18T19:59:48.527840+0000 mon.j (mon.0) 21435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
audit 2023-07-18T19:59:48.940470+0000 mon.k (mon.1) 18732 : audit [DBG] from='client.? 10.1.222.242:0/2024781038' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-nvme-replica-default", "var": "all", "format": "json"}]: dispatch
debug 2023-07-18T19:59:50.263+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T19:59:50.263+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/328719520' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:48.770522+0000 osd.54 (osd.54) 51346 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:49.659481+0000 mon.k (mon.1) 18733 : audit [DBG] from='client.? 10.1.222.242:0/2024632843' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-nvme-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T19:59:50.084670+0000 mon.l (mon.2) 15441 : audit [DBG] from='client.? 10.1.222.242:0/3497967000' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-nvme-replica-default", "var": "all", "format": "json"}]: dispatch
audit 2023-07-18T19:59:50.268253+0000 mon.j (mon.0) 21436 : audit [DBG] from='client.? 10.1.182.12:0/328719520' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T19:59:50.611+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"} v 0) v1
debug 2023-07-18T19:59:50.611+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T19:59:50.795+0000 7f7fb651d700 1 mon.j@0(leader).osd e20722 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T19:59:51.271+0000 7f7fb651d700 1 mon.j@0(leader).osd e20722 do_prune osdmap full prune enabled
debug 2023-07-18T19:59:51.279+0000 7f7fb1513700 1 mon.j@0(leader).osd e20723 e20723: 57 total, 3 up, 41 in
debug 2023-07-18T19:59:51.279+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]': finished
debug 2023-07-18T19:59:51.279+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20723: 57 total, 3 up, 41 in
cluster 2023-07-18T19:59:49.748965+0000 osd.54 (osd.54) 51347 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:50.471424+0000 mgr.b (mgr.12834102) 26084 : cluster [DBG] pgmap v26654: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:50.552421+0000 mon.l (mon.2) 15442 : audit [DBG] from='client.? 10.1.222.242:0/2539613809' entity='client.admin' cmd=[{"prefix": "osd crush rule dump", "name": "ceph-nvme-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T19:59:50.613249+0000 mon.k (mon.1) 18734 : audit [INF] from='client.? 10.1.222.242:0/1123325188' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]: dispatch
audit 2023-07-18T19:59:50.613991+0000 mon.j (mon.0) 21437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]: dispatch
cluster 2023-07-18T19:59:50.708067+0000 osd.54 (osd.54) 51348 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:51.284321+0000 mon.j (mon.0) 21438 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]': finished
cluster 2023-07-18T19:59:51.284362+0000 mon.j (mon.0) 21439 : cluster [DBG] osdmap e20723: 57 total, 3 up, 41 in
cluster 2023-07-18T19:59:51.669028+0000 osd.54 (osd.54) 51349 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:52.472206+0000 mgr.b (mgr.12834102) 26085 : cluster [DBG] pgmap v26656: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:52.707725+0000 osd.54 (osd.54) 51350 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:53.659802+0000 osd.54 (osd.54) 51351 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:54.472944+0000 mgr.b (mgr.12834102) 26086 : cluster [DBG] pgmap v26657: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T19:59:54.661393+0000 osd.54 (osd.54) 51352 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:54.875459+0000 mon.l (mon.2) 15443 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:54.875727+0000 mon.l (mon.2) 15444 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:55.799+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T19:59:55.664423+0000 osd.54 (osd.54) 51353 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:56.299407+0000 mon.k (mon.1) 18735 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:56.299680+0000 mon.k (mon.1) 18736 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:56.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T19:59:56.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T19:59:56.875+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T19:59:56.875+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:56.473706+0000 mgr.b (mgr.12834102) 26087 : cluster [DBG] pgmap v26658: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T19:59:56.494065+0000 mon.j (mon.0) 21440 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T19:59:56.494342+0000 mon.j (mon.0) 21441 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T19:59:56.685315+0000 osd.54 (osd.54) 51354 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T19:59:56.880914+0000 mon.j (mon.0) 21442 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T19:59:57.721272+0000 osd.54 (osd.54) 51355 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T19:59:58.474469+0000 mgr.b (mgr.12834102) 26088 : cluster [DBG] pgmap v26659: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T19:59:59.995+0000 7f7fb651d700 0 log_channel(cluster) log [WRN] : overall HEALTH_WARN 38 osds down; 12 hosts (54 osds) down; Reduced data availability: 196 pgs inactive, 9 pgs down, 1 pg incomplete; 2 pgs not deep-scrubbed in time; 69 daemons have recently crashed; 3 slow ops, oldest one blocked for 390 sec, osd.54 has slow ops
cluster 2023-07-18T19:59:58.760276+0000 osd.54 (osd.54) 51356 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:00.000133+0000 mon.j (mon.0) 21443 : cluster [WRN] overall HEALTH_WARN 38 osds down; 12 hosts (54 osds) down; Reduced data availability: 196 pgs inactive, 9 pgs down, 1 pg incomplete; 2 pgs not deep-scrubbed in time; 69 daemons have recently crashed; 3 slow ops, oldest one blocked for 390 sec, osd.54 has slow ops
debug 2023-07-18T20:00:00.803+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T19:59:59.747924+0000 osd.54 (osd.54) 51357 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:00.475413+0000 mgr.b (mgr.12834102) 26089 : cluster [DBG] pgmap v26660: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:00.744206+0000 osd.54 (osd.54) 51358 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:01.710217+0000 osd.54 (osd.54) 51359 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:02.476426+0000 mgr.b (mgr.12834102) 26090 : cluster [DBG] pgmap v26661: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:02.757615+0000 osd.54 (osd.54) 51360 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:03.777122+0000 osd.54 (osd.54) 51361 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:04.477411+0000 mgr.b (mgr.12834102) 26091 : cluster [DBG] pgmap v26662: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:04.734137+0000 osd.54 (osd.54) 51362 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:04.895900+0000 mon.l (mon.2) 15445 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:04.896168+0000 mon.l (mon.2) 15446 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:05.751+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:00:05.751+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/4037061659' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:00:05.803+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:00:05.693597+0000 osd.54 (osd.54) 51363 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:05.755577+0000 mon.j (mon.0) 21444 : audit [DBG] from='client.? 10.1.182.12:0/4037061659' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:00:06.292820+0000 mon.k (mon.1) 18737 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:06.293092+0000 mon.k (mon.1) 18738 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:06.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:00:06.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:06.478370+0000 mgr.b (mgr.12834102) 26092 : cluster [DBG] pgmap v26663: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:00:06.489053+0000 mon.j (mon.0) 21445 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:06.489251+0000 mon.j (mon.0) 21446 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:06.652509+0000 osd.54 (osd.54) 51364 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:06.995491+0000 mon.l (mon.2) 15447 : audit [DBG] from='client.? 10.1.207.132:0/3911936077' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:07.668429+0000 osd.54 (osd.54) 51365 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:08.083361+0000 mon.k (mon.1) 18739 : audit [DBG] from='client.? 10.1.222.242:0/3277377332' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:08.479308+0000 mgr.b (mgr.12834102) 26093 : cluster [DBG] pgmap v26664: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:00:08.484715+0000 mon.k (mon.1) 18740 : audit [DBG] from='client.? 10.1.222.242:0/3147389339' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:08.635357+0000 osd.54 (osd.54) 51366 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:09.635155+0000 osd.54 (osd.54) 51367 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:00:10.803+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:00:10.480267+0000 mgr.b (mgr.12834102) 26094 : cluster [DBG] pgmap v26665: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:10.671660+0000 osd.54 (osd.54) 51368 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:00:11.875+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:00:11.875+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:11.720865+0000 osd.54 (osd.54) 51369 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:11.879533+0000 mon.j (mon.0) 21447 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:12.481292+0000 mgr.b (mgr.12834102) 26095 : cluster [DBG] pgmap v26666: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:12.767967+0000 osd.54 (osd.54) 51370 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:13.721076+0000 osd.54 (osd.54) 51371 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:14.482291+0000 mgr.b (mgr.12834102) 26096 : cluster [DBG] pgmap v26667: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:00:14.484757+0000 mon.l (mon.2) 15448 : audit [DBG] from='client.? 10.1.222.242:0/1240276317' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:00:14.875415+0000 mon.l (mon.2) 15449 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:14.875684+0000 mon.l (mon.2) 15450 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:00:15.009806+0000 mon.k (mon.1) 18741 : audit [DBG] from='client.? 10.1.222.242:0/2510906376' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:00:15.802+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:00:14.768574+0000 osd.54 (osd.54) 51372 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:16.288384+0000 mon.k (mon.1) 18742 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:16.288655+0000 mon.k (mon.1) 18743 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:16.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:00:16.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:15.780990+0000 osd.54 (osd.54) 51373 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:16.475976+0000 mon.j (mon.0) 21448 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:16.476154+0000 mon.j (mon.0) 21449 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:16.483274+0000 mgr.b (mgr.12834102) 26097 : cluster [DBG] pgmap v26668: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:16.769551+0000 osd.54 (osd.54) 51374 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:17.753452+0000 osd.54 (osd.54) 51375 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:18.484269+0000 mgr.b (mgr.12834102) 26098 : cluster [DBG] pgmap v26669: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:18.710196+0000 osd.54 (osd.54) 51376 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:19.677456+0000 osd.54 (osd.54) 51377 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:00:20.566+0000 7f7fb0d12700 4 rocksdb: [db/db_impl/db_impl.cc:901] ------- DUMPING STATS -------
debug 2023-07-18T20:00:20.566+0000 7f7fb0d12700 4 rocksdb: [db/db_impl/db_impl.cc:903]
** DB Stats **
Uptime(secs): 51600.1 total, 600.0 interval
Cumulative writes: 187K writes, 1117K keys, 186K commit groups, 1.0 writes per commit group, ingest: 1.73 GB, 0.03 MB/s
Cumulative WAL: 187K writes, 186K syncs, 1.00 writes per sync, written: 1.73 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2159 writes, 11K keys, 2159 commit groups, 1.0 writes per commit group, ingest: 15.83 MB, 0.03 MB/s
Interval WAL: 2159 writes, 2159 syncs, 1.00 writes per sync, written: 0.02 MB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.4 1.4 0.0 1.0 0.0 195.4 7.58 5.32 667 0.011 0 0
L5 0/0 0.00 KB 0.0 1.1 0.4 0.7 1.1 0.4 0.5 2.6 311.2 309.7 3.69 1.74 9 0.410 149K 13K
L6 3/0 133.25 MB 0.0 181.6 1.4 180.2 180.0 -0.2 0.0 127.2 325.5 322.7 571.26 253.48 655 0.872 26M 677K
Sum 3/0 133.25 MB 0.0 182.7 1.9 180.9 182.6 1.7 0.5 126.2 321.2 320.9 582.53 260.54 1331 0.438 26M 691K
Int 0/0 0.00 KB 0.0 0.9 0.0 0.9 0.9 0.0 0.0 77.9 300.8 301.1 3.14 1.55 14 0.224 255K 6695
** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Low 0/0 0.00 KB 0.0 182.7 1.9 180.9 181.1 0.3 0.0 0.0 325.4 322.6 574.94 255.22 664 0.866 26M 691K
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.4 1.4 0.0 0.0 0.0 195.3 7.58 5.32 666 0.011 0 0
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 287.9 0.01 0.00 1 0.009 0 0
Uptime(secs): 51600.1 total, 600.0 interval
Flush(GB): cumulative 1.447, interval 0.012
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 182.58 GB write, 3.62 MB/s write, 182.72 GB read, 3.63 MB/s read, 582.5 seconds
Interval compaction: 0.92 GB write, 1.58 MB/s write, 0.92 GB read, 1.57 MB/s read, 3.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [default] **
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.4 1.4 0.0 1.0 0.0 195.4 7.58 5.32 667 0.011 0 0
L5 0/0 0.00 KB 0.0 1.1 0.4 0.7 1.1 0.4 0.5 2.6 311.2 309.7 3.69 1.74 9 0.410 149K 13K
L6 3/0 133.25 MB 0.0 181.6 1.4 180.2 180.0 -0.2 0.0 127.2 325.5 322.7 571.26 253.48 655 0.872 26M 677K
Sum 3/0 133.25 MB 0.0 182.7 1.9 180.9 182.6 1.7 0.5 126.2 321.2 320.9 582.53 260.54 1331 0.438 26M 691K
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Low 0/0 0.00 KB 0.0 182.7 1.9 180.9 181.1 0.3 0.0 0.0 325.4 322.6 574.94 255.22 664 0.866 26M 691K
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.4 1.4 0.0 0.0 0.0 195.3 7.58 5.32 666 0.011 0 0
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 287.9 0.01 0.00 1 0.009 0 0
Uptime(secs): 51600.1 total, 0.0 interval
Flush(GB): cumulative 1.447, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 182.58 GB write, 3.62 MB/s write, 182.72 GB read, 3.63 MB/s read, 582.5 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [default] **
debug 2023-07-18T20:00:20.806+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:00:21.238+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:00:21.238+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1502720677' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:20.485294+0000 mgr.b (mgr.12834102) 26099 : cluster [DBG] pgmap v26670: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:20.679415+0000 osd.54 (osd.54) 51378 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:21.241869+0000 mon.j (mon.0) 21450 : audit [DBG] from='client.? 10.1.182.12:0/1502720677' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:21.660846+0000 osd.54 (osd.54) 51379 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:22.486128+0000 mgr.b (mgr.12834102) 26100 : cluster [DBG] pgmap v26671: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:22.681047+0000 osd.54 (osd.54) 51380 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:23.636931+0000 osd.54 (osd.54) 51381 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:24.487069+0000 mgr.b (mgr.12834102) 26101 : cluster [DBG] pgmap v26672: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:24.644798+0000 osd.54 (osd.54) 51382 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:24.869355+0000 mon.l (mon.2) 15451 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:24.869622+0000 mon.l (mon.2) 15452 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:25.806+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:00:26.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:00:26.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:25.659651+0000 osd.54 (osd.54) 51383 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:26.290796+0000 mon.k (mon.1) 18744 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:26.291066+0000 mon.k (mon.1) 18745 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:00:26.482834+0000 mon.j (mon.0) 21451 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:26.483001+0000 mon.j (mon.0) 21452 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:26.874+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:00:26.874+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:26.488086+0000 mgr.b (mgr.12834102) 26102 : cluster [DBG] pgmap v26673: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:26.622645+0000 osd.54 (osd.54) 51384 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:26.879754+0000 mon.j (mon.0) 21453 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:00:27.494236+0000 mon.k (mon.1) 18746 : audit [DBG] from='client.? 10.1.207.132:0/212862231' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:27.640583+0000 osd.54 (osd.54) 51385 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:28.489059+0000 mgr.b (mgr.12834102) 26103 : cluster [DBG] pgmap v26674: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:28.677616+0000 osd.54 (osd.54) 51386 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:29.727794+0000 osd.54 (osd.54) 51387 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:00:30.810+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:00:30.490042+0000 mgr.b (mgr.12834102) 26104 : cluster [DBG] pgmap v26675: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:30.701075+0000 osd.54 (osd.54) 51388 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:31.735583+0000 osd.54 (osd.54) 51389 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:32.491005+0000 mgr.b (mgr.12834102) 26105 : cluster [DBG] pgmap v26676: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:32.762048+0000 osd.54 (osd.54) 51390 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:33.718875+0000 osd.54 (osd.54) 51391 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:34.491952+0000 mgr.b (mgr.12834102) 26106 : cluster [DBG] pgmap v26677: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:34.706677+0000 osd.54 (osd.54) 51392 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:34.920268+0000 mon.l (mon.2) 15453 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:34.920536+0000 mon.l (mon.2) 15454 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:35.810+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:00:36.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:00:36.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:35.702519+0000 osd.54 (osd.54) 51393 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:36.308261+0000 mon.k (mon.1) 18747 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:36.308532+0000 mon.k (mon.1) 18748 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:00:36.483776+0000 mon.j (mon.0) 21454 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:36.484054+0000 mon.j (mon.0) 21455 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:36.702+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:00:36.702+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/824636484' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:36.492934+0000 mgr.b (mgr.12834102) 26107 : cluster [DBG] pgmap v26678: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:00:36.706540+0000 mon.j (mon.0) 21456 : audit [DBG] from='client.? 10.1.182.12:0/824636484' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:36.709185+0000 osd.54 (osd.54) 51394 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:37.724092+0000 osd.54 (osd.54) 51395 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:38.493932+0000 mgr.b (mgr.12834102) 26108 : cluster [DBG] pgmap v26679: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:38.729440+0000 osd.54 (osd.54) 51396 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:39.026195+0000 mon.k (mon.1) 18749 : audit [DBG] from='client.? 10.1.222.242:0/1304744898' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:00:39.461766+0000 mon.k (mon.1) 18750 : audit [DBG] from='client.? 10.1.222.242:0/2287560846' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:39.703506+0000 osd.54 (osd.54) 51397 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:00:40.814+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:00:40.494850+0000 mgr.b (mgr.12834102) 26109 : cluster [DBG] pgmap v26680: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:40.718693+0000 osd.54 (osd.54) 51398 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:00:41.878+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:00:41.878+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:41.747231+0000 osd.54 (osd.54) 51399 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:41.883401+0000 mon.j (mon.0) 21457 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:42.495800+0000 mgr.b (mgr.12834102) 26110 : cluster [DBG] pgmap v26681: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:42.780468+0000 osd.54 (osd.54) 51400 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:43.825450+0000 osd.54 (osd.54) 51401 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:44.496741+0000 mgr.b (mgr.12834102) 26111 : cluster [DBG] pgmap v26682: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:00:44.868592+0000 mon.l (mon.2) 15455 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:44.868858+0000 mon.l (mon.2) 15456 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:00:45.552130+0000 mon.k (mon.1) 18751 : audit [DBG] from='client.? 10.1.222.242:0/401303892' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
debug 2023-07-18T20:00:45.814+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:00:46.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:00:46.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:44.872997+0000 osd.54 (osd.54) 51402 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:45.945192+0000 mon.k (mon.1) 18752 : audit [DBG] from='client.? 10.1.222.242:0/1695586547' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:00:46.294673+0000 mon.k (mon.1) 18753 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:46.294975+0000 mon.k (mon.1) 18754 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:00:46.492995+0000 mon.j (mon.0) 21458 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:46.493200+0000 mon.j (mon.0) 21459 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:45.914239+0000 osd.54 (osd.54) 51403 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:46.497693+0000 mgr.b (mgr.12834102) 26112 : cluster [DBG] pgmap v26683: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:46.867992+0000 osd.54 (osd.54) 51404 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:47.826923+0000 mon.k (mon.1) 18755 : audit [DBG] from='client.? 10.1.207.132:0/3938807591' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:47.879313+0000 osd.54 (osd.54) 51405 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:48.498635+0000 mgr.b (mgr.12834102) 26113 : cluster [DBG] pgmap v26684: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:48.903296+0000 osd.54 (osd.54) 51406 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:00:50.814+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:00:49.901107+0000 osd.54 (osd.54) 51407 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:50.499614+0000 mgr.b (mgr.12834102) 26114 : cluster [DBG] pgmap v26685: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:00:52.178+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:00:52.178+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3679764534' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:50.865272+0000 osd.54 (osd.54) 51408 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:52.182450+0000 mon.j (mon.0) 21460 : audit [DBG] from='client.? 10.1.182.12:0/3679764534' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:51.855871+0000 osd.54 (osd.54) 51409 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:52.500592+0000 mgr.b (mgr.12834102) 26115 : cluster [DBG] pgmap v26686: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:52.859571+0000 osd.54 (osd.54) 51410 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:53.900686+0000 osd.54 (osd.54) 51411 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:54.501532+0000 mgr.b (mgr.12834102) 26116 : cluster [DBG] pgmap v26687: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:00:54.895083+0000 mon.l (mon.2) 15457 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:54.895350+0000 mon.l (mon.2) 15458 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:55.818+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:00:56.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:00:56.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:00:54.889499+0000 osd.54 (osd.54) 51412 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:00:56.301182+0000 mon.k (mon.1) 18756 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:56.301361+0000 mon.k (mon.1) 18757 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:00:56.487989+0000 mon.j (mon.0) 21461 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:00:56.488170+0000 mon.j (mon.0) 21462 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:00:56.878+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:00:56.878+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:55.909875+0000 osd.54 (osd.54) 51413 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:56.502464+0000 mgr.b (mgr.12834102) 26117 : cluster [DBG] pgmap v26688: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:00:56.883705+0000 mon.j (mon.0) 21463 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:00:56.862452+0000 osd.54 (osd.54) 51414 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:57.866317+0000 osd.54 (osd.54) 51415 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:00:58.503420+0000 mgr.b (mgr.12834102) 26118 : cluster [DBG] pgmap v26689: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:00:58.867546+0000 osd.54 (osd.54) 51416 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:00.818+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:00:59.839757+0000 osd.54 (osd.54) 51417 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:00.504390+0000 mgr.b (mgr.12834102) 26119 : cluster [DBG] pgmap v26690: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:00.879806+0000 osd.54 (osd.54) 51418 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:01.927004+0000 osd.54 (osd.54) 51419 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:02.505341+0000 mgr.b (mgr.12834102) 26120 : cluster [DBG] pgmap v26691: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:02.958171+0000 osd.54 (osd.54) 51420 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:03.993626+0000 osd.54 (osd.54) 51421 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:04.506284+0000 mgr.b (mgr.12834102) 26121 : cluster [DBG] pgmap v26692: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:01:04.845726+0000 mon.l (mon.2) 15459 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:04.846012+0000 mon.l (mon.2) 15460 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:05.032690+0000 osd.54 (osd.54) 51422 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:05.822+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:01:06.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:01:06.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:06.074414+0000 osd.54 (osd.54) 51423 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:06.312546+0000 mon.k (mon.1) 18758 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:06.312748+0000 mon.k (mon.1) 18759 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:06.486043+0000 mon.j (mon.0) 21464 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:06.486229+0000 mon.j (mon.0) 21465 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:07.690+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:01:07.690+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1033427581' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:06.507667+0000 mgr.b (mgr.12834102) 26122 : cluster [DBG] pgmap v26693: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:07.081092+0000 osd.54 (osd.54) 51424 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:07.694582+0000 mon.j (mon.0) 21466 : audit [DBG] from='client.? 10.1.182.12:0/1033427581' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:08.078020+0000 osd.54 (osd.54) 51425 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:08.088239+0000 mon.k (mon.1) 18760 : audit [DBG] from='client.? 10.1.207.132:0/3040392085' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:08.508683+0000 mgr.b (mgr.12834102) 26123 : cluster [DBG] pgmap v26694: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:09.120761+0000 osd.54 (osd.54) 51426 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:10.822+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
audit 2023-07-18T20:01:09.994610+0000 mon.k (mon.1) 18761 : audit [DBG] from='client.? 10.1.222.242:0/986706244' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:10.099471+0000 osd.54 (osd.54) 51427 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:10.533424+0000 mon.k (mon.1) 18762 : audit [DBG] from='client.? 10.1.222.242:0/3597162887' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:10.509666+0000 mgr.b (mgr.12834102) 26124 : cluster [DBG] pgmap v26695: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:11.137051+0000 osd.54 (osd.54) 51428 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:11.878+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:01:11.878+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:01:11.883341+0000 mon.j (mon.0) 21467 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:12.163489+0000 osd.54 (osd.54) 51429 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:12.510682+0000 mgr.b (mgr.12834102) 26125 : cluster [DBG] pgmap v26696: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:13.120126+0000 osd.54 (osd.54) 51430 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:14.146462+0000 osd.54 (osd.54) 51431 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:14.860772+0000 mon.l (mon.2) 15461 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:14.861043+0000 mon.l (mon.2) 15462 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:15.822+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:14.511630+0000 mgr.b (mgr.12834102) 26126 : cluster [DBG] pgmap v26697: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:15.163792+0000 osd.54 (osd.54) 51432 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:16.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:01:16.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:16.136602+0000 osd.54 (osd.54) 51433 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:16.302290+0000 mon.k (mon.1) 18763 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:16.302556+0000 mon.k (mon.1) 18764 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:16.490142+0000 mon.j (mon.0) 21468 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:16.490409+0000 mon.j (mon.0) 21469 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:16.618898+0000 mon.k (mon.1) 18765 : audit [DBG] from='client.? 10.1.222.242:0/1838373105' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:16.512620+0000 mgr.b (mgr.12834102) 26127 : cluster [DBG] pgmap v26698: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:01:17.107579+0000 mon.k (mon.1) 18766 : audit [DBG] from='client.? 10.1.222.242:0/3136346264' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:17.119338+0000 osd.54 (osd.54) 51434 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:18.119455+0000 osd.54 (osd.54) 51435 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:18.513627+0000 mgr.b (mgr.12834102) 26128 : cluster [DBG] pgmap v26699: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:19.113171+0000 osd.54 (osd.54) 51436 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:20.826+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:20.140861+0000 osd.54 (osd.54) 51437 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:20.514604+0000 mgr.b (mgr.12834102) 26129 : cluster [DBG] pgmap v26700: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:21.141734+0000 osd.54 (osd.54) 51438 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:22.125829+0000 osd.54 (osd.54) 51439 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:23.162+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:01:23.162+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/695028895' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:22.515612+0000 mgr.b (mgr.12834102) 26130 : cluster [DBG] pgmap v26701: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:23.118556+0000 osd.54 (osd.54) 51440 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:23.166050+0000 mon.j (mon.0) 21470 : audit [DBG] from='client.? 10.1.182.12:0/695028895' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:24.103403+0000 osd.54 (osd.54) 51441 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:24.853659+0000 mon.l (mon.2) 15463 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:24.853957+0000 mon.l (mon.2) 15464 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:25.826+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:24.516649+0000 mgr.b (mgr.12834102) 26131 : cluster [DBG] pgmap v26702: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:25.060547+0000 osd.54 (osd.54) 51442 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:26.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:01:26.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:26.878+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:01:26.878+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:26.049606+0000 osd.54 (osd.54) 51443 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:26.295765+0000 mon.k (mon.1) 18767 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:26.296037+0000 mon.k (mon.1) 18768 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:26.489520+0000 mon.j (mon.0) 21471 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:26.489687+0000 mon.j (mon.0) 21472 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:26.883389+0000 mon.j (mon.0) 21473 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:26.517690+0000 mgr.b (mgr.12834102) 26132 : cluster [DBG] pgmap v26703: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:27.000428+0000 osd.54 (osd.54) 51444 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:27.982023+0000 osd.54 (osd.54) 51445 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:28.369768+0000 mon.l (mon.2) 15465 : audit [DBG] from='client.? 10.1.207.132:0/141039065' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:28.518702+0000 mgr.b (mgr.12834102) 26133 : cluster [DBG] pgmap v26704: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:28.934733+0000 osd.54 (osd.54) 51446 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:30.830+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:29.946612+0000 osd.54 (osd.54) 51447 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:30.519753+0000 mgr.b (mgr.12834102) 26134 : cluster [DBG] pgmap v26705: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:30.989145+0000 osd.54 (osd.54) 51448 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:31.965034+0000 osd.54 (osd.54) 51449 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:32.520774+0000 mgr.b (mgr.12834102) 26135 : cluster [DBG] pgmap v26706: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:32.929405+0000 osd.54 (osd.54) 51450 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:33.910054+0000 osd.54 (osd.54) 51451 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:34.872250+0000 mon.l (mon.2) 15466 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:34.872513+0000 mon.l (mon.2) 15467 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:35.833+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:34.521787+0000 mgr.b (mgr.12834102) 26136 : cluster [DBG] pgmap v26707: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:34.887718+0000 osd.54 (osd.54) 51452 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:36.489+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:01:36.489+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:35.890544+0000 osd.54 (osd.54) 51453 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:36.298723+0000 mon.k (mon.1) 18769 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:36.299020+0000 mon.k (mon.1) 18770 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:36.494382+0000 mon.j (mon.0) 21474 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:36.494644+0000 mon.j (mon.0) 21475 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:36.522777+0000 mgr.b (mgr.12834102) 26137 : cluster [DBG] pgmap v26708: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:36.917157+0000 osd.54 (osd.54) 51454 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:38.645+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:01:38.645+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/362545330' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:37.873139+0000 osd.54 (osd.54) 51455 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:38.648393+0000 mon.j (mon.0) 21476 : audit [DBG] from='client.? 10.1.182.12:0/362545330' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:38.523724+0000 mgr.b (mgr.12834102) 26138 : cluster [DBG] pgmap v26709: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:38.905543+0000 osd.54 (osd.54) 51456 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:40.829+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:39.901990+0000 osd.54 (osd.54) 51457 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:41.877+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:01:41.877+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:40.524730+0000 mgr.b (mgr.12834102) 26139 : cluster [DBG] pgmap v26710: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:40.925755+0000 osd.54 (osd.54) 51458 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:41.086980+0000 mon.l (mon.2) 15468 : audit [DBG] from='client.? 10.1.222.242:0/220892194' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:01:41.484174+0000 mon.k (mon.1) 18771 : audit [DBG] from='client.? 10.1.222.242:0/1272808850' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:01:41.883966+0000 mon.j (mon.0) 21477 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:41.891150+0000 osd.54 (osd.54) 51459 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:42.525568+0000 mgr.b (mgr.12834102) 26140 : cluster [DBG] pgmap v26711: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:42.908605+0000 osd.54 (osd.54) 51460 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:43.947824+0000 osd.54 (osd.54) 51461 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:44.861998+0000 mon.l (mon.2) 15469 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:44.862262+0000 mon.l (mon.2) 15470 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:45.109+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21694. Immutable memtables: 0.
debug 2023-07-18T20:01:45.109+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.114690) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:01:45.109+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1649] Flushing memtable with next log file: 21694
debug 2023-07-18T20:01:45.109+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505114790, "job": 1649, "event": "flush_started", "num_memtables": 1, "num_entries": 2721, "num_deletes": 627, "total_data_size": 4074171, "memory_usage": 4121784, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:01:45.109+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1649] Level-0 flush table #21695: started
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505130513, "cf_name": "default", "job": 1649, "event": "table_file_creation", "file_number": 21695, "file_size": 3243398, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 3232663, "index_size": 5966, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 37061, "raw_average_key_size": 24, "raw_value_size": 3206360, "raw_average_value_size": 2134, "num_data_blocks": 231, "num_entries": 1502, "num_deletions": 627, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710359, "oldest_key_time": 1689710359, "file_creation_time": 1689710505, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1649] Level-0 flush table #21695: 3243398 bytes OK
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.130743) [db/memtable_list.cc:449] [default] Level-0 commit table #21695 started
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.131041) [db/memtable_list.cc:628] [default] Level-0 commit table #21695: memtable #1 done
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.131053) EVENT_LOG_v1 {"time_micros": 1689710505131049, "job": 1649, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.131066) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1649] Try to delete WAL files size 4060905, prev total WAL file size 4060905, number of live WAL files 2.
debug 2023-07-18T20:01:45.125+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021689.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:45.125+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:45.125+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.131997) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323534363639' seq:72057594037927935, type:20 .. '7061786F730036323534393231' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:01:45.125+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1650] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:01:45.125+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1650 Base level 0, inputs: [21695(3167KB)], [21691(64MB) 21692(64MB) 21693(5048KB)]
debug 2023-07-18T20:01:45.125+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505132093, "job": 1650, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21695], "files_L6": [21691, 21692, 21693], "score": -1, "input_data_size": 142964105}
debug 2023-07-18T20:01:45.333+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1650] Generated table #21696: 21997 keys, 67334241 bytes
debug 2023-07-18T20:01:45.333+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505340371, "cf_name": "default", "job": 1650, "event": "table_file_creation", "file_number": 21696, "file_size": 67334241, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67218742, "index_size": 59403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 55109, "raw_key_size": 595385, "raw_average_key_size": 27, "raw_value_size": 66861367, "raw_average_value_size": 3039, "num_data_blocks": 2196, "num_entries": 21997, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710505, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:45.561+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1650] Generated table #21697: 13123 keys, 67307877 bytes
debug 2023-07-18T20:01:45.561+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505565825, "cf_name": "default", "job": 1650, "event": "table_file_creation", "file_number": 21697, "file_size": 67307877, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67178941, "index_size": 95113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32837, "raw_key_size": 290677, "raw_average_key_size": 22, "raw_value_size": 66899094, "raw_average_value_size": 5097, "num_data_blocks": 3530, "num_entries": 13123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710505, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:45.585+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1650] Generated table #21698: 565 keys, 6159226 bytes
debug 2023-07-18T20:01:45.585+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505590579, "cf_name": "default", "job": 1650, "event": "table_file_creation", "file_number": 21698, "file_size": 6159226, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 6149798, "index_size": 6969, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12105, "raw_average_key_size": 21, "raw_value_size": 6135255, "raw_average_value_size": 10858, "num_data_blocks": 268, "num_entries": 565, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710505, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:45.593+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1650] Compacted 1@0 + 3@6 files to L6 => 140801344 bytes
debug 2023-07-18T20:01:45.593+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.599904) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 311.8 rd, 307.1 wr, level 6, files in(1, 3) out(3) MB in(3.1, 133.2) out(134.3), read-write-amplify(87.5) write-amplify(43.4) OK, records in: 36957, records dropped: 1272 output_compression: NoCompression
debug 2023-07-18T20:01:45.593+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:01:45.599943) EVENT_LOG_v1 {"time_micros": 1689710505599924, "job": 1650, "event": "compaction_finished", "compaction_time_micros": 458525, "compaction_time_cpu_micros": 232768, "output_level": 6, "num_output_files": 3, "total_output_size": 140801344, "num_input_records": 36957, "num_output_records": 35685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:01:45.593+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021695.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:45.593+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505601037, "job": 1650, "event": "table_file_deletion", "file_number": 21695}
debug 2023-07-18T20:01:45.597+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021693.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:45.597+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505602599, "job": 1650, "event": "table_file_deletion", "file_number": 21693}
debug 2023-07-18T20:01:45.613+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021692.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:45.613+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505621400, "job": 1650, "event": "table_file_deletion", "file_number": 21692}
debug 2023-07-18T20:01:45.633+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021691.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:45.633+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710505639773, "job": 1650, "event": "table_file_deletion", "file_number": 21691}
debug 2023-07-18T20:01:45.633+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:45.633+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:45.633+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:45.633+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:45.633+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:45.833+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:44.526304+0000 mgr.b (mgr.12834102) 26141 : cluster [DBG] pgmap v26712: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:44.994162+0000 osd.54 (osd.54) 51462 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:46.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:01:46.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:46.021413+0000 osd.54 (osd.54) 51463 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:46.297902+0000 mon.k (mon.1) 18772 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:46.298224+0000 mon.k (mon.1) 18773 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:46.486582+0000 mon.j (mon.0) 21478 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:46.486848+0000 mon.j (mon.0) 21479 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:46.527049+0000 mgr.b (mgr.12834102) 26142 : cluster [DBG] pgmap v26713: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:47.021228+0000 osd.54 (osd.54) 51464 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:47.707379+0000 mon.l (mon.2) 15471 : audit [DBG] from='client.? 10.1.222.242:0/2121987935' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:01:48.096514+0000 mon.l (mon.2) 15472 : audit [DBG] from='client.? 10.1.222.242:0/1074438979' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:01:48.681+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:01:48.681+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/313509827' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:48.017034+0000 osd.54 (osd.54) 51465 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:48.688760+0000 mon.j (mon.0) 21480 : audit [DBG] from='client.? 10.1.207.132:0/313509827' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:48.527974+0000 mgr.b (mgr.12834102) 26143 : cluster [DBG] pgmap v26714: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:49.016132+0000 osd.54 (osd.54) 51466 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:50.837+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:49.982207+0000 osd.54 (osd.54) 51467 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:50.529000+0000 mgr.b (mgr.12834102) 26144 : cluster [DBG] pgmap v26715: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:51.026740+0000 osd.54 (osd.54) 51468 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:52.018786+0000 osd.54 (osd.54) 51469 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:54.133+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:01:54.133+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/997621946' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:52.529971+0000 mgr.b (mgr.12834102) 26145 : cluster [DBG] pgmap v26716: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:53.024669+0000 osd.54 (osd.54) 51470 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:54.137674+0000 mon.j (mon.0) 21481 : audit [DBG] from='client.? 10.1.182.12:0/997621946' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:54.006635+0000 osd.54 (osd.54) 51471 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:54.871497+0000 mon.l (mon.2) 15473 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:54.871770+0000 mon.l (mon.2) 15474 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:55.837+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:01:55.845+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21699. Immutable memtables: 0.
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:55.849655) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1651] Flushing memtable with next log file: 21699
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710515849705, "job": 1651, "event": "flush_started", "num_memtables": 1, "num_entries": 427, "num_deletes": 282, "total_data_size": 271014, "memory_usage": 278968, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1651] Level-0 flush table #21700: started
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710515851799, "cf_name": "default", "job": 1651, "event": "table_file_creation", "file_number": 21700, "file_size": 225287, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 222869, "index_size": 471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7597, "raw_average_key_size": 21, "raw_value_size": 217580, "raw_average_value_size": 616, "num_data_blocks": 19, "num_entries": 353, "num_deletions": 282, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710505, "oldest_key_time": 1689710505, "file_creation_time": 1689710515, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1651] Level-0 flush table #21700: 225287 bytes OK
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:55.851856) [db/memtable_list.cc:449] [default] Level-0 commit table #21700 started
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:55.852154) [db/memtable_list.cc:628] [default] Level-0 commit table #21700: memtable #1 done
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:55.852169) EVENT_LOG_v1 {"time_micros": 1689710515852164, "job": 1651, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:01:55.852179) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1651] Try to delete WAL files size 268199, prev total WAL file size 268199, number of live WAL files 2.
debug 2023-07-18T20:01:55.845+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021694.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:55.845+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:55.845+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:01:55.852503) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323638313330' seq:72057594037927935, type:20 .. '6C6F676D0033323638333833' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:01:55.845+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1652] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:01:55.845+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1652 Base level 0, inputs: [21700(220KB)], [21696(64MB) 21697(64MB) 21698(6014KB)]
debug 2023-07-18T20:01:55.845+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710515852542, "job": 1652, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21700], "files_L6": [21696, 21697, 21698], "score": -1, "input_data_size": 141026631}
debug 2023-07-18T20:01:56.057+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1652] Generated table #21701: 21753 keys, 67299934 bytes
debug 2023-07-18T20:01:56.057+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710516064862, "cf_name": "default", "job": 1652, "event": "table_file_creation", "file_number": 21701, "file_size": 67299934, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67185994, "index_size": 58484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54469, "raw_key_size": 590468, "raw_average_key_size": 27, "raw_value_size": 66832649, "raw_average_value_size": 3072, "num_data_blocks": 2161, "num_entries": 21753, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710515, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:56.269+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1652] Generated table #21702: 13126 keys, 67276421 bytes
debug 2023-07-18T20:01:56.269+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710516275941, "cf_name": "default", "job": 1652, "event": "table_file_creation", "file_number": 21702, "file_size": 67276421, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67147391, "index_size": 95207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32837, "raw_key_size": 290769, "raw_average_key_size": 22, "raw_value_size": 66867403, "raw_average_value_size": 5094, "num_data_blocks": 3533, "num_entries": 13126, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710516, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1652] Generated table #21703: 582 keys, 6259445 bytes
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710516294863, "cf_name": "default", "job": 1652, "event": "table_file_creation", "file_number": 21703, "file_size": 6259445, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 6249846, "index_size": 7140, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12458, "raw_average_key_size": 21, "raw_value_size": 6234906, "raw_average_value_size": 10712, "num_data_blocks": 275, "num_entries": 582, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710516, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1652] Compacted 1@0 + 3@6 files to L6 => 140835800 bytes
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:01:56.295836) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 318.8 rd, 318.4 wr, level 6, files in(1, 3) out(3) MB in(0.2, 134.3) out(134.3), read-write-amplify(1251.1) write-amplify(625.1) OK, records in: 36038, records dropped: 577 output_compression: NoCompression
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:01:56.295852) EVENT_LOG_v1 {"time_micros": 1689710516295845, "job": 1652, "event": "compaction_finished", "compaction_time_micros": 442344, "compaction_time_cpu_micros": 232795, "output_level": 6, "num_output_files": 3, "total_output_size": 140835800, "num_input_records": 36038, "num_output_records": 35461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021700.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710516296003, "job": 1652, "event": "table_file_deletion", "file_number": 21700}
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021698.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:56.289+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710516296769, "job": 1652, "event": "table_file_deletion", "file_number": 21698}
cluster 2023-07-18T20:01:54.530928+0000 mgr.b (mgr.12834102) 26146 : cluster [DBG] pgmap v26717: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:55.056278+0000 osd.54 (osd.54) 51472 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:01:56.301+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021697.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:56.301+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710516306614, "job": 1652, "event": "table_file_deletion", "file_number": 21697}
debug 2023-07-18T20:01:56.309+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021696.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:01:56.309+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710516316790, "job": 1652, "event": "table_file_deletion", "file_number": 21696}
debug 2023-07-18T20:01:56.309+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:56.309+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:56.309+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:56.309+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:56.309+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:01:56.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:01:56.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:01:56.877+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:01:56.877+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:56.034155+0000 osd.54 (osd.54) 51473 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:01:56.304300+0000 mon.k (mon.1) 18774 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:56.304582+0000 mon.k (mon.1) 18775 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:01:56.484364+0000 mon.j (mon.0) 21482 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:01:56.484639+0000 mon.j (mon.0) 21483 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:01:56.531973+0000 mgr.b (mgr.12834102) 26147 : cluster [DBG] pgmap v26718: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:01:56.883784+0000 mon.j (mon.0) 21484 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:01:56.992510+0000 osd.54 (osd.54) 51474 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:57.966697+0000 osd.54 (osd.54) 51475 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:01:58.532936+0000 mgr.b (mgr.12834102) 26148 : cluster [DBG] pgmap v26719: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:01:58.944841+0000 osd.54 (osd.54) 51476 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:02:00.841+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:01:59.978387+0000 osd.54 (osd.54) 51477 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:00.533919+0000 mgr.b (mgr.12834102) 26149 : cluster [DBG] pgmap v26720: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:01.022375+0000 osd.54 (osd.54) 51478 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:02.004677+0000 osd.54 (osd.54) 51479 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:02.534903+0000 mgr.b (mgr.12834102) 26150 : cluster [DBG] pgmap v26721: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:03.035892+0000 osd.54 (osd.54) 51480 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:04.062994+0000 osd.54 (osd.54) 51481 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:04.535881+0000 mgr.b (mgr.12834102) 26151 : cluster [DBG] pgmap v26722: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:04.869812+0000 mon.l (mon.2) 15475 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:04.870119+0000 mon.l (mon.2) 15476 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:05.845+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:02:05.055939+0000 osd.54 (osd.54) 51482 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:06.293548+0000 mon.k (mon.1) 18776 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:06.293842+0000 mon.k (mon.1) 18777 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:06.469+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:02:06.469+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:06.068489+0000 osd.54 (osd.54) 51483 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:06.475321+0000 mon.j (mon.0) 21485 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:06.475583+0000 mon.j (mon.0) 21486 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:06.536858+0000 mgr.b (mgr.12834102) 26152 : cluster [DBG] pgmap v26723: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:07.064085+0000 osd.54 (osd.54) 51484 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:08.077861+0000 osd.54 (osd.54) 51485 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:08.537820+0000 mgr.b (mgr.12834102) 26153 : cluster [DBG] pgmap v26724: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:08.808737+0000 mon.k (mon.1) 18778 : audit [DBG] from='client.? 10.1.207.132:0/2189195496' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:02:09.621+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:02:09.621+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2968192642' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:09.084978+0000 osd.54 (osd.54) 51486 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:09.622734+0000 mon.j (mon.0) 21487 : audit [DBG] from='client.? 10.1.182.12:0/2968192642' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:02:10.845+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:02:10.047402+0000 osd.54 (osd.54) 51487 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:10.538768+0000 mgr.b (mgr.12834102) 26154 : cluster [DBG] pgmap v26725: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:02:11.877+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:02:11.877+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:11.027119+0000 osd.54 (osd.54) 51488 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:11.883999+0000 mon.j (mon.0) 21488 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:02:12.061728+0000 mon.l (mon.2) 15477 : audit [DBG] from='client.? 10.1.222.242:0/1742880463' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:12.020406+0000 osd.54 (osd.54) 51489 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:12.515139+0000 mon.k (mon.1) 18779 : audit [DBG] from='client.? 10.1.222.242:0/3343063052' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:12.539710+0000 mgr.b (mgr.12834102) 26155 : cluster [DBG] pgmap v26726: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:13.004007+0000 osd.54 (osd.54) 51490 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:13.969745+0000 osd.54 (osd.54) 51491 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:14.540680+0000 mgr.b (mgr.12834102) 26156 : cluster [DBG] pgmap v26727: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:14.836917+0000 mon.l (mon.2) 15478 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:14.837223+0000 mon.l (mon.2) 15479 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:15.845+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:02:15.016897+0000 osd.54 (osd.54) 51492 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:16.301403+0000 mon.k (mon.1) 18780 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:16.301678+0000 mon.k (mon.1) 18781 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:16.469+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:02:16.473+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:16.026778+0000 osd.54 (osd.54) 51493 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:16.477418+0000 mon.j (mon.0) 21489 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:16.477575+0000 mon.j (mon.0) 21490 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:16.541774+0000 mgr.b (mgr.12834102) 26157 : cluster [DBG] pgmap v26728: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:17.069658+0000 osd.54 (osd.54) 51494 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:18.053302+0000 osd.54 (osd.54) 51495 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:18.542714+0000 mgr.b (mgr.12834102) 26158 : cluster [DBG] pgmap v26729: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:18.664176+0000 mon.k (mon.1) 18782 : audit [DBG] from='client.? 10.1.222.242:0/4167191060' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:02:19.090755+0000 mon.k (mon.1) 18783 : audit [DBG] from='client.? 10.1.222.242:0/3586846064' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:19.045208+0000 osd.54 (osd.54) 51496 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:02:20.853+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:02:20.048441+0000 osd.54 (osd.54) 51497 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:20.543684+0000 mgr.b (mgr.12834102) 26159 : cluster [DBG] pgmap v26730: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:21.022683+0000 osd.54 (osd.54) 51498 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:22.058561+0000 osd.54 (osd.54) 51499 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:22.544592+0000 mgr.b (mgr.12834102) 26160 : cluster [DBG] pgmap v26731: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:23.087142+0000 osd.54 (osd.54) 51500 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:02:25.105+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:02:25.105+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2657592413' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:24.125548+0000 osd.54 (osd.54) 51501 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:24.545529+0000 mgr.b (mgr.12834102) 26161 : cluster [DBG] pgmap v26732: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:24.894650+0000 mon.l (mon.2) 15480 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:24.894940+0000 mon.l (mon.2) 15481 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:02:25.105500+0000 mon.j (mon.0) 21491 : audit [DBG] from='client.? 10.1.182.12:0/2657592413' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:02:25.853+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:02:26.485+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:02:26.485+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:25.115062+0000 osd.54 (osd.54) 51502 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:26.296090+0000 mon.k (mon.1) 18784 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:26.296393+0000 mon.k (mon.1) 18785 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:02:26.486433+0000 mon.j (mon.0) 21492 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:26.486703+0000 mon.j (mon.0) 21493 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:26.881+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:02:26.881+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:26.118559+0000 osd.54 (osd.54) 51503 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:26.546483+0000 mgr.b (mgr.12834102) 26162 : cluster [DBG] pgmap v26733: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:26.883246+0000 mon.j (mon.0) 21494 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:27.074559+0000 osd.54 (osd.54) 51504 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:28.111768+0000 osd.54 (osd.54) 51505 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:28.547431+0000 mgr.b (mgr.12834102) 26163 : cluster [DBG] pgmap v26734: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:28.901392+0000 mon.l (mon.2) 15482 : audit [DBG] from='client.? 10.1.207.132:0/3322750090' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:29.098636+0000 osd.54 (osd.54) 51506 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:02:30.857+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:02:30.129367+0000 osd.54 (osd.54) 51507 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:30.548433+0000 mgr.b (mgr.12834102) 26164 : cluster [DBG] pgmap v26735: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:31.146045+0000 osd.54 (osd.54) 51508 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:32.190479+0000 osd.54 (osd.54) 51509 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:32.549232+0000 mgr.b (mgr.12834102) 26165 : cluster [DBG] pgmap v26736: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:33.156504+0000 osd.54 (osd.54) 51510 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:34.136968+0000 osd.54 (osd.54) 51511 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:34.550170+0000 mgr.b (mgr.12834102) 26166 : cluster [DBG] pgmap v26737: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:34.892692+0000 mon.l (mon.2) 15483 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:34.892968+0000 mon.l (mon.2) 15484 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:35.857+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:02:36.501+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:02:36.501+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:35.131198+0000 osd.54 (osd.54) 51512 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:36.289866+0000 mon.k (mon.1) 18786 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:36.290051+0000 mon.k (mon.1) 18787 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:02:36.504435+0000 mon.j (mon.0) 21495 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:36.504565+0000 mon.j (mon.0) 21496 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:36.164356+0000 osd.54 (osd.54) 51513 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:36.551118+0000 mgr.b (mgr.12834102) 26167 : cluster [DBG] pgmap v26738: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:37.159565+0000 osd.54 (osd.54) 51514 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:38.171500+0000 osd.54 (osd.54) 51515 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:38.552119+0000 mgr.b (mgr.12834102) 26168 : cluster [DBG] pgmap v26739: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:02:40.621+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:02:40.621+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/529909208' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:39.175594+0000 osd.54 (osd.54) 51516 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:40.623972+0000 mon.j (mon.0) 21497 : audit [DBG] from='client.? 10.1.182.12:0/529909208' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:02:40.857+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:02:40.186514+0000 osd.54 (osd.54) 51517 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:40.553122+0000 mgr.b (mgr.12834102) 26169 : cluster [DBG] pgmap v26740: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:02:41.881+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:02:41.881+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:41.160496+0000 osd.54 (osd.54) 51518 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:41.883185+0000 mon.j (mon.0) 21498 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:42.178011+0000 osd.54 (osd.54) 51519 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:42.553828+0000 mgr.b (mgr.12834102) 26170 : cluster [DBG] pgmap v26741: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:43.090528+0000 mon.l (mon.2) 15485 : audit [DBG] from='client.? 10.1.222.242:0/2054335765' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:02:43.477960+0000 mon.l (mon.2) 15486 : audit [DBG] from='client.? 10.1.222.242:0/3029541121' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:43.154073+0000 osd.54 (osd.54) 51520 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:44.139570+0000 osd.54 (osd.54) 51521 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:44.554771+0000 mgr.b (mgr.12834102) 26171 : cluster [DBG] pgmap v26742: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:44.854156+0000 mon.l (mon.2) 15487 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:44.854431+0000 mon.l (mon.2) 15488 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:45.861+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:02:46.489+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:02:46.489+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:45.166189+0000 osd.54 (osd.54) 51522 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:46.297089+0000 mon.k (mon.1) 18788 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:46.297428+0000 mon.k (mon.1) 18789 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:02:46.492919+0000 mon.j (mon.0) 21499 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:46.493182+0000 mon.j (mon.0) 21500 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:46.161822+0000 osd.54 (osd.54) 51523 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:46.555706+0000 mgr.b (mgr.12834102) 26172 : cluster [DBG] pgmap v26743: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:47.137747+0000 osd.54 (osd.54) 51524 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:48.101571+0000 osd.54 (osd.54) 51525 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:48.556643+0000 mgr.b (mgr.12834102) 26173 : cluster [DBG] pgmap v26744: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:49.181043+0000 mon.l (mon.2) 15489 : audit [DBG] from='client.? 10.1.207.132:0/418657855' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:02:49.638050+0000 mon.k (mon.1) 18790 : audit [DBG] from='client.? 10.1.222.242:0/2145367265' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:49.087842+0000 osd.54 (osd.54) 51526 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:50.028600+0000 mon.l (mon.2) 15490 : audit [DBG] from='client.? 10.1.222.242:0/3436515705' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:02:50.861+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:02:50.066334+0000 osd.54 (osd.54) 51527 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:50.557640+0000 mgr.b (mgr.12834102) 26174 : cluster [DBG] pgmap v26745: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:51.091897+0000 osd.54 (osd.54) 51528 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:52.086301+0000 osd.54 (osd.54) 51529 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:52.558604+0000 mgr.b (mgr.12834102) 26175 : cluster [DBG] pgmap v26746: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:53.109254+0000 osd.54 (osd.54) 51530 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:54.112806+0000 osd.54 (osd.54) 51531 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:54.559565+0000 mgr.b (mgr.12834102) 26176 : cluster [DBG] pgmap v26747: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:54.888371+0000 mon.l (mon.2) 15491 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:54.888636+0000 mon.l (mon.2) 15492 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:55.864+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:02:56.104+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:02:56.104+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3096364322' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:02:56.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:02:56.488+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:02:55.096327+0000 osd.54 (osd.54) 51532 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:02:56.105580+0000 mon.j (mon.0) 21501 : audit [DBG] from='client.? 10.1.182.12:0/3096364322' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:02:56.284956+0000 mon.k (mon.1) 18791 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:56.285155+0000 mon.k (mon.1) 18792 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:02:56.489268+0000 mon.j (mon.0) 21502 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:02:56.489547+0000 mon.j (mon.0) 21503 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:02:56.880+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:02:56.880+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:56.141700+0000 osd.54 (osd.54) 51533 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:56.560733+0000 mgr.b (mgr.12834102) 26177 : cluster [DBG] pgmap v26748: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:02:56.883767+0000 mon.j (mon.0) 21504 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:02:57.187549+0000 osd.54 (osd.54) 51534 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:58.174721+0000 osd.54 (osd.54) 51535 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:02:58.561822+0000 mgr.b (mgr.12834102) 26178 : cluster [DBG] pgmap v26749: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:02:59.173377+0000 osd.54 (osd.54) 51536 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:00.864+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:00.196497+0000 osd.54 (osd.54) 51537 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:00.562803+0000 mgr.b (mgr.12834102) 26179 : cluster [DBG] pgmap v26750: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:01.187492+0000 osd.54 (osd.54) 51538 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:02.178562+0000 osd.54 (osd.54) 51539 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:02.563776+0000 mgr.b (mgr.12834102) 26180 : cluster [DBG] pgmap v26751: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:03.204478+0000 osd.54 (osd.54) 51540 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:04.239986+0000 osd.54 (osd.54) 51541 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:04.564763+0000 mgr.b (mgr.12834102) 26181 : cluster [DBG] pgmap v26752: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:03:04.868911+0000 mon.l (mon.2) 15493 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:04.869200+0000 mon.l (mon.2) 15494 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:05.194294+0000 osd.54 (osd.54) 51542 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:05.868+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:03:06.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:03:06.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:06.226773+0000 osd.54 (osd.54) 51543 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:06.290755+0000 mon.k (mon.1) 18793 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:06.291026+0000 mon.k (mon.1) 18794 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:06.487895+0000 mon.j (mon.0) 21505 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:06.488166+0000 mon.j (mon.0) 21506 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:06.565761+0000 mgr.b (mgr.12834102) 26182 : cluster [DBG] pgmap v26753: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:07.239398+0000 osd.54 (osd.54) 51544 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:08.267082+0000 osd.54 (osd.54) 51545 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:08.566717+0000 mgr.b (mgr.12834102) 26183 : cluster [DBG] pgmap v26754: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:09.254025+0000 osd.54 (osd.54) 51546 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:09.344063+0000 mon.l (mon.2) 15495 : audit [DBG] from='client.? 10.1.207.132:0/4162515499' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:03:10.868+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:10.294418+0000 osd.54 (osd.54) 51547 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:11.584+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:03:11.584+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3513304011' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:03:11.880+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:03:11.880+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:10.567696+0000 mgr.b (mgr.12834102) 26184 : cluster [DBG] pgmap v26755: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:11.254615+0000 osd.54 (osd.54) 51548 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:11.587287+0000 mon.j (mon.0) 21507 : audit [DBG] from='client.? 10.1.182.12:0/3513304011' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:03:11.883698+0000 mon.j (mon.0) 21508 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:12.219186+0000 osd.54 (osd.54) 51549 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:12.568693+0000 mgr.b (mgr.12834102) 26185 : cluster [DBG] pgmap v26756: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:13.183376+0000 osd.54 (osd.54) 51550 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:14.007288+0000 mon.k (mon.1) 18795 : audit [DBG] from='client.? 10.1.222.242:0/1807983593' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:14.231625+0000 osd.54 (osd.54) 51551 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:14.409165+0000 mon.k (mon.1) 18796 : audit [DBG] from='client.? 10.1.222.242:0/82014279' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:03:14.869085+0000 mon.l (mon.2) 15496 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:14.869365+0000 mon.l (mon.2) 15497 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:03:15.868+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:14.569668+0000 mgr.b (mgr.12834102) 26186 : cluster [DBG] pgmap v26757: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:15.236986+0000 osd.54 (osd.54) 51552 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:16.488+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:03:16.488+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:16.199955+0000 osd.54 (osd.54) 51553 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:16.293625+0000 mon.k (mon.1) 18797 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:16.293911+0000 mon.k (mon.1) 18798 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:16.491960+0000 mon.j (mon.0) 21509 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:16.492226+0000 mon.j (mon.0) 21510 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:16.570642+0000 mgr.b (mgr.12834102) 26187 : cluster [DBG] pgmap v26758: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:17.238821+0000 osd.54 (osd.54) 51554 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:18.213682+0000 osd.54 (osd.54) 51555 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:18.571588+0000 mgr.b (mgr.12834102) 26188 : cluster [DBG] pgmap v26759: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:19.210027+0000 osd.54 (osd.54) 51556 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:20.872+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:20.175646+0000 osd.54 (osd.54) 51557 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:20.567940+0000 mon.l (mon.2) 15498 : audit [DBG] from='client.? 10.1.222.242:0/399214525' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:03:20.962114+0000 mon.k (mon.1) 18799 : audit [DBG] from='client.? 10.1.222.242:0/1987802501' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:20.572618+0000 mgr.b (mgr.12834102) 26189 : cluster [DBG] pgmap v26760: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:21.167123+0000 osd.54 (osd.54) 51558 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:22.165103+0000 osd.54 (osd.54) 51559 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:22.573605+0000 mgr.b (mgr.12834102) 26190 : cluster [DBG] pgmap v26761: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:23.196170+0000 osd.54 (osd.54) 51560 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:24.159478+0000 osd.54 (osd.54) 51561 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:24.852529+0000 mon.l (mon.2) 15499 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:24.852736+0000 mon.l (mon.2) 15500 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:03:25.872+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:24.574565+0000 mgr.b (mgr.12834102) 26191 : cluster [DBG] pgmap v26762: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:25.127055+0000 osd.54 (osd.54) 51562 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:26.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:03:26.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:03:26.880+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:03:26.880+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:26.170963+0000 osd.54 (osd.54) 51563 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:26.310132+0000 mon.k (mon.1) 18800 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:26.310437+0000 mon.k (mon.1) 18801 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:26.477141+0000 mon.j (mon.0) 21511 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:26.477425+0000 mon.j (mon.0) 21512 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:26.883901+0000 mon.j (mon.0) 21513 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:03:27.048+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:03:27.048+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2092866902' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:26.575524+0000 mgr.b (mgr.12834102) 26192 : cluster [DBG] pgmap v26763: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:03:27.051365+0000 mon.j (mon.0) 21514 : audit [DBG] from='client.? 10.1.182.12:0/2092866902' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:27.165021+0000 osd.54 (osd.54) 51564 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:28.128293+0000 osd.54 (osd.54) 51565 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:28.576521+0000 mgr.b (mgr.12834102) 26193 : cluster [DBG] pgmap v26764: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:29.100460+0000 osd.54 (osd.54) 51566 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:29.531832+0000 mon.k (mon.1) 18802 : audit [DBG] from='client.? 10.1.207.132:0/2027265426' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:03:30.876+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:30.075823+0000 osd.54 (osd.54) 51567 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:30.577500+0000 mgr.b (mgr.12834102) 26194 : cluster [DBG] pgmap v26765: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:31.090785+0000 osd.54 (osd.54) 51568 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:32.048273+0000 osd.54 (osd.54) 51569 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:32.578468+0000 mgr.b (mgr.12834102) 26195 : cluster [DBG] pgmap v26766: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:33.053759+0000 osd.54 (osd.54) 51570 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:34.089231+0000 osd.54 (osd.54) 51571 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:34.864015+0000 mon.l (mon.2) 15501 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:34.864289+0000 mon.l (mon.2) 15502 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:03:35.876+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:34.579459+0000 mgr.b (mgr.12834102) 26196 : cluster [DBG] pgmap v26767: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:35.104788+0000 osd.54 (osd.54) 51572 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:36.500+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:03:36.500+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:36.129871+0000 osd.54 (osd.54) 51573 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:36.293997+0000 mon.k (mon.1) 18803 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:36.294296+0000 mon.k (mon.1) 18804 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:36.502513+0000 mon.j (mon.0) 21515 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:36.502778+0000 mon.j (mon.0) 21516 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:36.580444+0000 mgr.b (mgr.12834102) 26197 : cluster [DBG] pgmap v26768: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:37.117271+0000 osd.54 (osd.54) 51574 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:38.162968+0000 osd.54 (osd.54) 51575 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:38.581381+0000 mgr.b (mgr.12834102) 26198 : cluster [DBG] pgmap v26769: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:39.181285+0000 osd.54 (osd.54) 51576 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:40.876+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:40.163570+0000 osd.54 (osd.54) 51577 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:41.880+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:03:41.880+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:40.582368+0000 mgr.b (mgr.12834102) 26199 : cluster [DBG] pgmap v26770: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:41.182595+0000 osd.54 (osd.54) 51578 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:41.884290+0000 mon.j (mon.0) 21517 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:03:42.556+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:03:42.556+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2105215020' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:42.197459+0000 osd.54 (osd.54) 51579 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:42.561136+0000 mon.j (mon.0) 21518 : audit [DBG] from='client.? 10.1.182.12:0/2105215020' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:42.583372+0000 mgr.b (mgr.12834102) 26200 : cluster [DBG] pgmap v26771: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:43.246235+0000 osd.54 (osd.54) 51580 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:44.280707+0000 osd.54 (osd.54) 51581 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:44.870037+0000 mon.l (mon.2) 15503 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:44.870308+0000 mon.l (mon.2) 15504 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:45.072295+0000 mon.k (mon.1) 18805 : audit [DBG] from='client.? 10.1.222.242:0/1192878864' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
debug 2023-07-18T20:03:45.880+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:03:45.884+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21704. Immutable memtables: 0.
debug 2023-07-18T20:03:45.884+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:03:45.889192) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:03:45.884+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1653] Flushing memtable with next log file: 21704
debug 2023-07-18T20:03:45.884+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710625889242, "job": 1653, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 513, "total_data_size": 2827011, "memory_usage": 2866032, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:03:45.884+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1653] Level-0 flush table #21705: started
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710625898118, "cf_name": "default", "job": 1653, "event": "table_file_creation", "file_number": 21705, "file_size": 1638966, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 1632382, "index_size": 2844, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 27290, "raw_average_key_size": 24, "raw_value_size": 1614669, "raw_average_value_size": 1474, "num_data_blocks": 112, "num_entries": 1095, "num_deletions": 513, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710515, "oldest_key_time": 1689710515, "file_creation_time": 1689710625, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1653] Level-0 flush table #21705: 1638966 bytes OK
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:03:45.898370) [db/memtable_list.cc:449] [default] Level-0 commit table #21705 started
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:03:45.898663) [db/memtable_list.cc:628] [default] Level-0 commit table #21705: memtable #1 done
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:03:45.898676) EVENT_LOG_v1 {"time_micros": 1689710625898671, "job": 1653, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:03:45.898686) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1653] Try to delete WAL files size 2816935, prev total WAL file size 2817511, number of live WAL files 2.
debug 2023-07-18T20:03:45.896+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021699.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:03:45.896+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:03:45.896+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:03:45.899366) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303836373439' seq:72057594037927935, type:20 .. '6D6772737461740032303837303030' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:03:45.896+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1654] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:03:45.896+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1654 Base level 0, inputs: [21705(1600KB)], [21701(64MB) 21702(64MB) 21703(6112KB)]
debug 2023-07-18T20:03:45.896+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710625899417, "job": 1654, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21705], "files_L6": [21701, 21702, 21703], "score": -1, "input_data_size": 142474766}
debug 2023-07-18T20:03:46.100+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1654] Generated table #21706: 21863 keys, 67283716 bytes
debug 2023-07-18T20:03:46.100+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710626105196, "cf_name": "default", "job": 1654, "event": "table_file_creation", "file_number": 21706, "file_size": 67283716, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67169111, "index_size": 58893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54725, "raw_key_size": 592693, "raw_average_key_size": 27, "raw_value_size": 66814027, "raw_average_value_size": 3056, "num_data_blocks": 2177, "num_entries": 21863, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710625, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
cluster 2023-07-18T20:03:44.584424+0000 mgr.b (mgr.12834102) 26201 : cluster [DBG] pgmap v26772: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:45.312625+0000 osd.54 (osd.54) 51582 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:45.474654+0000 mon.l (mon.2) 15505 : audit [DBG] from='client.? 10.1.222.242:0/3224358550' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:03:46.344+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1654] Generated table #21707: 13030 keys, 67257570 bytes
debug 2023-07-18T20:03:46.344+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710626347025, "cf_name": "default", "job": 1654, "event": "table_file_creation", "file_number": 21707, "file_size": 67257570, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67132100, "index_size": 91903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32581, "raw_key_size": 288565, "raw_average_key_size": 22, "raw_value_size": 66856495, "raw_average_value_size": 5130, "num_data_blocks": 3412, "num_entries": 13030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710626, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:03:46.360+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1654] Generated table #21708: 672 keys, 5479929 bytes
debug 2023-07-18T20:03:46.360+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710626364148, "cf_name": "default", "job": 1654, "event": "table_file_creation", "file_number": 21708, "file_size": 5479929, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 5470318, "index_size": 6896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14136, "raw_average_key_size": 21, "raw_value_size": 5454477, "raw_average_value_size": 8116, "num_data_blocks": 273, "num_entries": 672, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710626, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:03:46.360+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1654] Compacted 1@0 + 3@6 files to L6 => 140021215 bytes
debug 2023-07-18T20:03:46.364+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:03:46.365639) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 306.6 rd, 301.3 wr, level 6, files in(1, 3) out(3) MB in(1.6, 134.3) out(133.5), read-write-amplify(172.4) write-amplify(85.4) OK, records in: 36556, records dropped: 991 output_compression: NoCompression
debug 2023-07-18T20:03:46.364+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:03:46.365654) EVENT_LOG_v1 {"time_micros": 1689710626365648, "job": 1654, "event": "compaction_finished", "compaction_time_micros": 464746, "compaction_time_cpu_micros": 237770, "output_level": 6, "num_output_files": 3, "total_output_size": 140021215, "num_input_records": 36556, "num_output_records": 35565, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:03:46.364+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021705.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:03:46.364+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710626365987, "job": 1654, "event": "table_file_deletion", "file_number": 21705}
debug 2023-07-18T20:03:46.364+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021703.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:03:46.364+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710626366841, "job": 1654, "event": "table_file_deletion", "file_number": 21703}
debug 2023-07-18T20:03:46.372+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021702.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:03:46.372+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710626375835, "job": 1654, "event": "table_file_deletion", "file_number": 21702}
debug 2023-07-18T20:03:46.384+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021701.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:03:46.384+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710626385635, "job": 1654, "event": "table_file_deletion", "file_number": 21701}
debug 2023-07-18T20:03:46.384+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:03:46.384+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:03:46.384+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:03:46.384+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:03:46.384+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:03:46.480+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:03:46.480+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:46.316169+0000 mon.k (mon.1) 18806 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:46.316497+0000 mon.k (mon.1) 18807 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:46.361070+0000 osd.54 (osd.54) 51583 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:46.484706+0000 mon.j (mon.0) 21519 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:46.484971+0000 mon.j (mon.0) 21520 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:03:46.585198+0000 mgr.b (mgr.12834102) 26202 : cluster [DBG] pgmap v26773: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:47.377996+0000 osd.54 (osd.54) 51584 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:48.361565+0000 osd.54 (osd.54) 51585 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:48.586129+0000 mgr.b (mgr.12834102) 26203 : cluster [DBG] pgmap v26774: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:49.384360+0000 osd.54 (osd.54) 51586 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:49.883648+0000 mon.l (mon.2) 15506 : audit [DBG] from='client.? 10.1.207.132:0/2028762834' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:03:50.884+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:50.344402+0000 osd.54 (osd.54) 51587 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:50.587100+0000 mgr.b (mgr.12834102) 26204 : cluster [DBG] pgmap v26775: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:51.360781+0000 osd.54 (osd.54) 51588 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:51.493286+0000 mon.l (mon.2) 15507 : audit [DBG] from='client.? 10.1.222.242:0/3845426084' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:03:52.025148+0000 mon.l (mon.2) 15508 : audit [DBG] from='client.? 10.1.222.242:0/208348387' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:52.325886+0000 osd.54 (osd.54) 51589 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:52.588079+0000 mgr.b (mgr.12834102) 26205 : cluster [DBG] pgmap v26776: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:53.303951+0000 osd.54 (osd.54) 51590 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:54.335434+0000 osd.54 (osd.54) 51591 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:54.852554+0000 mon.l (mon.2) 15509 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:54.852859+0000 mon.l (mon.2) 15510 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:03:55.884+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:03:54.589066+0000 mgr.b (mgr.12834102) 26206 : cluster [DBG] pgmap v26777: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:55.329990+0000 osd.54 (osd.54) 51592 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:03:56.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:03:56.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:03:56.880+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:03:56.880+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:56.295330+0000 osd.54 (osd.54) 51593 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:56.306851+0000 mon.k (mon.1) 18808 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:56.307217+0000 mon.k (mon.1) 18809 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:56.487424+0000 mon.j (mon.0) 21521 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:03:56.487548+0000 mon.j (mon.0) 21522 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:03:56.883557+0000 mon.j (mon.0) 21523 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:03:58.040+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:03:58.040+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1045030696' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:56.590057+0000 mgr.b (mgr.12834102) 26207 : cluster [DBG] pgmap v26778: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:57.295164+0000 osd.54 (osd.54) 51594 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:03:58.043296+0000 mon.j (mon.0) 21524 : audit [DBG] from='client.? 10.1.182.12:0/1045030696' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:03:58.297042+0000 osd.54 (osd.54) 51595 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:03:58.591028+0000 mgr.b (mgr.12834102) 26208 : cluster [DBG] pgmap v26779: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:03:59.269373+0000 osd.54 (osd.54) 51596 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:04:00.888+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:00.230869+0000 osd.54 (osd.54) 51597 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:00.592010+0000 mgr.b (mgr.12834102) 26209 : cluster [DBG] pgmap v26780: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:01.188601+0000 osd.54 (osd.54) 51598 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:02.187553+0000 osd.54 (osd.54) 51599 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:02.592617+0000 mgr.b (mgr.12834102) 26210 : cluster [DBG] pgmap v26781: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:03.172647+0000 osd.54 (osd.54) 51600 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:04.175211+0000 osd.54 (osd.54) 51601 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:04.593176+0000 mgr.b (mgr.12834102) 26211 : cluster [DBG] pgmap v26782: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:04.889407+0000 mon.l (mon.2) 15511 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:04.889677+0000 mon.l (mon.2) 15512 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:05.888+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:05.195871+0000 osd.54 (osd.54) 51602 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:04:06.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:04:06.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:06.186813+0000 osd.54 (osd.54) 51603 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:06.303566+0000 mon.k (mon.1) 18810 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:06.303853+0000 mon.k (mon.1) 18811 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:04:06.485951+0000 mon.j (mon.0) 21525 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:06.486217+0000 mon.j (mon.0) 21526 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:06.594119+0000 mgr.b (mgr.12834102) 26212 : cluster [DBG] pgmap v26783: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:07.159529+0000 osd.54 (osd.54) 51604 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:08.168829+0000 osd.54 (osd.54) 51605 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:08.595080+0000 mgr.b (mgr.12834102) 26213 : cluster [DBG] pgmap v26784: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:09.137514+0000 osd.54 (osd.54) 51606 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:09.997517+0000 mon.k (mon.1) 18812 : audit [DBG] from='client.? 10.1.207.132:0/850588253' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:04:10.892+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:10.183311+0000 osd.54 (osd.54) 51607 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:10.596057+0000 mgr.b (mgr.12834102) 26214 : cluster [DBG] pgmap v26785: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:04:11.344+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21709. Immutable memtables: 0.
debug 2023-07-18T20:04:11.344+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.348467) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:04:11.344+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1655] Flushing memtable with next log file: 21709
debug 2023-07-18T20:04:11.344+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651348516, "job": 1655, "event": "flush_started", "num_memtables": 1, "num_entries": 692, "num_deletes": 316, "total_data_size": 681911, "memory_usage": 694104, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:04:11.344+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1655] Level-0 flush table #21710: started
debug 2023-07-18T20:04:11.348+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651352823, "cf_name": "default", "job": 1655, "event": "table_file_creation", "file_number": 21710, "file_size": 545452, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 542190, "index_size": 1059, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 11049, "raw_average_key_size": 23, "raw_value_size": 534570, "raw_average_value_size": 1116, "num_data_blocks": 42, "num_entries": 479, "num_deletions": 316, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710625, "oldest_key_time": 1689710625, "file_creation_time": 1689710651, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:04:11.348+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1655] Level-0 flush table #21710: 545452 bytes OK
debug 2023-07-18T20:04:11.352+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.353072) [db/memtable_list.cc:449] [default] Level-0 commit table #21710 started
debug 2023-07-18T20:04:11.352+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.353369) [db/memtable_list.cc:628] [default] Level-0 commit table #21710: memtable #1 done
debug 2023-07-18T20:04:11.352+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.353394) EVENT_LOG_v1 {"time_micros": 1689710651353386, "job": 1655, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:04:11.352+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.353430) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:04:11.352+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1655] Try to delete WAL files size 677915, prev total WAL file size 677915, number of live WAL files 2.
debug 2023-07-18T20:04:11.352+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021704.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:04:11.352+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:04:11.352+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.354007) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323534393230' seq:72057594037927935, type:20 .. '7061786F730036323535313732' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:04:11.352+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1656] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:04:11.352+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1656 Base level 0, inputs: [21710(532KB)], [21706(64MB) 21707(64MB) 21708(5351KB)]
debug 2023-07-18T20:04:11.352+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651354059, "job": 1656, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21710], "files_L6": [21706, 21707, 21708], "score": -1, "input_data_size": 140566667}
debug 2023-07-18T20:04:11.564+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1656] Generated table #21711: 21888 keys, 67300524 bytes
debug 2023-07-18T20:04:11.564+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651567550, "cf_name": "default", "job": 1656, "event": "table_file_creation", "file_number": 21711, "file_size": 67300524, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67185779, "index_size": 59033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54725, "raw_key_size": 593186, "raw_average_key_size": 27, "raw_value_size": 66829918, "raw_average_value_size": 3053, "num_data_blocks": 2181, "num_entries": 21888, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710651, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:04:11.799+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1656] Generated table #21712: 13022 keys, 67252390 bytes
debug 2023-07-18T20:04:11.799+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651804542, "cf_name": "default", "job": 1656, "event": "table_file_creation", "file_number": 21712, "file_size": 67252390, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67126819, "index_size": 92004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32581, "raw_key_size": 288423, "raw_average_key_size": 22, "raw_value_size": 66851224, "raw_average_value_size": 5133, "num_data_blocks": 3415, "num_entries": 13022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710651, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:04:11.811+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1656] Generated table #21713: 488 keys, 3782746 bytes
debug 2023-07-18T20:04:11.811+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651816946, "cf_name": "default", "job": 1656, "event": "table_file_creation", "file_number": 21713, "file_size": 3782746, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 3775619, "index_size": 4796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10272, "raw_average_key_size": 21, "raw_value_size": 3764288, "raw_average_value_size": 7713, "num_data_blocks": 191, "num_entries": 488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710651, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:04:11.815+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1656] Compacted 1@0 + 3@6 files to L6 => 138335660 bytes
debug 2023-07-18T20:04:11.815+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.817962) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 303.7 rd, 298.9 wr, level 6, files in(1, 3) out(3) MB in(0.5, 133.5) out(131.9), read-write-amplify(511.3) write-amplify(253.6) OK, records in: 36044, records dropped: 646 output_compression: NoCompression
debug 2023-07-18T20:04:11.815+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:04:11.817984) EVENT_LOG_v1 {"time_micros": 1689710651817974, "job": 1656, "event": "compaction_finished", "compaction_time_micros": 462889, "compaction_time_cpu_micros": 238522, "output_level": 6, "num_output_files": 3, "total_output_size": 138335660, "num_input_records": 36044, "num_output_records": 35398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:04:11.815+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021710.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:04:11.815+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651818374, "job": 1656, "event": "table_file_deletion", "file_number": 21710}
debug 2023-07-18T20:04:11.815+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021708.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:04:11.815+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651819495, "job": 1656, "event": "table_file_deletion", "file_number": 21708}
debug 2023-07-18T20:04:11.823+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021707.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:04:11.823+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651829312, "job": 1656, "event": "table_file_deletion", "file_number": 21707}
debug 2023-07-18T20:04:11.835+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021706.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:04:11.835+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710651839251, "job": 1656, "event": "table_file_deletion", "file_number": 21706}
debug 2023-07-18T20:04:11.835+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:04:11.835+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:04:11.835+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:04:11.835+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:04:11.835+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:04:11.879+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:04:11.879+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:11.168481+0000 osd.54 (osd.54) 51608 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:11.884145+0000 mon.j (mon.0) 21527 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:12.129532+0000 osd.54 (osd.54) 51609 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:12.597014+0000 mgr.b (mgr.12834102) 26215 : cluster [DBG] pgmap v26786: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:04:13.519+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:04:13.519+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/191099780' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:13.164090+0000 osd.54 (osd.54) 51610 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:13.524513+0000 mon.j (mon.0) 21528 : audit [DBG] from='client.? 10.1.182.12:0/191099780' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:14.151435+0000 osd.54 (osd.54) 51611 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:14.597955+0000 mgr.b (mgr.12834102) 26216 : cluster [DBG] pgmap v26787: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:14.876979+0000 mon.l (mon.2) 15513 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:14.877268+0000 mon.l (mon.2) 15514 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:15.891+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:15.150588+0000 osd.54 (osd.54) 51612 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:16.000080+0000 mon.k (mon.1) 18813 : audit [DBG] from='client.? 10.1.222.242:0/3437917042' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:04:16.295257+0000 mon.k (mon.1) 18814 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:16.295415+0000 mon.k (mon.1) 18815 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:16.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:04:16.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:16.113511+0000 osd.54 (osd.54) 51613 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:16.395820+0000 mon.l (mon.2) 15515 : audit [DBG] from='client.? 10.1.222.242:0/3918280025' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:04:16.489142+0000 mon.j (mon.0) 21529 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:16.489398+0000 mon.j (mon.0) 21530 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:16.598914+0000 mgr.b (mgr.12834102) 26217 : cluster [DBG] pgmap v26788: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:17.087476+0000 osd.54 (osd.54) 51614 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:18.059736+0000 osd.54 (osd.54) 51615 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:18.599852+0000 mgr.b (mgr.12834102) 26218 : cluster [DBG] pgmap v26789: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:19.067600+0000 osd.54 (osd.54) 51616 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:04:20.895+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:20.034149+0000 osd.54 (osd.54) 51617 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:20.600838+0000 mgr.b (mgr.12834102) 26219 : cluster [DBG] pgmap v26790: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:21.035621+0000 osd.54 (osd.54) 51618 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:22.034301+0000 osd.54 (osd.54) 51619 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:22.601828+0000 mgr.b (mgr.12834102) 26220 : cluster [DBG] pgmap v26791: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:22.647712+0000 mon.k (mon.1) 18816 : audit [DBG] from='client.? 10.1.222.242:0/1609605661' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:04:23.044242+0000 mon.k (mon.1) 18817 : audit [DBG] from='client.? 10.1.222.242:0/3011426927' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:23.066862+0000 osd.54 (osd.54) 51620 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:24.065486+0000 osd.54 (osd.54) 51621 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:24.602778+0000 mgr.b (mgr.12834102) 26221 : cluster [DBG] pgmap v26792: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:24.846904+0000 mon.l (mon.2) 15516 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:24.847185+0000 mon.l (mon.2) 15517 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:25.895+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:25.079770+0000 osd.54 (osd.54) 51622 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:26.292611+0000 mon.k (mon.1) 18818 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:26.292884+0000 mon.k (mon.1) 18819 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:26.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:04:26.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:26.879+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:04:26.879+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:26.085291+0000 osd.54 (osd.54) 51623 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:26.484391+0000 mon.j (mon.0) 21531 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:26.484553+0000 mon.j (mon.0) 21532 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:26.603721+0000 mgr.b (mgr.12834102) 26222 : cluster [DBG] pgmap v26793: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:26.884083+0000 mon.j (mon.0) 21533 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:27.084812+0000 osd.54 (osd.54) 51624 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:04:29.003+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:04:29.003+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1055677162' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:28.067024+0000 osd.54 (osd.54) 51625 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:28.604674+0000 mgr.b (mgr.12834102) 26223 : cluster [DBG] pgmap v26794: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:29.006127+0000 mon.j (mon.0) 21534 : audit [DBG] from='client.? 10.1.182.12:0/1055677162' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:29.034454+0000 osd.54 (osd.54) 51626 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:30.143261+0000 mon.k (mon.1) 18820 : audit [DBG] from='client.? 10.1.207.132:0/3850931885' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:04:30.895+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:30.015023+0000 osd.54 (osd.54) 51627 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:30.605599+0000 mgr.b (mgr.12834102) 26224 : cluster [DBG] pgmap v26795: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:31.043653+0000 osd.54 (osd.54) 51628 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:32.004082+0000 osd.54 (osd.54) 51629 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:32.606562+0000 mgr.b (mgr.12834102) 26225 : cluster [DBG] pgmap v26796: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:33.041760+0000 osd.54 (osd.54) 51630 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:34.004146+0000 osd.54 (osd.54) 51631 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:34.607517+0000 mgr.b (mgr.12834102) 26226 : cluster [DBG] pgmap v26797: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:34.848385+0000 mon.l (mon.2) 15518 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:34.848702+0000 mon.l (mon.2) 15519 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:35.899+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:04:36.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:04:36.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:35.015227+0000 osd.54 (osd.54) 51632 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:36.282245+0000 mon.k (mon.1) 18821 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:36.282533+0000 mon.k (mon.1) 18822 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:04:36.485717+0000 mon.j (mon.0) 21535 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:36.486005+0000 mon.j (mon.0) 21536 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:36.025994+0000 osd.54 (osd.54) 51633 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:36.608504+0000 mgr.b (mgr.12834102) 26227 : cluster [DBG] pgmap v26798: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:37.072924+0000 osd.54 (osd.54) 51634 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:38.050019+0000 osd.54 (osd.54) 51635 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:38.609426+0000 mgr.b (mgr.12834102) 26228 : cluster [DBG] pgmap v26799: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:39.046001+0000 osd.54 (osd.54) 51636 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:04:40.899+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:40.076469+0000 osd.54 (osd.54) 51637 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:40.610404+0000 mgr.b (mgr.12834102) 26229 : cluster [DBG] pgmap v26800: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:04:41.883+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:04:41.883+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:41.079115+0000 osd.54 (osd.54) 51638 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:41.887810+0000 mon.j (mon.0) 21537 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:42.100154+0000 osd.54 (osd.54) 51639 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:42.611378+0000 mgr.b (mgr.12834102) 26230 : cluster [DBG] pgmap v26801: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:04:44.487+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:04:44.487+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1007658177' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:43.123449+0000 osd.54 (osd.54) 51640 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:44.492507+0000 mon.j (mon.0) 21538 : audit [DBG] from='client.? 10.1.182.12:0/1007658177' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:44.084144+0000 osd.54 (osd.54) 51641 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:44.612377+0000 mgr.b (mgr.12834102) 26231 : cluster [DBG] pgmap v26802: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:44.851880+0000 mon.l (mon.2) 15520 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:44.852177+0000 mon.l (mon.2) 15521 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:45.903+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:04:46.471+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:04:46.471+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:45.079616+0000 osd.54 (osd.54) 51642 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:46.313075+0000 mon.k (mon.1) 18823 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:46.313351+0000 mon.k (mon.1) 18824 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:04:46.474938+0000 mon.j (mon.0) 21539 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:46.475206+0000 mon.j (mon.0) 21540 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:46.097283+0000 osd.54 (osd.54) 51643 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:46.613359+0000 mgr.b (mgr.12834102) 26232 : cluster [DBG] pgmap v26803: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:47.328012+0000 mon.l (mon.2) 15522 : audit [DBG] from='client.? 10.1.222.242:0/1130632532' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:47.060968+0000 osd.54 (osd.54) 51644 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:47.729883+0000 mon.l (mon.2) 15523 : audit [DBG] from='client.? 10.1.222.242:0/1462510004' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:48.061324+0000 osd.54 (osd.54) 51645 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:48.614291+0000 mgr.b (mgr.12834102) 26233 : cluster [DBG] pgmap v26804: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:49.072924+0000 osd.54 (osd.54) 51646 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:50.457382+0000 mon.l (mon.2) 15524 : audit [DBG] from='client.? 10.1.207.132:0/2261429380' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:04:50.903+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:04:50.033733+0000 osd.54 (osd.54) 51647 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:50.615251+0000 mgr.b (mgr.12834102) 26234 : cluster [DBG] pgmap v26805: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:04:51.070569+0000 osd.54 (osd.54) 51648 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:52.056699+0000 osd.54 (osd.54) 51649 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:52.616236+0000 mgr.b (mgr.12834102) 26235 : cluster [DBG] pgmap v26806: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:53.575183+0000 mon.k (mon.1) 18825 : audit [DBG] from='client.? 10.1.222.242:0/4086736194' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:53.048063+0000 osd.54 (osd.54) 51650 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:53.973241+0000 mon.l (mon.2) 15525 : audit [DBG] from='client.? 10.1.222.242:0/70361257' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:54.061496+0000 osd.54 (osd.54) 51651 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:54.616765+0000 mgr.b (mgr.12834102) 26236 : cluster [DBG] pgmap v26807: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:54.865873+0000 mon.l (mon.2) 15526 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:54.866149+0000 mon.l (mon.2) 15527 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:55.903+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:04:56.471+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:04:56.471+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:04:55.047879+0000 osd.54 (osd.54) 51652 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:56.286383+0000 mon.k (mon.1) 18826 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:56.286541+0000 mon.k (mon.1) 18827 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:04:56.477114+0000 mon.j (mon.0) 21541 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:04:56.477390+0000 mon.j (mon.0) 21542 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:04:56.883+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:04:56.883+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:56.095280+0000 osd.54 (osd.54) 51653 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:56.617668+0000 mgr.b (mgr.12834102) 26237 : cluster [DBG] pgmap v26808: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:04:56.887824+0000 mon.j (mon.0) 21543 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:57.106125+0000 osd.54 (osd.54) 51654 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:58.105385+0000 osd.54 (osd.54) 51655 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:04:58.618600+0000 mgr.b (mgr.12834102) 26238 : cluster [DBG] pgmap v26809: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:04:59.975+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:04:59.975+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2967966844' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:04:59.136002+0000 osd.54 (osd.54) 51656 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:04:59.978350+0000 mon.j (mon.0) 21544 : audit [DBG] from='client.? 10.1.182.12:0/2967966844' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:05:00.907+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:05:00.088055+0000 osd.54 (osd.54) 51657 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:00.619580+0000 mgr.b (mgr.12834102) 26239 : cluster [DBG] pgmap v26810: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:01.076118+0000 osd.54 (osd.54) 51658 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:02.052510+0000 osd.54 (osd.54) 51659 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:02.620502+0000 mgr.b (mgr.12834102) 26240 : cluster [DBG] pgmap v26811: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:03.050920+0000 osd.54 (osd.54) 51660 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:04.054514+0000 osd.54 (osd.54) 51661 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:04.621487+0000 mgr.b (mgr.12834102) 26241 : cluster [DBG] pgmap v26812: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:05:04.851241+0000 mon.l (mon.2) 15528 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:04.851561+0000 mon.l (mon.2) 15529 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:05:05.907+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:05:06.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:05:06.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:05.008117+0000 osd.54 (osd.54) 51662 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:06.318078+0000 mon.k (mon.1) 18828 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:06.318245+0000 mon.k (mon.1) 18829 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:06.495615+0000 mon.j (mon.0) 21545 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:06.495784+0000 mon.j (mon.0) 21546 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:06.020901+0000 osd.54 (osd.54) 51663 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:06.622462+0000 mgr.b (mgr.12834102) 26242 : cluster [DBG] pgmap v26813: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:07.065006+0000 osd.54 (osd.54) 51664 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:08.017264+0000 osd.54 (osd.54) 51665 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:08.623428+0000 mgr.b (mgr.12834102) 26243 : cluster [DBG] pgmap v26814: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:09.013446+0000 osd.54 (osd.54) 51666 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:10.584335+0000 mon.k (mon.1) 18830 : audit [DBG] from='client.? 10.1.207.132:0/3310373138' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:05:10.911+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:05:10.050932+0000 osd.54 (osd.54) 51667 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:10.624423+0000 mgr.b (mgr.12834102) 26244 : cluster [DBG] pgmap v26815: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:05:11.883+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:05:11.883+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:11.011638+0000 osd.54 (osd.54) 51668 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:11.887455+0000 mon.j (mon.0) 21547 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:12.011876+0000 osd.54 (osd.54) 51669 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:12.625401+0000 mgr.b (mgr.12834102) 26245 : cluster [DBG] pgmap v26816: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:12.989399+0000 osd.54 (osd.54) 51670 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:15.487+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:05:15.487+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1902654138' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:13.971695+0000 osd.54 (osd.54) 51671 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:14.626316+0000 mgr.b (mgr.12834102) 26246 : cluster [DBG] pgmap v26817: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:05:14.854025+0000 mon.l (mon.2) 15530 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:14.854318+0000 mon.l (mon.2) 15531 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:15.490396+0000 mon.j (mon.0) 21548 : audit [DBG] from='client.? 10.1.182.12:0/1902654138' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:05:15.911+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:05:16.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:05:16.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:14.952845+0000 osd.54 (osd.54) 51672 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:16.313750+0000 mon.k (mon.1) 18831 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:16.314021+0000 mon.k (mon.1) 18832 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:16.494846+0000 mon.j (mon.0) 21549 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:16.495111+0000 mon.j (mon.0) 21550 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:15.961488+0000 osd.54 (osd.54) 51673 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:16.627260+0000 mgr.b (mgr.12834102) 26247 : cluster [DBG] pgmap v26818: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:16.917453+0000 osd.54 (osd.54) 51674 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:18.342269+0000 mon.l (mon.2) 15532 : audit [DBG] from='client.? 10.1.222.242:0/3474899828' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:05:18.746164+0000 mon.l (mon.2) 15533 : audit [DBG] from='client.? 10.1.222.242:0/2970437081' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:17.905811+0000 osd.54 (osd.54) 51675 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:18.628170+0000 mgr.b (mgr.12834102) 26248 : cluster [DBG] pgmap v26819: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:18.876650+0000 osd.54 (osd.54) 51676 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:20.915+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:05:19.906134+0000 osd.54 (osd.54) 51677 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:20.629133+0000 mgr.b (mgr.12834102) 26249 : cluster [DBG] pgmap v26820: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:20.947765+0000 osd.54 (osd.54) 51678 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:21.993876+0000 osd.54 (osd.54) 51679 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:22.630114+0000 mgr.b (mgr.12834102) 26250 : cluster [DBG] pgmap v26821: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:22.993328+0000 osd.54 (osd.54) 51680 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:24.020307+0000 osd.54 (osd.54) 51681 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:24.491650+0000 mon.l (mon.2) 15534 : audit [DBG] from='client.? 10.1.222.242:0/3878743078' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:24.631051+0000 mgr.b (mgr.12834102) 26251 : cluster [DBG] pgmap v26822: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:05:24.833415+0000 mon.l (mon.2) 15535 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:24.833757+0000 mon.l (mon.2) 15536 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:25.023867+0000 mon.k (mon.1) 18833 : audit [DBG] from='client.? 10.1.222.242:0/2141587797' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:25.056625+0000 osd.54 (osd.54) 51682 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:25.915+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:05:26.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:05:26.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:05:26.883+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:05:26.883+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:26.081899+0000 osd.54 (osd.54) 51683 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:26.288036+0000 mon.k (mon.1) 18834 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:26.288327+0000 mon.k (mon.1) 18835 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:26.491626+0000 mon.j (mon.0) 21551 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:26.491821+0000 mon.j (mon.0) 21552 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:26.887846+0000 mon.j (mon.0) 21553 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:26.632039+0000 mgr.b (mgr.12834102) 26252 : cluster [DBG] pgmap v26823: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:27.125270+0000 osd.54 (osd.54) 51684 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:28.135689+0000 osd.54 (osd.54) 51685 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:28.633033+0000 mgr.b (mgr.12834102) 26253 : cluster [DBG] pgmap v26824: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:29.098358+0000 osd.54 (osd.54) 51686 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:30.914+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:05:30.946+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:05:30.946+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1076335047' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:30.059399+0000 osd.54 (osd.54) 51687 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:30.702094+0000 mon.l (mon.2) 15537 : audit [DBG] from='client.? 10.1.207.132:0/743683444' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:05:30.950888+0000 mon.j (mon.0) 21554 : audit [DBG] from='client.? 10.1.182.12:0/1076335047' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:30.633729+0000 mgr.b (mgr.12834102) 26254 : cluster [DBG] pgmap v26825: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:31.103798+0000 osd.54 (osd.54) 51688 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:32.151506+0000 osd.54 (osd.54) 51689 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:32.634656+0000 mgr.b (mgr.12834102) 26255 : cluster [DBG] pgmap v26826: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:33.102228+0000 osd.54 (osd.54) 51690 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:34.120033+0000 osd.54 (osd.54) 51691 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:34.851465+0000 mon.l (mon.2) 15538 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:34.851748+0000 mon.l (mon.2) 15539 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:05:35.918+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:05:34.635606+0000 mgr.b (mgr.12834102) 26256 : cluster [DBG] pgmap v26827: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:35.133821+0000 osd.54 (osd.54) 51692 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:36.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:05:36.486+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:36.177602+0000 osd.54 (osd.54) 51693 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:36.304867+0000 mon.k (mon.1) 18836 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:36.305031+0000 mon.k (mon.1) 18837 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:36.490598+0000 mon.j (mon.0) 21555 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:36.490862+0000 mon.j (mon.0) 21556 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:36.636620+0000 mgr.b (mgr.12834102) 26257 : cluster [DBG] pgmap v26828: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:37.172905+0000 osd.54 (osd.54) 51694 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:38.213970+0000 osd.54 (osd.54) 51695 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:38.637645+0000 mgr.b (mgr.12834102) 26258 : cluster [DBG] pgmap v26829: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:39.202058+0000 osd.54 (osd.54) 51696 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:40.922+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:05:40.202642+0000 osd.54 (osd.54) 51697 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:41.882+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:05:41.882+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:40.638622+0000 mgr.b (mgr.12834102) 26259 : cluster [DBG] pgmap v26830: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:41.175699+0000 osd.54 (osd.54) 51698 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:41.887420+0000 mon.j (mon.0) 21557 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:42.196657+0000 osd.54 (osd.54) 51699 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:42.639627+0000 mgr.b (mgr.12834102) 26260 : cluster [DBG] pgmap v26831: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:43.227190+0000 osd.54 (osd.54) 51700 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:44.206684+0000 osd.54 (osd.54) 51701 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:44.853387+0000 mon.l (mon.2) 15540 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:44.853591+0000 mon.l (mon.2) 15541 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:05:45.922+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:05:44.640613+0000 mgr.b (mgr.12834102) 26261 : cluster [DBG] pgmap v26832: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:45.216123+0000 osd.54 (osd.54) 51702 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:05:46.422+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:05:46.422+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/439995287' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:05:46.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:05:46.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:46.170046+0000 osd.54 (osd.54) 51703 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:46.286963+0000 mon.k (mon.1) 18838 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:46.287260+0000 mon.k (mon.1) 18839 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:46.427905+0000 mon.j (mon.0) 21558 : audit [DBG] from='client.? 10.1.182.12:0/439995287' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:05:46.483529+0000 mon.j (mon.0) 21559 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:46.483788+0000 mon.j (mon.0) 21560 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:46.641643+0000 mgr.b (mgr.12834102) 26262 : cluster [DBG] pgmap v26833: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:47.214627+0000 osd.54 (osd.54) 51704 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:48.198529+0000 osd.54 (osd.54) 51705 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:48.642656+0000 mgr.b (mgr.12834102) 26263 : cluster [DBG] pgmap v26834: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:49.182783+0000 osd.54 (osd.54) 51706 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:49.362044+0000 mon.k (mon.1) 18840 : audit [DBG] from='client.? 10.1.222.242:0/2457331936' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
debug 2023-07-18T20:05:50.922+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
audit 2023-07-18T20:05:50.083169+0000 mon.k (mon.1) 18841 : audit [DBG] from='client.? 10.1.222.242:0/2503463718' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:50.212048+0000 osd.54 (osd.54) 51707 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:50.643660+0000 mgr.b (mgr.12834102) 26264 : cluster [DBG] pgmap v26835: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:51.246932+0000 osd.54 (osd.54) 51708 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:51.313046+0000 mon.l (mon.2) 15542 : audit [DBG] from='client.? 10.1.207.132:0/112062504' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:52.258064+0000 osd.54 (osd.54) 51709 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:52.644678+0000 mgr.b (mgr.12834102) 26265 : cluster [DBG] pgmap v26836: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:53.284898+0000 osd.54 (osd.54) 51710 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:54.321803+0000 osd.54 (osd.54) 51711 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:54.854088+0000 mon.l (mon.2) 15543 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:54.854408+0000 mon.l (mon.2) 15544 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:05:55.922+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:05:54.645654+0000 mgr.b (mgr.12834102) 26266 : cluster [DBG] pgmap v26837: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:55.327327+0000 osd.54 (osd.54) 51712 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:55.545318+0000 mon.k (mon.1) 18842 : audit [DBG] from='client.? 10.1.222.242:0/3936005862' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:05:55.948270+0000 mon.k (mon.1) 18843 : audit [DBG] from='client.? 10.1.222.242:0/1906355773' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:05:56.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:05:56.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:05:56.882+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:05:56.882+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:05:56.289732+0000 mon.k (mon.1) 18844 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:56.290054+0000 mon.k (mon.1) 18845 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:05:56.324702+0000 osd.54 (osd.54) 51713 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:05:56.485124+0000 mon.j (mon.0) 21561 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:05:56.485394+0000 mon.j (mon.0) 21562 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:05:56.887929+0000 mon.j (mon.0) 21563 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:05:56.646633+0000 mgr.b (mgr.12834102) 26267 : cluster [DBG] pgmap v26838: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:57.308231+0000 osd.54 (osd.54) 51714 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:58.352973+0000 osd.54 (osd.54) 51715 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:05:58.647591+0000 mgr.b (mgr.12834102) 26268 : cluster [DBG] pgmap v26839: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:05:59.359511+0000 osd.54 (osd.54) 51716 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:06:00.926+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:06:00.380493+0000 osd.54 (osd.54) 51717 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:06:01.898+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:06:01.898+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2655798085' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:00.648669+0000 mgr.b (mgr.12834102) 26269 : cluster [DBG] pgmap v26840: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:01.409040+0000 osd.54 (osd.54) 51718 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:01.902001+0000 mon.j (mon.0) 21564 : audit [DBG] from='client.? 10.1.182.12:0/2655798085' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:02.386420+0000 osd.54 (osd.54) 51719 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:02.649633+0000 mgr.b (mgr.12834102) 26270 : cluster [DBG] pgmap v26841: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:03.338075+0000 osd.54 (osd.54) 51720 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:04.330574+0000 osd.54 (osd.54) 51721 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:04.650567+0000 mgr.b (mgr.12834102) 26271 : cluster [DBG] pgmap v26842: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:04.897078+0000 mon.l (mon.2) 15545 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:04.897377+0000 mon.l (mon.2) 15546 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:05.926+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:06:05.302135+0000 osd.54 (osd.54) 51722 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:06.282685+0000 mon.k (mon.1) 18846 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:06.282977+0000 mon.k (mon.1) 18847 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:06.474+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:06:06.474+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:06.309919+0000 osd.54 (osd.54) 51723 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:06.480364+0000 mon.j (mon.0) 21565 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:06.480627+0000 mon.j (mon.0) 21566 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:06.651545+0000 mgr.b (mgr.12834102) 26272 : cluster [DBG] pgmap v26843: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:07.359517+0000 osd.54 (osd.54) 51724 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:08.380057+0000 osd.54 (osd.54) 51725 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:08.652558+0000 mgr.b (mgr.12834102) 26273 : cluster [DBG] pgmap v26844: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:09.425619+0000 osd.54 (osd.54) 51726 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:06:10.934+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:06:10.934+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21714. Immutable memtables: 0.
debug 2023-07-18T20:06:10.938+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:10.941435) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:06:10.938+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1657] Flushing memtable with next log file: 21714
debug 2023-07-18T20:06:10.938+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710770941491, "job": 1657, "event": "flush_started", "num_memtables": 1, "num_entries": 2194, "num_deletes": 546, "total_data_size": 3034537, "memory_usage": 3074600, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:06:10.938+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1657] Level-0 flush table #21715: started
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710770954768, "cf_name": "default", "job": 1657, "event": "table_file_creation", "file_number": 21715, "file_size": 2379292, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 2370731, "index_size": 4432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 29765, "raw_average_key_size": 24, "raw_value_size": 2349635, "raw_average_value_size": 1914, "num_data_blocks": 173, "num_entries": 1227, "num_deletions": 546, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710651, "oldest_key_time": 1689710651, "file_creation_time": 1689710770, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1657] Level-0 flush table #21715: 2379292 bytes OK
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:10.954986) [db/memtable_list.cc:449] [default] Level-0 commit table #21715 started
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:10.955278) [db/memtable_list.cc:628] [default] Level-0 commit table #21715: memtable #1 done
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:10.955290) EVENT_LOG_v1 {"time_micros": 1689710770955286, "job": 1657, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:10.955301) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1657] Try to delete WAL files size 3023745, prev total WAL file size 3024321, number of live WAL files 2.
debug 2023-07-18T20:06:10.950+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021709.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:10.950+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:10.950+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:06:10.955991) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323638333832' seq:72057594037927935, type:20 .. '6C6F676D0033323638363336' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:06:10.950+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1658] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:06:10.950+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1658 Base level 0, inputs: [21715(2323KB)], [21711(64MB) 21712(64MB) 21713(3694KB)]
debug 2023-07-18T20:06:10.950+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710770956043, "job": 1658, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21715], "files_L6": [21711, 21712, 21713], "score": -1, "input_data_size": 140714952}
debug 2023-07-18T20:06:11.178+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1658] Generated table #21716: 21751 keys, 67286334 bytes
debug 2023-07-18T20:06:11.178+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710771183969, "cf_name": "default", "job": 1658, "event": "table_file_creation", "file_number": 21716, "file_size": 67286334, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67172356, "index_size": 58522, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54469, "raw_key_size": 590414, "raw_average_key_size": 27, "raw_value_size": 66819142, "raw_average_value_size": 3072, "num_data_blocks": 2159, "num_entries": 21751, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710770, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
cluster 2023-07-18T20:06:10.459857+0000 osd.54 (osd.54) 51727 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:10.653558+0000 mgr.b (mgr.12834102) 26274 : cluster [DBG] pgmap v26845: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:06:11.366+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1658] Generated table #21717: 13043 keys, 67269343 bytes
debug 2023-07-18T20:06:11.366+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710771371138, "cf_name": "default", "job": 1658, "event": "table_file_creation", "file_number": 21717, "file_size": 67269343, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67142618, "index_size": 93030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 288931, "raw_average_key_size": 22, "raw_value_size": 66865839, "raw_average_value_size": 5126, "num_data_blocks": 3450, "num_entries": 13043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710771, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1658] Generated table #21718: 725 keys, 5894749 bytes
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710771389635, "cf_name": "default", "job": 1658, "event": "table_file_creation", "file_number": 21718, "file_size": 5894749, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 5884594, "index_size": 7312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15310, "raw_average_key_size": 21, "raw_value_size": 5867593, "raw_average_value_size": 8093, "num_data_blocks": 288, "num_entries": 725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710771, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1658] Compacted 1@0 + 3@6 files to L6 => 140450426 bytes
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:06:11.390586) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 324.5 rd, 323.9 wr, level 6, files in(1, 3) out(3) MB in(2.3, 131.9) out(133.9), read-write-amplify(118.2) write-amplify(59.0) OK, records in: 36625, records dropped: 1106 output_compression: NoCompression
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:06:11.390604) EVENT_LOG_v1 {"time_micros": 1689710771390595, "job": 1658, "event": "compaction_finished", "compaction_time_micros": 433603, "compaction_time_cpu_micros": 202535, "output_level": 6, "num_output_files": 3, "total_output_size": 140450426, "num_input_records": 36625, "num_output_records": 35519, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021715.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710771391027, "job": 1658, "event": "table_file_deletion", "file_number": 21715}
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021713.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:11.386+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710771391508, "job": 1658, "event": "table_file_deletion", "file_number": 21713}
debug 2023-07-18T20:06:11.394+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021712.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:11.394+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710771400577, "job": 1658, "event": "table_file_deletion", "file_number": 21712}
debug 2023-07-18T20:06:11.406+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021711.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:11.406+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710771410538, "job": 1658, "event": "table_file_deletion", "file_number": 21711}
debug 2023-07-18T20:06:11.406+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:11.406+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:11.406+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:11.406+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:11.406+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:11.882+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:06:11.882+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:11.450022+0000 osd.54 (osd.54) 51728 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:11.757223+0000 mon.k (mon.1) 18848 : audit [DBG] from='client.? 10.1.207.132:0/511641357' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:06:11.888141+0000 mon.j (mon.0) 21567 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:12.436420+0000 osd.54 (osd.54) 51729 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:12.654559+0000 mgr.b (mgr.12834102) 26275 : cluster [DBG] pgmap v26846: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:13.459684+0000 osd.54 (osd.54) 51730 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:14.458881+0000 osd.54 (osd.54) 51731 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:14.655516+0000 mgr.b (mgr.12834102) 26276 : cluster [DBG] pgmap v26847: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:14.848190+0000 mon.l (mon.2) 15547 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:14.848469+0000 mon.l (mon.2) 15548 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:15.934+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:06:15.410343+0000 osd.54 (osd.54) 51732 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:16.294040+0000 mon.k (mon.1) 18849 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:16.294309+0000 mon.k (mon.1) 18850 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:16.494+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:06:16.494+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:17.374+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:06:17.374+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/674204810' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:16.443429+0000 osd.54 (osd.54) 51733 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:16.500297+0000 mon.j (mon.0) 21568 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:16.500615+0000 mon.j (mon.0) 21569 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:16.656579+0000 mgr.b (mgr.12834102) 26277 : cluster [DBG] pgmap v26848: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:17.381330+0000 mon.j (mon.0) 21570 : audit [DBG] from='client.? 10.1.182.12:0/674204810' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:17.409687+0000 osd.54 (osd.54) 51734 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:18.410719+0000 osd.54 (osd.54) 51735 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:18.657584+0000 mgr.b (mgr.12834102) 26278 : cluster [DBG] pgmap v26849: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:19.398870+0000 osd.54 (osd.54) 51736 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:06:20.938+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:06:20.353674+0000 osd.54 (osd.54) 51737 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:20.656019+0000 mon.l (mon.2) 15549 : audit [DBG] from='client.? 10.1.222.242:0/1710391953' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:20.658593+0000 mgr.b (mgr.12834102) 26279 : cluster [DBG] pgmap v26850: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:21.146046+0000 mon.k (mon.1) 18851 : audit [DBG] from='client.? 10.1.222.242:0/2446205776' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:21.355640+0000 osd.54 (osd.54) 51738 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:22.395378+0000 osd.54 (osd.54) 51739 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:22.659631+0000 mgr.b (mgr.12834102) 26280 : cluster [DBG] pgmap v26851: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:23.412950+0000 osd.54 (osd.54) 51740 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:24.431629+0000 osd.54 (osd.54) 51741 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:24.660653+0000 mgr.b (mgr.12834102) 26281 : cluster [DBG] pgmap v26852: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:24.896546+0000 mon.l (mon.2) 15550 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:24.896816+0000 mon.l (mon.2) 15551 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:25.938+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:06:26.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:06:26.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:25.415747+0000 osd.54 (osd.54) 51742 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:26.313313+0000 mon.k (mon.1) 18852 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:26.313577+0000 mon.k (mon.1) 18853 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:06:26.483469+0000 mon.j (mon.0) 21571 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:26.483738+0000 mon.j (mon.0) 21572 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:26.882+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:06:26.882+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:26.439536+0000 osd.54 (osd.54) 51743 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:26.661664+0000 mgr.b (mgr.12834102) 26282 : cluster [DBG] pgmap v26853: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:26.665595+0000 mon.k (mon.1) 18854 : audit [DBG] from='client.? 10.1.222.242:0/2380015513' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:06:26.887469+0000 mon.j (mon.0) 21573 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:06:27.113890+0000 mon.k (mon.1) 18855 : audit [DBG] from='client.? 10.1.222.242:0/60083846' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:27.404974+0000 osd.54 (osd.54) 51744 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:28.376247+0000 osd.54 (osd.54) 51745 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:28.662729+0000 mgr.b (mgr.12834102) 26283 : cluster [DBG] pgmap v26854: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:29.381815+0000 osd.54 (osd.54) 51746 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:06:30.938+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:06:30.376676+0000 osd.54 (osd.54) 51747 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:30.663592+0000 mgr.b (mgr.12834102) 26284 : cluster [DBG] pgmap v26855: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:31.400253+0000 osd.54 (osd.54) 51748 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:32.144630+0000 mon.l (mon.2) 15552 : audit [DBG] from='client.? 10.1.207.132:0/1883139727' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:06:32.854+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:06:32.854+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1091489994' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:32.414605+0000 osd.54 (osd.54) 51749 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:32.664187+0000 mgr.b (mgr.12834102) 26285 : cluster [DBG] pgmap v26856: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:32.857596+0000 mon.j (mon.0) 21574 : audit [DBG] from='client.? 10.1.182.12:0/1091489994' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:33.378434+0000 osd.54 (osd.54) 51750 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:34.351043+0000 osd.54 (osd.54) 51751 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:34.664984+0000 mgr.b (mgr.12834102) 26286 : cluster [DBG] pgmap v26857: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:34.856578+0000 mon.l (mon.2) 15553 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:34.858006+0000 mon.l (mon.2) 15554 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:35.942+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:06:36.494+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:06:36.494+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:35.385268+0000 osd.54 (osd.54) 51752 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:36.282289+0000 mon.k (mon.1) 18856 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:36.282579+0000 mon.k (mon.1) 18857 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:06:36.500073+0000 mon.j (mon.0) 21575 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:36.500230+0000 mon.j (mon.0) 21576 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:36.356464+0000 osd.54 (osd.54) 51753 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:36.665844+0000 mgr.b (mgr.12834102) 26287 : cluster [DBG] pgmap v26858: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:37.387348+0000 osd.54 (osd.54) 51754 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:38.371189+0000 osd.54 (osd.54) 51755 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:38.666669+0000 mgr.b (mgr.12834102) 26288 : cluster [DBG] pgmap v26859: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:06:39.586+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21719. Immutable memtables: 0.
debug 2023-07-18T20:06:39.586+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:39.592058) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:06:39.586+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1659] Flushing memtable with next log file: 21719
debug 2023-07-18T20:06:39.586+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710799592112, "job": 1659, "event": "flush_started", "num_memtables": 1, "num_entries": 736, "num_deletes": 322, "total_data_size": 782363, "memory_usage": 795488, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:06:39.586+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1659] Level-0 flush table #21720: started
debug 2023-07-18T20:06:39.590+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710799596900, "cf_name": "default", "job": 1659, "event": "table_file_creation", "file_number": 21720, "file_size": 623815, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 620316, "index_size": 1168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11660, "raw_average_key_size": 23, "raw_value_size": 612300, "raw_average_value_size": 1217, "num_data_blocks": 46, "num_entries": 503, "num_deletions": 322, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710770, "oldest_key_time": 1689710770, "file_creation_time": 1689710799, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:06:39.590+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1659] Level-0 flush table #21720: 623815 bytes OK
debug 2023-07-18T20:06:39.594+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:39.597112) [db/memtable_list.cc:449] [default] Level-0 commit table #21720 started
debug 2023-07-18T20:06:39.594+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:39.597388) [db/memtable_list.cc:628] [default] Level-0 commit table #21720: memtable #1 done
debug 2023-07-18T20:06:39.594+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:39.597400) EVENT_LOG_v1 {"time_micros": 1689710799597396, "job": 1659, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:06:39.594+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:06:39.597424) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:06:39.594+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1659] Try to delete WAL files size 778161, prev total WAL file size 778161, number of live WAL files 2.
debug 2023-07-18T20:06:39.594+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021714.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:39.594+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:39.594+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:06:39.597817) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323535313731' seq:72057594037927935, type:20 .. '7061786F730036323535343233' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:06:39.594+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1660] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:06:39.594+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1660 Base level 0, inputs: [21720(609KB)], [21716(64MB) 21717(64MB) 21718(5756KB)]
debug 2023-07-18T20:06:39.594+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710799597869, "job": 1660, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21720], "files_L6": [21716, 21717, 21718], "score": -1, "input_data_size": 141074241}
debug 2023-07-18T20:06:39.814+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1660] Generated table #21721: 21782 keys, 67305804 bytes
debug 2023-07-18T20:06:39.814+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710799820655, "cf_name": "default", "job": 1660, "event": "table_file_creation", "file_number": 21721, "file_size": 67305804, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67191719, "index_size": 58629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54469, "raw_key_size": 591048, "raw_average_key_size": 27, "raw_value_size": 66838010, "raw_average_value_size": 3068, "num_data_blocks": 2164, "num_entries": 21782, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710799, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:06:40.002+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1660] Generated table #21722: 13053 keys, 67279998 bytes
debug 2023-07-18T20:06:40.002+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710800007903, "cf_name": "default", "job": 1660, "event": "table_file_creation", "file_number": 21722, "file_size": 67279998, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67152994, "index_size": 93309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 289156, "raw_average_key_size": 22, "raw_value_size": 66875825, "raw_average_value_size": 5123, "num_data_blocks": 3460, "num_entries": 13053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710799, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1660] Generated table #21723: 529 keys, 4414495 bytes
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710800021687, "cf_name": "default", "job": 1660, "event": "table_file_creation", "file_number": 21723, "file_size": 4414495, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 4406835, "index_size": 5329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11209, "raw_average_key_size": 21, "raw_value_size": 4394385, "raw_average_value_size": 8306, "num_data_blocks": 210, "num_entries": 529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710800, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1660] Compacted 1@0 + 3@6 files to L6 => 139000297 bytes
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:06:40.022640) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 332.8 rd, 327.9 wr, level 6, files in(1, 3) out(3) MB in(0.6, 133.9) out(132.6), read-write-amplify(449.0) write-amplify(222.8) OK, records in: 36022, records dropped: 658 output_compression: NoCompression
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:06:40.022657) EVENT_LOG_v1 {"time_micros": 1689710800022651, "job": 1660, "event": "compaction_finished", "compaction_time_micros": 423867, "compaction_time_cpu_micros": 199293, "output_level": 6, "num_output_files": 3, "total_output_size": 139000297, "num_input_records": 36022, "num_output_records": 35364, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021720.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710800022859, "job": 1660, "event": "table_file_deletion", "file_number": 21720}
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021718.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:40.018+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710800023585, "job": 1660, "event": "table_file_deletion", "file_number": 21718}
debug 2023-07-18T20:06:40.026+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021717.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:40.026+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710800033110, "job": 1660, "event": "table_file_deletion", "file_number": 21717}
debug 2023-07-18T20:06:40.038+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021716.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:06:40.038+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710800043016, "job": 1660, "event": "table_file_deletion", "file_number": 21716}
debug 2023-07-18T20:06:40.038+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:40.038+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:40.038+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:40.038+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:06:40.038+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
cluster 2023-07-18T20:06:39.330118+0000 osd.54 (osd.54) 51756 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:06:40.942+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:06:40.331119+0000 osd.54 (osd.54) 51757 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:40.667438+0000 mgr.b (mgr.12834102) 26289 : cluster [DBG] pgmap v26860: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:06:41.882+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:06:41.882+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:41.353872+0000 osd.54 (osd.54) 51758 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:41.887358+0000 mon.j (mon.0) 21577 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:42.350554+0000 osd.54 (osd.54) 51759 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:42.668419+0000 mgr.b (mgr.12834102) 26290 : cluster [DBG] pgmap v26861: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:43.369905+0000 osd.54 (osd.54) 51760 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:44.397421+0000 osd.54 (osd.54) 51761 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:44.669355+0000 mgr.b (mgr.12834102) 26291 : cluster [DBG] pgmap v26862: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:44.850737+0000 mon.l (mon.2) 15555 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:44.851053+0000 mon.l (mon.2) 15556 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:45.946+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:06:46.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:06:46.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:45.426370+0000 osd.54 (osd.54) 51762 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:46.293412+0000 mon.k (mon.1) 18858 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:46.293661+0000 mon.k (mon.1) 18859 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:06:46.483506+0000 mon.j (mon.0) 21578 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:46.483636+0000 mon.j (mon.0) 21579 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:46.423974+0000 osd.54 (osd.54) 51763 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:46.670318+0000 mgr.b (mgr.12834102) 26292 : cluster [DBG] pgmap v26863: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:06:48.333+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:06:48.333+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/448462030' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:47.399678+0000 osd.54 (osd.54) 51764 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:48.338376+0000 mon.j (mon.0) 21580 : audit [DBG] from='client.? 10.1.182.12:0/448462030' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:48.420674+0000 osd.54 (osd.54) 51765 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:48.671299+0000 mgr.b (mgr.12834102) 26293 : cluster [DBG] pgmap v26864: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:49.461754+0000 osd.54 (osd.54) 51766 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:06:50.945+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:06:50.413957+0000 osd.54 (osd.54) 51767 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:50.672262+0000 mgr.b (mgr.12834102) 26294 : cluster [DBG] pgmap v26865: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:51.379313+0000 osd.54 (osd.54) 51768 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:51.716905+0000 mon.k (mon.1) 18860 : audit [DBG] from='client.? 10.1.222.242:0/3104411148' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:06:52.130869+0000 mon.k (mon.1) 18861 : audit [DBG] from='client.? 10.1.222.242:0/3330153945' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:06:52.228709+0000 mon.k (mon.1) 18862 : audit [DBG] from='client.? 10.1.207.132:0/4066156627' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:52.344954+0000 osd.54 (osd.54) 51769 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:52.672998+0000 mgr.b (mgr.12834102) 26295 : cluster [DBG] pgmap v26866: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:53.301293+0000 osd.54 (osd.54) 51770 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:54.304559+0000 osd.54 (osd.54) 51771 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:54.673932+0000 mgr.b (mgr.12834102) 26296 : cluster [DBG] pgmap v26867: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:54.869986+0000 mon.l (mon.2) 15557 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:54.870256+0000 mon.l (mon.2) 15558 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:55.945+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:06:56.493+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:06:56.493+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:06:55.264325+0000 osd.54 (osd.54) 51772 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:56.281338+0000 mon.k (mon.1) 18863 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:56.281622+0000 mon.k (mon.1) 18864 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:06:56.497583+0000 mon.j (mon.0) 21581 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:06:56.497852+0000 mon.j (mon.0) 21582 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:06:56.881+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:06:56.881+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:56.268029+0000 osd.54 (osd.54) 51773 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:56.674892+0000 mgr.b (mgr.12834102) 26297 : cluster [DBG] pgmap v26868: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:06:56.888259+0000 mon.j (mon.0) 21583 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:06:57.688186+0000 mon.l (mon.2) 15559 : audit [DBG] from='client.? 10.1.222.242:0/2117047800' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:57.300488+0000 osd.54 (osd.54) 51774 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:06:58.130954+0000 mon.k (mon.1) 18865 : audit [DBG] from='client.? 10.1.222.242:0/1750157023' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:06:58.264852+0000 osd.54 (osd.54) 51775 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:06:58.675869+0000 mgr.b (mgr.12834102) 26298 : cluster [DBG] pgmap v26869: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:06:59.260455+0000 osd.54 (osd.54) 51776 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:00.949+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:00.257174+0000 osd.54 (osd.54) 51777 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:00.676847+0000 mgr.b (mgr.12834102) 26299 : cluster [DBG] pgmap v26870: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:01.276473+0000 osd.54 (osd.54) 51778 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:02.271814+0000 osd.54 (osd.54) 51779 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:02.677563+0000 mgr.b (mgr.12834102) 26300 : cluster [DBG] pgmap v26871: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:07:03.813+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:07:03.813+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1123446327' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:03.311403+0000 osd.54 (osd.54) 51780 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:03.818529+0000 mon.j (mon.0) 21584 : audit [DBG] from='client.? 10.1.182.12:0/1123446327' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:04.328386+0000 osd.54 (osd.54) 51781 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:04.678492+0000 mgr.b (mgr.12834102) 26301 : cluster [DBG] pgmap v26872: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:07:04.864794+0000 mon.l (mon.2) 15560 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:04.865055+0000 mon.l (mon.2) 15561 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:07:05.949+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:07:06.485+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:07:06.489+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:05.306193+0000 osd.54 (osd.54) 51782 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:06.291353+0000 mon.k (mon.1) 18866 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:06.291660+0000 mon.k (mon.1) 18867 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:07:06.493203+0000 mon.j (mon.0) 21585 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:06.493489+0000 mon.j (mon.0) 21586 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:06.298286+0000 osd.54 (osd.54) 51783 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:06.679442+0000 mgr.b (mgr.12834102) 26302 : cluster [DBG] pgmap v26873: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:07.346598+0000 osd.54 (osd.54) 51784 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:08.316130+0000 osd.54 (osd.54) 51785 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:08.680471+0000 mgr.b (mgr.12834102) 26303 : cluster [DBG] pgmap v26874: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:09.283161+0000 osd.54 (osd.54) 51786 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:10.953+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:10.287868+0000 osd.54 (osd.54) 51787 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:10.681465+0000 mgr.b (mgr.12834102) 26304 : cluster [DBG] pgmap v26875: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:11.291135+0000 osd.54 (osd.54) 51788 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:11.881+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:07:11.881+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:07:11.887349+0000 mon.j (mon.0) 21587 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:12.268254+0000 osd.54 (osd.54) 51789 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:12.388731+0000 mon.k (mon.1) 18868 : audit [DBG] from='client.? 10.1.207.132:0/3375730285' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:12.682437+0000 mgr.b (mgr.12834102) 26305 : cluster [DBG] pgmap v26876: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:13.252692+0000 osd.54 (osd.54) 51790 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:14.279523+0000 osd.54 (osd.54) 51791 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:14.683408+0000 mgr.b (mgr.12834102) 26306 : cluster [DBG] pgmap v26877: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:07:14.856605+0000 mon.l (mon.2) 15562 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:14.856869+0000 mon.l (mon.2) 15563 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:15.282426+0000 osd.54 (osd.54) 51792 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:15.953+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:07:16.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:07:16.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:07:16.298761+0000 mon.k (mon.1) 18869 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:16.299006+0000 mon.k (mon.1) 18870 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:16.306434+0000 osd.54 (osd.54) 51793 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:16.487938+0000 mon.j (mon.0) 21588 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:16.488225+0000 mon.j (mon.0) 21589 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:16.684349+0000 mgr.b (mgr.12834102) 26307 : cluster [DBG] pgmap v26878: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:17.348172+0000 osd.54 (osd.54) 51794 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:18.364347+0000 osd.54 (osd.54) 51795 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:19.301+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:07:19.301+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2479720893' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:18.685339+0000 mgr.b (mgr.12834102) 26308 : cluster [DBG] pgmap v26879: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:07:19.306620+0000 mon.j (mon.0) 21590 : audit [DBG] from='client.? 10.1.182.12:0/2479720893' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:19.350772+0000 osd.54 (osd.54) 51796 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:20.953+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:20.334250+0000 osd.54 (osd.54) 51797 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:20.686310+0000 mgr.b (mgr.12834102) 26309 : cluster [DBG] pgmap v26880: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:21.371295+0000 osd.54 (osd.54) 51798 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:22.343822+0000 osd.54 (osd.54) 51799 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:22.752260+0000 mon.k (mon.1) 18871 : audit [DBG] from='client.? 10.1.222.242:0/2106392508' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:22.687277+0000 mgr.b (mgr.12834102) 26310 : cluster [DBG] pgmap v26881: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:07:23.199056+0000 mon.l (mon.2) 15564 : audit [DBG] from='client.? 10.1.222.242:0/1282562545' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:23.370371+0000 osd.54 (osd.54) 51800 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:24.384027+0000 osd.54 (osd.54) 51801 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:24.858278+0000 mon.l (mon.2) 15565 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:24.858545+0000 mon.l (mon.2) 15566 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:07:25.957+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:24.688251+0000 mgr.b (mgr.12834102) 26311 : cluster [DBG] pgmap v26882: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:25.385709+0000 osd.54 (osd.54) 51802 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:26.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:07:26.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:07:26.881+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:07:26.881+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:07:26.305297+0000 mon.k (mon.1) 18872 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:26.305523+0000 mon.k (mon.1) 18873 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:26.357288+0000 osd.54 (osd.54) 51803 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:26.484254+0000 mon.j (mon.0) 21591 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:26.484519+0000 mon.j (mon.0) 21592 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:07:26.887295+0000 mon.j (mon.0) 21593 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:26.689224+0000 mgr.b (mgr.12834102) 26312 : cluster [DBG] pgmap v26883: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:27.385568+0000 osd.54 (osd.54) 51804 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:28.376470+0000 osd.54 (osd.54) 51805 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:28.763047+0000 mon.k (mon.1) 18874 : audit [DBG] from='client.? 10.1.222.242:0/931124819' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:28.690231+0000 mgr.b (mgr.12834102) 26313 : cluster [DBG] pgmap v26884: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:07:29.172407+0000 mon.k (mon.1) 18875 : audit [DBG] from='client.? 10.1.222.242:0/4035150952' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:29.336648+0000 osd.54 (osd.54) 51806 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:30.961+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:30.337094+0000 osd.54 (osd.54) 51807 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:30.691316+0000 mgr.b (mgr.12834102) 26314 : cluster [DBG] pgmap v26885: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:31.313908+0000 osd.54 (osd.54) 51808 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:32.354040+0000 osd.54 (osd.54) 51809 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:32.494626+0000 mon.k (mon.1) 18876 : audit [DBG] from='client.? 10.1.207.132:0/3320430822' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:32.692303+0000 mgr.b (mgr.12834102) 26315 : cluster [DBG] pgmap v26886: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:33.380781+0000 osd.54 (osd.54) 51810 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:34.769+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:07:34.769+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/356177200' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:34.386083+0000 osd.54 (osd.54) 51811 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:34.770467+0000 mon.j (mon.0) 21594 : audit [DBG] from='client.? 10.1.182.12:0/356177200' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:07:34.881928+0000 mon.l (mon.2) 15567 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:34.882196+0000 mon.l (mon.2) 15568 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:07:35.965+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:34.693324+0000 mgr.b (mgr.12834102) 26316 : cluster [DBG] pgmap v26887: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:35.399902+0000 osd.54 (osd.54) 51812 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:36.469+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:07:36.469+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:07:36.290230+0000 mon.k (mon.1) 18877 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:36.290464+0000 mon.k (mon.1) 18878 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:36.427938+0000 osd.54 (osd.54) 51813 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:36.470355+0000 mon.j (mon.0) 21595 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:36.470519+0000 mon.j (mon.0) 21596 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:36.694306+0000 mgr.b (mgr.12834102) 26317 : cluster [DBG] pgmap v26888: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:37.461848+0000 osd.54 (osd.54) 51814 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:38.480110+0000 osd.54 (osd.54) 51815 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:38.695305+0000 mgr.b (mgr.12834102) 26318 : cluster [DBG] pgmap v26889: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:39.459563+0000 osd.54 (osd.54) 51816 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:40.965+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:40.416557+0000 osd.54 (osd.54) 51817 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:41.885+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:07:41.885+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:40.696244+0000 mgr.b (mgr.12834102) 26319 : cluster [DBG] pgmap v26890: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:41.461333+0000 osd.54 (osd.54) 51818 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:41.887969+0000 mon.j (mon.0) 21597 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:42.421443+0000 osd.54 (osd.54) 51819 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:42.697187+0000 mgr.b (mgr.12834102) 26320 : cluster [DBG] pgmap v26891: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:43.435872+0000 osd.54 (osd.54) 51820 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:44.428962+0000 osd.54 (osd.54) 51821 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:44.855132+0000 mon.l (mon.2) 15569 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:44.855405+0000 mon.l (mon.2) 15570 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:07:45.965+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:44.698137+0000 mgr.b (mgr.12834102) 26321 : cluster [DBG] pgmap v26892: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:45.444623+0000 osd.54 (osd.54) 51822 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:46.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:07:46.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:07:46.288017+0000 mon.k (mon.1) 18879 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:46.288324+0000 mon.k (mon.1) 18880 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:46.420995+0000 osd.54 (osd.54) 51823 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:46.483808+0000 mon.j (mon.0) 21598 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:46.484101+0000 mon.j (mon.0) 21599 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:46.699081+0000 mgr.b (mgr.12834102) 26322 : cluster [DBG] pgmap v26893: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:47.432720+0000 osd.54 (osd.54) 51824 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:48.447156+0000 osd.54 (osd.54) 51825 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:48.700017+0000 mgr.b (mgr.12834102) 26323 : cluster [DBG] pgmap v26894: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:49.419578+0000 osd.54 (osd.54) 51826 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:50.245+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:07:50.245+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2617472538' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:07:50.969+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
audit 2023-07-18T20:07:50.249083+0000 mon.j (mon.0) 21600 : audit [DBG] from='client.? 10.1.182.12:0/2617472538' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:50.432472+0000 osd.54 (osd.54) 51827 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:50.700977+0000 mgr.b (mgr.12834102) 26324 : cluster [DBG] pgmap v26895: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:51.431203+0000 osd.54 (osd.54) 51828 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:52.395576+0000 osd.54 (osd.54) 51829 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:52.630724+0000 mon.k (mon.1) 18881 : audit [DBG] from='client.? 10.1.207.132:0/2015536267' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:52.701931+0000 mgr.b (mgr.12834102) 26325 : cluster [DBG] pgmap v26896: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:53.431451+0000 osd.54 (osd.54) 51830 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:53.775948+0000 mon.l (mon.2) 15571 : audit [DBG] from='client.? 10.1.222.242:0/3136759414' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:07:54.212036+0000 mon.k (mon.1) 18882 : audit [DBG] from='client.? 10.1.222.242:0/1508538776' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:54.434953+0000 osd.54 (osd.54) 51831 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:54.871878+0000 mon.l (mon.2) 15572 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:54.872147+0000 mon.l (mon.2) 15573 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:07:55.969+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:07:54.702849+0000 mgr.b (mgr.12834102) 26326 : cluster [DBG] pgmap v26897: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:55.404497+0000 osd.54 (osd.54) 51832 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:07:56.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:07:56.485+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:07:56.885+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:07:56.885+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:07:56.298326+0000 mon.k (mon.1) 18883 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:56.298622+0000 mon.k (mon.1) 18884 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:07:56.432973+0000 osd.54 (osd.54) 51833 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:56.485232+0000 mon.j (mon.0) 21601 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:07:56.485519+0000 mon.j (mon.0) 21602 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:07:56.887777+0000 mon.j (mon.0) 21603 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:07:56.703766+0000 mgr.b (mgr.12834102) 26327 : cluster [DBG] pgmap v26898: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:57.403018+0000 osd.54 (osd.54) 51834 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:58.403257+0000 osd.54 (osd.54) 51835 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:07:58.704715+0000 mgr.b (mgr.12834102) 26328 : cluster [DBG] pgmap v26899: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:07:59.354641+0000 osd.54 (osd.54) 51836 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:07:59.745261+0000 mon.k (mon.1) 18885 : audit [DBG] from='client.? 10.1.222.242:0/1401537652' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:08:00.197811+0000 mon.k (mon.1) 18886 : audit [DBG] from='client.? 10.1.222.242:0/2085460337' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:08:00.973+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:00.322874+0000 osd.54 (osd.54) 51837 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:00.705686+0000 mgr.b (mgr.12834102) 26329 : cluster [DBG] pgmap v26900: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:01.333175+0000 osd.54 (osd.54) 51838 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:02.306061+0000 osd.54 (osd.54) 51839 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:02.706648+0000 mgr.b (mgr.12834102) 26330 : cluster [DBG] pgmap v26901: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:03.263852+0000 osd.54 (osd.54) 51840 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:04.226440+0000 osd.54 (osd.54) 51841 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:04.707570+0000 mgr.b (mgr.12834102) 26331 : cluster [DBG] pgmap v26902: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:04.866261+0000 mon.l (mon.2) 15574 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:04.866537+0000 mon.l (mon.2) 15575 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:05.712+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:08:05.712+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2453018094' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:08:05.972+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:05.207617+0000 osd.54 (osd.54) 51842 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:05.715843+0000 mon.j (mon.0) 21604 : audit [DBG] from='client.? 10.1.182.12:0/2453018094' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:08:06.293370+0000 mon.k (mon.1) 18887 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:06.293680+0000 mon.k (mon.1) 18888 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:06.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:08:06.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:06.166345+0000 osd.54 (osd.54) 51843 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:06.476384+0000 mon.j (mon.0) 21605 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:06.476729+0000 mon.j (mon.0) 21606 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:06.708515+0000 mgr.b (mgr.12834102) 26332 : cluster [DBG] pgmap v26903: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:07.131725+0000 osd.54 (osd.54) 51844 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:08.163802+0000 osd.54 (osd.54) 51845 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:08.709453+0000 mgr.b (mgr.12834102) 26333 : cluster [DBG] pgmap v26904: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:09.180493+0000 osd.54 (osd.54) 51846 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:08:10.972+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:10.149891+0000 osd.54 (osd.54) 51847 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:10.710420+0000 mgr.b (mgr.12834102) 26334 : cluster [DBG] pgmap v26905: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:08:11.888+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:08:11.888+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:11.138521+0000 osd.54 (osd.54) 51848 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:11.891569+0000 mon.j (mon.0) 21607 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:08:12.944+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:08:12.944+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/1901070479' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:12.136036+0000 osd.54 (osd.54) 51849 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:12.711384+0000 mgr.b (mgr.12834102) 26335 : cluster [DBG] pgmap v26906: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:12.946514+0000 mon.j (mon.0) 21608 : audit [DBG] from='client.? 10.1.207.132:0/1901070479' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:13.147199+0000 osd.54 (osd.54) 51850 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:14.112638+0000 osd.54 (osd.54) 51851 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:14.712451+0000 mgr.b (mgr.12834102) 26336 : cluster [DBG] pgmap v26907: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:14.845870+0000 mon.l (mon.2) 15576 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:14.846069+0000 mon.l (mon.2) 15577 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:15.976+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:15.084421+0000 osd.54 (osd.54) 51852 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:16.290726+0000 mon.k (mon.1) 18889 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:16.290879+0000 mon.k (mon.1) 18890 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:16.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:08:16.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:16.105989+0000 osd.54 (osd.54) 51853 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:16.475692+0000 mon.j (mon.0) 21609 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:16.475947+0000 mon.j (mon.0) 21610 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:16.713391+0000 mgr.b (mgr.12834102) 26337 : cluster [DBG] pgmap v26908: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:17.120559+0000 osd.54 (osd.54) 51854 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:18.157057+0000 osd.54 (osd.54) 51855 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:18.714347+0000 mgr.b (mgr.12834102) 26338 : cluster [DBG] pgmap v26909: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:19.115010+0000 osd.54 (osd.54) 51856 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:08:20.976+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:08:21.188+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:08:21.188+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2008156591' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:20.156709+0000 osd.54 (osd.54) 51857 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:20.715343+0000 mgr.b (mgr.12834102) 26339 : cluster [DBG] pgmap v26910: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:21.192152+0000 mon.j (mon.0) 21611 : audit [DBG] from='client.? 10.1.182.12:0/2008156591' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:21.138646+0000 osd.54 (osd.54) 51858 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:22.187432+0000 osd.54 (osd.54) 51859 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:22.716972+0000 mgr.b (mgr.12834102) 26340 : cluster [DBG] pgmap v26911: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:23.209664+0000 osd.54 (osd.54) 51860 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:24.196494+0000 osd.54 (osd.54) 51861 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:24.718008+0000 mgr.b (mgr.12834102) 26341 : cluster [DBG] pgmap v26912: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:24.843544+0000 mon.k (mon.1) 18891 : audit [DBG] from='client.? 10.1.222.242:0/2861509083' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:08:24.852878+0000 mon.l (mon.2) 15578 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:24.853156+0000 mon.l (mon.2) 15579 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:08:25.258663+0000 mon.l (mon.2) 15580 : audit [DBG] from='client.? 10.1.222.242:0/1975017733' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:08:25.980+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:25.186481+0000 osd.54 (osd.54) 51862 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:26.296907+0000 mon.k (mon.1) 18892 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:26.297175+0000 mon.k (mon.1) 18893 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:26.476+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:08:26.476+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:26.888+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:08:26.888+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:26.159321+0000 osd.54 (osd.54) 51863 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:26.477523+0000 mon.j (mon.0) 21612 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:26.477692+0000 mon.j (mon.0) 21613 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:26.718990+0000 mgr.b (mgr.12834102) 26342 : cluster [DBG] pgmap v26913: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:26.891272+0000 mon.j (mon.0) 21614 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:27.177215+0000 osd.54 (osd.54) 51864 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:28.177722+0000 osd.54 (osd.54) 51865 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:28.719957+0000 mgr.b (mgr.12834102) 26343 : cluster [DBG] pgmap v26914: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:29.183323+0000 osd.54 (osd.54) 51866 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:08:30.980+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:30.223351+0000 osd.54 (osd.54) 51867 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:30.720945+0000 mgr.b (mgr.12834102) 26344 : cluster [DBG] pgmap v26915: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:30.769536+0000 mon.k (mon.1) 18894 : audit [DBG] from='client.? 10.1.222.242:0/3385588255' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:08:31.214152+0000 mon.k (mon.1) 18895 : audit [DBG] from='client.? 10.1.222.242:0/68867678' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:31.257841+0000 osd.54 (osd.54) 51868 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:32.243280+0000 osd.54 (osd.54) 51869 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:32.721939+0000 mgr.b (mgr.12834102) 26345 : cluster [DBG] pgmap v26916: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:33.121026+0000 mon.k (mon.1) 18896 : audit [DBG] from='client.? 10.1.207.132:0/3389986148' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:33.236047+0000 osd.54 (osd.54) 51870 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:34.234335+0000 osd.54 (osd.54) 51871 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:34.722992+0000 mgr.b (mgr.12834102) 26346 : cluster [DBG] pgmap v26917: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:34.875838+0000 mon.l (mon.2) 15581 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:34.876110+0000 mon.l (mon.2) 15582 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:35.980+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:08:36.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:08:36.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:35.221127+0000 osd.54 (osd.54) 51872 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:36.289224+0000 mon.k (mon.1) 18897 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:36.289494+0000 mon.k (mon.1) 18898 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:08:36.496022+0000 mon.j (mon.0) 21615 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:36.496287+0000 mon.j (mon.0) 21616 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:36.668+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:08:36.668+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3541225614' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:36.257560+0000 osd.54 (osd.54) 51873 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:36.671169+0000 mon.j (mon.0) 21617 : audit [DBG] from='client.? 10.1.182.12:0/3541225614' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:36.724032+0000 mgr.b (mgr.12834102) 26347 : cluster [DBG] pgmap v26918: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:37.263183+0000 osd.54 (osd.54) 51874 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:38.276015+0000 osd.54 (osd.54) 51875 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:38.725029+0000 mgr.b (mgr.12834102) 26348 : cluster [DBG] pgmap v26919: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:39.323894+0000 osd.54 (osd.54) 51876 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:08:40.984+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:40.285415+0000 osd.54 (osd.54) 51877 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:40.726030+0000 mgr.b (mgr.12834102) 26349 : cluster [DBG] pgmap v26920: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:08:41.888+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:08:41.888+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:41.256357+0000 osd.54 (osd.54) 51878 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:41.892057+0000 mon.j (mon.0) 21618 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:42.268055+0000 osd.54 (osd.54) 51879 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:42.727001+0000 mgr.b (mgr.12834102) 26350 : cluster [DBG] pgmap v26921: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:43.222561+0000 osd.54 (osd.54) 51880 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:44.178996+0000 osd.54 (osd.54) 51881 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:44.727954+0000 mgr.b (mgr.12834102) 26351 : cluster [DBG] pgmap v26922: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:44.851701+0000 mon.l (mon.2) 15583 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:44.851982+0000 mon.l (mon.2) 15584 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:45.984+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:08:46.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:08:46.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:45.167633+0000 osd.54 (osd.54) 51882 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:46.314757+0000 mon.k (mon.1) 18899 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:46.315039+0000 mon.k (mon.1) 18900 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:08:46.486665+0000 mon.j (mon.0) 21619 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:46.486830+0000 mon.j (mon.0) 21620 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:46.209341+0000 osd.54 (osd.54) 51883 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:46.728970+0000 mgr.b (mgr.12834102) 26352 : cluster [DBG] pgmap v26923: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:47.259248+0000 osd.54 (osd.54) 51884 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:48.248098+0000 osd.54 (osd.54) 51885 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:48.729986+0000 mgr.b (mgr.12834102) 26353 : cluster [DBG] pgmap v26924: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:49.232786+0000 osd.54 (osd.54) 51886 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:08:50.988+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:08:50.242388+0000 osd.54 (osd.54) 51887 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:50.730966+0000 mgr.b (mgr.12834102) 26354 : cluster [DBG] pgmap v26925: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:08:52.148+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:08:52.148+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1578037599' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:51.269317+0000 osd.54 (osd.54) 51888 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:52.150332+0000 mon.j (mon.0) 21621 : audit [DBG] from='client.? 10.1.182.12:0/1578037599' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:52.305381+0000 osd.54 (osd.54) 51889 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:52.731927+0000 mgr.b (mgr.12834102) 26355 : cluster [DBG] pgmap v26926: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:53.290693+0000 mon.l (mon.2) 15585 : audit [DBG] from='client.? 10.1.207.132:0/1329945970' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:53.322532+0000 osd.54 (osd.54) 51890 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:54.328888+0000 osd.54 (osd.54) 51891 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:54.732952+0000 mgr.b (mgr.12834102) 26356 : cluster [DBG] pgmap v26927: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:54.856006+0000 mon.l (mon.2) 15586 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:54.856304+0000 mon.l (mon.2) 15587 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:55.988+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:08:56.496+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:08:56.496+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:08:55.368431+0000 osd.54 (osd.54) 51892 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:08:55.891705+0000 mon.k (mon.1) 18901 : audit [DBG] from='client.? 10.1.222.242:0/397617980' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:08:56.290147+0000 mon.k (mon.1) 18902 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:56.290314+0000 mon.k (mon.1) 18903 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:08:56.310946+0000 mon.k (mon.1) 18904 : audit [DBG] from='client.? 10.1.222.242:0/3596814598' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:08:56.500952+0000 mon.j (mon.0) 21622 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:08:56.501222+0000 mon.j (mon.0) 21623 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:08:56.888+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:08:56.888+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:56.391168+0000 osd.54 (osd.54) 51893 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:56.733977+0000 mgr.b (mgr.12834102) 26357 : cluster [DBG] pgmap v26928: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:08:56.891580+0000 mon.j (mon.0) 21624 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:08:57.433631+0000 osd.54 (osd.54) 51894 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:58.393861+0000 osd.54 (osd.54) 51895 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:08:58.734938+0000 mgr.b (mgr.12834102) 26358 : cluster [DBG] pgmap v26929: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:08:59.412658+0000 osd.54 (osd.54) 51896 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:00.988+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:00.427923+0000 osd.54 (osd.54) 51897 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:00.735963+0000 mgr.b (mgr.12834102) 26359 : cluster [DBG] pgmap v26930: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:01.424920+0000 osd.54 (osd.54) 51898 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:02.105262+0000 mon.k (mon.1) 18905 : audit [DBG] from='client.? 10.1.222.242:0/3423599935' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:09:02.517108+0000 mon.l (mon.2) 15588 : audit [DBG] from='client.? 10.1.222.242:0/2367342530' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:02.406723+0000 osd.54 (osd.54) 51899 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:02.736957+0000 mgr.b (mgr.12834102) 26360 : cluster [DBG] pgmap v26931: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:03.396183+0000 osd.54 (osd.54) 51900 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:04.349936+0000 osd.54 (osd.54) 51901 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:04.737910+0000 mgr.b (mgr.12834102) 26361 : cluster [DBG] pgmap v26932: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:09:04.846955+0000 mon.l (mon.2) 15589 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:04.847239+0000 mon.l (mon.2) 15590 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:05.992+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:09:06.504+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:09:06.504+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:05.369337+0000 osd.54 (osd.54) 51902 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:06.285422+0000 mon.k (mon.1) 18906 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:06.285708+0000 mon.k (mon.1) 18907 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:09:06.505920+0000 mon.j (mon.0) 21625 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:06.506187+0000 mon.j (mon.0) 21626 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:07.632+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:09:07.632+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1626013070' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:06.323366+0000 osd.54 (osd.54) 51903 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:06.738901+0000 mgr.b (mgr.12834102) 26362 : cluster [DBG] pgmap v26933: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:09:07.634731+0000 mon.j (mon.0) 21627 : audit [DBG] from='client.? 10.1.182.12:0/1626013070' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:09:07.744+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21724. Immutable memtables: 0.
debug 2023-07-18T20:09:07.744+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:09:07.748361) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:09:07.744+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1661] Flushing memtable with next log file: 21724
debug 2023-07-18T20:09:07.744+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710947748423, "job": 1661, "event": "flush_started", "num_memtables": 1, "num_entries": 2675, "num_deletes": 609, "total_data_size": 3802560, "memory_usage": 3849888, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:09:07.744+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1661] Level-0 flush table #21725: started
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710947761834, "cf_name": "default", "job": 1661, "event": "table_file_creation", "file_number": 21725, "file_size": 2977787, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 2967567, "index_size": 5579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 35860, "raw_average_key_size": 24, "raw_value_size": 2942238, "raw_average_value_size": 2019, "num_data_blocks": 217, "num_entries": 1457, "num_deletions": 609, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710800, "oldest_key_time": 1689710800, "file_creation_time": 1689710947, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1661] Level-0 flush table #21725: 2977787 bytes OK
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:09:07.762065) [db/memtable_list.cc:449] [default] Level-0 commit table #21725 started
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:09:07.762343) [db/memtable_list.cc:628] [default] Level-0 commit table #21725: memtable #1 done
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:09:07.762355) EVENT_LOG_v1 {"time_micros": 1689710947762351, "job": 1661, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:09:07.762368) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1661] Try to delete WAL files size 3789610, prev total WAL file size 3789610, number of live WAL files 2.
debug 2023-07-18T20:09:07.760+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021719.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:09:07.760+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:09:07.760+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:09:07.763201) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323535343232' seq:72057594037927935, type:20 .. '7061786F730036323535363734' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:09:07.760+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1662] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:09:07.760+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1662 Base level 0, inputs: [21725(2907KB)], [21721(64MB) 21722(64MB) 21723(4311KB)]
debug 2023-07-18T20:09:07.760+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710947763259, "job": 1662, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21725], "files_L6": [21721, 21722, 21723], "score": -1, "input_data_size": 141978084}
debug 2023-07-18T20:09:07.972+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1662] Generated table #21726: 21931 keys, 67313054 bytes
debug 2023-07-18T20:09:07.972+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710947977232, "cf_name": "default", "job": 1662, "event": "table_file_creation", "file_number": 21726, "file_size": 67313054, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67198111, "index_size": 59103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54853, "raw_key_size": 594058, "raw_average_key_size": 27, "raw_value_size": 66841841, "raw_average_value_size": 3047, "num_data_blocks": 2184, "num_entries": 21931, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710947, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:09:08.168+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1662] Generated table #21727: 13103 keys, 67292555 bytes
debug 2023-07-18T20:09:08.168+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710948169857, "cf_name": "default", "job": 1662, "event": "table_file_creation", "file_number": 21727, "file_size": 67292555, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67164124, "index_size": 94608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32837, "raw_key_size": 290244, "raw_average_key_size": 22, "raw_value_size": 66885041, "raw_average_value_size": 5104, "num_data_blocks": 3510, "num_entries": 13103, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710947, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1662] Generated table #21728: 555 keys, 5367127 bytes
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710948190815, "cf_name": "default", "job": 1662, "event": "table_file_creation", "file_number": 21728, "file_size": 5367127, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 5358575, "index_size": 6093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11855, "raw_average_key_size": 21, "raw_value_size": 5345032, "raw_average_value_size": 9630, "num_data_blocks": 235, "num_entries": 555, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689710948, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1662] Compacted 1@0 + 3@6 files to L6 => 139972736 bytes
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:09:08.191737) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 332.1 rd, 327.4 wr, level 6, files in(1, 3) out(3) MB in(2.8, 132.6) out(133.5), read-write-amplify(94.7) write-amplify(47.0) OK, records in: 36821, records dropped: 1232 output_compression: NoCompression
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:09:08.191754) EVENT_LOG_v1 {"time_micros": 1689710948191747, "job": 1662, "event": "compaction_finished", "compaction_time_micros": 427568, "compaction_time_cpu_micros": 202970, "output_level": 6, "num_output_files": 3, "total_output_size": 139972736, "num_input_records": 36821, "num_output_records": 35589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021725.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710948192286, "job": 1662, "event": "table_file_deletion", "file_number": 21725}
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021723.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:09:08.188+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710948192873, "job": 1662, "event": "table_file_deletion", "file_number": 21723}
debug 2023-07-18T20:09:08.200+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021722.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:09:08.200+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710948201952, "job": 1662, "event": "table_file_deletion", "file_number": 21722}
debug 2023-07-18T20:09:08.208+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021721.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:09:08.208+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689710948211936, "job": 1662, "event": "table_file_deletion", "file_number": 21721}
debug 2023-07-18T20:09:08.208+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:09:08.208+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:09:08.208+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:09:08.208+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:09:08.208+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
cluster 2023-07-18T20:09:07.318914+0000 osd.54 (osd.54) 51904 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:08.324524+0000 osd.54 (osd.54) 51905 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:08.739950+0000 mgr.b (mgr.12834102) 26363 : cluster [DBG] pgmap v26934: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:09.328272+0000 osd.54 (osd.54) 51906 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:10.992+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:10.312552+0000 osd.54 (osd.54) 51907 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:10.741016+0000 mgr.b (mgr.12834102) 26364 : cluster [DBG] pgmap v26935: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:09:11.888+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:09:11.888+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:11.330560+0000 osd.54 (osd.54) 51908 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:11.891787+0000 mon.j (mon.0) 21628 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:12.281419+0000 osd.54 (osd.54) 51909 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:12.742008+0000 mgr.b (mgr.12834102) 26365 : cluster [DBG] pgmap v26936: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:09:13.727166+0000 mon.k (mon.1) 18908 : audit [DBG] from='client.? 10.1.207.132:0/1694732930' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:13.236946+0000 osd.54 (osd.54) 51910 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:14.206263+0000 osd.54 (osd.54) 51911 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:14.742960+0000 mgr.b (mgr.12834102) 26366 : cluster [DBG] pgmap v26937: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:09:14.838317+0000 mon.l (mon.2) 15591 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:14.838590+0000 mon.l (mon.2) 15592 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:15.996+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:09:16.476+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:09:16.476+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:15.247670+0000 osd.54 (osd.54) 51912 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:16.280408+0000 mon.k (mon.1) 18909 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:16.280634+0000 mon.k (mon.1) 18910 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:09:16.480382+0000 mon.j (mon.0) 21629 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:16.480645+0000 mon.j (mon.0) 21630 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:16.230058+0000 osd.54 (osd.54) 51913 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:16.743997+0000 mgr.b (mgr.12834102) 26367 : cluster [DBG] pgmap v26938: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:17.248674+0000 osd.54 (osd.54) 51914 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:18.278795+0000 osd.54 (osd.54) 51915 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:18.745001+0000 mgr.b (mgr.12834102) 26368 : cluster [DBG] pgmap v26939: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:19.301974+0000 osd.54 (osd.54) 51916 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:20.996+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:20.259447+0000 osd.54 (osd.54) 51917 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:20.745982+0000 mgr.b (mgr.12834102) 26369 : cluster [DBG] pgmap v26940: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:21.231586+0000 osd.54 (osd.54) 51918 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:23.111+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:09:23.111+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1780414654' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:22.276522+0000 osd.54 (osd.54) 51919 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:22.746989+0000 mgr.b (mgr.12834102) 26370 : cluster [DBG] pgmap v26941: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:09:23.115617+0000 mon.j (mon.0) 21631 : audit [DBG] from='client.? 10.1.182.12:0/1780414654' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:23.268096+0000 osd.54 (osd.54) 51920 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:24.298553+0000 osd.54 (osd.54) 51921 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:24.747983+0000 mgr.b (mgr.12834102) 26371 : cluster [DBG] pgmap v26942: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:09:24.873103+0000 mon.l (mon.2) 15593 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:24.873365+0000 mon.l (mon.2) 15594 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:25.326482+0000 osd.54 (osd.54) 51922 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:25.999+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:09:26.495+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:09:26.495+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:26.887+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:09:26.887+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:09:26.291470+0000 mon.k (mon.1) 18911 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:26.291746+0000 mon.k (mon.1) 18912 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:26.339987+0000 osd.54 (osd.54) 51923 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:26.497791+0000 mon.j (mon.0) 21632 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:26.498099+0000 mon.j (mon.0) 21633 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:09:26.841550+0000 mon.k (mon.1) 18913 : audit [DBG] from='client.? 10.1.222.242:0/1229922682' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:09:26.892005+0000 mon.j (mon.0) 21634 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:26.748975+0000 mgr.b (mgr.12834102) 26372 : cluster [DBG] pgmap v26943: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:27.356151+0000 osd.54 (osd.54) 51924 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:27.470930+0000 mon.l (mon.2) 15595 : audit [DBG] from='client.? 10.1.222.242:0/2716318652' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:28.395908+0000 osd.54 (osd.54) 51925 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:28.749921+0000 mgr.b (mgr.12834102) 26373 : cluster [DBG] pgmap v26944: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:29.445211+0000 osd.54 (osd.54) 51926 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:30.999+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:30.411406+0000 osd.54 (osd.54) 51927 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:30.750851+0000 mgr.b (mgr.12834102) 26374 : cluster [DBG] pgmap v26945: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:31.373002+0000 osd.54 (osd.54) 51928 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:32.384059+0000 osd.54 (osd.54) 51929 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:32.751804+0000 mgr.b (mgr.12834102) 26375 : cluster [DBG] pgmap v26946: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:09:33.078992+0000 mon.l (mon.2) 15596 : audit [DBG] from='client.? 10.1.222.242:0/2895497646' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:33.377509+0000 osd.54 (osd.54) 51930 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:33.471562+0000 mon.k (mon.1) 18914 : audit [DBG] from='client.? 10.1.222.242:0/2576717441' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:09:33.937178+0000 mon.l (mon.2) 15597 : audit [DBG] from='client.? 10.1.207.132:0/25510350' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:34.328100+0000 osd.54 (osd.54) 51931 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:34.866231+0000 mon.l (mon.2) 15598 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:34.866497+0000 mon.l (mon.2) 15599 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:35.999+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:34.752791+0000 mgr.b (mgr.12834102) 26376 : cluster [DBG] pgmap v26947: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:35.289039+0000 osd.54 (osd.54) 51932 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:36.475+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:09:36.475+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:36.280226+0000 osd.54 (osd.54) 51933 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:36.301536+0000 mon.k (mon.1) 18915 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:36.301821+0000 mon.k (mon.1) 18916 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:09:36.479253+0000 mon.j (mon.0) 21635 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:36.479509+0000 mon.j (mon.0) 21636 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:36.753748+0000 mgr.b (mgr.12834102) 26377 : cluster [DBG] pgmap v26948: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:37.327282+0000 osd.54 (osd.54) 51934 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:38.579+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:09:38.579+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1709375586' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:38.373814+0000 osd.54 (osd.54) 51935 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:38.585199+0000 mon.j (mon.0) 21637 : audit [DBG] from='client.? 10.1.182.12:0/1709375586' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:38.754740+0000 mgr.b (mgr.12834102) 26378 : cluster [DBG] pgmap v26949: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:39.370698+0000 osd.54 (osd.54) 51936 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:41.003+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:40.346135+0000 osd.54 (osd.54) 51937 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:41.887+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:09:41.887+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:40.755708+0000 mgr.b (mgr.12834102) 26379 : cluster [DBG] pgmap v26950: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:41.396609+0000 osd.54 (osd.54) 51938 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:41.891238+0000 mon.j (mon.0) 21638 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:42.407025+0000 osd.54 (osd.54) 51939 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:42.756674+0000 mgr.b (mgr.12834102) 26380 : cluster [DBG] pgmap v26951: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:43.402882+0000 osd.54 (osd.54) 51940 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:44.433477+0000 osd.54 (osd.54) 51941 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:44.898563+0000 mon.l (mon.2) 15600 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:44.898859+0000 mon.l (mon.2) 15601 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:46.003+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:44.757591+0000 mgr.b (mgr.12834102) 26381 : cluster [DBG] pgmap v26952: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:45.428585+0000 osd.54 (osd.54) 51942 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:46.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:09:46.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:09:46.292654+0000 mon.k (mon.1) 18917 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:46.292929+0000 mon.k (mon.1) 18918 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:46.475080+0000 osd.54 (osd.54) 51943 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:46.487139+0000 mon.j (mon.0) 21639 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:46.487424+0000 mon.j (mon.0) 21640 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:46.758571+0000 mgr.b (mgr.12834102) 26382 : cluster [DBG] pgmap v26953: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:47.468293+0000 osd.54 (osd.54) 51944 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:48.433629+0000 osd.54 (osd.54) 51945 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:48.759497+0000 mgr.b (mgr.12834102) 26383 : cluster [DBG] pgmap v26954: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:49.460500+0000 osd.54 (osd.54) 51946 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:51.007+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:50.412124+0000 osd.54 (osd.54) 51947 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:50.760467+0000 mgr.b (mgr.12834102) 26384 : cluster [DBG] pgmap v26955: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:51.431436+0000 osd.54 (osd.54) 51948 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:09:52.386601+0000 osd.54 (osd.54) 51949 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:54.063+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:09:54.063+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3250725522' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:52.761463+0000 mgr.b (mgr.12834102) 26385 : cluster [DBG] pgmap v26956: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:53.383231+0000 osd.54 (osd.54) 51950 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:54.065852+0000 mon.j (mon.0) 21641 : audit [DBG] from='client.? 10.1.182.12:0/3250725522' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:09:54.254046+0000 mon.l (mon.2) 15602 : audit [DBG] from='client.? 10.1.207.132:0/176730586' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:54.367107+0000 osd.54 (osd.54) 51951 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:54.850985+0000 mon.l (mon.2) 15603 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:54.851273+0000 mon.l (mon.2) 15604 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:56.007+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:09:54.762403+0000 mgr.b (mgr.12834102) 26386 : cluster [DBG] pgmap v26957: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:55.330766+0000 osd.54 (osd.54) 51952 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:09:56.507+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:09:56.507+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:09:56.887+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:09:56.887+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:09:56.290754+0000 mon.k (mon.1) 18919 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:56.291020+0000 mon.k (mon.1) 18920 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:09:56.367801+0000 osd.54 (osd.54) 51953 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:56.509448+0000 mon.j (mon.0) 21642 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:09:56.509732+0000 mon.j (mon.0) 21643 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:09:56.891101+0000 mon.j (mon.0) 21644 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:56.763358+0000 mgr.b (mgr.12834102) 26387 : cluster [DBG] pgmap v26958: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:57.371459+0000 osd.54 (osd.54) 51954 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:58.079757+0000 mon.k (mon.1) 18921 : audit [DBG] from='client.? 10.1.222.242:0/3618216723' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:09:58.372220+0000 osd.54 (osd.54) 51955 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:09:58.469392+0000 mon.k (mon.1) 18922 : audit [DBG] from='client.? 10.1.222.242:0/2335362120' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:09:59.995+0000 7f7fb651d700 0 log_channel(cluster) log [WRN] : overall HEALTH_WARN 38 osds down; 12 hosts (54 osds) down; Reduced data availability: 196 pgs inactive, 9 pgs down, 1 pg incomplete; 2 pgs not deep-scrubbed in time; 69 daemons have recently crashed; 3 slow ops, oldest one blocked for 390 sec, osd.54 has slow ops
cluster 2023-07-18T20:09:58.764323+0000 mgr.b (mgr.12834102) 26388 : cluster [DBG] pgmap v26959: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:09:59.335216+0000 osd.54 (osd.54) 51956 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:00.000147+0000 mon.j (mon.0) 21645 : cluster [WRN] overall HEALTH_WARN 38 osds down; 12 hosts (54 osds) down; Reduced data availability: 196 pgs inactive, 9 pgs down, 1 pg incomplete; 2 pgs not deep-scrubbed in time; 69 daemons have recently crashed; 3 slow ops, oldest one blocked for 390 sec, osd.54 has slow ops
debug 2023-07-18T20:10:01.011+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:00.349200+0000 osd.54 (osd.54) 51957 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:00.765321+0000 mgr.b (mgr.12834102) 26389 : cluster [DBG] pgmap v26960: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:01.360780+0000 osd.54 (osd.54) 51958 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:02.377993+0000 osd.54 (osd.54) 51959 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:02.766309+0000 mgr.b (mgr.12834102) 26390 : cluster [DBG] pgmap v26961: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:03.369174+0000 osd.54 (osd.54) 51960 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:03.985954+0000 mon.k (mon.1) 18923 : audit [DBG] from='client.? 10.1.222.242:0/2860876439' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:04.324348+0000 osd.54 (osd.54) 51961 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:04.342707+0000 mon.l (mon.2) 15605 : audit [DBG] from='client.? 10.1.222.242:0/2938684902' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:10:04.849689+0000 mon.l (mon.2) 15606 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:04.849988+0000 mon.l (mon.2) 15607 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:06.011+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:04.767267+0000 mgr.b (mgr.12834102) 26391 : cluster [DBG] pgmap v26962: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:05.278548+0000 osd.54 (osd.54) 51962 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:06.286229+0000 mon.k (mon.1) 18924 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:06.286506+0000 mon.k (mon.1) 18925 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:06.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:10:06.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:06.295491+0000 osd.54 (osd.54) 51963 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:06.483095+0000 mon.j (mon.0) 21646 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:06.483270+0000 mon.j (mon.0) 21647 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:06.768268+0000 mgr.b (mgr.12834102) 26392 : cluster [DBG] pgmap v26963: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:07.271324+0000 osd.54 (osd.54) 51964 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:08.309139+0000 osd.54 (osd.54) 51965 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:08.769277+0000 mgr.b (mgr.12834102) 26393 : cluster [DBG] pgmap v26964: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:10:09.539+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:10:09.539+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/365701525' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:09.271580+0000 osd.54 (osd.54) 51966 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:09.544763+0000 mon.j (mon.0) 21648 : audit [DBG] from='client.? 10.1.182.12:0/365701525' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:10:11.015+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:10.295273+0000 osd.54 (osd.54) 51967 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:10.770284+0000 mgr.b (mgr.12834102) 26394 : cluster [DBG] pgmap v26965: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:10:11.887+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:10:11.887+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:11.335731+0000 osd.54 (osd.54) 51968 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:11.891478+0000 mon.j (mon.0) 21649 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:12.380388+0000 osd.54 (osd.54) 51969 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:12.771288+0000 mgr.b (mgr.12834102) 26395 : cluster [DBG] pgmap v26966: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:13.375745+0000 osd.54 (osd.54) 51970 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:14.415175+0000 osd.54 (osd.54) 51971 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:14.500596+0000 mon.l (mon.2) 15608 : audit [DBG] from='client.? 10.1.207.132:0/1688191218' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:14.772275+0000 mgr.b (mgr.12834102) 26396 : cluster [DBG] pgmap v26967: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:14.854174+0000 mon.l (mon.2) 15609 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:14.854497+0000 mon.l (mon.2) 15610 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:16.015+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:15.384376+0000 osd.54 (osd.54) 51972 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:16.293929+0000 mon.k (mon.1) 18926 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:16.294306+0000 mon.k (mon.1) 18927 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:16.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:10:16.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:16.369127+0000 osd.54 (osd.54) 51973 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:16.485977+0000 mon.j (mon.0) 21650 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:16.486113+0000 mon.j (mon.0) 21651 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:16.773283+0000 mgr.b (mgr.12834102) 26397 : cluster [DBG] pgmap v26968: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:17.385685+0000 osd.54 (osd.54) 51974 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:18.346226+0000 osd.54 (osd.54) 51975 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:18.774256+0000 mgr.b (mgr.12834102) 26398 : cluster [DBG] pgmap v26969: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:19.378631+0000 osd.54 (osd.54) 51976 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:10:20.567+0000 7f7fb0d12700 4 rocksdb: [db/db_impl/db_impl.cc:901] ------- DUMPING STATS -------
debug 2023-07-18T20:10:20.567+0000 7f7fb0d12700 4 rocksdb: [db/db_impl/db_impl.cc:903]
** DB Stats **
Uptime(secs): 52200.1 total, 600.0 interval
Cumulative writes: 189K writes, 1129K keys, 189K commit groups, 1.0 writes per commit group, ingest: 1.75 GB, 0.03 MB/s
Cumulative WAL: 189K writes, 189K syncs, 1.00 writes per sync, written: 1.75 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2160 writes, 11K keys, 2160 commit groups, 1.0 writes per commit group, ingest: 14.68 MB, 0.02 MB/s
Interval WAL: 2160 writes, 2160 syncs, 1.00 writes per sync, written: 0.01 MB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.5 1.5 0.0 1.0 0.0 195.2 7.65 5.37 674 0.011 0 0
L5 0/0 0.00 KB 0.0 1.1 0.4 0.7 1.1 0.4 0.5 2.6 311.2 309.7 3.69 1.74 9 0.410 149K 13K
L6 3/0 133.49 MB 0.0 182.5 1.4 181.1 180.9 -0.2 0.0 126.9 325.4 322.6 574.37 255.03 662 0.868 26M 684K
Sum 3/0 133.49 MB 0.0 183.6 1.9 181.8 183.5 1.7 0.5 125.8 321.1 320.8 585.71 262.13 1345 0.435 26M 697K
Int 0/0 0.00 KB 0.0 0.9 0.0 0.9 0.9 0.0 0.0 85.2 297.3 297.4 3.18 1.60 14 0.227 255K 6482
** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Low 0/0 0.00 KB 0.0 183.6 1.9 181.8 182.0 0.3 0.0 0.0 325.3 322.5 578.06 256.76 671 0.861 26M 697K
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.5 1.5 0.0 0.0 0.0 195.1 7.64 5.37 673 0.011 0 0
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 287.9 0.01 0.00 1 0.009 0 0
Uptime(secs): 52200.1 total, 600.0 interval
Flush(GB): cumulative 1.458, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 183.50 GB write, 3.60 MB/s write, 183.65 GB read, 3.60 MB/s read, 585.7 seconds
Interval compaction: 0.92 GB write, 1.58 MB/s write, 0.92 GB read, 1.57 MB/s read, 3.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [default] **
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.5 1.5 0.0 1.0 0.0 195.2 7.65 5.37 674 0.011 0 0
L5 0/0 0.00 KB 0.0 1.1 0.4 0.7 1.1 0.4 0.5 2.6 311.2 309.7 3.69 1.74 9 0.410 149K 13K
L6 3/0 133.49 MB 0.0 182.5 1.4 181.1 180.9 -0.2 0.0 126.9 325.4 322.6 574.37 255.03 662 0.868 26M 684K
Sum 3/0 133.49 MB 0.0 183.6 1.9 181.8 183.5 1.7 0.5 125.8 321.1 320.8 585.71 262.13 1345 0.435 26M 697K
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Low 0/0 0.00 KB 0.0 183.6 1.9 181.8 182.0 0.3 0.0 0.0 325.3 322.5 578.06 256.76 671 0.861 26M 697K
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.5 1.5 0.0 0.0 0.0 195.1 7.64 5.37 673 0.011 0 0
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 287.9 0.01 0.00 1 0.009 0 0
Uptime(secs): 52200.1 total, 0.0 interval
Flush(GB): cumulative 1.458, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 183.50 GB write, 3.60 MB/s write, 183.65 GB read, 3.60 MB/s read, 585.7 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [default] **
debug 2023-07-18T20:10:21.015+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:20.376681+0000 osd.54 (osd.54) 51977 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:20.775213+0000 mgr.b (mgr.12834102) 26399 : cluster [DBG] pgmap v26970: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:21.390100+0000 osd.54 (osd.54) 51978 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:22.440082+0000 osd.54 (osd.54) 51979 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:22.776166+0000 mgr.b (mgr.12834102) 26400 : cluster [DBG] pgmap v26971: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:23.490458+0000 osd.54 (osd.54) 51980 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:10:25.019+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:10:25.019+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2081773814' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:24.512180+0000 osd.54 (osd.54) 51981 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:24.777187+0000 mgr.b (mgr.12834102) 26401 : cluster [DBG] pgmap v26972: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:24.869581+0000 mon.l (mon.2) 15611 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:24.869858+0000 mon.l (mon.2) 15612 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:10:25.022701+0000 mon.j (mon.0) 21652 : audit [DBG] from='client.? 10.1.182.12:0/2081773814' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:10:26.019+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:10:26.023+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21729. Immutable memtables: 0.
debug 2023-07-18T20:10:26.023+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.028517) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:10:26.023+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1663] Flushing memtable with next log file: 21729
debug 2023-07-18T20:10:26.023+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026028569, "job": 1663, "event": "flush_started", "num_memtables": 1, "num_entries": 1516, "num_deletes": 440, "total_data_size": 2003687, "memory_usage": 2030296, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:10:26.023+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1663] Level-0 flush table #21730: started
debug 2023-07-18T20:10:26.031+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026037049, "cf_name": "default", "job": 1663, "event": "table_file_creation", "file_number": 21730, "file_size": 1574644, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 1568509, "index_size": 2907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 21126, "raw_average_key_size": 23, "raw_value_size": 1553598, "raw_average_value_size": 1751, "num_data_blocks": 114, "num_entries": 887, "num_deletions": 440, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689710948, "oldest_key_time": 1689710948, "file_creation_time": 1689711026, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:10:26.031+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1663] Level-0 flush table #21730: 1574644 bytes OK
debug 2023-07-18T20:10:26.035+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.037300) [db/memtable_list.cc:449] [default] Level-0 commit table #21730 started
debug 2023-07-18T20:10:26.035+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.037602) [db/memtable_list.cc:628] [default] Level-0 commit table #21730: memtable #1 done
debug 2023-07-18T20:10:26.035+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.037615) EVENT_LOG_v1 {"time_micros": 1689711026037611, "job": 1663, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:10:26.035+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.037623) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:10:26.035+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1663] Try to delete WAL files size 1995983, prev total WAL file size 1996559, number of live WAL files 2.
debug 2023-07-18T20:10:26.035+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021724.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:10:26.035+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:10:26.035+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.038162) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323638363335' seq:72057594037927935, type:20 .. '6C6F676D0033323638383839' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:10:26.035+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1664] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:10:26.035+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1664 Base level 0, inputs: [21730(1537KB)], [21726(64MB) 21727(64MB) 21728(5241KB)]
debug 2023-07-18T20:10:26.035+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026038214, "job": 1664, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21730], "files_L6": [21726, 21727, 21728], "score": -1, "input_data_size": 141547380}
debug 2023-07-18T20:10:26.247+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1664] Generated table #21731: 21756 keys, 67346904 bytes
debug 2023-07-18T20:10:26.247+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026253100, "cf_name": "default", "job": 1664, "event": "table_file_creation", "file_number": 21731, "file_size": 67346904, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67233148, "index_size": 58300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54469, "raw_key_size": 590540, "raw_average_key_size": 27, "raw_value_size": 66879753, "raw_average_value_size": 3074, "num_data_blocks": 2154, "num_entries": 21756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711026, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
cluster 2023-07-18T20:10:25.486384+0000 osd.54 (osd.54) 51982 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:26.291460+0000 mon.k (mon.1) 18928 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:26.291840+0000 mon.k (mon.1) 18929 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:26.463+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1664] Generated table #21732: 13131 keys, 67287495 bytes
debug 2023-07-18T20:10:26.463+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026466216, "cf_name": "default", "job": 1664, "event": "table_file_creation", "file_number": 21732, "file_size": 67287495, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67158327, "index_size": 95345, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32837, "raw_key_size": 290893, "raw_average_key_size": 22, "raw_value_size": 66878130, "raw_average_value_size": 5093, "num_data_blocks": 3538, "num_entries": 13131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711026, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1664] Generated table #21733: 695 keys, 6665356 bytes
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026486421, "cf_name": "default", "job": 1664, "event": "table_file_creation", "file_number": 21733, "file_size": 6665356, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 6654915, "index_size": 7598, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14827, "raw_average_key_size": 21, "raw_value_size": 6638038, "raw_average_value_size": 9551, "num_data_blocks": 292, "num_entries": 695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711026, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1664] Compacted 1@0 + 3@6 files to L6 => 141299755 bytes
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.487242) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 315.8 rd, 315.2 wr, level 6, files in(1, 3) out(3) MB in(1.5, 133.5) out(134.8), read-write-amplify(179.6) write-amplify(89.7) OK, records in: 36476, records dropped: 894 output_compression: NoCompression
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:10:26.487258) EVENT_LOG_v1 {"time_micros": 1689711026487251, "job": 1664, "event": "compaction_finished", "compaction_time_micros": 448220, "compaction_time_cpu_micros": 214989, "output_level": 6, "num_output_files": 3, "total_output_size": 141299755, "num_input_records": 36476, "num_output_records": 35582, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021730.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026487580, "job": 1664, "event": "table_file_deletion", "file_number": 21730}
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021728.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:10:26.483+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026488235, "job": 1664, "event": "table_file_deletion", "file_number": 21728}
debug 2023-07-18T20:10:26.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:10:26.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:26.495+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021727.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:10:26.495+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026497468, "job": 1664, "event": "table_file_deletion", "file_number": 21727}
debug 2023-07-18T20:10:26.503+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021726.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:10:26.503+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711026507308, "job": 1664, "event": "table_file_deletion", "file_number": 21726}
debug 2023-07-18T20:10:26.503+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:10:26.503+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:10:26.503+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:10:26.503+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:10:26.503+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:10:26.887+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:10:26.887+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:26.440742+0000 osd.54 (osd.54) 51983 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:26.490539+0000 mon.j (mon.0) 21653 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:26.490812+0000 mon.j (mon.0) 21654 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:26.778132+0000 mgr.b (mgr.12834102) 26402 : cluster [DBG] pgmap v26973: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:26.892140+0000 mon.j (mon.0) 21655 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:27.395130+0000 osd.54 (osd.54) 51984 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:28.368996+0000 osd.54 (osd.54) 51985 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:28.779114+0000 mgr.b (mgr.12834102) 26403 : cluster [DBG] pgmap v26974: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:29.005178+0000 mon.l (mon.2) 15613 : audit [DBG] from='client.? 10.1.222.242:0/3343857367' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:10:29.468688+0000 mon.l (mon.2) 15614 : audit [DBG] from='client.? 10.1.222.242:0/2725848834' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:29.324898+0000 osd.54 (osd.54) 51986 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:10:31.023+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:30.360248+0000 osd.54 (osd.54) 51987 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:30.780085+0000 mgr.b (mgr.12834102) 26404 : cluster [DBG] pgmap v26975: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:31.345976+0000 osd.54 (osd.54) 51988 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:32.370717+0000 osd.54 (osd.54) 51989 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:32.781128+0000 mgr.b (mgr.12834102) 26405 : cluster [DBG] pgmap v26976: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:33.402926+0000 osd.54 (osd.54) 51990 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:34.375233+0000 osd.54 (osd.54) 51991 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:34.782096+0000 mgr.b (mgr.12834102) 26406 : cluster [DBG] pgmap v26977: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:34.785325+0000 mon.l (mon.2) 15615 : audit [DBG] from='client.? 10.1.207.132:0/3996242034' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:10:34.861595+0000 mon.l (mon.2) 15616 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:34.861855+0000 mon.l (mon.2) 15617 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:10:35.258239+0000 mon.k (mon.1) 18930 : audit [DBG] from='client.? 10.1.222.242:0/2306356894' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
debug 2023-07-18T20:10:36.023+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:10:36.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:10:36.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:35.360541+0000 osd.54 (osd.54) 51992 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:35.726314+0000 mon.k (mon.1) 18931 : audit [DBG] from='client.? 10.1.222.242:0/4072572294' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:10:36.305193+0000 mon.k (mon.1) 18932 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:36.305511+0000 mon.k (mon.1) 18933 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:10:36.490842+0000 mon.j (mon.0) 21656 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:36.491029+0000 mon.j (mon.0) 21657 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:36.409729+0000 osd.54 (osd.54) 51993 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:36.783061+0000 mgr.b (mgr.12834102) 26407 : cluster [DBG] pgmap v26978: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:37.405346+0000 osd.54 (osd.54) 51994 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:38.388318+0000 osd.54 (osd.54) 51995 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:38.783897+0000 mgr.b (mgr.12834102) 26408 : cluster [DBG] pgmap v26979: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:10:40.498+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:10:40.498+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3731968964' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:39.387707+0000 osd.54 (osd.54) 51996 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:40.502823+0000 mon.j (mon.0) 21658 : audit [DBG] from='client.? 10.1.182.12:0/3731968964' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:10:41.026+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:40.420061+0000 osd.54 (osd.54) 51997 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:40.784694+0000 mgr.b (mgr.12834102) 26409 : cluster [DBG] pgmap v26980: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:10:41.886+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:10:41.886+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:41.390081+0000 osd.54 (osd.54) 51998 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:41.892649+0000 mon.j (mon.0) 21659 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:42.414124+0000 osd.54 (osd.54) 51999 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:42.785329+0000 mgr.b (mgr.12834102) 26410 : cluster [DBG] pgmap v26981: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:43.387162+0000 osd.54 (osd.54) 52000 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:44.341393+0000 osd.54 (osd.54) 52001 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:44.786072+0000 mgr.b (mgr.12834102) 26411 : cluster [DBG] pgmap v26982: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:44.851446+0000 mon.l (mon.2) 15618 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:44.851735+0000 mon.l (mon.2) 15619 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:46.026+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:10:46.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:10:46.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:45.327243+0000 osd.54 (osd.54) 52002 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:46.285099+0000 mon.k (mon.1) 18934 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:46.285284+0000 mon.k (mon.1) 18935 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:10:46.475384+0000 mon.j (mon.0) 21660 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:46.475742+0000 mon.j (mon.0) 21661 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:46.374059+0000 osd.54 (osd.54) 52003 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:46.786819+0000 mgr.b (mgr.12834102) 26412 : cluster [DBG] pgmap v26983: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:47.380265+0000 osd.54 (osd.54) 52004 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:48.364207+0000 osd.54 (osd.54) 52005 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:48.787599+0000 mgr.b (mgr.12834102) 26413 : cluster [DBG] pgmap v26984: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:49.375852+0000 osd.54 (osd.54) 52006 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:10:51.030+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:10:50.340133+0000 osd.54 (osd.54) 52007 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:50.788163+0000 mgr.b (mgr.12834102) 26414 : cluster [DBG] pgmap v26985: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:51.320192+0000 osd.54 (osd.54) 52008 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:52.349014+0000 osd.54 (osd.54) 52009 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:52.788929+0000 mgr.b (mgr.12834102) 26415 : cluster [DBG] pgmap v26986: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:53.394735+0000 osd.54 (osd.54) 52010 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:54.375979+0000 osd.54 (osd.54) 52011 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:54.789606+0000 mgr.b (mgr.12834102) 26416 : cluster [DBG] pgmap v26987: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:54.852747+0000 mon.l (mon.2) 15620 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:54.852977+0000 mon.l (mon.2) 15621 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:10:55.036156+0000 mon.k (mon.1) 18936 : audit [DBG] from='client.? 10.1.207.132:0/1584655792' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:10:55.978+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:10:55.978+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/156441976' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:10:56.034+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:10:56.474+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:10:56.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:10:55.352046+0000 osd.54 (osd.54) 52012 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:10:55.982483+0000 mon.j (mon.0) 21662 : audit [DBG] from='client.? 10.1.182.12:0/156441976' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:10:56.286184+0000 mon.k (mon.1) 18937 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:56.286501+0000 mon.k (mon.1) 18938 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:10:56.481266+0000 mon.j (mon.0) 21663 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:10:56.481570+0000 mon.j (mon.0) 21664 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:10:56.886+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:10:56.886+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:56.389447+0000 osd.54 (osd.54) 52013 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:56.790378+0000 mgr.b (mgr.12834102) 26417 : cluster [DBG] pgmap v26988: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:10:56.891792+0000 mon.j (mon.0) 21665 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:10:57.409510+0000 osd.54 (osd.54) 52014 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:58.420593+0000 osd.54 (osd.54) 52015 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:10:58.791156+0000 mgr.b (mgr.12834102) 26418 : cluster [DBG] pgmap v26989: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:10:59.393831+0000 osd.54 (osd.54) 52016 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:00.003307+0000 mon.l (mon.2) 15622 : audit [DBG] from='client.? 10.1.222.242:0/1523818308' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:11:00.403006+0000 mon.k (mon.1) 18939 : audit [DBG] from='client.? 10.1.222.242:0/1703872726' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:11:01.034+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:11:00.419772+0000 osd.54 (osd.54) 52017 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:00.791957+0000 mgr.b (mgr.12834102) 26419 : cluster [DBG] pgmap v26990: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:01.448760+0000 osd.54 (osd.54) 52018 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:02.442532+0000 osd.54 (osd.54) 52019 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:02.792759+0000 mgr.b (mgr.12834102) 26420 : cluster [DBG] pgmap v26991: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:03.440321+0000 osd.54 (osd.54) 52020 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:04.406688+0000 osd.54 (osd.54) 52021 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:04.793524+0000 mgr.b (mgr.12834102) 26421 : cluster [DBG] pgmap v26992: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:04.861327+0000 mon.l (mon.2) 15623 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:04.861606+0000 mon.l (mon.2) 15624 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:11:06.034+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:11:06.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:11:06.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:05.405174+0000 osd.54 (osd.54) 52022 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:06.274880+0000 mon.k (mon.1) 18940 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:06.275101+0000 mon.k (mon.1) 18941 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:06.391890+0000 mon.l (mon.2) 15625 : audit [DBG] from='client.? 10.1.222.242:0/1492005918' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:11:06.476940+0000 mon.j (mon.0) 21666 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:06.477220+0000 mon.j (mon.0) 21667 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:06.440770+0000 osd.54 (osd.54) 52023 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:06.794313+0000 mgr.b (mgr.12834102) 26422 : cluster [DBG] pgmap v26993: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:06.860987+0000 mon.k (mon.1) 18942 : audit [DBG] from='client.? 10.1.222.242:0/128102630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:07.415064+0000 osd.54 (osd.54) 52024 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:08.409116+0000 osd.54 (osd.54) 52025 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:08.795091+0000 mgr.b (mgr.12834102) 26423 : cluster [DBG] pgmap v26994: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:09.413912+0000 osd.54 (osd.54) 52026 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:11:11.034+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:11:11.458+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:11:11.458+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3896161076' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:10.437530+0000 osd.54 (osd.54) 52027 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:10.795873+0000 mgr.b (mgr.12834102) 26424 : cluster [DBG] pgmap v26995: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:11.462310+0000 mon.j (mon.0) 21668 : audit [DBG] from='client.? 10.1.182.12:0/3896161076' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:11:11.886+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:11:11.886+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:11.423722+0000 osd.54 (osd.54) 52028 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:11.891861+0000 mon.j (mon.0) 21669 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:12.393937+0000 osd.54 (osd.54) 52029 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:12.796455+0000 mgr.b (mgr.12834102) 26425 : cluster [DBG] pgmap v26996: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:13.438343+0000 osd.54 (osd.54) 52030 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:14.470926+0000 osd.54 (osd.54) 52031 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:14.797383+0000 mgr.b (mgr.12834102) 26426 : cluster [DBG] pgmap v26997: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:14.856770+0000 mon.l (mon.2) 15626 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:14.857063+0000 mon.l (mon.2) 15627 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:15.474134+0000 mon.l (mon.2) 15628 : audit [DBG] from='client.? 10.1.207.132:0/940506423' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:11:16.038+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:11:16.474+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:11:16.474+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:15.487170+0000 osd.54 (osd.54) 52032 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:16.279927+0000 mon.k (mon.1) 18943 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:16.280210+0000 mon.k (mon.1) 18944 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:16.480110+0000 mon.j (mon.0) 21670 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:16.480307+0000 mon.j (mon.0) 21671 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:16.476949+0000 osd.54 (osd.54) 52033 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:16.798353+0000 mgr.b (mgr.12834102) 26427 : cluster [DBG] pgmap v26998: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:17.505215+0000 osd.54 (osd.54) 52034 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:18.489713+0000 osd.54 (osd.54) 52035 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:18.799314+0000 mgr.b (mgr.12834102) 26428 : cluster [DBG] pgmap v26999: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:19.461586+0000 osd.54 (osd.54) 52036 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:11:21.038+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:11:20.464666+0000 osd.54 (osd.54) 52037 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:20.800269+0000 mgr.b (mgr.12834102) 26429 : cluster [DBG] pgmap v27000: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:21.511864+0000 osd.54 (osd.54) 52038 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:22.463748+0000 osd.54 (osd.54) 52039 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:22.801269+0000 mgr.b (mgr.12834102) 26430 : cluster [DBG] pgmap v27001: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:23.462027+0000 osd.54 (osd.54) 52040 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:24.441736+0000 osd.54 (osd.54) 52041 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:24.802064+0000 mgr.b (mgr.12834102) 26431 : cluster [DBG] pgmap v27002: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:24.884784+0000 mon.l (mon.2) 15629 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:24.885080+0000 mon.l (mon.2) 15630 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:25.408639+0000 osd.54 (osd.54) 52042 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:11:26.042+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:11:26.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:11:26.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:11:26.886+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:11:26.886+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:11:26.946+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:11:26.946+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2409739235' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:11:26.276460+0000 mon.k (mon.1) 18945 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:26.276858+0000 mon.k (mon.1) 18946 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:26.455098+0000 osd.54 (osd.54) 52043 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:26.484624+0000 mon.j (mon.0) 21672 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:26.484908+0000 mon.j (mon.0) 21673 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:26.892158+0000 mon.j (mon.0) 21674 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:11:26.949438+0000 mon.j (mon.0) 21675 : audit [DBG] from='client.? 10.1.182.12:0/2409739235' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:26.802812+0000 mgr.b (mgr.12834102) 26432 : cluster [DBG] pgmap v27003: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:27.433360+0000 osd.54 (osd.54) 52044 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:28.468303+0000 osd.54 (osd.54) 52045 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:28.803606+0000 mgr.b (mgr.12834102) 26433 : cluster [DBG] pgmap v27004: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:29.451082+0000 osd.54 (osd.54) 52046 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:11:31.042+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:11:30.492455+0000 osd.54 (osd.54) 52047 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:31.018579+0000 mon.k (mon.1) 18947 : audit [DBG] from='client.? 10.1.222.242:0/329179764' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:30.804416+0000 mgr.b (mgr.12834102) 26434 : cluster [DBG] pgmap v27005: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:31.452363+0000 mon.k (mon.1) 18948 : audit [DBG] from='client.? 10.1.222.242:0/29814014' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:31.520826+0000 osd.54 (osd.54) 52048 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:32.570475+0000 osd.54 (osd.54) 52049 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
cluster 2023-07-18T20:11:32.805380+0000 mgr.b (mgr.12834102) 26435 : cluster [DBG] pgmap v27006: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:33.551876+0000 osd.54 (osd.54) 52050 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
debug 2023-07-18T20:11:35.074+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd erasure-code-profile set", "name": "ceph-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van"], "format": "json"} v 0) v1
debug 2023-07-18T20:11:35.074+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van"], "format": "json"}]: dispatch
cluster 2023-07-18T20:11:34.538513+0000 osd.54 (osd.54) 52051 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:34.652925+0000 mon.k (mon.1) 18949 : audit [DBG] from='client.? 10.1.222.242:0/3199296913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile get", "name": "default", "format": "json"}]: dispatch
audit 2023-07-18T20:11:34.867676+0000 mon.l (mon.2) 15631 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:34.867950+0000 mon.l (mon.2) 15632 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:35.080340+0000 mon.l (mon.2) 15633 : audit [INF] from='client.? 10.1.222.242:0/958582878' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van"], "format": "json"}]: dispatch
audit 2023-07-18T20:11:35.080447+0000 mon.j (mon.0) 21676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van"], "format": "json"}]: dispatch
debug 2023-07-18T20:11:35.118+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21734. Immutable memtables: 0.
debug 2023-07-18T20:11:35.118+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.121549) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:11:35.118+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1665] Flushing memtable with next log file: 21734
debug 2023-07-18T20:11:35.118+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095121639, "job": 1665, "event": "flush_started", "num_memtables": 1, "num_entries": 1400, "num_deletes": 421, "total_data_size": 1810446, "memory_usage": 1835928, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:11:35.118+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1665] Level-0 flush table #21735: started
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095129826, "cf_name": "default", "job": 1665, "event": "table_file_creation", "file_number": 21735, "file_size": 1426120, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 1420268, "index_size": 2752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 19984, "raw_average_key_size": 24, "raw_value_size": 1406227, "raw_average_value_size": 1692, "num_data_blocks": 108, "num_entries": 831, "num_deletions": 421, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689711026, "oldest_key_time": 1689711026, "file_creation_time": 1689711095, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1665] Level-0 flush table #21735: 1426120 bytes OK
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.130037) [db/memtable_list.cc:449] [default] Level-0 commit table #21735 started
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.130381) [db/memtable_list.cc:628] [default] Level-0 commit table #21735: memtable #1 done
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.130396) EVENT_LOG_v1 {"time_micros": 1689711095130391, "job": 1665, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.130412) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1665] Try to delete WAL files size 1803249, prev total WAL file size 1803249, number of live WAL files 2.
debug 2023-07-18T20:11:35.126+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021729.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:11:35.126+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:11:35.126+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.130957) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323535363733' seq:72057594037927935, type:20 .. '7061786F730036323535393235' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:11:35.126+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1666] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:11:35.126+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1666 Base level 0, inputs: [21735(1392KB)], [21731(64MB) 21732(64MB) 21733(6509KB)]
debug 2023-07-18T20:11:35.126+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095131046, "job": 1666, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21735], "files_L6": [21731, 21732, 21733], "score": -1, "input_data_size": 142725875}
debug 2023-07-18T20:11:35.334+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1666] Generated table #21736: 21827 keys, 67306848 bytes
debug 2023-07-18T20:11:35.334+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095340288, "cf_name": "default", "job": 1666, "event": "table_file_creation", "file_number": 21736, "file_size": 67306848, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67192721, "index_size": 58543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54597, "raw_key_size": 591979, "raw_average_key_size": 27, "raw_value_size": 66838326, "raw_average_value_size": 3062, "num_data_blocks": 2164, "num_entries": 21827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711095, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:11:35.454+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool create", "pool": "ceph-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-erasure-default-data_ecprofile", "format": "json"} v 0) v1
debug 2023-07-18T20:11:35.454+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-erasure-default-data_ecprofile", "format": "json"}]: dispatch
debug 2023-07-18T20:11:35.534+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1666] Generated table #21737: 13154 keys, 67305446 bytes
debug 2023-07-18T20:11:35.534+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095537653, "cf_name": "default", "job": 1666, "event": "table_file_creation", "file_number": 21737, "file_size": 67305446, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67175571, "index_size": 95924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32965, "raw_key_size": 291389, "raw_average_key_size": 22, "raw_value_size": 66894499, "raw_average_value_size": 5085, "num_data_blocks": 3561, "num_entries": 13154, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711095, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1666] Generated table #21738: 576 keys, 6140134 bytes
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095561534, "cf_name": "default", "job": 1666, "event": "table_file_creation", "file_number": 21738, "file_size": 6140134, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 6130865, "index_size": 6810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12380, "raw_average_key_size": 21, "raw_value_size": 6116343, "raw_average_value_size": 10618, "num_data_blocks": 259, "num_entries": 576, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711095, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1666] Compacted 1@0 + 3@6 files to L6 => 140752428 bytes
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.562417) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 331.5 rd, 327.0 wr, level 6, files in(1, 3) out(3) MB in(1.4, 134.8) out(134.2), read-write-amplify(198.8) write-amplify(98.7) OK, records in: 36413, records dropped: 856 output_compression: NoCompression
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:11:35.562434) EVENT_LOG_v1 {"time_micros": 1689711095562427, "job": 1666, "event": "compaction_finished", "compaction_time_micros": 430491, "compaction_time_cpu_micros": 204267, "output_level": 6, "num_output_files": 3, "total_output_size": 140752428, "num_input_records": 36413, "num_output_records": 35557, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021735.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095562748, "job": 1666, "event": "table_file_deletion", "file_number": 21735}
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021733.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:11:35.558+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095563618, "job": 1666, "event": "table_file_deletion", "file_number": 21733}
debug 2023-07-18T20:11:35.566+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021732.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:11:35.566+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095572622, "job": 1666, "event": "table_file_deletion", "file_number": 21732}
debug 2023-07-18T20:11:35.578+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021731.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:11:35.578+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711095582575, "job": 1666, "event": "table_file_deletion", "file_number": 21731}
debug 2023-07-18T20:11:35.578+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:11:35.578+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:11:35.578+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:11:35.578+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:11:35.578+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:11:35.854+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool set", "pool": "ceph-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"} v 0) v1
debug 2023-07-18T20:11:35.854+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
debug 2023-07-18T20:11:36.046+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:11:36.114+0000 7f7fb651d700 1 mon.j@0(leader).osd e20723 do_prune osdmap full prune enabled
debug 2023-07-18T20:11:36.122+0000 7f7fb1513700 1 mon.j@0(leader).osd e20724 e20724: 57 total, 3 up, 41 in
debug 2023-07-18T20:11:36.126+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
debug 2023-07-18T20:11:36.126+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20724: 57 total, 3 up, 41 in
cluster 2023-07-18T20:11:34.806192+0000 mgr.b (mgr.12834102) 26436 : cluster [DBG] pgmap v27007: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:35.459657+0000 mon.l (mon.2) 15634 : audit [INF] from='client.? 10.1.222.242:0/2801840459' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-erasure-default-data_ecprofile", "format": "json"}]: dispatch
audit 2023-07-18T20:11:35.459913+0000 mon.j (mon.0) 21677 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-erasure-default-data_ecprofile", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:35.519328+0000 osd.54 (osd.54) 52052 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:35.791767+0000 mon.l (mon.2) 15635 : audit [DBG] from='client.? 10.1.207.132:0/602697184' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:11:35.860351+0000 mon.k (mon.1) 18950 : audit [INF] from='client.? 10.1.222.242:0/2107037168' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
audit 2023-07-18T20:11:35.861356+0000 mon.j (mon.0) 21678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
debug 2023-07-18T20:11:36.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:11:36.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:11:36.574+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-erasure-default-data","app": "rbd"} v 0) v1
debug 2023-07-18T20:11:36.574+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T20:11:37.130+0000 7f7fb651d700 1 mon.j@0(leader).osd e20724 do_prune osdmap full prune enabled
audit 2023-07-18T20:11:36.130623+0000 mon.j (mon.0) 21679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
cluster 2023-07-18T20:11:36.130701+0000 mon.j (mon.0) 21680 : cluster [DBG] osdmap e20724: 57 total, 3 up, 41 in
audit 2023-07-18T20:11:36.304472+0000 mon.k (mon.1) 18951 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:36.304722+0000 mon.k (mon.1) 18952 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:36.475615+0000 mon.j (mon.0) 21681 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:36.475900+0000 mon.j (mon.0) 21682 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:36.477000+0000 osd.54 (osd.54) 52053 : cluster [WRN] 63 slow requests (by type [ 'queued for pg' : 63 ] most affected pool [ 'ceph-erasure-default-data' : 63 ])
audit 2023-07-18T20:11:36.522598+0000 mon.k (mon.1) 18953 : audit [DBG] from='client.? 10.1.222.242:0/3342107802' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-erasure-default-data", "format": "json"}]: dispatch
audit 2023-07-18T20:11:36.580256+0000 mon.k (mon.1) 18954 : audit [INF] from='client.? 10.1.222.242:0/2464968102' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-data","app": "rbd"}]: dispatch
audit 2023-07-18T20:11:36.581148+0000 mon.j (mon.0) 21683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T20:11:37.142+0000 7f7fb1513700 1 mon.j@0(leader).osd e20725 e20725: 57 total, 3 up, 41 in
debug 2023-07-18T20:11:37.146+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-data","app": "rbd"}]': finished
debug 2023-07-18T20:11:37.146+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20725: 57 total, 3 up, 41 in
cluster 2023-07-18T20:11:36.806992+0000 mgr.b (mgr.12834102) 26437 : cluster [DBG] pgmap v27009: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:11:37.149590+0000 mon.j (mon.0) 21684 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-data","app": "rbd"}]': finished
cluster 2023-07-18T20:11:37.149637+0000 mon.j (mon.0) 21685 : cluster [DBG] osdmap e20725: 57 total, 3 up, 41 in
audit 2023-07-18T20:11:37.428493+0000 mon.k (mon.1) 18955 : audit [DBG] from='client.? 10.1.222.242:0/3271008273' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:37.461447+0000 osd.54 (osd.54) 52054 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:37.909363+0000 mon.k (mon.1) 18956 : audit [DBG] from='client.? 10.1.222.242:0/2552122016' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:38.442429+0000 osd.54 (osd.54) 52055 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:38.807769+0000 mgr.b (mgr.12834102) 26438 : cluster [DBG] pgmap v27011: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:39.402137+0000 osd.54 (osd.54) 52056 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
debug 2023-07-18T20:11:41.046+0000 7f7fb651d700 1 mon.j@0(leader).osd e20725 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:11:41.046+0000 7f7fb651d700 1 mon.j@0(leader).osd e20725 do_prune osdmap full prune enabled
debug 2023-07-18T20:11:41.046+0000 7f7fb651d700 1 mon.j@0(leader).osd e20725 prune_init
debug 2023-07-18T20:11:41.046+0000 7f7fb651d700 1 mon.j@0(leader).osd e20725 encode_pending osdmap full prune encoded e20726
debug 2023-07-18T20:11:41.054+0000 7f7fb1513700 1 mon.j@0(leader).osd e20726 e20726: 57 total, 3 up, 41 in
debug 2023-07-18T20:11:41.058+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20726: 57 total, 3 up, 41 in
cluster 2023-07-18T20:11:40.371325+0000 osd.54 (osd.54) 52057 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:41.061654+0000 mon.j (mon.0) 21686 : cluster [DBG] osdmap e20726: 57 total, 3 up, 41 in
debug 2023-07-18T20:11:41.886+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:11:41.886+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:40.808337+0000 mgr.b (mgr.12834102) 26439 : cluster [DBG] pgmap v27012: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:41.415624+0000 osd.54 (osd.54) 52058 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:41.892267+0000 mon.j (mon.0) 21687 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:11:42.414+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:11:42.414+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2473516213' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:42.380726+0000 osd.54 (osd.54) 52059 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:42.417814+0000 mon.j (mon.0) 21688 : audit [DBG] from='client.? 10.1.182.12:0/2473516213' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:42.809316+0000 mgr.b (mgr.12834102) 26440 : cluster [DBG] pgmap v27014: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:43.422252+0000 osd.54 (osd.54) 52060 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:44.406181+0000 osd.54 (osd.54) 52061 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:44.852668+0000 mon.l (mon.2) 15636 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:44.852939+0000 mon.l (mon.2) 15637 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:11:46.054+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:11:44.810290+0000 mgr.b (mgr.12834102) 26441 : cluster [DBG] pgmap v27015: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:45.426651+0000 osd.54 (osd.54) 52062 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
debug 2023-07-18T20:11:46.478+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:11:46.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:46.288162+0000 mon.k (mon.1) 18957 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:46.288423+0000 mon.k (mon.1) 18958 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:46.459776+0000 osd.54 (osd.54) 52063 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:46.485242+0000 mon.j (mon.0) 21689 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:46.485522+0000 mon.j (mon.0) 21690 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:46.811269+0000 mgr.b (mgr.12834102) 26442 : cluster [DBG] pgmap v27016: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:47.499399+0000 osd.54 (osd.54) 52064 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:48.452909+0000 osd.54 (osd.54) 52065 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:48.812234+0000 mgr.b (mgr.12834102) 26443 : cluster [DBG] pgmap v27017: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:49.489741+0000 osd.54 (osd.54) 52066 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
debug 2023-07-18T20:11:51.054+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:11:50.503935+0000 osd.54 (osd.54) 52067 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:50.813196+0000 mgr.b (mgr.12834102) 26444 : cluster [DBG] pgmap v27018: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:51.461431+0000 osd.54 (osd.54) 52068 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:52.475842+0000 osd.54 (osd.54) 52069 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:52.814168+0000 mgr.b (mgr.12834102) 26445 : cluster [DBG] pgmap v27019: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:53.507792+0000 osd.54 (osd.54) 52070 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:54.495562+0000 osd.54 (osd.54) 52071 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:54.870560+0000 mon.l (mon.2) 15638 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:54.870884+0000 mon.l (mon.2) 15639 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:11:56.054+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:11:54.815125+0000 mgr.b (mgr.12834102) 26446 : cluster [DBG] pgmap v27020: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:55.489474+0000 osd.54 (osd.54) 52072 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:56.216118+0000 mon.k (mon.1) 18959 : audit [DBG] from='client.? 10.1.207.132:0/3128545439' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:11:56.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:11:56.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:11:56.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:11:56.894+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:11:56.300612+0000 mon.k (mon.1) 18960 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:56.300890+0000 mon.k (mon.1) 18961 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:11:56.475146+0000 mon.j (mon.0) 21691 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:11:56.475428+0000 mon.j (mon.0) 21692 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:11:56.478521+0000 osd.54 (osd.54) 52073 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:56.895335+0000 mon.j (mon.0) 21693 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:11:57.885+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:11:57.885+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/395264564' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:56.816189+0000 mgr.b (mgr.12834102) 26447 : cluster [DBG] pgmap v27021: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:57.516075+0000 osd.54 (osd.54) 52074 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:11:57.890564+0000 mon.j (mon.0) 21694 : audit [DBG] from='client.? 10.1.182.12:0/395264564' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:11:58.471264+0000 osd.54 (osd.54) 52075 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:11:58.817171+0000 mgr.b (mgr.12834102) 26448 : cluster [DBG] pgmap v27022: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:11:59.449407+0000 osd.54 (osd.54) 52076 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
debug 2023-07-18T20:12:01.053+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:12:00.467605+0000 osd.54 (osd.54) 52077 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:12:00.818153+0000 mgr.b (mgr.12834102) 26449 : cluster [DBG] pgmap v27023: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:01.442747+0000 osd.54 (osd.54) 52078 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:12:02.001651+0000 mon.l (mon.2) 15640 : audit [DBG] from='client.? 10.1.222.242:0/986311078' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:12:02.401410+0000 mon.k (mon.1) 18962 : audit [DBG] from='client.? 10.1.222.242:0/1347083045' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:02.434764+0000 osd.54 (osd.54) 52079 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:12:02.819146+0000 mgr.b (mgr.12834102) 26450 : cluster [DBG] pgmap v27024: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:03.475556+0000 osd.54 (osd.54) 52080 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:12:04.500752+0000 osd.54 (osd.54) 52081 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:12:04.856729+0000 mon.l (mon.2) 15641 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:04.856999+0000 mon.l (mon.2) 15642 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:06.057+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:12:06.061+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21739. Immutable memtables: 0.
debug 2023-07-18T20:12:06.061+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.069067) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:12:06.061+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1667] Flushing memtable with next log file: 21739
debug 2023-07-18T20:12:06.061+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126069118, "job": 1667, "event": "flush_started", "num_memtables": 1, "num_entries": 809, "num_deletes": 347, "total_data_size": 1059217, "memory_usage": 1076784, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:12:06.061+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1667] Level-0 flush table #21740: started
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126074627, "cf_name": "default", "job": 1667, "event": "table_file_creation", "file_number": 21740, "file_size": 786935, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 783302, "index_size": 1174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 13315, "raw_average_key_size": 24, "raw_value_size": 774584, "raw_average_value_size": 1418, "num_data_blocks": 47, "num_entries": 546, "num_deletions": 347, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689711096, "oldest_key_time": 1689711096, "file_creation_time": 1689711126, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1667] Level-0 flush table #21740: 786935 bytes OK
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.074861) [db/memtable_list.cc:449] [default] Level-0 commit table #21740 started
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.075182) [db/memtable_list.cc:628] [default] Level-0 commit table #21740: memtable #1 done
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.075195) EVENT_LOG_v1 {"time_micros": 1689711126075191, "job": 1667, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.075205) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1667] Try to delete WAL files size 1054611, prev total WAL file size 1054979, number of live WAL files 2.
debug 2023-07-18T20:12:06.069+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021734.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:12:06.069+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:12:06.069+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.075581) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303836393939' seq:72057594037927935, type:20 .. '6D6772737461740032303837323530' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:12:06.069+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1668] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:12:06.069+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1668 Base level 0, inputs: [21740(768KB)], [21736(64MB) 21737(64MB) 21738(5996KB)]
debug 2023-07-18T20:12:06.069+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126075633, "job": 1668, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21740], "files_L6": [21736, 21737, 21738], "score": -1, "input_data_size": 141539363}
debug 2023-07-18T20:12:06.281+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1668] Generated table #21741: 21853 keys, 67284689 bytes
debug 2023-07-18T20:12:06.281+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126288886, "cf_name": "default", "job": 1668, "event": "table_file_creation", "file_number": 21741, "file_size": 67284689, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67170334, "index_size": 58643, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54725, "raw_key_size": 592482, "raw_average_key_size": 27, "raw_value_size": 66815220, "raw_average_value_size": 3057, "num_data_blocks": 2168, "num_entries": 21853, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711126, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
cluster 2023-07-18T20:12:04.820122+0000 mgr.b (mgr.12834102) 26451 : cluster [DBG] pgmap v27025: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:05.519085+0000 osd.54 (osd.54) 52082 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
audit 2023-07-18T20:12:06.311964+0000 mon.k (mon.1) 18963 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:06.312274+0000 mon.k (mon.1) 18964 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:06.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:12:06.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:06.505+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1668] Generated table #21742: 13060 keys, 67247606 bytes
debug 2023-07-18T20:12:06.505+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126511999, "cf_name": "default", "job": 1668, "event": "table_file_creation", "file_number": 21742, "file_size": 67247606, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67121927, "index_size": 91984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 289173, "raw_average_key_size": 22, "raw_value_size": 66845845, "raw_average_value_size": 5118, "num_data_blocks": 3416, "num_entries": 13060, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711126, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1668] Generated table #21743: 487 keys, 3782861 bytes
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126523002, "cf_name": "default", "job": 1668, "event": "table_file_creation", "file_number": 21743, "file_size": 3782861, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 3775807, "index_size": 4723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10251, "raw_average_key_size": 21, "raw_value_size": 3764565, "raw_average_value_size": 7730, "num_data_blocks": 188, "num_entries": 487, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711126, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1668] Compacted 1@0 + 3@6 files to L6 => 138315156 bytes
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.523829) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 316.4 rd, 309.2 wr, level 6, files in(1, 3) out(3) MB in(0.8, 134.2) out(131.9), read-write-amplify(355.6) write-amplify(175.8) OK, records in: 36103, records dropped: 703 output_compression: NoCompression
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:12:06.523846) EVENT_LOG_v1 {"time_micros": 1689711126523839, "job": 1668, "event": "compaction_finished", "compaction_time_micros": 447382, "compaction_time_cpu_micros": 223975, "output_level": 6, "num_output_files": 3, "total_output_size": 138315156, "num_input_records": 36103, "num_output_records": 35400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021740.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126524098, "job": 1668, "event": "table_file_deletion", "file_number": 21740}
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021738.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:12:06.517+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126524875, "job": 1668, "event": "table_file_deletion", "file_number": 21738}
debug 2023-07-18T20:12:06.529+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021737.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:12:06.529+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126534356, "job": 1668, "event": "table_file_deletion", "file_number": 21737}
debug 2023-07-18T20:12:06.537+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021736.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:12:06.537+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711126544266, "job": 1668, "event": "table_file_deletion", "file_number": 21736}
debug 2023-07-18T20:12:06.537+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:12:06.537+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:12:06.537+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:12:06.537+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:12:06.537+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
audit 2023-07-18T20:12:06.482476+0000 mon.j (mon.0) 21695 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:06.482747+0000 mon.j (mon.0) 21696 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:12:06.536901+0000 osd.54 (osd.54) 52083 : cluster [WRN] 1 slow requests (by type [ 'queued for pg' : 1 ] most affected pool [ 'ceph-erasure-default-data' : 1 ])
cluster 2023-07-18T20:12:06.821189+0000 mgr.b (mgr.12834102) 26452 : cluster [DBG] pgmap v27026: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:07.543822+0000 osd.54 (osd.54) 52084 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:08.566819+0000 osd.54 (osd.54) 52085 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:08.588158+0000 mon.l (mon.2) 15643 : audit [DBG] from='client.? 10.1.222.242:0/3570681459' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:08.822211+0000 mgr.b (mgr.12834102) 26453 : cluster [DBG] pgmap v27027: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:08.991171+0000 mon.l (mon.2) 15644 : audit [DBG] from='client.? 10.1.222.242:0/3298331910' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:09.611738+0000 osd.54 (osd.54) 52086 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:12:11.061+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:12:10.647232+0000 osd.54 (osd.54) 52087 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:10.823214+0000 mgr.b (mgr.12834102) 26454 : cluster [DBG] pgmap v27028: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:12:11.889+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:12:11.889+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:11.639024+0000 osd.54 (osd.54) 52088 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:11.895307+0000 mon.j (mon.0) 21697 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
debug 2023-07-18T20:12:13.361+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:12:13.361+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1650390884' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:12.633252+0000 osd.54 (osd.54) 52089 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:12.824185+0000 mgr.b (mgr.12834102) 26455 : cluster [DBG] pgmap v27029: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:13.364778+0000 mon.j (mon.0) 21698 : audit [DBG] from='client.? 10.1.182.12:0/1650390884' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:13.653156+0000 osd.54 (osd.54) 52090 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:14.701487+0000 osd.54 (osd.54) 52091 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:14.825168+0000 mgr.b (mgr.12834102) 26456 : cluster [DBG] pgmap v27030: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:14.873107+0000 mon.l (mon.2) 15645 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:14.873388+0000 mon.l (mon.2) 15646 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:16.065+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:12:15.687637+0000 osd.54 (osd.54) 52092 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:16.284092+0000 mon.k (mon.1) 18965 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:16.284398+0000 mon.k (mon.1) 18966 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:16.469+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:12:16.469+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:12:16.474861+0000 mon.j (mon.0) 21699 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:16.475184+0000 mon.j (mon.0) 21700 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:12:16.522370+0000 mon.l (mon.2) 15647 : audit [DBG] from='client.? 10.1.207.132:0/437580267' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:16.661406+0000 osd.54 (osd.54) 52093 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:16.826229+0000 mgr.b (mgr.12834102) 26457 : cluster [DBG] pgmap v27031: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:17.675290+0000 osd.54 (osd.54) 52094 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:18.636081+0000 osd.54 (osd.54) 52095 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:18.827203+0000 mgr.b (mgr.12834102) 26458 : cluster [DBG] pgmap v27032: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:19.684976+0000 osd.54 (osd.54) 52096 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:12:21.065+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:12:21.073+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-erasure-default-md", "root": "default", "type": "host", "format": "json"} v 0) v1
debug 2023-07-18T20:12:21.073+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-erasure-default-md", "root": "default", "type": "host", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:20.674372+0000 osd.54 (osd.54) 52097 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:20.828215+0000 mgr.b (mgr.12834102) 26459 : cluster [DBG] pgmap v27033: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:21.080420+0000 mon.j (mon.0) 21701 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-erasure-default-md", "root": "default", "type": "host", "format": "json"}]: dispatch
audit 2023-07-18T20:12:21.080449+0000 mon.l (mon.2) 15648 : audit [INF] from='client.? 10.1.222.242:0/1627873934' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-erasure-default-md", "root": "default", "type": "host", "format": "json"}]: dispatch
debug 2023-07-18T20:12:21.961+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-erasure-default-md","app": "rbd"} v 0) v1
debug 2023-07-18T20:12:21.961+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-md","app": "rbd"}]: dispatch
debug 2023-07-18T20:12:22.461+0000 7f7fb651d700 1 mon.j@0(leader).osd e20726 do_prune osdmap full prune enabled
audit 2023-07-18T20:12:21.508184+0000 mon.l (mon.2) 15649 : audit [DBG] from='client.? 10.1.222.242:0/1830110324' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-erasure-default-md", "var": "all", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:21.675459+0000 osd.54 (osd.54) 52098 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:21.913441+0000 mon.k (mon.1) 18967 : audit [DBG] from='client.? 10.1.222.242:0/649911426' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-erasure-default-md", "format": "json"}]: dispatch
audit 2023-07-18T20:12:21.967371+0000 mon.k (mon.1) 18968 : audit [INF] from='client.? 10.1.222.242:0/2858967679' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-md","app": "rbd"}]: dispatch
audit 2023-07-18T20:12:21.968310+0000 mon.j (mon.0) 21702 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-md","app": "rbd"}]: dispatch
debug 2023-07-18T20:12:22.469+0000 7f7fb1513700 1 mon.j@0(leader).osd e20727 e20727: 57 total, 3 up, 41 in
debug 2023-07-18T20:12:22.473+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-md","app": "rbd"}]': finished
debug 2023-07-18T20:12:22.473+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20727: 57 total, 3 up, 41 in
audit 2023-07-18T20:12:22.480611+0000 mon.j (mon.0) 21703 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-erasure-default-md","app": "rbd"}]': finished
cluster 2023-07-18T20:12:22.480712+0000 mon.j (mon.0) 21704 : cluster [DBG] osdmap e20727: 57 total, 3 up, 41 in
cluster 2023-07-18T20:12:22.701019+0000 osd.54 (osd.54) 52099 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:22.829211+0000 mgr.b (mgr.12834102) 26460 : cluster [DBG] pgmap v27035: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:23.727684+0000 osd.54 (osd.54) 52100 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:24.691813+0000 osd.54 (osd.54) 52101 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:24.830258+0000 mgr.b (mgr.12834102) 26461 : cluster [DBG] pgmap v27036: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:24.886429+0000 mon.l (mon.2) 15650 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:24.886781+0000 mon.l (mon.2) 15651 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:26.065+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:12:26.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:12:26.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:12:25.706815+0000 osd.54 (osd.54) 52102 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:26.301720+0000 mon.k (mon.1) 18969 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:26.302017+0000 mon.k (mon.1) 18970 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:12:26.484395+0000 mon.j (mon.0) 21705 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:26.484657+0000 mon.j (mon.0) 21706 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:26.889+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:12:26.889+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:26.727767+0000 osd.54 (osd.54) 52103 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:26.831318+0000 mgr.b (mgr.12834102) 26462 : cluster [DBG] pgmap v27037: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:26.895154+0000 mon.j (mon.0) 21707 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:27.733031+0000 osd.54 (osd.54) 52104 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:12:28.837+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:12:28.837+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3066068000' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:28.777046+0000 osd.54 (osd.54) 52105 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:28.832371+0000 mgr.b (mgr.12834102) 26463 : cluster [DBG] pgmap v27038: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:28.845332+0000 mon.j (mon.0) 21708 : audit [DBG] from='client.? 10.1.182.12:0/3066068000' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:29.799664+0000 osd.54 (osd.54) 52106 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:12:31.069+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:12:30.793220+0000 osd.54 (osd.54) 52107 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:30.833366+0000 mgr.b (mgr.12834102) 26464 : cluster [DBG] pgmap v27039: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:31.794617+0000 osd.54 (osd.54) 52108 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:32.748884+0000 osd.54 (osd.54) 52109 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:32.834557+0000 mgr.b (mgr.12834102) 26465 : cluster [DBG] pgmap v27040: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:33.011284+0000 mon.k (mon.1) 18971 : audit [DBG] from='client.? 10.1.222.242:0/3417336883' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:12:33.403869+0000 mon.k (mon.1) 18972 : audit [DBG] from='client.? 10.1.222.242:0/1128350282' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:33.784194+0000 osd.54 (osd.54) 52110 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:34.769998+0000 osd.54 (osd.54) 52111 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:34.835527+0000 mgr.b (mgr.12834102) 26466 : cluster [DBG] pgmap v27041: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:34.888294+0000 mon.l (mon.2) 15652 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:34.888572+0000 mon.l (mon.2) 15653 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:36.073+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:12:36.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:12:36.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:12:35.766522+0000 osd.54 (osd.54) 52112 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:36.274734+0000 mon.k (mon.1) 18973 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:36.275005+0000 mon.k (mon.1) 18974 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:12:36.486560+0000 mon.j (mon.0) 21709 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:36.486821+0000 mon.j (mon.0) 21710 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:12:36.751676+0000 osd.54 (osd.54) 52113 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:36.806590+0000 mon.k (mon.1) 18975 : audit [DBG] from='client.? 10.1.207.132:0/2879827876' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:36.836506+0000 mgr.b (mgr.12834102) 26467 : cluster [DBG] pgmap v27042: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:37.731181+0000 osd.54 (osd.54) 52114 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:38.748249+0000 osd.54 (osd.54) 52115 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:38.837456+0000 mgr.b (mgr.12834102) 26468 : cluster [DBG] pgmap v27043: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:39.516328+0000 mon.k (mon.1) 18976 : audit [DBG] from='client.? 10.1.222.242:0/2896818129' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:39.775111+0000 osd.54 (osd.54) 52116 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:39.943184+0000 mon.k (mon.1) 18977 : audit [DBG] from='client.? 10.1.222.242:0/687007374' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:12:41.069+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:12:40.728591+0000 osd.54 (osd.54) 52117 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:40.838418+0000 mgr.b (mgr.12834102) 26469 : cluster [DBG] pgmap v27044: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:12:41.889+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:12:41.889+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:41.720360+0000 osd.54 (osd.54) 52118 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:41.895777+0000 mon.j (mon.0) 21711 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:42.746805+0000 osd.54 (osd.54) 52119 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:42.839387+0000 mgr.b (mgr.12834102) 26470 : cluster [DBG] pgmap v27045: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:12:44.325+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:12:44.325+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3993542171' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:43.703440+0000 osd.54 (osd.54) 52120 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:44.327814+0000 mon.j (mon.0) 21712 : audit [DBG] from='client.? 10.1.182.12:0/3993542171' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:44.661327+0000 osd.54 (osd.54) 52121 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:44.840321+0000 mgr.b (mgr.12834102) 26471 : cluster [DBG] pgmap v27046: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:44.891661+0000 mon.l (mon.2) 15654 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:44.891937+0000 mon.l (mon.2) 15655 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:46.077+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:12:46.485+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:12:46.485+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:12:45.688892+0000 osd.54 (osd.54) 52122 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:46.298024+0000 mon.k (mon.1) 18978 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:46.298316+0000 mon.k (mon.1) 18979 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:12:46.487547+0000 mon.j (mon.0) 21713 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:46.487820+0000 mon.j (mon.0) 21714 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:12:46.727996+0000 osd.54 (osd.54) 52123 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:46.841362+0000 mgr.b (mgr.12834102) 26472 : cluster [DBG] pgmap v27047: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:47.768333+0000 osd.54 (osd.54) 52124 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:48.809692+0000 osd.54 (osd.54) 52125 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:48.842324+0000 mgr.b (mgr.12834102) 26473 : cluster [DBG] pgmap v27048: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:49.765794+0000 osd.54 (osd.54) 52126 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:12:51.077+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:12:50.717992+0000 osd.54 (osd.54) 52127 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:50.843308+0000 mgr.b (mgr.12834102) 26474 : cluster [DBG] pgmap v27049: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:51.721908+0000 osd.54 (osd.54) 52128 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:52.683064+0000 osd.54 (osd.54) 52129 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:52.844383+0000 mgr.b (mgr.12834102) 26475 : cluster [DBG] pgmap v27050: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:12:53.652845+0000 osd.54 (osd.54) 52130 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:54.699726+0000 osd.54 (osd.54) 52131 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:54.845357+0000 mgr.b (mgr.12834102) 26476 : cluster [DBG] pgmap v27051: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:54.853372+0000 mon.l (mon.2) 15656 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:54.853683+0000 mon.l (mon.2) 15657 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:56.081+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:12:56.473+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:12:56.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:12:55.731288+0000 osd.54 (osd.54) 52132 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:56.293637+0000 mon.k (mon.1) 18980 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:56.293923+0000 mon.k (mon.1) 18981 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:12:56.477431+0000 mon.j (mon.0) 21715 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:12:56.477698+0000 mon.j (mon.0) 21716 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:12:56.893+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:12:56.893+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:56.724049+0000 osd.54 (osd.54) 52133 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:56.846321+0000 mgr.b (mgr.12834102) 26477 : cluster [DBG] pgmap v27052: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:12:56.895781+0000 mon.j (mon.0) 21717 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:12:57.078030+0000 mon.l (mon.2) 15658 : audit [DBG] from='client.? 10.1.207.132:0/998657965' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:57.726738+0000 osd.54 (osd.54) 52134 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:58.758041+0000 osd.54 (osd.54) 52135 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:12:58.847304+0000 mgr.b (mgr.12834102) 26478 : cluster [DBG] pgmap v27053: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:12:59.805+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:12:59.805+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/646786330' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:12:59.769035+0000 osd.54 (osd.54) 52136 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:12:59.807156+0000 mon.j (mon.0) 21718 : audit [DBG] from='client.? 10.1.182.12:0/646786330' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:13:01.081+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:00.768882+0000 osd.54 (osd.54) 52137 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:00.848275+0000 mgr.b (mgr.12834102) 26479 : cluster [DBG] pgmap v27054: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:01.753747+0000 osd.54 (osd.54) 52138 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:02.753819+0000 osd.54 (osd.54) 52139 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:02.849282+0000 mgr.b (mgr.12834102) 26480 : cluster [DBG] pgmap v27055: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:03.794954+0000 osd.54 (osd.54) 52140 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:04.320823+0000 mon.k (mon.1) 18982 : audit [DBG] from='client.? 10.1.222.242:0/2280337912' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:13:04.831334+0000 mon.l (mon.2) 15659 : audit [DBG] from='client.? 10.1.222.242:0/4218697496' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:04.838424+0000 osd.54 (osd.54) 52141 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:04.848532+0000 mon.l (mon.2) 15660 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:04.848821+0000 mon.l (mon.2) 15661 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:04.850239+0000 mgr.b (mgr.12834102) 26481 : cluster [DBG] pgmap v27056: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:13:06.081+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:13:06.489+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:13:06.489+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:05.823303+0000 osd.54 (osd.54) 52142 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:06.319315+0000 mon.k (mon.1) 18983 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:06.319579+0000 mon.k (mon.1) 18984 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:06.491828+0000 mon.j (mon.0) 21719 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:06.491991+0000 mon.j (mon.0) 21720 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:06.815609+0000 osd.54 (osd.54) 52143 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:06.851198+0000 mgr.b (mgr.12834102) 26482 : cluster [DBG] pgmap v27057: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:07.789766+0000 osd.54 (osd.54) 52144 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:08.792131+0000 osd.54 (osd.54) 52145 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:08.852171+0000 mgr.b (mgr.12834102) 26483 : cluster [DBG] pgmap v27058: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:09.813414+0000 osd.54 (osd.54) 52146 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:10.637501+0000 mon.k (mon.1) 18985 : audit [DBG] from='client.? 10.1.222.242:0/1438676015' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
debug 2023-07-18T20:13:11.085+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:10.853144+0000 mgr.b (mgr.12834102) 26484 : cluster [DBG] pgmap v27059: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:10.858719+0000 osd.54 (osd.54) 52147 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:11.054680+0000 mon.k (mon.1) 18986 : audit [DBG] from='client.? 10.1.222.242:0/2830158098' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:13:11.893+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:13:11.893+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:13:11.895931+0000 mon.j (mon.0) 21721 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:11.894513+0000 osd.54 (osd.54) 52148 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:12.854126+0000 mgr.b (mgr.12834102) 26485 : cluster [DBG] pgmap v27060: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:12.907615+0000 osd.54 (osd.54) 52149 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:15.284+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:13:15.284+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1147186061' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:13.954443+0000 osd.54 (osd.54) 52150 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:14.850924+0000 mon.l (mon.2) 15662 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:14.851121+0000 mon.l (mon.2) 15663 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:14.855126+0000 mgr.b (mgr.12834102) 26486 : cluster [DBG] pgmap v27061: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:13:15.289055+0000 mon.j (mon.0) 21722 : audit [DBG] from='client.? 10.1.182.12:0/1147186061' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:13:16.084+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:13:16.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:13:16.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:14.928209+0000 osd.54 (osd.54) 52151 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:16.306667+0000 mon.k (mon.1) 18987 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:16.306953+0000 mon.k (mon.1) 18988 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:16.494574+0000 mon.j (mon.0) 21723 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:16.494837+0000 mon.j (mon.0) 21724 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:15.910288+0000 osd.54 (osd.54) 52152 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:16.856113+0000 mgr.b (mgr.12834102) 26487 : cluster [DBG] pgmap v27062: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:13:17.230293+0000 mon.l (mon.2) 15664 : audit [DBG] from='client.? 10.1.207.132:0/249932011' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:16.905280+0000 osd.54 (osd.54) 52153 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:17.930984+0000 osd.54 (osd.54) 52154 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:18.857127+0000 mgr.b (mgr.12834102) 26488 : cluster [DBG] pgmap v27063: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:18.964907+0000 osd.54 (osd.54) 52155 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:21.088+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:19.983362+0000 osd.54 (osd.54) 52156 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:20.858105+0000 mgr.b (mgr.12834102) 26489 : cluster [DBG] pgmap v27064: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:20.935268+0000 osd.54 (osd.54) 52157 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:21.913003+0000 osd.54 (osd.54) 52158 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:22.859065+0000 mgr.b (mgr.12834102) 26490 : cluster [DBG] pgmap v27065: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:22.932276+0000 osd.54 (osd.54) 52159 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:23.915985+0000 osd.54 (osd.54) 52160 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:24.832642+0000 mon.l (mon.2) 15665 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:24.832942+0000 mon.l (mon.2) 15666 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:24.860016+0000 mgr.b (mgr.12834102) 26491 : cluster [DBG] pgmap v27066: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:24.958392+0000 osd.54 (osd.54) 52161 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:26.088+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:13:26.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:13:26.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:13:26.892+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:13:26.892+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:25.930478+0000 osd.54 (osd.54) 52162 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:26.295031+0000 mon.k (mon.1) 18989 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:26.295321+0000 mon.k (mon.1) 18990 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:26.488130+0000 mon.j (mon.0) 21725 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:26.488416+0000 mon.j (mon.0) 21726 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:26.895973+0000 mon.j (mon.0) 21727 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:26.860997+0000 mgr.b (mgr.12834102) 26492 : cluster [DBG] pgmap v27067: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:26.954394+0000 osd.54 (osd.54) 52163 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:27.945774+0000 osd.54 (osd.54) 52164 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:28.862050+0000 mgr.b (mgr.12834102) 26493 : cluster [DBG] pgmap v27068: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:28.901985+0000 osd.54 (osd.54) 52165 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:29.870282+0000 osd.54 (osd.54) 52166 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:30.768+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:13:30.768+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/129942571' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:13:31.092+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
audit 2023-07-18T20:13:30.769596+0000 mon.j (mon.0) 21728 : audit [DBG] from='client.? 10.1.182.12:0/129942571' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:30.827653+0000 osd.54 (osd.54) 52167 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:30.863089+0000 mgr.b (mgr.12834102) 26494 : cluster [DBG] pgmap v27069: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:31.865105+0000 osd.54 (osd.54) 52168 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:32.843521+0000 osd.54 (osd.54) 52169 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:32.864107+0000 mgr.b (mgr.12834102) 26495 : cluster [DBG] pgmap v27070: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:33.839158+0000 osd.54 (osd.54) 52170 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:34.842097+0000 osd.54 (osd.54) 52171 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:34.854845+0000 mon.l (mon.2) 15667 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:34.855041+0000 mon.l (mon.2) 15668 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:13:36.092+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:34.865189+0000 mgr.b (mgr.12834102) 26496 : cluster [DBG] pgmap v27071: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:13:35.392188+0000 mon.l (mon.2) 15669 : audit [DBG] from='client.? 10.1.222.242:0/3735504186' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:35.808393+0000 osd.54 (osd.54) 52172 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:35.808843+0000 mon.k (mon.1) 18991 : audit [DBG] from='client.? 10.1.222.242:0/3715738207' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:13:36.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:13:36.488+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:36.280415+0000 mon.k (mon.1) 18992 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:36.280706+0000 mon.k (mon.1) 18993 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:36.489346+0000 mon.j (mon.0) 21729 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:36.489487+0000 mon.j (mon.0) 21730 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:36.845180+0000 osd.54 (osd.54) 52173 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:36.866180+0000 mgr.b (mgr.12834102) 26497 : cluster [DBG] pgmap v27072: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:13:37.673000+0000 mon.l (mon.2) 15670 : audit [DBG] from='client.? 10.1.207.132:0/2669031169' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:37.893830+0000 osd.54 (osd.54) 52174 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:38.867171+0000 mgr.b (mgr.12834102) 26498 : cluster [DBG] pgmap v27073: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:38.931893+0000 osd.54 (osd.54) 52175 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:41.096+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:39.933101+0000 osd.54 (osd.54) 52176 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:41.892+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:13:41.892+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:40.868162+0000 mgr.b (mgr.12834102) 26499 : cluster [DBG] pgmap v27074: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:40.933534+0000 osd.54 (osd.54) 52177 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:41.679260+0000 mon.k (mon.1) 18994 : audit [DBG] from='client.? 10.1.222.242:0/3367830876' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:13:41.895229+0000 mon.j (mon.0) 21731 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:13:42.147117+0000 mon.l (mon.2) 15671 : audit [DBG] from='client.? 10.1.222.242:0/3947407680' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:41.898083+0000 osd.54 (osd.54) 52178 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:42.869167+0000 mgr.b (mgr.12834102) 26500 : cluster [DBG] pgmap v27075: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:42.900200+0000 osd.54 (osd.54) 52179 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:43.938619+0000 osd.54 (osd.54) 52180 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:44.861275+0000 mon.l (mon.2) 15672 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:44.861557+0000 mon.l (mon.2) 15673 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:13:46.100+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:44.870156+0000 mgr.b (mgr.12834102) 26501 : cluster [DBG] pgmap v27076: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:44.962190+0000 osd.54 (osd.54) 52181 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:46.232+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:13:46.232+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/607143033' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:13:46.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:13:46.492+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:45.921155+0000 osd.54 (osd.54) 52182 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:46.234666+0000 mon.j (mon.0) 21732 : audit [DBG] from='client.? 10.1.182.12:0/607143033' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:13:46.282314+0000 mon.k (mon.1) 18995 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:46.282481+0000 mon.k (mon.1) 18996 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:46.496417+0000 mon.j (mon.0) 21733 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:46.496688+0000 mon.j (mon.0) 21734 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:46.871145+0000 mgr.b (mgr.12834102) 26502 : cluster [DBG] pgmap v27077: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:46.917790+0000 osd.54 (osd.54) 52183 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:47.900691+0000 osd.54 (osd.54) 52184 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:48.881963+0000 osd.54 (osd.54) 52185 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:48.872110+0000 mgr.b (mgr.12834102) 26503 : cluster [DBG] pgmap v27078: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:13:51.100+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:49.921233+0000 osd.54 (osd.54) 52186 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:50.883623+0000 osd.54 (osd.54) 52187 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:50.873069+0000 mgr.b (mgr.12834102) 26504 : cluster [DBG] pgmap v27079: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:51.910228+0000 osd.54 (osd.54) 52188 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:52.874047+0000 mgr.b (mgr.12834102) 26505 : cluster [DBG] pgmap v27080: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:52.933882+0000 osd.54 (osd.54) 52189 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:53.940165+0000 osd.54 (osd.54) 52190 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:54.872269+0000 mon.l (mon.2) 15674 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:54.872626+0000 mon.l (mon.2) 15675 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:13:54.890811+0000 osd.54 (osd.54) 52191 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:13:56.100+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:54.874970+0000 mgr.b (mgr.12834102) 26506 : cluster [DBG] pgmap v27081: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:13:56.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:13:56.472+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:13:56.892+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:13:56.892+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:55.932523+0000 osd.54 (osd.54) 52192 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:56.306155+0000 mon.k (mon.1) 18997 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:56.306439+0000 mon.k (mon.1) 18998 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:56.476894+0000 mon.j (mon.0) 21735 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:13:56.477161+0000 mon.j (mon.0) 21736 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:13:56.895500+0000 mon.j (mon.0) 21737 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:56.875962+0000 mgr.b (mgr.12834102) 26507 : cluster [DBG] pgmap v27082: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:56.927923+0000 osd.54 (osd.54) 52193 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:13:57.767121+0000 mon.l (mon.2) 15676 : audit [DBG] from='client.? 10.1.207.132:0/3722718652' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:13:57.885156+0000 osd.54 (osd.54) 52194 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:13:58.876912+0000 mgr.b (mgr.12834102) 26508 : cluster [DBG] pgmap v27083: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:13:58.924988+0000 osd.54 (osd.54) 52195 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:14:01.104+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:13:59.914543+0000 osd.54 (osd.54) 52196 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:00.889233+0000 osd.54 (osd.54) 52197 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:14:01.332+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21744. Immutable memtables: 0.
debug 2023-07-18T20:14:01.332+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.337296) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:14:01.332+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1669] Flushing memtable with next log file: 21744
debug 2023-07-18T20:14:01.332+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241337391, "job": 1669, "event": "flush_started", "num_memtables": 1, "num_entries": 2168, "num_deletes": 539, "total_data_size": 3071606, "memory_usage": 3110328, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:14:01.336+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1669] Level-0 flush table #21745: started
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241350011, "cf_name": "default", "job": 1669, "event": "table_file_creation", "file_number": 21745, "file_size": 2427465, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 2418937, "index_size": 4399, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 29782, "raw_average_key_size": 24, "raw_value_size": 2398014, "raw_average_value_size": 1972, "num_data_blocks": 172, "num_entries": 1216, "num_deletions": 539, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689711126, "oldest_key_time": 1689711126, "file_creation_time": 1689711241, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1669] Level-0 flush table #21745: 2427465 bytes OK
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.350233) [db/memtable_list.cc:449] [default] Level-0 commit table #21745 started
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.350542) [db/memtable_list.cc:628] [default] Level-0 commit table #21745: memtable #1 done
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.350557) EVENT_LOG_v1 {"time_micros": 1689711241350552, "job": 1669, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.350570) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1669] Try to delete WAL files size 3060907, prev total WAL file size 3060907, number of live WAL files 2.
debug 2023-07-18T20:14:01.348+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021739.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:01.348+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:01.348+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.351339) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323535393234' seq:72057594037927935, type:20 .. '7061786F730036323536313736' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:14:01.348+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1670] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:14:01.348+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1670 Base level 0, inputs: [21745(2370KB)], [21741(64MB) 21742(64MB) 21743(3694KB)]
debug 2023-07-18T20:14:01.348+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241351446, "job": 1670, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21745], "files_L6": [21741, 21742, 21743], "score": -1, "input_data_size": 140742621}
debug 2023-07-18T20:14:01.560+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1670] Generated table #21746: 21966 keys, 67312886 bytes
debug 2023-07-18T20:14:01.560+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241563085, "cf_name": "default", "job": 1670, "event": "table_file_creation", "file_number": 21746, "file_size": 67312886, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67197862, "index_size": 59056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54981, "raw_key_size": 594751, "raw_average_key_size": 27, "raw_value_size": 66841045, "raw_average_value_size": 3042, "num_data_blocks": 2185, "num_entries": 21966, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711241, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:01.712+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:14:01.712+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/713856801' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:14:01.776+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1670] Generated table #21747: 13038 keys, 67273213 bytes
debug 2023-07-18T20:14:01.776+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241779093, "cf_name": "default", "job": 1670, "event": "table_file_creation", "file_number": 21747, "file_size": 67273213, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67146757, "index_size": 92761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 288788, "raw_average_key_size": 22, "raw_value_size": 66870296, "raw_average_value_size": 5128, "num_data_blocks": 3441, "num_entries": 13038, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711241, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:01.788+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1670] Generated table #21748: 516 keys, 4155248 bytes
debug 2023-07-18T20:14:01.788+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241791579, "cf_name": "default", "job": 1670, "event": "table_file_creation", "file_number": 21748, "file_size": 4155248, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 4147890, "index_size": 5027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10897, "raw_average_key_size": 21, "raw_value_size": 4135915, "raw_average_value_size": 8015, "num_data_blocks": 200, "num_entries": 516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711241, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:01.788+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1670] Compacted 1@0 + 3@6 files to L6 => 138741347 bytes
debug 2023-07-18T20:14:01.788+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.792536) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 319.8 rd, 315.2 wr, level 6, files in(1, 3) out(3) MB in(2.3, 131.9) out(132.3), read-write-amplify(115.1) write-amplify(57.2) OK, records in: 36616, records dropped: 1096 output_compression: NoCompression
debug 2023-07-18T20:14:01.788+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:14:01.792555) EVENT_LOG_v1 {"time_micros": 1689711241792545, "job": 1670, "event": "compaction_finished", "compaction_time_micros": 440141, "compaction_time_cpu_micros": 217069, "output_level": 6, "num_output_files": 3, "total_output_size": 138741347, "num_input_records": 36616, "num_output_records": 35520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:14:01.788+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021745.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:01.788+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241792999, "job": 1670, "event": "table_file_deletion", "file_number": 21745}
debug 2023-07-18T20:14:01.792+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021743.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:01.792+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241793508, "job": 1670, "event": "table_file_deletion", "file_number": 21743}
debug 2023-07-18T20:14:01.800+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021742.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:01.800+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241802728, "job": 1670, "event": "table_file_deletion", "file_number": 21742}
debug 2023-07-18T20:14:01.808+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021741.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:01.808+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711241812896, "job": 1670, "event": "table_file_deletion", "file_number": 21741}
debug 2023-07-18T20:14:01.808+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:01.808+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:01.808+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:01.808+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:01.808+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
cluster 2023-07-18T20:14:00.877879+0000 mgr.b (mgr.12834102) 26509 : cluster [DBG] pgmap v27084: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:01.716505+0000 mon.j (mon.0) 21738 : audit [DBG] from='client.? 10.1.182.12:0/713856801' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:01.870964+0000 osd.54 (osd.54) 52198 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:02.878838+0000 mgr.b (mgr.12834102) 26510 : cluster [DBG] pgmap v27085: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:02.900461+0000 osd.54 (osd.54) 52199 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:03.931450+0000 osd.54 (osd.54) 52200 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:04.879450+0000 mon.l (mon.2) 15677 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:04.879731+0000 mon.l (mon.2) 15678 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:04.879780+0000 mgr.b (mgr.12834102) 26511 : cluster [DBG] pgmap v27086: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:06.104+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:14:04.917565+0000 osd.54 (osd.54) 52201 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:06.301895+0000 mon.k (mon.1) 18999 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:06.302204+0000 mon.k (mon.1) 19000 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:14:06.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:14:06.484+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:05.942686+0000 osd.54 (osd.54) 52202 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:06.487421+0000 mon.j (mon.0) 21739 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:06.487706+0000 mon.j (mon.0) 21740 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:14:06.691343+0000 mon.l (mon.2) 15679 : audit [DBG] from='client.? 10.1.222.242:0/3126940305' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:06.880762+0000 mgr.b (mgr.12834102) 26512 : cluster [DBG] pgmap v27087: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:07.121809+0000 mon.k (mon.1) 19001 : audit [DBG] from='client.? 10.1.222.242:0/585633141' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:06.906823+0000 osd.54 (osd.54) 52203 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:07.953563+0000 osd.54 (osd.54) 52204 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:08.881741+0000 mgr.b (mgr.12834102) 26513 : cluster [DBG] pgmap v27088: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:08.971905+0000 osd.54 (osd.54) 52205 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:14:11.108+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:14:09.928572+0000 osd.54 (osd.54) 52206 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:10.882702+0000 mgr.b (mgr.12834102) 26514 : cluster [DBG] pgmap v27089: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:11.892+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:14:11.892+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:10.965708+0000 osd.54 (osd.54) 52207 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:11.895356+0000 mon.j (mon.0) 21741 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:11.994080+0000 osd.54 (osd.54) 52208 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:12.708814+0000 mon.l (mon.2) 15680 : audit [DBG] from='client.? 10.1.222.242:0/4051392942' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:12.883744+0000 mgr.b (mgr.12834102) 26515 : cluster [DBG] pgmap v27090: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:13.138934+0000 mon.k (mon.1) 19002 : audit [DBG] from='client.? 10.1.222.242:0/752625521' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:13.005223+0000 osd.54 (osd.54) 52209 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:13.968750+0000 osd.54 (osd.54) 52210 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:14.870263+0000 mon.l (mon.2) 15681 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:14.870541+0000 mon.l (mon.2) 15682 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:14.884701+0000 mgr.b (mgr.12834102) 26516 : cluster [DBG] pgmap v27091: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:16.108+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:14:14.940483+0000 osd.54 (osd.54) 52211 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:16.156877+0000 mon.k (mon.1) 19003 : audit [DBG] from='client.? 10.1.222.242:0/2154775630' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
audit 2023-07-18T20:14:16.281160+0000 mon.k (mon.1) 19004 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:16.281443+0000 mon.k (mon.1) 19005 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:14:16.476+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:14:16.476+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:14:16.576+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-replica-default", "root": "default", "type": "host", "format": "json"} v 0) v1
debug 2023-07-18T20:14:16.576+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-replica-default", "root": "default", "type": "host", "format": "json"}]: dispatch
debug 2023-07-18T20:14:17.268+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:14:17.268+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2149512341' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:15.921654+0000 osd.54 (osd.54) 52212 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:16.480979+0000 mon.j (mon.0) 21742 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:16.481134+0000 mon.j (mon.0) 21743 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:14:16.577708+0000 mon.k (mon.1) 19006 : audit [INF] from='client.? 10.1.222.242:0/1704883975' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-replica-default", "root": "default", "type": "host", "format": "json"}]: dispatch
audit 2023-07-18T20:14:16.578770+0000 mon.j (mon.0) 21744 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-replica-default", "root": "default", "type": "host", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:16.885305+0000 mgr.b (mgr.12834102) 26517 : cluster [DBG] pgmap v27092: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:17.046867+0000 mon.k (mon.1) 19007 : audit [DBG] from='client.? 10.1.222.242:0/3320679593' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-replica-default", "var": "all", "format": "json"}]: dispatch
audit 2023-07-18T20:14:17.269768+0000 mon.j (mon.0) 21745 : audit [DBG] from='client.? 10.1.182.12:0/2149512341' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:14:17.804+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:14:17.804+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/3322981702' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:14:17.964+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable", "pool": "ceph-replica-default", "app": "rbd", "yes_i_really_mean_it": true, "format": "json"} v 0) v1
debug 2023-07-18T20:14:17.964+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ceph-replica-default", "app": "rbd", "yes_i_really_mean_it": true, "format": "json"}]: dispatch
debug 2023-07-18T20:14:18.472+0000 7f7fb651d700 1 mon.j@0(leader).osd e20727 do_prune osdmap full prune enabled
cluster 2023-07-18T20:14:16.962768+0000 osd.54 (osd.54) 52213 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:17.530948+0000 mon.l (mon.2) 15683 : audit [DBG] from='client.? 10.1.222.242:0/455405094' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T20:14:17.805685+0000 mon.j (mon.0) 21746 : audit [DBG] from='client.? 10.1.207.132:0/3322981702' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:14:17.968559+0000 mon.j (mon.0) 21747 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ceph-replica-default", "app": "rbd", "yes_i_really_mean_it": true, "format": "json"}]: dispatch
audit 2023-07-18T20:14:17.968570+0000 mon.l (mon.2) 15684 : audit [INF] from='client.? 10.1.222.242:0/420701380' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ceph-replica-default", "app": "rbd", "yes_i_really_mean_it": true, "format": "json"}]: dispatch
debug 2023-07-18T20:14:18.488+0000 7f7fb1513700 1 mon.j@0(leader).osd e20728 e20728: 57 total, 3 up, 41 in
debug 2023-07-18T20:14:18.488+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ceph-replica-default", "app": "rbd", "yes_i_really_mean_it": true, "format": "json"}]': finished
debug 2023-07-18T20:14:18.488+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20728: 57 total, 3 up, 41 in
cluster 2023-07-18T20:14:17.987087+0000 osd.54 (osd.54) 52214 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:18.492846+0000 mon.j (mon.0) 21748 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ceph-replica-default", "app": "rbd", "yes_i_really_mean_it": true, "format": "json"}]': finished
cluster 2023-07-18T20:14:18.492928+0000 mon.j (mon.0) 21749 : cluster [DBG] osdmap e20728: 57 total, 3 up, 41 in
cluster 2023-07-18T20:14:18.886221+0000 mgr.b (mgr.12834102) 26518 : cluster [DBG] pgmap v27094: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:18.910835+0000 mon.k (mon.1) 19008 : audit [DBG] from='client.? 10.1.222.242:0/2740181536' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-replica-default", "var": "all", "format": "json"}]: dispatch
debug 2023-07-18T20:14:19.692+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-replica-default","app": "rbd"} v 0) v1
debug 2023-07-18T20:14:19.692+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T20:14:20.496+0000 7f7fb651d700 1 mon.j@0(leader).osd e20728 do_prune osdmap full prune enabled
debug 2023-07-18T20:14:20.504+0000 7f7fb1513700 1 mon.j@0(leader).osd e20729 e20729: 57 total, 3 up, 41 in
debug 2023-07-18T20:14:20.508+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-replica-default","app": "rbd"}]': finished
debug 2023-07-18T20:14:20.508+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20729: 57 total, 3 up, 41 in
cluster 2023-07-18T20:14:19.026970+0000 osd.54 (osd.54) 52215 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:19.630294+0000 mon.k (mon.1) 19009 : audit [DBG] from='client.? 10.1.222.242:0/4182376399' entity='client.admin' cmd=[{"prefix": "osd crush rule dump", "name": "ceph-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T20:14:19.695392+0000 mon.k (mon.1) 19010 : audit [INF] from='client.? 10.1.222.242:0/1712436555' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-replica-default","app": "rbd"}]: dispatch
audit 2023-07-18T20:14:19.696315+0000 mon.j (mon.0) 21750 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T20:14:21.112+0000 7f7fb651d700 1 mon.j@0(leader).osd e20729 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:14:19.979623+0000 osd.54 (osd.54) 52216 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:20.509655+0000 mon.j (mon.0) 21751 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-replica-default","app": "rbd"}]': finished
cluster 2023-07-18T20:14:20.509728+0000 mon.j (mon.0) 21752 : cluster [DBG] osdmap e20729: 57 total, 3 up, 41 in
cluster 2023-07-18T20:14:20.887185+0000 mgr.b (mgr.12834102) 26519 : cluster [DBG] pgmap v27096: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:20.977913+0000 osd.54 (osd.54) 52217 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:22.014684+0000 osd.54 (osd.54) 52218 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:22.888126+0000 mgr.b (mgr.12834102) 26520 : cluster [DBG] pgmap v27097: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:23.042606+0000 osd.54 (osd.54) 52219 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:24.064753+0000 osd.54 (osd.54) 52220 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:24.846893+0000 mon.l (mon.2) 15685 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:24.847158+0000 mon.l (mon.2) 15686 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:24.889101+0000 mgr.b (mgr.12834102) 26521 : cluster [DBG] pgmap v27098: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:26.112+0000 7f7fb651d700 1 mon.j@0(leader).osd e20729 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:14:26.496+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:14:26.496+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:25.025963+0000 osd.54 (osd.54) 52221 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:26.284259+0000 mon.k (mon.1) 19011 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:26.284556+0000 mon.k (mon.1) 19012 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:14:26.499145+0000 mon.j (mon.0) 21753 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:26.499410+0000 mon.j (mon.0) 21754 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:14:26.892+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:14:26.892+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:26.024295+0000 osd.54 (osd.54) 52222 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:26.890069+0000 mgr.b (mgr.12834102) 26522 : cluster [DBG] pgmap v27099: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:26.895375+0000 mon.j (mon.0) 21755 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:27.041096+0000 osd.54 (osd.54) 52223 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:28.020755+0000 osd.54 (osd.54) 52224 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:28.891117+0000 mgr.b (mgr.12834102) 26523 : cluster [DBG] pgmap v27100: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:29.043654+0000 osd.54 (osd.54) 52225 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:14:31.115+0000 7f7fb651d700 1 mon.j@0(leader).osd e20729 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:14:30.040199+0000 osd.54 (osd.54) 52226 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:30.892067+0000 mgr.b (mgr.12834102) 26524 : cluster [DBG] pgmap v27101: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:31.084995+0000 osd.54 (osd.54) 52227 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:14:32.747+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:14:32.747+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3128427346' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:32.087763+0000 osd.54 (osd.54) 52228 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:32.750783+0000 mon.j (mon.0) 21756 : audit [DBG] from='client.? 10.1.182.12:0/3128427346' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:32.893037+0000 mgr.b (mgr.12834102) 26525 : cluster [DBG] pgmap v27102: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:33.055613+0000 osd.54 (osd.54) 52229 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:34.083503+0000 osd.54 (osd.54) 52230 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:34.871264+0000 mon.l (mon.2) 15687 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:34.871535+0000 mon.l (mon.2) 15688 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:34.894020+0000 mgr.b (mgr.12834102) 26526 : cluster [DBG] pgmap v27103: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:36.115+0000 7f7fb651d700 1 mon.j@0(leader).osd e20729 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:14:36.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:14:36.483+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:35.109803+0000 osd.54 (osd.54) 52231 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:36.313751+0000 mon.k (mon.1) 19013 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:36.314042+0000 mon.k (mon.1) 19014 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:14:36.488792+0000 mon.j (mon.0) 21757 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:36.489052+0000 mon.j (mon.0) 21758 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:36.132090+0000 osd.54 (osd.54) 52232 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:36.894977+0000 mgr.b (mgr.12834102) 26527 : cluster [DBG] pgmap v27104: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:38.079+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:14:38.079+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/2116087561' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:37.157175+0000 osd.54 (osd.54) 52233 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:37.696538+0000 mon.k (mon.1) 19015 : audit [DBG] from='client.? 10.1.222.242:0/1174392434' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:14:38.082895+0000 mon.j (mon.0) 21759 : audit [DBG] from='client.? 10.1.207.132:0/2116087561' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:14:38.111791+0000 mon.l (mon.2) 15689 : audit [DBG] from='client.? 10.1.222.242:0/2568642459' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:38.197886+0000 osd.54 (osd.54) 52234 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:38.895917+0000 mgr.b (mgr.12834102) 26528 : cluster [DBG] pgmap v27105: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:39.183201+0000 osd.54 (osd.54) 52235 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:14:41.115+0000 7f7fb651d700 1 mon.j@0(leader).osd e20729 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:14:41.123+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21749. Immutable memtables: 0.
debug 2023-07-18T20:14:41.123+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.126717) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:14:41.123+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1671] Flushing memtable with next log file: 21749
debug 2023-07-18T20:14:41.123+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281126766, "job": 1671, "event": "flush_started", "num_memtables": 1, "num_entries": 942, "num_deletes": 367, "total_data_size": 1173739, "memory_usage": 1191704, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:14:41.123+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1671] Level-0 flush table #21750: started
debug 2023-07-18T20:14:41.127+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281132908, "cf_name": "default", "job": 1671, "event": "table_file_creation", "file_number": 21750, "file_size": 966287, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 961945, "index_size": 1755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 14575, "raw_average_key_size": 23, "raw_value_size": 951607, "raw_average_value_size": 1520, "num_data_blocks": 70, "num_entries": 626, "num_deletions": 367, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689711242, "oldest_key_time": 1689711242, "file_creation_time": 1689711281, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:41.127+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1671] Level-0 flush table #21750: 966287 bytes OK
debug 2023-07-18T20:14:41.131+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.133100) [db/memtable_list.cc:449] [default] Level-0 commit table #21750 started
debug 2023-07-18T20:14:41.131+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.133361) [db/memtable_list.cc:628] [default] Level-0 commit table #21750: memtable #1 done
debug 2023-07-18T20:14:41.131+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.133384) EVENT_LOG_v1 {"time_micros": 1689711281133376, "job": 1671, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:14:41.131+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.133404) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:14:41.131+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1671] Try to delete WAL files size 1168530, prev total WAL file size 1168898, number of live WAL files 2.
debug 2023-07-18T20:14:41.131+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021744.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:41.131+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:41.131+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.134053) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323638383838' seq:72057594037927935, type:20 .. '6C6F676D0033323639313430' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:14:41.131+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1672] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:14:41.131+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1672 Base level 0, inputs: [21750(943KB)], [21746(64MB) 21747(64MB) 21748(4057KB)]
debug 2023-07-18T20:14:41.131+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281134111, "job": 1672, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21750], "files_L6": [21746, 21747, 21748], "score": -1, "input_data_size": 139707634}
debug 2023-07-18T20:14:41.339+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1672] Generated table #21751: 21751 keys, 67269171 bytes
debug 2023-07-18T20:14:41.339+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281342803, "cf_name": "default", "job": 1672, "event": "table_file_creation", "file_number": 21751, "file_size": 67269171, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67155387, "index_size": 58328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54469, "raw_key_size": 590414, "raw_average_key_size": 27, "raw_value_size": 66801864, "raw_average_value_size": 3071, "num_data_blocks": 2154, "num_entries": 21751, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711281, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:41.551+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1672] Generated table #21752: 13053 keys, 67287338 bytes
debug 2023-07-18T20:14:41.551+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281555254, "cf_name": "default", "job": 1672, "event": "table_file_creation", "file_number": 21752, "file_size": 67287338, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67160479, "index_size": 93164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 289139, "raw_average_key_size": 22, "raw_value_size": 66883441, "raw_average_value_size": 5123, "num_data_blocks": 3455, "num_entries": 13053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711281, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1672] Generated table #21753: 590 keys, 4943903 bytes
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281571212, "cf_name": "default", "job": 1672, "event": "table_file_creation", "file_number": 21753, "file_size": 4943903, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 4935327, "index_size": 5989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12469, "raw_average_key_size": 21, "raw_value_size": 4921418, "raw_average_value_size": 8341, "num_data_blocks": 237, "num_entries": 590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711281, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1672] Compacted 1@0 + 3@6 files to L6 => 139500412 bytes
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.572286) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 319.6 rd, 319.1 wr, level 6, files in(1, 3) out(3) MB in(0.9, 132.3) out(133.0), read-write-amplify(288.9) write-amplify(144.4) OK, records in: 36146, records dropped: 752 output_compression: NoCompression
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:14:41.572303) EVENT_LOG_v1 {"time_micros": 1689711281572296, "job": 1672, "event": "compaction_finished", "compaction_time_micros": 437144, "compaction_time_cpu_micros": 215437, "output_level": 6, "num_output_files": 3, "total_output_size": 139500412, "num_input_records": 36146, "num_output_records": 35394, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021750.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281572549, "job": 1672, "event": "table_file_deletion", "file_number": 21750}
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021748.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:41.567+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281573069, "job": 1672, "event": "table_file_deletion", "file_number": 21748}
debug 2023-07-18T20:14:41.579+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021747.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:41.579+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281581991, "job": 1672, "event": "table_file_deletion", "file_number": 21747}
debug 2023-07-18T20:14:41.587+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021746.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:14:41.587+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711281591478, "job": 1672, "event": "table_file_deletion", "file_number": 21746}
debug 2023-07-18T20:14:41.587+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:41.587+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:41.587+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:41.587+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:14:41.587+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
cluster 2023-07-18T20:14:40.137335+0000 osd.54 (osd.54) 52236 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:40.896885+0000 mgr.b (mgr.12834102) 26529 : cluster [DBG] pgmap v27106: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:41.891+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:14:41.891+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:41.158946+0000 osd.54 (osd.54) 52237 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:41.895305+0000 mon.j (mon.0) 21760 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:42.183946+0000 osd.54 (osd.54) 52238 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:42.897905+0000 mgr.b (mgr.12834102) 26530 : cluster [DBG] pgmap v27107: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:43.213717+0000 osd.54 (osd.54) 52239 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:43.757572+0000 mon.k (mon.1) 19016 : audit [DBG] from='client.? 10.1.222.242:0/2248196298' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:14:44.191560+0000 mon.l (mon.2) 15690 : audit [DBG] from='client.? 10.1.222.242:0/3769700455' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:14:45.431+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-erasure-default-md", "root": "default", "type": "osd", "class": "ssd", "format": "json"} v 0) v1
debug 2023-07-18T20:14:45.431+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-erasure-default-md", "root": "default", "type": "osd", "class": "ssd", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:44.198607+0000 osd.54 (osd.54) 52240 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:44.857638+0000 mon.l (mon.2) 15691 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:44.857931+0000 mon.l (mon.2) 15692 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:44.898838+0000 mgr.b (mgr.12834102) 26531 : cluster [DBG] pgmap v27108: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:44.933654+0000 mon.k (mon.1) 19017 : audit [DBG] from='client.? 10.1.222.242:0/2805978617' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
audit 2023-07-18T20:14:45.433410+0000 mon.k (mon.1) 19018 : audit [INF] from='client.? 10.1.222.242:0/1584119829' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-erasure-default-md", "root": "default", "type": "osd", "class": "ssd", "format": "json"}]: dispatch
audit 2023-07-18T20:14:45.434326+0000 mon.j (mon.0) 21761 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-erasure-default-md", "root": "default", "type": "osd", "class": "ssd", "format": "json"}]: dispatch
debug 2023-07-18T20:14:46.123+0000 7f7fb651d700 1 mon.j@0(leader).osd e20729 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:14:46.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:14:46.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:45.217106+0000 osd.54 (osd.54) 52241 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:45.929953+0000 mon.k (mon.1) 19019 : audit [DBG] from='client.? 10.1.222.242:0/2822714554' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-ssd-erasure-default-md", "var": "all", "format": "json"}]: dispatch
audit 2023-07-18T20:14:46.293030+0000 mon.k (mon.1) 19020 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:46.293328+0000 mon.k (mon.1) 19021 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:14:46.422872+0000 mon.k (mon.1) 19022 : audit [DBG] from='client.? 10.1.222.242:0/861154033' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-ssd-erasure-default-md", "format": "json"}]: dispatch
audit 2023-07-18T20:14:46.492466+0000 mon.j (mon.0) 21762 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:46.492732+0000 mon.j (mon.0) 21763 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:14:47.695+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-md","app": "rbd"} v 0) v1
debug 2023-07-18T20:14:47.695+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-md","app": "rbd"}]: dispatch
debug 2023-07-18T20:14:47.707+0000 7f7fb651d700 1 mon.j@0(leader).osd e20729 do_prune osdmap full prune enabled
cluster 2023-07-18T20:14:46.214041+0000 osd.54 (osd.54) 52242 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:46.899815+0000 mgr.b (mgr.12834102) 26532 : cluster [DBG] pgmap v27109: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:14:46.907035+0000 mon.l (mon.2) 15693 : audit [DBG] from='client.? 10.1.222.242:0/2555281805' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-ssd-erasure-default-md", "var": "all", "format": "json"}]: dispatch
audit 2023-07-18T20:14:47.633191+0000 mon.k (mon.1) 19023 : audit [DBG] from='client.? 10.1.222.242:0/2923441506' entity='client.admin' cmd=[{"prefix": "osd crush rule dump", "name": "ceph-ssd-erasure-default-md_osd", "format": "json"}]: dispatch
audit 2023-07-18T20:14:47.698999+0000 mon.k (mon.1) 19024 : audit [INF] from='client.? 10.1.222.242:0/2213646153' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-md","app": "rbd"}]: dispatch
audit 2023-07-18T20:14:47.699926+0000 mon.j (mon.0) 21764 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-md","app": "rbd"}]: dispatch
debug 2023-07-18T20:14:47.719+0000 7f7fb1513700 1 mon.j@0(leader).osd e20730 e20730: 57 total, 3 up, 41 in
debug 2023-07-18T20:14:47.723+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-md","app": "rbd"}]': finished
debug 2023-07-18T20:14:47.723+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20730: 57 total, 3 up, 41 in
debug 2023-07-18T20:14:48.231+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:14:48.231+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1553965145' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:47.171015+0000 osd.54 (osd.54) 52243 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:47.726936+0000 mon.j (mon.0) 21765 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-md","app": "rbd"}]': finished
cluster 2023-07-18T20:14:47.726980+0000 mon.j (mon.0) 21766 : cluster [DBG] osdmap e20730: 57 total, 3 up, 41 in
audit 2023-07-18T20:14:48.234736+0000 mon.j (mon.0) 21767 : audit [DBG] from='client.? 10.1.182.12:0/1553965145' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:48.206166+0000 osd.54 (osd.54) 52244 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:48.900862+0000 mgr.b (mgr.12834102) 26533 : cluster [DBG] pgmap v27111: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:49.239748+0000 osd.54 (osd.54) 52245 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:14:51.123+0000 7f7fb651d700 1 mon.j@0(leader).osd e20730 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:14:50.248873+0000 osd.54 (osd.54) 52246 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:50.901843+0000 mgr.b (mgr.12834102) 26534 : cluster [DBG] pgmap v27112: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:51.214289+0000 osd.54 (osd.54) 52247 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:52.239869+0000 osd.54 (osd.54) 52248 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:52.902837+0000 mgr.b (mgr.12834102) 26535 : cluster [DBG] pgmap v27113: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:53.234312+0000 osd.54 (osd.54) 52249 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:54.279613+0000 osd.54 (osd.54) 52250 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:54.854679+0000 mon.l (mon.2) 15694 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:54.855014+0000 mon.l (mon.2) 15695 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:54.903815+0000 mgr.b (mgr.12834102) 26536 : cluster [DBG] pgmap v27114: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:56.123+0000 7f7fb651d700 1 mon.j@0(leader).osd e20730 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:14:56.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:14:56.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:14:55.242852+0000 osd.54 (osd.54) 52251 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:56.298354+0000 mon.k (mon.1) 19025 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:56.298659+0000 mon.k (mon.1) 19026 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:14:56.495110+0000 mon.j (mon.0) 21768 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:14:56.495377+0000 mon.j (mon.0) 21769 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:14:56.891+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:14:56.891+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:56.196339+0000 osd.54 (osd.54) 52252 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:56.895782+0000 mon.j (mon.0) 21770 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:56.907250+0000 mgr.b (mgr.12834102) 26537 : cluster [DBG] pgmap v27115: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:14:58.239+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:14:58.239+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/952095976' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:57.202182+0000 osd.54 (osd.54) 52253 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:14:58.243032+0000 mon.j (mon.0) 21771 : audit [DBG] from='client.? 10.1.207.132:0/952095976' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:14:58.173894+0000 osd.54 (osd.54) 52254 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:14:58.908233+0000 mgr.b (mgr.12834102) 26538 : cluster [DBG] pgmap v27116: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:14:59.159097+0000 osd.54 (osd.54) 52255 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:01.127+0000 7f7fb651d700 1 mon.j@0(leader).osd e20730 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:00.162152+0000 osd.54 (osd.54) 52256 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:00.909211+0000 mgr.b (mgr.12834102) 26539 : cluster [DBG] pgmap v27117: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:01.159266+0000 osd.54 (osd.54) 52257 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:03.703+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:15:03.703+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/4250359764' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:02.118621+0000 osd.54 (osd.54) 52258 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:02.910203+0000 mgr.b (mgr.12834102) 26540 : cluster [DBG] pgmap v27118: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:15:03.705798+0000 mon.j (mon.0) 21772 : audit [DBG] from='client.? 10.1.182.12:0/4250359764' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:15:04.491+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-erasure-default-md", "root": "default", "type": "osd", "class": "nvme", "format": "json"} v 0) v1
debug 2023-07-18T20:15:04.491+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-erasure-default-md", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:03.083276+0000 osd.54 (osd.54) 52259 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:04.086303+0000 mon.k (mon.1) 19027 : audit [DBG] from='client.? 10.1.222.242:0/240841020' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
audit 2023-07-18T20:15:04.496870+0000 mon.j (mon.0) 21773 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-erasure-default-md", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
audit 2023-07-18T20:15:04.496998+0000 mon.l (mon.2) 15696 : audit [INF] from='client.? 10.1.222.242:0/2745880721' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-erasure-default-md", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:04.113494+0000 osd.54 (osd.54) 52260 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:04.867944+0000 mon.l (mon.2) 15697 : audit [DBG] from='client.? 10.1.222.242:0/574463974' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-nvme-erasure-default-md", "var": "all", "format": "json"}]: dispatch
audit 2023-07-18T20:15:04.868673+0000 mon.l (mon.2) 15698 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:04.868927+0000 mon.l (mon.2) 15699 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:04.911805+0000 mgr.b (mgr.12834102) 26541 : cluster [DBG] pgmap v27119: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:15:05.316110+0000 mon.k (mon.1) 19028 : audit [DBG] from='client.? 10.1.222.242:0/65852191' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-nvme-erasure-default-md", "format": "json"}]: dispatch
audit 2023-07-18T20:15:05.786278+0000 mon.k (mon.1) 19029 : audit [DBG] from='client.? 10.1.222.242:0/1632206302' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-nvme-erasure-default-md", "var": "all", "format": "json"}]: dispatch
debug 2023-07-18T20:15:06.127+0000 7f7fb651d700 1 mon.j@0(leader).osd e20730 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:15:06.287+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-md","app": "rbd"} v 0) v1
debug 2023-07-18T20:15:06.287+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-md","app": "rbd"}]: dispatch
debug 2023-07-18T20:15:06.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:15:06.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:05.071008+0000 osd.54 (osd.54) 52261 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:06.230262+0000 mon.k (mon.1) 19030 : audit [DBG] from='client.? 10.1.222.242:0/653931979' entity='client.admin' cmd=[{"prefix": "osd crush rule dump", "name": "ceph-nvme-erasure-default-md_osd", "format": "json"}]: dispatch
audit 2023-07-18T20:15:06.291756+0000 mon.k (mon.1) 19031 : audit [INF] from='client.? 10.1.222.242:0/2085638260' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-md","app": "rbd"}]: dispatch
audit 2023-07-18T20:15:06.292710+0000 mon.j (mon.0) 21774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-md","app": "rbd"}]: dispatch
audit 2023-07-18T20:15:06.298502+0000 mon.k (mon.1) 19032 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:06.298859+0000 mon.k (mon.1) 19033 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:15:06.481521+0000 mon.j (mon.0) 21775 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:06.481788+0000 mon.j (mon.0) 21776 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:07.131+0000 7f7fb651d700 1 mon.j@0(leader).osd e20730 do_prune osdmap full prune enabled
debug 2023-07-18T20:15:07.135+0000 7f7fb1513700 1 mon.j@0(leader).osd e20731 e20731: 57 total, 3 up, 41 in
debug 2023-07-18T20:15:07.139+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-md","app": "rbd"}]': finished
debug 2023-07-18T20:15:07.139+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20731: 57 total, 3 up, 41 in
cluster 2023-07-18T20:15:06.110168+0000 osd.54 (osd.54) 52262 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:06.912913+0000 mgr.b (mgr.12834102) 26542 : cluster [DBG] pgmap v27120: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:15:07.143842+0000 mon.j (mon.0) 21777 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-md","app": "rbd"}]': finished
cluster 2023-07-18T20:15:07.143945+0000 mon.j (mon.0) 21778 : cluster [DBG] osdmap e20731: 57 total, 3 up, 41 in
cluster 2023-07-18T20:15:07.152227+0000 osd.54 (osd.54) 52263 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:08.651818+0000 mon.k (mon.1) 19034 : audit [DBG] from='client.? 10.1.222.242:0/1429422020' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:08.186132+0000 osd.54 (osd.54) 52264 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:08.913843+0000 mgr.b (mgr.12834102) 26543 : cluster [DBG] pgmap v27122: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:15:09.051020+0000 mon.k (mon.1) 19035 : audit [DBG] from='client.? 10.1.222.242:0/169707875' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:09.174731+0000 osd.54 (osd.54) 52265 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:11.127+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:10.198574+0000 osd.54 (osd.54) 52266 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:10.914814+0000 mgr.b (mgr.12834102) 26544 : cluster [DBG] pgmap v27123: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:15:11.895+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:15:11.895+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:11.204007+0000 osd.54 (osd.54) 52267 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:11.899282+0000 mon.j (mon.0) 21779 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:12.222004+0000 osd.54 (osd.54) 52268 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:12.915780+0000 mgr.b (mgr.12834102) 26545 : cluster [DBG] pgmap v27124: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:13.237298+0000 osd.54 (osd.54) 52269 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:14.776912+0000 mon.k (mon.1) 19036 : audit [DBG] from='client.? 10.1.222.242:0/4294634280' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:15:14.857813+0000 mon.l (mon.2) 15700 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:14.858093+0000 mon.l (mon.2) 15701 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:14.258134+0000 osd.54 (osd.54) 52270 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:14.916746+0000 mgr.b (mgr.12834102) 26546 : cluster [DBG] pgmap v27125: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:15:15.232878+0000 mon.k (mon.1) 19037 : audit [DBG] from='client.? 10.1.222.242:0/1778545030' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:15:16.131+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:15:16.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:15:16.487+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:15.281795+0000 osd.54 (osd.54) 52271 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:16.250528+0000 osd.54 (osd.54) 52272 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:16.293649+0000 mon.k (mon.1) 19038 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:16.293960+0000 mon.k (mon.1) 19039 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:15:16.491357+0000 mon.j (mon.0) 21780 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:16.491516+0000 mon.j (mon.0) 21781 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:16.917713+0000 mgr.b (mgr.12834102) 26547 : cluster [DBG] pgmap v27126: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:17.265641+0000 osd.54 (osd.54) 52273 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:18.232004+0000 osd.54 (osd.54) 52274 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:18.375379+0000 mon.k (mon.1) 19040 : audit [DBG] from='client.? 10.1.207.132:0/3724983064' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:15:19.171+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:15:19.171+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1671070518' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:18.918609+0000 mgr.b (mgr.12834102) 26548 : cluster [DBG] pgmap v27127: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:15:19.177182+0000 mon.j (mon.0) 21782 : audit [DBG] from='client.? 10.1.182.12:0/1671070518' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:19.254966+0000 osd.54 (osd.54) 52275 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:21.131+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:20.253972+0000 osd.54 (osd.54) 52276 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:20.919588+0000 mgr.b (mgr.12834102) 26549 : cluster [DBG] pgmap v27128: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:21.218092+0000 osd.54 (osd.54) 52277 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:22.247360+0000 osd.54 (osd.54) 52278 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:22.920601+0000 mgr.b (mgr.12834102) 26550 : cluster [DBG] pgmap v27129: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:23.211388+0000 osd.54 (osd.54) 52279 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:24.179450+0000 osd.54 (osd.54) 52280 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:24.862882+0000 mon.l (mon.2) 15702 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:24.863162+0000 mon.l (mon.2) 15703 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:26.135+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:24.921583+0000 mgr.b (mgr.12834102) 26551 : cluster [DBG] pgmap v27130: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:25.133185+0000 osd.54 (osd.54) 52281 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:26.495+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:15:26.495+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:26.895+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:15:26.895+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:26.123104+0000 osd.54 (osd.54) 52282 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:26.307475+0000 mon.k (mon.1) 19041 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:26.307773+0000 mon.k (mon.1) 19042 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:15:26.499009+0000 mon.j (mon.0) 21783 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:26.499275+0000 mon.j (mon.0) 21784 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:15:26.899557+0000 mon.j (mon.0) 21785 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:26.922573+0000 mgr.b (mgr.12834102) 26552 : cluster [DBG] pgmap v27131: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:27.113953+0000 osd.54 (osd.54) 52283 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:28.142808+0000 osd.54 (osd.54) 52284 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:28.923552+0000 mgr.b (mgr.12834102) 26553 : cluster [DBG] pgmap v27132: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:29.096906+0000 osd.54 (osd.54) 52285 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:31.135+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:30.082744+0000 osd.54 (osd.54) 52286 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:30.924535+0000 mgr.b (mgr.12834102) 26554 : cluster [DBG] pgmap v27133: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:31.113003+0000 osd.54 (osd.54) 52287 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:32.075475+0000 osd.54 (osd.54) 52288 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:32.925515+0000 mgr.b (mgr.12834102) 26555 : cluster [DBG] pgmap v27134: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:33.097175+0000 osd.54 (osd.54) 52289 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:34.647+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:15:34.647+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3915506203' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:34.125715+0000 osd.54 (osd.54) 52290 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:34.651339+0000 mon.j (mon.0) 21786 : audit [DBG] from='client.? 10.1.182.12:0/3915506203' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:15:34.875209+0000 mon.l (mon.2) 15704 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:34.875479+0000 mon.l (mon.2) 15705 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:36.139+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:34.926490+0000 mgr.b (mgr.12834102) 26556 : cluster [DBG] pgmap v27135: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:35.169368+0000 osd.54 (osd.54) 52291 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:36.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:15:36.491+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:36.142626+0000 osd.54 (osd.54) 52292 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:36.295444+0000 mon.k (mon.1) 19043 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:36.295730+0000 mon.k (mon.1) 19044 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:15:36.496315+0000 mon.j (mon.0) 21787 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:36.496580+0000 mon.j (mon.0) 21788 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:36.927353+0000 mgr.b (mgr.12834102) 26557 : cluster [DBG] pgmap v27136: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:37.172310+0000 osd.54 (osd.54) 52293 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:38.143693+0000 osd.54 (osd.54) 52294 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:38.705283+0000 mon.l (mon.2) 15706 : audit [DBG] from='client.? 10.1.207.132:0/1363262901' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:38.928082+0000 mgr.b (mgr.12834102) 26558 : cluster [DBG] pgmap v27137: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:39.153989+0000 osd.54 (osd.54) 52295 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:39.602732+0000 mon.k (mon.1) 19045 : audit [DBG] from='client.? 10.1.222.242:0/849845197' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:15:40.061147+0000 mon.k (mon.1) 19046 : audit [DBG] from='client.? 10.1.222.242:0/1417928854' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:15:41.139+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:40.179197+0000 osd.54 (osd.54) 52296 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:15:41.895+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:15:41.895+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:40.928884+0000 mgr.b (mgr.12834102) 26559 : cluster [DBG] pgmap v27138: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:41.141687+0000 osd.54 (osd.54) 52297 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:41.899953+0000 mon.j (mon.0) 21789 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:42.096962+0000 osd.54 (osd.54) 52298 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:42.929711+0000 mgr.b (mgr.12834102) 26560 : cluster [DBG] pgmap v27139: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:43.116512+0000 osd.54 (osd.54) 52299 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:44.091568+0000 osd.54 (osd.54) 52300 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:44.851392+0000 mon.l (mon.2) 15707 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:44.851669+0000 mon.l (mon.2) 15708 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:46.143+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:44.930691+0000 mgr.b (mgr.12834102) 26561 : cluster [DBG] pgmap v27140: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:45.122381+0000 osd.54 (osd.54) 52301 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:45.929622+0000 mon.k (mon.1) 19047 : audit [DBG] from='client.? 10.1.222.242:0/2825842294' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:15:46.290685+0000 mon.k (mon.1) 19048 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:46.290980+0000 mon.k (mon.1) 19049 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:46.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:15:46.479+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:46.135075+0000 osd.54 (osd.54) 52302 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:46.416403+0000 mon.k (mon.1) 19050 : audit [DBG] from='client.? 10.1.222.242:0/56278089' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:15:46.483326+0000 mon.j (mon.0) 21790 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:46.483595+0000 mon.j (mon.0) 21791 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:46.931687+0000 mgr.b (mgr.12834102) 26562 : cluster [DBG] pgmap v27141: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:47.146846+0000 osd.54 (osd.54) 52303 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:48.099085+0000 osd.54 (osd.54) 52304 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:48.759224+0000 mon.k (mon.1) 19051 : audit [DBG] from='client.? 10.1.222.242:0/1879216024' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:48.932731+0000 mgr.b (mgr.12834102) 26563 : cluster [DBG] pgmap v27142: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:15:49.297515+0000 mon.k (mon.1) 19052 : audit [DBG] from='client.? 10.1.222.242:0/1325112231' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile get", "name": "default", "format": "json"}]: dispatch
debug 2023-07-18T20:15:49.902+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"} v 0) v1
debug 2023-07-18T20:15:49.902+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"}]: dispatch
debug 2023-07-18T20:15:50.114+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:15:50.114+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/4214260952' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:49.121247+0000 osd.54 (osd.54) 52305 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:49.906906+0000 mon.k (mon.1) 19053 : audit [INF] from='client.? 10.1.222.242:0/4191907834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"}]: dispatch
audit 2023-07-18T20:15:49.908049+0000 mon.j (mon.0) 21792 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-ssd-erasure-default-data_ecprofile", "force": true, "profile": ["k=5", "m=2", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=ssd"], "format": "json"}]: dispatch
audit 2023-07-18T20:15:50.120606+0000 mon.j (mon.0) 21793 : audit [DBG] from='client.? 10.1.182.12:0/4214260952' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:15:50.434+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"} v 0) v1
debug 2023-07-18T20:15:50.434+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"}]: dispatch
debug 2023-07-18T20:15:50.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"} v 0) v1
debug 2023-07-18T20:15:50.894+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
debug 2023-07-18T20:15:51.142+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:15:51.354+0000 7f7fb651d700 1 mon.j@0(leader).osd e20731 do_prune osdmap full prune enabled
debug 2023-07-18T20:15:51.366+0000 7f7fb1513700 1 mon.j@0(leader).osd e20732 e20732: 57 total, 3 up, 41 in
cluster 2023-07-18T20:15:50.120790+0000 osd.54 (osd.54) 52306 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:50.435267+0000 mon.k (mon.1) 19054 : audit [INF] from='client.? 10.1.222.242:0/3215721335' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"}]: dispatch
audit 2023-07-18T20:15:50.436318+0000 mon.j (mon.0) 21794 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-ssd-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-ssd-erasure-default-data_ecprofile", "format": "json"}]: dispatch
audit 2023-07-18T20:15:50.899210+0000 mon.k (mon.1) 19055 : audit [INF] from='client.? 10.1.222.242:0/2459877020' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
audit 2023-07-18T20:15:50.900238+0000 mon.j (mon.0) 21795 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:50.933742+0000 mgr.b (mgr.12834102) 26564 : cluster [DBG] pgmap v27143: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:15:51.370+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
debug 2023-07-18T20:15:51.370+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20732: 57 total, 3 up, 41 in
debug 2023-07-18T20:15:52.186+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"} v 0) v1
debug 2023-07-18T20:15:52.186+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T20:15:52.366+0000 7f7fb651d700 1 mon.j@0(leader).osd e20732 do_prune osdmap full prune enabled
cluster 2023-07-18T20:15:51.084442+0000 osd.54 (osd.54) 52307 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:51.373487+0000 mon.j (mon.0) 21796 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-ssd-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
cluster 2023-07-18T20:15:51.373525+0000 mon.j (mon.0) 21797 : cluster [DBG] osdmap e20732: 57 total, 3 up, 41 in
audit 2023-07-18T20:15:52.123091+0000 mon.k (mon.1) 19056 : audit [DBG] from='client.? 10.1.222.242:0/3410699188' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-ssd-erasure-default-data", "format": "json"}]: dispatch
audit 2023-07-18T20:15:52.191034+0000 mon.k (mon.1) 19057 : audit [INF] from='client.? 10.1.222.242:0/2608581811' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]: dispatch
audit 2023-07-18T20:15:52.191978+0000 mon.j (mon.0) 21798 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T20:15:52.378+0000 7f7fb1513700 1 mon.j@0(leader).osd e20733 e20733: 57 total, 3 up, 41 in
debug 2023-07-18T20:15:52.378+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]': finished
debug 2023-07-18T20:15:52.378+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20733: 57 total, 3 up, 41 in
cluster 2023-07-18T20:15:52.040133+0000 osd.54 (osd.54) 52308 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:52.384193+0000 mon.j (mon.0) 21799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-erasure-default-data","app": "rbd"}]': finished
cluster 2023-07-18T20:15:52.384223+0000 mon.j (mon.0) 21800 : cluster [DBG] osdmap e20733: 57 total, 3 up, 41 in
cluster 2023-07-18T20:15:52.934729+0000 mgr.b (mgr.12834102) 26565 : cluster [DBG] pgmap v27146: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:53.030354+0000 osd.54 (osd.54) 52309 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:53.996393+0000 osd.54 (osd.54) 52310 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:54.859350+0000 mon.l (mon.2) 15709 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:54.859645+0000 mon.l (mon.2) 15710 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:15:54.935695+0000 mgr.b (mgr.12834102) 26566 : cluster [DBG] pgmap v27147: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:15:56.142+0000 7f7fb651d700 1 mon.j@0(leader).osd e20733 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:54.975014+0000 osd.54 (osd.54) 52311 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:56.302139+0000 mon.k (mon.1) 19058 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:56.302441+0000 mon.k (mon.1) 19059 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:56.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:15:56.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:15:56.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:15:56.894+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:55.970652+0000 osd.54 (osd.54) 52312 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:56.486790+0000 mon.j (mon.0) 21801 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:15:56.486916+0000 mon.j (mon.0) 21802 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:15:56.899956+0000 mon.j (mon.0) 21803 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:56.941892+0000 mgr.b (mgr.12834102) 26567 : cluster [DBG] pgmap v27148: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:56.997980+0000 osd.54 (osd.54) 52313 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:15:57.995686+0000 osd.54 (osd.54) 52314 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:15:58.914723+0000 mon.k (mon.1) 19060 : audit [DBG] from='client.? 10.1.207.132:0/1746506018' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:15:58.942814+0000 mgr.b (mgr.12834102) 26568 : cluster [DBG] pgmap v27149: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:15:59.016344+0000 osd.54 (osd.54) 52315 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:01.146+0000 7f7fb651d700 1 mon.j@0(leader).osd e20733 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:15:59.980217+0000 osd.54 (osd.54) 52316 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:00.943788+0000 mgr.b (mgr.12834102) 26569 : cluster [DBG] pgmap v27150: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:01.004048+0000 osd.54 (osd.54) 52317 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:01.983358+0000 osd.54 (osd.54) 52318 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:02.944777+0000 mgr.b (mgr.12834102) 26570 : cluster [DBG] pgmap v27151: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:03.021966+0000 osd.54 (osd.54) 52319 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:04.046787+0000 osd.54 (osd.54) 52320 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:04.851817+0000 mon.l (mon.2) 15711 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:04.852111+0000 mon.l (mon.2) 15712 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:04.945745+0000 mgr.b (mgr.12834102) 26571 : cluster [DBG] pgmap v27152: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:16:05.598+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:16:05.598+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/795249418' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:16:06.146+0000 7f7fb651d700 1 mon.j@0(leader).osd e20733 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:05.044727+0000 osd.54 (osd.54) 52321 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:05.602045+0000 mon.j (mon.0) 21804 : audit [DBG] from='client.? 10.1.182.12:0/795249418' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:16:06.290965+0000 mon.k (mon.1) 19061 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:06.291283+0000 mon.k (mon.1) 19062 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:06.490+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:16:06.490+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:06.018587+0000 osd.54 (osd.54) 52322 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:06.494590+0000 mon.j (mon.0) 21805 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:06.494847+0000 mon.j (mon.0) 21806 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:06.946694+0000 mgr.b (mgr.12834102) 26572 : cluster [DBG] pgmap v27153: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:16:08.014+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"} v 0) v1
debug 2023-07-18T20:16:08.014+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:06.969625+0000 osd.54 (osd.54) 52323 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:07.612796+0000 mon.l (mon.2) 15713 : audit [DBG] from='client.? 10.1.222.242:0/1841290703' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
audit 2023-07-18T20:16:08.020243+0000 mon.j (mon.0) 21807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"}]: dispatch
audit 2023-07-18T20:16:08.020316+0000 mon.l (mon.2) 15714 : audit [INF] from='client.? 10.1.222.242:0/3278345084' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-ssd-replica-default", "root": "default", "type": "host", "class": "ssd", "format": "json"}]: dispatch
audit 2023-07-18T20:16:08.422147+0000 mon.k (mon.1) 19063 : audit [DBG] from='client.? 10.1.222.242:0/3522811242' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-ssd-replica-default", "var": "all", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:07.938757+0000 osd.54 (osd.54) 52324 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:08.801557+0000 mon.l (mon.2) 15715 : audit [DBG] from='client.? 10.1.222.242:0/1069838777' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-ssd-replica-default", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:08.947621+0000 mgr.b (mgr.12834102) 26573 : cluster [DBG] pgmap v27154: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:09.320413+0000 mon.l (mon.2) 15716 : audit [DBG] from='client.? 10.1.222.242:0/2130956347' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-ssd-replica-default", "var": "all", "format": "json"}]: dispatch
debug 2023-07-18T20:16:09.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"} v 0) v1
debug 2023-07-18T20:16:09.894+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T20:16:10.502+0000 7f7fb651d700 1 mon.j@0(leader).osd e20733 do_prune osdmap full prune enabled
cluster 2023-07-18T20:16:08.969212+0000 osd.54 (osd.54) 52325 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:09.830892+0000 mon.k (mon.1) 19064 : audit [DBG] from='client.? 10.1.222.242:0/217390614' entity='client.admin' cmd=[{"prefix": "osd crush rule dump", "name": "ceph-ssd-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T20:16:09.897595+0000 mon.j (mon.0) 21808 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]: dispatch
audit 2023-07-18T20:16:09.897811+0000 mon.l (mon.2) 15717 : audit [INF] from='client.? 10.1.222.242:0/1121737911' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T20:16:10.514+0000 7f7fb1513700 1 mon.j@0(leader).osd e20734 e20734: 57 total, 3 up, 41 in
debug 2023-07-18T20:16:10.518+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]': finished
debug 2023-07-18T20:16:10.518+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20734: 57 total, 3 up, 41 in
debug 2023-07-18T20:16:11.150+0000 7f7fb651d700 1 mon.j@0(leader).osd e20734 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:09.956557+0000 osd.54 (osd.54) 52326 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:10.522439+0000 mon.j (mon.0) 21809 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-ssd-replica-default","app": "rbd"}]': finished
cluster 2023-07-18T20:16:10.522546+0000 mon.j (mon.0) 21810 : cluster [DBG] osdmap e20734: 57 total, 3 up, 41 in
audit 2023-07-18T20:16:10.618614+0000 mon.k (mon.1) 19065 : audit [DBG] from='client.? 10.1.222.242:0/3575894908' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:10.916469+0000 osd.54 (osd.54) 52327 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:10.948588+0000 mgr.b (mgr.12834102) 26574 : cluster [DBG] pgmap v27156: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:11.086142+0000 mon.k (mon.1) 19066 : audit [DBG] from='client.? 10.1.222.242:0/1322647855' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
debug 2023-07-18T20:16:11.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:16:11.894+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:11.895605+0000 osd.54 (osd.54) 52328 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:11.899333+0000 mon.j (mon.0) 21811 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:12.878920+0000 osd.54 (osd.54) 52329 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:12.949353+0000 mgr.b (mgr.12834102) 26575 : cluster [DBG] pgmap v27157: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:13.866007+0000 osd.54 (osd.54) 52330 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:14.858979+0000 mon.l (mon.2) 15718 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:14.859252+0000 mon.l (mon.2) 15719 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:14.860509+0000 osd.54 (osd.54) 52331 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:14.950137+0000 mgr.b (mgr.12834102) 26576 : cluster [DBG] pgmap v27158: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:16:16.150+0000 7f7fb651d700 1 mon.j@0(leader).osd e20734 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:16:16.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:16:16.470+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:15.898794+0000 osd.54 (osd.54) 52332 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:16.286427+0000 mon.k (mon.1) 19067 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:16.286609+0000 mon.k (mon.1) 19068 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:16:16.476289+0000 mon.j (mon.0) 21812 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:16.476455+0000 mon.j (mon.0) 21813 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:16.857106+0000 osd.54 (osd.54) 52333 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:16.950930+0000 mgr.b (mgr.12834102) 26577 : cluster [DBG] pgmap v27159: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:17.133260+0000 mon.k (mon.1) 19069 : audit [DBG] from='client.? 10.1.222.242:0/3424847304' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:16:17.600913+0000 mon.l (mon.2) 15720 : audit [DBG] from='client.? 10.1.222.242:0/3933115558' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:17.821611+0000 osd.54 (osd.54) 52334 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:18.833839+0000 osd.54 (osd.54) 52335 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:18.951673+0000 mgr.b (mgr.12834102) 26578 : cluster [DBG] pgmap v27160: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:19.026654+0000 mon.k (mon.1) 19070 : audit [DBG] from='client.? 10.1.207.132:0/3878441818' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:19.857010+0000 osd.54 (osd.54) 52336 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:21.078+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:16:21.078+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/719542678' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:16:21.154+0000 7f7fb651d700 1 mon.j@0(leader).osd e20734 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:20.856366+0000 osd.54 (osd.54) 52337 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:20.952482+0000 mgr.b (mgr.12834102) 26579 : cluster [DBG] pgmap v27161: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:21.081836+0000 mon.j (mon.0) 21814 : audit [DBG] from='client.? 10.1.182.12:0/719542678' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:21.834081+0000 osd.54 (osd.54) 52338 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:22.865719+0000 osd.54 (osd.54) 52339 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:22.953245+0000 mgr.b (mgr.12834102) 26580 : cluster [DBG] pgmap v27162: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:23.864515+0000 osd.54 (osd.54) 52340 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:24.857916+0000 mon.l (mon.2) 15721 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:24.858204+0000 mon.l (mon.2) 15722 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:24.893092+0000 osd.54 (osd.54) 52341 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:24.953988+0000 mgr.b (mgr.12834102) 26581 : cluster [DBG] pgmap v27163: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:16:26.154+0000 7f7fb651d700 1 mon.j@0(leader).osd e20734 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:16:26.166+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #21754. Immutable memtables: 0.
debug 2023-07-18T20:16:26.166+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.171860) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
debug 2023-07-18T20:16:26.166+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 1673] Flushing memtable with next log file: 21754
debug 2023-07-18T20:16:26.166+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386171911, "job": 1673, "event": "flush_started", "num_memtables": 1, "num_entries": 2117, "num_deletes": 550, "total_data_size": 3196488, "memory_usage": 3234544, "flush_reason": "Manual Compaction"}
debug 2023-07-18T20:16:26.166+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 1673] Level-0 flush table #21755: started
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386183429, "cf_name": "default", "job": 1673, "event": "table_file_creation", "file_number": 21755, "file_size": 2595416, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 2586498, "index_size": 4789, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 30031, "raw_average_key_size": 24, "raw_value_size": 2565119, "raw_average_value_size": 2099, "num_data_blocks": 187, "num_entries": 1222, "num_deletions": 550, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689711281, "oldest_key_time": 1689711281, "file_creation_time": 1689711386, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: [db/flush_job.cc:424] [default] [JOB 1673] Level-0 flush table #21755: 2595416 bytes OK
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.183676) [db/memtable_list.cc:449] [default] Level-0 commit table #21755 started
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.183976) [db/memtable_list.cc:628] [default] Level-0 commit table #21755: memtable #1 done
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.183987) EVENT_LOG_v1 {"time_micros": 1689711386183983, "job": 1673, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 3], "immutable_memtables": 0}
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.183999) [db/db_impl/db_impl_compaction_flush.cc:233] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 3] max score 0.25
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: [db/db_impl/db_impl_files.cc:415] [JOB 1673] Try to delete WAL files size 3185823, prev total WAL file size 3185823, number of live WAL files 2.
debug 2023-07-18T20:16:26.178+0000 7f7fb8521700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021749.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:16:26.178+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:16:26.178+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.184760) [db/db_impl/db_impl_compaction_flush.cc:2712] [default] Manual compaction from level-0 to level-6 from '7061786F730036323536313735' seq:72057594037927935, type:20 .. '7061786F730036323536343237' seq:0, type:0; will stop at (end)
debug 2023-07-18T20:16:26.178+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1881] [default] [JOB 1674] Compacting 1@0 + 3@6 files to L6, score -1.00
debug 2023-07-18T20:16:26.178+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1887] [default] Compaction start summary: Base version 1674 Base level 0, inputs: [21755(2534KB)], [21751(64MB) 21752(64MB) 21753(4828KB)]
debug 2023-07-18T20:16:26.178+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386184812, "job": 1674, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [21755], "files_L6": [21751, 21752, 21753], "score": -1, "input_data_size": 142095828}
debug 2023-07-18T20:16:26.390+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1674] Generated table #21756: 21858 keys, 67269218 bytes
debug 2023-07-18T20:16:26.390+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386397269, "cf_name": "default", "job": 1674, "event": "table_file_creation", "file_number": 21756, "file_size": 67269218, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67154742, "index_size": 58764, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 54725, "raw_key_size": 592595, "raw_average_key_size": 27, "raw_value_size": 66799375, "raw_average_value_size": 3056, "num_data_blocks": 2172, "num_entries": 21858, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711386, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:16:26.490+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:16:26.490+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:26.606+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1674] Generated table #21757: 13093 keys, 67302308 bytes
debug 2023-07-18T20:16:26.606+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386612534, "cf_name": "default", "job": 1674, "event": "table_file_creation", "file_number": 21757, "file_size": 67302308, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 67174388, "index_size": 94097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32837, "raw_key_size": 290001, "raw_average_key_size": 22, "raw_value_size": 66895946, "raw_average_value_size": 5109, "num_data_blocks": 3490, "num_entries": 13093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711386, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1516] [default] [JOB 1674] Generated table #21758: 547 keys, 5516791 bytes
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386629514, "cf_name": "default", "job": 1674, "event": "table_file_creation", "file_number": 21758, "file_size": 5516791, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 5507893, "index_size": 6439, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11655, "raw_average_key_size": 21, "raw_value_size": 5494107, "raw_average_value_size": 10044, "num_data_blocks": 251, "num_entries": 547, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1689132280, "oldest_key_time": 0, "file_creation_time": 1689711386, "db_id": "09118976-745b-467c-b011-10e87cc54d93", "db_session_id": "N33ZCOEVX58Q7YO4QUXO"}}
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: [db/compaction/compaction_job.cc:1594] [default] [JOB 1674] Compacted 1@0 + 3@6 files to L6 => 140088317 bytes
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.630479) [db/compaction/compaction_job.cc:812] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 3] max score 0.00, MB/sec: 319.5 rd, 315.0 wr, level 6, files in(1, 3) out(3) MB in(2.5, 133.0) out(133.6), read-write-amplify(108.7) write-amplify(54.0) OK, records in: 36616, records dropped: 1118 output_compression: NoCompression
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: (Original Log Time 2023/07/18-20:16:26.630503) EVENT_LOG_v1 {"time_micros": 1689711386630491, "job": 1674, "event": "compaction_finished", "compaction_time_micros": 444730, "compaction_time_cpu_micros": 214892, "output_level": 6, "num_output_files": 3, "total_output_size": 140088317, "num_input_records": 36616, "num_output_records": 35498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 3]}
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021755.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386630991, "job": 1674, "event": "table_file_deletion", "file_number": 21755}
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021753.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:16:26.626+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386631671, "job": 1674, "event": "table_file_deletion", "file_number": 21753}
cluster 2023-07-18T20:16:25.906662+0000 osd.54 (osd.54) 52342 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:26.295114+0000 mon.k (mon.1) 19071 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:26.295454+0000 mon.k (mon.1) 19072 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:16:26.494760+0000 mon.j (mon.0) 21815 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:26.495041+0000 mon.j (mon.0) 21816 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:26.634+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021752.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:16:26.634+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386640579, "job": 1674, "event": "table_file_deletion", "file_number": 21752}
debug 2023-07-18T20:16:26.646+0000 7f7fb8d22700 4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-j/store.db/021751.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
debug 2023-07-18T20:16:26.646+0000 7f7fb8d22700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1689711386650335, "job": 1674, "event": "table_file_deletion", "file_number": 21751}
debug 2023-07-18T20:16:26.646+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:16:26.646+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:16:26.646+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:16:26.646+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:16:26.646+0000 7f7faf50f700 4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1615] [default] Manual compaction starting
debug 2023-07-18T20:16:26.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:16:26.894+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:26.881140+0000 osd.54 (osd.54) 52343 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:26.900011+0000 mon.j (mon.0) 21817 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:26.954946+0000 mgr.b (mgr.12834102) 26582 : cluster [DBG] pgmap v27164: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:27.884362+0000 osd.54 (osd.54) 52344 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:27.970266+0000 mon.k (mon.1) 19073 : audit [DBG] from='client.? 10.1.222.242:0/1392633317' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
audit 2023-07-18T20:16:28.523374+0000 mon.k (mon.1) 19074 : audit [DBG] from='client.? 10.1.222.242:0/2855732912' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile get", "name": "default", "format": "json"}]: dispatch
debug 2023-07-18T20:16:28.938+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"} v 0) v1
debug 2023-07-18T20:16:28.938+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"}]: dispatch
debug 2023-07-18T20:16:29.450+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"} v 0) v1
debug 2023-07-18T20:16:29.450+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:28.926955+0000 osd.54 (osd.54) 52345 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:28.942913+0000 mon.k (mon.1) 19075 : audit [INF] from='client.? 10.1.222.242:0/3161069151' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"}]: dispatch
audit 2023-07-18T20:16:28.943859+0000 mon.j (mon.0) 21818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "ceph-nvme-erasure-default-data_ecprofile", "force": true, "profile": ["k=2", "m=1", "plugin=jerasure", "technique=reed_sol_van", "crush-failure-domain=osd", "crush-device-class=nvme"], "format": "json"}]: dispatch
cluster 2023-07-18T20:16:28.955899+0000 mgr.b (mgr.12834102) 26583 : cluster [DBG] pgmap v27165: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:29.452109+0000 mon.j (mon.0) 21819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"}]: dispatch
audit 2023-07-18T20:16:29.452220+0000 mon.l (mon.2) 15723 : audit [INF] from='client.? 10.1.222.242:0/2058250840' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ceph-nvme-erasure-default-data", "pg_num": 0, "pool_type": "erasure", "erasure_code_profile": "ceph-nvme-erasure-default-data_ecprofile", "format": "json"}]: dispatch
debug 2023-07-18T20:16:29.854+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"} v 0) v1
debug 2023-07-18T20:16:29.854+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
debug 2023-07-18T20:16:30.654+0000 7f7fb651d700 1 mon.j@0(leader).osd e20734 do_prune osdmap full prune enabled
debug 2023-07-18T20:16:30.662+0000 7f7fb1513700 1 mon.j@0(leader).osd e20735 e20735: 57 total, 3 up, 41 in
debug 2023-07-18T20:16:30.662+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
debug 2023-07-18T20:16:30.662+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20735: 57 total, 3 up, 41 in
audit 2023-07-18T20:16:29.858848+0000 mon.j (mon.0) 21820 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
audit 2023-07-18T20:16:29.858976+0000 mon.l (mon.2) 15724 : audit [INF] from='client.? 10.1.222.242:0/3561257015' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:29.891527+0000 osd.54 (osd.54) 52346 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:31.158+0000 7f7fb651d700 1 mon.j@0(leader).osd e20735 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:16:31.158+0000 7f7fb651d700 1 mon.j@0(leader).osd e20735 do_prune osdmap full prune enabled
debug 2023-07-18T20:16:31.158+0000 7f7fb651d700 1 mon.j@0(leader).osd e20735 prune_init
debug 2023-07-18T20:16:31.158+0000 7f7fb651d700 1 mon.j@0(leader).osd e20735 encode_pending osdmap full prune encoded e20736
debug 2023-07-18T20:16:31.162+0000 7f7fb1513700 1 mon.j@0(leader).osd e20736 e20736: 57 total, 3 up, 41 in
debug 2023-07-18T20:16:31.166+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20736: 57 total, 3 up, 41 in
debug 2023-07-18T20:16:31.174+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"} v 0) v1
debug 2023-07-18T20:16:31.174+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]: dispatch
audit 2023-07-18T20:16:30.668755+0000 mon.j (mon.0) 21821 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "ceph-nvme-erasure-default-data", "var": "allow_ec_overwrites", "val": "true", "format": "json"}]': finished
cluster 2023-07-18T20:16:30.668801+0000 mon.j (mon.0) 21822 : cluster [DBG] osdmap e20735: 57 total, 3 up, 41 in
cluster 2023-07-18T20:16:30.918509+0000 osd.54 (osd.54) 52347 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:30.956447+0000 mgr.b (mgr.12834102) 26584 : cluster [DBG] pgmap v27167: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:31.109376+0000 mon.k (mon.1) 19076 : audit [DBG] from='client.? 10.1.222.242:0/1346225686' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-nvme-erasure-default-data", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:31.171091+0000 mon.j (mon.0) 21823 : cluster [DBG] osdmap e20736: 57 total, 3 up, 41 in
audit 2023-07-18T20:16:31.173981+0000 mon.k (mon.1) 19077 : audit [INF] from='client.? 10.1.222.242:0/3259118409' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]: dispatch
audit 2023-07-18T20:16:31.178151+0000 mon.j (mon.0) 21824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]: dispatch
debug 2023-07-18T20:16:32.170+0000 7f7fb651d700 1 mon.j@0(leader).osd e20736 do_prune osdmap full prune enabled
debug 2023-07-18T20:16:32.174+0000 7f7fb1513700 1 mon.j@0(leader).osd e20737 e20737: 57 total, 3 up, 41 in
debug 2023-07-18T20:16:32.178+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]': finished
debug 2023-07-18T20:16:32.178+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20737: 57 total, 3 up, 41 in
cluster 2023-07-18T20:16:31.933171+0000 osd.54 (osd.54) 52348 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:32.182745+0000 mon.j (mon.0) 21825 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-erasure-default-data","app": "rbd"}]': finished
cluster 2023-07-18T20:16:32.182790+0000 mon.j (mon.0) 21826 : cluster [DBG] osdmap e20737: 57 total, 3 up, 41 in
cluster 2023-07-18T20:16:32.957201+0000 mgr.b (mgr.12834102) 26585 : cluster [DBG] pgmap v27170: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:32.962449+0000 osd.54 (osd.54) 52349 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:33.957272+0000 osd.54 (osd.54) 52350 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:34.842474+0000 mon.l (mon.2) 15725 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:34.842707+0000 mon.l (mon.2) 15726 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:34.913279+0000 osd.54 (osd.54) 52351 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:36.162+0000 7f7fb651d700 1 mon.j@0(leader).osd e20737 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:34.957954+0000 mgr.b (mgr.12834102) 26586 : cluster [DBG] pgmap v27171: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:35.870817+0000 osd.54 (osd.54) 52352 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:36.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:16:36.482+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:36.550+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:16:36.550+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3514284834' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:16:36.280310+0000 mon.k (mon.1) 19078 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:36.280626+0000 mon.k (mon.1) 19079 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:16:36.485640+0000 mon.j (mon.0) 21827 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:36.485902+0000 mon.j (mon.0) 21828 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:16:36.555368+0000 mon.j (mon.0) 21829 : audit [DBG] from='client.? 10.1.182.12:0/3514284834' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:36.856010+0000 osd.54 (osd.54) 52353 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:36.958751+0000 mgr.b (mgr.12834102) 26587 : cluster [DBG] pgmap v27172: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:37.831240+0000 osd.54 (osd.54) 52354 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:38.823765+0000 osd.54 (osd.54) 52355 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:38.959467+0000 mgr.b (mgr.12834102) 26588 : cluster [DBG] pgmap v27173: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:39.363468+0000 mon.k (mon.1) 19080 : audit [DBG] from='client.? 10.1.207.132:0/590532570' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:39.851808+0000 osd.54 (osd.54) 52356 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:41.162+0000 7f7fb651d700 1 mon.j@0(leader).osd e20737 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:40.821448+0000 osd.54 (osd.54) 52357 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:41.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:16:41.894+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:40.960191+0000 mgr.b (mgr.12834102) 26589 : cluster [DBG] pgmap v27174: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:41.697350+0000 mon.k (mon.1) 19081 : audit [DBG] from='client.? 10.1.222.242:0/409123945' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:41.852556+0000 osd.54 (osd.54) 52358 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:41.899374+0000 mon.j (mon.0) 21830 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:16:42.121684+0000 mon.k (mon.1) 19082 : audit [DBG] from='client.? 10.1.222.242:0/928214982' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:42.821493+0000 osd.54 (osd.54) 52359 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:42.961126+0000 mgr.b (mgr.12834102) 26590 : cluster [DBG] pgmap v27175: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:43.806450+0000 osd.54 (osd.54) 52360 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:44.820800+0000 osd.54 (osd.54) 52361 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:44.853602+0000 mon.l (mon.2) 15727 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:44.853801+0000 mon.l (mon.2) 15728 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:46.162+0000 7f7fb651d700 1 mon.j@0(leader).osd e20737 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:44.962046+0000 mgr.b (mgr.12834102) 26591 : cluster [DBG] pgmap v27176: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:45.814171+0000 osd.54 (osd.54) 52362 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:46.502+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:16:46.502+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:46.978+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"} v 0) v1
debug 2023-07-18T20:16:46.978+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
audit 2023-07-18T20:16:46.286877+0000 mon.k (mon.1) 19083 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:46.287212+0000 mon.k (mon.1) 19084 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:16:46.508098+0000 mon.j (mon.0) 21831 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:46.508333+0000 mon.j (mon.0) 21832 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:16:46.535632+0000 mon.k (mon.1) 19085 : audit [DBG] from='client.? 10.1.222.242:0/3464557982' entity='client.admin' cmd=[{"prefix": "osd crush dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:46.766462+0000 osd.54 (osd.54) 52363 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:46.982192+0000 mon.j (mon.0) 21833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
audit 2023-07-18T20:16:46.982263+0000 mon.l (mon.2) 15729 : audit [INF] from='client.? 10.1.222.242:0/2695289759' entity='client.admin' cmd=[{"prefix": "osd crush rule create-replicated", "name": "ceph-nvme-replica-default", "root": "default", "type": "osd", "class": "nvme", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:46.962998+0000 mgr.b (mgr.12834102) 26592 : cluster [DBG] pgmap v27177: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:47.525765+0000 mon.k (mon.1) 19086 : audit [DBG] from='client.? 10.1.222.242:0/688174452' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-nvme-replica-default", "var": "all", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:47.773868+0000 osd.54 (osd.54) 52364 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:47.924605+0000 mon.l (mon.2) 15730 : audit [DBG] from='client.? 10.1.222.242:0/3504479966' entity='client.admin' cmd=[{"prefix": "osd pool application get", "pool": "ceph-nvme-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T20:16:48.214468+0000 mon.l (mon.2) 15731 : audit [DBG] from='client.? 10.1.222.242:0/3319516510' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
debug 2023-07-18T20:16:49.214+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"} v 0) v1
debug 2023-07-18T20:16:49.214+0000 7f7fb3d18700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T20:16:49.290+0000 7f7fb651d700 1 mon.j@0(leader).osd e20737 do_prune osdmap full prune enabled
audit 2023-07-18T20:16:48.417219+0000 mon.l (mon.2) 15732 : audit [DBG] from='client.? 10.1.222.242:0/696040186' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": "ceph-nvme-replica-default", "var": "all", "format": "json"}]: dispatch
audit 2023-07-18T20:16:48.688705+0000 mon.l (mon.2) 15733 : audit [DBG] from='client.? 10.1.222.242:0/3713425178' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:48.805337+0000 osd.54 (osd.54) 52365 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:49.150305+0000 mon.k (mon.1) 19087 : audit [DBG] from='client.? 10.1.222.242:0/2458938254' entity='client.admin' cmd=[{"prefix": "osd crush rule dump", "name": "ceph-nvme-replica-default", "format": "json"}]: dispatch
audit 2023-07-18T20:16:49.218436+0000 mon.k (mon.1) 19088 : audit [INF] from='client.? 10.1.222.242:0/3834136580' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]: dispatch
audit 2023-07-18T20:16:49.219398+0000 mon.j (mon.0) 21834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]: dispatch
debug 2023-07-18T20:16:49.302+0000 7f7fb1513700 1 mon.j@0(leader).osd e20738 e20738: 57 total, 3 up, 41 in
debug 2023-07-18T20:16:49.306+0000 7f7fb1513700 0 log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]': finished
debug 2023-07-18T20:16:49.306+0000 7f7fb1513700 0 log_channel(cluster) log [DBG] : osdmap e20738: 57 total, 3 up, 41 in
cluster 2023-07-18T20:16:48.963905+0000 mgr.b (mgr.12834102) 26593 : cluster [DBG] pgmap v27178: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:16:49.310789+0000 mon.j (mon.0) 21835 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ceph-nvme-replica-default","app": "rbd"}]': finished
cluster 2023-07-18T20:16:49.310890+0000 mon.j (mon.0) 21836 : cluster [DBG] osdmap e20738: 57 total, 3 up, 41 in
cluster 2023-07-18T20:16:49.762768+0000 osd.54 (osd.54) 52366 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:51.166+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:50.787849+0000 osd.54 (osd.54) 52367 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:16:52.022+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:16:52.022+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3471804333' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:50.964874+0000 mgr.b (mgr.12834102) 26594 : cluster [DBG] pgmap v27180: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:51.824896+0000 osd.54 (osd.54) 52368 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:52.025463+0000 mon.j (mon.0) 21837 : audit [DBG] from='client.? 10.1.182.12:0/3471804333' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:52.810260+0000 osd.54 (osd.54) 52369 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:52.965865+0000 mgr.b (mgr.12834102) 26595 : cluster [DBG] pgmap v27181: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:53.795099+0000 osd.54 (osd.54) 52370 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:54.816600+0000 osd.54 (osd.54) 52371 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:54.863692+0000 mon.l (mon.2) 15734 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:54.863967+0000 mon.l (mon.2) 15735 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:56.166+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:16:54.966825+0000 mgr.b (mgr.12834102) 26596 : cluster [DBG] pgmap v27182: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:55.865949+0000 osd.54 (osd.54) 52372 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:56.288360+0000 mon.k (mon.1) 19089 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:56.288650+0000 mon.k (mon.1) 19090 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:56.490+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:16:56.490+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:16:56.894+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:16:56.894+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:16:56.495204+0000 mon.j (mon.0) 21838 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:16:56.495343+0000 mon.j (mon.0) 21839 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:16:56.896871+0000 osd.54 (osd.54) 52373 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:16:56.899870+0000 mon.j (mon.0) 21840 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:56.967780+0000 mgr.b (mgr.12834102) 26597 : cluster [DBG] pgmap v27183: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:16:57.912859+0000 osd.54 (osd.54) 52374 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:58.922704+0000 osd.54 (osd.54) 52375 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:16:58.968744+0000 mgr.b (mgr.12834102) 26598 : cluster [DBG] pgmap v27184: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:16:59.670+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:16:59.670+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.207.132:0/3359944095' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:16:59.673583+0000 mon.j (mon.0) 21841 : audit [DBG] from='client.? 10.1.207.132:0/3359944095' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:16:59.902972+0000 osd.54 (osd.54) 52376 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:01.170+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:00.934713+0000 osd.54 (osd.54) 52377 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:00.969583+0000 mgr.b (mgr.12834102) 26599 : cluster [DBG] pgmap v27185: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:01.950275+0000 osd.54 (osd.54) 52378 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:02.909152+0000 osd.54 (osd.54) 52379 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:02.970386+0000 mgr.b (mgr.12834102) 26600 : cluster [DBG] pgmap v27186: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:03.911223+0000 osd.54 (osd.54) 52380 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:04.883818+0000 mon.l (mon.2) 15736 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:04.884089+0000 mon.l (mon.2) 15737 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:04.954311+0000 osd.54 (osd.54) 52381 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:04.971163+0000 mgr.b (mgr.12834102) 26601 : cluster [DBG] pgmap v27187: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:17:06.169+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:17:06.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:17:06.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:05.940177+0000 osd.54 (osd.54) 52382 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:06.292807+0000 mon.k (mon.1) 19091 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:06.293125+0000 mon.k (mon.1) 19092 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:17:06.481945+0000 mon.j (mon.0) 21842 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:06.482239+0000 mon.j (mon.0) 21843 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:06.901921+0000 osd.54 (osd.54) 52383 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:07.489+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:17:07.489+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/1870234682' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:06.972722+0000 mgr.b (mgr.12834102) 26602 : cluster [DBG] pgmap v27188: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:17:07.494877+0000 mon.j (mon.0) 21844 : audit [DBG] from='client.? 10.1.182.12:0/1870234682' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:07.938877+0000 osd.54 (osd.54) 52384 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:08.967906+0000 osd.54 (osd.54) 52385 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:08.973773+0000 mgr.b (mgr.12834102) 26603 : cluster [DBG] pgmap v27189: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:17:11.173+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:09.975693+0000 osd.54 (osd.54) 52386 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:11.893+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:17:11.893+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:10.956601+0000 osd.54 (osd.54) 52387 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:10.974779+0000 mgr.b (mgr.12834102) 26604 : cluster [DBG] pgmap v27190: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:17:11.900147+0000 mon.j (mon.0) 21845 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:11.979415+0000 osd.54 (osd.54) 52388 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:12.740599+0000 mon.l (mon.2) 15738 : audit [DBG] from='client.? 10.1.222.242:0/3770998068' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:17:13.155805+0000 mon.k (mon.1) 19093 : audit [DBG] from='client.? 10.1.222.242:0/2162556168' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:12.975751+0000 mgr.b (mgr.12834102) 26605 : cluster [DBG] pgmap v27191: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:12.979050+0000 osd.54 (osd.54) 52389 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:13.948223+0000 osd.54 (osd.54) 52390 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:14.856115+0000 mon.l (mon.2) 15739 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:14.856417+0000 mon.l (mon.2) 15740 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:14.902872+0000 osd.54 (osd.54) 52391 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:16.173+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:14.976701+0000 mgr.b (mgr.12834102) 26606 : cluster [DBG] pgmap v27192: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:15.906854+0000 osd.54 (osd.54) 52392 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:16.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:17:16.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:17:16.298037+0000 mon.k (mon.1) 19094 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:16.298353+0000 mon.k (mon.1) 19095 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:17:16.488880+0000 mon.j (mon.0) 21846 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:16.489144+0000 mon.j (mon.0) 21847 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:16.905832+0000 osd.54 (osd.54) 52393 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:16.977674+0000 mgr.b (mgr.12834102) 26607 : cluster [DBG] pgmap v27193: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:17.954828+0000 osd.54 (osd.54) 52394 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:18.936296+0000 osd.54 (osd.54) 52395 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:18.978623+0000 mgr.b (mgr.12834102) 26608 : cluster [DBG] pgmap v27194: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:17:19.289734+0000 mon.l (mon.2) 15741 : audit [DBG] from='client.? 10.1.222.242:0/536974018' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:17:19.744541+0000 mon.k (mon.1) 19096 : audit [DBG] from='client.? 10.1.222.242:0/431110140' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
audit 2023-07-18T20:17:19.877812+0000 mon.l (mon.2) 15742 : audit [DBG] from='client.? 10.1.207.132:0/291497188' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:19.936174+0000 osd.54 (osd.54) 52396 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:21.173+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:20.956578+0000 osd.54 (osd.54) 52397 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:20.979607+0000 mgr.b (mgr.12834102) 26609 : cluster [DBG] pgmap v27195: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:21.933030+0000 osd.54 (osd.54) 52398 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:22.965+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:17:22.965+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3029295311' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
audit 2023-07-18T20:17:22.970278+0000 mon.j (mon.0) 21848 : audit [DBG] from='client.? 10.1.182.12:0/3029295311' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:22.979200+0000 osd.54 (osd.54) 52399 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:22.980588+0000 mgr.b (mgr.12834102) 26610 : cluster [DBG] pgmap v27196: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:23.991089+0000 osd.54 (osd.54) 52400 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:24.900995+0000 mon.l (mon.2) 15743 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:24.901270+0000 mon.l (mon.2) 15744 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:17:26.177+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:24.981550+0000 mgr.b (mgr.12834102) 26611 : cluster [DBG] pgmap v27197: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:25.033991+0000 osd.54 (osd.54) 52401 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:26.285004+0000 mon.k (mon.1) 19097 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:26.285314+0000 mon.k (mon.1) 19098 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:17:26.497+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:17:26.497+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:17:26.897+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:17:26.897+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:26.041708+0000 osd.54 (osd.54) 52402 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:26.503543+0000 mon.j (mon.0) 21849 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:26.503821+0000 mon.j (mon.0) 21850 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:17:26.899284+0000 mon.j (mon.0) 21851 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:26.982515+0000 mgr.b (mgr.12834102) 26612 : cluster [DBG] pgmap v27198: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:27.051308+0000 osd.54 (osd.54) 52403 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:28.099019+0000 osd.54 (osd.54) 52404 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:28.983489+0000 mgr.b (mgr.12834102) 26613 : cluster [DBG] pgmap v27199: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:29.097260+0000 osd.54 (osd.54) 52405 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:31.177+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:30.141801+0000 osd.54 (osd.54) 52406 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:30.984492+0000 mgr.b (mgr.12834102) 26614 : cluster [DBG] pgmap v27200: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:31.097638+0000 osd.54 (osd.54) 52407 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:32.080507+0000 osd.54 (osd.54) 52408 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:32.985487+0000 mgr.b (mgr.12834102) 26615 : cluster [DBG] pgmap v27201: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:33.039515+0000 osd.54 (osd.54) 52409 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:34.075445+0000 osd.54 (osd.54) 52410 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:34.847047+0000 mon.l (mon.2) 15745 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:34.847357+0000 mon.l (mon.2) 15746 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:34.986444+0000 mgr.b (mgr.12834102) 26616 : cluster [DBG] pgmap v27202: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:17:36.181+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:35.044460+0000 osd.54 (osd.54) 52411 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:36.298011+0000 mon.k (mon.1) 19099 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:36.298303+0000 mon.k (mon.1) 19100 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:17:36.473+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:17:36.473+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:36.011324+0000 osd.54 (osd.54) 52412 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:36.478096+0000 mon.j (mon.0) 21852 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:36.478258+0000 mon.j (mon.0) 21853 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:36.987411+0000 mgr.b (mgr.12834102) 26617 : cluster [DBG] pgmap v27203: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:37.005733+0000 osd.54 (osd.54) 52413 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:38.449+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:17:38.449+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/2552248188' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:37.992848+0000 osd.54 (osd.54) 52414 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:38.457243+0000 mon.j (mon.0) 21854 : audit [DBG] from='client.? 10.1.182.12:0/2552248188' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:38.988397+0000 mgr.b (mgr.12834102) 26618 : cluster [DBG] pgmap v27204: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:39.028013+0000 osd.54 (osd.54) 52415 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:40.189253+0000 mon.k (mon.1) 19101 : audit [DBG] from='client.? 10.1.207.132:0/1090183622' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
debug 2023-07-18T20:17:41.181+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:40.056722+0000 osd.54 (osd.54) 52416 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:40.989506+0000 mgr.b (mgr.12834102) 26619 : cluster [DBG] pgmap v27205: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:17:41.893+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:17:41.893+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:41.008811+0000 osd.54 (osd.54) 52417 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:41.900003+0000 mon.j (mon.0) 21855 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:41.993195+0000 osd.54 (osd.54) 52418 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:42.990478+0000 mgr.b (mgr.12834102) 26620 : cluster [DBG] pgmap v27206: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:42.977390+0000 osd.54 (osd.54) 52419 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:43.780440+0000 mon.l (mon.2) 15747 : audit [DBG] from='client.? 10.1.222.242:0/1252794969' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:17:44.255284+0000 mon.l (mon.2) 15748 : audit [DBG] from='client.? 10.1.222.242:0/345539002' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:43.975326+0000 osd.54 (osd.54) 52420 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:44.893044+0000 mon.l (mon.2) 15749 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:44.893312+0000 mon.l (mon.2) 15750 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:44.933627+0000 osd.54 (osd.54) 52421 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:44.991451+0000 mgr.b (mgr.12834102) 26621 : cluster [DBG] pgmap v27207: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:17:46.181+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
debug 2023-07-18T20:17:46.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:17:46.477+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:45.975271+0000 osd.54 (osd.54) 52422 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:46.284996+0000 mon.k (mon.1) 19102 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:46.285307+0000 mon.k (mon.1) 19103 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:17:46.481795+0000 mon.j (mon.0) 21856 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:46.482063+0000 mon.j (mon.0) 21857 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:46.975388+0000 osd.54 (osd.54) 52423 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:46.992448+0000 mgr.b (mgr.12834102) 26622 : cluster [DBG] pgmap v27208: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:47.994370+0000 osd.54 (osd.54) 52424 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:48.980124+0000 osd.54 (osd.54) 52425 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:48.993382+0000 mgr.b (mgr.12834102) 26623 : cluster [DBG] pgmap v27209: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
debug 2023-07-18T20:17:51.189+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:50.003597+0000 osd.54 (osd.54) 52426 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:50.380909+0000 mon.k (mon.1) 19104 : audit [DBG] from='client.? 10.1.222.242:0/863889182' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
audit 2023-07-18T20:17:50.886485+0000 mon.k (mon.1) 19105 : audit [DBG] from='client.? 10.1.222.242:0/2100691930' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:50.955946+0000 osd.54 (osd.54) 52427 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:50.994361+0000 mgr.b (mgr.12834102) 26624 : cluster [DBG] pgmap v27210: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:51.914622+0000 osd.54 (osd.54) 52428 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:52.918191+0000 osd.54 (osd.54) 52429 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:53.925+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) v1
debug 2023-07-18T20:17:53.925+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='client.? 10.1.182.12:0/3613952029' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:52.995325+0000 mgr.b (mgr.12834102) 26625 : cluster [DBG] pgmap v27211: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
audit 2023-07-18T20:17:53.927119+0000 mon.j (mon.0) 21858 : audit [DBG] from='client.? 10.1.182.12:0/3613952029' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:53.942291+0000 osd.54 (osd.54) 52430 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:17:54.847878+0000 mon.l (mon.2) 15751 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:54.848177+0000 mon.l (mon.2) 15752 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:17:54.929899+0000 osd.54 (osd.54) 52431 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:56.189+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:17:54.996291+0000 mgr.b (mgr.12834102) 26626 : cluster [DBG] pgmap v27212: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:55.895312+0000 osd.54 (osd.54) 52432 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:17:56.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
debug 2023-07-18T20:17:56.481+0000 7f7fb9f1f700 0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
debug 2023-07-18T20:17:56.897+0000 7f7fb3d18700 0 mon.j@0(leader) e20 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0) v1
debug 2023-07-18T20:17:56.897+0000 7f7fb3d18700 0 log_channel(audit) log [DBG] : from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
audit 2023-07-18T20:17:56.284699+0000 mon.k (mon.1) 19106 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:56.285002+0000 mon.k (mon.1) 19107 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:17:56.483369+0000 mon.j (mon.0) 21859 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:17:56.483632+0000 mon.j (mon.0) 21860 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
audit 2023-07-18T20:17:56.900205+0000 mon.j (mon.0) 21861 : audit [DBG] from='mgr.12834102 10.1.182.12:0/2600390027' entity='mgr.b' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cluster 2023-07-18T20:17:56.909250+0000 osd.54 (osd.54) 52433 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:56.997289+0000 mgr.b (mgr.12834102) 26627 : cluster [DBG] pgmap v27213: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:57.901059+0000 osd.54 (osd.54) 52434 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:58.892604+0000 osd.54 (osd.54) 52435 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:17:58.998242+0000 mgr.b (mgr.12834102) 26628 : cluster [DBG] pgmap v27214: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:17:59.907304+0000 osd.54 (osd.54) 52436 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:18:01.193+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
audit 2023-07-18T20:18:00.494513+0000 mon.k (mon.1) 19108 : audit [DBG] from='client.? 10.1.207.132:0/2453039398' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json"}]: dispatch
cluster 2023-07-18T20:18:00.867249+0000 osd.54 (osd.54) 52437 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:18:00.999233+0000 mgr.b (mgr.12834102) 26629 : cluster [DBG] pgmap v27215: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:18:01.835258+0000 osd.54 (osd.54) 52438 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:18:02.836417+0000 osd.54 (osd.54) 52439 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
cluster 2023-07-18T20:18:03.000216+0000 mgr.b (mgr.12834102) 26630 : cluster [DBG] pgmap v27216: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:18:03.856765+0000 osd.54 (osd.54) 52440 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:18:04.855444+0000 mon.l (mon.2) 15753 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:18:04.855795+0000 mon.l (mon.2) 15754 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
cluster 2023-07-18T20:18:04.898609+0000 osd.54 (osd.54) 52441 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
debug 2023-07-18T20:18:06.193+0000 7f7fb651d700 1 mon.j@0(leader).osd e20738 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 335544320 full_alloc: 369098752 kv_alloc: 310378496
cluster 2023-07-18T20:18:05.001152+0000 mgr.b (mgr.12834102) 26631 : cluster [DBG] pgmap v27217: 196 pgs: 186 unknown, 1 incomplete, 9 down; 19 B data, 1.2 GiB used, 2.3 TiB / 2.3 TiB avail
cluster 2023-07-18T20:18:05.898551+0000 osd.54 (osd.54) 52442 : cluster [WRN] 64 slow requests (by type [ 'queued for pg' : 64 ] most affected pool [ 'ceph-erasure-default-data' : 64 ])
audit 2023-07-18T20:18:06.289880+0000 mon.k (mon.1) 19109 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
audit 2023-07-18T20:18:06.290168+0000 mo