@ztl8702
Created January 2, 2020 19:11
Error EINVAL: invalid command
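
The log below is from the CephFS CSI provisioner in a Rook-managed cluster (clusterID rook-ceph, filesystem testfs). Two CreateVolume attempts for PVC pvc-d82d2341-76f6-44f2-ae7a-486ed0128ccb fail at the same step: the cluster rejects "ceph fs subvolumegroup create" with "no valid command found" / Error EINVAL.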
I0102 18:56:42.629032 1 utils.go:119] ID: 64 GRPC call: /csi.v1.Identity/Probe
I0102 18:56:42.629046 1 utils.go:120] ID: 64 GRPC request: {}
I0102 18:56:42.629349 1 utils.go:125] ID: 64 GRPC response: {}
I0102 18:56:44.416038 1 utils.go:119] ID: 65 GRPC call: /csi.v1.Controller/CreateVolume
I0102 18:56:44.416408 1 utils.go:120] ID: 65 GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-d82d2341-76f6-44f2-ae7a-486ed0128ccb","parameters":{"clusterID":"rook-ceph","fsName":"testfs","pool":"testfs-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":5}}]}
I0102 18:56:44.427999 1 util.go:48] ID: 65 cephfs: EXEC ceph [-m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 --id csi-cephfs-provisioner --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs get testfs --format=json]
I0102 18:56:45.634747 1 util.go:48] ID: 65 cephfs: EXEC ceph [-m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 --id csi-cephfs-provisioner --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs ls --format=json]
I0102 18:56:47.845800 1 fsjournal.go:137] ID: 65 Generated Volume ID (0001-0009-rook-ceph-0000000000000001-a0386a33-2d91-11ea-a4d2-8678eaaaf7de) and subvolume name (csi-vol-a0386a33-2d91-11ea-a4d2-8678eaaaf7de) for request name (pvc-d82d2341-76f6-44f2-ae7a-486ed0128ccb)
I0102 18:56:47.846793 1 util.go:48] ID: 65 cephfs: EXEC ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]
E0102 18:56:50.035139 1 volume.go:93] ID: 65 failed to create subvolume group csi, for the vol csi-vol-a0386a33-2d91-11ea-a4d2-8678eaaaf7de(an error occurred while running (1449) ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]: exit status 22: no valid command found; 10 closest matches:
fs set-default <fs_name>
fs status {<fs>}
fs flag set enable_multiple <val> {--yes-i-really-mean-it}
fs add_data_pool <fs_name>
fs get <fs_name>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client {--yes-i-really-mean-it}
fs reset <fs_name> {--yes-i-really-mean-it}
fs ls
fs fail <fs_name>
fs rm <fs_name> {--yes-i-really-mean-it}
Error EINVAL: invalid command
)
E0102 18:56:50.035652 1 controllerserver.go:55] ID: 65 failed to create volume pvc-d82d2341-76f6-44f2-ae7a-486ed0128ccb: an error occurred while running (1449) ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]: exit status 22: no valid command found; 10 closest matches:
fs set-default <fs_name>
fs status {<fs>}
fs flag set enable_multiple <val> {--yes-i-really-mean-it}
fs add_data_pool <fs_name>
fs get <fs_name>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client {--yes-i-really-mean-it}
fs reset <fs_name> {--yes-i-really-mean-it}
fs ls
fs fail <fs_name>
fs rm <fs_name> {--yes-i-really-mean-it}
Error EINVAL: invalid command
E0102 18:56:51.382752 1 utils.go:123] ID: 65 GRPC error: rpc error: code = Internal desc = an error occurred while running (1449) ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]: exit status 22: no valid command found; 10 closest matches:
fs set-default <fs_name>
fs status {<fs>}
fs flag set enable_multiple <val> {--yes-i-really-mean-it}
fs add_data_pool <fs_name>
fs get <fs_name>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client {--yes-i-really-mean-it}
fs reset <fs_name> {--yes-i-really-mean-it}
fs ls
fs fail <fs_name>
fs rm <fs_name> {--yes-i-really-mean-it}
Error EINVAL: invalid command
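
A quick way to confirm the cause from outside the provisioner is to run the same command from the Rook toolbox. This is a sketch: the toolbox label (app=rook-ceph-tools) is an assumption based on the standard Rook setup, while the namespace and filesystem names are taken from the log above.

# Check which Ceph release the cluster is running (the "fs subvolumegroup"
# commands are provided by the mgr volumes module, which only exists on
# Nautilus v14.2.x and newer):
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}') -- ceph versions

# Reproduce the call the provisioner is making; on an older (pre-Nautilus)
# cluster this returns the same "no valid command found" / EINVAL error:
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}') -- ceph fs subvolumegroup create testfs csi
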
I0102 18:57:23.409386 1 utils.go:119] ID: 66 GRPC call: /csi.v1.Controller/CreateVolume
I0102 18:57:23.409758 1 utils.go:120] ID: 66 GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-d82d2341-76f6-44f2-ae7a-486ed0128ccb","parameters":{"clusterID":"rook-ceph","fsName":"testfs","pool":"testfs-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":5}}]}
I0102 18:57:23.420210 1 util.go:48] ID: 66 cephfs: EXEC ceph [-m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 --id csi-cephfs-provisioner --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs get testfs --format=json]
I0102 18:57:25.033992 1 util.go:48] ID: 66 cephfs: EXEC ceph [-m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 --id csi-cephfs-provisioner --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs ls --format=json]
I0102 18:57:27.002169 1 fsjournal.go:137] ID: 66 Generated Volume ID (0001-0009-rook-ceph-0000000000000001-b7af32c2-2d91-11ea-a4d2-8678eaaaf7de) and subvolume name (csi-vol-b7af32c2-2d91-11ea-a4d2-8678eaaaf7de) for request name (pvc-d82d2341-76f6-44f2-ae7a-486ed0128ccb)
I0102 18:57:27.002756 1 util.go:48] ID: 66 cephfs: EXEC ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]
E0102 18:57:28.622169 1 volume.go:93] ID: 66 failed to create subvolume group csi, for the vol csi-vol-b7af32c2-2d91-11ea-a4d2-8678eaaaf7de(an error occurred while running (1664) ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]: exit status 22: no valid command found; 10 closest matches:
fs set-default <fs_name>
fs status {<fs>}
fs flag set enable_multiple <val> {--yes-i-really-mean-it}
fs add_data_pool <fs_name>
fs get <fs_name>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client {--yes-i-really-mean-it}
fs reset <fs_name> {--yes-i-really-mean-it}
fs ls
fs fail <fs_name>
fs rm <fs_name> {--yes-i-really-mean-it}
Error EINVAL: invalid command
)
E0102 18:57:28.622523 1 controllerserver.go:55] ID: 66 failed to create volume pvc-d82d2341-76f6-44f2-ae7a-486ed0128ccb: an error occurred while running (1664) ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]: exit status 22: no valid command found; 10 closest matches:
fs set-default <fs_name>
fs status {<fs>}
fs flag set enable_multiple <val> {--yes-i-really-mean-it}
fs add_data_pool <fs_name>
fs get <fs_name>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client {--yes-i-really-mean-it}
fs reset <fs_name> {--yes-i-really-mean-it}
fs ls
fs fail <fs_name>
fs rm <fs_name> {--yes-i-really-mean-it}
Error EINVAL: invalid command
E0102 18:57:29.068901 1 utils.go:123] ID: 66 GRPC error: rpc error: code = Internal desc = an error occurred while running (1664) ceph [fs subvolumegroup create testfs csi -m 10.43.149.215:6789,10.43.33.196:6789,10.43.187.206:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped***]: exit status 22: no valid command found; 10 closest matches:
fs set-default <fs_name>
fs status {<fs>}
fs flag set enable_multiple <val> {--yes-i-really-mean-it}
fs add_data_pool <fs_name>
fs get <fs_name>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client {--yes-i-really-mean-it}
fs reset <fs_name> {--yes-i-really-mean-it}
fs ls
fs fail <fs_name>
fs rm <fs_name> {--yes-i-really-mean-it}
Error EINVAL: invalid command
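
Note: the "fs subvolumegroup" family of commands is implemented by the ceph-mgr volumes module introduced in Ceph Nautilus (v14.2.x). This failure therefore most likely means the cluster is still running an older release (or a mgr without the volumes module) while the ceph-csi provisioner expects subvolume support. Upgrading the Ceph cluster image to a Nautilus build, or using a ceph-csi/Rook version that matches the cluster's release, should allow CreateVolume to succeed.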