rook-ceph Shared Filesystem failure
Problem: Maybe somebody can help me with a (possibly simple) problem: a failing `FlexVolume` (shared filesystem) mount on Rook/Ceph. Although I can create the operator, the cluster, and the filesystem, I'm not able to mount the filesystem in any of my pods. Creating and mounting a PV/PVC works fine. I've collected all the information below, but I'm out of options right now.
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 2
  dataPools:
  - replicated:
      size: 2
  metadataServer:
    activeCount: 1
    activeStandby: true
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: rook-ceph
spec:
  selector:
    app: hello
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello
  namespace: rook-ceph
  labels:
    app.kubernetes.io/name: hello
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: hello.lab
    http:
      paths:
      - path: /
        backend:
          serviceName: hello
          servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: hello
          mountPath: /mnt/data
      volumes:
      - name: hello
        flexVolume:
          driver: ceph.rook.io/rook
          fsType: ceph
          options:
            fsName: cephfs
            clusterNamespace: rook-ceph
            path: "/hello"
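A hunch worth testing (an assumption based on the `mount error 2 = No such file or directory` in the logs below, not a confirmed Rook behavior): the FlexVolume `path` option points at `/hello`, a directory that does not yet exist on the freshly created CephFS; only the root `/` is guaranteed to exist. A minimal sketch of the same volume mounting the root instead:

```yaml
volumes:
- name: hello
  flexVolume:
    driver: ceph.rook.io/rook
    fsType: ceph
    options:
      fsName: cephfs
      clusterNamespace: rook-ceph
      # "/" always exists on a new CephFS; a subtree like "/hello"
      # must be created inside the filesystem before it can be mounted.
      path: "/"
```

If this mounts, the original failure is just the missing `/hello` directory rather than a broken driver.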
$ kubectl -n rook-ceph-system logs rook-ceph-operator-cdc686667-jrd4b -f
--------------------------------------------------------------------
2019-03-09 21:51:17.168570 I | exec: Running command: ceph fs get cephfs --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/539274420
2019-03-09 21:51:24.666670 I | exec: Error ENOENT: filesystem 'cephfs' not found
2019-03-09 21:51:24.666959 I | exec: Running command: ceph fs ls --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/696427139
2019-03-09 21:51:31.473011 I | cephmds: Creating file system cephfs
2019-03-09 21:51:31.473179 I | exec: Running command: ceph osd crush rule create-simple cephfs-metadata default host --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/508357126
2019-03-09 21:51:39.273146 I | exec: Running command: ceph osd pool create cephfs-metadata 0 replicated cephfs-metadata --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/177237933
2019-03-09 21:51:47.469623 I | exec: pool 'cephfs-metadata' created
2019-03-09 21:51:47.470005 I | exec: Running command: ceph osd pool set cephfs-metadata size 2 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/810236712
2019-03-09 21:52:01.866336 I | exec: set pool 2 size to 2
2019-03-09 21:52:01.866650 I | exec: Running command: ceph osd pool application enable cephfs-metadata cephfs --yes-i-really-mean-it --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/244009626
2019-03-09 21:52:16.768821 I | exec: enabled application 'cephfs' on pool 'cephfs-metadata'
2019-03-09 21:52:16.769013 I | cephclient: creating replicated pool cephfs-metadata succeeded, buf:
2019-03-09 21:52:16.769144 I | exec: Running command: ceph osd crush rule create-simple cephfs-data0 default host --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/398704860
2019-03-09 21:52:25.469450 I | exec: Running command: ceph osd pool create cephfs-data0 0 replicated cephfs-data0 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/169385099
2019-03-09 21:52:33.372149 I | exec: pool 'cephfs-data0' created
2019-03-09 21:52:33.373145 I | exec: Running command: ceph osd pool set cephfs-data0 size 2 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/157204590
2019-03-09 21:52:41.466424 I | exec: set pool 3 size to 2
2019-03-09 21:52:41.466709 I | exec: Running command: ceph osd pool application enable cephfs-data0 cephfs --yes-i-really-mean-it --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/936065781
2019-03-09 21:52:49.966578 I | exec: enabled application 'cephfs' on pool 'cephfs-data0'
2019-03-09 21:52:49.966839 I | cephclient: creating replicated pool cephfs-data0 succeeded, buf:
2019-03-09 21:52:49.967524 I | exec: Running command: ceph fs new cephfs cephfs-metadata cephfs-data0 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/641075951
2019-03-09 21:53:03.666447 I | exec: new fs with metadata pool 2 and data pool 3
2019-03-09 21:53:03.666631 I | cephmds: created file system cephfs on 1 data pool(s) and metadata pool cephfs-metadata
2019-03-09 21:53:03.666765 I | exec: Running command: ceph fs get cephfs --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/876084610
2019-03-09 21:53:10.766873 I | op-file: start running mdses for file system cephfs
2019-03-09 21:53:10.771827 I | op-file: legacy mds deployment rook-ceph-mds-cephfs not found, no update needed
2019-03-09 21:53:10.776890 I | exec: Running command: ceph auth get-or-create-key mds.cephfs-a osd allow * mds allow mon allow profile mds --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/093288185
2019-03-09 21:53:18.286530 I | exec: Running command: ceph auth get-or-create-key mds.cephfs-b osd allow * mds allow mon allow profile mds --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/388432787
$ kubectl describe deployments.apps hello
--------------------------------------------------------------------
Name:                   hello
Namespace:              default
CreationTimestamp:      Sat, 09 Mar 2019 23:03:05 +0100
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=hello
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=hello
  Containers:
   hello:
    Image:        paulbouwer/hello-kubernetes:1.5
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /mnt/data from hello (rw)
  Volumes:
   hello:
    Type:       FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
    Driver:     ceph.rook.io/rook
    FSType:     ceph
    SecretRef:  nil
    ReadOnly:   false
    Options:    map[path:/hello clusterNamespace:rook-ceph fsName:cephfs]
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   hello-7b77d8ddc7 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m52s  deployment-controller  Scaled up replica set hello-7b77d8ddc7 to 1
$ kubectl describe pod hello-7b77d8ddc7-g6v5t
--------------------------------------------------------------------
Name:           hello-7b77d8ddc7-g6v5t
Namespace:      default
Node:           juju-f8e96f-5/10.1.1.254
Start Time:     Sat, 09 Mar 2019 23:03:05 +0100
Labels:         app=hello
                pod-template-hash=7b77d8ddc7
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/hello-7b77d8ddc7
Containers:
  hello:
    Container ID:
    Image:          paulbouwer/hello-kubernetes:1.5
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/data from hello (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mj4xv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  hello:
    Type:       FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
    Driver:     ceph.rook.io/rook
    FSType:     ceph
    SecretRef:  nil
    ReadOnly:   false
    Options:    map[clusterNamespace:rook-ceph fsName:cephfs path:/hello]
  default-token-mj4xv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mj4xv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age    From                    Message
  ----     ------       ----   ----                    -------
  Normal   Scheduled    5m30s  default-scheduler       Successfully assigned default/hello-7b77d8ddc7-g6v5t to juju-f8e96f-5
  Warning  FailedMount  5m30s  kubelet, juju-f8e96f-5  MountVolume.SetUp failed for volume "hello" : mount command failed, status: Failure, reason: failed to mount filesystem cephfs to /var/lib/kubelet/pods/1d9b05e2-42b7-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.146:6790,10.152.183.115:6790,10.152.183.130:6790:/hello and options [name=admin secret=AQCXJoRctCMCEBAAhN3ffupSOv6S95f99/epFw== mds_namespace=cephfs]: mount failed: exit status 2
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1d9b05e2-42b7-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCXJoRctCMCEBAAhN3ffupSOv6S95f99/epFw==,mds_namespace=cephfs 10.152.183.146:6790,10.152.183.115:6790,10.152.183.130:6790:/hello /var/lib/kubelet/pods/1d9b05e2-42b7-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello
Output: Running scope as unit: run-r5fbeccb16b56476c81a068b96de07e9d.scope
mount error 2 = No such file or directory
  ...skipping (seven more identical FailedMount events)...
  Warning  FailedMount  78s (x2 over 3m21s)  kubelet, juju-f8e96f-5  (combined from similar events): MountVolume.SetUp failed for volume "hello" : mount command failed, status: Failure, reason: failed to mount filesystem cephfs to /var/lib/kubelet/pods/1d9b05e2-42b7-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.146:6790,10.152.183.115:6790,10.152.183.130:6790:/hello and options [name=admin secret=AQCXJoRctCMCEBAAhN3ffupSOv6S95f99/epFw== mds_namespace=cephfs]: mount failed: exit status 2
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1d9b05e2-42b7-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCXJoRctCMCEBAAhN3ffupSOv6S95f99/epFw==,mds_namespace=cephfs 10.152.183.146:6790,10.152.183.115:6790,10.152.183.130:6790:/hello /var/lib/kubelet/pods/1d9b05e2-42b7-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello
Output: Running scope as unit: run-r6a0d1b7d48dd4b47b0bbba5e16ec2b64.scope
mount error 2 = No such file or directory
  Warning  FailedMount  69s (x2 over 3m27s)  kubelet, juju-f8e96f-5  Unable to mount volumes for pod "hello-7b77d8ddc7-g6v5t_default(1d9b05e2-42b7-11e9-b373-00505636f89f)": timeout expired waiting for volumes to attach or mount for pod "default"/"hello-7b77d8ddc7-g6v5t". list of unmounted volumes=[hello]. list of unattached volumes=[hello default-token-mj4xv]
Mar 09 12:12:55 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:55.661742 1204 reconciler.go:301] Volume detached for volume "rook-ceph-system-token-fq8jx" (UniqueName: "kubernetes.io/secret/8f00ba11-4261-11e9-b373-00505636f89f-rook-ceph-system-token-fq8jx") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:55 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:55.661789 1204 reconciler.go:301] Volume detached for volume "dev" (UniqueName: "kubernetes.io/host-path/8f00ba11-4261-11e9-b373-00505636f89f-dev") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:55 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:55.661804 1204 reconciler.go:301] Volume detached for volume "udev" (UniqueName: "kubernetes.io/host-path/8f00ba11-4261-11e9-b373-00505636f89f-udev") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:55 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:55.661824 1204 reconciler.go:301] Volume detached for volume "sys" (UniqueName: "kubernetes.io/host-path/8f00ba11-4261-11e9-b373-00505636f89f-sys") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567421 1204 reconciler.go:181] operationExecutor.UnmountVolume started for volume "dev" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-dev") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f")
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567517 1204 reconciler.go:181] operationExecutor.UnmountVolume started for volume "sys" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-sys") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f")
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567545 1204 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-dev" (OuterVolumeSpecName: "dev") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567592 1204 reconciler.go:181] operationExecutor.UnmountVolume started for volume "flexvolume" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-flexvolume") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f")
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567627 1204 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-sys" (OuterVolumeSpecName: "sys") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567657 1204 reconciler.go:181] operationExecutor.UnmountVolume started for volume "libmodules" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-libmodules") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f")
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567671 1204 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-flexvolume" (OuterVolumeSpecName: "flexvolume") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f"). InnerVolumeSpecName "flexvolume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567742 1204 reconciler.go:181] operationExecutor.UnmountVolume started for volume "rook-ceph-system-token-fq8jx" (UniqueName: "kubernetes.io/secret/8edbbabb-4261-11e9-b373-00505636f89f-rook-ceph-system-token-fq8jx") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f")
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567798 1204 reconciler.go:301] Volume detached for volume "dev" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-dev") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567821 1204 reconciler.go:301] Volume detached for volume "sys" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-sys") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567853 1204 reconciler.go:301] Volume detached for volume "flexvolume" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-flexvolume") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.567930 1204 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-libmodules" (OuterVolumeSpecName: "libmodules") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f"). InnerVolumeSpecName "libmodules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.598873 1204 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8edbbabb-4261-11e9-b373-00505636f89f-rook-ceph-system-token-fq8jx" (OuterVolumeSpecName: "rook-ceph-system-token-fq8jx") pod "8edbbabb-4261-11e9-b373-00505636f89f" (UID: "8edbbabb-4261-11e9-b373-00505636f89f"). InnerVolumeSpecName "rook-ceph-system-token-fq8jx". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.668142 1204 reconciler.go:301] Volume detached for volume "libmodules" (UniqueName: "kubernetes.io/host-path/8edbbabb-4261-11e9-b373-00505636f89f-libmodules") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:12:57 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:12:57.668209 1204 reconciler.go:301] Volume detached for volume "rook-ceph-system-token-fq8jx" (UniqueName: "kubernetes.io/secret/8edbbabb-4261-11e9-b373-00505636f89f-rook-ceph-system-token-fq8jx") on node "juju-f8e96f-3" DevicePath ""
Mar 09 12:13:30 juju-f8e96f-3 kubelet.daemon[1204]: W0309 12:13:30.332969 1204 reflector.go:270] object-"default"/"weave-scope-agent-weave-scope-token-zrfrr": watch of *v1.Secret ended with: too old resource version: 136455 (137344)
Mar 09 12:14:00 juju-f8e96f-3 kubelet.daemon[1204]: W0309 12:14:00.333363 1204 reflector.go:270] object-"kube-system"/"traefik-token-scp95": watch of *v1.Secret ended with: too old resource version: 136455 (137344)
Mar 09 12:14:02 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:14:02.970858 1204 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "rook-ceph-system-token-kkvlm" (UniqueName: "kubernetes.io/secret/d3bc1ee1-4264-11e9-b373-00505636f89f-rook-ceph-system-token-kkvlm") pod "rook-ceph-operator-cdc686667-vjzhg" (UID: "d3bc1ee1-4264-11e9-b373-00505636f89f")
Mar 09 12:14:03 juju-f8e96f-3 kubelet.daemon[1204]: W0309 12:14:03.092703 1204 container.go:422] Failed to get RecentStats("/system.slice/run-r14ba142237434ce7a8ca931c2865422e.scope") while determining the next housekeeping: unable to find data in memory cache
Mar 09 12:14:04 juju-f8e96f-3 kubelet.daemon[1204]: W0309 12:14:04.105457 1204 pod_container_deletor.go:75] Container "98f2b1b1936c896c3e9d4f2363afba4683d91dc7835941e62d0aae13710a7631" not found in pod's containers
Mar 09 12:14:06 juju-f8e96f-3 kubelet.daemon[1204]: I0309 12:14:06.688150 1204 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "sys" (UniqueName: "kubernetes.io/host-path/d5f6cee6-4264-11e
Mar 07 21:40:05 juju-f8e96f-6 kubelet.daemon[15581]: I0307 21:40:05.935224 15581 policy_none.go:42] [cpumanager] none policy: Start
Mar 07 21:40:05 juju-f8e96f-6 kubelet.daemon[15581]: W0307 21:40:05.952577 15581 manager.go:528] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Mar 07 21:40:05 juju-f8e96f-6 kubelet.daemon[15581]: I0307 21:40:05.952940 15581 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 07 21:40:15 juju-f8e96f-6 kubelet.daemon[15581]: I0307 21:40:15.975766 15581 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 07 21:40:26 juju-f8e96f-6 kubelet.daemon[15581]: I0307 21:40:26.001870 15581 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 07 21:40:36 juju-f8e96f-6 kubelet.daemon[15581]: I0307 21:40:36.040851 15581 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
...skipping...
led: exit status 2\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello\nOutput: Running scope as unit: run-r976c5c30be9544e6957b3b81c575116b.scope\nmount error 2 = No such file or directory\n\n"
Mar 10 00:18:47 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:18:47.598290 26520 kubelet.go:1680] Unable to mount volumes for pod "hello-7b77d8ddc7-lm9q8_default(079e5309-42c5-11e9-b373-00505636f89f)": timeout expired waiting for volumes to attach or mount for pod "default"/"hello-7b77d8ddc7-lm9q8". list of unmounted volumes=[hello]. list of unattached volumes=[hello default-token-mj4xv]; skipping pod
Mar 10 00:18:47 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:18:47.598727 26520 pod_workers.go:190] Error syncing pod 079e5309-42c5-11e9-b373-00505636f89f ("hello-7b77d8ddc7-lm9q8_default(079e5309-42c5-11e9-b373-00505636f89f)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"hello-7b77d8ddc7-lm9q8". list of unmounted volumes=[hello]. list of unattached volumes=[hello default-token-mj4xv]
Mar 10 00:19:29 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:19:29.138128 26520 driver-call.go:274] mount command failed, status: Failure, reason: failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2
Mar 10 00:19:29 juju-f8e96f-6 kubelet.daemon[26520]: Mounting command: systemd-run
Mar 10 00:19:29 juju-f8e96f-6 kubelet.daemon[26520]: Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello
Mar 10 00:19:29 juju-f8e96f-6 kubelet.daemon[26520]: Output: Running scope as unit: run-re13a0dc4cbff4eafbca9ae64223534e3.scope
Mar 10 00:19:29 juju-f8e96f-6 kubelet.daemon[26520]: mount error 2 = No such file or directory
Mar 10 00:19:29 juju-f8e96f-6 kubelet.daemon[26520]: W0310 00:19:29.138184 26520 driver-call.go:150] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ceph.rook.io~rook/rook, args: [mount /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello {"clusterNamespace":"rook-ceph","fsName":"cephfs","kubernetes.io/fsType":"ceph","kubernetes.io/pod.name":"hello-7b77d8ddc7-lm9q8","kubernetes.io/pod.namespace":"default","kubernetes.io/pod.uid":"079e5309-42c5-11e9-b373-00505636f89f","kubernetes.io/pvOrVolumeName":"hello","kubernetes.io/readwrite":"rw","kubernetes.io/serviceAccount.name":"default","path":"/hello"}], error: exit status 1, output: "{\"status\":\"Failure\",\"message\":\"failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2\\nMounting command: systemd-run\\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello\\nOutput: Running scope as unit: run-re13a0dc4cbff4eafbca9ae64223534e3.scope\\nmount error 2 = No such file or directory\\n\\n\"}\n"
Mar 10 00:19:29 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:19:29.138431 26520 nestedpendingoperations.go:267] Operation for "\"flexvolume-ceph.rook.io/rook/079e5309-42c5-11e9-b373-00505636f89f-hello\" (\"079e5309-42c5-11e9-b373-00505636f89f\")" failed. No retries permitted until 2019-03-10 00:21:31.138341397 +0000 UTC m=+3054.858194456 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"hello\" (UniqueName: \"flexvolume-ceph.rook.io/rook/079e5309-42c5-11e9-b373-00505636f89f-hello\") pod \"hello-7b77d8ddc7-lm9q8\" (UID: \"079e5309-42c5-11e9-b373-00505636f89f\") : mount command failed, status: Failure, reason: failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello\nOutput: Running scope as unit: run-re13a0dc4cbff4eafbca9ae64223534e3.scope\nmount error 2 = No such file or directory\n\n" | |
Mar 10 00:21:01 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:21:01.598389 26520 kubelet.go:1680] Unable to mount volumes for pod "hello-7b77d8ddc7-lm9q8_default(079e5309-42c5-11e9-b373-00505636f89f)": timeout expired waiting for volumes to attach or mount for pod "default"/"hello-7b77d8ddc7-lm9q8". list of unmounted volumes=[hello]. list of unattached volumes=[hello default-token-mj4xv]; skipping pod | |
Mar 10 00:21:01 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:21:01.598454 26520 pod_workers.go:190] Error syncing pod 079e5309-42c5-11e9-b373-00505636f89f ("hello-7b77d8ddc7-lm9q8_default(079e5309-42c5-11e9-b373-00505636f89f)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"hello-7b77d8ddc7-lm9q8". list of unmounted volumes=[hello]. list of unattached volumes=[hello default-token-mj4xv] | |
Mar 10 00:21:31 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:21:31.344827 26520 driver-call.go:274] mount command failed, status: Failure, reason: failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2 | |
Mar 10 00:21:31 juju-f8e96f-6 kubelet.daemon[26520]: Mounting command: systemd-run | |
Mar 10 00:21:31 juju-f8e96f-6 kubelet.daemon[26520]: Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello | |
Mar 10 00:21:31 juju-f8e96f-6 kubelet.daemon[26520]: Output: Running scope as unit: run-r018c4193f22c4b85a952d25ee1e44575.scope | |
Mar 10 00:21:31 juju-f8e96f-6 kubelet.daemon[26520]: mount error 2 = No such file or directory | |
Mar 10 00:21:31 juju-f8e96f-6 kubelet.daemon[26520]: W0310 00:21:31.346295 26520 driver-call.go:150] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ceph.rook.io~rook/rook, args: [mount /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello {"clusterNamespace":"rook-ceph","fsName":"cephfs","kubernetes.io/fsType":"ceph","kubernetes.io/pod.name":"hello-7b77d8ddc7-lm9q8","kubernetes.io/pod.namespace":"default","kubernetes.io/pod.uid":"079e5309-42c5-11e9-b373-00505636f89f","kubernetes.io/pvOrVolumeName":"hello","kubernetes.io/readwrite":"rw","kubernetes.io/serviceAccount.name":"default","path":"/hello"}], error: exit status 1, output: "{\"status\":\"Failure\",\"message\":\"failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2\\nMounting command: systemd-run\\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello\\nOutput: Running scope as unit: run-r018c4193f22c4b85a952d25ee1e44575.scope\\nmount error 2 = No such file or directory\\n\\n\"}\n" | |
Mar 10 00:21:31 juju-f8e96f-6 kubelet.daemon[26520]: E0310 00:21:31.346613 26520 nestedpendingoperations.go:267] Operation for "\"flexvolume-ceph.rook.io/rook/079e5309-42c5-11e9-b373-00505636f89f-hello\" (\"079e5309-42c5-11e9-b373-00505636f89f\")" failed. No retries permitted until 2019-03-10 00:23:33.34652141 +0000 UTC m=+3177.066374468 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"hello\" (UniqueName: \"flexvolume-ceph.rook.io/rook/079e5309-42c5-11e9-b373-00505636f89f-hello\") pod \"hello-7b77d8ddc7-lm9q8\" (UID: \"079e5309-42c5-11e9-b373-00505636f89f\") : mount command failed, status: Failure, reason: failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello\nOutput: Running scope as unit: run-r018c4193f22c4b85a952d25ee1e44575.scope\nmount error 2 = No such file or directory\n\n" | |
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello | |
Output: Running scope as unit: run-rea4c9710a750405b9f2a1cb08381643d.scope | |
mount error 2 = No such file or directory | |
2019-03-10 00:35:47.036201 E | flexdriver: failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2 | |
Mounting command: systemd-run | |
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello | |
Output: Running scope as unit: run-rea4c9710a750405b9f2a1cb08381643d.scope | |
mount error 2 = No such file or directory | |
2019-03-10 00:37:49.124473 I | flexdriver: mounting ceph filesystem cephfs on /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello | |
2019-03-10 00:37:49.144168 I | cephmon: parsing mon endpoints: b=10.152.183.64:6790,c=10.152.183.238:6790,a=10.152.183.33:6790 | |
2019-03-10 00:37:49.144277 I | op-mon: loaded: maxMonID=2, mons=map[b:0xc00051dec0 c:0xc00051df80 a:0xc00058a020], mapping=&{Node:map[a:0xc000590750 b:0xc000590780 c:0xc0005907b0] Port:map[]} | |
2019-03-10 00:37:49.150274 I | flexdriver: mounting ceph filesystem cephfs on 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello | |
2019-03-10 00:37:49.261240 I | flexdriver: E0310 00:37:49.259055 29973 mount_linux.go:151] Mount failed: exit status 2 | |
Mounting command: systemd-run | |
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello | |
Output: Running scope as unit: run-r71f3a8c43b0c49d9bc5ec6667dc8d078.scope | |
mount error 2 = No such file or directory | |
2019-03-10 00:37:49.261622 E | flexdriver: failed to mount filesystem cephfs to /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello with monitor 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello and options [name=admin secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw== mds_namespace=cephfs]: mount failed: exit status 2 | |
Mounting command: systemd-run | |
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello --scope -- mount -t ceph -o name=admin,secret=AQCpTYRc2jhqHxAAIOpBATWAbiaAb5f1DDhRWw==,mds_namespace=cephfs 10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/hello /var/lib/kubelet/pods/079e5309-42c5-11e9-b373-00505636f89f/volumes/ceph.rook.io~rook/hello | |
Output: Running scope as unit: run-r71f3a8c43b0c49d9bc5ec6667dc8d078.scope | |
mount error 2 = No such file or directory |
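The repeated `mount error 2 = No such file or directory` comes from the kernel CephFS client, not from rook itself. In my experience this usually points to one of three things: the node's kernel is too old for the `mds_namespace=` mount option (it needs roughly Linux 4.8+), the `ceph` kernel module is missing on the node, or the sub-path being mounted (`/hello` here) does not exist inside the CephFS root yet. A rough diagnostic sketch, run directly on the failing node (the monitor IPs are taken from the logs above; the admin key placeholder is hypothetical):

```shell
#!/bin/sh
# Diagnostic sketch for "mount error 2" on a rook-ceph FlexVolume mount.
# Assumption: run as root on the node that logged the error.

# 1. mds_namespace= requires a reasonably recent kernel (~4.8+);
#    older kernels reject the option and the mount fails.
uname -r

# 2. Confirm the CephFS kernel module is available and loads.
modprobe ceph && lsmod | grep '^ceph'

# 3. ENOENT can also mean the sub-path (/hello) does not exist in the
#    CephFS root. Mount the root ("/") first and create it by hand:
mkdir -p /mnt/cephfs-root
mount -t ceph -o name=admin,secret=<admin-key>,mds_namespace=cephfs \
  10.152.183.64:6790,10.152.183.238:6790,10.152.183.33:6790:/ \
  /mnt/cephfs-root
mkdir -p /mnt/cephfs-root/hello
umount /mnt/cephfs-root
```

If the root mount (`:/`) succeeds where the `:/hello` mount fails, the missing sub-path is the culprit; if even the root mount returns error 2, look at the kernel version or module first.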