ceph-pr-35756-test-failure-analysis-1.md

Analysis:

  • Failures in both test cases are due to the command issued by the test timing out (the harness wraps each CLI call in `timeout 120`, which kills it and returns status 124)
  • In the mgr logs, after the command is received, the following is seen:
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(prefix:fs subvolume getpath, sub_name:subvolume_0000000000545095, target:['mon-mgr', ''], vol_name:cephfs) < ""
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes DEBUG mgr_util] self.fs_id=46, fs_id=47
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes WARNING mgr_util] filesystem id changed for volume 'cephfs', reconnecting...
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes DEBUG mgr_util] self.fs_id=46, fs_id=47
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes INFO mgr_util] aborting connection from cephfs 'cephfs'
2020-08-19T16:30:55.586+0000 7f43fcf9f700  2 client.16292 unmounting (abort)
2020-08-19T16:30:55.586+0000 7f43fcf9f700  5 client.16292 _closed_mds_session mds.0 seq 219
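
The "filesystem id changed" / "aborting connection" pair indicates that the fscid the volumes plugin had cached (46) no longer matches the fscid in the current FSMap (47), i.e. the filesystem named 'cephfs' was deleted and recreated at some point. Below is a minimal sketch of that validation step, loosely modeled on the connection handling in src/pybind/mgr/mgr_util.py; the class and method names here are illustrative, not the actual API:

```python
class StaleFSHandleError(Exception):
    pass


class FSHandleCache:
    """Cache one libcephfs handle per volume; drop it when the fscid changes."""

    def __init__(self, mgr, fs_name):
        self.mgr = mgr            # mgr module handle, used only for get('fs_map')
        self.fs_name = fs_name
        self.fs_id = None         # fscid the cached handle was opened against
        self.fs_handle = None     # cached connection (stubbed below)

    def _lookup_fs_id(self):
        # Resolve the volume's current fscid from the FSMap the mgr holds.
        fs_map = self.mgr.get('fs_map')
        for fs in fs_map['filesystems']:
            if fs['mdsmap']['fs_name'] == self.fs_name:
                return fs['id']
        raise StaleFSHandleError("volume '%s' not found" % self.fs_name)

    def _connect(self, fs_id):
        # Stand-in for mounting libcephfs against filesystem fs_id.
        return object()

    def get_fs_handle(self):
        fs_id = self._lookup_fs_id()
        if self.fs_handle is not None and self.fs_id != fs_id:
            # Corresponds to "filesystem id changed for volume ...,
            # reconnecting...": the filesystem was deleted and recreated, so
            # the cached mount is stale. Drop it ("aborting connection from
            # cephfs ...") and fall through to open a fresh one.
            self.fs_handle = None
        if self.fs_handle is None:
            self.fs_handle = self._connect(fs_id)
            self.fs_id = fs_id
        return self.fs_handle
```

If the reconnect path then blocks (for example while tearing down the stale mount or opening the new one), the command handler never replies, and the caller's 120s timeout would fire, which would fit the status-124 failures below.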

Detailed logs for the failures:

From: /ceph/teuthology-archive/rraja-2020-08-19_13:10:02-fs-ajarr-wip-46163-distro-basic-smithi/5359773/teuthology.log

2020-08-19T16:30:55.591 INFO:teuthology.orchestra.run.smithi169.stderr:2020-08-19T16:30:55.585+0000 7fc67cccf700  1 -- 172.21.15.169:0/312227723 --> [v2:172.21.15.26:6834/15695,v1:172.21.15.26:6835/15695] -- mgr_command(tid 0: {"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "subvolume_0000000000545095", "target": ["mon-mgr", ""]}) v1 -- 0x7fc678067320 con 0x7fc6540542d0
2020-08-19T16:30:57.896 INFO:teuthology.orchestra.run.smithi026:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:30:57.899 INFO:teuthology.orchestra.run.smithi169:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:31:27.955 INFO:teuthology.orchestra.run.smithi026:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:31:27.990 INFO:teuthology.orchestra.run.smithi169:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:31:58.082 INFO:teuthology.orchestra.run.smithi026:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:31:58.097 INFO:teuthology.orchestra.run.smithi169:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:32:28.191 INFO:teuthology.orchestra.run.smithi026:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:32:28.206 INFO:teuthology.orchestra.run.smithi169:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:32:55.292 DEBUG:teuthology.orchestra.run:got remote process result: 124

snip

2020-08-19T16:32:57.215 INFO:tasks.cephfs_test_runner:test_subvolume_no_upgrade_v1_to_v2 (tasks.cephfs.test_volumes.TestVolumes) ... ERROR
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:======================================================================
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:ERROR: test_subvolume_no_upgrade_v1_to_v2 (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 1878, in test_subvolume_no_upgrade_v1_to_v2
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:    subvolpath1 = self._get_subvolume_path(self.volname, subvolume1)
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 207, in _get_subvolume_path
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:    path = self._fs_cmd(*args)
2020-08-19T16:32:57.216 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 34, in _fs_cmd
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd("fs", *args)
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/ceph_manager.py", line 1357, in raw_cluster_cmd
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:    stdout=StringIO(),
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 204, in run
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:    r.wait()
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi169 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs subvolume getpath cephfs subvolume_0000000000545095'
2020-08-19T16:32:57.217 INFO:tasks.cephfs_test_runner:
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:Ran 46 tests in 1729.548s
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:FAILED (errors=1)
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:======================================================================
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:ERROR: test_subvolume_no_upgrade_v1_to_v2 (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 1878, in test_subvolume_no_upgrade_v1_to_v2
2020-08-19T16:32:57.218 INFO:tasks.cephfs_test_runner:    subvolpath1 = self._get_subvolume_path(self.volname, subvolume1)
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 207, in _get_subvolume_path
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:    path = self._fs_cmd(*args)
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 34, in _fs_cmd
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd("fs", *args)
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/ceph_manager.py", line 1357, in raw_cluster_cmd
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:    stdout=StringIO(),
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 204, in run
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:    r.wait()
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-08-19T16:32:57.219 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2020-08-19T16:32:57.220 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi169 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs subvolume getpath cephfs subvolume_0000000000545095'
2020-08-19T16:32:57.220 INFO:tasks.cephfs_test_runner:
2020-08-19T16:32:57.220 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 93, in run_tasks
    manager.__enter__()
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs_test_runner.py", line 211, in task
    raise RuntimeError("Test failure: {0}".format(", ".join(bad_tests)))
RuntimeError: Test failure: test_subvolume_no_upgrade_v1_to_v2 (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:32:57.250 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/sepia/teuthology/?q=29544f29407347fc84e18f24d7cb81bc
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 93, in run_tasks
    manager.__enter__()
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs_test_runner.py", line 211, in task
    raise RuntimeError("Test failure: {0}".format(", ".join(bad_tests)))
RuntimeError: Test failure: test_subvolume_no_upgrade_v1_to_v2 (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:32:57.250 DEBUG:teuthology.run_tasks:Unwinding manager cephfs_test_runner
2020-08-19T16:32:57.263 DEBUG:teuthology.run_tasks:Unwinding manager ceph-fuse
2020-08-19T16:32:57.295 INFO:tasks.ceph_fuse:Unmounting ceph-fuse clients...
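
For reference, status 124 is the exit code GNU coreutils `timeout` returns when it kills the wrapped command at the deadline; the failing command above is wrapped in `timeout 120`, so this is a plain 120-second hang rather than an error reported by ceph itself. A quick self-contained illustration:

```python
import subprocess

# GNU `timeout` exits with 124 when it kills the command it wraps --
# the same status teuthology reports for the hung `fs subvolume` calls.
result = subprocess.run(['timeout', '2', 'sleep', '10'])
print(result.returncode)  # prints 124
```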

From: /ceph/teuthology-archive/rraja-2020-08-19_13:10:02-fs-ajarr-wip-46163-distro-basic-smithi/5359773/remote/smithi026/log/ceph-mgr.x.log.gz

2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(prefix:fs subvolume getpath, sub_name:subvolume_0000000000545095, target:['mon-mgr', ''], vol_name:cephfs) < ""
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes DEBUG mgr_util] self.fs_id=46, fs_id=47
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes WARNING mgr_util] filesystem id changed for volume 'cephfs', reconnecting...
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes DEBUG mgr_util] self.fs_id=46, fs_id=47
2020-08-19T16:30:55.586+0000 7f43fcf9f700  0 [volumes INFO mgr_util] aborting connection from cephfs 'cephfs'
2020-08-19T16:30:55.586+0000 7f43fcf9f700  2 client.16292 unmounting (abort)
2020-08-19T16:30:55.586+0000 7f43fcf9f700  5 client.16292 _closed_mds_session mds.0 seq 219
2020-08-19T16:30:55.586+0000 7f43fcf9f700  1 -- 172.21.15.26:0/1990121687 >> [v2:172.21.15.169:6800/1391148625,v1:172.21.15.169:6801/1391148625] conn(0x5624472b6400 msgr2=0x562445b8a000 unknown :-1 s=STATE_CLOSED l=0).mark_down
2020-08-19T16:30:55.586+0000 7f43fcf9f700  1 --2- 172.21.15.26:0/1990121687 >> [v2:172.21.15.169:6800/1391148625,v1:172.21.15.169:6801/1391148625] conn(0x5624472b6400 0x562445b8a000 unknown :-1 s=CLOSED pgs=13 cs=0 l=0 rev1=1 rx=0 tx=0).stop
2020-08-19T16:30:55.586+0000 7f43fcf9f700 10 client.16292 remove_session_caps mds.0
2020-08-19T16:30:55.586+0000 7f43fcf9f700 10 client.16292 remove_cap mds.0 on 0x100000007d9.head(faked_ino=0 ref=3 ll_ref=0 cap_refs={} open={} mode=40700 size=0/0 nlink=1 btime=2020-08-19T16:30:20.014095+0000 mtime=2020-08-19T16:30:20.017734+0000 ctime=2020-08-19T16:30:20.017734+0000 caps=pAsLsXsFsx(0=pAsLsXsFsx) COMPLETE parents=0x10000000000.head["_index"] 0x562445ba9180)
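
The same fs_id mismatch (46 vs 47) shows up here along with the full client teardown that follows the abort. One way to confirm on a live cluster that a volume's fscid changed, i.e. that the filesystem was deleted and recreated, is to compare `ceph fs dump` output over time; a small sketch, assuming the usual FSMap JSON layout (`filesystems[].id`, `filesystems[].mdsmap.fs_name`):

```python
import json
import subprocess

# Dump the FSMap and print each filesystem's name and fscid. If the fscid
# for 'cephfs' differs from an earlier dump (46 -> 47 in the logs above),
# the filesystem was torn down and recreated in between.
dump = json.loads(subprocess.check_output(
    ['ceph', 'fs', 'dump', '--format', 'json']))
for fs in dump['filesystems']:
    print(fs['mdsmap']['fs_name'], fs['id'])
```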

From: /ceph/teuthology-archive/rraja-2020-08-19_13:10:02-fs-ajarr-wip-46163-distro-basic-smithi/5359776/teuthology.log

2020-08-19T16:09:19.303 INFO:teuthology.orchestra.run.smithi008.stderr:2020-08-19T16:09:19.301+0000 7fbb89ffb700  1 -- 172.21.15.8:0/611010694 --> [v2:172.21.15.8:6800/34450,v1:172.21.15.8:6801/34450] -- mgr_command(tid 0: {"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "subvolume_0000000000751588", "target": ["mon-mgr", ""]}) v1 -- 0x7fbb7c000d90 con 0x7fbb84050c70
2020-08-19T16:09:30.996 INFO:teuthology.orchestra.run.smithi008:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:09:31.000 INFO:teuthology.orchestra.run.smithi068:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:10:01.112 INFO:teuthology.orchestra.run.smithi008:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:10:01.116 INFO:teuthology.orchestra.run.smithi068:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:10:31.189 INFO:teuthology.orchestra.run.smithi008:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:10:31.199 INFO:teuthology.orchestra.run.smithi068:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:11:01.287 INFO:teuthology.orchestra.run.smithi008:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:11:01.291 INFO:teuthology.orchestra.run.smithi068:> sudo logrotate /etc/logrotate.d/ceph-test.conf
2020-08-19T16:11:19.007 DEBUG:teuthology.orchestra.run:got remote process result: 124

snip

2020-08-19T16:11:20.520 INFO:tasks.cephfs_test_runner:test_clone_subvolume_info (tasks.cephfs.test_volumes.TestVolumes) ... ERROR
2020-08-19T16:11:20.520 INFO:tasks.cephfs_test_runner:
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:======================================================================
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:ERROR: test_clone_subvolume_info (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 1059, in test_clone_subvolume_info
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:    self._fs_cmd("subvolume", "create", self.volname, subvolume)
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 34, in _fs_cmd
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd("fs", *args)
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/ceph_manager.py", line 1357, in raw_cluster_cmd
2020-08-19T16:11:20.521 INFO:tasks.cephfs_test_runner:    stdout=StringIO(),
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 204, in run
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:    r.wait()
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi008 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs subvolume create cephfs subvolume_0000000000751588'
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-19T16:11:20.522 INFO:tasks.cephfs_test_runner:Ran 2 tests in 342.308s
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:FAILED (errors=1)
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:======================================================================
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:ERROR: test_clone_subvolume_info (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 1059, in test_clone_subvolume_info
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:    self._fs_cmd("subvolume", "create", self.volname, subvolume)
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs/test_volumes.py", line 34, in _fs_cmd
2020-08-19T16:11:20.523 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd("fs", *args)
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/ceph_manager.py", line 1357, in raw_cluster_cmd
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:    stdout=StringIO(),
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 204, in run
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:    r.wait()
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi008 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs subvolume create cephfs subvolume_0000000000751588'
2020-08-19T16:11:20.524 INFO:tasks.cephfs_test_runner:
2020-08-19T16:11:20.525 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 93, in run_tasks
    manager.__enter__()
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs_test_runner.py", line 211, in task
    raise RuntimeError("Test failure: {0}".format(", ".join(bad_tests)))
RuntimeError: Test failure: test_clone_subvolume_info (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:11:20.551 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/sepia/teuthology/?q=90125320ba0e4046966e91499bc80b06
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 93, in run_tasks
    manager.__enter__()
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_ajarr-wip-46163/qa/tasks/cephfs_test_runner.py", line 211, in task
    raise RuntimeError("Test failure: {0}".format(", ".join(bad_tests)))
RuntimeError: Test failure: test_clone_subvolume_info (tasks.cephfs.test_volumes.TestVolumes)
2020-08-19T16:11:20.551 DEBUG:teuthology.run_tasks:Unwinding manager cephfs_test_runner
2020-08-19T16:11:20.589 DEBUG:teuthology.run_tasks:Unwinding manager ceph-fuse
2020-08-19T16:11:20.643 INFO:tasks.ceph_fuse:Unmounting ceph-fuse clients...

From: /ceph/teuthology-archive/rraja-2020-08-19_13:10:02-fs-ajarr-wip-46163-distro-basic-smithi/5359776/remote/smithi008/log/ceph-mgr.y.log.gz

2020-08-19T16:09:19.302+0000 7f505d459700  0 [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(prefix:fs subvolume create, sub_name:subvolume_0000000000751588, target:['mon-mgr', ''], vol_name:cephfs) < ""
2020-08-19T16:09:19.302+0000 7f505d459700  0 [volumes DEBUG mgr_util] self.fs_id=2, fs_id=3
2020-08-19T16:09:19.302+0000 7f505d459700  0 [volumes WARNING mgr_util] filesystem id changed for volume 'cephfs', reconnecting...
2020-08-19T16:09:19.302+0000 7f505d459700  0 [volumes DEBUG mgr_util] self.fs_id=2, fs_id=3
2020-08-19T16:09:19.302+0000 7f505d459700  0 [volumes INFO mgr_util] aborting connection from cephfs 'cephfs'
2020-08-19T16:09:19.302+0000 7f505d459700  2 client.5211 unmounting (abort)
2020-08-19T16:09:19.302+0000 7f505d459700  5 client.5211 _closed_mds_session mds.0 seq 3113
2020-08-19T16:09:19.302+0000 7f505d459700  1 -- 172.21.15.8:0/3821269557 >> [v2:172.21.15.8:6834/2935578574,v1:172.21.15.8:6835/2935578574] conn(0x562832af4400 msgr2=0x562832af1400 unknown :-1 s=STATE_CLOSED l=0).mark_down
2020-08-19T16:09:19.302+0000 7f505d459700  1 --2- 172.21.15.8:0/3821269557 >> [v2:172.21.15.8:6834/2935578574,v1:172.21.15.8:6835/2935578574] conn(0x562832af4400 0x562832af1400 unknown :-1 s=CLOSED pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).stop
2020-08-19T16:09:19.302+0000 7f505d459700 10 client.5211 remove_session_caps mds.0
2020-08-19T16:09:19.302+0000 7f505d459700 10 client.5211 remove_cap mds.0 on 0x10000000001.head(faked_ino=0 ref=3 ll_ref=0 cap_refs={} open={} mode=40755 size=0/0 nlink=1 btime=2020-08-19T16:06:02.489199+0000 mtime=2020-08-19T16:08:52.640782+0000 ctime=2020-08-19T16:08:52.640782+0000 caps=pAsLsXsFsx(0=pAsLsXsFsx) parents=0x10000000000.head["_nogroup"] 0x5628324f8b00)