Created June 28, 2017 19:48
LustreError on lotus-58vm5 from SSI test run
[root@lotus-58vm5 ~]# cat /var/log/messages | grep LustreError
Jun 28 04:18:32 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 04:19:22 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 04:20:12 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 04:21:02 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 04:22:17 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-OST0001-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
Jun 28 04:22:17 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 04:22:21 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-OST0000-osc-MDT0000: operation ost_disconnect to node 0@lo failed: rc = -107
Jun 28 04:48:04 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-OST0001-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
Jun 28 04:48:04 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 05:51:20 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 25240:0:(fid_handler.c:329:__seq_server_alloc_meta()) srv-testfs-MDT0001: Allocated super-sequence failed: rc = -115
Jun 28 05:51:20 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 25240:0:(fid_request.c:227:seq_client_alloc_seq()) cli-testfs-MDT0001: Can't allocate new meta-sequence,rc -115
Jun 28 05:51:20 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 25240:0:(fid_request.c:383:seq_client_alloc_fid()) cli-testfs-MDT0001: Can't allocate new sequence: rc = -115
Jun 28 05:51:20 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 25240:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0001-osd getting update log failed: rc = -115
Jun 28 05:51:21 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 25257:0:(fid_request.c:227:seq_client_alloc_seq()) cli-cli-testfs-MDT0001-osp-MDT0000: Can't allocate new meta-sequence,rc -5
Jun 28 05:51:21 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 25257:0:(fid_request.c:383:seq_client_alloc_fid()) cli-cli-testfs-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5
Jun 28 05:51:21 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-MDT0000-lwp-MDT0001: operation mds_disconnect to node 0@lo failed: rc = -107
Jun 28 05:51:21 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 25257:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0001-osp-MDT0000 getting update log failed: rc = -5
Jun 28 05:51:34 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 05:51:36 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-MDT0000-osp-MDT0001: operation mds_connect to node 0@lo failed: rc = -114
Jun 28 05:51:36 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: Skipped 1 previous similar message
Jun 28 05:55:32 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-MDT0001-lwp-OST0000: operation mds_disconnect to node 0@lo failed: rc = -107
Jun 28 06:01:41 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31979:0:(fid_handler.c:329:__seq_server_alloc_meta()) srv-testfs-MDT0001: Allocated super-sequence failed: rc = -115
Jun 28 06:01:41 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31979:0:(fid_handler.c:329:__seq_server_alloc_meta()) Skipped 1 previous similar message
Jun 28 06:01:41 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31979:0:(fid_request.c:227:seq_client_alloc_seq()) cli-testfs-MDT0001: Can't allocate new meta-sequence,rc -115
Jun 28 06:01:41 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31979:0:(fid_request.c:383:seq_client_alloc_fid()) cli-testfs-MDT0001: Can't allocate new sequence: rc = -115
Jun 28 06:01:41 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31979:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0001-osd getting update log failed: rc = -115
Jun 28 06:01:41 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-MDT0001-osp-MDT0000: operation mds_disconnect to node 0@lo failed: rc = -107
Jun 28 06:01:41 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: Skipped 2 previous similar messages
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32029:0:(fid_handler.c:329:__seq_server_alloc_meta()) srv-testfs-MDT0002: Allocated super-sequence failed: rc = -5
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32029:0:(fid_handler.c:329:__seq_server_alloc_meta()) Skipped 1 previous similar message
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32029:0:(fid_request.c:227:seq_client_alloc_seq()) cli-testfs-MDT0002: Can't allocate new meta-sequence,rc -5
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32029:0:(fid_request.c:383:seq_client_alloc_fid()) cli-testfs-MDT0002: Can't allocate new sequence: rc = -5
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32029:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0002-osd getting update log failed: rc = -5
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32030:0:(osp_object.c:582:osp_attr_get()) testfs-MDT0000-osp-MDT0002:osp_attr_get update error [0x200000009:0x0:0x0]: rc = -5
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32030:0:(lod_sub_object.c:932:lod_sub_prep_llog()) testfs-MDT0002-mdtlov: can't get id from catalogs: rc = -5
Jun 28 06:01:42 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 32030:0:(lod_sub_object.c:932:lod_sub_prep_llog()) Skipped 1 previous similar message
Jun 28 06:01:52 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31995:0:(fid_request.c:227:seq_client_alloc_seq()) cli-cli-testfs-MDT0001-osp-MDT0000: Can't allocate new meta-sequence,rc -5
Jun 28 06:01:52 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31995:0:(fid_request.c:383:seq_client_alloc_fid()) cli-cli-testfs-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5
Jun 28 06:01:52 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31995:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0001-osp-MDT0000 getting update log failed: rc = -5
Jun 28 06:01:52 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 31995:0:(lod_dev.c:419:lod_sub_recovery_thread()) Skipped 2 previous similar messages
Jun 28 06:01:54 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 28 10:40:34 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 28063:0:(ldlm_lib.c:2606:target_stop_recovery_thread()) testfs-MDT0000: Aborting recovery
Jun 28 10:40:34 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 27213:0:(client.c:1166:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8800471a8c00 x1571469774554080/t0(0) o5->testfs-OST0000-osc-MDT0000@10.14.83.62@tcp:28/4 lens 432/432 e 0 to 0 dl 0 ref 2 fl Rpc:N/0/ffffffff rc 0/-1
Jun 28 10:40:34 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 27215:0:(osp_precreate.c:903:osp_precreate_cleanup_orphans()) testfs-OST0001-osc-MDT0000: cannot cleanup orphans: rc = -5
Jun 28 10:40:34 lotus-58vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 27213:0:(client.c:1166:ptlrpc_import_delay_req()) Skipped 1 previous similar message
[root@lotus-58vm5 ~]#
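The `rc` values in these messages are negated Linux errno codes, so they can be decoded with the standard `errno` module. A minimal sketch for the codes seen above (-107, -115, -114, -5):

```python
import errno
import os

# Lustre kernel messages report failures as "rc = -N", where N is a
# Linux errno value. Map the codes from this log to their symbolic
# names and descriptions.
for rc in (-107, -115, -114, -5):
    name = errno.errorcode.get(-rc, "unknown")
    print(f"rc = {rc}: {name} ({os.strerror(-rc)})")
```

On Linux this reports -107 as ENOTCONN, -115 as EINPROGRESS, -114 as EALREADY, and -5 as EIO, which matches the pattern in the log: targets not yet connected or still starting up, followed by I/O errors once recovery is aborted.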