@tanabarr
Created June 30, 2017 10:33
Kernel messages showing LustreErrors related to a failed DNE mount of an MDT
Jun 30 02:06:44 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro
Jun 30 02:06:44 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sde): mounted filesystem with ordered data mode. Opts: errors=remount-ro
Jun 30 02:06:44 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sde): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
Jun 30 02:06:44 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdc): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
Jun 30 02:06:44 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdb): mounted filesystem with ordered data mode. Opts: errors=remount-ro
Jun 30 02:06:44 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: srv-testfs-MDT0001: No data found on store. Initialize space
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-MDT0001: new disk, initializing
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdb): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-MDT0001: Imperative Recovery not enabled, recovery window 300-900
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6427:0:(fid_handler.c:329:__seq_server_alloc_meta()) srv-testfs-MDT0001: Allocated super-sequence failed: rc = -115
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6427:0:(fid_request.c:227:seq_client_alloc_seq()) cli-testfs-MDT0001: Can't allocate new meta-sequence,rc -115
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6427:0:(fid_request.c:383:seq_client_alloc_fid()) cli-testfs-MDT0001: Can't allocate new sequence: rc = -115
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6427:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0001-osd getting update log failed: rc = -115
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-MDT0001: Connection restored to bab0865e-9b72-a101-f03c-9934d3a8b017 (at 0@lo)
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: Failing over testfs-MDT0002
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-MDT0002: Not available for connect from 0@lo (stopping)
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: ctl-testfs-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6403:0:(fid_handler.c:329:__seq_server_alloc_meta()) srv-testfs-MDT0001: Allocated super-sequence failed: rc = -115
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6403:0:(fid_handler.c:329:__seq_server_alloc_meta()) Skipped 2 previous similar messages
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 11-0: testfs-MDT0001-osp-MDT0000: operation seq_query to node 0@lo failed: rc = -19
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-MDT0001-osp-MDT0000: Connection to testfs-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Jun 30 02:06:45 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: server umount testfs-MDT0001 complete
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: Failing over testfs-MDT0000
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: Skipped 1 previous similar message
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6460:0:(fid_request.c:227:seq_client_alloc_seq()) cli-cli-testfs-MDT0001-osp-MDT0000: Can't allocate new meta-sequence,rc -5
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6460:0:(fid_request.c:227:seq_client_alloc_seq()) Skipped 1 previous similar message
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6461:0:(osp_object.c:582:osp_attr_get()) testfs-MDT0002-osp-MDT0000:osp_attr_get update error [0x200000009:0x2:0x0]: rc = -5
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6460:0:(fid_request.c:383:seq_client_alloc_fid()) cli-cli-testfs-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6460:0:(fid_request.c:383:seq_client_alloc_fid()) Skipped 1 previous similar message
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6460:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0001-osp-MDT0000 getting update log failed: rc = -5
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6460:0:(lod_dev.c:419:lod_sub_recovery_thread()) Skipped 1 previous similar message
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6461:0:(lod_sub_object.c:932:lod_sub_prep_llog()) testfs-MDT0000-mdtlov: can't get id from catalogs: rc = -5
Jun 30 02:06:46 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: server umount testfs-MDT0000 complete
Jun 30 02:06:48 lotus-52vm5.lotus.hpdd.lab.intel.com firewalld: ERROR: ALREADY_ENABLED: '988:tcp' already in 'public'
Jun 30 02:06:50 lotus-52vm5.lotus.hpdd.lab.intel.com firewalld: ERROR: ALREADY_ENABLED: '988:tcp' already in 'public'
Jun 30 02:06:50 lotus-52vm5.lotus.hpdd.lab.intel.com firewalld: ERROR: ALREADY_ENABLED: 988:tcp
Jun 30 02:06:50 lotus-52vm5.lotus.hpdd.lab.intel.com firewalld: ERROR: ALREADY_ENABLED: 988:tcp
Jun 30 02:06:55 lotus-52vm5.lotus.hpdd.lab.intel.com stonith-ng[5053]: notice: On loss of CCM Quorum: Ignore
Jun 30 02:06:55 lotus-52vm5.lotus.hpdd.lab.intel.com crmd[5057]: notice: Result of probe operation for testfs-MDT0001_822568 on lotus-52vm5.lotus.hpdd.lab.intel.com: 7 (not running)
Jun 30 02:06:56 lotus-52vm5.lotus.hpdd.lab.intel.com stonith-ng[5053]: notice: On loss of CCM Quorum: Ignore
Jun 30 02:06:56 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6442:0:(fid_request.c:227:seq_client_alloc_seq()) cli-cli-testfs-MDT0001-osp-MDT0002: Can't allocate new meta-sequence,rc -5
Jun 30 02:06:56 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6442:0:(fid_request.c:383:seq_client_alloc_fid()) cli-cli-testfs-MDT0001-osp-MDT0002: Can't allocate new sequence: rc = -5
Jun 30 02:06:56 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6442:0:(lod_dev.c:419:lod_sub_recovery_thread()) testfs-MDT0001-osp-MDT0002 getting update log failed: rc = -5
Jun 30 02:06:56 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 6442:0:(lod_dev.c:419:lod_sub_recovery_thread()) Skipped 1 previous similar message
Jun 30 02:06:56 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: server umount testfs-MDT0002 complete
Jun 30 02:06:56 lotus-52vm5.lotus.hpdd.lab.intel.com crmd[5057]: notice: Result of probe operation for testfs-MDT0000_015926 on lotus-52vm5.lotus.hpdd.lab.intel.com: 7 (not running)
Jun 30 02:06:59 lotus-52vm5.lotus.hpdd.lab.intel.com stonith-ng[5053]: notice: On loss of CCM Quorum: Ignore
Jun 30 02:06:59 lotus-52vm5.lotus.hpdd.lab.intel.com firewalld: ERROR: ALREADY_ENABLED: '988:tcp' already in 'public'
Jun 30 02:07:00 lotus-52vm5.lotus.hpdd.lab.intel.com stonith-ng[5053]: notice: On loss of CCM Quorum: Ignore
Jun 30 02:07:00 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sde): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
Jun 30 02:07:00 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: LustreError: 137-5: testfs-MDT0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Jun 30 02:07:01 lotus-52vm5.lotus.hpdd.lab.intel.com firewalld: ERROR: ALREADY_ENABLED: 988:tcp
Jun 30 02:07:01 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-MDT0001: Imperative Recovery not enabled, recovery window 300-900
Jun 30 02:07:01 lotus-52vm5.lotus.hpdd.lab.intel.com kernel: Lustre: Skipped 2 previous similar messages
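The `rc` values in these LustreError messages (-115, -19, -5) are negated Linux errno codes. A minimal sketch for decoding them with the standard library (this is a generic errno lookup, not a Lustre tool):

```python
import errno
import os

# LustreError "rc" values are negated Linux errno codes; decode each one
# seen in the log above into its symbolic name and message.
for rc in (-115, -19, -5):
    code = -rc
    print(f"rc = {rc}: {errno.errorcode[code]} ({os.strerror(code)})")
```

On Linux this maps -115 to EINPROGRESS, -19 to ENODEV, and -5 to EIO, which matches the pattern here: the super-sequence allocation is still in progress (-115), the target goes away during failover (-19), and subsequent update-log reads fail with I/O errors (-5).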