slurm-stack_slurmctld.0.x6dormiehq63@142dev | ---> Starting the MUNGE Authentication service (munged) ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | * Starting MUNGE munged
slurm-stack_slurmctld.0.x6dormiehq63@142dev | ...done.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | ---> Waiting for slurmdbd to become active before starting slurmctld ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | -- slurmdbd is not available.  Sleeping ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | -- slurmdbd is not available.  Sleeping ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | -- slurmdbd is not available.  Sleeping ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | -- slurmdbd is not available.  Sleeping ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | -- slurmdbd is not available.  Sleeping ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | -- slurmdbd is now active ...
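
The "--->" and "--" lines above come from the container's entrypoint script rather than from slurmctld itself: it polls until slurmdbd accepts connections before launching the controller. A minimal sketch of such a readiness loop, assuming bash, a service name of slurmdbd, and the default slurmdbd port 6819 (the image's actual script may differ):

    # Block until slurmdbd accepts TCP connections, then start the controller.
    until (echo > /dev/tcp/slurmdbd/6819) 2>/dev/null; do
        echo "-- slurmdbd is not available.  Sleeping ..."
        sleep 2
    done
    echo "-- slurmdbd is now active ..."
    exec /usr/sbin/slurmctld -D    # path assumed; -D keeps slurmctld in the foreground
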
slurm-stack_slurmctld.0.x6dormiehq63@142dev | ---> Starting the Slurm Controller Daemon (slurmctld) ...
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Ignoring obsolete FastSchedule=1 option. Please remove from your configuration.
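
As the error itself says, FastSchedule is obsolete in Slurm 21.08 and the option is ignored; the fix is simply to delete the line from the config this container mounts (the path appears in the "Reading slurm.conf" line further down):

    # /etc/slurm/slurm.conf -- remove this line to silence the error:
    FastSchedule=1
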
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug:  slurmctld log levels: stderr=debug2 logfile=debug2 syslog=quiet
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug:  Log file re-opened
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: Not running as root. Can't drop supplementary groups
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: creating clustername file: /var/lib/slurmd/clustername
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Configured MailProg is invalid
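
slurmctld verifies at startup that the MailProg binary exists and is executable; slim container images usually lack the default /bin/mail, which triggers this error. If job mail is not needed, one common workaround (an assumption, not necessarily what this stack uses) is to point it at a no-op:

    # /etc/slurm/slurm.conf
    MailProg=/bin/true
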
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: slurmscriptd: Got ack from slurmctld, initialization successful
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: slurmctld: slurmscriptd fork()'d and initialized.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: _slurmscriptd_mainloop: started
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: slurmctld version 21.08.5 started on cluster linux
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: _slurmctld_listener_thread: started listening to slurmscriptd
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: cred/munge: init: Munge credential signature plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: auth/munge: init: Munge authentication plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: select/cons_tres: common_init: select/cons_tres loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: select/linear: init: Linear node selection plugin loaded with argument 17
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: select/cons_res: common_init: select/cons_res loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: select/cray_aries: init: Cray/Aries node selection plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: preempt/none: init: preempt/none loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: acct_gather_energy/none: init: AcctGatherEnergy NONE plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: acct_gather_profile/none: init: AcctGatherProfile NONE plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: acct_gather_interconnect/none: init: AcctGatherInterconnect NONE plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: acct_gather_filesystem/none: init: AcctGatherFilesystem NONE plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: No acct_gather.conf file (/etc/slurm/acct_gather.conf)
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: jobacct_gather/linux: init: Job accounting gather LINUX plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: ext_sensors/none: init: ExtSensors NONE plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: accounting_storage/slurmdbd: init: Accounting storage SLURMDBD plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: accounting_storage/slurmdbd: _connect_dbd_conn: Sent PersistInit msg
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: accounting_storage/slurmdbd: clusteracct_storage_p_register_ctld: Registering slurmctld at port 6817 with slurmdbd
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: assoc 2(root, root) has direct parent of 1(root, (null))
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: user root default acct is root
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: association rec id : 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: acct : root
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: cluster : linux
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: RawShares : 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Default QOS : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpTRESMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpTRESRunMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpTRES : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpJobsAccrue : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpSubmitJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpWall : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESRunMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESPerJob : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESPerNode : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxJobsAccrue : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MinPrioThresh : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxSubmitJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxWall : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Qos : normal
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: NormalizedShares : 18446744073709551616.000000
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: UsedJobs : 0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: RawUsage : 0.000000
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: association rec id : 2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: acct : root
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: cluster : linux
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: RawShares : 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Default QOS : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpTRESMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpTRESRunMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpTRES : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpJobsAccrue : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpSubmitJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: GrpWall : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESRunMins : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESPerJob : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxTRESPerNode : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxJobsAccrue : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MinPrioThresh : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxSubmitJobs : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: MaxWall : NONE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Qos : normal
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: User : root(0)
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: NormalizedShares : 18446744073709551616.000000
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: UsedJobs : 0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: RawUsage : 0.000000
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/assoc_usage`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: No Assoc usage file (/var/lib/slurmd/assoc_usage) to recover
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/qos_usage`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: No Qos usage file (/var/lib/slurmd/qos_usage) to recover
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: switch/none: init: switch NONE plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: switch Cray/Aries plugin loaded.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: Reading slurm.conf file: /etc/slurm/slurm.conf
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: NodeNames=c[1-2] setting Sockets=Boards(1)
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Ignoring obsolete FastSchedule=1 option. Please remove from your configuration.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: No memory enforcing mechanism configured.
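
"No memory enforcing mechanism configured." means jobs can exceed their requested memory unchecked: memory is neither a consumable resource for the selector nor constrained through cgroups. One way to enable enforcement is sketched below; the parameter names are real Slurm options, but the values are assumptions and untested against this stack:

    # /etc/slurm/slurm.conf
    SelectTypeParameters=CR_Core_Memory
    TaskPlugin=task/cgroup

    # /etc/slurm/cgroup.conf
    ConstrainRAMSpace=yes
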
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: topology/none: init: topology NONE plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: No DownNodes
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurmd/last_config_lite`, No such file or directory
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: No last_config_lite file (/var/lib/slurmd/last_config_lite) to recover
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug:  slurmctld log levels: stderr=debug2 logfile=debug2 syslog=quiet
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug:  Log file re-opened
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: sched: Backfill scheduler plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: route/default: init: route default plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/node_state`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Could not open node state file /var/lib/slurmd/node_state: Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: NOTE: Trying backup state save file. Information may be lost!
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurmd/node_state.old`, No such file or directory
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: No node state file (/var/lib/slurmd/node_state.old) to recover
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/job_state`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Could not open job state file /var/lib/slurmd/job_state: Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: NOTE: Trying backup state save file. Jobs may be lost!
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurmd/job_state.old`, No such file or directory
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: No job state file (/var/lib/slurmd/job_state.old) to recover
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: select/cons_res: part_data_create_array: select/cons_res: preparing for 1 partitions
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: Updating partition uid access list
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/resv_state`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Could not open reservation state file /var/lib/slurmd/resv_state: Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: NOTE: Trying backup state save file. Reservations may be lost
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurmd/resv_state.old`, No such file or directory
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: No reservation state file (/var/lib/slurmd/resv_state.old) to recover
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/trigger_state`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Could not open trigger state file /var/lib/slurmd/trigger_state: Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: NOTE: Trying backup state save file. Triggers may be lost!
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurmd/trigger_state.old`, No such file or directory
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: No trigger state file (/var/lib/slurmd/trigger_state.old) to recover
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: read_slurm_conf: backup_controller not specified
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: Reinitializing job accounting state
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: accounting_storage/slurmdbd: acct_storage_p_flush_jobs_on_cluster: Ending any jobs in accounting that were running when controller went down on
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: select/cons_res: select_p_reconfigure: select/cons_res: reconfigure
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: select/cons_res: part_data_create_array: select/cons_res: preparing for 1 partitions
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: Running as primary controller
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: No backup controllers, not launching heartbeat.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: accounting_storage/slurmdbd: clusteracct_storage_p_cluster_tres: Sending tres '1=2,2=2000,3=0,4=2,5=2,6=0,7=0,8=0' for cluster
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: send_all_to_accounting: called ACCOUNTING_FIRST_REG
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/fed_mgr_state`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: No fed_mgr state file (/var/lib/slurmd/fed_mgr_state) to recover
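
All of the create_mmap_buf failures above share one likely cause: the files under StateSaveLocation (/var/lib/slurmd) are zero bytes long, and mmap(2) rejects a zero-length mapping with EINVAL ("Invalid argument"), after which the missing *.old backups are also reported. On a first boot with a fresh volume this is expected noise; after a restart it would mean node, job, reservation, and trigger state were lost. A quick way to confirm, assuming the service name used in this stack:

    # Zero-byte files here would explain the EINVAL from mmap.
    docker exec $(docker ps -qf name=slurmctld) ls -l /var/lib/slurmd
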
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: priority/basic: init: Priority BASIC plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: No parameter for mcs plugin, default values set
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: mcs: MCSParameters = (null). ondemand set.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: mcs/none: init: mcs none plugin loaded
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: power_save module disabled, SuspendTime < 0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: power_save mode not enabled
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: slurmctld listening on 0.0.0.0:6817
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: slurm_recv_timeout at 0 of 4, recv zero bytes
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: slurm_receive_msg [10.0.39.4:57264]: Zero Bytes were transmitted or received
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: MESSAGE_NODE_REGISTRATION_STATUS from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: validate_node_specs: node c2 registered with 0 jobs
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_node_registration complete for c2 usec=27
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: slurm_recv_timeout at 0 of 4, recv zero bytes
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: slurm_receive_msg [10.0.39.6:49188]: Zero Bytes were transmitted or received
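
"Zero Bytes were transmitted or received" is logged when a peer opens a TCP connection to the controller and closes it without sending a Slurm message. Here it appears immediately before each compute node's registration, so it is plausibly a connectivity probe from the node containers or the overlay network rather than a fault. Any bare connect-and-close reproduces it, e.g. (assuming nc is available and slurmctld is the service hostname):

    nc -z slurmctld 6817    # connect and close; slurmctld logs the zero-bytes error
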
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: MESSAGE_NODE_REGISTRATION_STATUS from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: validate_node_specs: node c1 registered with 0 jobs
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_node_registration complete for c1 usec=18
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: SchedulerParameters=default_queue_depth=100,max_rpc_cnt=0,max_sched_time=2,partition_job_depth=0,sched_max_job_start=0,sched_min_interval=2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched: Running job scheduler for default depth.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_RESOURCE_ALLOCATION from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: found 2 usable nodes from config containing c[1-2]
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: sched: JobId=1 allocated resources: NodeList=c1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: sched: _slurm_rpc_allocate_resources JobId=1 NodeList=c1 usec=217
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to mmap file `/var/lib/slurmd/job_state`, Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: Could not open job state file /var/lib/slurmd/job_state: Invalid argument
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: error: NOTE: Trying backup state save file. Jobs may be lost!
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurmd/job_state.old`, No such file or directory
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: No job state file (/var/lib/slurmd/job_state.old) found
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_READY from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_job_ready(1)=7 usec=1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_HET_JOB_ALLOC_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: _slurm_rpc_het_job_alloc_info: JobId=1 NodeList=c1 usec=1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_STEP_CREATE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: laying out the 1 tasks on 1 hosts c1 dist 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _group_cache_lookup_internal: no entry found for root
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_RESOURCE_ALLOCATION from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: found 2 usable nodes from config containing c[1-2]
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: sched: JobId=2 allocated resources: NodeList=c2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: sched: _slurm_rpc_allocate_resources JobId=2 NodeList=c2 usec=193
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_READY from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_job_ready(2)=7 usec=1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_HET_JOB_ALLOC_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: _slurm_rpc_het_job_alloc_info: JobId=2 NodeList=c2 usec=1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_STEP_CREATE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: laying out the 1 tasks on 1 hosts c2 dist 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _group_cache_lookup_internal: found valid entry for root
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_RESOURCE_ALLOCATION from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: found 2 usable nodes from config containing c[1-2]
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: sched: _slurm_rpc_allocate_resources JobId=3 NodeList=(null) usec=162
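
NodeList=(null) means JobId=3 could not be allocated immediately: the cluster's two single-CPU nodes (TRES cpu=2 above) are both held by JobId=1 and JobId=2, so the job is left pending. It is picked up again further down, first by the backfill pass and then allocated c1 once JobId=1 completes. While it waits, the pending state and reason are visible with:

    squeue -j 3 -o "%i %t %R"    # job id, state (PD), reason/nodelist
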
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched: Running job scheduler for default depth.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: found 2 usable nodes from config containing c[1-2]
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: beginning
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: 1 jobs to backfill
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: sched/backfill: _attempt_backfill: entering _try_sched for JobId=3.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Testing job time limits and checkpoints
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_STEP_COMPLETE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: full switch release for JobId=2 StepId=0, nodes c2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=5
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_STEP_COMPLETE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: full switch release for JobId=1 StepId=0, nodes c1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_COMPLETE_JOB_ALLOCATION from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: _job_complete: JobId=1 WEXITSTATUS 0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: _job_complete: JobId=1 done
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_complete_job_allocation: JobId=1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_JOB_COMPLETE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type REQUEST_TERMINATE_JOB
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Tree head got back 0 looking for 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Tree head got back 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: node_did_resp c1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched: Running job scheduler for default depth.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: found 2 usable nodes from config containing c[1-2]
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: select/cons_res: select_p_job_test: select_p_job_test for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: sched: Allocate JobId=3 NodeList=c1 #CPUs=1 Partition=normal
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type RESPONSE_RESOURCE_ALLOCATION
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_READY from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_job_ready(3)=7 usec=1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_HET_JOB_ALLOC_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: _slurm_rpc_het_job_alloc_info: JobId=3 NodeList=c1 usec=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_STEP_CREATE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: laying out the 1 tasks on 1 hosts c1 dist 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _group_cache_lookup_internal: found valid entry for root
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_STEP_COMPLETE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: full switch release for JobId=3 StepId=0, nodes c1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: beginning
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: no jobs to backfill
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Testing job time limits and checkpoints
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Performing purge of old job records
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched: Running job scheduler for full queue.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=6
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Testing job time limits and checkpoints
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: Time limit exhausted for JobId=2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_JOB_COMPLETE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type REQUEST_KILL_TIMELIMIT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Tree head got back 0 looking for 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Tree head got back 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: MESSAGE_EPILOG_COMPLETE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_epilog_complete: JobId=2 Node=c2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched: Running job scheduler for default depth.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: node_did_resp c2
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: beginning
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: no jobs to backfill
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=21
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=5
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Testing job time limits and checkpoints
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: Time limit exhausted for JobId=3
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Performing purge of old job records
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_TIMEOUT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched: Running job scheduler for full queue.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type SRUN_JOB_COMPLETE
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Spawning RPC agent for msg_type REQUEST_KILL_TIMELIMIT
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Tree head got back 0 looking for 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Tree head got back 1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: MESSAGE_EPILOG_COMPLETE from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_epilog_complete: JobId=3 Node=c1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched: Running job scheduler for default depth.
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: node_did_resp c1
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: beginning
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug: sched/backfill: _attempt_backfill: no jobs to backfill
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_JOB_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: Processing RPC: REQUEST_PARTITION_INFO from UID=0
slurm-stack_slurmctld.0.x6dormiehq63@142dev | slurmctld: debug2: _slurm_rpc_dump_partitions, size=199 usec=4
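
Both remaining jobs end with "Time limit exhausted" rather than a normal completion: they ran into the job or partition time limit, which triggers the SRUN_TIMEOUT, SRUN_JOB_COMPLETE, and REQUEST_KILL_TIMELIMIT agents seen above. When that is not intended, request an explicit limit at submission time (values here are illustrative):

    srun --time=00:05:00 --nodes=1 hostname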