(partial) EasyBuild log for failed build of /tmp/eb-ls8zh15h/files_pr20809/g/GROMACS/GROMACS-2024.2-foss-2023b-CUDA-12.5.0.eb (PR(s) #20809)
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (1)
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
NVE simulation: will use the initial temperature of 68.810 K for
determining the Verlet buffer size
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.tpr, VERSION 2024.2 (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node3902.accelgor.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
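For reference, the "PP:0,PP:0" mapping above (both thread-MPI PP ranks sharing GPU 0) can also be requested explicitly on the mdrun command line. A minimal sketch, assuming a hypothetical input file topol.tpr; -ntmpi, -ntomp and -gputasks are standard mdrun options:

    # Hypothetical invocation (not taken from this log): 2 thread-MPI PP
    # ranks, 1 OpenMP thread each, both offloading short-ranged non-bonded
    # work to GPU 0 ("00" = one GPU id digit per PP task).
    gmx mdrun -ntmpi 2 -ntomp 1 -gputasks 00 -s topol.tpr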
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 68.8096 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
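The buffer determination above is driven by the verlet-buffer-tolerance .mdp option, which bounds the estimated pair-interaction energy drift; the 1e-06 kJ/mol/ps figure in the log is that setting. A minimal sketch, assuming a hypothetical grompp.mdp:

    # Hypothetical .mdp setting behind the buffer determination above;
    # mdrun sizes the pair-list buffer (rlist minus the cut-off) so the
    # estimated energy drift stays below this tolerance.
    cat >> grompp.mdp <<'EOF'
    verlet-buffer-tolerance = 1e-06
    EOF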
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was turned on during the run due to measured imbalance.
Average load imbalance: 9.5%.
The balanceable part of the MD step is 67%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 6.4%.
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
NOTE: 6.4 % of the available CPU time was lost due to load imbalance
in the domain decomposition.
You can consider manually changing the decomposition (option -dd);
e.g. by using fewer domains along the box dimension in which there is
considerable inhomogeneity in the simulated system.
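The manual decomposition change this note suggests maps onto mdrun's -dd option, which takes the number of domains along x, y and z. A hedged sketch with an illustrative grid and the same hypothetical topol.tpr as above:

    # Hypothetical example: force a 2x1x1 domain decomposition instead of
    # letting mdrun choose the grid (grid values are illustrative only).
    gmx mdrun -dd 2 1 1 -s topol.tpr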
NOTE: 44 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        8.462        4.292      197.2
                 (ns/day)    (hour/ns)
Performance:        0.342       70.129
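The "spent communicating energies" note above refers to the nst* intervals in the .mdp input; these test inputs deliberately use tiny values (nstenergy = 1), which forces a global reduction nearly every step. A minimal sketch of the kind of change the note suggests, against a hypothetical grompp.mdp with illustrative production-scale values:

    # Hypothetical .mdp tweak: compute and write energies less often so
    # ranks synchronize less frequently (values illustrative).
    cat >> grompp.mdp <<'EOF'
    nstcalcenergy = 100
    nstenergy     = 100
    EOF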
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (1)
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
NVE simulation: will use the initial temperature of 68.810 K for
determining the Verlet buffer size
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.tpr, VERSION 2024.2 (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node3902.accelgor.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 68.8096 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was turned on during the run due to measured imbalance.
Average load imbalance: 9.9%.
The balanceable part of the MD step is 73%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 7.3%.
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
NOTE: 7.3 % of the available CPU time was lost due to load imbalance
in the domain decomposition.
You can consider manually changing the decomposition (option -dd);
e.g. by using fewer domains along the box dimension in which there is
considerable inhomogeneity in the simulated system.
NOTE: 51 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        7.092        3.604      196.8
                 (ns/day)    (hour/ns)
Performance:        0.408       58.887
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.edr as single precision energy file
Reading energy frame 0 time 0.000
Reading energy frame 0 time 0.000
Reading energy frame 1 time 0.001
Reading energy frame 1 time 0.001
Reading energy frame 2 time 0.002
Reading energy frame 2 time 0.002
Reading energy frame 3 time 0.003
Reading energy frame 3 time 0.003
Reading energy frame 4 time 0.004
Reading energy frame 4 time 0.004
Reading energy frame 5 time 0.005
Reading energy frame 5 time 0.005
Reading energy frame 6 time 0.006
Reading energy frame 6 time 0.006
Reading energy frame 7 time 0.007
Reading energy frame 7 time 0.007
Reading energy frame 8 time 0.008
Reading energy frame 8 time 0.008
Reading energy frame 9 time 0.009
Reading energy frame 9 time 0.009
Reading energy frame 10 time 0.010
Reading energy frame 10 time 0.010
Reading energy frame 11 time 0.011
Reading energy frame 11 time 0.011
Reading energy frame 12 time 0.012
Reading energy frame 12 time 0.012
Reading energy frame 13 time 0.013
Reading energy frame 13 time 0.013
Reading energy frame 14 time 0.014
Reading energy frame 14 time 0.014
Reading energy frame 15 time 0.015
Reading energy frame 15 time 0.015
Reading energy frame 16 time 0.016
Reading energy frame 16 time 0.016
Last energy frame read 16 time 0.016
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
NVE simulation: will use the initial temperature of 68.810 K for
determining the Verlet buffer size
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.tpr, VERSION 2024.2 (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node3902.accelgor.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 68.8096 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was turned on during the run due to measured imbalance.
Average load imbalance: 9.9%.
The balanceable part of the MD step is 73%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 7.3%.
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
NOTE: 7.3 % of the available CPU time was lost due to load imbalance
in the domain decomposition.
You can consider manually changing the decomposition (option -dd);
e.g. by using fewer domains along the box dimension in which there is
considerable inhomogeneity in the simulated system.
NOTE: 52 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        7.382        3.743      197.2
                 (ns/day)    (hour/ns)
Performance:        0.392       61.159
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.edr as single precision energy file
Reading energy frame 0 time 0.000
Reading energy frame 0 time 0.000
Reading energy frame 1 time 0.004
Reading energy frame 1 time 0.001
Reading energy frame 2 time 0.002
Reading energy frame 3 time 0.003
Reading energy frame 4 time 0.004
Reading energy frame 2 time 0.008
Reading energy frame 5 time 0.005
Reading energy frame 6 time 0.006
Reading energy frame 7 time 0.007
Reading energy frame 8 time 0.008
Reading energy frame 3 time 0.012
Reading energy frame 9 time 0.009
Reading energy frame 10 time 0.010
Reading energy frame 11 time 0.011
Reading energy frame 12 time 0.012
Reading energy frame 4 time 0.016
Reading energy frame 13 time 0.013
Reading energy frame 14 time 0.014
Reading energy frame 15 time 0.015
Reading energy frame 16 time 0.016
Last energy frame read 4 time 0.016
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, consider
setting nstcomm equal to nstcalcenergy for less overhead
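The fix this note suggests is a one-line .mdp change: remove center-of-mass motion only on steps where energies are computed anyway. A hedged sketch against the same hypothetical grompp.mdp as above:

    # Hypothetical .mdp change: align COM-motion removal with energy
    # calculation, as the note recommends (values illustrative).
    cat >> grompp.mdp <<'EOF'
    nstcomm       = 100
    nstcalcenergy = 100
    EOF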
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
NVE simulation: will use the initial temperature of 68.810 K for
determining the Verlet buffer size
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.tpr, VERSION 2024.2 (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node3902.accelgor.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 68.8096 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was turned on during the run due to measured imbalance.
Average load imbalance: 9.9%.
The balanceable part of the MD step is 70%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 6.9%.
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
NOTE: 6.9 % of the available CPU time was lost due to load imbalance
in the domain decomposition.
You can consider manually changing the decomposition (option -dd);
e.g. by using fewer domains along the box dimension in which there is
considerable inhomogeneity in the simulated system.
NOTE: 47 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        7.712        3.917      196.9
                 (ns/day)    (hour/ns)
Performance:        0.375       64.002
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.edr as single precision energy file
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, consider
setting nstcomm equal to nstcalcenergy for less overhead
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
NVE simulation: will use the initial temperature of 68.810 K for
determining the Verlet buffer size
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.tpr, VERSION 2024.2 (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node3902.accelgor.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 68.8096 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was turned on during the run due to measured imbalance.
Average load imbalance: 9.9%.
The balanceable part of the MD step is 68%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 6.8%.
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
NOTE: 6.8 % of the available CPU time was lost due to load imbalance
in the domain decomposition.
You can consider manually changing the decomposition (option -dd);
e.g. by using fewer domains along the box dimension in which there is
considerable inhomogeneity in the simulated system.
NOTE: 47 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        7.696        3.909      196.9
                 (ns/day)    (hour/ns)
Performance:        0.376       63.871
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.edr as single precision energy file
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, consider
setting nstcomm equal to nstcalcenergy for less overhead
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
NVE simulation: will use the initial temperature of 68.810 K for
determining the Verlet buffer size
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.tpr, VERSION 2024.2 (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node3902.accelgor.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
Start 81: MdrunCoordinationConstraintsTests1Rank
81/89 Test #81: MdrunCoordinationConstraintsTests1Rank ....... Passed 27.29 sec
Start 82: MdrunCoordinationConstraintsTests2Ranks
82/89 Test #82: MdrunCoordinationConstraintsTests2Ranks ...... Passed 432.95 sec
Start 83: MdrunFEPTests
83/89 Test #83: MdrunFEPTests ................................ Passed 4.50 sec
Start 84: MdrunPullTests
84/89 Test #84: MdrunPullTests ............................... Passed 1.82 sec
Start 85: MdrunRotationTests
85/89 Test #85: MdrunRotationTests ........................... Passed 3.65 sec
Start 86: MdrunSimulatorComparison
86/89 Test #86: MdrunSimulatorComparison ..................... Passed 0.04 sec
Start 87: MdrunVirtualSiteTests
87/89 Test #87: MdrunVirtualSiteTests ........................ Passed 85.96 sec
Start 88: EnsembleHistogramPotentialPlugin.ForceCalc
88/89 Test #88: EnsembleHistogramPotentialPlugin.ForceCalc ... Passed 0.03 sec
Start 89: EnsembleBoundingPotentialPlugin.ForceCalc
89/89 Test #89: EnsembleBoundingPotentialPlugin.ForceCalc .... Passed 0.03 sec
99% tests passed, 1 tests failed out of 89
Label Time Summary:
GTest              = 3613.69 sec*proc (85 tests)
IntegrationTest    = 1599.39 sec*proc (28 tests)
MpiTest            = 3379.15 sec*proc (21 tests)
QuickGpuTest       =  893.11 sec*proc (20 tests)
SlowGpuTest        = 2682.33 sec*proc (14 tests)
SlowTest           = 1959.40 sec*proc (13 tests)
UnitTest           =   54.90 sec*proc (44 tests)
Total Test time (real) = 1556.44 sec
The following tests FAILED:
80 - MdrunCoordinationCouplingTests2Ranks (Timeout)
Errors while running CTest
make[3]: *** [CMakeFiles/run-ctest-nophys.dir/build.make:74: CMakeFiles/run-ctest-nophys] Error 8
make[3]: Leaving directory '/tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj'
make[2]: *** [CMakeFiles/Makefile2:3461: CMakeFiles/run-ctest-nophys.dir/all] Error 2
make[2]: Leaving directory '/tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj'
make[1]: *** [CMakeFiles/Makefile2:3497: CMakeFiles/check.dir/rule] Error 2
make[1]: Leaving directory '/tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj'
make: *** [Makefile:632: check] Error 2
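The only failure is a timeout in MdrunCoordinationCouplingTests2Ranks (test 80). A hedged way to reproduce just that test with a longer limit, using standard CTest options from the build directory named in the log:

    # Re-run only the timed-out test with a 1-hour limit and full output.
    cd /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj
    ctest -R MdrunCoordinationCouplingTests2Ranks --timeout 3600 --output-on-failure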
(at easybuild/easybuild-framework/easybuild/tools/run.py:682 in parse_cmd_output)
== 2024-06-26 13:53:51,704 build_log.py:267 INFO ... (took 28 mins 19 secs)
== 2024-06-26 13:53:51,706 config.py:700 DEBUG software install path as specified by 'installpath' and 'subdir_software': /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software
== 2024-06-26 13:53:51,706 filetools.py:2013 INFO Removing lock /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/.locks/_user_gent_400_vsc40023_eb_scratch_RHEL8_zen3-ampere-ib_software_GROMACS_2024.2-foss-2023b-CUDA-12.5.0.lock...
== 2024-06-26 13:53:51,712 filetools.py:383 INFO Path /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/.locks/_user_gent_400_vsc40023_eb_scratch_RHEL8_zen3-ampere-ib_software_GROMACS_2024.2-foss-2023b-CUDA-12.5.0.lock successfully removed.
== 2024-06-26 13:53:51,712 filetools.py:2017 INFO Lock removed: /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/.locks/_user_gent_400_vsc40023_eb_scratch_RHEL8_zen3-ampere-ib_software_GROMACS_2024.2-foss-2023b-CUDA-12.5.0.lock
== 2024-06-26 13:53:51,712 easyblock.py:4285 WARNING build failed (first 300 chars): cmd "make check -j 48 " exited with exit code 2 and output:
/kyukon/scratch/gent/vo/000/gvo00002/vsc40023/easybuild_REGTEST/RHEL8/zen3-ampere-ib/software/CMake/3.27.6-GCCcore-13.2.0/bin/cmake -P /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/CMakeFiles/VerifyGlobs.
== 2024-06-26 13:53:51,712 easyblock.py:328 INFO Closing log for application name GROMACS version 2024.2
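Since the compilation itself succeeded and only the test step failed, a retry does not have to start from scratch. A hedged sketch using standard EasyBuild options (both exist in recent EasyBuild 4.x releases); the easyconfig name is taken from the log header:

    # Tolerate a failing test step (reported as a warning instead):
    eb GROMACS-2024.2-foss-2023b-CUDA-12.5.0.eb --rebuild --ignore-test-failure
    # Or skip the test step entirely:
    eb GROMACS-2024.2-foss-2023b-CUDA-12.5.0.eb --rebuild --skip-test-step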