(partial) EasyBuild log for failed build of /tmp/eb-dyvbm7q5/files_pr20809/g/GROMACS/GROMACS-2024.2-foss-2023b-CUDA-12.5.0.eb (PR(s) #20809)
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was turned on during the run due to measured imbalance.
Average load imbalance: 10.2%.
The balanceable part of the MD step is 70%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 7.1%.
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
NOTE: 7.1 % of the available CPU time was lost due to load imbalance
in the domain decomposition.
You can consider manually changing the decomposition (option -dd);
e.g. by using fewer domains along the box dimension in which there is
considerable inhomogeneity in the simulated system.
NOTE: 49 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
Core t (s) Wall t (s) (%)
Time: 7.510 3.808 197.2
(ns/day) (hour/ns)
Performance: 0.386 62.229
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.edr as single precision energy file
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, consider
setting nstcomm equal to nstcalcenergy for less overhead
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_input.mdp]:
NVE simulation: will use the initial temperature of 68.810 K for
determining the Verlet buffer size
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.tpr, VERSION 2024.2 (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node3306.joltik.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 68.8096 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was turned on during the run due to measured imbalance.
Average load imbalance: 9.7%.
The balanceable part of the MD step is 68%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 6.6%.
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
NOTE: 6.6 % of the available CPU time was lost due to load imbalance
in the domain decomposition.
You can consider manually changing the decomposition (option -dd);
e.g. by using fewer domains along the box dimension in which there is
considerable inhomogeneity in the simulated system.
NOTE: 49 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
Core t (s) Wall t (s) (%)
Time: 7.382 3.745 197.1
(ns/day) (hour/ns)
Performance: 0.392 61.191
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_18.edr as single precision energy file
[ OK ] PropagatorsWithCoupling/PeriodicActionsTest.PeriodicActionsAgreeWithReference/18 (30867 ms)
[ RUN ] PropagatorsWithCoupling/PeriodicActionsTest.PeriodicActionsAgreeWithReference/19
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (1)
Number of degrees of freedom in T-Coupling group System is 33.00
There were 2 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.tpr, VERSION 2024.2 (single precision)
Changing nstlist from 8 to 100, rlist from 0.703 to 0.751
On host node3306.joltik.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 80 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
NOTE: 51 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
Core t (s) Wall t (s) (%)
Time: 7.072 3.588 197.1
(ns/day) (hour/ns)
Performance: 0.409 58.626
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (1)
Number of degrees of freedom in T-Coupling group System is 33.00
There were 2 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.tpr, VERSION 2024.2 (single precision)
Changing nstlist from 8 to 100, rlist from 0.703 to 0.751
On host node3306.joltik.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 80 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
NOTE: 56 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
Core t (s) Wall t (s) (%)
Time: 6.294 3.202 196.6
(ns/day) (hour/ns)
Performance: 0.459 52.319
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.edr as single precision energy file
Reading energy frame 0 time 0.000
Reading energy frame 0 time 0.000
Reading energy frame 1 time 0.001
Reading energy frame 1 time 0.001
Reading energy frame 2 time 0.002
Reading energy frame 2 time 0.002
Reading energy frame 3 time 0.003
Reading energy frame 3 time 0.003
Reading energy frame 4 time 0.004
Reading energy frame 4 time 0.004
Reading energy frame 5 time 0.005
Reading energy frame 5 time 0.005
Reading energy frame 6 time 0.006
Reading energy frame 6 time 0.006
Reading energy frame 7 time 0.007
Reading energy frame 7 time 0.007
Reading energy frame 8 time 0.008
Reading energy frame 8 time 0.008
Reading energy frame 9 time 0.009
Reading energy frame 9 time 0.009
Reading energy frame 10 time 0.010
Reading energy frame 10 time 0.010
Reading energy frame 11 time 0.011
Reading energy frame 11 time 0.011
Reading energy frame 12 time 0.012
Reading energy frame 12 time 0.012
Reading energy frame 13 time 0.013
Reading energy frame 13 time 0.013
Reading energy frame 14 time 0.014
Reading energy frame 14 time 0.014
Reading energy frame 15 time 0.015
Reading energy frame 15 time 0.015
Reading energy frame 16 time 0.016
Reading energy frame 16 time 0.016
Last energy frame read 16 time 0.016
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
COM removal frequency is set to (5).
Other settings require a global communication frequency of 2.
Note that this will require additional global communication steps,
which will reduce performance when using multiple ranks.
Consider setting nstcomm to a multiple of 2.
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.tpr, VERSION 2024.2 (single precision)
Changing nstlist from 8 to 100, rlist from 0.703 to 0.751
On host node3306.joltik.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 80 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
NOTE: 45 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
Core t (s) Wall t (s) (%)
Time: 5.050 2.577 196.0
(ns/day) (hour/ns)
Performance: 0.570 42.106
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.edr as single precision energy file
Reading energy frame 0 time 0.000
Reading energy frame 0 time 0.000
Reading energy frame 1 time 0.004
Reading energy frame 1 time 0.001
Reading energy frame 2 time 0.002
Reading energy frame 3 time 0.003
Reading energy frame 4 time 0.004
Reading energy frame 2 time 0.008
Reading energy frame 5 time 0.005
Reading energy frame 6 time 0.006
Reading energy frame 7 time 0.007
Reading energy frame 8 time 0.008
Reading energy frame 3 time 0.012
Reading energy frame 9 time 0.009
Reading energy frame 10 time 0.010
Reading energy frame 11 time 0.011
Reading energy frame 12 time 0.012
Reading energy frame 4 time 0.016
Reading energy frame 13 time 0.013
Reading energy frame 14 time 0.014
Reading energy frame 15 time 0.015
Reading energy frame 16 time 0.016
Last energy frame read 4 time 0.016
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, consider
setting nstcomm equal to nstcalcenergy for less overhead
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
COM removal frequency is set to (5).
Other settings require a global communication frequency of 2.
Note that this will require additional global communication steps,
which will reduce performance when using multiple ranks.
Consider setting nstcomm to a multiple of 2.
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.tpr, VERSION 2024.2 (single precision)
Changing nstlist from 8 to 100, rlist from 0.703 to 0.751
On host node3306.joltik.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 80 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
NOTE: 44 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
Core t (s) Wall t (s) (%)
Time: 5.284 2.690 196.4
(ns/day) (hour/ns)
Performance: 0.546 43.953
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_reference.edr as single precision energy file
Opened /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.edr as single precision energy file
NOTE 1 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, consider
setting nstcomm equal to nstcalcenergy for less overhead
Number of degrees of freedom in T-Coupling group System is 33.00
NOTE 3 [file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19_input.mdp]:
COM removal frequency is set to (5).
Other settings require a global communication frequency of 2.
Note that this will require additional global communication steps,
which will reduce performance when using multiple ranks.
Consider setting nstcomm to a multiple of 2.
There were 3 NOTEs
Reading file /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithCoupling_PeriodicActionsTest_PeriodicActionsAgreeWithReference_19.tpr, VERSION 2024.2 (single precision)
Changing nstlist from 8 to 100, rlist from 0.703 to 0.751
On host node3306.joltik.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Non-default thread affinity set, disabling internal thread affinity
Using 1 OpenMP thread per tMPI thread
starting mdrun 'Argon'
16 steps, 0.0 ps.
Generated 1 of the 1 non-bonded parameter combinations
Excluding 1 bonded neighbours molecule type 'Argon'
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 80 K
Calculated rlist for 1x1 atom pair-list as 0.703 nm, buffer size 0.003 nm
Set rlist, assuming 4x4 atom pair-list, to 0.703 nm, buffer size 0.003 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Start 81: MdrunCoordinationConstraintsTests1Rank
81/89 Test #81: MdrunCoordinationConstraintsTests1Rank ....... Passed 22.15 sec
Start 82: MdrunCoordinationConstraintsTests2Ranks
82/89 Test #82: MdrunCoordinationConstraintsTests2Ranks ...... Passed 420.24 sec
Start 83: MdrunFEPTests
83/89 Test #83: MdrunFEPTests ................................ Passed 2.96 sec
Start 84: MdrunPullTests
84/89 Test #84: MdrunPullTests ............................... Passed 1.39 sec
Start 85: MdrunRotationTests
85/89 Test #85: MdrunRotationTests ........................... Passed 1.88 sec
Start 86: MdrunSimulatorComparison
86/89 Test #86: MdrunSimulatorComparison ..................... Passed 0.05 sec
Start 87: MdrunVirtualSiteTests
87/89 Test #87: MdrunVirtualSiteTests ........................ Passed 80.13 sec
Start 88: EnsembleHistogramPotentialPlugin.ForceCalc
88/89 Test #88: EnsembleHistogramPotentialPlugin.ForceCalc ... Passed 0.04 sec
Start 89: EnsembleBoundingPotentialPlugin.ForceCalc
89/89 Test #89: EnsembleBoundingPotentialPlugin.ForceCalc .... Passed 0.04 sec
99% tests passed, 1 tests failed out of 89
Label Time Summary:
GTest = 3332.92 sec*proc (85 tests)
IntegrationTest = 1393.29 sec*proc (28 tests)
MpiTest = 3220.71 sec*proc (21 tests)
QuickGpuTest = 770.40 sec*proc (20 tests)
SlowGpuTest = 2527.79 sec*proc (14 tests)
SlowTest = 1906.61 sec*proc (13 tests)
UnitTest = 33.02 sec*proc (44 tests)
Total Test time (real) = 1415.32 sec
The following tests FAILED:
80 - MdrunCoordinationCouplingTests2Ranks (Timeout)
Errors while running CTest
make[3]: *** [CMakeFiles/run-ctest-nophys.dir/build.make:74: CMakeFiles/run-ctest-nophys] Error 8
make[3]: Leaving directory '/tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj'
make[2]: *** [CMakeFiles/Makefile2:3461: CMakeFiles/run-ctest-nophys.dir/all] Error 2
make[2]: Leaving directory '/tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj'
make[1]: *** [CMakeFiles/Makefile2:3497: CMakeFiles/check.dir/rule] Error 2
make[1]: Leaving directory '/tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj'
make: *** [Makefile:632: check] Error 2
(at easybuild/easybuild-framework/easybuild/tools/run.py:682 in parse_cmd_output)
== 2024-06-12 23:52:01,958 build_log.py:267 INFO ... (took 27 mins 10 secs)
== 2024-06-12 23:52:01,961 config.py:700 DEBUG software install path as specified by 'installpath' and 'subdir_software': /user/gent/400/vsc40023/eb_scratch/RHEL8/cascadelake-volta-ib/software
== 2024-06-12 23:52:01,961 filetools.py:2013 INFO Removing lock /user/gent/400/vsc40023/eb_scratch/RHEL8/cascadelake-volta-ib/software/.locks/_user_gent_400_vsc40023_eb_scratch_RHEL8_cascadelake-volta-ib_software_GROMACS_2024.2-foss-2023b-CUDA-12.5.0.lock...
== 2024-06-12 23:52:01,967 filetools.py:383 INFO Path /user/gent/400/vsc40023/eb_scratch/RHEL8/cascadelake-volta-ib/software/.locks/_user_gent_400_vsc40023_eb_scratch_RHEL8_cascadelake-volta-ib_software_GROMACS_2024.2-foss-2023b-CUDA-12.5.0.lock successfully removed.
== 2024-06-12 23:52:01,968 filetools.py:2017 INFO Lock removed: /user/gent/400/vsc40023/eb_scratch/RHEL8/cascadelake-volta-ib/software/.locks/_user_gent_400_vsc40023_eb_scratch_RHEL8_cascadelake-volta-ib_software_GROMACS_2024.2-foss-2023b-CUDA-12.5.0.lock
== 2024-06-12 23:52:01,968 easyblock.py:4285 WARNING build failed (first 300 chars): cmd "make check -j 8 " exited with exit code 2 and output:
/kyukon/scratch/gent/vo/000/gvo00002/vsc40023/easybuild_REGTEST/RHEL8/cascadelake-volta-ib/software/CMake/3.27.6-GCCcore-13.2.0/bin/cmake -P /tmp/vsc40023/easybuild_build/GROMACS/2024.2/foss-2023b-CUDA-12.5.0/easybuild_obj/CMakeFiles/VerifyG
== 2024-06-12 23:52:01,968 easyblock.py:328 INFO Closing log for application name GROMACS version 2024.2