@smoors
Created September 29, 2020 16:36
(partial) EasyBuild log for failed build of /local/3365505.master01.hydra.brussel.vsc/eb-Rq6TAE/files_pr11398/g/GROMACS/GROMACS-2020.3-fosscuda-2019b.eb (PR #11398)
NOTE 1 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 27.00
NOTE 3 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
NVE simulation: will use the initial temperature of 398.997 K for
determining the Verlet buffer size
NOTE 4 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
You are using a plain Coulomb cut-off, which might produce artifacts.
You might want to consider using PME electrostatics.
There were 4 notes
Reading file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.tpr, VERSION 2020.3-MODIFIED (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node251.hydra.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Using 1 OpenMP thread per tMPI thread
NOTE: Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank.
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
starting mdrun 'spc2'
16 steps, 0.0 ps.
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 398.997 K
Calculated rlist for 1x1 atom pair-list as 0.774 nm, buffer size 0.074 nm
Set rlist, assuming 4x4 atom pair-list, to 0.769 nm, buffer size 0.069 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
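(Aside: the rlist values above are just the interaction cut-off plus the estimated Verlet buffer, r_list = r_cut + r_buffer; both 0.774 nm - 0.074 nm and 0.769 nm - 0.069 nm give 0.700 nm, so these runs appear to use a 0.700 nm cut-off. The cut-off value is inferred from that arithmetic, not stated in the log.)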
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 7.0%.
The balanceable part of the MD step is 19%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 1.4%.
NOTE: 44 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        0.207        0.104      199.9
                 (ns/day)    (hour/ns)
Performance:       14.190        1.691
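(Aside: in these mdrun timing tables, the (%) column is Core t / Wall t x 100, so Core t ≈ 2 x Wall t here simply reflects the two single-threaded ranks running concurrently; that reading of the layout follows from the "Using 2 MPI threads" / "1 OpenMP thread" lines above.)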
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_reference.edr as single precision energy file
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.edr as single precision energy file
Reading energy frame 0 time 0.000
Reading energy frame 0 time 0.000
Reading energy frame 1 time 0.004
Reading energy frame 1 time 0.004
Reading energy frame 2 time 0.008
Reading energy frame 2 time 0.008
Reading energy frame 3 time 0.012
Reading energy frame 3 time 0.012
Reading energy frame 4 time 0.016
Reading energy frame 4 time 0.016
Last energy frame read 4 time 0.016
NOTE 1 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 27.00
NOTE 3 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
NVE simulation: will use the initial temperature of 398.997 K for
determining the Verlet buffer size
NOTE 4 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
You are using a plain Coulomb cut-off, which might produce artifacts.
You might want to consider using PME electrostatics.
There were 4 notes
Reading file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.tpr, VERSION 2020.3-MODIFIED (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node251.hydra.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Using 1 OpenMP thread per tMPI thread
NOTE: Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank.
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
starting mdrun 'spc2'
16 steps, 0.0 ps.
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 398.997 K
Calculated rlist for 1x1 atom pair-list as 0.774 nm, buffer size 0.074 nm
Set rlist, assuming 4x4 atom pair-list, to 0.769 nm, buffer size 0.069 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 7.3%.
The balanceable part of the MD step is 20%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 1.4%.
NOTE: 46 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        0.305        0.153      199.9
                 (ns/day)    (hour/ns)
Performance:        9.620        2.495
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_reference.edr as single precision energy file
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.edr as single precision energy file
NOTE 1 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 27.00
NOTE 3 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
NVE simulation: will use the initial temperature of 398.997 K for
determining the Verlet buffer size
NOTE 4 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
You are using a plain Coulomb cut-off, which might produce artifacts.
You might want to consider using PME electrostatics.
There were 4 notes
Reading file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.tpr, VERSION 2020.3-MODIFIED (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node251.hydra.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Using 1 OpenMP thread per tMPI thread
NOTE: Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank.
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
starting mdrun 'spc2'
16 steps, 0.0 ps.
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 398.997 K
Calculated rlist for 1x1 atom pair-list as 0.774 nm, buffer size 0.074 nm
Set rlist, assuming 4x4 atom pair-list, to 0.769 nm, buffer size 0.069 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 6.6%.
The balanceable part of the MD step is 20%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 1.3%.
NOTE: 45 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        0.220        0.110      199.8
                 (ns/day)    (hour/ns)
Performance:       13.337        1.800
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_reference.edr as single precision energy file
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.edr as single precision energy file
NOTE 1 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 27.00
NOTE 3 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
NVE simulation: will use the initial temperature of 398.997 K for
determining the Verlet buffer size
NOTE 4 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_input.mdp]:
You are using a plain Coulomb cut-off, which might produce artifacts.
You might want to consider using PME electrostatics.
There were 4 notes
Reading file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.tpr, VERSION 2020.3-MODIFIED (single precision)
Can not increase nstlist because an NVE ensemble is used
On host node251.hydra.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Using 1 OpenMP thread per tMPI thread
NOTE: Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank.
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
starting mdrun 'spc2'
16 steps, 0.0 ps.
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 398.997 K
Calculated rlist for 1x1 atom pair-list as 0.774 nm, buffer size 0.074 nm
Set rlist, assuming 4x4 atom pair-list, to 0.769 nm, buffer size 0.069 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 6.8%.
The balanceable part of the MD step is 20%, load imbalance is computed from this.
Part of the total run time spent waiting due to load imbalance: 1.3%.
NOTE: 45 % of the run time was spent communicating energies,
you might want to increase some nst* mdp options
               Core t (s)   Wall t (s)        (%)
       Time:        0.223        0.111      199.9
                 (ns/day)    (hour/ns)
Performance:       13.173        1.822
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1_reference.edr as single precision energy file
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_1.edr as single precision energy file
[ OK ] PropagatorsWithConstraints/PeriodicActionsTest.PeriodicActionsAgreeWithReference/1 (3236 ms)
[ RUN ] PropagatorsWithConstraints/PeriodicActionsTest.PeriodicActionsAgreeWithReference/2
NOTE 1 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 27.00
NOTE 3 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
You are using a plain Coulomb cut-off, which might produce artifacts.
You might want to consider using PME electrostatics.
There were 3 notes
Reading file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2.tpr, VERSION 2020.3-MODIFIED (single precision)
Changing nstlist from 8 to 25, rlist from 0.759 to 0.912
On host node251.hydra.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Using 1 OpenMP thread per tMPI thread
NOTE: Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank.
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
starting mdrun 'spc2'
16 steps, 0.0 ps.
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 298 K
Calculated rlist for 1x1 atom pair-list as 0.763 nm, buffer size 0.063 nm
Set rlist, assuming 4x4 atom pair-list, to 0.759 nm, buffer size 0.059 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
               Core t (s)   Wall t (s)        (%)
       Time:        0.319        0.160      199.9
                 (ns/day)    (hour/ns)
Performance:        9.203        2.608
NOTE 1 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 27.00
NOTE 3 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
You are using a plain Coulomb cut-off, which might produce artifacts.
You might want to consider using PME electrostatics.
There were 3 notes
Reading file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2.tpr, VERSION 2020.3-MODIFIED (single precision)
Changing nstlist from 8 to 25, rlist from 0.759 to 0.912
On host node251.hydra.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Using 1 OpenMP thread per tMPI thread
NOTE: Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank.
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
starting mdrun 'spc2'
16 steps, 0.0 ps.
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 298 K
Calculated rlist for 1x1 atom pair-list as 0.763 nm, buffer size 0.063 nm
Set rlist, assuming 4x4 atom pair-list, to 0.759 nm, buffer size 0.059 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Writing final coordinates.
               Core t (s)   Wall t (s)        (%)
       Time:        0.226        0.113      199.9
                 (ns/day)    (hour/ns)
Performance:       13.007        1.845
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_reference.edr as single precision energy file
Opened /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2.edr as single precision energy file
Reading energy frame 0 time 0.000
Reading energy frame 0 time 0.000
Reading energy frame 1 time 0.004
Reading energy frame 1 time 0.004
Reading energy frame 2 time 0.008
Reading energy frame 2 time 0.008
Reading energy frame 3 time 0.012
Reading energy frame 3 time 0.012
Reading energy frame 4 time 0.016
Reading energy frame 4 time 0.016
Last energy frame read 4 time 0.016
NOTE 1 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 2 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (4)
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 27.00
NOTE 3 [file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2_input.mdp]:
You are using a plain Coulomb cut-off, which might produce artifacts.
You might want to consider using PME electrostatics.
There were 3 notes
Reading file /tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj/src/programs/mdrun/tests/Testing/Temporary/PropagatorsWithConstraints_PeriodicActionsTest_PeriodicActionsAgreeWithReference_2.tpr, VERSION 2020.3-MODIFIED (single precision)
Changing nstlist from 8 to 25, rlist from 0.759 to 0.912
On host node251.hydra.os 1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 2 MPI threads
Using 1 OpenMP thread per tMPI thread
NOTE: Your choice of number of MPI ranks and amount of resources results in using 1 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 2 and 6 threads per rank.
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
-------------------------------------------------------
Program: mdrun-mpi-coordination-test, version 2020.3-MODIFIED
Source file: src/gromacs/gpu_utils/pinning.cu (line 106)
Function: gmx::pinBuffer(void*, std::size_t)::<lambda()>
MPI rank: 1 (out of 2)
Assertion failed:
Condition: stat == cudaSuccess
Could not register the host memory for page locking for GPU transfers.
cudaErrorDevicesUnavailable: all CUDA-capable devices are busy or unavailable
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
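(Aside: the failed assertion above comes from GROMACS page-locking ("registering") a host buffer for GPU transfers in gmx::pinBuffer(). Below is a minimal standalone CUDA sketch of that step, not GROMACS code. cudaErrorDevicesUnavailable at this point is commonly seen when the device is in exclusive-process compute mode and another process already holds it, which would fit both MPI ranks being mapped to GPU 0, but that diagnosis is an assumption, not something this log confirms.)

// pin_check.cu -- minimal sketch of the host-memory pinning step that
// fails above; build with: nvcc pin_check.cu -o pin_check
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1 << 20;  // 1 MiB test buffer
    void* buf = nullptr;
    // Page-aligned allocation keeps cudaHostRegister happy on all platforms.
    if (posix_memalign(&buf, 4096, bytes) != 0) {
        return EXIT_FAILURE;
    }
    // Page-lock (pin) the buffer so the GPU can DMA directly to/from it.
    // This registration is what the 'stat == cudaSuccess' assertion checks.
    cudaError_t stat = cudaHostRegister(buf, bytes, cudaHostRegisterDefault);
    if (stat != cudaSuccess) {
        // e.g. cudaErrorDevicesUnavailable: all CUDA-capable devices are
        // busy or unavailable (often exclusive-process-mode contention --
        // an assumption, see the note above).
        std::fprintf(stderr, "cudaHostRegister failed: %s\n",
                     cudaGetErrorString(stat));
        std::free(buf);
        return EXIT_FAILURE;
    }
    std::puts("host buffer pinned successfully");
    cudaHostUnregister(buf);
    std::free(buf);
    return EXIT_SUCCESS;
}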
Determining Verlet buffer for a tolerance of 1e-06 kJ/mol/ps at 298 K
Calculated rlist for 1x1 atom pair-list as 0.763 nm, buffer size 0.063 nm
Set rlist, assuming 4x4 atom pair-list, to 0.759 nm, buffer size 0.059 nm
Note that mdrun will redetermine rlist based on the actual pair-list setup
This run will generate roughly 0 Mb of data
Start 49: GmxapiExternalInterfaceTests
49/52 Test #49: GmxapiExternalInterfaceTests ........ Passed 3.81 sec
Start 50: GmxapiMpiTests
50/52 Test #50: GmxapiMpiTests ...................... Passed 4.01 sec
Start 51: GmxapiInternalInterfaceTests
51/52 Test #51: GmxapiInternalInterfaceTests ........ Passed 0.66 sec
Start 52: GmxapiInternalsMpiTests
52/52 Test #52: GmxapiInternalsMpiTests ............. Passed 0.64 sec
98% tests passed, 1 tests failed out of 52
Label Time Summary:
GTest = 157.56 sec*proc (52 tests)
IntegrationTest = 68.29 sec*proc (9 tests)
MpiTest = 87.83 sec*proc (8 tests)
SlowTest = 76.42 sec*proc (2 tests)
UnitTest = 12.85 sec*proc (41 tests)
Total Test time (real) = 157.62 sec
The following tests FAILED:
48 - MdrunMpiCoordinationTestsTwoRanks (Failed)
Errors while running CTest
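(Aside: to iterate on just this failure, the failing test can typically be re-run in isolation from the build directory with ctest -R MdrunMpiCoordinationTestsTwoRanks --output-on-failure; -R and --output-on-failure are standard CTest options, and re-running under the same GPU allocation is an assumed debugging step, not part of this log.)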
make[3]: *** [CMakeFiles/run-ctest-nophys] Error 8
make[3]: Leaving directory `/tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj'
make[2]: *** [CMakeFiles/run-ctest-nophys.dir/all] Error 2
make[2]: Leaving directory `/tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj'
make[1]: *** [CMakeFiles/check.dir/rule] Error 2
make[1]: Leaving directory `/tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/easybuild_obj'
make: *** [check] Error 2
(at easybuild/tools/run.py:533 in parse_cmd_output)
== 2020-09-29 18:36:33,104 config.py:576 DEBUG software install path as specified by 'installpath' and 'subdir_software': /tmp/vsc10009/ebinstall/11398/software
== 2020-09-29 18:36:33,104 filetools.py:1623 INFO Removing lock /tmp/vsc10009/ebinstall/11398/software/.locks/_tmp_vsc10009_ebinstall_11398_software_GROMACS_2020.3-fosscuda-2019b.lock...
== 2020-09-29 18:36:33,104 filetools.py:330 INFO Path /tmp/vsc10009/ebinstall/11398/software/.locks/_tmp_vsc10009_ebinstall_11398_software_GROMACS_2020.3-fosscuda-2019b.lock successfully removed.
== 2020-09-29 18:36:33,104 filetools.py:1627 INFO Lock removed: /tmp/vsc10009/ebinstall/11398/software/.locks/_tmp_vsc10009_ebinstall_11398_software_GROMACS_2020.3-fosscuda-2019b.lock
== 2020-09-29 18:36:33,105 easyblock.py:3311 WARNING build failed (first 300 chars): cmd " make check -j 24 " exited with exit code 2 and output:
/theia/home/apps/CO7/broadwell/software/CMake/3.15.3-GCCcore-8.3.0/bin/cmake -S/tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/build/GROMACS/2020.3/fosscuda-2019b/gromacs-2020.3 -B/tmp/3365505.master01.hydra.brussel.vsc/tmp/vsc10009/
== 2020-09-29 18:36:33,105 easyblock.py:295 INFO Closing log for application name GROMACS version 2020.3