By @costerwi, last active June 11, 2024
Overview of running Abaqus MPI on Linux

Abaqus MPI on Linux

MPI Versions Delivered with Abaqus

| Abaqus version      | IBM Platform MPI (PMPI) | Intel MPI (IMPI)  |
| ------------------- | ----------------------- | ----------------- |
| 2018 to 2020 HF2    | 9.1.4.3 (S, E)          | 2017.2.174        |
| 2020 HF3 (FP2024)   | 9.1.4.3                 | 2017.2.174 (S, E) |
| 2020 HF4 (FP2030)   | 9.1.4.3 (S, E)          | 2017.2.174        |
| 2020 >=HF5 (FP2038) | 9.1.4.3 (S)             | 2019 Update 7 (E) |
| 2021 Golden         | 9.1.4.3 (S, E)          | 2017.2.174        |
| 2021 >=HF3 (FP2042) | 9.1.4.3 (S)             | 2019 Update 7 (E) |
| 2022                | 9.1.4.3 (S)             | 2021.3 (E)        |

Default settings gleaned from QA00000066035 Changing the SIMULIA Abaqus Analysis MPI Configuration

  • (S) Default for Abaqus Standard
  • (E) Default for Abaqus Explicit

Abaqus 2021 HF1 and HF2 were Dassault-internal releases and are not available for download.

Intel MPI 2019 Update 7 adds support for the Amazon Elastic Compute Cloud (EC2) Elastic Fabric Adapter (EFA). The EFA is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications, enabling application performance comparable to an on-premises HPC cluster with the scalability, flexibility, and elasticity of the AWS Cloud.

Latest configuration options

As shown in the table above, the MPI implementations for the latest versions of Abaqus/Standard and Abaqus/Explicit are individually configurable. The following keys are supported by the mp_mpi_implementation parameter dictionary:

  • DEFAULTMPI: The default MPI implementation applied for all solvers. The value of this key is superseded when a particular solver MPI key is defined.
  • EXPLICIT: The MPI implementation used for Abaqus/Explicit
  • STANDARD: The MPI implementation used for Abaqus/Standard

Abaqus can also be configured to use Cray MPI (CMPI), which is expected to be ABI-compatible with Intel MPI; however, no official testing has been performed, and additional configuration is necessary to operate in cluster-native mode. Contact Cray for more information.

For example, the following environment-file settings select Intel MPI by default but use IBM Platform MPI for Abaqus/Standard; the CMPI entry points at the Intel MPI mpirun because of the expected ABI compatibility:

mp_mpi_implementation={DEFAULTMPI: IMPI, STANDARD: PMPI}
mp_mpirun_path={
  PMPI: '/opt/CAE/SIMULIA/EstProducts/2020/linux_a64/code/bin/SMAExternal/pmpi/bin/mpirun',
  IMPI: '/opt/CAE/SIMULIA/EstProducts/2020/linux_a64/code/bin/SMAExternal/impi/intel64/bin/mpirun',
  CMPI: '/opt/CAE/SIMULIA/EstProducts/2020/linux_a64/code/bin/SMAExternal/impi/intel64/bin/mpirun',
  }

New architecture delivers improved parallel performance of Abaqus/Explicit

Parallel execution of Abaqus/Explicit is now available in hybrid mode using a combination of MPI and threads. This functionality is first available in the Abaqus 2021 FD03 (FP.2042) and 2020 FD03 (FP.2022) releases.

Execution in hybrid mode is invoked by setting the command-line option threads_per_mpi_process=m. The total number of cpus must be divisible by the number of threads per MPI process.

abaqus job=beam cpus=80 threads_per_mpi_process=20
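The divisibility rule determines the process layout: in the command above, 80 cpus with 20 threads per MPI process yields 80/20 = 4 MPI processes. A minimal sketch of that check (the variable names are illustrative, not Abaqus parameters):

```shell
# Sketch: validate a hybrid-mode layout before submitting the job.
# Abaqus itself enforces this rule; this only mirrors the arithmetic.
cpus=80
threads_per_mpi_process=20
if [ $(( cpus % threads_per_mpi_process )) -ne 0 ]; then
    echo "cpus must be divisible by threads_per_mpi_process" >&2
    exit 1
fi
# 80 cpus / 20 threads per process = 4 MPI processes of 20 threads each
echo "MPI processes: $(( cpus / threads_per_mpi_process ))"
```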

See also:

  1. QA00000009316 FAQ on setup of MPI
  2. QA00000008994 Configuring Abaqus for distributed memory parallel execution
  3. QA00000008549 MPI Init errors running in distributed memory parallel on systems with high speed interconnects