MPI Cheatsheet

Reference

Setup and teardown

// Initializes MPI. Needed to use other functions.
MPI_Init(           // out  int             Error value
    &argc,          // i/o  int*            Number of program arguments
    &argv           // i/o  char***         Program arguments
);

// Tears down MPI. Cannot call MPI funcs after this.
MPI_Finalize();     // out  int             Error value
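
A minimal sketch of the usual program skeleton. The compiler wrapper and launcher names (mpicc, mpirun) are the common ones but depend on the MPI installation:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);    // must be called before any other MPI function

    printf("MPI is up and running\n");

    MPI_Finalize();            // no MPI calls are allowed after this point
    return 0;
}

Typical build and run: mpicc hello.c -o hello, then mpirun -np 4 ./hello (file name and process count are illustrative).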

Information gathering

// Gets the ID of the current processor.
MPI_Comm_rank (     // out  int             Error value
    MPI_COMM_WORLD, // in   MPI_Comm        Communicator
    &my_rank        // out  int             ID of the processor
);

// Gets the number of processors in the network.
MPI_Comm_size (     // out  int             Error value
    MPI_COMM_WORLD, // in   MPI_Comm        Communicator
    &num_procs      // out  int             Number of processors
);
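
A short sketch combining both calls; it assumes MPI_Init has already been called and uses the same variable names as above:

int my_rank, num_procs;
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);    // this process's ID: 0 .. num_procs-1
MPI_Comm_size(MPI_COMM_WORLD, &num_procs);  // total number of processes

printf("Hello from process %d of %d\n", my_rank, num_procs);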

Basic Communication

// Sends a message. Requires matching MPI_Recv.
// Blocking.
MPI_Send (          // out  int             Error value
    buffer,         // in   void *         Sending buffer
    count,          // in   int            Size of message in buffer
    bufftype,       // in   MPI_Datatype   Type of elems of msg
    destination,    // in   int            Destination processor ID
    tag,            // in   int            Message tag
    comm            // in   MPI_Comm       Communicator
);

// Receives a message. Requires matching MPI_Send.
// Blocking.
MPI_Recv (          // out  int             Error value
    buffer,         // out  void *          Receiving buffer
    count,          // in   int             Max num of elems in buffer
    bufftype,       // in   MPI_Datatype    Type of elems in buffer
    source,         // in   int             Source processor ID
    tag,            // in   int             Accepted message tag
    comm,           // in   MPI_Comm        Communicator
    status          // out  MPI_Status *    Status
);
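
A sketch of a matched pair, assuming my_rank has been obtained as above: process 0 sends one int to process 1.

int tag = 0;
if (my_rank == 0) {
    int value = 42;
    MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
} else if (my_rank == 1) {
    int value;
    MPI_Status status;
    MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
    printf("Process 1 received %d\n", value);
}

MPI_STATUS_IGNORE can be passed instead of &status when the status is not needed.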

// Sends and receives a message in one call; useful for ring/shift
// communication without deadlock.
// Blocking.
MPI_Sendrecv(       // out  int             Error value
    sendbuf,        // in   void *          Sending buffer
    sendcount,      // in   int             Size of msg in send buffer
    sendtype,       // in   MPI_Datatype    Type of elems in send buffer
    dest,           // in   int             Destination processor ID
    sendtag,        // in   int             Message tag
    recvbuf,        // out  void *          Receiving buffer
    recvcount,      // in   int             Size of msg in receive buffer
    recvtype,       // in   MPI_Datatype    Type of elems in receive buffer
    source,         // in   int             Source processor ID
    recvtag,        // in   int             Accepted message tag
    comm,           // in   MPI_Comm        Communicator
    status          // out  MPI_Status *    Status
);
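
A sketch of a ring shift, assuming my_rank and num_procs as above: every process sends its rank to the right neighbour and receives from the left one, with no risk of deadlock.

int right = (my_rank + 1) % num_procs;
int left  = (my_rank - 1 + num_procs) % num_procs;
int send_val = my_rank;
int recv_val;

MPI_Sendrecv(&send_val, 1, MPI_INT, right, 0,
             &recv_val, 1, MPI_INT, left,  0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

printf("Process %d received %d from its left neighbour\n", my_rank, recv_val);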

// Blocks until all processes reach this point.
// Blocking. Synchronized.
MPI_Barrier(        // out  int             Error value
    comm            // in   MPI_Comm        Communicator
);
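
A common use is bracketing a timed region so all processes start together; do_work() is a placeholder for a local computation.

MPI_Barrier(MPI_COMM_WORLD);          // everyone starts the timed region together
double start = MPI_Wtime();

do_work();                            // hypothetical local computation

MPI_Barrier(MPI_COMM_WORLD);          // wait for the slowest process to finish
double elapsed = MPI_Wtime() - start;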

Workload distribution

// Broadcasts a message from one proc to all others.
// Blocking. Synchronized.
MPI_Bcast(          // out  int             Error value
    buffer,         // i/o  void *          Send/Receive buffer
    count,          // in   int             Buffer size
    datatype,       // in   MPI_Datatype    Type of elems in buffer (see appendix)
    root,           // in   int             Broadcaster processor ID
    comm            // in   MPI_Comm        Communicator
);
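
A sketch where process 0 decides a parameter and broadcasts it; the variable name is illustrative.

int n_steps = 0;
if (my_rank == 0) {
    n_steps = 1000;                   // e.g. read from a file or the command line
}

// After the call, every process holds the root's value.
MPI_Bcast(&n_steps, 1, MPI_INT, 0, MPI_COMM_WORLD);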

// Splits a message evenly for all processors in the network
// NOTE: sendbuf size should be at least sendcount * num_procs
// Blocking. Synchronized.
MPI_Scatter(        // out  int             Error value
    sendbuf,        // in   void *          Send buffer
    sendcount,      // in   int             Num of elems for each processor
    sendtype,       // in   MPI_Datatype    Type of elems to send
    recvbuf,        // out  void *          Receive buffer
    recvcount,      // in   int             Num of elems received per proc (=sendcount)
    recvtype,       // in   MPI_Datatype    Type of elems sent (=sendtype)
    root,           // in   int             Scatterer processor ID
    comm            // in   MPI_Comm        Communicator
);
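
A sketch where process 0 scatters an array in equal chunks; CHUNK is an assumed per-process count, and malloc requires <stdlib.h>.

enum { CHUNK = 4 };
int part[CHUNK];
int *full = NULL;

if (my_rank == 0) {
    full = malloc(CHUNK * num_procs * sizeof(int));
    for (int i = 0; i < CHUNK * num_procs; ++i) full[i] = i;
}

// Each process, root included, receives CHUNK consecutive elements.
MPI_Scatter(full, CHUNK, MPI_INT,
            part, CHUNK, MPI_INT,
            0, MPI_COMM_WORLD);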

// Assembles a message from pieces from all processors
// NOTE: recvbuf size should be at least recvcount * num_procs
// Blocking. Synchronized.
MPI_Gather(         // out  int             Error value
    sendbuf,        // in   void *          Send buffer
    sendcount,      // in   int             Num of elems in each processor
    sendtype,       // in   MPI_Datatype    Type of elems to send
    recvbuf,        // out  void *          Receive buffer
    recvcount,      // in   int             Num of elems received from each proc (=sendcount)
    recvtype,       // in   MPI_Datatype    Type of elems sent (=sendtype)
    root,           // in   int             Gatherer processor ID
    comm            // in   MPI_Comm        Communicator
);
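
The inverse sketch, continuing the variable names from the scatter example above: each process works on its chunk and the root reassembles the results in rank order.

int local[CHUNK];
for (int i = 0; i < CHUNK; ++i) local[i] = part[i] * part[i];   // some local work

int *result = NULL;
if (my_rank == 0) result = malloc(CHUNK * num_procs * sizeof(int));

// The root receives CHUNK elements from every process, ordered by rank.
MPI_Gather(local, CHUNK, MPI_INT,
           result, CHUNK, MPI_INT,
           0, MPI_COMM_WORLD);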

// Same as MPI_Gather, but all processors get the output
// NOTE: recvbuf size should be at least recvcount * num_procs
// Blocking. Synchronized.
MPI_Allgather(      // out  int             Error value
    sendbuf,        // in   void *          Send buffer
    sendcount,      // in   int             Num of elems in each processor
    sendtype,       // in   MPI_Datatype    Type of elems to send
    recvbuf,        // out  void *          Receive buffer
    recvcount,      // in   int             Num of elems received from each proc (=sendcount)
    recvtype,       // in   MPI_Datatype    Type of elems sent (=sendtype)
    comm            // in   MPI_Comm        Communicator
);
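
A small sketch where every process contributes one value and all of them end up with the complete, rank-ordered array.

int mine = my_rank * 10;                        // one value per process
int *all = malloc(num_procs * sizeof(int));

// No root argument: every process receives the full array.
MPI_Allgather(&mine, 1, MPI_INT,
              all,   1, MPI_INT,
              MPI_COMM_WORLD);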

// Splits a message in pieces of different size and offset
// Blocking. Synchronized.
MPI_Scatterv(       // out  int             Error value
    sendbuf,        // in   void *          Send buffer
    sendcounts,     // in   int *           Sizes of msgs for each proc
    displs,         // in   int *           Offsets of msgs for each proc
    sendtype,       // in   MPI_Datatype    Type of elems to send
    recvbuf,        // out  void *          Receive buffer
    recvcount,      // in   int             Count to receive in this proc
    recvtype,       // in   MPI_Datatype    Type of elems to receive
    root,           // in   int             Scatterer processor ID
    comm            // in   MPI_Comm        Communicator
);

// Gathers a message from pieces of different size and offset
// Blocking. Synchronized.
MPI_Gatherv(        // out  int             Error value
    sendbuf,        // in   void *          Send buffer
    sendcount,      // in   int             Count to send from this proc
    sendtype,       // in   MPI_Datatype    Type of elems to send
    recvbuf,        // out  void *          Receive buffer
    recvcounts,     // in   int *           Sizes of msgs from each proc
    displs,         // in   int *           Offsets of msgs from each proc
    recvtype,       // in   MPI_Datatype    Type of elems to receive
    root,           // in   int             Gatherer processor ID
    comm            // in   MPI_Comm        Communicator
);
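
A sketch of the variable-count pair for data that does not divide evenly; counts and displs are built on every process here for simplicity, and total is an illustrative size.

int total = 10;                                 // e.g. 10 elements over 3 processes
int *counts = malloc(num_procs * sizeof(int));
int *displs = malloc(num_procs * sizeof(int));

for (int p = 0, offset = 0; p < num_procs; ++p) {
    counts[p] = total / num_procs + (p < total % num_procs ? 1 : 0);
    displs[p] = offset;
    offset += counts[p];
}

int  my_count = counts[my_rank];
int *my_part  = malloc(my_count * sizeof(int));
int *full     = NULL;
if (my_rank == 0) {
    full = malloc(total * sizeof(int));
    for (int i = 0; i < total; ++i) full[i] = i;
}

// Root sends counts[p] elements starting at offset displs[p] to process p.
MPI_Scatterv(full, counts, displs, MPI_INT,
             my_part, my_count, MPI_INT,
             0, MPI_COMM_WORLD);

// ... work on my_part ...

// Root reassembles the pieces at the same offsets.
MPI_Gatherv(my_part, my_count, MPI_INT,
            full, counts, displs, MPI_INT,
            0, MPI_COMM_WORLD);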

Distributed calculation

// Applies an operation to values from all processors, producing a single
// result on one processor.
// NOTE: If an array is passed as input, the operation is applied element-wise:
//       the max of all elements at index 0, the max of all elements at
//       index 1, and so on.
// Blocking. Synchronized.
MPI_Reduce(         // out  int             Error value
    sendbuf,        // in   void *          Send buffer
    recvbuf,        // out  void *          Receive buffer
    count,          // in   int             Num of elems sent by each proc
    datatype,       // in   MPI_Datatype    Type of elems sent/received
    op,             // in   MPI_Op          Operation to perform (see appendix)
    root,           // in   int             Receiving processor ID
    comm            // in   MPI_Comm        Communicator
);
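
A sketch summing one value per process into process 0; local_sum stands in for whatever each process actually computed.

double local_sum = (double) my_rank;            // placeholder for a real partial result
double global_sum = 0.0;

// Only process 0 receives the reduced value; recvbuf is ignored elsewhere.
MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

if (my_rank == 0) printf("Total: %f\n", global_sum);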

// Same as MPI_Reduce, but all processors get the result.
// NOTE: If an array is passed as input, the operation is applied element-wise:
//       the max of all elements at index 0, the max of all elements at
//       index 1, and so on.
// Blocking. Synchronized.
MPI_Allreduce(      // out  int             Error value
    sendbuf,        // in   void *          Send buffer
    recvbuf,        // out  void *          Receive buffer
    count,          // in   int             Num of elems sent by each proc
    datatype,       // in   MPI_Datatype    Type of elems sent/received
    op,             // in   MPI_Op          Operation to perform (see appendix)
    comm            // in   MPI_Comm        Communicator
);
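
The same idea when every process needs the result, e.g. a global maximum used by everyone afterwards.

double local_max = (double) my_rank;            // placeholder for a locally computed value
double global_max;

// Every process ends up with the same global_max.
MPI_Allreduce(&local_max, &global_max, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);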

Appendix

Useful constants

  • MPI_COMM_WORLD: Default communicator containing all processes
  • MPI_ANY_TAG: Matches any tag when receiving messages

Types

MPI Type             C type
MPI_CHAR             char
MPI_SIGNED_CHAR      signed char
MPI_SHORT            short
MPI_INT              int
MPI_LONG             long int
MPI_LONG_LONG_INT    long long int
MPI_BYTE             unsigned char
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI_FLOAT            float
MPI_DOUBLE           double
MPI_LONG_DOUBLE      long double

Operations

Value        Description
MPI_MAX      Returns the maximum element.
MPI_MIN      Returns the minimum element.
MPI_SUM      Sums the elements.
MPI_PROD     Multiplies all elements.
MPI_LAND     Performs a logical and across the elements.
MPI_LOR      Performs a logical or across the elements.
MPI_BAND     Performs a bitwise and across the bits of the elements.
MPI_BOR      Performs a bitwise or across the bits of the elements.
MPI_MAXLOC   Returns the maximum value and the rank of the process that owns it.
MPI_MINLOC   Returns the minimum value and the rank of the process that owns it.