@angel-devicente
Created April 21, 2020 07:22
/******************************************************************
* Angel de Vicente (angel.de.vicente@iac.es)
* https://github.com/angel-devicente/
*
* A simple example illustrating the use of locks with passive target
* synchronization (MPI RMA).
*
* We assume a master-worker setting in which the master process exposes a
* window variable (w_counter) so that every process can keep track of the
* job ids being done. Each process, including the master, performs
* local_jobs jobs. By using MPI_Win_lock and MPI_Get_accumulate (in the
* master itself as well), w_counter is updated atomically, with no race
* conditions. The master could update w_counter directly, but that would
* lead to race conditions!
******************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  const int local_jobs = 10;
  const int max_sleep = 1;    /* note: rand() % 1 is always 0, so there is no
                                 actual sleep; raise max_sleep for longer "jobs" */
  int my_rank, size, master;
  int *w_counter, counter;
  MPI_Aint w_size;
  int e_size;
  MPI_Win win;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  master = 0;

  if (my_rank == master) {
    /* The master exposes a one-integer window holding the shared job counter */
    w_size = (MPI_Aint)sizeof(int);
    e_size = sizeof(int);
    MPI_Alloc_mem(w_size, MPI_INFO_NULL, &w_counter);
    *w_counter = 0;
    MPI_Win_create(w_counter, w_size, e_size, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  } else {
    /* Workers expose no memory of their own; they only target the master's window */
    MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  }

  int one = 1;
  for (int i = 1; i <= local_jobs; i++) {
    sleep(rand() % max_sleep);  /* simulate doing a job */

    /* Atomic fetch-and-increment of the shared counter; the master goes
       through the window exactly like the workers do */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, master, 0, win);
    MPI_Get_accumulate(&one, 1, MPI_INT,
                       &counter, 1, MPI_INT,
                       master, (MPI_Aint)0, 1, MPI_INT,
                       MPI_SUM, win);
    MPI_Win_unlock(master, win);

    /* counter holds the value *before* the increment, so this job's id is counter+1 */
    printf("Rank %d finished job n: %d \n", my_rank, counter + 1);

    /* On the master, don't update the window memory directly, e.g.
     *   *w_counter = *w_counter + 1;
     * that bypasses the RMA synchronization and causes race conditions. */
  }

  /* MPI_Win_free is collective and completes all pending RMA operations, so
     the master can safely read w_counter afterwards. Windows must be freed
     before MPI_Finalize. */
  MPI_Win_free(&win);

  if (my_rank == master) {
    printf("\n\nIn total we got %d jobs done \n", *w_counter);
    MPI_Free_mem(w_counter);
  }

  MPI_Finalize();
  return 0;
}
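
For a single-element fetch-and-add like this, MPI (3.0 and later) also provides MPI_Fetch_and_op, a restricted form of MPI_Get_accumulate that implementations can often optimize better. A minimal sketch of the locked section rewritten with it, assuming the same window and variables as above:

/* Fetch-and-increment with MPI_Fetch_and_op: one element, one op */
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, master, 0, win);
MPI_Fetch_and_op(&one, &counter, MPI_INT,
                 master, (MPI_Aint)0,
                 MPI_SUM, win);
MPI_Win_unlock(master, win);
printf("Rank %d finished job n: %d \n", my_rank, counter + 1);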
Compile and run like this (the exact interleaving of job ids varies from run to run):

$ mpicc -o mw_lock mw_lock.c
$ mpirun -np 3 ./mw_lock
Rank 0 finished job n: 3
Rank 1 finished job n: 1
Rank 2 finished job n: 2
Rank 1 finished job n: 4
Rank 2 finished job n: 5
Rank 0 finished job n: 6
Rank 1 finished job n: 7
Rank 0 finished job n: 9
Rank 2 finished job n: 8
Rank 0 finished job n: 10
Rank 2 finished job n: 11
Rank 0 finished job n: 12
Rank 2 finished job n: 13
Rank 0 finished job n: 14
Rank 2 finished job n: 15
Rank 0 finished job n: 16
Rank 2 finished job n: 17
Rank 0 finished job n: 18
Rank 2 finished job n: 19
Rank 0 finished job n: 20
Rank 2 finished job n: 21
Rank 0 finished job n: 22
Rank 2 finished job n: 23
Rank 1 finished job n: 24
Rank 1 finished job n: 25
Rank 1 finished job n: 26
Rank 1 finished job n: 27
Rank 1 finished job n: 28
Rank 1 finished job n: 29
Rank 1 finished job n: 30


In total we got 30 jobs done
$ 
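
A side note on the lock type: the MPI standard guarantees that accumulate calls (MPI_Accumulate, MPI_Get_accumulate, MPI_Fetch_and_op) on the same location with the same predefined datatype are element-wise atomic with respect to one another, so the exclusive lock above could be relaxed to a shared one, which does not serialize the processes. A minimal sketch of the locked section under that assumption:

/* Accumulate operations are mutually atomic per element, so a shared
 * lock is sufficient here and allows concurrent counter updates. */
MPI_Win_lock(MPI_LOCK_SHARED, master, 0, win);
MPI_Get_accumulate(&one, 1, MPI_INT,
                   &counter, 1, MPI_INT,
                   master, (MPI_Aint)0, 1, MPI_INT,
                   MPI_SUM, win);
MPI_Win_unlock(master, win);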
