Splitting an array in Open MPI and combining results [C/C++]

compile

mpic++ -O0  -c main.c -o main.o

link

mpic++ -g main.o -o mpi_test
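
The compile and link steps can also be combined into one command; a minimal sketch, assuming the same source file and the same output name as the link step above:

mpic++ -g -O0 main.c -o mpi_test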

run

create the host file

hostname > hostfile
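
To run across several machines instead of only the local host, the hostfile can list one host per line with an optional slot count (Open MPI's hostfile syntax); node1 and node2 below are placeholder hostnames:

node1 slots=4
node2 slots=4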

run the binary with mpirun

mpirun  --use-hwthread-cpus  -hostfile hostfile ./mpi_test
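
Each worker rank handles one pair of the 14-element array, so launching 8 processes (rank 0 plus 7 workers) covers the whole array; -np is the standard mpirun flag for fixing the process count:

mpirun -np 8 -hostfile hostfile ./mpi_test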
#include "stdio.h"
#include <stdlib.h>
#include <math.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
int process_Rank, size_Of_Comm;
double distro_Array[] = {1, 2, 3, 4, 5, 6 ,7, 8, 9, 10, 11, 12, 13, 14}; // data to be distributed
int N = sizeof(distro_Array)/sizeof(distro_Array[0]);
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size_Of_Comm);
MPI_Comm_rank(MPI_COMM_WORLD, &process_Rank);
double scattered_Data[2];
int i;
if(process_Rank==0)
{
for(i=1;i<size_Of_Comm;i++)
{
if(2*i<=N)
{
printf("scattering data %f\n", *((distro_Array)+2*i));
MPI_Send(
distro_Array+2*(i-1), //Address of the message we are sending.
2, //Number of elements handled by that address.
MPI_DOUBLE, //MPI_TYPE of the message we are sending.
i, //Rank of receiving process
1, //Message Tag
MPI_COMM_WORLD //MPI Communicator
);
}
}
}
else
{
printf("waiting for data by %d\n", process_Rank);
if(2*process_Rank<=N)
{
MPI_Recv(
&scattered_Data, //Address of the message we are receiving.
2, //Number of elements handled by that address.
MPI_DOUBLE, //MPI_TYPE of the message we are sending.
0, //Rank of sending process
1, //Message Tag
MPI_COMM_WORLD, //MPI Communicator
MPI_STATUS_IGNORE //MPI Status Object
);
printf("Process %d has received: ",process_Rank);
double sum=0;
for(int i =0;i<2;i++)
{
printf("%f ", scattered_Data[i]);
sum += scattered_Data[i];
}
MPI_Send(
&sum, //Address of the message we are sending.
1, //Number of elements handled by that address.
MPI_DOUBLE, //MPI_TYPE of the message we are sending.
0, //Rank of receiving process
1, //Message Tag
MPI_COMM_WORLD //MPI Communicator
);
printf("\n");
}
}
MPI_Barrier(MPI_COMM_WORLD); // all the sub ranks/processes waits here
/* process 0 will aggregate the results*/
if(process_Rank==0)
{
for(i=1;i<size_Of_Comm;i++)
{
double sum=0;
MPI_Recv(
&sum, //Address of the message we are receiving.
1, //Number of elements handled by that address.
MPI_DOUBLE, //MPI_TYPE of the message we are sending.
i, //Rank of sending process
1, //Message Tag
MPI_COMM_WORLD, //MPI Communicator
MPI_STATUS_IGNORE //MPI Status Object
);
printf("Process %d has sent: %f \n", i, sum);
}
}
MPI_Finalize();
return 0;
}
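
The same split-and-combine pattern can be written with MPI collectives instead of explicit MPI_Send/MPI_Recv pairs. The sketch below is an alternative to the gist's code, not part of it; it assumes the element count divides evenly by the number of ranks, and it uses MPI_Scatter to distribute the chunks and MPI_Reduce to combine the partial sums on rank 0.

/* collective_version.c -- a minimal sketch using MPI collectives;
 * assumes N is evenly divisible by the number of ranks */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double data[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};
    int N = sizeof(data) / sizeof(data[0]);
    int chunk = N / size;            /* elements per rank (any remainder is dropped) */

    double local[14];                /* big enough for any chunk of this array */

    /* rank 0 hands every rank (itself included) an equal chunk */
    MPI_Scatter(data, chunk, MPI_DOUBLE, local, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    double local_sum = 0, total = 0;
    for (int i = 0; i < chunk; i++)
        local_sum += local[i];

    /* all partial sums are added together on rank 0 */
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total sum = %f\n", total);

    MPI_Finalize();
    return 0;
}

Unlike the point-to-point version, rank 0 also does a share of the work here, and no explicit barrier is needed: MPI_Reduce only completes on rank 0 once every rank has contributed its partial sum.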
Keywords: openmpi, parallel, splitting, MPI_Recv, MPI_Barrier, MPI_Send