@ehamberg
Last active January 17, 2024 00:55
MPI_Scatterv example
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define SIZE 4

int main(int argc, char *argv[])
{
    int rank, size;         // for storing this process' rank, and the number of processes

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendcounts;        // array describing how many elements to send to each process
    int *displs;            // array describing the displacements where each segment begins

    int rem = (SIZE*SIZE)%size; // elements remaining after division among processes
    int sum = 0;                // sum of counts, used to calculate displacements
    char rec_buf[100];          // buffer where the received data should be stored

    // the data to be distributed
    char data[SIZE][SIZE] = {
        {'a', 'b', 'c', 'd'},
        {'e', 'f', 'g', 'h'},
        {'i', 'j', 'k', 'l'},
        {'m', 'n', 'o', 'p'}
    };

    sendcounts = malloc(sizeof(int)*size);
    displs = malloc(sizeof(int)*size);

    // calculate send counts and displacements
    for (int i = 0; i < size; i++) {
        sendcounts[i] = (SIZE*SIZE)/size;
        if (rem > 0) {
            sendcounts[i]++;
            rem--;
        }

        displs[i] = sum;
        sum += sendcounts[i];
    }

    // print calculated send counts and displacements for each process
    if (0 == rank) {
        for (int i = 0; i < size; i++) {
            printf("sendcounts[%d] = %d\tdispls[%d] = %d\n", i, sendcounts[i], i, displs[i]);
        }
    }

    // divide the data among processes as described by sendcounts and displs
    MPI_Scatterv(&data, sendcounts, displs, MPI_CHAR, &rec_buf, 100, MPI_CHAR, 0, MPI_COMM_WORLD);

    // print what each process received
    printf("%d: ", rank);
    for (int i = 0; i < sendcounts[rank]; i++) {
        printf("%c\t", rec_buf[i]);
    }
    printf("\n");

    MPI_Finalize();

    free(sendcounts);
    free(displs);

    return 0;
}
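To try the example, a typical build-and-run looks like the following (assuming an MPI implementation such as Open MPI or MPICH provides the wrapper compiler and launcher, and that the gist is saved as scatterv.c; the file name is just for illustration):

    mpicc scatterv.c -o scatterv    # compile with the MPI wrapper compiler
    mpirun -np 4 ./scatterv         # launch four processes

With four processes each rank receives one row of the 4x4 array, as in the sample output further down the thread.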
@amirmasoudabdol
Copy link

Thanks for your code :) It helped me a lot.

@ehamberg
Copy link
Author

ehamberg commented Aug 6, 2012

Cool. Glad to hear you found it useful. :)

@aloksukhwani
Copy link

Thanks a lot. This is an easy-to-understand tutorial compared to other sources.

@mingc00
Copy link

mingc00 commented May 23, 2013

Thanks for your sample code!

The variable "rem" is initialized while size is still uninitialized.
I think "rem" should be initialized after calling MPI_Comm_size().

@alex-vo
Copy link

alex-vo commented Jun 28, 2014

I get "Floating point exception (signal 8)"...

@wme7
Copy link

wme7 commented Mar 28, 2016

Thx for this sample!

@dedmari
Copy link

dedmari commented Jun 22, 2016

Thank you for this sample 👍. I guess rem should be calculated after MPI_Comm_size(MPI_COMM_WORLD, &size);

@jennakwon06
Copy link

Hello! Thank you so much for the sample.

I had one question. Shouldn't lines 30-43 also be enclosed in an "if (rank==0)" conditional? I'm not sure why every process would have to allocate memory and calculate values for sendcounts and displs.
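For what it's worth, MPI_Scatterv only reads sendcounts and displs on the root rank; this example computes them on every rank mainly because each rank reuses sendcounts[rank] to know how many characters it received. A minimal sketch of a root-only variant, assuming the counts are handed out with MPI_Scatter first (my_count is just an illustrative name):

    int *sendcounts = NULL, *displs = NULL;
    int my_count = 0;                       /* how many chars this rank will receive */
    if (rank == 0) {
        sendcounts = malloc(sizeof(int)*size);
        displs     = malloc(sizeof(int)*size);
        /* ... fill sendcounts and displs exactly as in the loop above ... */
    }
    /* every rank still needs its own count, so scatter the counts first */
    MPI_Scatter(sendcounts, 1, MPI_INT, &my_count, 1, MPI_INT, 0, MPI_COMM_WORLD);
    /* sendcounts and displs are only read on the root */
    MPI_Scatterv(data, sendcounts, displs, MPI_CHAR,
                 rec_buf, my_count, MPI_CHAR, 0, MPI_COMM_WORLD);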

@MonaASaleh
Copy link

Thank you, and I have the same question as @jennakwon06.

@leonkielstra
Copy link

Thanks for this example!

Is there a reason why you reinitialise these variables?

int *sendcounts;    // array describing how many elements to send to each process // line 11
int *displs;        // array describing the displacements where each segment begins // line 12

int *sendcounts = malloc(sizeof(int)*size); // line 30
int *displs = malloc(sizeof(int)*size); // line 31
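In the revision shown above, the malloc lines assign to the pointers declared earlier rather than declaring new ones. A minimal illustration of the difference:

    int *sendcounts;                        /* declaration: an uninitialized pointer     */
    /* ... */
    sendcounts = malloc(sizeof(int)*size);  /* assignment only: points it at real storage */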

@Manchuk
Copy link

Manchuk commented Nov 25, 2017

Can I call MPI_Scatterv in only a few processes?
For example, only for processes where rank <= 20?
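MPI_Scatterv is a collective, so every rank of the communicator passed to it must call it. One way to restrict it to ranks 0..20 is to split those ranks into their own communicator first; a hedged sketch (sub_comm and color are illustrative names, and sendcounts/displs would also need to be sized for the sub-communicator):

    int color = (rank <= 20) ? 0 : MPI_UNDEFINED;   /* ranks above 20 opt out */
    MPI_Comm sub_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &sub_comm);

    if (sub_comm != MPI_COMM_NULL) {
        /* only the ranks that joined sub_comm take part in this collective */
        MPI_Scatterv(data, sendcounts, displs, MPI_CHAR,
                     rec_buf, 100, MPI_CHAR, 0, sub_comm);
        MPI_Comm_free(&sub_comm);
    }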

@sateeshBangarugiri
Copy link

mpirun noticed that process rank 0 with PID 28921 on node sateesh-OptiPlex-9020 exited on signal 8 (Floating point exception)....??
Can anyone help me?

@umarsaid
Copy link

mpirun noticed that process rank 0 with PID 28921 on node sateesh-OptiPlex-9020 exited on signal 8 (Floating point exception)....??
Can anyone help me?

Me too. It has been 3 years and still no answer.
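In case anyone still hits this: a likely cause of signal 8 (SIGFPE) is the integer modulo/division in the count calculation running with a zero divisor, which matches the older revision discussed above where rem was computed before MPI_Comm_size had filled in size. The safe ordering, as in the current code:

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* size is only valid from here on */

    int rem = (SIZE*SIZE)%size;             /* divide/modulo only after size is set */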

@jeissonh
Copy link

The uninitialized variable size is used at line 14 to calculate the remainder, but size only gets the number of processes later, at line 28. It can be fixed by moving lines 26-28 to the beginning of main(), between lines 10 and 11.

@aab641
Copy link

aab641 commented Apr 1, 2019

For those who want to know what it prints out without compiling:

1: e    f       g       h
sendcounts[0] = 4       displs[0] = 0
sendcounts[1] = 4       displs[1] = 4
sendcounts[2] = 4       displs[2] = 8
sendcounts[3] = 4       displs[3] = 12
0: a    b       c       d
3: m    n       o       p
2: i    j       k       l

@salmanfarhat1
Copy link

Great, thanks

@PedritoQwark
Copy link

PedritoQwark commented May 5, 2019

THE CODE BELOW IS THE CORRECT VERSION:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 4

int main(int argc, char *argv[])
{
    int rank, numProcs;     // to store this process' rank and the number of processes

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);

    int *sendcounts;        // array describing how many elements to send to each process
    int *displs;            // array describing the displacements where each segment begins

    int rem = (N*N)%numProcs;   // elements remaining after division among processes
    int sum = 0;                // sum of counts, used to calculate displacements
    char rec_buf[100];          // buffer where the received data should be stored

    // the data to be distributed
    char data[N][N] = {
        {'a', 'b', 'c', 'd'},
        {'e', 'f', 'g', 'h'},
        {'i', 'j', 'k', 'l'},
        {'m', 'n', 'o', 'p'}
    };

    sendcounts = malloc(sizeof(int)*numProcs);
    displs = malloc(sizeof(int)*numProcs);

    // calculate send counts and displacements
    for (int i = 0; i < numProcs; i++) {
        sendcounts[i] = (N*N)/numProcs;

        if (rem > 0) {
            sendcounts[i]++;
            rem--;
        }

        displs[i] = sum;
        sum += sendcounts[i];
    }

    // print the calculated send counts and displacements for each process
    if (0 == rank) {
        for (int i = 0; i < numProcs; i++) {
            printf("sendcounts[%d] = %d\tdispls[%d] = %d\n", i, sendcounts[i], i, displs[i]);
        }
    }

    // divide the data among processes as described by sendcounts and displs
    MPI_Scatterv(&data, sendcounts, displs, MPI_CHAR, &rec_buf, 100, MPI_CHAR, 0, MPI_COMM_WORLD);

    // print what each process received
    printf("%d: ", rank);
    for (int i = 0; i < sendcounts[rank]; i++) {
        printf("%c\t", rec_buf[i]);
    }
    printf("\n");

    MPI_Finalize();

    free(sendcounts);
    free(displs);

    return 0;
}

@Filtatos
Copy link

Thanks a lot, very helpful ❤️

@KonanAl
Copy link

KonanAl commented Jun 22, 2023

Thank you for the code, very helpful.
