@IuryAlves
Last active October 16, 2015 13:30
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

/*
To compile, run: mpicc mpi_largest_sum.c -o mpi_largest_sum
To run, execute: mpirun -n 4 mpi_largest_sum
The -n parameter sets the number of processes; you can increase or decrease it. = )
*/

int main(int argc, char **argv) {
    int vector_size = 10;
    int sum = 0, global_sum;
    int i, rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Full vector (filled only on rank 0) and each process's local chunk. */
    int *vector_total = (int*) calloc(vector_size * size, sizeof(int));
    int *vector = (int*) calloc(vector_size, sizeof(int));

    srand((unsigned) time(NULL));

    if (rank == 0) {
        for (i = 0; i < vector_size * size; i++) {
            vector_total[i] = rand() % 100;
        }
    }

    /* Distribute vector_size elements to each process. */
    MPI_Scatter(vector_total, vector_size, MPI_INT, vector, vector_size, MPI_INT, 0, MPI_COMM_WORLD);

    for (i = 0; i < vector_size; i++) {
        sum += vector[i];
    }
    printf("Sum of the vector chunk in process [%d] = %d\n", rank, sum);

    /* Keep only the largest partial sum on rank 0. */
    MPI_Reduce(&sum, &global_sum, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Largest partial sum: %d\n", global_sum);
    }

    free(vector_total);
    free(vector);
    MPI_Finalize();
    return 0;
}
@ecarrara

On line 28, maybe an MPI_Reduce(&sum_local, &vetor, vetor_size, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD); would work better. What do you think?
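
For reference, a minimal sketch of one possible reading of this suggestion: reduce the per-process chunks element-wise with MPI_SUM so rank 0 ends up with the combined vector, then add its entries to get the total of every scattered element. The buffer name reduced is illustrative and not from the gist; the other names follow the code above.

int *reduced = (int*) calloc(vector_size, sizeof(int)); /* illustrative buffer, not in the gist */
/* Element-wise sum: on rank 0, reduced[i] becomes the sum of vector[i] over all processes. */
MPI_Reduce(vector, reduced, vector_size, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
if (rank == 0) {
    int total = 0;
    for (i = 0; i < vector_size; i++) {
        total += reduced[i]; /* adds up to the sum of every element that was scattered */
    }
    printf("Total sum of all elements: %d\n", total);
}
free(reduced);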

@IuryAlves
Author

I'll test it.
