Sending and receiving arrays in MPI C

Time: 2021-12-24 07:39:00

Here's how my code should work. Each slave node performs some computations and then sends two things to the root: a value minE and a corresponding linear array phi. The root node then receives both. I'm trying to figure out how to store the N-1 (number of slaves) phi arrays at the root node. I tried receiving phi into a 2D array, but it doesn't work; maybe I'm doing it wrong. As written, this code ends up keeping only the phi values from whichever node sent last.


if (world_rank == root) {
    for(i=1; i<world_size; i++) {
        MPI_Irecv(&local_minE[i], 1, MPI_FLOAT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &rcv_request[i]);
        MPI_Irecv(phi, len, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &rcv_request[i]); 
        MPI_Waitall(world_size-1, &rcv_request[1], &status[1]);
    }   
    MPI_Waitall(world_size-1, &rcv_request[1], &status[1]);  
}
else {
   //after computations, node will send a value, minE with a corresponding array phi
   MPI_Isend(&minE, 1, MPI_FLOAT, 0, 1, MPI_COMM_WORLD,&send_request[0]);
   MPI_Isend(phi, len, MPI_INT, root, 1, MPI_COMM_WORLD, &send_request[0]);
   MPI_Wait(&send_request[0], &status[0]);
}

1 Solution

#1

First of all, you should consider using MPI_Gather() or even MPI_Igather() if the non-blocking aspect is important for you.


Now, regarding your code snippet, there is a fundamental problem: you try to use phi on the receiving side while it hasn't been received yet. You need to alter your code so that you first do the waiting part, and only then do the copying / transposition part.


This would look something like this:


// Receive each slave's phi into its own row of a contiguous scratch
// buffer, wait for all receives to complete, and only then transpose.
int *trphi = malloc(len*world_size*sizeof(int));
for(i=1; i<world_size; i++) {
    MPI_Irecv(&trphi[i*len], len, MPI_INT, i, 1, MPI_COMM_WORLD, &rcv_request[i]);
}
MPI_Waitall(world_size-1, &rcv_request[1], &status[1]);
for(i=1; i<world_size; i++) {
    for(j=0; j<len; j++) {
        rphi[j][i] = trphi[i*len+j];
    }
}
// The root's own phi goes into column 0.
for(j=0; j<len; j++) {
    rphi[j][0] = phi[j];
}
free(trphi);

But again, look at MPI_Gather() (and possibly also a suitable MPI_Datatype) to do this much more cleanly.

