Comparing: Collective communications
Collective communication in MPI
Collective communication in MPI involves every process in the specified
communicator.
To perform collective communication on a subset of processes, a new
communicator must first be created.
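As an illustrative sketch (not from the original notes), the C binding's
MPI_Comm_split can be used to derive such a sub-communicator; the colour
and key values below are arbitrary assumptions.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Comm subcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Processes supplying the same "colour" (here rank % 2) are
           placed in the same new communicator; collective calls on
           subcomm then involve only that subset of processes.       */
        MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &subcomm);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }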
Distribution of data amongst processes in MPI
A number of routines exist to distribute data amongst processes; these
include (a sketch of their use follows the list):
MPI_BCAST
MPI_SCATTER
MPI_GATHER
MPI_ALLGATHER
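The following C-binding sketch, with an assumed array size and root rank,
illustrates how MPI_BCAST, MPI_SCATTER and MPI_GATHER are typically
called; it is not taken from the original notes.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, nproc, chunk, i;
        int full[64];      /* assumed total data size, held on the root */
        int part[64];      /* per-process chunk (sized generously)      */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        if (rank == 0)
            for (i = 0; i < 64; i++)
                full[i] = i;

        chunk = 64 / nproc;   /* assumes nproc divides 64 */

        /* Root (rank 0) sends the same buffer to every process. */
        MPI_Bcast(full, 64, MPI_INT, 0, MPI_COMM_WORLD);

        /* Root splits 'full' into equal chunks, one per process. */
        MPI_Scatter(full, chunk, MPI_INT, part, chunk, MPI_INT,
                    0, MPI_COMM_WORLD);

        /* Inverse operation: the chunks are collected back on the root. */
        MPI_Gather(part, chunk, MPI_INT, full, chunk, MPI_INT,
                   0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }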
MPI reduction operators
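MPI also provides global reduction routines such as MPI_REDUCE and
MPI_ALLREDUCE, which combine a value from every process using a
predefined operator (MPI_SUM, MPI_MAX, MPI_MIN, ...). A C-binding
sketch, assuming a sum reduction onto rank 0:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, local, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        local = rank + 1;   /* arbitrary per-process contribution */

        /* Combine every process's 'local' with MPI_SUM; the result
           is delivered only to the root (rank 0).                  */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of contributions = %d\n", total);

        MPI_Finalize();
        return 0;
    }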
Collective communication in PVM
Dynamic process group functions are a new feature available in PVM3
- Any PVM task can join or leave a group without having to inform other tasks.
Joining and leaving groups in PVM
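The notes do not give the calls here; as an illustrative sketch using the
PVM C binding, a task joins a named group with pvm_joingroup and leaves
it with pvm_lvgroup (the group name "worker" below is an arbitrary
assumption).

    #include <pvm3.h>
    #include <stdio.h>

    int main(void)
    {
        /* pvm_joingroup returns this task's instance number within
           the group; no other group member needs to be notified.   */
        int inum = pvm_joingroup("worker");
        printf("joined group 'worker' as instance %d\n", inum);

        /* ... work as a member of the group ... */

        pvm_lvgroup("worker");   /* leave the group again */
        pvm_exit();
        return 0;
    }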
Broadcasting messages in PVM
Tasks can broadcast messages to a group whether or not they belong to
that group.
pvmfbcast(group, msgtag, info)
This is an asynchronous call; computation on the sending process resumes as soon as the message is sent.
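For comparison, a C-binding sketch of the same broadcast using pvm_bcast;
the group name, message tag and packed data below are assumptions made
for illustration only.

    #include <pvm3.h>

    int main(void)
    {
        int data[4] = {1, 2, 3, 4};
        int msgtag = 10;             /* arbitrary message tag */

        pvm_joingroup("worker");

        /* Pack the data into the default send buffer and broadcast it
           to the members of the group "worker"; the call returns as
           soon as the message has been handed off.                    */
        pvm_initsend(PvmDataDefault);
        pvm_pkint(data, 4, 1);
        pvm_bcast("worker", msgtag);

        pvm_lvgroup("worker");
        pvm_exit();
        return 0;
    }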
Submitted by Mark Johnston, last updated on 10 December 1994.