DFT-EFE
 
dftefe::utils::mpi::MPIRequestersNBX Class Reference

#include <MPIRequestersNBX.h>


Public Member Functions

 MPIRequestersNBX (const std::vector< size_type > &targetIDs, const MPIComm &comm)
 
 MPIRequestersNBX ()=default
 
std::vector< size_type > getRequestingRankIds () override
 
- Public Member Functions inherited from dftefe::utils::mpi::MPIRequestersBase
virtual ~MPIRequestersBase ()=default
 
virtual std::vector< size_type > getRequestingRankIds ()=0
 

Private Member Functions

bool haveAllLocalSendReceived ()
 
void signalLocalSendCompletion ()
 
bool haveAllIncomingMsgsReceived ()
 
void probeAndReceiveIncomingMsg ()
 
void startLocalSend ()
 
void finish ()
 

Private Attributes

std::vector< size_type > d_targetIDs
 
std::vector< int > d_sendBuffers
 
std::vector< MPIRequest > d_sendRequests
 
std::vector< std::unique_ptr< int > > d_recvBuffers
 
std::vector< std::unique_ptr< MPIRequest > > d_recvRequests
 
MPIRequest d_barrierRequest
 
const MPIComm & d_comm
 
std::set< size_type > d_requestingProcesses
 
int d_numProcessors
 
int d_myRank
 

Constructor & Destructor Documentation

◆ MPIRequestersNBX() [1/2]

dftefe::utils::mpi::MPIRequestersNBX::MPIRequestersNBX (const std::vector< size_type > &targetIDs, const MPIComm &comm)

◆ MPIRequestersNBX() [2/2]

dftefe::utils::mpi::MPIRequestersNBX::MPIRequestersNBX ( )
default

Member Function Documentation

◆ finish()

void dftefe::utils::mpi::MPIRequestersNBX::finish ( )
private

After all processors have received all the incoming messages, the MPI data structures can be freed and the received messages can be processed.

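As an illustration only (not the class's actual code), this cleanup typically amounts to waiting on the outstanding nonblocking send requests before the buffers are released; a raw-MPI sketch, with sendRequests standing in for d_sendRequests:

    // Complete all outstanding nonblocking sends so the send buffers
    // can be safely released afterwards.
    MPI_Waitall(static_cast<int>(sendRequests.size()),
                sendRequests.data(),
                MPI_STATUSES_IGNORE);
    // Send/receive buffers can now be freed and the received messages processed.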

◆ getRequestingRankIds()

std::vector< size_type > dftefe::utils::mpi::MPIRequestersNBX::getRequestingRankIds ( )
override virtual

Implements dftefe::utils::mpi::MPIRequestersBase.

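A minimal usage sketch, assuming an initialized MPI environment and a valid MPIComm object (the target IDs below are purely illustrative):

    #include <MPIRequestersNBX.h>

    using namespace dftefe::utils::mpi;

    // Each rank lists the ranks it wants to send a request to.
    std::vector<size_type> targetIDs = {0, 2, 5};           // illustrative values
    MPIRequestersNBX       requesters(targetIDs, mpiComm);  // mpiComm: an MPIComm

    // Ranks that listed the current rank among their targets, determined
    // through the NBX (nonblocking consensus) exchange.
    std::vector<size_type> requestingRanks = requesters.getRequestingRankIds();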

◆ haveAllIncomingMsgsReceived()

bool dftefe::utils::mpi::MPIRequestersNBX::haveAllIncomingMsgsReceived ( )
private

Check whether all of the incoming messages from other processors to the current processor have been received.

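In the NBX scheme this check commonly reduces to testing whether the nonblocking barrier has completed, since the barrier only finishes once every rank has signalled that its sends were received; a raw-MPI sketch (barrierRequest stands in for d_barrierRequest):

    int barrierCompleted = 0;
    MPI_Test(&barrierRequest, &barrierCompleted, MPI_STATUS_IGNORE);
    return barrierCompleted != 0;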

◆ haveAllLocalSendReceived()

bool dftefe::utils::mpi::MPIRequestersNBX::haveAllLocalSendReceived ( )
private

Check whether all of the messages sent from the current processor to other processors have been received.

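A raw-MPI sketch of this test, assuming the sends were posted as synchronous nonblocking sends so that send completion implies the matching receive has been posted (sendRequests stands in for d_sendRequests):

    int allSendsReceived = 0;
    MPI_Testall(static_cast<int>(sendRequests.size()),
                sendRequests.data(),
                &allSendsReceived,
                MPI_STATUSES_IGNORE);
    return allSendsReceived != 0;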

◆ probeAndReceiveIncomingMsg()

void dftefe::utils::mpi::MPIRequestersNBX::probeAndReceiveIncomingMsg ( )
private

Probe for an incoming message and, if there is one, receive it.

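A raw-MPI sketch of the probe-and-receive step; the tag, communicator, and set of requesting processes are illustrative stand-ins for the corresponding members, and a single-int payload is assumed:

    MPI_Status status;
    int        msgAvailable = 0;
    MPI_Iprobe(MPI_ANY_SOURCE, nbxTag, comm, &msgAvailable, &status);
    if (msgAvailable)
      {
        int payload = 0; // one int per request, matching d_sendBuffers/d_recvBuffers
        MPI_Recv(&payload, 1, MPI_INT, status.MPI_SOURCE, nbxTag, comm,
                 MPI_STATUS_IGNORE);
        requestingProcesses.insert(status.MPI_SOURCE); // remember who asked
      }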

◆ signalLocalSendCompletion()

void dftefe::utils::mpi::MPIRequestersNBX::signalLocalSendCompletion ( )
private

Signal to all other processors that all the messages sent from this processor to other processors have been received. This is done using a nonblocking barrier (i.e., MPI_Ibarrier).

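A raw-MPI sketch of this signal (comm and barrierRequest stand in for d_comm and d_barrierRequest):

    // Entering the nonblocking barrier announces that every message sent by
    // this rank has been received; the barrier completes only after all
    // ranks have entered it.
    MPI_Ibarrier(comm, &barrierRequest);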

◆ startLocalSend()

void dftefe::utils::mpi::MPIRequestersNBX::startLocalSend ( )
private

Start sending messages to all the target processors.

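A raw-MPI sketch of posting these sends, assuming synchronous nonblocking sends (MPI_Issend) so that send completion implies the target has received the request (names stand in for the corresponding d_* members):

    sendBuffers.resize(targetIDs.size(), 0);
    sendRequests.resize(targetIDs.size());
    for (std::size_t i = 0; i < targetIDs.size(); ++i)
      MPI_Issend(&sendBuffers[i], 1, MPI_INT,
                 static_cast<int>(targetIDs[i]), nbxTag, comm,
                 &sendRequests[i]);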

Member Data Documentation

◆ d_barrierRequest

MPIRequest dftefe::utils::mpi::MPIRequestersNBX::d_barrierRequest
private

◆ d_comm

const MPIComm& dftefe::utils::mpi::MPIRequestersNBX::d_comm
private

◆ d_myRank

int dftefe::utils::mpi::MPIRequestersNBX::d_myRank
private

◆ d_numProcessors

int dftefe::utils::mpi::MPIRequestersNBX::d_numProcessors
private

◆ d_recvBuffers

std::vector<std::unique_ptr<int> > dftefe::utils::mpi::MPIRequestersNBX::d_recvBuffers
private

Buffers for receiving requests. We use a vector of unique pointers because that guarantees that the buffers themselves are never moved around in memory, even if the vector is resized and consequently its elements (the pointers) are moved around.
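
A short illustration of the point: the address handed to a nonblocking receive must remain valid until the request completes, and the extra indirection through std::unique_ptr keeps that address stable even when the outer vector reallocates:

    std::vector<std::unique_ptr<int>> recvBuffers;
    recvBuffers.push_back(std::make_unique<int>(0));
    int *stable = recvBuffers.back().get();           // heap address, owned by the unique_ptr
    recvBuffers.push_back(std::make_unique<int>(0));  // may reallocate the vector...
    // ...but 'stable' still points to the same int and is safe to pass to MPI_Irecv.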

◆ d_recvRequests

std::vector<std::unique_ptr<MPIRequest> > dftefe::utils::mpi::MPIRequestersNBX::d_recvRequests
private

Requests for receiving requests.

◆ d_requestingProcesses

std::set<size_type> dftefe::utils::mpi::MPIRequestersNBX::d_requestingProcesses
private

List of processes who have made a request to this process.

◆ d_sendBuffers

std::vector<int> dftefe::utils::mpi::MPIRequestersNBX::d_sendBuffers
private

Buffers for sending requests.

◆ d_sendRequests

std::vector<MPIRequest> dftefe::utils::mpi::MPIRequestersNBX::d_sendRequests
private

Requests for sending requests.

◆ d_targetIDs

std::vector<size_type> dftefe::utils::mpi::MPIRequestersNBX::d_targetIDs
private

List of processes this processor wants to send requests to.

