DFT-EFE
 
dftefe::utils::mpi::MPIPatternP2P< memorySpace > Class Template Reference

A class template to store the communication pattern (i.e., which entries/nodes to receive from which processor and which entries/nodes to send to which processor). More...

#include <MPIPatternP2P.h>


Public Types

using SizeTypeVector = utils::MemoryStorage< size_type, memorySpace >
 
using GlobalSizeTypeVector = utils::MemoryStorage< global_size_type, memorySpace >
 

Public Member Functions

virtual ~MPIPatternP2P ()=default
 
 MPIPatternP2P (const std::vector< std::pair< global_size_type, global_size_type > > &locallyOwnedRanges, const std::vector< dftefe::global_size_type > &ghostIndices, const MPIComm &mpiComm)
 Constructor. This constructor is the typical way of creation of an MPI pattern for multiple global-ranges. More...
 
 MPIPatternP2P (const std::pair< global_size_type, global_size_type > &locallyOwnedRange, const std::vector< dftefe::global_size_type > &ghostIndices, const MPIComm &mpiComm)
 Constructor. This constructor is the typical way of creation of an MPI pattern for a single global-range. More...
 
 MPIPatternP2P (const std::vector< size_type > &sizes)
 Constructor. This constructor is to create an MPI Pattern for a serial case with multiple global-ranges. This is provided so that one can seamlessly use this class even for a serial case. In this case, all the indices are owned by the current processor. More...
 
 MPIPatternP2P (const size_type &size)
 Constructor. This constructor is to create an MPI Pattern for a serial case with a single global-range . This is provided so that one can seamlessly use this class even for a serial case. In this case, all the indices are owned by the current processor. More...
 
void reinit (const std::vector< std::pair< global_size_type, global_size_type > > &locallyOwnedRanges, const std::vector< dftefe::global_size_type > &ghostIndices, const MPIComm &mpiComm)
 
void reinit (const std::vector< size_type > &sizes)
 
size_type nGlobalRanges () const
 
std::vector< std::pair< global_size_type, global_size_type > > getGlobalRanges () const
 
std::vector< std::pair< global_size_type, global_size_type > > getLocallyOwnedRanges () const
 
std::pair< global_size_type, global_size_type > getLocallyOwnedRange (size_type rangeId) const
 
size_type localOwnedSize (size_type rangeId) const
 
size_type localOwnedSize () const
 
size_type localGhostSize () const
 
std::pair< bool, size_type > inLocallyOwnedRanges (const global_size_type globalId) const
 For a given globalId, returns whether it lies in any of the locally-owned-ranges and if true the index of the global-range it belongs to. More...
 
std::pair< bool, size_type > isGhostEntry (const global_size_type globalId) const
 For a given globalId, returns whether it belongs to the current processor's ghost-set and if true the index of the global-range it belongs to. More...
 
size_type globalToLocal (const global_size_type globalId) const
 
global_size_type localToGlobal (const size_type localId) const
 
std::pair< size_type, size_type > globalToLocalAndRangeId (const global_size_type globalId) const
 For a given global index, returns a pair containing the local index in the processor and the index of the global-range it belongs to. More...
 
std::pair< global_size_type, size_type > localToGlobalAndRangeId (const size_type localId) const
 For a given local index, returns a pair containing its global index and the index of the global-range it belongs to. More...
 
const std::vector< global_size_type > & getGhostIndices () const
 
const std::vector< size_type > & getGhostProcIds () const
 
const std::vector< size_type > & getNumGhostIndicesInGhostProcs () const
 
size_type getNumGhostIndicesInGhostProc (const size_type procId) const
 
const SizeTypeVector & getGhostLocalIndicesForGhostProcs () const
 
SizeTypeVector getGhostLocalIndicesForGhostProc (const size_type procId) const
 
const std::vector< size_type > & getGhostLocalIndicesRanges () const
 
const std::vector< size_type > & getTargetProcIds () const
 
const std::vector< size_type > & getNumOwnedIndicesForTargetProcs () const
 
size_type getNumOwnedIndicesForTargetProc (const size_type procId) const
 
size_type getTotalOwnedIndicesForTargetProcs () const
 
const SizeTypeVector & getOwnedLocalIndicesForTargetProcs () const
 
SizeTypeVector getOwnedLocalIndicesForTargetProc (const size_type procId) const
 
size_type nmpiProcesses () const
 
size_type thisProcessId () const
 
global_size_type nGlobalIndices () const
 
const MPIComm & mpiCommunicator () const
 
bool isCompatible (const MPIPatternP2P< memorySpace > &rhs) const
 

Private Attributes

std::vector< std::pair< global_size_type, global_size_type > > d_locallyOwnedRanges
 
size_type d_nGlobalRanges
 
std::vector< std::pair< global_size_type, global_size_type > > d_locallyOwnedRangesSorted
 
std::vector< size_type > d_locallyOwnedRangesIdPermutation
 
std::vector< std::vector< std::pair< global_size_type, global_size_type > > > d_allOwnedRanges
 
std::vector< std::pair< global_size_type, global_size_type > > d_globalRanges
 
size_type d_numLocallyOwnedIndices
 
std::vector< std::pair< size_type, size_type > > d_locallyOwnedRangesCumulativePairs
 
std::vector< global_size_type > d_ghostIndices
 
size_type d_numGhostIndices
 
OptimizedIndexSet< global_size_type > d_ghostIndicesOptimizedIndexSet
 
std::vector< size_type > d_ghostIndicesRangeId
 
size_type d_numGhostProcs
 
std::vector< size_type > d_ghostProcIds
 
std::vector< size_type > d_numGhostIndicesInGhostProcs
 
SizeTypeVector d_flattenedLocalGhostIndices
 
std::vector< size_type > d_localGhostIndicesRanges
 
std::vector< std::vector< size_type > > d_ghostProcLocallyOwnedRangesCumulative
 
size_type d_numTargetProcs
 
std::vector< size_type > d_targetProcIds
 
std::vector< size_type > d_numOwnedIndicesForTargetProcs
 
SizeTypeVector d_flattenedLocalTargetIndices
 
int d_nprocs
 Number of processors in the MPI Communicator. More...
 
int d_myRank
 Rank of the current processor. More...
 
global_size_type d_nGlobalIndices
 
MPIComm d_mpiComm
 MPI Communicator object. More...
 

Detailed Description

template<dftefe::utils::MemorySpace memorySpace>
class dftefe::utils::mpi::MPIPatternP2P< memorySpace >

A class template to store the communication pattern (i.e., which entries/nodes to receive from which processor and which entries/nodes to send to which processor).

  • Problem Setup
    Let there be \(K\) non-overlapping intervals of non-negative integers given as \([N_l^{start},N_l^{end})\), \(l=0,1,2,\ldots,K-1\). We term these intervals as global-ranges and the index \(l\) as rangeId . Here, \([a,b)\) denotes a half-open interval where \(a\) is included, but \(b\) is not. Instead of partitioning each of the global intervals separately, we are interested in partitioning all of them simultaneously across the same set of \(p\) processors. Had there been just one global interval, say \([N_0^{start},N_0^{end})\), the partitioning would result in each processor having a locally-owned-range defined as a contiguous sub-range \([a,b) \subseteq [N_0^{start},N_0^{end})\), such that it has no overlap with the locally-owned-range in any other processor. Additionally, each processor will have a set of indices (not necessarily contiguous), called the ghost-set, that are not part of its locally-owned-range (i.e., they are owned by some other processor). If we extend the above logic of partitioning to the case where there are \(K\) different global-ranges, then each processor will have \(K\) different locally-owned-ranges. As for the ghost-set, although \(K\) global-ranges will lead to \(K\) different ghost-sets, for simplicity we concatenate them into one set of indices. We can do this concatenation because the individual ghost-sets are just sets of indices. Once again, for simplicity, we term the concatenated ghost-set as just the ghost-set. For the \(i^{th}\) processor, we denote the \(K\) locally-owned-ranges as \(R_l^i=[a_l^i, b_l^i)\), where \(l=0,1,\ldots,K-1\) and \(i=0,1,2,\ldots,p-1\). Further, for the \(i^{th}\) processor, the ghost-set is given by a strictly-ordered set of non-negative integers \(U^i=\{u_1^i,u_2^i,\ldots,u_{G_i}^i\}\). By strictly ordered we mean \(u_1^i < u_2^i < u_3^i < \ldots < u_{G_i}^i\). Thus, given the \(R_l^i\)'s and \(U^i\)'s, this class figures out which processor needs to communicate with which other processors.

    We list the definitions that will be handy in understanding the implementation and usage of this class. Some of these definitions have already been mentioned in the description above.

    • global-ranges : \(K\) non-overlapping intervals of non-negative integers given as \([N_l^{start},N_l^{end})\), where \(l=0,1,2,\ldots,K-1\). Each of the intervals is partitioned across the same set of \(p\) processors.
    • rangeId : An index \(l=0,1,\ldots,K-1\) which indexes the global-ranges.
    • locally-owned-ranges : For a given processor (say with rank \(i\)), locally-owned-ranges define \(K\) intervals of non-negative integers given as \(R_l^i=[a_l^i,b_l^i)\), \(l=0,1,2,\ldots,K-1\), that are owned by the processor.
    • ghost-set : For a given processor (say with rank \(i\)), the ghost-set is an ordered (strictly increasing) set of non-negative integers given as \(U^i=\{u_1^i,u_2^i,\ldots,u_{G_i}^i\}\).
    • numLocallyOwnedIndices : For a given processor (say with rank \(i\)), the number of indices that it owns. That is, numLocallyOwnedIndices = \(\sum_{l=0}^{K-1} |R_l^i| = \sum_{l=0}^{K-1} (b_l^i - a_l^i)\), where \(|\cdot|\) denotes the number of elements in a set (its cardinality).
    • numGhostIndices : For a given processor (say with rank \(i\)), the size of its ghost-set. That is, numGhostIndices = \(|U^i|\).
    • numLocalIndices : For a given processor (say with rank \(i\)), the sum of numLocallyOwnedIndices and numGhostIndices.
    • localId : In a processor (say with rank \(i\)), given an integer (say \(x\)) that belongs either to the locally-owned-ranges or to the ghost-set, we assign it a unique index in \([0,numLocalIndices)\) called the localId. We follow the simple approach of using the position that \(x\) has if we concatenate the locally-owned-ranges and the ghost-set. That is, if \(V=R_0^i \oplus R_1^i \oplus \ldots \oplus R_{K-1}^i \oplus U^i\), where \(\oplus\) denotes concatenation of two sets, then the localId of \(x\) is its position (starting from 0 for the first entry) in \(V\).
  • Assumptions

    1. It assumes a sparse communication pattern. That is, a given processor communicates with only a few other processors. This class should be avoided if the communication pattern is dense (e.g., all-to-all communication).
    2. The \(R_l^i\) must satisfy the following
      1. \(R_l^i = [a_l^i, b_l^i) \subseteq [N_l^{start},N_l^{end})\). That is, the \(l^{th}\) locally-owned-range in a processor must be a subset of the \(l^{th}\) global-range.
      2. \(\bigcup_{i=0}^{p-1}R_l^i=[N_l^{start}, N_l^{end})\). That is, for a given rangeId \(l\), the union of the \(l^{th}\) locally-owned-range from each processor should equal the \(l^{th}\) global-range.
      3. \(R_l^i \cap R_m^j = \emptyset\), if either \(l \neq m\) or \(i \neq j\). That is, no index can be owned by two or more processors. Further, within a processor, no index can belong to two or more locally-owned-ranges.
      4. \(U^i \cap R_l^i=\emptyset\). That is, no index in the ghost-set of a processor may belong to any of its locally-owned-ranges. In other words, an index in the ghost-set of a processor must be owned by some other processor and not the processor itself.

    A typical example which illustrates the use of \(K\) global-ranges is the following. Let there be two vectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\) of sizes \(N_1\) and \(N_2\), respectively, that are partitioned across the same set of processors. Let \(r_1^i=[n_1^i, m_1^i)\) and \(r_2^i=[n_2^i, m_2^i)\) be the locally-owned-ranges for \(\mathbf{v}_1\) and \(\mathbf{v}_2\) in the \(i^{th}\) processor, respectively. Similarly, let \(X^i=\{x_1^i,x_2^i,\ldots,x_{nx_i}^i\}\) and \(Y^i=\{y_1^i,y_2^i,\ldots,y_{ny_i}^i\}\) be two strictly-ordered sets that define the ghost-sets in the \(i^{th}\) processor for \(\mathbf{v}_1\) and \(\mathbf{v}_2\), respectively. Then, we can construct a composite vector \(\mathbf{w}\) of size \(N=N_1+N_2\) by concatenating \(\mathbf{v}_1\) and \(\mathbf{v}_2\). We now want to partition \(\mathbf{w}\) across the same set of processors in a manner that preserves the partitioning of its \(\mathbf{v}_1\) and \(\mathbf{v}_2\) parts. To do so, we define two global-ranges \([A_1, A_1 + N_1)\) and \([A_1 + N_1 + A_2, A_1 + N_1 + A_2 + N_2)\), where \(A_1\) and \(A_2\) are any non-negative integers, to index the \(\mathbf{v}_1\) and \(\mathbf{v}_2\) parts of \(\mathbf{w}\). In usual cases, both \(A_1\) and \(A_2\) are zero. However, one can use non-zero values for \(A_1\) and \(A_2\), as that will not violate the non-overlapping condition on the global-ranges. Now, if we are to partition \(\mathbf{w}\) such that it preserves the individual partitioning of \(\mathbf{v}_1\) and \(\mathbf{v}_2\) across the same set of processors, then for a given processor id (say \(i\)) we need to provide two owned ranges: \(R_1^i=[A_1 + n_1^i, A_1 + m_1^i)\) and \(R_2^i=[A_1 + N_1 + A_2 + n_2^i, A_1 + N_1 + A_2 + m_2^i)\). Further, the ghost-set for \(\mathbf{w}\) in the \(i^{th}\) processor (say \(U^i\)) becomes the concatenation of the ghost-sets of \(\mathbf{v}_1\) and \(\mathbf{v}_2\). That is, \(U^i=\{A_1 + x_1^i,A_1 + x_2^i,\ldots, A_1 + x_{nx_i}^i\} \cup \{A_1 + N_1 + A_2 + y_1^i, A_1 + N_1 + A_2 + y_2^i, \ldots, A_1 + N_1 + A_2 + y_{ny_i}^i\}\). The above process can be extended to a composition of \(K\) vectors instead of two.

    A typical scenario where such a composite vector arises is while dealing with direct sums of two or more vector spaces. For instance, let there be a function expressed as a linear combination of two mutually orthogonal basis sets, where each basis set is partitioned across the same set of processors. Then, instead of partitioning two vectors, each containing the linear coefficients of one of the basis sets, it is logistically simpler to construct a composite vector that concatenates the two vectors and partition it in a way that preserves the original partitioning of the individual vectors.
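The composite-vector construction above can be sketched with a small, self-contained helper. The names (`composeSketch`, `CompositePattern`) are hypothetical, not part of the library, and the usual case \(A_1 = A_2 = 0\) is assumed:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using global_size_type = std::uint64_t;

// Result of composing two per-vector partitions into one pattern for w.
struct CompositePattern
{
  std::vector<std::pair<global_size_type, global_size_type>> ownedRanges;
  std::vector<global_size_type>                              ghostIndices;
};

// Given the locally-owned-ranges and ghost-sets of v_1 and v_2 on this
// processor (v_1 has global size N1; offsets A1 = A2 = 0 assumed), build
// the two locally-owned-ranges and the concatenated ghost-set for w.
CompositePattern
composeSketch(std::pair<global_size_type, global_size_type> r1,
              std::pair<global_size_type, global_size_type> r2,
              const std::vector<global_size_type>          &ghost1,
              const std::vector<global_size_type>          &ghost2,
              global_size_type                              N1)
{
  CompositePattern w;
  // R_1^i = [n_1^i, m_1^i): the v_1 part keeps its indices.
  w.ownedRanges.push_back(r1);
  // R_2^i = [N1 + n_2^i, N1 + m_2^i): the v_2 part is shifted by N1.
  w.ownedRanges.push_back({N1 + r2.first, N1 + r2.second});
  // Ghost-set: X^i followed by N1 + Y^i. It stays strictly increasing
  // because every v_1 index is below N1 and every shifted v_2 index is not.
  w.ghostIndices = ghost1;
  for (const auto y : ghost2)
    w.ghostIndices.push_back(N1 + y);
  return w;
}
```

For instance, with \(r_1^i=[0,3)\), \(r_2^i=[2,5)\) and \(N_1=10\), the second owned range of \(\mathbf{w}\) becomes \([12,15)\).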

Template Parameters
memorySpace — Defines the MemorySpace (i.e., HOST or DEVICE) in which the various data members of this object must reside.

Member Typedef Documentation

◆ GlobalSizeTypeVector

template<dftefe::utils::MemorySpace memorySpace>
using dftefe::utils::mpi::MPIPatternP2P< memorySpace >::GlobalSizeTypeVector = utils::MemoryStorage<global_size_type, memorySpace>

◆ SizeTypeVector

template<dftefe::utils::MemorySpace memorySpace>
using dftefe::utils::mpi::MPIPatternP2P< memorySpace >::SizeTypeVector = utils::MemoryStorage<size_type, memorySpace>


Constructor & Destructor Documentation

◆ ~MPIPatternP2P()

template<dftefe::utils::MemorySpace memorySpace>
virtual dftefe::utils::mpi::MPIPatternP2P< memorySpace >::~MPIPatternP2P ( ) = default

◆ MPIPatternP2P() [1/4]

template<dftefe::utils::MemorySpace memorySpace>
dftefe::utils::mpi::MPIPatternP2P< memorySpace >::MPIPatternP2P ( const std::vector< std::pair< global_size_type, global_size_type > > &  locallyOwnedRanges,
const std::vector< dftefe::global_size_type > &  ghostIndices,
const MPIComm mpiComm 
)

Constructor. This constructor is the typical way of creation of an MPI pattern for multiple global-ranges.

Parameters
[in] locallyOwnedRanges — A vector containing different non-overlapping ranges of non-negative integers that are owned by the current processor. If the current processor id is \(i\), then the \(l^{th}\) entry in locallyOwnedRanges denotes the pair \(a_l^i\) and \(b_l^i\) that defines the range \(R_l^i\) described above (see top of this page).
[in] ghostIndices — An ordered (strictly increasing) set of non-negative indices specifying the ghost-set for the current processor (see above for definition).
[in] mpiComm — The MPI communicator object which defines the set of processors for which the MPI pattern needs to be created.
Exceptions
Throws exception if:
  1. mpiComm is in an invalid state,
  2. some sanity checks with respect to MPI sends and receives fail, or
  3. any of the assumptions listed above fails (e.g., an index is simultaneously owned and ghost for a processor).
Note
  1. The pair \(a_l^i\) and \(b_l^i\) in locallyOwnedRanges must define a half-open interval, where \(a_l^i\) is included, but \(b_l^i\) is not.
  2. The vector ghostIndices must be ordered (i.e., increasing and non-repeating)
  3. Care is taken to create a dummy MPIPatternP2P while not linking to an MPI library. This allows the user code to seamlessly link and delink an MPI library.
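The assumptions listed at the top of the page lend themselves to simple pre-construction checks. As a sketch, assumption 4 (no ghost index may lie in any of the processor's own locally-owned-ranges) can be verified with a hypothetical standalone helper like the following; `ghostDisjointFromOwned` is not a library function:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using global_size_type = std::uint64_t;

// Checks assumption 4: no index in the ghost-set U^i may fall inside any of
// this processor's locally-owned-ranges R_l^i = [a_l, b_l).
bool
ghostDisjointFromOwned(
  const std::vector<std::pair<global_size_type, global_size_type>> &ownedRanges,
  const std::vector<global_size_type>                              &ghostIndices)
{
  for (const auto g : ghostIndices)
    for (const auto &r : ownedRanges)
      if (g >= r.first && g < r.second) // g lies in [a_l, b_l): violation
        return false;
  return true;
}
```

For example, with owned range \([0,5)\), a ghost index 7 passes the check while a ghost index 3 fails it.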

Constructor without MPI for multiple global ranges


◆ MPIPatternP2P() [2/4]

template<dftefe::utils::MemorySpace memorySpace>
dftefe::utils::mpi::MPIPatternP2P< memorySpace >::MPIPatternP2P ( const std::pair< global_size_type, global_size_type > &  locallyOwnedRange,
const std::vector< dftefe::global_size_type > &  ghostIndices,
const MPIComm mpiComm 
)

Constructor. This constructor is the typical way of creation of an MPI pattern for a single global-range.

Parameters
[in] locallyOwnedRange — A pair of non-negative integers defining the range of indices that are owned by the current processor.
[in] ghostIndices — An ordered (strictly increasing) set of non-negative indices specifying the ghost-set for the current processor (see above for definition).
[in] mpiComm — The MPI communicator object which defines the set of processors for which the MPI pattern needs to be created.
Exceptions
Throws exception if:
  1. mpiComm is in an invalid state,
  2. some sanity checks with respect to MPI sends and receives fail, or
  3. any of the assumptions listed above fails (e.g., an index is simultaneously owned and ghost for a processor).
Note
  1. The pair \(a\) and \(b\) in locallyOwnedRange must define a half-open interval, where \(a\) is included, but \(b\) is not.
  2. The vector ghostIndices must be ordered (i.e., increasing and non-repeating)
  3. Care is taken to create a dummy MPIPatternP2P while not linking to an MPI library. This allows the user code to seamlessly link and delink an MPI library.

Constructor without MPI for a single global range


◆ MPIPatternP2P() [3/4]

template<dftefe::utils::MemorySpace memorySpace>
dftefe::utils::mpi::MPIPatternP2P< memorySpace >::MPIPatternP2P ( const std::vector< size_type > &  sizes)

Constructor. This constructor is to create an MPI Pattern for a serial case with multiple global-ranges. This is provided so that one can seamlessly use this class even for a serial case. In this case, all the indices are owned by the current processor.

Parameters
[in] sizes — Vector containing the sizes of each global-range.
Note
  1. The global-ranges will be defined in a cumulative manner based on the input sizes. That is, the \(i^{th}\) global-range is defined by the half-open interval \([C_i, C_i + sizes[i])\), where \(C_i=\sum_{j=0}^{i-1} sizes[j]\) is the cumulative number of indices preceding the \(i^{th}\) global-range.
  2. This is an explicitly serial construction (i.e., it uses MPI_COMM_SELF), which is different from the dummy MPIPatternP2P created while not linking to an MPI library. For example, within a parallel run, one might need to create a serial MPIPatternP2P. A typical case is the creation of a serial vector as a special case of a distributed vector.
Similar to the previous constructor, care is taken to create a dummy MPIPatternP2P while not linking to an MPI library.
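The cumulative construction described in the note can be sketched as follows. This is a hypothetical standalone helper (`cumulativeRanges` is not the class's actual implementation), showing how \([C_i, C_i + sizes[i])\) follows from the input sizes:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using size_type        = std::uint32_t;
using global_size_type = std::uint64_t;

// Builds the global-ranges [C_i, C_i + sizes[i]) where
// C_i = sum_{j<i} sizes[j] is the number of indices preceding range i.
std::vector<std::pair<global_size_type, global_size_type>>
cumulativeRanges(const std::vector<size_type> &sizes)
{
  std::vector<std::pair<global_size_type, global_size_type>> ranges;
  global_size_type cumulative = 0; // C_i, the running prefix sum
  for (const auto s : sizes)
    {
      ranges.push_back({cumulative, cumulative + s});
      cumulative += s;
    }
  return ranges;
}
```

For example, sizes {3, 2, 4} produce the global-ranges [0,3), [3,5), [5,9).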

Constructor for a serial case with multiple global ranges


◆ MPIPatternP2P() [4/4]

template<dftefe::utils::MemorySpace memorySpace>
dftefe::utils::mpi::MPIPatternP2P< memorySpace >::MPIPatternP2P ( const size_type size)

Constructor. This constructor is to create an MPI Pattern for a serial case with a single global-range . This is provided so that one can seamlessly use this class even for a serial case. In this case, all the indices are owned by the current processor.

Parameters
[in] size — Total number of indices.
Note
  1. The global-range will be defined as the half-open interval \([0, size)\).
  2. This is an explicitly serial construction (i.e., it uses MPI_COMM_SELF), which is different from the dummy MPIPatternP2P created while not linking to an MPI library. For example, within a parallel run, one might need to create a serial MPIPatternP2P. A typical case is the creation of a serial vector as a special case of a distributed vector.
Similar to the previous constructor, care is taken to create a dummy MPIPatternP2P while not linking to an MPI library.

Constructor for a serial case with single global range


Member Function Documentation

◆ getGhostIndices()

template<dftefe::utils::MemorySpace memorySpace>
const std::vector< global_size_type > & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getGhostIndices

◆ getGhostLocalIndicesForGhostProc()

template<dftefe::utils::MemorySpace memorySpace>
MPIPatternP2P< memorySpace >::SizeTypeVector dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getGhostLocalIndicesForGhostProc ( const size_type  procId) const

◆ getGhostLocalIndicesForGhostProcs()

template<dftefe::utils::MemorySpace memorySpace>
const MPIPatternP2P< memorySpace >::SizeTypeVector & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getGhostLocalIndicesForGhostProcs

◆ getGhostLocalIndicesRanges()

template<dftefe::utils::MemorySpace memorySpace>
const std::vector< size_type > & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getGhostLocalIndicesRanges

◆ getGhostProcIds()

template<dftefe::utils::MemorySpace memorySpace>
const std::vector< size_type > & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getGhostProcIds

◆ getGlobalRanges()

template<dftefe::utils::MemorySpace memorySpace>
std::vector< std::pair< global_size_type, global_size_type > > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getGlobalRanges

◆ getLocallyOwnedRange()

template<dftefe::utils::MemorySpace memorySpace>
std::pair< global_size_type, global_size_type > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getLocallyOwnedRange ( size_type  rangeId) const

◆ getLocallyOwnedRanges()

template<dftefe::utils::MemorySpace memorySpace>
std::vector< std::pair< global_size_type, global_size_type > > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getLocallyOwnedRanges

◆ getNumGhostIndicesInGhostProc()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getNumGhostIndicesInGhostProc ( const size_type  procId) const

◆ getNumGhostIndicesInGhostProcs()

template<dftefe::utils::MemorySpace memorySpace>
const std::vector< size_type > & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getNumGhostIndicesInGhostProcs

◆ getNumOwnedIndicesForTargetProc()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getNumOwnedIndicesForTargetProc ( const size_type  procId) const

◆ getNumOwnedIndicesForTargetProcs()

template<dftefe::utils::MemorySpace memorySpace>
const std::vector< size_type > & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getNumOwnedIndicesForTargetProcs

◆ getOwnedLocalIndicesForTargetProc()

template<dftefe::utils::MemorySpace memorySpace>
MPIPatternP2P< memorySpace >::SizeTypeVector dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getOwnedLocalIndicesForTargetProc ( const size_type  procId) const

◆ getOwnedLocalIndicesForTargetProcs()

template<dftefe::utils::MemorySpace memorySpace>
const MPIPatternP2P< memorySpace >::SizeTypeVector & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getOwnedLocalIndicesForTargetProcs

◆ getTargetProcIds()

template<dftefe::utils::MemorySpace memorySpace>
const std::vector< size_type > & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getTargetProcIds

◆ getTotalOwnedIndicesForTargetProcs()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::getTotalOwnedIndicesForTargetProcs

◆ globalToLocal()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::globalToLocal ( const global_size_type  globalId) const

◆ globalToLocalAndRangeId()

template<dftefe::utils::MemorySpace memorySpace>
std::pair< size_type, size_type > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::globalToLocalAndRangeId ( const global_size_type  globalId) const

For a given global index, returns a pair containing the local index in the processor and the index of the global-range it belongs to.

Parameters
[in] globalId — The input global index
Returns
A pair where the first entry contains the local index in the processor for globalId and the second entry contains the index of the global-range to which it belongs.
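The owned-index part of this mapping can be sketched by mirroring the concatenation-based localId definition from the top of the page. The helper below (`ownedGlobalToLocalAndRangeId`, a hypothetical name) handles only indices inside the locally-owned-ranges; the actual class additionally resolves ghost indices via an optimized index set:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using global_size_type = std::uint64_t;
using size_type        = std::uint32_t;

// localId is the position of globalId in the concatenation
// R_0 + R_1 + ... + R_{K-1} (ghost indices would follow after all owned
// ranges); rangeId is the index l of the range containing it.
// Returns {localId, rangeId}; asserts if globalId is not owned.
std::pair<size_type, size_type>
ownedGlobalToLocalAndRangeId(
  global_size_type                                                  globalId,
  const std::vector<std::pair<global_size_type, global_size_type>> &ownedRanges)
{
  size_type offset = 0; // indices contributed by ranges before range l
  for (size_type l = 0; l < ownedRanges.size(); ++l)
    {
      const auto &r = ownedRanges[l];
      if (globalId >= r.first && globalId < r.second)
        return {offset + static_cast<size_type>(globalId - r.first), l};
      offset += static_cast<size_type>(r.second - r.first);
    }
  assert(false && "globalId not in any locally-owned-range");
  return {0, 0};
}
```

For instance, with owned ranges \([0,3)\) and \([10,12)\), the global index 11 maps to localId 4 in rangeId 1.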

◆ inLocallyOwnedRanges()

template<dftefe::utils::MemorySpace memorySpace>
std::pair< bool, size_type > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::inLocallyOwnedRanges ( const global_size_type  globalId) const

For a given globalId, returns whether it lies in any of the locally-owned-ranges and if true the index of the global-range it belongs to.

Parameters
[in] globalId — The input global index
Returns
A pair where: (a) the first entry contains a boolean which is true if the globalId belongs to any of the locally-owned-ranges, and false otherwise; (b) the second entry contains the index of the global-range to which globalId belongs. This value is meaningful only if the first entry is true; otherwise its value is undefined.

◆ isCompatible()

template<dftefe::utils::MemorySpace memorySpace>
bool dftefe::utils::mpi::MPIPatternP2P< memorySpace >::isCompatible ( const MPIPatternP2P< memorySpace > &  rhs) const

◆ isGhostEntry()

template<dftefe::utils::MemorySpace memorySpace>
std::pair< bool, size_type > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::isGhostEntry ( const global_size_type  globalId) const

For a given globalId, returns whether it belongs to the current processor's ghost-set and if true the index of the global-range it belongs to.

Parameters
[in] globalId — The input global index
Returns
A pair where: (a) the first entry contains a boolean which is true if the globalId belongs to the ghost-set, and false otherwise; (b) the second entry contains the index of the global-range to which globalId belongs. This value is meaningful only if the first entry is true; otherwise its value is undefined.

◆ localGhostSize()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::localGhostSize

◆ localOwnedSize() [1/2]

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::localOwnedSize

◆ localOwnedSize() [2/2]

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::localOwnedSize ( size_type  rangeId) const

◆ localToGlobal()

template<dftefe::utils::MemorySpace memorySpace>
global_size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::localToGlobal ( const size_type  localId) const

◆ localToGlobalAndRangeId()

template<dftefe::utils::MemorySpace memorySpace>
std::pair< global_size_type, size_type > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::localToGlobalAndRangeId ( const size_type  localId) const

For a given local index, returns a pair containing its global index and the index of the global-range it belongs to.

Parameters
[in] localId — The input local index

Returns
A pair where the first entry contains the global index for localId and second entry contains the index of the global-range to which it belongs.

◆ mpiCommunicator()

template<dftefe::utils::MemorySpace memorySpace>
const MPIComm & dftefe::utils::mpi::MPIPatternP2P< memorySpace >::mpiCommunicator

◆ nGlobalIndices()

template<dftefe::utils::MemorySpace memorySpace>
global_size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::nGlobalIndices

◆ nGlobalRanges()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::nGlobalRanges

◆ nmpiProcesses()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::nmpiProcesses

◆ reinit() [1/2]

template<dftefe::utils::MemorySpace memorySpace>
void dftefe::utils::mpi::MPIPatternP2P< memorySpace >::reinit ( const std::vector< size_type > &  sizes)

◆ reinit() [2/2]

template<dftefe::utils::MemorySpace memorySpace>
void dftefe::utils::mpi::MPIPatternP2P< memorySpace >::reinit ( const std::vector< std::pair< global_size_type, global_size_type > > &  locallyOwnedRanges,
const std::vector< dftefe::global_size_type > &  ghostIndices,
const MPIComm mpiComm 
)

reinit without MPI


◆ thisProcessId()

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::thisProcessId

Member Data Documentation

◆ d_allOwnedRanges

template<dftefe::utils::MemorySpace memorySpace>
std::vector<std::vector<std::pair<global_size_type, global_size_type> > > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_allOwnedRanges
private

A 2D vector to store the locally owned ranges for each processor. The first index is the range id (i.e., it ranges from 0 to d_nGlobalRanges - 1). For range id \(l\), it stores pairs defining the \(l^{th}\) locally owned range in each processor. That is, d_allOwnedRanges[l] = \(\{\{a_l^0,b_l^0\}, \{a_l^1,b_l^1\}, \ldots, \{a_l^{p-1},b_l^{p-1}\}\}\), where \(p\) is the number of processors and the pair \((a_l^i,b_l^i)\) defines the \(l^{th}\) locally owned range for the \(i^{th}\) processor.

Note
Any pair \(a\) and \(b\) defines a half-open interval, where \(a\) is included but \(b\) is not.

◆ d_flattenedLocalGhostIndices

template<dftefe::utils::MemorySpace memorySpace>
SizeTypeVector dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_flattenedLocalGhostIndices
private

A flattened vector of size equal to the number of ghost indices, containing the ghost indices ordered as per the list of ghost processor ids in d_ghostProcIds. To elaborate, let \(M_i=\) d_numGhostIndicesInGhostProcs[i] be the number of ghost indices owned by the \(i^{th}\) ghost processor (i.e., d_ghostProcIds[i]). Let \(S_i = \{x_1^i,x_2^i,\ldots,x_{M_i}^i\}\) be an ordered set containing the ghost indices owned by the \(i^{th}\) ghost processor. Then we can define \(s_i = \{z_1^i,z_2^i,\ldots,z_{M_i}^i\}\) to be the set defining the positions of the \(x_j^i\) in d_ghostIndices, i.e., \(x_j^i=\) d_ghostIndices[ \(z_j^i\)]. The indices \(z_j^i\) are called local ghost indices, as they store the relative position of a ghost index in d_ghostIndices. Given that \(S_i\) and d_ghostIndices are both ordered sets, \(s_i\) will also be ordered. The vector d_flattenedLocalGhostIndices stores the concatenation of the \(s_i\)'s.

Note
We store only the local ghost index, i.e., the position of the ghost index in d_ghostIndices. This is done so that we can use size_type, which is unsigned int, instead of global_size_type, which is long unsigned int. This helps in reducing the volume of data transferred during MPI calls.
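The construction of one position list \(s_i\) can be sketched as follows. This is a hypothetical standalone helper (`localGhostPositions` is not a library function; the actual member is assembled internally during construction) that converts the ghost indices owned by one ghost processor into their positions within the full sorted ghost list:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

using size_type        = std::uint32_t;
using global_size_type = std::uint64_t;

// For one ghost processor, converts the ghost indices it owns (S_i) into
// their positions within the full sorted ghost list d_ghostIndices (s_i),
// i.e., the "local ghost indices". Both inputs must be strictly increasing,
// and every entry of ownedByProc must appear in ghostIndices.
std::vector<size_type>
localGhostPositions(const std::vector<global_size_type> &ghostIndices,
                    const std::vector<global_size_type> &ownedByProc)
{
  std::vector<size_type> positions;
  for (const auto x : ownedByProc)
    {
      // Binary search works because ghostIndices is sorted.
      const auto it =
        std::lower_bound(ghostIndices.begin(), ghostIndices.end(), x);
      assert(it != ghostIndices.end() && *it == x);
      positions.push_back(static_cast<size_type>(it - ghostIndices.begin()));
    }
  return positions; // ordered, since both inputs are ordered
}
```

For example, with d_ghostIndices = {2, 5, 9, 14} and a ghost processor owning {5, 14}, the positions are {1, 3}.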

◆ d_flattenedLocalTargetIndices

template<dftefe::utils::MemorySpace memorySpace>
SizeTypeVector dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_flattenedLocalTargetIndices
private

Vector of size \(\sum_i\) d_numOwnedIndicesForTargetProcs[ \(i\)] to store all the locally owned indices which other processors need (i.e., which are ghost indices in other processors). It is stored as a concatenation of lists \(L_i = \{o_1^i,o_2^i,\ldots,o_{M_i}^i\}\), where \(o_j^i\)'s are locally owned indices that are needed by the \(i^{th}\) target processor (i.e., d_targetProcIds[ \(i\)]) and \(M_i=\) d_numOwnedIndicesForTargetProcs[ \(i\)] is the number of indices to be sent to the \(i^{th}\) target processor. The list \(L_i\) must be ordered.

◆ d_ghostIndices

template<dftefe::utils::MemorySpace memorySpace>
std::vector<global_size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_ghostIndices
private

Vector to store the ghost-set (see top of the page for description). This is an ordered set (strictly increasing) of non-negative integers.

◆ d_ghostIndicesOptimizedIndexSet

template<dftefe::utils::MemorySpace memorySpace>
OptimizedIndexSet<global_size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_ghostIndicesOptimizedIndexSet
private

An OptimizedIndexSet object to store the ghost indices for efficient operations. The OptimizedIndexSet internally creates contiguous sub-ranges within the set of indices and hence can optimize the finding of an index

◆ d_ghostIndicesRangeId

template<dftefe::utils::MemorySpace memorySpace>
std::vector<size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_ghostIndicesRangeId
private

Vector of size d_numGhostIndices to store the rangeId of each ghost index. That is, d_ghostIndicesRangeId[i] tells which of the \(K\) global-ranges the i-th ghost index in d_ghostIndices belongs to.

◆ d_ghostProcIds

template<dftefe::utils::MemorySpace memorySpace>
std::vector<size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_ghostProcIds
private

Vector to store the ghost processor Ids. A ghost processor is one which owns at least one of the ghost indices of this processor.

◆ d_ghostProcLocallyOwnedRangesCumulative

template<dftefe::utils::MemorySpace memorySpace>
std::vector<std::vector<size_type> > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_ghostProcLocallyOwnedRangesCumulative
private

A 2D vector containing the cumulative start points of the locally-owned-ranges for each ghost processor of the current processor. For the \(i^{th}\) ghost processor (i.e., the one whose rank/id is given by d_ghostProcIds[i]) and the \(l^{th}\) locally-owned-range, d_ghostProcLocallyOwnedRangesCumulative[i][l] = \(\sum_{j=0}^{l-1} (b_j^i - a_j^i)\), where \(a_j^i\) and \(b_j^i\) define the \(j^{th}\) locally-owned-range in the \(i^{th}\) ghost processor (i.e., processor with rank/id given by d_ghostProcIds[i]). In other words, if we concatenate the indices defined by all the locally-owned-ranges of the \(i^{th}\) ghost processor in sequence, d_ghostProcLocallyOwnedRangesCumulative[i][l] tells us where the start point of the \(l^{th}\) locally-owned-range for the \(i^{th}\) ghost processor will lie in the concatenated list.

◆ d_globalRanges

template<dftefe::utils::MemorySpace memorySpace>
std::vector<std::pair<global_size_type, global_size_type> > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_globalRanges
private

A vector of size d_nGlobalRanges that stores the global-ranges. That is, d_globalRanges[l] = \(\{N_l^{start}, N_l^{end}\}\), such that the half-open interval \([N_l^{start}, N_l^{end})\) defines the \(l^{th}\) global-range (see top of the page for details).

◆ d_localGhostIndicesRanges

template<dftefe::utils::MemorySpace memorySpace>
std::vector<size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_localGhostIndicesRanges
private

A vector of size 2 times the number of ghost processors to store the start and end positions in the above d_flattenedLocalGhostIndices that define the local ghost indices owned by each of the ghost processors. To elaborate, for the \(i^{th}\) ghost processor (i.e., d_ghostProcIds[i]), the two integers \(n=\) d_localGhostIndicesRanges[2*i] and \(m=\) d_localGhostIndicesRanges[2*i+1] define the start and end positions in d_flattenedLocalGhostIndices that belong to the \(i^{th}\) ghost processor. In other words, the set \(s_i\) (defined in d_flattenedLocalGhostIndices above) containing the local ghost indices owned by the \(i^{th}\) ghost processor is given by: \(s_i=\) {d_flattenedLocalGhostIndices[ \(n\)], d_flattenedLocalGhostIndices[ \(n+1\)], ..., d_flattenedLocalGhostIndices[ \(m-1\)]}.
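The lookup described above can be sketched as follows; the function name localGhostIndicesForProc and the plain std::vector types are illustrative assumptions rather than the class's actual API.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: recover the set s_i of local ghost indices belonging
// to the i-th ghost processor, using the [start, end) positions stored in
// the counterpart of d_localGhostIndicesRanges.
std::vector<unsigned int> localGhostIndicesForProc(
  const std::vector<unsigned int> &flattenedLocalGhostIndices,
  const std::vector<unsigned int> &localGhostIndicesRanges,
  const unsigned int iProc)
{
  const unsigned int n = localGhostIndicesRanges[2 * iProc];     // start
  const unsigned int m = localGhostIndicesRanges[2 * iProc + 1]; // end
  return std::vector<unsigned int>(
    flattenedLocalGhostIndices.begin() + n,
    flattenedLocalGhostIndices.begin() + m);
}
```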

◆ d_locallyOwnedRanges

template<dftefe::utils::MemorySpace memorySpace>
std::vector<std::pair<global_size_type, global_size_type> > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_locallyOwnedRanges
private

A vector containing the pairs of locally owned ranges for the current processor. If the current processor id is \(i\), then the \(l^{th}\) entry in d_locallyOwnedRanges is the pair \(a_l^i\) and \(b_l^i\) that defines the range \(R_l^i\) (see top of this page).

Note
Any pair \(a\) and \(b\) defines a half-open interval where \(a\) is included but \(b\) is not.

◆ d_locallyOwnedRangesCumulativePairs

template<dftefe::utils::MemorySpace memorySpace>
std::vector<std::pair<size_type, size_type> > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_locallyOwnedRangesCumulativePairs
private

A vector of size d_nGlobalRanges storing the cumulative start and end points of each locally-owned-range. That is, d_locallyOwnedRangesCumulativePairs[l].first = \(\sum_{j=0}^{l-1} (b_j-a_j)\) and d_locallyOwnedRangesCumulativePairs[l].second = \(\sum_{j=0}^{l} (b_j-a_j)\), where \(a_j\) and \(b_j\) are d_locallyOwnedRanges[j].first and d_locallyOwnedRanges[j].second, respectively. In other words, if we concatenate the indices defined by all the d_locallyOwnedRanges in sequence, d_locallyOwnedRangesCumulativePairs[l] tells us where the start and end points of the \(l^{th}\) locally-owned-range will lie in the concatenated list.
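As a hedged sketch of the definition \(\big(\sum_{j=0}^{l-1} (b_j-a_j),\; \sum_{j=0}^{l} (b_j-a_j)\big)\), the cumulative pairs could be computed with a running offset; the function name cumulativePairs and the container types are illustrative assumptions, not the class's actual implementation.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical sketch: compute the cumulative (start, end) pairs from the
// locally owned half-open ranges [a_j, b_j). Each pair gives where the j-th
// range begins and ends in the concatenation of all ranges.
std::vector<std::pair<unsigned int, unsigned int>> cumulativePairs(
  const std::vector<std::pair<unsigned long, unsigned long>> &ownedRanges)
{
  std::vector<std::pair<unsigned int, unsigned int>> cumulative;
  unsigned int offset = 0;
  for (const auto &range : ownedRanges)
  {
    const unsigned int len =
      static_cast<unsigned int>(range.second - range.first);
    cumulative.emplace_back(offset, offset + len);
    offset += len;
  }
  return cumulative;
}
```

For example, the ranges [10, 15) and [100, 103) concatenate to 8 indices, giving the cumulative pairs (0, 5) and (5, 8).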

◆ d_locallyOwnedRangesIdPermutation

template<dftefe::utils::MemorySpace memorySpace>
std::vector<size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_locallyOwnedRangesIdPermutation
private

A vector storing the index permutation obtained while sorting the ranges in d_locallyOwnedRanges. That is, d_locallyOwnedRangesIdPermutation[i] tells where the i-th range in d_locallyOwnedRangesSorted lies in the original d_locallyOwnedRanges

◆ d_locallyOwnedRangesSorted

template<dftefe::utils::MemorySpace memorySpace>
std::vector<std::pair<global_size_type, global_size_type> > dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_locallyOwnedRangesSorted
private

A sorted (ascending) version of the above d_locallyOwnedRanges. The sorting is done as per the start point of each non-empty range. If two ranges have the same start point, then they are sorted as per the end point.
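The sorted ranges and the permutation described above can be produced together by sorting an index vector, as in the following sketch; the function name sortRangesWithPermutation is an illustrative assumption, and for simplicity the sketch sorts all ranges (the class's handling of empty ranges and exact tie-breaking may differ).

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <utility>
#include <vector>

// Hypothetical sketch: sort the ranges by (start, end) while recording the
// permutation, so that perm[i] gives the position in the original vector of
// the i-th sorted range (the role played by d_locallyOwnedRangesIdPermutation).
void sortRangesWithPermutation(
  const std::vector<std::pair<unsigned long, unsigned long>> &ranges,
  std::vector<std::pair<unsigned long, unsigned long>> &sortedRanges,
  std::vector<unsigned int> &perm)
{
  perm.resize(ranges.size());
  std::iota(perm.begin(), perm.end(), 0u);
  std::sort(perm.begin(), perm.end(),
            [&ranges](const unsigned int a, const unsigned int b) {
              // std::pair comparison: start point first, then end point
              return ranges[a] < ranges[b];
            });
  sortedRanges.clear();
  for (const auto p : perm)
    sortedRanges.push_back(ranges[p]);
}
```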

◆ d_mpiComm

template<dftefe::utils::MemorySpace memorySpace>
MPIComm dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_mpiComm
private

MPI Communicator object.

◆ d_myRank

template<dftefe::utils::MemorySpace memorySpace>
int dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_myRank
private

Rank of the current processor.

◆ d_nGlobalIndices

template<dftefe::utils::MemorySpace memorySpace>
global_size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_nGlobalIndices
private

Total number of unique indices across all processors

◆ d_nGlobalRanges

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_nGlobalRanges
private

A non-negative integer storing the number of locally owned ranges in each processor (must be the same for all processors). In other words, it stores d_locallyOwnedRanges.size().

◆ d_nprocs

template<dftefe::utils::MemorySpace memorySpace>
int dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_nprocs
private

Number of processors in the MPI Communicator.

◆ d_numGhostIndices

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_numGhostIndices
private

Number of ghost indices in the current processor, i.e., the size of d_ghostIndices

◆ d_numGhostIndicesInGhostProcs

template<dftefe::utils::MemorySpace memorySpace>
std::vector<size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_numGhostIndicesInGhostProcs
private

Vector of size number of ghost processors to store how many ghost indices of the current processor are owned by a ghost processor. That is d_numGhostIndicesInGhostProcs[i] stores the number of ghost indices owned by the processor id given by d_ghostProcIds[i]

◆ d_numGhostProcs

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_numGhostProcs
private

Number of ghost processors for the current processor. A ghost processor is one which owns at least one of the ghost indices of this processor.

◆ d_numLocallyOwnedIndices

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_numLocallyOwnedIndices
private

Number of locally owned indices in the current processor. See numLocallyOwnedIndices at the top of the page for description

◆ d_numOwnedIndicesForTargetProcs

template<dftefe::utils::MemorySpace memorySpace>
std::vector<size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_numOwnedIndicesForTargetProcs
private

Vector of size number of target processors to store how many locally owned indices of the current processor are needed (i.e., are ghost indices) in each of the target processors.

◆ d_numTargetProcs

template<dftefe::utils::MemorySpace memorySpace>
size_type dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_numTargetProcs
private

Number of target processors for the current processor. A target processor is one which owns at least one of the locally owned indices of this processor as its ghost index.

◆ d_targetProcIds

template<dftefe::utils::MemorySpace memorySpace>
std::vector<size_type> dftefe::utils::mpi::MPIPatternP2P< memorySpace >::d_targetProcIds
private

Vector to store the target processor Ids. A target processor is one which contains at least one of the locally owned indices of this processor as its ghost index.


The documentation for this class was generated from the following files: