dftfe::linearAlgebra::MultiVector< ValueType, memorySpace > Class Template Reference
A class template to encapsulate a MultiVector. A MultiVector is a collection of vectors belonging to the same finite-dimensional vector space, where the usual notion of vector size denotes the dimension of the vector space. Note that this is in the mathematical sense and not in the sense of a multi-dimensional array. The MultiVector is stored contiguously with the vector index being the fastest index, or in other words, as a matrix of size M x N in row-major format, with M denoting the dimension of the vector space (the size of an individual vector) and N the number of vectors.
More...
Constructor for a serial MultiVector with a predefined MultiVector::Storage (i.e., utils::MemoryStorage). This constructor transfers the ownership of the input Storage to the MultiVector. This is useful when one does not want to allocate new memory and instead use memory already allocated in the MultiVector::Storage (i.e., utils::MemoryStorage). The locallyOwnedSize, ghostSize, etc., are automatically set using the size of the input Storage object.
Constructor for a distributed MultiVector with a predefined MultiVector::Storage (i.e., utils::MemoryStorage) and MPIPatternP2P. This constructor transfers the ownership of the input Storage to the MultiVector. This is useful when one does not want to allocate new memory and instead use memory allocated in the input MultiVector::Storage (i.e., utils::MemoryStorage).
Constructor for a distributed MultiVector based on the total number of global indices. The resulting MultiVector will not contain any ghost indices on any of the processors. Internally, the vector is divided so as to achieve as equitable a distribution across all the processors as possible.
template<typename ValueType, dftfe::utils::MemorySpace memorySpace>
class dftfe::linearAlgebra::MultiVector< ValueType, memorySpace >
A class template to encapsulate a MultiVector. A MultiVector is a collection of vectors belonging to the same finite-dimensional vector space, where the usual notion of vector size denotes the dimension of the vector space. Note that this is in the mathematical sense and not in the sense of a multi-dimensional array. The MultiVector is stored contiguously with the vector index being the fastest index, or in other words, as a matrix of size M x N in row-major format, with M denoting the dimension of the vector space (the size of an individual vector) and N the number of vectors.
This class handles both serial and distributed MultiVector in a unified way. Different constructors are provided for the serial and distributed cases.
The serial MultiVector, as the name suggests, resides entirely on a single processor.
The distributed MultiVector, on the other hand, is distributed across a set of processors. The storage of each of the vectors in the distributed MultiVector in a processor follows along similar lines to a distributed Vector object and comprises two parts:
locally owned part: A part of the distributed MultiVector, defined through a contiguous range of indices [a, b) (a is included, but b is not), for which the current processor is the sole owner. The size of the locally owned part (i.e., b - a) is termed the locallyOwnedSize. Note that the range of indices that comprises the locally owned part (i.e., [a, b)) is the same for all the vectors in the MultiVector
ghost part: A part of the MultiVector, defined through a set of ghost indices, that is owned by other processors. The number of ghost indices for each vector is termed the ghostSize. Note that the set of ghost indices is the same for all the vectors in the MultiVector
The global size of each vector in the distributed MultiVector (i.e., the number of unique indices across all the processors) is simply termed as size. Additionally, we define localSize = locallyOwnedSize + ghostSize.
We handle the serial MultiVector as a special case of the distributed MultiVector, wherein size = locallyOwnedSize and ghostSize = 0.
Note
While typically one would link to an MPI library while compiling this class, care is taken to seamlessly allow usage of this class even while not linking to an MPI library. To do so, we have our own MPI wrappers that redirect to the MPI library's function calls and definitions while linking to an MPI library. While not linking to an MPI library, the MPI wrappers provide equivalent functions and definitions that mimic the MPI functions and definitions, albeit for a single processor. This allows the user of this class to seamlessly switch between linking and de-linking to an MPI library without any change in the code and with the expected behavior.
Note that the case of not linking to an MPI library and the case of creating a serial MultiVector are two independent things. One can still create a serial MultiVector while linking to an MPI library and running the code across multiple processors. That is, one can create a serial MultiVector in one or more than one of the set of processors used when running in parallel. Internally, we handle this by using MPI_COMM_SELF as our MPI_Comm for the serial MultiVector (i.e., the processor does self communication). However, while not linking to an MPI library (which by definition means running on a single processor), there is no notion of communication (neither with self nor with other processors). In such a case, both serial and distributed MultiVector mean the same thing and the MPI wrappers ensure the expected behavior (i.e., the behavior of a MultiVector while using just one processor)
Template Parameters
ValueType
defines the underlying data type being stored in the MultiVector (i.e., int, double, complex<double>, etc.)
memorySpace
defines the MemorySpace (i.e., HOST or DEVICE) in which the MultiVector must reside.
Note
Broadly, there are two ways of constructing a distributed MultiVector.
[Preferred and efficient approach] The first approach takes a pointer to an MPIPatternP2P as an input argument (along with other arguments). The MPIPatternP2P, in turn, contains all the information regarding the locally owned and ghost parts of the MultiVector as well as the interaction map between processors. This is the most efficient way of constructing a distributed MultiVector as it allows for the reuse of an already constructed MPIPatternP2P.
[Expensive approach] The second approach takes in the locally owned indices, the ghost indices, or the total number of indices across all the processors, and internally creates an MPIPatternP2P object. Given that the creation of an MPIPatternP2P is expensive, this route of constructing a distributed MultiVector should be avoided.
Constructor for a serial MultiVector with a predefined MultiVector::Storage (i.e., utils::MemoryStorage). This constructor transfers the ownership of the input Storage to the MultiVector. This is useful when one does not want to allocate new memory and instead use memory already allocated in the MultiVector::Storage (i.e., utils::MemoryStorage). The locallyOwnedSize, ghostSize, etc., are automatically set using the size of the input Storage object.
Constructor for a serial MultiVector with a predefined MultiVector::Storage (i.e., utils::MemoryStorage). This constructor transfers the ownership of the input Storage to the MultiVector. This is useful when one does not want to allocate new memory and instead use memory already allocated in the MultiVector::Storage (i.e., utils::MemoryStorage).
Parameters
[in]
storage
unique_ptr to MultiVector::Storage whose ownership is to be transferred to the MultiVector
[in]
numVectors
number of vectors in the MultiVector
Note
This constructor transfers the ownership from the input unique_ptr storage to the internal data member of the MultiVector. Thus, after the function call, storage will be a null pointer and any access through storage will lead to undefined behavior.
Constructor for a distributed MultiVector with a predefined MultiVector::Storage (i.e., utils::MemoryStorage) and MPIPatternP2P. This constructor transfers the ownership of the input Storage to the MultiVector. This is useful when one does not want to allocate new memory and instead use memory allocated in the input MultiVector::Storage (i.e., utils::MemoryStorage).
Parameters
[in]
storage
unique_ptr to MultiVector::Storage whose ownership is to be transferred to the MultiVector
[in]
mpiPatternP2P
A shared_ptr to const MPIPatternP2P based on which the distributed MultiVector will be created.
[in]
numVectors
number of vectors in the MultiVector
Note
This constructor transfers the ownership from the input unique_ptr storage to the internal data member of the MultiVector. Thus, after the function call, storage will be a null pointer and any access through storage will lead to undefined behavior.
Constructor for a distributed MultiVector based on the total number of global indices. The resulting MultiVector will not contain any ghost indices on any of the processors. Internally, the vector is divided so as to achieve as equitable a distribution across all the processors as possible.
Note
This way of construction is expensive. One should use the other constructor based on an input MPIPatternP2P as far as possible. Further, the decomposition is not compatible with other ways of distributed MultiVector construction.
Parameters
[in]
globalSize
Total number of global indices that are distributed over the processors.
[in]
mpiComm
MPI_Comm object associated with the group of processors across which the MultiVector is to be distributed
[in]
numVectors
number of vectors in the MultiVector
[in]
initVal
value with which the MultiVector should be initialized
Note
This way of construction is expensive. One should use the other constructor based on an input MPIPatternP2P as far as possible. Further, the decomposition is not compatible with other ways of distributed MultiVector construction.