ABACUS develop
Atomic-orbital Based Ab-initio Computation at UStc
MPIContext Class Reference
Public Member Functions

 MPIContext ()

int GetRank () const

int GetSize () const

Public Attributes

int drank
 
int dsize
 
int dcolor
 
int grank
 
int gsize
 
int kpar
 
int nproc_in_pool
 
int my_pool
 
int rank_in_pool
 
int nstogroup
 
int MY_BNDGROUP
 
int rank_in_stogroup
 
int nproc_in_stogroup
 
int KPAR
 
int NPROC_IN_POOL
 
int MY_POOL
 
int RANK_IN_POOL
 

Private Attributes

int _rank
 
int _size
 

Detailed Description

MPIContext is a small helper class that stores the rank and size of the MPI communicator used by a test. It is defined separately in several of ABACUS's MPI unit tests, which is why Doxygen lists multiple copies of each member; the paragraphs below come from the individual test fixtures.

The tested functions are wrappers of MPI_Bcast in ABACUS, as defined in source_base/parallel_common.h. Process 0 is the source (root) in all of these MPI_Bcast wrappers.
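As an illustration, here is a minimal sketch of a wrapper of this shape, with process 0 hard-coded as the root. The namespace and function names are illustrative assumptions, not the exact ABACUS API:

#include <mpi.h>

// Hypothetical sketch of MPI_Bcast wrappers in the style described
// above: process 0 is always the root of the broadcast.
namespace Parallel_Common_Sketch
{
void bcast_int(int& value)
{
    // Broadcast a single int from process 0 to all other processes.
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
}

void bcast_double(double* array, const int n)
{
    // Broadcast an array of n doubles from process 0.
    MPI_Bcast(array, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
}
} // namespace Parallel_Common_Sketch

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int nbands = 0;
    if (rank == 0) nbands = 10;                // only the root holds the value initially
    Parallel_Common_Sketch::bcast_int(nbands); // afterwards every process holds 10

    MPI_Finalize();
    return 0;
}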

The tested functions are:

  i. Parallel_Global::split_diag_world(), which is used in the Davidson diagonalization in plane-wave (pw) basis calculations.
  ii. Parallel_Global::split_grid_world()
  iii. Parallel_Global::MyProd(std::complex<double>* in, std::complex<double>* inout, int* len, MPI_Datatype* dptr)
  iv. Parallel_Global::init_pools()
  v. Parallel_Global::divide_pools(void)
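The splitting functions above partition MPI_COMM_WORLD into sub-communicators. A minimal sketch of the underlying pattern, using MPI_Comm_split directly; the variable names mirror the dcolor/drank/dsize attributes listed above, but the grouping rule here is only an illustrative placeholder:

#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Assign each process a color; processes with the same color end up
    // in the same sub-communicator. Splitting into two groups here is an
    // illustrative placeholder, not ABACUS's actual grouping rule.
    const int dcolor = rank % 2;

    MPI_Comm diag_world;
    MPI_Comm_split(MPI_COMM_WORLD, dcolor, rank, &diag_world);

    int drank = 0, dsize = 0;
    MPI_Comm_rank(diag_world, &drank); // rank within the sub-communicator
    MPI_Comm_size(diag_world, &dsize); // size of the sub-communicator

    MPI_Comm_free(&diag_world);
    MPI_Finalize();
    return 0;
}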

The tested functions are mainly wrappers of MPI_Allreduce and MPI_Allgather in ABACUS, as defined in source_base/parallel_reduce.h.

The MPI_Allreduce wrappers are tested by computing the sum of a global array in two ways: once by reducing one element at a time (count = 1), and once by reducing all n elements in a single call. The global array is defined as the element-wise sum of equal-length local arrays, so both ways must produce the same result (see the sketch after the list below).

  1. ReduceIntAll: Tests two variations of reduce_all()
  2. ReduceDoubleAll: Tests two variations of reduce_all()
  3. ReduceComplexAll: Tests two variations of reduce_complex_all()
  4. GatherIntAll: Tests gather_int_all() and gather_min_int_all()
  5. GatherDoubleAll: Tests gather_min_double_all() and gather_max_double_all()
  6. ReduceIntDiag: Tests reduce_int_diag()
  7. ReduceDoubleDiag: Tests reduce_double_diag()
  8. ReduceIntGrid: Tests reduce_int_grid()
  9. ReduceDoubleGrid: Tests reduce_double_grid()
  10. ReduceDoublePool: Tests two variations of reduce_pool() and two variations of reduce_double_allpool()
  11. ReduceComplexPool: Tests two variations of reduce_pool()
  12. GatherDoublePool: Tests gather_min_double_pool() and gather_max_double_pool()
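A minimal sketch of the element-wise vs. whole-array check described above, written directly against MPI rather than the ABACUS wrappers (variable names are illustrative):

#include <mpi.h>
#include <cassert>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each process owns a local array; the global array is defined as
    // the element-wise sum of all local arrays.
    const int n = 8;
    std::vector<double> local(n);
    for (int i = 0; i < n; ++i)
        local[i] = rank + i; // arbitrary test data

    // Way 1: reduce one element at a time (count = 1).
    std::vector<double> one_by_one(n);
    for (int i = 0; i < n; ++i)
        MPI_Allreduce(&local[i], &one_by_one[i], 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    // Way 2: reduce the whole array in a single call (count = n).
    std::vector<double> all_at_once(n);
    MPI_Allreduce(local.data(), all_at_once.data(), n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    // The two results must agree on every process.
    for (int i = 0; i < n; ++i)
        assert(one_by_one[i] == all_at_once[i]);

    MPI_Finalize();
    return 0;
}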

The tested functions are:

  i. Parallel_Global::init_pools(), the public interface that calls the private function Parallel_Global::divide_pools(), which divides all processes into KPAR groups (pools).
  ii. Parallel_Kpoints::kinf(), the public interface that calls three further functions, get_nks_pool(), get_startk_pool(), and get_whichpool(), which divide all k-points among the KPAR pools.
  iii. Parallel_Kpoints::gatherkvec(), an interface that gathers the k-point vectors from all processes.

The default number of processes is set to 4 in parallel_kpoints_test.sh. One may increase it to run more tests, or adapt this unit test to the local environment.
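A minimal sketch of dividing processes into KPAR pools with MPI_Comm_split; the even block distribution below is an illustrative assumption rather than the exact rule of Parallel_Global::divide_pools(), and the variable names mirror the attributes listed above (my_pool, rank_in_pool, nproc_in_pool):

#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, nproc = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    // Divide nproc processes into KPAR pools of roughly equal size.
    // With nproc = 4 and KPAR = 2, ranks 0-1 form pool 0 and ranks 2-3
    // form pool 1.
    const int KPAR = 2;                    // number of k-point pools
    const int my_pool = rank * KPAR / nproc;

    MPI_Comm pool_world;
    MPI_Comm_split(MPI_COMM_WORLD, my_pool, rank, &pool_world);

    int rank_in_pool = 0, nproc_in_pool = 0;
    MPI_Comm_rank(pool_world, &rank_in_pool);
    MPI_Comm_size(pool_world, &nproc_in_pool);

    MPI_Comm_free(&pool_world);
    MPI_Finalize();
    return 0;
}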

Test fixture for class Parallel_K2D

Constructor & Destructor Documentation

◆ MPIContext()

MPIContext::MPIContext ( )
inline
Member Function Documentation

◆ GetRank()

int MPIContext::GetRank ( ) const
inline

◆ GetSize()

int MPIContext::GetSize ( ) const
inline

Member Data Documentation

◆ _rank

int MPIContext::_rank
private

◆ _size

int MPIContext::_size
private

◆ dcolor

int MPIContext::dcolor

◆ drank

int MPIContext::drank

◆ dsize

int MPIContext::dsize

◆ grank

int MPIContext::grank

◆ gsize

int MPIContext::gsize

◆ kpar

int MPIContext::kpar

◆ KPAR

int MPIContext::KPAR

◆ MY_BNDGROUP

int MPIContext::MY_BNDGROUP

◆ my_pool

int MPIContext::my_pool

◆ MY_POOL

int MPIContext::MY_POOL

◆ nproc_in_pool

int MPIContext::nproc_in_pool

◆ NPROC_IN_POOL

int MPIContext::NPROC_IN_POOL

◆ nproc_in_stogroup

int MPIContext::nproc_in_stogroup

◆ nstogroup

int MPIContext::nstogroup

◆ rank_in_pool

int MPIContext::rank_in_pool

◆ RANK_IN_POOL

int MPIContext::RANK_IN_POOL

◆ rank_in_stogroup

int MPIContext::rank_in_stogroup

The documentation for this class was generated from the following files: