MatCreateMPIAIJMKL#
Creates a sparse parallel matrix whose local portions are stored as MATSEQAIJMKL matrices (a matrix class that inherits from MATSEQAIJ but uses some operations provided by Intel MKL).
Synopsis#
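The prototype below is reconstructed from the parameter list on this page; it mirrors the calling sequence of MatCreateMPIAIJ(), so consult petscmat.h for the authoritative declaration.

```c
#include <petscmat.h>
PetscErrorCode MatCreateMPIAIJMKL(MPI_Comm comm, PetscInt m, PetscInt n, PetscInt M, PetscInt N, PetscInt d_nz, const PetscInt d_nnz[], PetscInt o_nz, const PetscInt o_nnz[], Mat *A)
```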
Collective
Input Parameters#
comm - MPI communicator
m - number of local rows (or PETSC_DECIDE to have it calculated if M is given). This value should be the same as the local size used in creating the y vector for the matrix-vector product y = Ax.
n - number of local columns (or PETSC_DECIDE to have it calculated if N is given). This value should be the same as the local size used in creating the x vector for the matrix-vector product y = Ax. For square matrices n is almost always m.
M - number of global rows (or PETSC_DETERMINE to have it calculated if m is given)
N - number of global columns (or PETSC_DETERMINE to have it calculated if n is given)
d_nz - number of nonzeros per row in the DIAGONAL portion of the local submatrix (the same value is used for all local rows)
d_nnz - array containing the number of nonzeros in the various rows of the DIAGONAL portion of the local submatrix (possibly different for each row), or NULL if d_nz is used to specify the nonzero structure. The size of this array is equal to the number of local rows, i.e. m. For matrices you plan to factor, you must leave room for the diagonal entry and insert that entry even if it is zero.
o_nz - number of nonzeros per row in the OFF-DIAGONAL portion of the local submatrix (the same value is used for all local rows)
o_nnz - array containing the number of nonzeros in the various rows of the OFF-DIAGONAL portion of the local submatrix (possibly different for each row), or NULL if o_nz is used to specify the nonzero structure. The size of this array is equal to the number of local rows, i.e. m.
Output Parameter#
A - the matrix
Options Database Key#
-mat_aijmkl_no_spmv2 - disables use of the SpMV2 inspector-executor routines
Notes#
If the *_nnz parameter is given, then the *_nz parameter is ignored.
The m, n, M, N parameters specify the size of the matrix and its partitioning across
processors, while the d_nz, d_nnz, o_nz, o_nnz parameters specify the approximate
storage requirements for this matrix.
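For example, a minimal sketch of a call that lets PETSc choose the partitioning and uses the scalar d_nz/o_nz form of preallocation (the global size N = 100 and the per-row counts are illustrative assumptions, not values from this page):

```c
Mat      A;
PetscInt N = 100; /* illustrative global size */

/* Square N x N matrix distributed over PETSC_COMM_WORLD; PETSc picks the
   local sizes. Preallocate at most 3 nonzeros per row in the DIAGONAL
   block and at most 2 per row in the OFF-DIAGONAL block. */
PetscCall(MatCreateMPIAIJMKL(PETSC_COMM_WORLD,
                             PETSC_DECIDE, PETSC_DECIDE, /* m, n */
                             N, N,                       /* M, N */
                             3, NULL,                    /* d_nz, d_nnz */
                             2, NULL,                    /* o_nz, o_nnz */
                             &A));
```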
If PETSC_DECIDE or PETSC_DETERMINE is used for a particular argument on one
processor, then it must be used on all processors that share the object for
that argument.
The user MUST specify either the local or global matrix dimensions (possibly both).
If m and n are not PETSC_DECIDE, then the values determine the PetscLayout of the matrix and the ranges returned by
MatGetOwnershipRange(), MatGetOwnershipRanges(), MatGetOwnershipRangeColumn(), and MatGetOwnershipRangesColumn().
The parallel matrix is partitioned such that the first m0 rows belong to
process 0, the next m1 rows belong to process 1, the next m2 rows belong
to process 2, etc., where m0, m1, m2… are the input parameter m on each MPI process.
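As a concrete illustration (the numbers here are assumed for the sketch): with three processes passing m = 2, 3, and 4, the global row count is M = 9 and the ownership ranges are rows [0,2) on rank 0, [2,5) on rank 1, and [5,9) on rank 2, which each rank can confirm with

```c
PetscInt rstart, rend;
PetscCall(MatGetOwnershipRange(A, &rstart, &rend)); /* this rank owns rows [rstart, rend) */
```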
The DIAGONAL portion of the local submatrix of a processor can be defined
as the submatrix obtained by extracting the part corresponding
to the rows r1 - r2 and columns r1 - r2 of the global matrix, where r1 is the
first row that belongs to the processor and r2 is the last row belonging
to this processor. This is a square m x m matrix. The remaining portion
of the local submatrix, of size m x (N-m), constitutes the OFF-DIAGONAL portion.
If o_nnz and d_nnz are specified, then o_nz and d_nz are ignored.
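As a sketch of this per-row form, consider a tridiagonal (1D Laplacian-type) matrix: every row has at most 3 nonzeros, and only the first and last local rows couple to rows owned by neighboring processes. Ignoring the special case of the global first and last rows for brevity, the preallocation arrays could be filled as follows (the local size m = 5 is an assumption for illustration):

```c
PetscInt m = 5; /* illustrative local size */
PetscInt d_nnz[5], o_nnz[5];
Mat      A;

for (PetscInt i = 0; i < m; i++) { /* interior local rows: all 3 entries are local */
  d_nnz[i] = 3;
  o_nnz[i] = 0;
}
d_nnz[0]   = 2; o_nnz[0]   = 1; /* left neighbor row lives on the previous rank */
d_nnz[m-1] = 2; o_nnz[m-1] = 1; /* right neighbor row lives on the next rank */

/* d_nz and o_nz (here 0) are ignored because d_nnz and o_nnz are given */
PetscCall(MatCreateMPIAIJMKL(PETSC_COMM_WORLD, m, m, PETSC_DETERMINE, PETSC_DETERMINE,
                             0, d_nnz, 0, o_nnz, &A));
```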
When calling this routine with a single process communicator, a matrix of
type MATSEQAIJMKL is returned. If a matrix of type MATMPIAIJMKL is desired
for this type of communicator, use the construction mechanism
    MatCreate(...,&A);
    MatSetType(A,MATMPIAIJMKL);
    MatMPIAIJSetPreallocation(A,...);
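Filled in with concrete values, that sequence might look like the following sketch (the sizes and preallocation counts are illustrative; MatSetSizes() is included because the matrix sizes must be set before preallocation):

```c
Mat A;
PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100));
PetscCall(MatSetType(A, MATMPIAIJMKL));
PetscCall(MatMPIAIJSetPreallocation(A, 3, NULL, 2, NULL));
```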
See Also#
Matrices, Mat, Sparse Matrix Creation, MATMPIAIJMKL, MatCreate(), MatCreateSeqAIJMKL(),
MatSetValues(), MatGetOwnershipRange(), MatGetOwnershipRanges(), MatGetOwnershipRangeColumn(),
MatGetOwnershipRangesColumn(), PetscLayout
Level#
intermediate
Location#
src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c