
Routines            Explanation
PSGETRF  PCGETRF    LU factorization and solution of general
PSGETRS  PCGETRS    distributed systems of linear equations
PSTRTRS  PCTRTRS
PSGESV   PCGESV
PSPOTRF  PCPOTRF    Cholesky factorization and solution of real
PSPOTRS  PCPOTRS    symmetric or complex Hermitian distributed
PSPOSV   PCPOSV     systems of linear equations
PSGEQRF  PCGEQRF    QR, RQ, QL, LQ, and QR with column pivoting
PSGERQF  PCGERQF    factorizations of general distributed matrices
PSGEQLF  PCGEQLF
PSGELQF  PCGELQF
PSGEQPF  PCGEQPF
PSGETRI  PCGETRI    Inversion of general, triangular, real symmetric
PSTRTRI  PCTRTRI    positive definite, or complex Hermitian positive
PSPOTRI  PCPOTRI    definite distributed matrices
PSSYTRD  PCHETRD    Reduction of real symmetric or complex
                    Hermitian matrices to tridiagonal form
PSGEBRD  PCGEBRD    Reduction of general matrices to bidiagonal form
PSSYEVX  PCHEEVX    Eigenvalue solvers for real symmetric or
                    complex Hermitian distributed matrices
PSSYGVX  PCHEGVX    Solvers for the generalized eigenvalue problem
                    with real symmetric or complex Hermitian
                    distributed matrices
INDXG2P             Computes the coordinate of the processor in
                    the two-dimensional (2D) processor grid that
                    owns an entry of the distributed array
NUMROC              Computes the number of local rows or columns
                    of the distributed array owned by a processor

Table 4.1: The ScaLAPACK routines on the Cray T3E.
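
The two utility routines at the end of the table answer the basic
data-distribution questions: which process row owns a given global
row, and how many rows a given process stores locally. The following
fragment is a minimal sketch; the block size, grid shape, and index
values are illustrative assumptions only:

      INTEGER            NUMROC, INDXG2P
      EXTERNAL           NUMROC, INDXG2P
      INTEGER            NB, NPROW, MYROW, IOWNER, NLOCAL
*     Assumed values: 64-row blocks over 4 process rows; in a real
*     code MYROW would come from BLACS_GRIDINFO
      NB     = 64
      NPROW  = 4
      MYROW  = 0
*     Process row that owns global row 100 (first block on row 0)
      IOWNER = INDXG2P( 100, NB, MYROW, 0, NPROW )
*     Local rows of a 1000-row matrix stored on this process row
      NLOCAL = NUMROC( 1000, NB, MYROW, 0, NPROW )
      END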
In exactly the same fashion, the subroutines for complex arithmetic
always have a C, never a Z, as their first letter. Thus, you should
call CGEMV, not ZGEMV.
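
As a minimal sketch (the matrix dimension and scalar values are
assumptions for the example only), a complex matrix-vector product
y = alpha*A*x + beta*y is therefore coded on the T3E as:

      COMPLEX            A( 10, 10 ), X( 10 ), Y( 10 )
      COMPLEX            ALPHA, BETA
*     A and X are assumed to be filled elsewhere
      ALPHA = ( 1.0, 0.0 )
      BETA  = ( 0.0, 0.0 )
*     The 64-bit complex routine on the T3E is CGEMV, not ZGEMV
      CALL CGEMV( 'N', 10, 10, ALPHA, A, 10, X, 1, BETA, Y, 1 )
      END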
The BLACS, PBLAS, and ScaLAPACK libraries all share the same method
of distributing matrices and vectors over a processor grid. The
distribution is controlled by an integer vector called the
descriptor. The descriptor in the T3E implementation of ScaLAPACK
used to differ from the one specified in the manuals, but this is no
longer the case. Thus, with respect to the composition of the
descriptor, ScaLAPACK codes should be portable to other machines.
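
The following sketch shows how a descriptor is typically filled with
the auxiliary routine DESCINIT once the BLACS process grid has been
initialized. The matrix size, the blocking factors, and the 2x2 grid
shape are illustrative assumptions, not values prescribed by this
guide:

      PROGRAM DESCEX
      INTEGER            ICTXT, NPROW, NPCOL, MYROW, MYCOL
      INTEGER            M, N, MB, NB, LLDA, INFO
      INTEGER            DESCA( 9 )
      INTEGER            NUMROC
      EXTERNAL           NUMROC
*     Initialize a 2x2 BLACS process grid
      NPROW = 2
      NPCOL = 2
      CALL BLACS_GET( -1, 0, ICTXT )
      CALL BLACS_GRIDINIT( ICTXT, 'Row-major', NPROW, NPCOL )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*     Global matrix dimensions and blocking factors (assumed)
      M  = 1000
      N  = 1000
      MB = 64
      NB = 64
*     Local leading dimension: the number of local rows owned by
*     this process, computed with NUMROC
      LLDA = MAX( 1, NUMROC( M, MB, MYROW, 0, NPROW ) )
*     Fill the nine-entry descriptor for the distributed matrix A
      CALL DESCINIT( DESCA, M, N, MB, NB, 0, 0, ICTXT, LLDA, INFO )
*     ... distributed computation on A would go here ...
      CALL BLACS_GRIDEXIT( ICTXT )
      CALL BLACS_EXIT( 0 )
      END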