COMPLIB.SGIMATH(3F)                                        COMPLIB.SGIMATH(3F)

NAME
complib, complib.sgimath, sgimath - Scientific and Mathematical Library
DESCRIPTION
The Silicon Graphics Scientific and Mathematical Library, complib.sgimath,
is a comprehensive collection of high-performance math routines supporting
the mathematical and numerical techniques used in scientific and technical
computing. SGI provides this library for the convenience of its users;
support is limited to bug fixes at SGI's discretion.
The library complib.sgimath contains an extensive collection of industry
standard libraries such as Basic Linear Algebra Subprograms (BLAS), the
Extended BLAS (Level 2 and Level 3), EISPACK, LINPACK, and LAPACK.
Internally developed libraries for calculating Fast Fourier Transforms
(FFTs) and convolutions are also included, as well as select direct
sparse matrix solvers. Documentation is available per routine via
individual man pages. General man pages for the BLAS (man blas), the FFT
routines (man fft), the convolution routines (man conv), and LAPACK (man
lapack) are also available.
The complib.sgimath library is available on Silicon Graphics Inc.
machines for OS versions 5.1 and higher via the compilation flag
-lcomplib.sgimath (append _mp for the multiprocessing version). The
library is available for the R3000, R4000 (-mips2), and R8000 (-mips4)
architectures, in both single- and multiple-processor (-mp) versions.
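A typical link line might look like the following (the source file name
and optimization flags are illustrative; consult your compiler
documentation for the options supported on your system):

```shell
# Hypothetical example: compile and link a Fortran program against
# complib.sgimath on an R4000-class machine.
f77 -mips2 -O2 -o myprog myprog.f -lcomplib.sgimath

# The same program linked against the multiprocessing version:
f77 -mips2 -mp -O2 -o myprog myprog.f -lcomplib.sgimath_mp
```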
Documentation for LAPACK and LINPACK is available by writing:
SIAM Department BKLP93
P.O. Box 7260
Philadelphia, Pennsylvania 19101
Anderson, E., et al., "LAPACK Users' Guide", SIAM, 1992, $19.50
Dongarra, J., et al., "LINPACK Users' Guide", SIAM, 1979, $19.50
AVAILABILITY
Many of the routines in complib.sgimath are available from:
netlib@research.att.com.
mail netlib@research.att.com
send index
The Internet address "netlib@research.att.com" refers to a gateway
machine, 192.20.225.2, at AT&T Bell Labs in Murray Hill, New Jersey.
This address should be understood on all the major networks. For systems
having only uucp connections, use uunet!research!netlib. In this case,
someone will be paying for long distance 1200bps phone calls, so keep
your requests to a reasonable size!
If ftp is more convenient for you than email, you may connect to
"research.att.com"; log in as "netlib". (This is for read-only ftp, not
telnet.) File names end in ".Z", indicating that the "uncompress"
command must be applied after you've ftp'd them. "compress" source
code for a variety of machines and operating systems can be obtained by
anonymous ftp from ftp.uu.net. The files in netlib/crc/res/ have a list
of files with modification times, lengths, and checksums to assist people
who wish to automatically track changes.
For access from Europe, try the duplicate collection in Oslo:
Internet: netlib@nac.no
EARN/BITNET: netlib%nac.no@norunix.bitnet (now livid.uib.no ?)
X.400: s=netlib; o=nac; prmd=uninett; c=no;
EUNET/uucp: nuug!netlib
For the Pacific, try netlib@draci.cs.uow.edu.au located at the
University of Wollongong, NSW, Australia.
The contents of netlib (other than toms) are available on CD-ROM from
Prime Time Freeware. The price of their two-disc set, which also
includes statlib, TeX, Modula3, Interview, Postgres, Tcl/Tk, and more,
is about $60; for current information contact:
Prime Time Freeware
370 Altair Way, Suite 150
Sunnyvale, CA 94086 USA
Tel: +1 408-738-4832
Fax: +1 408-738-2050
ptf@cfcl.com
The following libraries are available from "netlib@research.att.com".
These libraries are part of complib.sgimath.
The BLAS library, levels 1, 2, and 3, and machine constants.
The LAPACK library, for the most common problems in numerical linear
algebra: linear equations, linear least squares problems, eigenvalue
problems, and singular value problems. It has been designed to be
efficient on a wide range of modern high-performance computers.
The LINPACK library, for linear equations and linear least squares
problems, linear systems whose matrices are general, banded, symmetric
indefinite, symmetric positive definite, triangular, and tridiagonal
square. In addition, the package computes the QR and singular value
decompositions of rectangular matrices and applies them to least squares
problems.
The EISPACK library, a collection of double precision Fortran subroutines
that compute the eigenvalues and eigenvectors of nine classes of
matrices. The package can determine the eigensystems of double complex
general, double complex Hermitian, double precision general, double
precision symmetric, double precision symmetric band, double precision
symmetric tridiagonal, special double precision tridiagonal, generalized
double precision, and generalized double precision symmetric matrices. In
addition, there are two routines which use the singular value
decomposition to solve certain least squares problems.
INDEX
BLAS LIBRARY - Basic Linear Algebra Subprograms
BLAS Level 1
dnrm2, snrm2, zdnrm2, csnrm2 - BLAS level ONE Euclidean norm
functions.
dcopy, scopy, zcopy, ccopy - BLAS level ONE copy subroutines
drotg, srotg, drot, srot - BLAS level ONE rotation subroutines
idamax, isamax, izamax, icamax - BLAS level ONE Maximum index
functions
ddot, sdot, zdotc, cdotc, zdotu, cdotu - BLAS level ONE, dot product
functions
dswap, sswap, zswap, cswap - BLAS level ONE swap subroutines
dasum, sasum, dzasum, scasum - BLAS level ONE L1 norm functions.
dscal, sscal, zscal, cscal, zdscal, csscal - BLAS level ONE scaling
subroutines
daxpy, saxpy, zaxpy, caxpy - BLAS level ONE axpy subroutines
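As a rough illustration of the Level One semantics (not the actual
Fortran interfaces, which also take vector-length and stride arguments
INCX/INCY), the axpy and dot operations can be sketched in Python:

```python
# Illustrative sketch of BLAS Level 1 semantics; assumes unit stride.
# The names mirror daxpy/ddot but these are not the library routines.

def daxpy(alpha, x, y):
    """Return alpha*x + y, elementwise (the axpy operation)."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Return the dot product of x and y."""
    return sum(xi * yi for xi, yi in zip(x, y))

print(daxpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
print(ddot([1.0, 2.0], [3.0, 4.0]))          # 11.0
```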
BLAS Level 2
dgemv, sgemv, zgemv, cgemv - BLAS Level Two Matrix-Vector Product
dspr, sspr, zhpr, chpr - BLAS Level Two Symmetric Packed Matrix Rank 1
Update
dsyr, ssyr, zher, cher - BLAS Level Two (Symmetric/Hermitian) Matrix
Rank 1 Update
dtpmv, stpmv, ztpmv, ctpmv - BLAS Level Two Matrix-Vector Product
dtpsv, stpsv, ztpsv, ctpsv - BLAS Level Two Solution of Triangular
System
dger, sger, zgeru, cgeru, zgerc, cgerc - BLAS Level Two Rank 1
Operation
dspr2, sspr2, zhpr2, chpr2 - BLAS Level Two Symmetric Packed Matrix
Rank 2 Update
dsyr2, ssyr2, zher2, cher2 - BLAS Level Two
(Symmetric/Hermitian) Matrix Rank 2 Update
dsbmv, ssbmv, zhbmv, chbmv - BLAS Level Two (Symmetric/Hermitian)
Banded Matrix - Vector Product
dtrmv, strmv, ztrmv, ctrmv - BLAS Level Two Matrix-Vector Product
dtrsv, strsv, ztrsv, ctrsv - BLAS Level Two Solution of triangular
system of equations.
dgbmv, sgbmv, zgbmv, cgbmv - BLAS Level Two Matrix-Vector Product
dspmv, sspmv, zhpmv, chpmv - BLAS Level Two (Symmetric/Hermitian)
Packed Matrix - Vector Product
dsymv, ssymv, zhemv, chemv - BLAS Level Two
(Symmetric/Hermitian) Matrix-Vector Product
dtbmv, stbmv, ztbmv, ctbmv, dtbsv, stbsv, ztbsv, ctbsv - BLAS Level Two
Matrix-Vector Product and Solution of System of Equations.
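The Level Two operations combine a matrix with vectors; for example, the
general matrix-vector product computed by dgemv is y := alpha*A*x +
beta*y. A minimal Python sketch of that operation (ignoring the TRANS,
leading-dimension, and stride arguments of the real routine):

```python
def gemv(alpha, a, x, beta, y):
    """Return alpha*A*x + beta*y for a dense matrix A given as a list of rows.
    Illustrative only -- not the Fortran dgemv interface."""
    return [alpha * sum(aij * xj for aij, xj in zip(row, x)) + beta * yi
            for row, yi in zip(a, y)]

a = [[1.0, 2.0],
     [3.0, 4.0]]
print(gemv(1.0, a, [1.0, 1.0], 0.0, [0.0, 0.0]))  # [3.0, 7.0]
```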
BLAS Level 3
dtrmm, strmm, ztrmm, ctrmm - BLAS level three Matrix Product
zhemm, chemm - BLAS level three Hermitian Matrix Product
dsyr2k, ssyr2k, zsyr2k, csyr2k - BLAS level three Symmetric Rank 2K
Update.
zher2k and cher2k - BLAS level three Hermitian Rank 2K Update
dsymm, ssymm, zsymm, csymm - BLAS level three Symmetric Matrix Product
dsyrk, ssyrk, zsyrk, csyrk - BLAS level three Symmetric Rank K Update.
dtrsm, strsm, ztrsm, ctrsm - BLAS level three Solution of Systems of
Equations
dgemm, sgemm, zgemm, cgemm - BLAS level three Matrix Product
zherk and cherk - BLAS level three Hermitian Rank K Update
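The Level Three routines work on matrix pairs; dgemm, for instance,
computes C := alpha*A*B + beta*C. A naive Python sketch of the operation
(the library versions are blocked and tuned for the memory hierarchy,
and also accept transpose options):

```python
def gemm(alpha, a, b, beta, c):
    """Return alpha*A*B + beta*C for dense row-major matrices.
    Illustrative triple loop -- not the tuned library routine."""
    n = len(b[0])
    return [[alpha * sum(a[i][k] * b[k][j] for k in range(len(b)))
             + beta * c[i][j]
             for j in range(n)]
            for i in range(len(a))]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
c = [[0.0, 0.0], [0.0, 0.0]]
print(gemm(1.0, a, b, 0.0, c))  # [[19.0, 22.0], [43.0, 50.0]]
```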
EISPACK LIBRARY
BAKVEC - This subroutine forms the eigenvectors of a NONSYMMETRIC
TRIDIAGONAL matrix by back transforming those of the corresponding
symmetric matrix determined by FIGI.
BALANC - This subroutine balances a REAL matrix and isolates eigenvalues
whenever possible.
BALBAK - This subroutine forms the eigenvectors of a REAL GENERAL matrix
by back transforming those of the corresponding balanced matrix
determined by BALANC.
BANDR - This subroutine reduces a REAL SYMMETRIC BAND matrix to a
symmetric tridiagonal matrix using and optionally accumulating orthogonal
similarity transformations.
BANDV - This subroutine finds those eigenvectors of a REAL SYMMETRIC
BAND matrix corresponding to specified eigenvalues, using inverse
iteration. The subroutine may also be used to solve systems of linear
equations with a symmetric or non-symmetric band coefficient matrix.
BISECT - This subroutine finds those eigenvalues of a TRIDIAGONAL
SYMMETRIC matrix which lie in a specified interval, using bisection.
BQR - This subroutine finds the eigenvalue of smallest (usually)
magnitude of a REAL SYMMETRIC BAND matrix using the QR algorithm with
shifts of origin. Consecutive calls can be made to find further
eigenvalues.
CBABK2 - This subroutine forms the eigenvectors of a COMPLEX GENERAL
matrix by back transforming those of the corresponding balanced matrix
determined by CBAL.
CBAL - This subroutine balances a COMPLEX matrix and isolates
eigenvalues whenever possible.
CDIV - Complex division: (CR,CI) = (AR,AI)/(BR,BI).
CG - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a COMPLEX GENERAL matrix.
CH - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a COMPLEX HERMITIAN matrix.
CINVIT - This subroutine finds those eigenvectors of a COMPLEX UPPER
Hessenberg matrix corresponding to specified eigenvalues, using inverse
iteration.
COMBAK - This subroutine forms the eigenvectors of a COMPLEX GENERAL
matrix by back transforming those of the corresponding upper Hessenberg
matrix determined by COMHES.
COMHES - Given a COMPLEX GENERAL matrix, this subroutine reduces a
submatrix situated in rows and columns LOW through IGH to upper
Hessenberg form by stabilized elementary similarity transformations.
COMLR - This subroutine finds the eigenvalues of a COMPLEX UPPER
Hessenberg matrix by the modified LR method.
COMLR2 - This subroutine finds the eigenvalues and eigenvectors of a
COMPLEX UPPER Hessenberg matrix by the modified LR method. The
eigenvectors of a COMPLEX GENERAL matrix can also be found if COMHES
has been used to reduce this general matrix to Hessenberg form.
COMQR - This subroutine finds the eigenvalues of a COMPLEX upper
Hessenberg matrix by the QR method.
COMQR2 - This subroutine finds the eigenvalues and eigenvectors of a
COMPLEX UPPER Hessenberg matrix by the QR method. The eigenvectors of a
COMPLEX GENERAL matrix can also be found if CORTH has been used to
reduce this general matrix to Hessenberg form.
CORTB - This subroutine forms the eigenvectors of a COMPLEX GENERAL
matrix by back transforming those of the corresponding upper Hessenberg
matrix determined by CORTH.
CORTH - Given a COMPLEX GENERAL matrix, this subroutine reduces a
submatrix situated in rows and columns LOW through IGH to upper
Hessenberg form by unitary similarity transformations.
CSROOT - Complex square root: (YR,YI) = SQRT(XR,XI), with the branch
chosen so that YR .GE. 0.0 and SIGN(YI) .EQ. SIGN(XI).
ELMBAK - This subroutine forms the eigenvectors of a REAL GENERAL matrix
by back transforming those of the corresponding upper Hessenberg matrix
determined by ELMHES.
ELMHES - Given a REAL GENERAL matrix, this subroutine reduces a
submatrix situated in rows and columns LOW through IGH to upper
Hessenberg form by stabilized elementary similarity transformations.
ELTRAN - This subroutine accumulates the stabilized elementary
similarity transformations used in the reduction of a REAL GENERAL matrix
to upper Hessenberg form by ELMHES.
EPSLON - Estimates unit roundoff in quantities of size X.
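The idea behind EPSLON can be sketched in a few lines: repeatedly halve
a candidate until adding half of it to |X| no longer changes |X|. This
is a hypothetical Python rendering of the idea, not the Fortran source:

```python
def epslon(x):
    """Estimate the unit roundoff in quantities of size x: the smallest
    eps (as found by halving) such that abs(x) + eps differs from abs(x)."""
    a = abs(x)
    eps = a
    while a + eps / 2.0 != a:
        eps /= 2.0
    return eps

import sys
print(epslon(1.0) == sys.float_info.epsilon)  # True on IEEE double hardware
```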
FIGI - Given a NONSYMMETRIC TRIDIAGONAL matrix such that the products
of corresponding pairs of off-diagonal elements are all non-negative,
this subroutine reduces it to a symmetric tridiagonal matrix with the
same eigenvalues. If, further, a zero product only occurs when both
factors are zero, the reduced matrix is similar to the original matrix.
FIGI2 - Given a NONSYMMETRIC TRIDIAGONAL matrix such that the products
of corresponding pairs of off-diagonal elements are all non-negative, and
zero only when both factors are zero, this subroutine reduces it to a
SYMMETRIC TRIDIAGONAL matrix using and accumulating diagonal similarity
transformations.
HQR - This subroutine finds the eigenvalues of a REAL UPPER
Hessenberg matrix by the QR method.
HQR2 - This subroutine finds the eigenvalues and eigenvectors of a
REAL UPPER Hessenberg matrix by the QR method. The eigenvectors of a
REAL GENERAL matrix can also be found if ELMHES and ELTRAN or ORTHES
and ORTRAN have been used to reduce this general matrix to Hessenberg
form and to accumulate the similarity transformations.
HTRIB3 - This subroutine forms the eigenvectors of a COMPLEX HERMITIAN
matrix by back transforming those of the corresponding real symmetric
tridiagonal matrix determined by HTRID3.
HTRIBK - This subroutine forms the eigenvectors of a COMPLEX HERMITIAN
matrix by back transforming those of the corresponding real symmetric
tridiagonal matrix determined by HTRIDI.
HTRID3 - This subroutine reduces a COMPLEX HERMITIAN matrix, stored as a
single square array, to a real symmetric tridiagonal matrix using unitary
similarity transformations.
HTRIDI - This subroutine reduces a COMPLEX HERMITIAN matrix to a real
symmetric tridiagonal matrix using unitary similarity transformations.
IMTQL1 - This subroutine finds the eigenvalues of a SYMMETRIC
TRIDIAGONAL matrix by the implicit QL method.
IMTQL2 - This subroutine finds the eigenvalues and eigenvectors of a
SYMMETRIC TRIDIAGONAL matrix by the implicit QL method. The eigenvectors
of a FULL SYMMETRIC matrix can also be found if TRED2 has been used to
reduce this full matrix to tridiagonal form.
IMTQLV - This subroutine finds the eigenvalues of a SYMMETRIC
TRIDIAGONAL matrix by the implicit QL method and associates with them
their corresponding submatrix indices.
INVIT - This subroutine finds those eigenvectors of a REAL UPPER
Hessenberg matrix corresponding to specified eigenvalues, using inverse
iteration.
MINFIT - This subroutine determines, towards the solution of the linear
system A*X = B, the singular value decomposition A = U*S*TRANS(V) of a
real M by N rectangular matrix, forming TRANS(U)*B rather than U.
Householder bidiagonalization and a variant of the QR algorithm are used.
ORTBAK - This subroutine forms the eigenvectors of a REAL GENERAL matrix
by back transforming those of the corresponding upper Hessenberg matrix
determined by ORTHES.
ORTHES - Given a REAL GENERAL matrix, this subroutine reduces a
submatrix situated in rows and columns LOW through IGH to upper
Hessenberg form by orthogonal similarity transformations.
ORTRAN - This subroutine accumulates the orthogonal similarity
transformations used in the reduction of a REAL GENERAL matrix to upper
Hessenberg form by ORTHES.
PYTHAG - Finds SQRT(A**2+B**2) without overflow or destructive underflow.
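The point of PYTHAG is to avoid squaring large or tiny numbers directly.
This hypothetical Python sketch uses the standard scale-by-the-larger-
magnitude trick (EISPACK's actual implementation uses a different
iterative scheme):

```python
import math

def pythag(a, b):
    """sqrt(a**2 + b**2) without overflow or destructive underflow."""
    a, b = abs(a), abs(b)
    if a < b:
        a, b = b, a          # ensure a >= b
    if a == 0.0:
        return 0.0
    # (b/a) <= 1, so squaring it cannot overflow
    return a * math.sqrt(1.0 + (b / a) ** 2)

print(pythag(3.0, 4.0))    # 5.0
print(pythag(1e200, 0.0))  # 1e+200 (naive sqrt(a**2+b**2) would overflow)
```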
QZHES - This subroutine accepts a pair of REAL GENERAL matrices and
reduces one of them to upper Hessenberg form and the other to upper
triangular form using orthogonal transformations. It is usually followed
by QZIT, QZVAL and, possibly, QZVEC.
QZIT - This subroutine accepts a pair of REAL matrices, one of them in
upper Hessenberg form and the other in upper triangular form. It reduces
the Hessenberg matrix to quasi-triangular form using orthogonal
transformations while maintaining the triangular form of the other
matrix. It is usually preceded by QZHES and followed by QZVAL and,
possibly, QZVEC.
QZVAL - This subroutine accepts a pair of REAL matrices, one of them in
quasi-triangular form and the other in upper triangular form. It reduces
the quasi-triangular matrix further, so that any remaining 2-by-2 blocks
correspond to pairs of complex eigenvalues, and returns quantities whose
ratios give the generalized eigenvalues. It is usually preceded by
QZHES and QZIT and may be followed by QZVEC.
QZVEC - This subroutine accepts a pair of REAL matrices, one of them in
quasi-triangular form (in which each 2-by-2 block corresponds to a pair
of complex eigenvalues) and the other in upper triangular form. It
computes the eigenvectors of the triangular problem and transforms the
results back to the original coordinate system. It is usually preceded
by QZHES, QZIT, and QZVAL.
RATQR - This subroutine finds the algebraically smallest or largest
eigenvalues of a SYMMETRIC TRIDIAGONAL matrix by the rational QR method
with Newton corrections.
REBAK - This subroutine forms the eigenvectors of a generalized
SYMMETRIC eigensystem by back transforming those of the derived symmetric
matrix determined by REDUC.
REBAKB - This subroutine forms the eigenvectors of a generalized
SYMMETRIC eigensystem by back transforming those of the derived symmetric
matrix determined by REDUC2.
REDUC - This subroutine reduces the generalized SYMMETRIC eigenproblem
Ax=(LAMBDA)Bx, where B is POSITIVE DEFINITE, to the standard symmetric
eigenproblem using the Cholesky factorization of B.
REDUC2 - This subroutine reduces the generalized SYMMETRIC eigenproblems
ABx=(LAMBDA)x OR BAy=(LAMBDA)y, where B is POSITIVE DEFINITE, to the
standard symmetric eigenproblem using the Cholesky factorization of B.
RG - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a REAL GENERAL matrix.
RGG - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) for the REAL GENERAL GENERALIZED
eigenproblem Ax = (LAMBDA)Bx.
RS - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a REAL SYMMETRIC matrix.
RSB - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a REAL SYMMETRIC BAND matrix.
RSG - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) for the REAL SYMMETRIC generalized
eigenproblem Ax = (LAMBDA)Bx.
RSGAB - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) for the REAL SYMMETRIC generalized
eigenproblem ABx = (LAMBDA)x.
RSGBA - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) for the REAL SYMMETRIC generalized
eigenproblem BAx = (LAMBDA)x.
RSM - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find all of the
eigenvalues and some of the eigenvectors of a REAL SYMMETRIC matrix.
RSP - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a REAL SYMMETRIC PACKED matrix.
RST - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a REAL SYMMETRIC TRIDIAGONAL matrix.
RT - This subroutine calls the recommended sequence of subroutines
from the eigensystem subroutine package (EISPACK) to find the eigenvalues
and eigenvectors (if desired) of a special REAL TRIDIAGONAL matrix.
SVD - This subroutine determines the singular value decomposition
A = U*S*TRANS(V) of a REAL M by N rectangular matrix. Householder
bidiagonalization and a variant of the QR algorithm are used.
TINVIT - This subroutine finds those eigenvectors of a TRIDIAGONAL
SYMMETRIC matrix corresponding to specified eigenvalues, using inverse
iteration.
TQL1 - This subroutine finds the eigenvalues of a SYMMETRIC
TRIDIAGONAL matrix by the QL method.
TQL2 - This subroutine finds the eigenvalues and eigenvectors of a
SYMMETRIC TRIDIAGONAL matrix by the QL method. The eigenvectors of a
FULL SYMMETRIC matrix can also be found if TRED2 has been used to
reduce this full matrix to tridiagonal form.
TQLRAT - This subroutine finds the eigenvalues of a SYMMETRIC
TRIDIAGONAL matrix by the rational QL method.
TRBAK1 - This subroutine forms the eigenvectors of a REAL SYMMETRIC
matrix by back transforming those of the corresponding symmetric
tridiagonal matrix determined by TRED1.
TRBAK3 - This subroutine forms the eigenvectors of a REAL SYMMETRIC
matrix by back transforming those of the corresponding symmetric
tridiagonal matrix determined by TRED3.
TRED1 - This subroutine reduces a REAL SYMMETRIC matrix to a symmetric
tridiagonal matrix using orthogonal similarity transformations.
TRED2 - This subroutine reduces a REAL SYMMETRIC matrix to a symmetric
tridiagonal matrix using and accumulating orthogonal similarity
transformations.
TRED3 - This subroutine reduces a REAL SYMMETRIC matrix, stored as a
one-dimensional array, to a symmetric tridiagonal matrix using orthogonal
similarity transformations.
TRIDIB - This subroutine finds those eigenvalues of a TRIDIAGONAL
SYMMETRIC matrix between specified boundary indices, using bisection.
TSTURM - This subroutine finds those eigenvalues of a TRIDIAGONAL
SYMMETRIC matrix which lie in a specified interval and their associated
eigenvectors, using bisection and inverse iteration.
LINPACK LIBRARY
CCHDC - CCHDC computes the Cholesky decomposition of a positive
definite matrix. A pivoting option allows the user to estimate the
condition of a positive definite matrix or determine the rank of a
positive semidefinite matrix.
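The decomposition CCHDC computes is A = CTRANS(R)*R with R upper
triangular. A small Python sketch of the unpivoted, real-arithmetic
version of that factorization (the library routine works in complex
arithmetic, stores its results in place, and supports pivoting):

```python
import math

def cholesky_upper(a):
    """Return upper-triangular R with TRANS(R)*R = A, for a symmetric
    positive definite matrix A given as a list of rows. Illustrative only."""
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            if i == j:
                r[i][j] = math.sqrt(s)   # fails if A is not positive definite
            else:
                r[i][j] = s / r[i][i]
    return r

print(cholesky_upper([[4.0, 2.0], [2.0, 3.0]]))  # [[2.0, 1.0], [0.0, 1.414...]]
```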
CCHDD - CCHDD downdates an augmented Cholesky decomposition or the
triangular factor of an augmented QR decomposition. Specifically, given
an upper triangular matrix R of order P, a row vector X, a column vector
Z, and a scalar Y, CCHDD determines a unitary matrix U and a scalar ZETA
such that
        (R   Z  )     (RR  ZZ)
    U * (       )  =  (      ) ,
        (0  ZETA)     ( X   Y)
where RR is upper triangular. If R and Z have been obtained from the
factorization of a least squares problem, then RR and ZZ are the factors
corresponding to the problem with the observation (X,Y) removed. In this
case, if RHO is the norm of the residual vector, then the norm of the
residual vector of the downdated problem is SQRT(RHO**2 - ZETA**2).
CCHDD will simultaneously downdate several triplets (Z,Y,RHO) along with
R. For a less terse description of what CCHDD does and how it may be
applied, see the LINPACK Guide.
CCHEX - CCHEX updates the Cholesky factorization
A = CTRANS(R)*R
of a positive definite matrix A of order P under diagonal permutations of
the form
TRANS(E)*A*E
where E is a permutation matrix. Specifically, given an upper triangular
matrix R and a permutation matrix E (which is specified by K, L, and
JOB), CCHEX determines a unitary matrix U such that
U*R*E = RR,
where RR is upper triangular. At the user's option, the transformation U
will be multiplied into the array Z. If A = CTRANS(X)*X, so that R is
the triangular part of the QR factorization of X, then RR is the
triangular part of the QR factorization of X*E, i.e. X with its columns
permuted. For a less terse description of what CCHEX does and how it may
be applied, see the LINPACK Guide.
CCHUD - CCHUD updates an augmented Cholesky decomposition of the
triangular part of an augmented QR decomposition. Specifically, given an
upper triangular matrix R of order P, a row vector X, a column vector Z,
and a scalar Y, CCHUD determines a unitary matrix U and a scalar ZETA
such that
        (R  Z)     (RR  ZZ )
    U * (    )  =  (       ) ,
        (X  Y)     ( 0 ZETA)
where RR is upper triangular. If R and Z have been obtained from the
factorization of a least squares problem, then RR and ZZ are the factors
corresponding to the problem with the observation (X,Y) appended. In
this case, if RHO is the norm of the residual vector, then the norm of
the residual vector of the updated problem is SQRT(RHO**2 + ZETA**2).
CCHUD will simultaneously update several triplets (Z,Y,RHO).
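The real-arithmetic core of this update, applied to R alone, can be
sketched with Givens rotations: each rotation zeroes one entry of the
appended row X against the corresponding diagonal of R. This
hypothetical Python version ignores the Z, Y, and RHO machinery:

```python
import math

def chud(r, x):
    """Rank-1 Cholesky update: return RR (upper triangular) with
    TRANS(RR)*RR = TRANS(R)*R + TRANS(x)*x, via Givens rotations.
    Illustrative sketch; assumes R has a nonzero diagonal."""
    p = len(x)
    r = [row[:] for row in r]   # work on copies
    x = x[:]
    for k in range(p):
        h = math.hypot(r[k][k], x[k])
        c, s = r[k][k] / h, x[k] / h   # rotation zeroing x[k]
        for j in range(k, p):
            t = c * r[k][j] + s * x[j]
            x[j] = -s * r[k][j] + c * x[j]
            r[k][j] = t
    return r

# Updating the identity factor with the appended row (3, 4):
rr = chud([[1.0, 0.0], [0.0, 1.0]], [3.0, 4.0])
```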
CGBCO - CGBCO factors a complex band matrix by Gaussian elimination and
estimates the condition of the matrix.
CGBDI - CGBDI computes the determinant of a band matrix using the
factors computed by CGBCO or CGBFA. If the inverse is needed, use CGBSL
N times.
CGBFA - CGBFA factors a complex band matrix by elimination.
CGBSL - CGBSL solves the complex band system A * X = B or CTRANS(A) *
X = B using the factors computed by CGBCO or CGBFA.
CGECO - CGECO factors a complex matrix by Gaussian elimination and
estimates the condition of the matrix.
CGEDI - CGEDI computes the determinant and inverse of a matrix using
the factors computed by CGECO or CGEFA.
CGEFA - CGEFA factors a complex matrix by Gaussian elimination.
CGESL - CGESL solves the complex system A * X = B or CTRANS(A) * X =
B using the factors computed by CGECO or CGEFA.
CGTSL - Given a general tridiagonal matrix and a right-hand side,
CGTSL finds the solution.
CHICO - CHICO factors a complex Hermitian matrix by elimination with
symmetric pivoting and estimates the condition of the matrix.
CHIDI - CHIDI computes the determinant, inertia and inverse of a
complex Hermitian matrix using the factors from CHIFA.
CHIFA - CHIFA factors a complex Hermitian matrix by elimination with
symmetric pivoting.
CHISL - CHISL solves the complex Hermitian system A * X = B using the
factors computed by CHIFA.
CHPCO - CHPCO factors a complex Hermitian matrix stored in packed form
by elimination with symmetric pivoting and estimates the condition of the
matrix.
CHPDI - CHPDI computes the determinant, inertia and inverse of a
complex Hermitian matrix using the factors from CHPFA, where the matrix
is stored in packed form.
CHPFA - CHPFA factors a complex Hermitian matrix stored in packed form
by elimination with symmetric pivoting.
CHPSL - CHPSL solves the complex Hermitian system A * X = B using the
factors computed by CHPFA.
CPBCO - CPBCO factors a complex Hermitian positive definite matrix
stored in band form and estimates the condition of the matrix.
CPBDI - CPBDI computes the determinant of a complex Hermitian positive
definite band matrix using the factors computed by CPBCO or CPBFA. If
the inverse is needed, use CPBSL N times.
CPBFA - CPBFA factors a complex Hermitian positive definite matrix
stored in band form.
CPBSL - CPBSL solves the complex Hermitian positive definite band
system A*X = B using the factors computed by CPBCO or CPBFA.
CPOCO - CPOCO factors a complex Hermitian positive definite matrix and
estimates the condition of the matrix.
CPODI - CPODI computes the determinant and inverse of a certain complex
Hermitian positive definite matrix (see below) using the factors computed
by CPOCO, CPOFA or CQRDC.
CPOFA - CPOFA factors a complex Hermitian positive definite matrix.
CPOSL - CPOSL solves the COMPLEX Hermitian positive definite system A *
X = B using the factors computed by CPOCO or CPOFA.
CPPCO - CPPCO factors a complex Hermitian positive definite matrix
stored in packed form and estimates the condition of the matrix.
CPPDI - CPPDI computes the determinant and inverse of a complex
Hermitian positive definite matrix using the factors computed by CPPCO
or CPPFA.
CPPFA - CPPFA factors a complex Hermitian positive definite matrix
stored in packed form.
CPPSL - CPPSL solves the complex Hermitian positive definite system A *
X = B using the factors computed by CPPCO or CPPFA.
CPTSL - Given a positive definite tridiagonal matrix and a right-hand
side, CPTSL finds the solution.
CQRDC - CQRDC uses Householder transformations to compute the QR
factorization of an N by P matrix X. Column pivoting based on the
2-norms of the reduced columns may be performed at the user's option.
CQRSL - CQRSL applies the output of CQRDC to compute coordinate
transformations, projections, and least squares solutions. For K .LE.
MIN(N,P), let XK be the matrix
    XK = (X(JVPT(1)), X(JVPT(2)), ..., X(JVPT(K)))
formed from columns JVPT(1), ..., JVPT(K) of the original N x P matrix X
that was input to CQRDC (if no pivoting was done, XK consists of the
first K columns of X in their original order). CQRDC produces a factored
unitary matrix Q and an upper triangular matrix R such that
    XK = Q * (R)
             (0)
This information is contained in coded form in the arrays X and QRAUX.
CSICO - CSICO factors a complex symmetric matrix by elimination with
symmetric pivoting and estimates the condition of the matrix.
CSIDI - CSIDI computes the determinant and inverse of a complex
symmetric matrix using the factors from CSIFA.
CSIFA - CSIFA factors a complex symmetric matrix by elimination with
symmetric pivoting.
CSISL - CSISL solves the complex symmetric system A * X = B using the
factors computed by CSIFA.
CSPCO - CSPCO factors a complex symmetric matrix stored in packed form
by elimination with symmetric pivoting and estimates the condition of the
matrix.
CSPDI - CSPDI computes the determinant and inverse of a complex
symmetric matrix using the factors from CSPFA, where the matrix is stored
in packed form.
CSPFA - CSPFA factors a complex symmetric matrix stored in packed form
by elimination with symmetric pivoting.
CSPSL - CSPSL solves the complex symmetric system A * X = B using the
factors computed by CSPFA.
CSVDC - CSVDC is a subroutine to reduce a complex NxP matrix X by
unitary transformations U and V to diagonal form. The diagonal elements
S(I) are the singular values of X. The columns of U are the
corresponding left singular vectors, and the columns of V the right
singular vectors.
CTRCO - CTRCO estimates the condition of a complex triangular matrix.
CTRDI - CTRDI computes the determinant and inverse of a complex
triangular matrix.
CTRSL - CTRSL solves systems of the form
T * X = B or
CTRANS(T) * X = B
where T is a triangular matrix of order N. Here CTRANS(T) denotes the
conjugate transpose of the matrix T.
DCHDC - DCHDC computes the Cholesky decomposition of a positive
definite matrix. A pivoting option allows the user to estimate the
condition of a positive definite matrix or determine the rank of a
positive semidefinite matrix.
DCHDD - DCHDD downdates an augmented Cholesky decomposition or the
triangular factor of an augmented QR decomposition. Specifically, given
an upper triangular matrix R of order P, a row vector X, a column vector
Z, and a scalar Y, DCHDD determines an orthogonal matrix U and a scalar
ZETA such that
        (R   Z  )     (RR  ZZ)
    U * (       )  =  (      ) ,
        (0  ZETA)     ( X   Y)
where RR is upper triangular. If R and Z have been obtained from the
factorization of a least squares problem, then RR and ZZ are the factors
corresponding to the problem with the observation (X,Y) removed. In this
case, if RHO is the norm of the residual vector, then the norm of the
residual vector of the downdated problem is DSQRT(RHO**2 - ZETA**2).
DCHDD will simultaneously downdate several triplets (Z,Y,RHO) along with
R. For a less terse description of what DCHDD does and how it may be
applied, see the LINPACK guide.
DCHEX - DCHEX updates the Cholesky factorization
A = TRANS(R)*R
of a positive definite matrix A of order P under diagonal permutations of
the form
TRANS(E)*A*E
where E is a permutation matrix. Specifically, given an upper triangular
matrix R and a permutation matrix E (which is specified by K, L, and
JOB), DCHEX determines an orthogonal matrix U such that
U*R*E = RR,
where RR is upper triangular. At the user's option, the transformation U
will be multiplied into the array Z. If A = TRANS(X)*X, so that R is the
triangular part of the QR factorization of X, then RR is the triangular
part of the QR factorization of X*E, i.e., X with its columns permuted.
For a less terse description of what DCHEX does and how it may be
applied, see the LINPACK guide.
DCHUD - DCHUD updates an augmented Cholesky decomposition of the
triangular part of an augmented QR decomposition. Specifically, given an
upper triangular matrix R of order P, a row vector X, a column vector Z,
and a scalar Y, DCHUD determines a unitary matrix U and a scalar ZETA
such that
         (R   Z)      (RR  ZZ  )
     U * (     )  =   (        ) ,
         (X   Y)      ( 0  ZETA)
where RR is upper triangular. If R and Z have been obtained from the
factorization of a least squares problem, then RR and ZZ are the factors
corresponding to the problem with the observation (X,Y) appended. In
this case, if RHO is the norm of the residual vector, then the norm of
the residual vector of the updated problem is DSQRT(RHO**2 + ZETA**2).
DCHUD will simultaneously update several triplets (Z,Y,RHO). For a less
terse description of what DCHUD does and how it may be applied, see the
LINPACK guide.
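The core of what DCHUD does (without the Z/Y triplet machinery) is a rank-one update of a Cholesky factor by plane rotations. The helper `chol_update` below is an illustrative sketch under that assumption, not a complib routine:

```python
import numpy as np

def chol_update(R, x):
    """Return upper triangular R1 with R1'*R1 = R'*R + x*x'.
    Sketch of the rotation-based update DCHUD performs."""
    R = R.copy()
    x = x.copy()
    n = len(x)
    for k in range(n):
        r = np.hypot(R[k, k], x[k])          # rotation that zeroes x[k]
        c, s = r / R[k, k], x[k] / R[k, k]
        R[k, k] = r
        R[k, k+1:] = (R[k, k+1:] + s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * R[k, k+1:]
    return R

A = np.array([[4.0, 2.0], [2.0, 3.0]])
R = np.linalg.cholesky(A).T                  # upper factor, A = R'*R
x = np.array([1.0, 0.5])                     # appended observation row
R1 = chol_update(R, x)
assert np.allclose(R1.T @ R1, A + np.outer(x, x))
```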
DGBCO - DGBCO factors a double precision band matrix by Gaussian
elimination and estimates the condition of the matrix.
DGBDI - DGBDI computes the determinant of a band matrix using the
factors computed by DGBCO or DGBFA. If the inverse is needed, use DGBSL
N times.
DGBFA - DGBFA factors a double precision band matrix by elimination.
DGBSL - DGBSL solves the double precision band system A * X = B or
TRANS(A) * X = B using the factors computed by DGBCO or DGBFA.
DGECO - DGECO factors a double precision matrix by Gaussian elimination
and estimates the condition of the matrix.
DGEDI - DGEDI computes the determinant and inverse of a matrix using
the factors computed by DGECO or DGEFA.
DGEFA - DGEFA factors a double precision matrix by Gaussian
elimination.
DGESL - DGESL solves the double precision system A * X = B or
TRANS(A) * X = B using the factors computed by DGECO or DGEFA.
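The DGEFA/DGESL pattern (factor once, solve repeatedly, with or without the transpose) maps onto SciPy's LAPACK-backed helpers. A sketch, assuming SciPy is available; these are analogues, not the complib routines themselves, and DGECO's condition estimate is not shown:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

lu, piv = lu_factor(A)                 # factor step (analogue of DGEFA)
x = lu_solve((lu, piv), b)             # solve A * X = B (analogue of DGESL)
xt = lu_solve((lu, piv), b, trans=1)   # solve TRANS(A) * X = B

assert np.allclose(A @ x, b)
assert np.allclose(A.T @ xt, b)
```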
DGTSL - DGTSL, given a general tridiagonal matrix and a right hand side,
will find the solution.
DPBCO - DPBCO factors a double precision symmetric positive definite
matrix stored in band form and estimates the condition of the matrix.
DPBDI - DPBDI computes the determinant of a double precision symmetric
positive definite band matrix using the factors computed by DPBCO or
DPBFA. If the inverse is needed, use DPBSL N times.
DPBFA - DPBFA factors a double precision symmetric positive definite
matrix stored in band form.
DPBSL - DPBSL solves the double precision symmetric positive definite
band system A*X = B using the factors computed by DPBCO or DPBFA.
DPOCO - DPOCO factors a double precision symmetric positive definite
matrix and estimates the condition of the matrix.
DPODI - DPODI computes the determinant and inverse of a certain double
precision symmetric positive definite matrix (see below) using the
factors computed by DPOCO, DPOFA or DQRDC.
DPOFA - DPOFA factors a double precision symmetric positive definite
matrix.
DPOSL - DPOSL solves the double precision symmetric positive definite
system A * X = B using the factors computed by DPOCO or DPOFA.
DPPCO - DPPCO factors a double precision symmetric positive definite
matrix stored in packed form and estimates the condition of the matrix.
DPPDI - DPPDI computes the determinant and inverse of a double
precision symmetric positive definite matrix using the factors computed
by DPPCO or DPPFA.
DPPFA - DPPFA factors a double precision symmetric positive definite
matrix stored in packed form.
DPPSL - DPPSL solves the double precision symmetric positive definite
system A * X = B using the factors computed by DPPCO or DPPFA.
DPTSL - DPTSL, given a positive definite symmetric tridiagonal matrix
and a right hand side, will find the solution.
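Positive definite tridiagonal systems of the kind DPTSL handles can be solved through SciPy's symmetric banded interface; a sketch, assuming SciPy's `solveh_banded` upper band storage convention:

```python
import numpy as np
from scipy.linalg import solveh_banded

# A = tridiag(-1, 2, -1), symmetric positive definite, order 3.
# Upper band storage: row 0 holds the superdiagonal (padded on the left),
# row 1 holds the main diagonal.
ab = np.array([[0.0, -1.0, -1.0],
               [2.0,  2.0,  2.0]])
b = np.array([0.0, 0.0, 4.0])       # chosen so the solution is [1, 2, 3]

x = solveh_banded(ab, b)
assert np.allclose(x, [1.0, 2.0, 3.0])
```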
DQRDC - DQRDC uses Householder transformations to compute the QR
factorization of an N by P matrix X. Column pivoting based on the 2-
norms of the reduced columns may be performed at the user's option.
DQRSL - DQRSL applies the output of DQRDC to compute coordinate
transformations, projections, and least squares solutions. For K .LE.
MIN(N,P), let XK be the matrix
XK = (X(JPVT(1)),X(JPVT(2)), ... ,X(JPVT(K)))
formed from columns JPVT(1), ... ,JPVT(K) of the original N x P matrix X
that was input to DQRDC (if no pivoting was done, XK consists of the
first K columns of X in their original order). DQRDC produces a factored
orthogonal matrix Q and an upper triangular matrix R such that
     XK = Q * (R)
              (0)
This information is contained in coded form in the arrays X and QRAUX.
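The column-pivoted QR factorization DQRDC computes corresponds to SciPy's `qr` with `pivoting=True` (a LAPACK-backed analogue, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import qr

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Column-pivoted QR: X[:, jpvt] = Q * R, with the columns permuted so the
# diagonal of R is non-increasing (the same idea as DQRDC's pivoting).
Q, R, jpvt = qr(X, mode='economic', pivoting=True)

assert np.allclose(X[:, jpvt], Q @ R)
assert np.allclose(np.tril(R, -1), 0)    # R is upper triangular
```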
DSICO - DSICO factors a double precision symmetric matrix by
elimination with symmetric pivoting and estimates the condition of the
matrix.
DSIDI - DSIDI computes the determinant, inertia and inverse of a double
precision symmetric matrix using the factors from DSIFA.
DSIFA - DSIFA factors a double precision symmetric matrix by
elimination with symmetric pivoting.
DSISL - DSISL solves the double precision symmetric system A * X = B
using the factors computed by DSIFA.
DSPCO - DSPCO factors a double precision symmetric matrix stored in
packed form by elimination with symmetric pivoting and estimates the
condition of the matrix.
DSPDI - DSPDI computes the determinant, inertia and inverse of a double
precision symmetric matrix using the factors from DSPFA, where the matrix
is stored in packed form.
DSPFA - DSPFA factors a double precision symmetric matrix stored in
packed form by elimination with symmetric pivoting.
DSPSL - DSPSL solves the double precision symmetric system A * X = B
using the factors computed by DSPFA.
DSVDC - DSVDC is a subroutine to reduce a double precision NxP matrix X
by orthogonal transformations U and V to diagonal form. The diagonal
elements S(I) are the singular values of X. The columns of U are the
corresponding left singular vectors, and the columns of V the right
singular vectors.
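The decomposition DSVDC produces is the standard SVD; NumPy's `svd` returns the same quantities (a sketch, assuming NumPy; singular values come back in non-increasing order):

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])

# U holds the left singular vectors, the rows of Vt the right singular
# vectors, and s the singular values, as in DSVDC.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

assert np.allclose(s, [2.0, 1.0])
assert np.allclose(U @ np.diag(s) @ Vt, X)
```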
DTRCO - DTRCO estimates the condition of a double precision triangular
matrix.
DTRDI - DTRDI computes the determinant and inverse of a double
precision triangular matrix.
DTRSL - DTRSL solves systems of the form
T * X = B or
TRANS(T) * X = B
where T is a triangular matrix of order N. Here TRANS(T) denotes the
transpose of the matrix T.
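Both forms DTRSL solves are available through SciPy's `solve_triangular` (an analogue, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import solve_triangular

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([4.0, 6.0])

x = solve_triangular(T, b)               # T * X = B
xt = solve_triangular(T, b, trans='T')   # TRANS(T) * X = B

assert np.allclose(T @ x, b)
assert np.allclose(T.T @ xt, b)
```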
SCHDC - SCHDC computes the Cholesky decomposition of a positive
definite matrix. A pivoting option allows the user to estimate the
condition of a positive definite matrix or determine the rank of a
positive semidefinite matrix.
SCHDD - SCHDD downdates an augmented Cholesky decomposition or the
triangular factor of an augmented QR decomposition. Specifically, given
an upper triangular matrix R of order P, a row vector X, a column vector
Z, and a scalar Y, SCHDD determines an orthogonal matrix U and a scalar
ZETA such that
         (R    Z)      (RR  ZZ)
     U * (      )  =   (      ) ,
         (0 ZETA)      ( X   Y)
where RR is upper triangular. If R and Z have been obtained from the
factorization of a least squares problem, then RR and ZZ are the factors
corresponding to the problem with the observation (X,Y) removed. In this
case, if RHO is the norm of the residual vector, then the norm of the
residual vector of the downdated problem is SQRT(RHO**2 - ZETA**2). SCHDD
will simultaneously downdate several triplets (Z,Y,RHO) along with R.
For a less terse description of what SCHDD does and how it may be
applied, see the LINPACK guide.
SCHEX - SCHEX updates the Cholesky factorization
A = TRANS(R)*R
of a positive definite matrix A of order P under diagonal permutations of
the form
TRANS(E)*A*E
where E is a permutation matrix. Specifically, given an upper triangular
matrix R and a permutation matrix E (which is specified by K, L, and
JOB), SCHEX determines an orthogonal matrix U such that
U*R*E = RR,
where RR is upper triangular. At the user's option, the transformation U
will be multiplied into the array Z. If A = TRANS(X)*X, so that R is the
triangular part of the QR factorization of X, then RR is the triangular
part of the QR factorization of X*E, i.e., X with its columns permuted.
For a less terse description of what SCHEX does and how it may be
applied, see the LINPACK guide.
SCHUD - SCHUD updates an augmented Cholesky decomposition of the
triangular part of an augmented QR decomposition. Specifically, given an
upper triangular matrix R of order P, a row vector X, a column vector Z,
and a scalar Y, SCHUD determines a unitary matrix U and a scalar ZETA
such that
         (R   Z)      (RR  ZZ  )
     U * (     )  =   (        ) ,
         (X   Y)      ( 0  ZETA)
where RR is upper triangular. If R and Z have been obtained from the
factorization of a least squares problem, then RR and ZZ are the factors
corresponding to the problem with the observation (X,Y) appended. In
this case, if RHO is the norm of the residual vector, then the norm of
the residual vector of the updated problem is SQRT(RHO**2 + ZETA**2).
SCHUD will simultaneously update several triplets (Z,Y,RHO). For a less
terse description of what SCHUD does and how it may be applied, see the
LINPACK guide.
SGBCO - SGBCO factors a real band matrix by Gaussian elimination and
estimates the condition of the matrix.
SGBDI - SGBDI computes the determinant of a band matrix using the
factors computed by SGBCO or SGBFA. If the inverse is needed, use SGBSL
N times.
SGBFA - SGBFA factors a real band matrix by elimination.
SGBSL - SGBSL solves the real band system A * X = B or TRANS(A) * X =
B using the factors computed by SGBCO or SGBFA.
SGECO - SGECO factors a real matrix by Gaussian elimination and
estimates the condition of the matrix.
SGEDI - SGEDI computes the determinant and inverse of a matrix using
the factors computed by SGECO or SGEFA.
SGEFA - SGEFA factors a real matrix by Gaussian elimination.
SGESL - SGESL solves the real system A * X = B or TRANS(A) * X = B
using the factors computed by SGECO or SGEFA.
SGTSL - SGTSL, given a general tridiagonal matrix and a right hand side,
will find the solution.
SPBCO - SPBCO factors a real symmetric positive definite matrix stored
in band form and estimates the condition of the matrix.
SPBDI - SPBDI computes the determinant of a real symmetric positive
definite band matrix using the factors computed by SPBCO or SPBFA. If
the inverse is needed, use SPBSL N times.
SPBFA - SPBFA factors a real symmetric positive definite matrix stored
in band form.
SPBSL - SPBSL solves the real symmetric positive definite band system
A*X = B using the factors computed by SPBCO or SPBFA.
SPOCO - SPOCO factors a real symmetric positive definite matrix and
estimates the condition of the matrix.
SPODI - SPODI computes the determinant and inverse of a certain real
symmetric positive definite matrix (see below) using the factors computed
by SPOCO, SPOFA or SQRDC.
SPOFA - SPOFA factors a real symmetric positive definite matrix.
SPOSL - SPOSL solves the real symmetric positive definite system A * X
= B using the factors computed by SPOCO or SPOFA.
SPPCO - SPPCO factors a real symmetric positive definite matrix stored
in packed form and estimates the condition of the matrix.
SPPDI - SPPDI computes the determinant and inverse of a real symmetric
positive definite matrix using the factors computed by SPPCO or SPPFA.
SPPFA - SPPFA factors a real symmetric positive definite matrix stored
in packed form.
SPPSL - SPPSL solves the real symmetric positive definite system A * X
= B using the factors computed by SPPCO or SPPFA.
SPTSL - SPTSL, given a positive definite tridiagonal matrix and a right
hand side, will find the solution.
SQRDC - SQRDC uses Householder transformations to compute the QR
factorization of an N by P matrix X. Column pivoting based on the 2-
norms of the reduced columns may be performed at the user's option.
SQRSL - SQRSL applies the output of SQRDC to compute coordinate
transformations, projections, and least squares solutions. For K .LE.
MIN(N,P), let XK be the matrix
XK = (X(JPVT(1)),X(JPVT(2)), ... ,X(JPVT(K)))
formed from columns JPVT(1), ... ,JPVT(K) of the original N x P matrix X
that was input to SQRDC (if no pivoting was done, XK consists of the
first K columns of X in their original order). SQRDC produces a factored
orthogonal matrix Q and an upper triangular matrix R such that
     XK = Q * (R)
              (0)
This information is contained in coded form in the arrays X and QRAUX.
SSICO - SSICO factors a real symmetric matrix by elimination with
symmetric pivoting and estimates the condition of the matrix.
SSIDI - SSIDI computes the determinant, inertia and inverse of a real
symmetric matrix using the factors from SSIFA.
SSIFA - SSIFA factors a real symmetric matrix by elimination with
symmetric pivoting.
SSISL - SSISL solves the real symmetric system A * X = B using the
factors computed by SSIFA.
SSPCO - SSPCO factors a real symmetric matrix stored in packed form by
elimination with symmetric pivoting and estimates the condition of the
matrix.
SSPDI - SSPDI computes the determinant, inertia and inverse of a real
symmetric matrix using the factors from SSPFA, where the matrix is stored
in packed form.
SSPFA - SSPFA factors a real symmetric matrix stored in packed form by
elimination with symmetric pivoting.
SSPSL - SSPSL solves the real symmetric system A * X = B using the
factors computed by SSPFA.
SSVDC - SSVDC is a subroutine to reduce a real NxP matrix X by
orthogonal transformations U and V to diagonal form. The diagonal
elements S(I) are the singular values of X. The columns of U are the
corresponding left singular vectors, and the columns of V the right
singular vectors.
STRCO - STRCO estimates the condition of a real triangular matrix.
STRDI - STRDI computes the determinant and inverse of a real triangular
matrix.
STRSL - STRSL solves systems of the form
T * X = B or
TRANS(T) * X = B
where T is a triangular matrix of order N. Here TRANS(T) denotes the
transpose of the matrix T.
LAPACK LIBRARY
SBDSQR computes the singular value decomposition (SVD) of a real N-by-N
(upper or lower) bidiagonal matrix B: B = Q * S * P' (P' denotes the
transpose of P), where S is a diagonal matrix with non-negative diagonal
elements (the singular values of B), and Q and P are orthogonal matrices.
CGBCON estimates the reciprocal of the condition number of a complex
general band matrix A, in either the 1-norm or the infinity-norm, using
the LU factorization computed by CGBTRF.
CGBEQU computes row and column scalings intended to equilibrate an M by N
band matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
element in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
CGBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is banded, and provides error bounds and
backward error estimates for the solution.
CGBSV computes the solution to a complex system of linear equations A * X
= B, where A is a band matrix of order N with KL subdiagonals and KU
superdiagonals, and X and B are N-by-NRHS matrices.
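The banded solve CGBSV performs is available through SciPy's `solve_banded`; a sketch of the band storage convention, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import solve_banded

# Complex band matrix with KL = 1 subdiagonal and KU = 1 superdiagonal:
#     [ 2   1j  0 ]
# A = [ 1   2   1j]
#     [ 0   1   2 ]
ab = np.array([[0.0, 1j, 1j],     # superdiagonal (padded on the left)
               [2.0, 2.0, 2.0],   # main diagonal
               [1.0, 1.0, 0.0]])  # subdiagonal (padded on the right)
b = np.array([1.0, 0.0, 0.0], dtype=complex)

x = solve_banded((1, 1), ab, b)

A = np.array([[2, 1j, 0], [1, 2, 1j], [0, 1, 2]], dtype=complex)
assert np.allclose(A @ x, b)
```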
CGBSVX uses the LU factorization to compute the solution to a complex
system of linear equations A * X = B, A**T * X = B, or A**H * X = B,
where A is a band matrix of order N with KL subdiagonals and KU
superdiagonals, and X and B are N-by-NRHS matrices.
CGBTF2 computes an LU factorization of a complex m-by-n band matrix A
using partial pivoting with row interchanges.
CGBTRF computes an LU factorization of a complex m-by-n band matrix A
using partial pivoting with row interchanges.
CGBTRS solves a system of linear equations
A * X = B, A**T * X = B, or A**H * X = B with a general band matrix
A using the LU factorization computed by CGBTRF.
CGEBAK forms the right or left eigenvectors of a complex general matrix
by backward transformation on the computed eigenvectors of the balanced
matrix output by CGEBAL.
CGEBAL balances a general complex matrix A. This involves, first,
permuting A by a similarity transformation to isolate eigenvalues in the
first 1 to ILO-1 and last IHI+1 to N elements on the diagonal; and
second, applying a diagonal similarity transformation to rows and columns
ILO to IHI to make the rows and columns as close in norm as possible.
Both steps are optional.
CGEBD2 reduces a complex general m by n matrix A to upper or lower real
bidiagonal form B by a unitary transformation: Q' * A * P = B.
CGEBRD reduces a general complex M-by-N matrix A to upper or lower
bidiagonal form B by a unitary transformation: Q**H * A * P = B.
CGECON estimates the reciprocal of the condition number of a general
complex matrix A, in either the 1-norm or the infinity-norm, using the LU
factorization computed by CGETRF.
CGEEQU computes row and column scalings intended to equilibrate an M by N
matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
entry in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
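The scaling CGEEQU computes can be sketched directly in NumPy; this is a simplified version of the idea that ignores the zero-row and over/underflow safeguards of the real routine:

```python
import numpy as np

def equilibrate(A):
    """Row scales R and column scales C so every entry of
    B(i,j) = R[i]*A[i,j]*C[j] has absolute value at most 1, with the
    largest entry of each column equal to 1 (simplified CGEEQU idea)."""
    R = 1.0 / np.abs(A).max(axis=1)               # scale each row to max 1
    C = 1.0 / np.abs(A * R[:, None]).max(axis=0)  # then each column
    return R, C

A = np.array([[1000.0, 1.0],
              [2.0, 0.003]])
R, C = equilibrate(A)
B = R[:, None] * A * C[None, :]
assert np.abs(B).max() <= 1.0 + 1e-12
assert np.allclose(np.abs(B).max(axis=0), 1.0)
```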
CGEES computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues, the Schur form T, and, optionally, the matrix of Schur
vectors Z. This gives the Schur factorization A = Z*T*(Z**H).
CGEESX computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues, the Schur form T, and, optionally, the matrix of Schur
vectors Z, giving the Schur factorization A = Z*T*(Z**H). It can also
order selected eigenvalues to the top left of T and compute reciprocal
condition numbers for the average of the selected eigenvalues and for
the corresponding right invariant subspace.
CGEEV computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues and, optionally, the left and/or right eigenvectors.
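The eigenproblem CGEEV solves corresponds to NumPy's `eig` on a complex matrix (a sketch, assuming NumPy; eigenvalue ordering is unspecified):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]], dtype=complex)

# Eigenvalues and right eigenvectors (the columns of v).
w, v = np.linalg.eig(A)

assert np.allclose(sorted(w.imag), [-1.0, 1.0])
assert np.allclose(w.real, 0.0)
assert np.allclose(A @ v, v * w)    # A * v(:,j) = w(j) * v(:,j)
```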
CGEEVX computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues and, optionally, the left and/or right eigenvectors. It can
also balance the matrix to improve conditioning and return reciprocal
condition numbers for the eigenvalues and right eigenvectors.
For a pair of N-by-N complex nonsymmetric matrices A, B:
compute the generalized eigenvalues (alpha, beta)
CGEHD2 reduces a complex general matrix A to upper Hessenberg form H by a
unitary similarity transformation: Q' * A * Q = H .
CGEHRD reduces a complex general matrix A to upper Hessenberg form H by a
unitary similarity transformation: Q' * A * Q = H .
CGELQ2 computes an LQ factorization of a complex m by n matrix A: A = L
* Q.
CGELQF computes an LQ factorization of a complex M-by-N matrix A: A = L
* Q.
CGELS solves overdetermined or underdetermined complex linear systems
involving an M-by-N matrix A, or its conjugate-transpose, using a QR or
LQ factorization of A. It is assumed that A has full rank.
CGELSS computes the minimum norm solution to a complex linear least
squares problem:
Minimize 2-norm(| b - A*x |).
CGELSX computes the minimum-norm solution to a complex linear least
squares problem:
minimize || A * X - B ||
CGEQL2 computes a QL factorization of a complex m by n matrix A: A = Q *
L.
CGEQLF computes a QL factorization of a complex M-by-N matrix A: A = Q *
L.
CGEQPF computes a QR factorization with column pivoting of a complex M-
by-N matrix A: A*P = Q*R.
CGEQR2 computes a QR factorization of a complex m by n matrix A: A = Q *
R.
CGEQRF computes a QR factorization of a complex M-by-N matrix A: A = Q *
R.
CGERFS improves the computed solution to a system of linear equations and
provides error bounds and backward error estimates for the solution.
CGERQ2 computes an RQ factorization of a complex m by n matrix A: A = R
* Q.
CGERQF computes an RQ factorization of a complex M-by-N matrix A: A = R
* Q.
CGESV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
CGESVD computes the singular value decomposition (SVD) of a complex M-
by-N matrix A, optionally computing the left and/or right singular
vectors. The SVD is written
A = U * SIGMA * conjugate-transpose(V)
CGESVX uses the LU factorization to compute the solution to a complex
system of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
CGETF2 computes an LU factorization of a general m-by-n matrix A using
partial pivoting with row interchanges.
CGETRF computes an LU factorization of a general M-by-N matrix A using
partial pivoting with row interchanges.
CGETRI computes the inverse of a matrix using the LU factorization
computed by CGETRF.
CGETRS solves a system of linear equations
A * X = B, A**T * X = B, or A**H * X = B with a general N-by-N
matrix A using the LU factorization computed by CGETRF.
CGGBAK forms the right or left eigenvectors of the generalized eigenvalue
problem by backward transformation on the computed eigenvectors of the
balanced matrix output by CGGBAL.
CGGBAL balances a pair of general complex matrices (A,B) for the
generalized eigenvalue problem A*X = lambda*B*X. This involves, first,
permuting A and B by similarity transformations to isolate eigenvalues in
the first 1 to ILO-1 and last IHI+1 to N elements on the diagonal; and
second, applying a diagonal similarity
CGGGLM solves a generalized linear regression model (GLM) problem:
minimize y'*y subject to d = A*x + B*y
CGGHRD reduces a pair of complex matrices (A,B) to generalized upper
Hessenberg form using unitary similarity transformations, where A is a
(generally non-symmetric) square matrix and B is upper triangular. More
precisely, CGGHRD simultaneously decomposes A into Q H Z* and B into
Q T Z* , where H is upper Hessenberg, T is upper triangular, Q and Z are
unitary, and * means conjugate transpose.
CGGLSE solves the linear equality constrained least squares (LSE)
problem:
minimize || A*x - c ||_2 subject to B*x = d
CGGQRF computes a generalized QR factorization of an N-by-M matrix A and
an N-by-P matrix B:
A = Q*R, B = Q*T*Z,
CGGRQF computes a generalized RQ factorization of an M-by-N matrix A and
a P-by-N matrix B:
A = R*Q, B = Z*T*Q,
CGGSVD computes the generalized singular value decomposition (GSVD) of
the M-by-N complex matrix A and P-by-N complex matrix B:
U'*A*Q = D1*( 0 R ), V'*B*Q = D2*( 0 R ) (1)
where U, V and Q are unitary matrices, R is an upper triangular matrix,
and Z' means the conjugate transpose of Z. Let K+L = the numerical
effective rank of the matrix (A',B')'; then D1 and D2 are M-by-(K+L) and
P-by-(K+L) "diagonal" matrices.
CGGSVP computes unitary matrices U, V and Q such that A23 is upper
trapezoidal. K+L = the effective rank of the (M+P)-by-N matrix (A',B')'.
Z' denotes the conjugate transpose of Z.
CGTCON estimates the reciprocal of the condition number of a complex
tridiagonal matrix A using the LU factorization as computed by CGTTRF.
CGTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is tridiagonal, and provides error bounds and
backward error estimates for the solution.
CGTSV solves the equation A*X = B,
where A is an N-by-N tridiagonal matrix, by Gaussian elimination with
partial pivoting.
CGTSVX uses the LU factorization to compute the solution to a complex
system of linear equations A * X = B, A**T * X = B, or A**H * X = B,
where A is a tridiagonal matrix of order N and X and B are N-by-NRHS
matrices.
CGTTRF computes an LU factorization of a complex tridiagonal matrix A
using elimination with partial pivoting and row interchanges.
CGTTRS solves one of the systems of equations
A * X = B, A**T * X = B, or A**H * X = B, with a tridiagonal matrix
A using the LU factorization computed by CGTTRF.
CHBEV computes all the eigenvalues and, optionally, eigenvectors of a
complex Hermitian band matrix A.
CHBEVX computes selected eigenvalues and, optionally, eigenvectors of a
complex Hermitian band matrix A. Eigenvalues/vectors can be selected by
specifying either a range of values or a range of indices for the desired
eigenvalues.
CHBTRD reduces a complex Hermitian band matrix A to real symmetric
tridiagonal form T by a unitary similarity transformation: Q**H * A * Q
= T.
CHECON estimates the reciprocal of the condition number of a complex
Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H
computed by CHETRF.
CHEEV computes all eigenvalues and, optionally, eigenvectors of a complex
Hermitian matrix A.
CHEEVX computes selected eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix A. Eigenvalues and eigenvectors can be selected
by specifying either a range of values or a range of indices for the
desired eigenvalues.
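For the Hermitian eigensolvers (CHEEV, CHEEVX and relatives), NumPy's `eigh` returns the same real eigenvalues in ascending order (a sketch, assuming NumPy is available):

```python
import numpy as np

A = np.array([[2.0, 1j],
              [-1j, 2.0]])    # Hermitian: A equals its conjugate transpose

w, v = np.linalg.eigh(A)      # real eigenvalues, ascending order

assert np.allclose(w, [1.0, 3.0])
assert np.allclose(A @ v, v * w)
```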
CHEGS2 reduces a complex Hermitian-definite generalized eigenproblem to
standard form.
CHEGST reduces a complex Hermitian-definite generalized eigenproblem to
standard form.
CHEGV computes all the eigenvalues, and optionally, the eigenvectors of a
complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B
are assumed to be Hermitian and B is also
CHERFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian indefinite, and provides error
bounds and backward error estimates for the solution.
CHESV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian matrix and X and B are N-
by-NRHS matrices.
CHESVX uses the diagonal pivoting factorization to compute the solution
to a complex system of linear equations A * X = B, where A is an N-by-N
Hermitian matrix and X and B are N-by-NRHS matrices.
CHETD2 reduces a complex Hermitian matrix A to real symmetric tridiagonal
form T by a unitary similarity transformation: Q' * A * Q = T.
CHETF2 computes the factorization of a complex Hermitian matrix A using
the Bunch-Kaufman diagonal pivoting method:
A = U*D*U' or A = L*D*L'
CHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal
form T by a unitary similarity transformation: Q**H * A * Q = T.
CHETRF computes the factorization of a complex Hermitian matrix A using
the Bunch-Kaufman diagonal pivoting method. The form of the
factorization is A = U*D*U**H or A = L*D*L**H.
CHETRI computes the inverse of a complex Hermitian indefinite matrix A
using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF.
CHETRS solves a system of linear equations A*X = B with a complex
Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H
computed by CHETRF.
CHGEQZ implements a single-shift version of the QZ method for finding the
generalized eigenvalues w(i)=ALPHA(i)/BETA(i) of the equation
det( A - w(i)*B ) = 0. After the pair (A,B) is reduced to generalized
Schur form, the diagonal elements of A are ALPHA(1),...,ALPHA(N), and of
B are BETA(1),...,BETA(N).
CHPCON estimates the reciprocal of the condition number of a complex
Hermitian packed matrix A using the factorization A = U*D*U**H or A =
L*D*L**H computed by CHPTRF.
CHPEV computes all the eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix in packed storage.
CHPEVX computes selected eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix A in packed storage. Eigenvalues/vectors can be
selected by specifying either a range of values or a range of indices for
the desired eigenvalues.
CHPGST reduces a complex Hermitian-definite generalized eigenproblem to
standard form, using packed storage.
CHPGV computes all the eigenvalues and, optionally, the eigenvectors of a
complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B
are assumed to be Hermitian, stored in packed format, and B is also
positive definite.
CHPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian indefinite and packed, and
provides error bounds and backward error estimates for the solution.
CHPSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian matrix stored in packed
format and X and B are N-by-NRHS matrices.
CHPSVX uses the diagonal pivoting factorization A = U*D*U**H or A =
L*D*L**H to compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian matrix stored in packed format
and X and B are N-by-NRHS matrices.
CHPTRD reduces a complex Hermitian matrix A stored in packed form to real
symmetric tridiagonal form T by a unitary similarity transformation: Q**H
* A * Q = T.
CHPTRF computes the factorization of a complex Hermitian packed matrix A
using the Bunch-Kaufman diagonal pivoting method:
A = U*D*U**H or A = L*D*L**H
CHPTRI computes the inverse of a complex Hermitian indefinite matrix A in
packed storage using the factorization A = U*D*U**H or A = L*D*L**H
computed by CHPTRF.
CHPTRS solves a system of linear equations A*X = B with a complex
Hermitian matrix A stored in packed format using the factorization A =
U*D*U**H or A = L*D*L**H computed by CHPTRF.
CHSEIN uses inverse iteration to find specified right and/or left
eigenvectors of a complex upper Hessenberg matrix H.
CHSEQR computes the eigenvalues of a complex upper Hessenberg matrix H,
and, optionally, the matrices T and Z from the Schur decomposition H = Z
T Z**H, where T is an upper triangular matrix (the Schur form), and Z is
the unitary matrix of Schur vectors.
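The Schur decomposition CHSEQR produces (for a matrix already reduced to Hessenberg form) is what SciPy's `schur` returns for a general complex matrix; a sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[1.0, 2.0 + 1j],
              [0.5j, -1.0]])

# Complex Schur form: A = Z * T * Z**H with T upper triangular.
T, Z = schur(A, output='complex')

assert np.allclose(Z @ T @ Z.conj().T, A)
assert np.allclose(np.tril(T, -1), 0)           # T is upper triangular
assert np.allclose(Z.conj().T @ Z, np.eye(2))   # Z is unitary
```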
CLABRD reduces the first NB rows and columns of a complex general m by n
matrix A to upper or lower real bidiagonal form by a unitary
transformation Q' * A * P, and returns the matrices X and Y which are
needed to apply the transformation to the unreduced part of A.
CLACGV conjugates a complex vector of length N.
CLACON estimates the 1-norm of a square, complex matrix A. Reverse
communication is used for evaluating matrix-vector products.
CLACPY copies all or part of a two-dimensional matrix A to another matrix
B.
CLACRT applies a plane rotation, where the cos and sin (C and S) are
complex and the vectors CX and CY are complex.
CLADIV := X / Y, where X and Y are complex. The computation of X / Y
will not overflow on an intermediary step unless the result overflows.
CLAEIN uses inverse iteration to find a right or left eigenvector
corresponding to the eigenvalue W of a complex upper Hessenberg matrix H.
CLAESY computes the eigendecomposition of a 2x2 symmetric matrix
( ( A, B );( B, C ) ) provided the norm of the matrix of eigenvectors
is larger than some threshold value.
CLAEV2 computes the eigendecomposition of a 2-by-2 Hermitian matrix
     [ A         B ]
     [ CONJG(B)  C ].
On return, RT1 is the eigenvalue of larger absolute value, RT2 is the
eigenvalue of smaller absolute value, and (CS1,SN1) is the unit right
eigenvector for RT1.
CLAGS2 computes 2-by-2 unitary matrices U, V and Q that transform a pair
of 2-by-2 triangular matrices A and B so that U'*A*Q and V'*B*Q are again
triangular (upper if UPPER is true, lower otherwise).
CLAGTM performs a matrix-vector product of the form B := alpha*A*X +
beta*B, where A is a tridiagonal matrix of order N, X and B are N-by-NRHS
matrices, and alpha and beta are real scalars.
CLAHEF computes a partial factorization of a complex Hermitian matrix A
using the Bunch-Kaufman diagonal pivoting method. The partial
factorization has the form:
CLAHQR is an auxiliary routine called by CHSEQR to update the eigenvalues
and Schur decomposition already computed by CHSEQR, by dealing with the
Hessenberg submatrix in rows and columns ILO to IHI.
CLAHRD reduces the first NB columns of a complex general n-by-(n-k+1)
matrix A so that elements below the k-th subdiagonal are zero. The
reduction is performed by a unitary similarity transformation Q' * A * Q.
The routine returns the matrices V and T which determine Q as a block
reflector I - V*T*V', and also the matrix Y = A * V * T.
CLAIC1 applies one step of incremental condition estimation in its
simplest version:
Let x, twonorm(x) = 1, be an approximate singular vector of a j-by-j
lower triangular matrix L, such that twonorm(L*x) = sest.
CLANGB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
band matrix A, with kl sub-diagonals and ku super-diagonals.
CLANGE returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
matrix A.
CLANGT returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
tridiagonal matrix A.
CLANHB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
hermitian band matrix A, with k super-diagonals.
CLANHE returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
hermitian matrix A.
CLANHP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
hermitian matrix A, supplied in packed form.
CLANHS returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
Hessenberg matrix A.
CLANHT returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
Hermitian tridiagonal matrix A.
CLANSB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
symmetric band matrix A, with k super-diagonals.
CLANSP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
symmetric matrix A, supplied in packed form.
CLANSY returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
symmetric matrix A.
CLANTB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
triangular band matrix A, with ( k + 1 ) diagonals.
CLANTP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
triangular matrix A, supplied in packed form.
CLANTR returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
trapezoidal or triangular matrix A.
CLAPLL measures the linear dependence of two vectors. Given two column
vectors X and Y, let A = ( X Y ).
The subroutine first computes the QR factorization of A = Q*R, and then
computes the SVD of the 2-by-2 upper triangular matrix R. The smaller
singular value of R is returned in SSMIN, which is used as the
measurement of the linear dependency of the vectors X and Y.
CLAPMT rearranges the columns of the M by N matrix X as specified by the
permutation K(1),K(2),...,K(N) of the integers 1,...,N. If FORWRD =
.TRUE., the forward permutation moves X(*,K(J)) to X(*,J) for J =
1,2,...,N; if FORWRD = .FALSE., the backward permutation moves X(*,J) to
X(*,K(J)).
CLAQGB equilibrates a general M by N band matrix A with KL subdiagonals
and KU superdiagonals using the row and column scaling factors in the
vectors R and C.
CLAQGE equilibrates a general M by N matrix A using the row and column
scaling factors in the vectors R and C.
CLAQSB equilibrates a symmetric band matrix A using the scaling factors
in the vector S.
CLAQSP equilibrates a symmetric matrix A using the scaling factors in the
vector S.
CLAQSY equilibrates a symmetric matrix A using the scaling factors in the
vector S.
CLAR2V applies a vector of complex plane rotations with real cosines from
both sides to a sequence of 2-by-2 complex Hermitian matrices, defined by
the elements of the vectors x, y and z. For i = 1,2,...,n the rotation is
applied to the 2-by-2 matrix ( x(i) z(i); conjg(z(i)) y(i) ).
CLARF applies a complex elementary reflector H to a complex M-by-N matrix
C, from either the left or the right. H is represented in the form
CLARFB applies a complex block reflector H or its transpose H' to a
complex M-by-N matrix C, from either the left or the right.
CLARFG generates a complex elementary reflector H of order n, such that
H**H * ( alpha; x ) = ( beta; 0 ) and H**H * H = I, where alpha and beta
are scalars (beta real) and x is an (n-1)-element complex vector.
CLARFT forms the triangular factor T of a complex block reflector H of
order n, which is defined as a product of k elementary reflectors.
CLARFX applies a complex elementary reflector H to a complex m by n
matrix C, from either the left or the right. H is represented in the form
CLARGV generates a vector of complex plane rotations with real cosines,
determined by elements of the complex vectors x and y. For i = 1,2,...,n
CLARNV returns a vector of n random complex numbers from a uniform or
normal distribution.
CLARTG generates a plane rotation so that
( CS SN; -CONJG(SN) CS ) * ( F; G ) = ( R; 0 ),
where CS is real and SN is complex, with CS**2 + ABS(SN)**2 = 1.
CLARTV applies a vector of complex plane rotations with real cosines to
elements of the complex vectors x and y. For i = 1,2,...,n
( x(i); y(i) ) := ( c(i) s(i); -conjg(s(i)) c(i) ) * ( x(i); y(i) )
CLASCL multiplies the M by N complex matrix A by the real scalar
CTO/CFROM. This is done without over/underflow as long as the final
result CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that A
may be full, upper triangular, lower triangular, upper Hessenberg, or
banded.
CLASET initializes a 2-D array A to BETA on the diagonal and ALPHA on the
offdiagonals.
CLASR performs a transformation consisting of a sequence of plane
rotations, applied from the left or right as determined by the parameters
PIVOT and DIRECT ( z = m when SIDE = 'L' or 'l' and z = n when SIDE = 'R'
or 'r' ).
CLASSQ returns the values scl and ssq such that
( scl**2 )*ssq = x(1)**2 + ... + x(n)**2 + ( scale**2 )*sumsq,
where x( i ) = abs( X( 1 + ( i - 1 )*INCX ) ). The value of sumsq is
assumed to be at least unity and the value of ssq will then satisfy
1.0 .le. ssq .le. ( sumsq + 2*n ).
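The scaled-sum-of-squares update that CLASSQ performs can be sketched in a few lines (a simplified real-valued version; the routine's exact loop structure is an assumption here):

```python
import math

def classq(x, scale=0.0, sumsq=1.0):
    """Update (scale, sumsq) so that scale**2 * sumsq accumulates
    sum(|x_i|**2) without overflow or destructive underflow."""
    for xi in x:
        axi = abs(xi)
        if axi == 0.0:
            continue
        if scale < axi:
            # Rescale the running sum to the new, larger pivot
            sumsq = 1.0 + sumsq * (scale / axi) ** 2
            scale = axi
        else:
            sumsq += (axi / scale) ** 2
    return scale, sumsq

scale, ssq = classq([3.0, 4.0, 12.0])  # 2-norm of (3, 4, 12) is 13
```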
CLASWP performs a series of row interchanges on the matrix A. One row
interchange is initiated for each of rows K1 through K2 of A.
CLASYF computes a partial factorization of a complex symmetric matrix A
using the Bunch-Kaufman diagonal pivoting method. Only a block of NB
columns is factored, for use by the blocked factorization routine CSYTRF.
CLATBS solves one of the triangular systems A * x = s*b or A' * x = s*b
with scaling to prevent overflow, where A is an upper or lower triangular
band matrix. Here A' denotes the transpose of A, x and b are n-element
vectors, and s is a scaling factor, usually less than or equal to 1,
chosen so that the components of x will be less than the overflow
threshold. If the unscaled problem will not cause overflow, the Level 2
BLAS routine CTBSV is called. If the matrix A is singular (A(j,j) = 0
for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is
returned.
CLATPS solves one of the triangular systems A * x = s*b, A**T * x = s*b,
or A**H * x = s*b
with scaling to prevent overflow, where A is an upper or lower triangular
matrix stored in packed form. Here A**T denotes the transpose of A, A**H
denotes the conjugate transpose of A, x and b are n-element vectors, and
s is a scaling factor, usually less than or equal to 1, chosen so that
the components of x will be less than the overflow threshold. If the
unscaled problem will not cause overflow, the Level 2 BLAS routine CTPSV
is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is
set to 0 and a non-trivial solution to A*x = 0 is returned.
CLATRD reduces NB rows and columns of a complex Hermitian matrix A to
Hermitian tridiagonal form by a unitary similarity transformation Q' * A
* Q, and returns the matrices V and W which are needed to apply the
transformation to the unreduced part of A.
CLATRS solves one of the triangular systems A * x = s*b, A**T * x = s*b,
or A**H * x = s*b
with scaling to prevent overflow. Here A is an upper or lower triangular
matrix, A**T denotes the transpose of A, A**H denotes the conjugate
transpose of A, x and b are n-element vectors, and s is a scaling factor,
usually less than or equal to 1, chosen so that the components of x will
be less than the overflow threshold. If the unscaled problem will not
cause overflow, the Level 2 BLAS routine CTRSV is called. If the matrix A
is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial
solution to A*x = 0 is returned.
CLATZM applies a Householder matrix generated by CTZRQF to a matrix.
CLAUU2 computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A
(unblocked algorithm).
CLAUUM computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A
(blocked algorithm).
CLAZRO initializes a 2-D array A to BETA on the diagonal and ALPHA on the
offdiagonals.
CPBCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex Hermitian positive definite band matrix using the Cholesky
factorization A = U**H*U or A = L*L**H computed by CPBTRF.
CPBEQU computes row and column scalings intended to equilibrate a
Hermitian positive definite band matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
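The diagonal scaling that CPBEQU (and CPOEQU below) computes is easy to demonstrate; here a small dense NumPy illustration (CPBEQU itself operates on band storage):

```python
import numpy as np

# A small symmetric positive definite matrix, dense for clarity
A = np.array([[4.0, 1.0],
              [1.0, 9.0]])

S = 1.0 / np.sqrt(np.diag(A))        # S(i) = 1/sqrt(A(i,i))
B = S[:, None] * A * S[None, :]      # B(i,j) = S(i)*A(i,j)*S(j)
assert np.allclose(np.diag(B), 1.0)  # scaled matrix has unit diagonal
```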
CPBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite and banded,
and provides error bounds and backward error estimates for the solution.
CPBSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite band
matrix and X and B are N-by-NRHS matrices.
CPBSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite band
matrix and X and B are N-by-NRHS matrices.
CPBTF2 computes the Cholesky factorization of a complex Hermitian
positive definite band matrix A.
CPBTRF computes the Cholesky factorization of a complex Hermitian
positive definite band matrix A.
CPBTRS solves a system of linear equations A*X = B with a Hermitian
positive definite band matrix A using the Cholesky factorization A =
U**H*U or A = L*L**H computed by CPBTRF.
CPOCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex Hermitian positive definite matrix using the Cholesky
factorization A = U**H*U or A = L*L**H computed by CPOTRF.
CPOEQU computes row and column scalings intended to equilibrate a
Hermitian positive definite matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
CPORFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite, and provides
error bounds and backward error estimates for the solution.
CPOSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix and
X and B are N-by-NRHS matrices.
CPOSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix and
X and B are N-by-NRHS matrices.
CPOTF2 computes the Cholesky factorization of a complex Hermitian
positive definite matrix A.
CPOTRF computes the Cholesky factorization of a complex Hermitian
positive definite matrix A.
CPOTRI computes the inverse of a complex Hermitian positive definite
matrix A using the Cholesky factorization A = U**H*U or A = L*L**H
computed by CPOTRF.
CPOTRS solves a system of linear equations A*X = B with a Hermitian
positive definite matrix A using the Cholesky factorization A = U**H*U or
A = L*L**H computed by CPOTRF.
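The CPOTRF/CPOTRS pattern (factor once, then solve) can be mimicked with SciPy's Cholesky helpers, used here as a hedged stand-in for the complib routines:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Build a Hermitian positive definite system
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M @ M.conj().T + 4 * np.eye(4)   # M M^H + 4I is Hermitian positive definite
b = rng.standard_normal(4)

c, lower = cho_factor(A)             # Cholesky factorization (CPOTRF analogue)
x = cho_solve((c, lower), b)         # triangular solves (CPOTRS analogue)
assert np.allclose(A @ x, b)
```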
CPPCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex Hermitian positive definite packed matrix using the Cholesky
factorization A = U**H*U or A = L*L**H computed by CPPTRF.
CPPEQU computes row and column scalings intended to equilibrate a
Hermitian positive definite matrix A in packed storage and reduce its
condition number (with respect to the two-norm). S contains the scale
factors, S(i)=1/sqrt(A(i,i)), chosen so that the scaled matrix B with
elements B(i,j)=S(i)*A(i,j)*S(j) has ones on the diagonal. This choice
of S puts the condition number of B within a factor N of the smallest
possible condition number over all possible diagonal scalings.
CPPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite and packed,
and provides error bounds and backward error estimates for the solution.
CPPSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
CPPSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
CPPTRF computes the Cholesky factorization of a complex Hermitian
positive definite matrix stored in packed format.
CPPTRI computes the inverse of a complex Hermitian positive definite
matrix A using the Cholesky factorization A = U**H*U or A = L*L**H
computed by CPPTRF.
CPPTRS solves a system of linear equations A*X = B with a Hermitian
positive definite matrix A in packed storage using the Cholesky
factorization A = U**H*U or A = L*L**H computed by CPPTRF.
CPTCON computes the reciprocal of the condition number (in the 1-norm) of
a complex Hermitian positive definite tridiagonal matrix using the
factorization A = L*D*L**H or A = U**H*D*U computed by CPTTRF.
CPTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric positive definite tridiagonal matrix by first factoring the
matrix using SPTTRF and then calling CBDSQR to compute the singular
values of the bidiagonal factor.
CPTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite and
tridiagonal, and provides error bounds and backward error estimates for
the solution.
CPTSV computes the solution to a complex system of linear equations A*X =
B, where A is an N-by-N Hermitian positive definite tridiagonal matrix,
and X and B are N-by-NRHS matrices.
CPTSVX uses the factorization A = L*D*L**H to compute the solution to a
complex system of linear equations A*X = B, where A is an N-by-N
Hermitian positive definite tridiagonal matrix and X and B are N-by-NRHS
matrices.
CPTTRF computes the factorization of a complex Hermitian positive
definite tridiagonal matrix A.
CPTTRS solves a system of linear equations A * X = B with a Hermitian
positive definite tridiagonal matrix A using the factorization A =
U**H*D*U or A = L*D*L**H computed by CPTTRF.
CROT applies a plane rotation, where the cos (C) is real and the sin
(S) is complex, and the vectors CX and CY are complex.
CSPCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex symmetric packed matrix A using the factorization A =
U*D*U**T or A = L*D*L**T computed by CSPTRF.
CSPMV performs the matrix-vector operation y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is
an n by n symmetric matrix, supplied in packed form.
CSPR performs the symmetric rank 1 operation A := alpha*x*x**T + A,
where alpha is a complex scalar, x is an n element vector and A is an n
by n symmetric matrix, supplied in packed form.
CSPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite and packed, and
provides error bounds and backward error estimates for the solution.
CSPSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N symmetric matrix stored in packed
format and X and B are N-by-NRHS matrices.
CSPSVX uses the diagonal pivoting factorization A = U*D*U**T or A =
L*D*L**T to compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N symmetric matrix stored in packed format
and X and B are N-by-NRHS matrices.
CSPTRF computes the factorization of a complex symmetric matrix A stored
in packed format using the Bunch-Kaufman diagonal pivoting method:
A = U*D*U**T or A = L*D*L**T
CSPTRI computes the inverse of a complex symmetric indefinite matrix A in
packed storage using the factorization A = U*D*U**T or A = L*D*L**T
computed by CSPTRF.
CSPTRS solves a system of linear equations A*X = B with a complex
symmetric matrix A stored in packed format using the factorization A =
U*D*U**T or A = L*D*L**T computed by CSPTRF.
CSRSCL multiplies an n-element complex vector x by the real scalar 1/a.
This is done without overflow or underflow as long as the final result
x/a does not overflow or underflow.
CSTEIN computes the eigenvectors of a real symmetric tridiagonal matrix T
corresponding to specified eigenvalues, using inverse iteration.
CSTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric tridiagonal matrix using the implicit QL or QR method. The
eigenvectors of a full or band complex Hermitian matrix can also be found
if CSYTRD or CSPTRD or CSBTRD has been used to reduce this matrix to
tridiagonal form.
CSYCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex symmetric matrix A using the factorization A = U*D*U**T or A
= L*D*L**T computed by CSYTRF.
CSYMV performs the matrix-vector operation y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is
an n by n symmetric matrix.
CSYR performs the symmetric rank 1 operation A := alpha*x*x**T + A,
where alpha is a complex scalar, x is an n element vector and A is an n
by n symmetric matrix.
CSYRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite, and provides error
bounds and backward error estimates for the solution.
CSYSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N symmetric matrix and X and B are N-
by-NRHS matrices.
CSYSVX uses the diagonal pivoting factorization to compute the solution
to a complex system of linear equations A * X = B, where A is an N-by-N
symmetric matrix and X and B are N-by-NRHS matrices.
CSYTF2 computes the factorization of a complex symmetric matrix A using
the Bunch-Kaufman diagonal pivoting method:
A = U*D*U' or A = L*D*L'
CSYTRF computes the factorization of a complex symmetric matrix A using
the Bunch-Kaufman diagonal pivoting method. The form of the factorization
is A = U*D*U**T or A = L*D*L**T.
CSYTRI computes the inverse of a complex symmetric indefinite matrix A
using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF.
CSYTRS solves a system of linear equations A*X = B with a complex
symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T
computed by CSYTRF.
CTBCON estimates the reciprocal of the condition number of a triangular
band matrix A, in either the 1-norm or the infinity-norm.
CTBRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular band
coefficient matrix.
CTBTRS solves a triangular system of the form A * X = B, A**T * X = B,
or A**H * X = B,
where A is a triangular band matrix of order N, and B is an N-by-NRHS
matrix. A check is made to verify that A is nonsingular.
CTGEVC computes selected left and/or right generalized eigenvectors of a
pair of complex upper triangular matrices (A,B). The j-th generalized
left and right eigenvectors are y and x, respectively, such that
y**H * (A - w*B) = 0 and (A - w*B) * x = 0, where w is the j-th
generalized eigenvalue.
CTGSJA computes the generalized singular value decomposition (GSVD) of
two complex upper triangular (or trapezoidal) matrices A and B.
CTPCON estimates the reciprocal of the condition number of a packed
triangular matrix A, in either the 1-norm or the infinity-norm.
CTPRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular packed
coefficient matrix.
CTPTRI computes the inverse of a complex upper or lower triangular matrix
A stored in packed format.
CTPTRS solves a triangular system of the form A * X = B, A**T * X = B,
or A**H * X = B,
where A is a triangular matrix of order N stored in packed format, and B
is an N-by-NRHS matrix. A check is made to verify that A is nonsingular.
CTRCON estimates the reciprocal of the condition number of a triangular
matrix A, in either the 1-norm or the infinity-norm.
CTREVC computes all or some right and/or left eigenvectors of a complex
upper triangular matrix T.
CTREXC reorders the Schur factorization of a complex matrix A = Q*T*Q**H,
so that the diagonal element of T with row index IFST is moved to row
ILST.
CTRRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular coefficient
matrix.
CTRSEN reorders the Schur factorization of a complex matrix A = Q*T*Q**H,
so that a selected cluster of eigenvalues appears in the leading
positions on the diagonal of the upper triangular matrix T, and the
leading columns of Q form an orthonormal basis of the corresponding right
invariant subspace.
CTRSNA estimates reciprocal condition numbers for specified eigenvalues
and/or right eigenvectors of a complex upper triangular matrix T (or of
any matrix Q*T*Q**H with Q unitary).
CTRSYL solves the complex Sylvester matrix equation
op(A)*X + X*op(B) = scale*C or op(A)*X - X*op(B) = scale*C,
where op(A) = A or A**H.
CTRTI2 computes the inverse of a complex upper or lower triangular
matrix.
CTRTRI computes the inverse of a complex upper or lower triangular matrix
A.
CTRTRS solves a triangular system of the form A * X = B, A**T * X = B,
or A**H * X = B,
where A is a triangular matrix of order N, and B is an N-by-NRHS matrix.
A check is made to verify that A is nonsingular.
CTZRQF reduces the M-by-N ( M<=N ) complex upper trapezoidal matrix A to
upper triangular form by means of unitary transformations.
CUNG2L generates an m by n complex matrix Q with orthonormal columns,
which is defined as the last n columns of a product of k elementary
reflectors of order m
CUNG2R generates an m by n complex matrix Q with orthonormal columns,
which is defined as the first n columns of a product of k elementary
reflectors of order m
CUNGBR generates one of the matrices Q or P**H determined by CGEBRD when
reducing a complex matrix A to bidiagonal form: A = Q * B * P**H.
CUNGHR generates a complex unitary matrix Q which is defined as the
product of IHI-ILO elementary reflectors of order N, as returned by
CGEHRD:
Q = H(ilo) H(ilo+1) . . . H(ihi-1).
CUNGL2 generates an m-by-n complex matrix Q with orthonormal rows, which
is defined as the first m rows of a product of k elementary reflectors of
order n
CUNGLQ generates an M-by-N complex matrix Q with orthonormal rows, which
is defined as the first M rows of a product of K elementary reflectors of
order N
CUNGQL generates an M-by-N complex matrix Q with orthonormal columns,
which is defined as the last N columns of a product of K elementary
reflectors of order M
CUNGQR generates an M-by-N complex matrix Q with orthonormal columns,
which is defined as the first N columns of a product of K elementary
reflectors of order M
CUNGR2 generates an m by n complex matrix Q with orthonormal rows, which
is defined as the last m rows of a product of k elementary reflectors of
order n
CUNGRQ generates an M-by-N complex matrix Q with orthonormal rows, which
is defined as the last M rows of a product of K elementary reflectors of
order N
CUNGTR generates a complex unitary matrix Q which is defined as the
product of n-1 elementary reflectors of order N, as returned by CHETRD:
if UPLO = 'U', Q = H(n-1) . . . H(2) H(1); if UPLO = 'L',
Q = H(1) H(2) . . . H(n-1).
CUNM2L overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q, or C*Q**H, depending on SIDE and TRANS, where Q is a complex unitary
matrix defined as the product of k elementary reflectors.
CUNM2R overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q, or C*Q**H, depending on SIDE and TRANS, where Q is a complex unitary
matrix defined as the product of k elementary reflectors.
If VECT = 'Q', CUNMBR overwrites the general complex M-by-N matrix C with
Q*C or Q**H*C (SIDE = 'L'), or C*Q or C*Q**H (SIDE = 'R'), depending on
TRANS; if VECT = 'P', the analogous products with P or P**H are formed.
CUNMHR overwrites the general complex M-by-N matrix C with Q*C, C*Q,
Q**H*C, or C*Q**H, depending on SIDE and TRANS.
CUNML2 overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q, or C*Q**H, depending on SIDE and TRANS, where Q is a complex unitary
matrix defined as the product of k elementary reflectors.
CUNMLQ overwrites the general complex M-by-N matrix C with Q*C, C*Q,
Q**H*C, or C*Q**H, depending on SIDE and TRANS.
CUNMQL overwrites the general complex M-by-N matrix C with Q*C, C*Q,
Q**H*C, or C*Q**H, depending on SIDE and TRANS.
CUNMQR overwrites the general complex M-by-N matrix C with Q*C, C*Q,
Q**H*C, or C*Q**H, depending on SIDE and TRANS.
CUNMR2 overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q, or C*Q**H, depending on SIDE and TRANS, where Q is a complex unitary
matrix defined as the product of k elementary reflectors.
CUNMRQ overwrites the general complex M-by-N matrix C with Q*C, C*Q,
Q**H*C, or C*Q**H, depending on SIDE and TRANS.
CUNMTR overwrites the general complex M-by-N matrix C with Q*C, C*Q,
Q**H*C, or C*Q**H, depending on SIDE and TRANS.
CUPGTR generates a complex unitary matrix Q which is defined as the
product of n-1 elementary reflectors of order n, as returned by CHPTRD
using packed storage:
if UPLO = 'U', Q = H(n-1) . . . H(2) H(1); if UPLO = 'L',
Q = H(1) H(2) . . . H(n-1).
CUPMTR overwrites the general complex M-by-N matrix C with Q*C, C*Q,
Q**H*C, or C*Q**H, depending on SIDE and TRANS.
DBDSQR computes the singular value decomposition (SVD) of a real N-by-N
(upper or lower) bidiagonal matrix B: B = Q * S * P' (P' denotes the
transpose of P), where S is a diagonal matrix with non-negative diagonal
elements (the singular values of B), and Q and P are orthogonal matrices.
DGBCON estimates the reciprocal of the condition number of a real general
band matrix A, in either the 1-norm or the infinity-norm, using the LU
factorization computed by DGBTRF.
DGBEQU computes row and column scalings intended to equilibrate an M by N
band matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
element in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
DGBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is banded, and provides error bounds and
backward error estimates for the solution.
DGBSV computes the solution to a real system of linear equations A * X =
B, where A is a band matrix of order N with KL subdiagonals and KU
superdiagonals, and X and B are N-by-NRHS matrices.
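A banded solve of this kind can be exercised through SciPy's solve_banded, shown here as an illustrative stand-in for DGBSV (it uses the same LAPACK-style band storage, with KL = KU = 1 for a tridiagonal system):

```python
import numpy as np
from scipy.linalg import solve_banded

# Band storage: row 0 holds the superdiagonal, row 1 the main diagonal,
# row 2 the subdiagonal; unused corners are zero
ab = np.array([[0.0, 1.0, 1.0],     # superdiagonal (first entry unused)
               [4.0, 4.0, 4.0],     # main diagonal
               [1.0, 1.0, 0.0]])    # subdiagonal (last entry unused)
b = np.array([1.0, 2.0, 3.0])

x = solve_banded((1, 1), ab, b)     # banded LU solve, DGBSV-style

# Check against the equivalent dense system
A = np.diag([4.0, 4.0, 4.0]) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
assert np.allclose(A @ x, b)
```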
DGBSVX uses the LU factorization to compute the solution to a real system
of linear equations A * X = B, A**T * X = B, or A**H * X = B, where A is
a band matrix of order N with KL subdiagonals and KU superdiagonals, and
X and B are N-by-NRHS matrices.
DGBTF2 computes an LU factorization of a real m-by-n band matrix A using
partial pivoting with row interchanges.
DGBTRF computes an LU factorization of a real m-by-n band matrix A using
partial pivoting with row interchanges.
DGBTRS solves a system of linear equations
A * X = B or A' * X = B with a general band matrix A using the LU
factorization computed by DGBTRF.
DGEBAK forms the right or left eigenvectors of a real general matrix by
backward transformation on the computed eigenvectors of the balanced
matrix output by DGEBAL.
DGEBAL balances a general real matrix A. This involves, first, permuting
A by a similarity transformation to isolate eigenvalues in the first 1 to
ILO-1 and last IHI+1 to N elements on the diagonal; and second, applying
a diagonal similarity transformation to rows and columns ILO to IHI to
make the rows and columns as close in norm as possible. Both steps are
optional.
DGEBD2 reduces a real general m by n matrix A to upper or lower
bidiagonal form B by an orthogonal transformation: Q' * A * P = B.
DGEBRD reduces a general real M-by-N matrix A to upper or lower
bidiagonal form B by an orthogonal transformation: Q**T * A * P = B.
DGECON estimates the reciprocal of the condition number of a general real
matrix A, in either the 1-norm or the infinity-norm, using the LU
factorization computed by DGETRF.
DGEEQU computes row and column scalings intended to equilibrate an M-by-N
matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
entry in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
DGEES computes for an N-by-N real nonsymmetric matrix A, the eigenvalues,
the real Schur form T, and, optionally, the matrix of Schur vectors Z.
This gives the Schur factorization A = Z*T*(Z**T).
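The real Schur factorization that DGEES computes can be reproduced with SciPy's schur, used here as a hedged stand-in:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

T, Z = schur(A)                         # real Schur form: A = Z * T * Z**T
assert np.allclose(Z @ T @ Z.T, A)      # factorization reconstructs A
assert np.allclose(Z @ Z.T, np.eye(4))  # Z is orthogonal
```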
DGEESX computes for an N-by-N real nonsymmetric matrix A, the
eigenvalues, the real Schur form T, and, optionally, the matrix of Schur
vectors Z. This gives the Schur factorization A = Z*T*(Z**T).
DGEEV computes for an N-by-N real nonsymmetric matrix A, the eigenvalues
and, optionally, the left and/or right eigenvectors.
DGEEVX computes for an N-by-N real nonsymmetric matrix A, the eigenvalues
and, optionally, the left and/or right eigenvectors.
DGEGS computes, for a pair of N-by-N real nonsymmetric matrices (A,B),
the generalized eigenvalues (alphar +/- alphai*i, beta), the real Schur
form of (A,B), and, optionally, the left and/or right Schur vectors.
DGEGV computes, for a pair of N-by-N real nonsymmetric matrices (A,B),
the generalized eigenvalues (alphar +/- alphai*i, beta) and, optionally,
the left and/or right generalized eigenvectors (VL and VR).
DGEHD2 reduces a real general matrix A to upper Hessenberg form H by an
orthogonal similarity transformation: Q' * A * Q = H .
DGEHRD reduces a real general matrix A to upper Hessenberg form H by an
orthogonal similarity transformation: Q' * A * Q = H .
DGELQ2 computes an LQ factorization of a real m by n matrix A: A = L *
Q.
DGELQF computes an LQ factorization of a real M-by-N matrix A: A = L *
Q.
DGELS solves overdetermined or underdetermined real linear systems
involving an M-by-N matrix A, or its transpose, using a QR or LQ
factorization of A. It is assumed that A has full rank.
DGELSS computes the minimum norm solution to a real linear least squares
problem:
Minimize 2-norm(| b - A*x |).
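NumPy's lstsq solves the same minimum-norm least squares problem via the SVD, which makes it a convenient illustration of what DGELSS does (the correspondence to complib's routine is an assumption, not a binding):

```python
import numpy as np

# Overdetermined least squares problem: minimize || b - A*x ||_2
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x, residuals, rank, s = np.linalg.lstsq(A, b, rcond=None)  # SVD-based solve
# At the minimizer, the residual is orthogonal to the columns of A
assert np.allclose(A.T @ (A @ x - b), 0.0)
```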
DGELSX computes the minimum-norm solution to a real linear least squares
problem:
minimize || A * X - B ||
DGEQL2 computes a QL factorization of a real m by n matrix A: A = Q * L.
DGEQLF computes a QL factorization of a real M-by-N matrix A: A = Q * L.
DGEQPF computes a QR factorization with column pivoting of a real M-by-N
matrix A: A*P = Q*R.
DGEQR2 computes a QR factorization of a real m by n matrix A: A = Q * R.
DGEQRF computes a QR factorization of a real M-by-N matrix A: A = Q * R.
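The QR factorization computed by DGEQRF (there returned in compact reflector form) can be illustrated with NumPy's qr, which yields Q and R explicitly:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

Q, R = np.linalg.qr(A)              # A = Q * R, Q has orthonormal columns
assert np.allclose(Q @ R, A)
assert np.allclose(np.triu(R), R)   # R is upper triangular
```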
DGERFS improves the computed solution to a system of linear equations and
provides error bounds and backward error estimates for the solution.
DGERQ2 computes an RQ factorization of a real m by n matrix A: A = R *
Q.
DGERQF computes an RQ factorization of a real M-by-N matrix A: A = R *
Q.
DGESV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
DGESVD computes the singular value decomposition (SVD) of a real M-by-N
matrix A, optionally computing the left and/or right singular vectors.
The SVD is written
A = U * SIGMA * transpose(V)
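The decomposition A = U * SIGMA * transpose(V) can be verified with NumPy's svd, shown as an illustrative analogue of DGESVD:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U * diag(s) * V**T
assert np.allclose(U @ np.diag(s) @ Vt, A)
assert np.all(s[:-1] >= s[1:])      # singular values in descending order
```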
DGESVX uses the LU factorization to compute the solution to a real system
of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
DGETF2 computes an LU factorization of a general m-by-n matrix A using
partial pivoting with row interchanges.
DGETRF computes an LU factorization of a general M-by-N matrix A using
partial pivoting with row interchanges.
DGETRI computes the inverse of a matrix using the LU factorization
computed by DGETRF.
DGETRS solves a system of linear equations
A * X = B or A' * X = B with a general N-by-N matrix A using the LU
factorization computed by DGETRF.
DGGBAK forms the right or left eigenvectors of the generalized eigenvalue
problem by backward transformation on the computed eigenvectors of the
balanced matrix output by DGGBAL.
DGGBAL balances a pair of general real matrices (A,B) for the generalized
eigenvalue problem A*X = lambda*B*X. This involves, first, permuting A
and B by similarity transformations to isolate eigenvalues in the first 1
to ILO-1 and last IHI+1 to N elements on the diagonal; and second,
applying a diagonal similarity transformation to rows and columns ILO to
IHI to make them as close in norm as possible.
DGGGLM solves a generalized linear regression model (GLM) problem:
minimize y'*y subject to d = A*x + B*y
DGGHRD reduces a pair of real matrices (A,B) to generalized upper
Hessenberg form using orthogonal similarity transformations, where A is a
(generally non-symmetric) square matrix and B is upper triangular. More
precisely, DGGHRD simultaneously decomposes A into Q H Z' and B into
Q T Z' , where H is upper Hessenberg, T is upper triangular, Q and Z are
orthogonal, and ' means transpose.
DGGLSE solves the linear equality constrained least squares (LSE)
problem:
minimize || A*x - c ||_2 subject to B*x = d
DGGQRF computes a generalized QR factorization of an N-by-M matrix A and
an N-by-P matrix B:
A = Q*R, B = Q*T*Z,
DGGRQF computes a generalized RQ factorization of an M-by-N matrix A and
a P-by-N matrix B:
A = R*Q, B = Z*T*Q,
DGGSVD computes the generalized singular value decomposition (GSVD) of
the M-by-N matrix A and P-by-N matrix B:
U'*A*Q = D1*( 0 R ), V'*B*Q = D2*( 0 R ) (1)
where U, V and Q are orthogonal matrices, and Z' is the transpose of Z.
Let K+L = the numerical effective rank of the matrix (A',B')', then R is
a K+L-by-K+L nonsingular upper triangular matrix, and D1 and D2 are
"diagonal" matrices of the following structures, respectively:
DGGSVP computes orthogonal matrices U, V and Q such that A23 is upper
trapezoidal. K+L = the effective rank of (M+P)-by-N matrix (A',B')'. Z'
denotes the transpose of Z.
DGTCON estimates the reciprocal of the condition number of a real
tridiagonal matrix A using the LU factorization as computed by DGTTRF.
DGTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is tridiagonal, and provides error bounds and
backward error estimates for the solution.
DGTSV solves the equation
A*X = B,
where A is an N-by-N tridiagonal matrix, by Gaussian elimination with
partial pivoting.
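The elimination in DGTSV can be sketched as the classic Thomas algorithm. This simplified version omits the partial pivoting that DGTSV performs, so it assumes the matrix needs no row interchanges; the name `tridiag_solve` is hypothetical.

```python
# Thomas-algorithm sketch of a tridiagonal solve (no pivoting, unlike DGTSV).
def tridiag_solve(dl, d, du, b):
    # dl: sub-diagonal (n-1), d: diagonal (n), du: super-diagonal (n-1)
    n = len(d)
    d, b = d[:], b[:]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        m = dl[i - 1] / d[i - 1]
        d[i] -= m * du[i - 1]
        b[i] -= m * b[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - du[i] * x[i + 1]) / d[i]
    return x

# A = tridiag(1, 2, 1) of order 3; x = (1,1,1) gives b = (3, 4, 3).
x = tridiag_solve([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [3.0, 4.0, 3.0])
```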
DGTSVX uses the LU factorization to compute the solution to a real system
of linear equations A * X = B or A**T * X = B, where A is a tridiagonal
matrix of order N and X and B are N-by-NRHS matrices.
DGTTRF computes an LU factorization of a real tridiagonal matrix A using
elimination with partial pivoting and row interchanges.
DGTTRS solves one of the systems of equations
A*X = B or A'*X = B, with a tridiagonal matrix A using the LU
factorization computed by DGTTRF.
DHGEQZ implements a single-/double-shift version of the QZ method for
finding the generalized eigenvalues of the equation det( A - w*B ) = 0.
In addition, the pair (A,B) may be reduced to generalized Schur form: B
is upper triangular, and A is block upper triangular, where the diagonal
blocks are either 1x1 or 2x2, the 2x2 blocks having complex generalized
eigenvalues (see the description of the argument JOB).
If JOB='S', then the pair (A,B) is simultaneously reduced to Schur form
using one orthogonal transformation (usually called Q) on the left and
another (usually called Z) on the right. The 2x2 upper-triangular
diagonal blocks of B corresponding to 2x2 blocks of A will be reduced to
positive diagonal matrices. (I.e., if A(j+1,j) is non-zero, then
B(j+1,j)=B(j,j+1)=0 and B(j,j) and B(j+1,j+1) will be positive.)
DHSEIN uses inverse iteration to find specified right and/or left
eigenvectors of a real upper Hessenberg matrix H.
DHSEQR computes the eigenvalues of a real upper Hessenberg matrix H and,
optionally, the matrices T and Z from the Schur decomposition H = Z T
Z**T, where T is an upper quasi-triangular matrix (the Schur form), and Z
is the orthogonal matrix of Schur vectors.
DLABAD takes as input the values computed by DLAMCH for underflow and
overflow, and returns the square root of each of these values if the log
of LARGE is sufficiently large. This subroutine is intended to identify
machines with a large exponent range, such as the Crays, and redefine the
underflow and overflow limits to be the square roots of the values
computed by DLAMCH. This subroutine is needed because DLAMCH does not
compensate for poor arithmetic in the upper half of the exponent range,
as is found on a Cray.
DLABRD reduces the first NB rows and columns of a real general m by n
matrix A to upper or lower bidiagonal form by an orthogonal
transformation Q' * A * P, and returns the matrices X and Y which are
needed to apply the transformation to the unreduced part of A.
DLACON estimates the 1-norm of a square, real matrix A. Reverse
communication is used for evaluating matrix-vector products.
DLACPY copies all or part of a two-dimensional matrix A to another matrix
B.
DLADIV performs complex division in real arithmetic, avoiding
unnecessary overflow; see D. Knuth, The Art of Computer Programming,
Vol. 2, p. 195.
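The division algorithm DLADIV uses (Smith's method) can be sketched in a few lines: branch on which of |c|, |d| is larger so that the intermediate ratio never exceeds 1 in magnitude. The function name `cdiv` is hypothetical.

```python
# Smith's-method sketch of complex division (a+bi)/(c+di) in real
# arithmetic, avoiding unnecessary overflow in the intermediates.
def cdiv(a, b, c, d):
    if abs(d) <= abs(c):
        r = d / c
        t = 1.0 / (c + d * r)
        return (a + b * r) * t, (b - a * r) * t
    else:
        r = c / d
        t = 1.0 / (c * r + d)
        return (a * r + b) * t, (b * r - a) * t

p, q = cdiv(1.0, 2.0, 3.0, 4.0)  # (1+2i)/(3+4i) = 0.44 + 0.08i
```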
DLAE2 computes the eigenvalues of a 2-by-2 symmetric matrix
[ A B ]
[ B C ]. On return, RT1 is the eigenvalue of larger absolute
value, and RT2 is the eigenvalue of smaller absolute value.
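The closed form behind DLAE2 follows from the characteristic polynomial of a 2-by-2 symmetric matrix. A simplified pure-Python stand-in (the LAPACK routine additionally guards the arithmetic against overflow; the name `eig2` is hypothetical):

```python
import math

# Eigenvalues of the 2x2 symmetric matrix [[a, b], [b, c]]:
# lambda = (a+c)/2 +/- sqrt(((a-c)/2)**2 + b**2).
def eig2(a, b, c):
    mid = (a + c) / 2.0
    rad = math.hypot((a - c) / 2.0, b)
    rt1, rt2 = mid + rad, mid - rad
    # Like DLAE2, return the larger-magnitude eigenvalue first.
    return (rt1, rt2) if abs(rt1) >= abs(rt2) else (rt2, rt1)

rt1, rt2 = eig2(2.0, 1.0, 2.0)  # eigenvalues of [[2,1],[1,2]] are 3 and 1
```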
DLAEBZ contains the iteration loops which compute and use the function
N(w), which is the count of eigenvalues of a symmetric tridiagonal matrix
T less than or equal to its argument w. It performs a choice of two
types of loops:
DLAEIN uses inverse iteration to find a right or left eigenvector
corresponding to the eigenvalue (WR,WI) of a real upper Hessenberg matrix
H.
DLAEV2 computes the eigendecomposition of a 2-by-2 symmetric matrix
[ A B ]
[ B C ]. On return, RT1 is the eigenvalue of larger absolute
value, RT2 is the eigenvalue of smaller absolute value, and (CS1,SN1) is
the unit right eigenvector for RT1, giving the decomposition
DLAEXC swaps adjacent diagonal blocks T11 and T22 of order 1 or 2 in an
upper quasi-triangular matrix T by an orthogonal similarity
transformation.
DLAG2 computes the eigenvalues of a 2 x 2 generalized eigenvalue problem
A - w B, with scaling as necessary to avoid over-/underflow.
DLAGS2 computes 2-by-2 orthogonal matrices U, V and Q such that, for an
upper (or lower) triangular 2-by-2 pair A and B, the transformed
matrices U'*A*Q and V'*B*Q are lower (respectively upper) triangular,
depending on the logical argument UPPER.
DLAGTF factorizes the matrix (T - lambda*I), where T is an n by n
tridiagonal matrix and lambda is a scalar, as
T - lambda*I = P*L*U,
where P is a permutation matrix, L is a unit lower tridiagonal matrix
with at most one non-zero sub-diagonal element per column, and U is an
upper triangular matrix with at most two non-zero super-diagonal elements
per column.
DLAGTM performs a matrix-vector product of the form
B := alpha * A * X + beta * B,
where A is a tridiagonal matrix of order N, and B and X are N-by-NRHS
matrices.
DLAGTS may be used to solve one of the systems of equations
(T - lambda*I)*x = y or (T - lambda*I)'*x = y,
where T is an n by n tridiagonal matrix, for x, following the
factorization of (T - lambda*I) as
(T - lambda*I) = P*L*U
by routine DLAGTF.
DLAHQR is an auxiliary routine called by DHSEQR to update the eigenvalues
and Schur decomposition already computed by DHSEQR, by dealing with the
Hessenberg submatrix in rows and columns ILO to IHI.
DLAHRD reduces the first NB columns of a real general n-by-(n-k+1) matrix
A so that elements below the k-th subdiagonal are zero. The reduction is
performed by an orthogonal similarity transformation Q' * A * Q. The
routine returns the matrices V and T which determine Q as a block
reflector I - V*T*V', and also the matrix Y = A * V * T.
DLAIC1 applies one step of incremental condition estimation in its
simplest version:
Let x, twonorm(x) = 1, be an approximate singular vector of a j-by-j
lower triangular matrix L, such that twonorm(L*x) = sest.
DLALN2 solves a system of the form (ca A - w D ) X = s B or (ca A' - w
D) X = s B with possible scaling ("s") and perturbation of A. (A'
means A-transpose.)
A is an NA x NA real matrix, ca is a real scalar, D is an NA x NA real
diagonal matrix, w is a real or complex value, and X and B are NA x 1
matrices -- real if w is real, complex if w is complex. NA may be 1 or
2.
DLAMCH determines double precision machine parameters.
DLANGB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
band matrix A, with kl sub-diagonals and ku super-diagonals.
DLANGE returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
matrix A.
DLANGT returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
tridiagonal matrix A.
DLANHS returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
Hessenberg matrix A.
DLANSB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
symmetric band matrix A, with k super-diagonals.
DLANSP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
symmetric matrix A, supplied in packed form.
DLANST returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
symmetric tridiagonal matrix A.
DLANSY returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
symmetric matrix A.
DLANTB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
triangular band matrix A, with ( k + 1 ) diagonals.
DLANTP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
triangular matrix A, supplied in packed form.
DLANTR returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
trapezoidal or triangular matrix A.
DLANV2 computes the Schur factorization of a real 2-by-2 nonsymmetric
matrix in standard form:
[ A  B ] = [ CS -SN ] [ AA  BB ] [ CS  SN ]
[ C  D ]   [ SN  CS ] [ CC  DD ] [-SN  CS ]
DLAPLL, given two column vectors X and Y, forms A = ( X Y ). The
subroutine first computes the QR factorization of A = Q*R, and then
computes the SVD of the 2-by-2 upper triangular matrix R. The smaller
singular value of R is returned in SSMIN, which is used as the
measurement of the linear dependency of the vectors X and Y.
DLAPMT rearranges the columns of the M by N matrix X as specified by the
permutation K(1),K(2),...,K(N) of the integers 1,...,N. If FORWRD =
.TRUE., forward permutation:
DLAPY2 returns sqrt(x**2+y**2), taking care not to cause unnecessary
overflow.
DLAPY3 returns sqrt(x**2+y**2+z**2), taking care not to cause unnecessary
overflow.
DLAQGB equilibrates a general M by N band matrix A with KL subdiagonals
and KU superdiagonals using the row and column scaling factors in the
vectors R and C.
DLAQGE equilibrates a general M by N matrix A using the row and column
scaling factors in the vectors R and C.
DLAQSB equilibrates a symmetric band matrix A using the scaling factors
in the vector S.
DLAQSP equilibrates a symmetric matrix A using the scaling factors in the
vector S.
DLAQSY equilibrates a symmetric matrix A using the scaling factors in the
vector S.
DLAQTR solves the real quasi-triangular system op(T)*p = scale*c, or the
complex quasi-triangular system op(T + iB)*(p+iq) = scale*(c+id), in
real arithmetic.
DLAR2V applies a vector of real plane rotations from both sides to a
sequence of 2-by-2 real symmetric matrices, defined by the elements of
the vectors x, y and z. For i = 1,2,...,n
( x(i)  z(i) ) := (  c(i)  s(i) ) ( x(i)  z(i) ) ( c(i) -s(i) )
( z(i)  y(i) )    ( -s(i)  c(i) ) ( z(i)  y(i) ) ( s(i)  c(i) )
DLARF applies a real elementary reflector H to a real m by n matrix C,
from either the left or the right. H is represented in the form
H = I - tau * v * v'
DLARFB applies a real block reflector H or its transpose H' to a real m
by n matrix C, from either the left or the right.
DLARFG generates a real elementary reflector H of order n, such that
H * ( alpha ) = ( beta ),   H' * H = I,
    (   x   )   (  0   )
where alpha and beta are scalars, and x is an (n-1)-element real vector.
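The construction DLARFG performs can be sketched in pure Python: choose beta = -sign(alpha)*norm([alpha; x]), then scale x so the reflector vector v has v(1) = 1. This is a simplified stand-in (the LAPACK routine also rescales to avoid underflow); `make_reflector` is a hypothetical name.

```python
import math

# Householder-reflector sketch: given alpha and vector x, find beta, tau,
# v with v[0] = 1 so that (I - tau*v*v') * [alpha; x] = [beta; 0].
def make_reflector(alpha, x):
    xnorm = math.sqrt(sum(t * t for t in x))
    if xnorm == 0.0:
        return alpha, 0.0, [1.0] + list(x)  # H = I
    beta = -math.copysign(math.sqrt(alpha * alpha + xnorm * xnorm), alpha)
    tau = (beta - alpha) / beta
    v = [1.0] + [t / (alpha - beta) for t in x]
    return beta, tau, v

beta, tau, v = make_reflector(3.0, [4.0])  # maps [3; 4] to [beta; 0]
```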
DLARFT forms the triangular factor T of a real block reflector H of order
n, which is defined as a product of k elementary reflectors.
DLARFX applies a real elementary reflector H to a real m by n matrix C,
from either the left or the right. H is represented in the form
DLARGV generates a vector of real plane rotations, determined by elements
of the real vectors x and y. For i = 1,2,...,n
(  c(i)  s(i) ) ( x(i) ) = ( a(i) )
( -s(i)  c(i) ) ( y(i) )   (  0   )
DLARNV returns a vector of n random real numbers from a uniform or normal
distribution.
DLARTG generates a plane rotation so that
[  CS  SN ] . [ F ] = [ R ]   where CS**2 + SN**2 = 1.
[ -SN  CS ]   [ G ]   [ 0 ]
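The rotation DLARTG produces can be sketched directly from the definition: cs = f/r, sn = g/r with r = hypot(f, g). A minimal stand-in (the LAPACK routine adds scaling for extreme magnitudes); `make_rotation` is a hypothetical name.

```python
import math

# Plane-rotation sketch: compute cs, sn, r with
#   [  cs  sn ] [ f ]   [ r ]
#   [ -sn  cs ] [ g ] = [ 0 ]
def make_rotation(f, g):
    if g == 0.0:
        return 1.0, 0.0, f
    if f == 0.0:
        return 0.0, 1.0, g
    r = math.copysign(math.hypot(f, g), f)
    return f / r, g / r, r

cs, sn, r = make_rotation(3.0, 4.0)  # cs=0.6, sn=0.8, r=5.0
```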
DLARTV applies a vector of real plane rotations to elements of the real
vectors x and y. For i = 1,2,...,n
( x(i) ) := (  c(i)  s(i) ) ( x(i) )
( y(i) )    ( -s(i)  c(i) ) ( y(i) )
DLARUV returns a vector of n random real numbers from a uniform (0,1)
distribution (n <= 128).
DLAS2 computes the singular values of the 2-by-2 matrix
[ F G ]
[ 0 H ]. On return, SSMIN is the smaller singular value and SSMAX
is the larger singular value.
DLASCL multiplies the M by N real matrix A by the real scalar CTO/CFROM.
This is done without over/underflow as long as the final result
CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that A may be
full, upper triangular, lower triangular, upper Hessenberg, or banded.
DLASET initializes an m-by-n matrix A to BETA on the diagonal and ALPHA
on the offdiagonals.
DLASR performs the transformation consisting of a sequence of plane
rotations determined by the parameters PIVOT and DIRECT as follows ( z =
m when SIDE = 'L' or 'l' and z = n when SIDE = 'R' or 'r' ):
DLASSQ returns the values scl and smsq such that
( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq,
where x( i ) = X( 1 + ( i - 1 )*INCX ). The value of sumsq is assumed
to be non-negative and scl returns the value
scl = max( scale, abs( x( i ) ) ).
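The invariant DLASSQ maintains, scale**2 * sumsq = sum of squares, can be sketched in pure Python: whenever a new element exceeds the current scale, rescale sumsq before absorbing it. The name `update_ssq` is hypothetical.

```python
# Scaled sum-of-squares sketch: maintain (scale, sumsq) with
# scale**2 * sumsq = sum of x(i)**2, rescaling so no square overflows.
def update_ssq(x, scale=0.0, sumsq=1.0):
    for xi in x:
        a = abs(xi)
        if a == 0.0:
            continue
        if scale < a:
            # New largest element: fold the old scale into sumsq.
            sumsq = 1.0 + sumsq * (scale / a) ** 2
            scale = a
        else:
            sumsq += (a / scale) ** 2
    return scale, sumsq

scale, sumsq = update_ssq([3.0, 4.0])  # scale**2 * sumsq == 25
```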
DLASV2 computes the singular value decomposition of a 2-by-2 triangular
matrix
[ F G ]
[ 0 H ]. On return, abs(SSMAX) is the larger singular value,
abs(SSMIN) is the smaller singular value, and (CSL,SNL) and (CSR,SNR) are
the left and right singular vectors for abs(SSMAX), giving the
decomposition
[ CSL SNL ] [ F G ] [ CSR -SNR ] = [ SSMAX 0 ]
[-SNL CSL ] [ 0 H ] [ SNR CSR ] [ 0 SSMIN ].
DLASWP performs a series of row interchanges on the matrix A. One row
interchange is initiated for each of rows K1 through K2 of A.
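The interchange pass DLASWP applies can be sketched as follows, using 0-based indices rather than the Fortran convention; `laswp` and the in-place list representation are illustrative assumptions.

```python
# Row-interchange sketch: for each k from k1 to k2, swap row k with row
# ipiv[k], the pivot order recorded by an LU factorization.
def laswp(A, ipiv, k1, k2):
    for k in range(k1, k2 + 1):
        p = ipiv[k]
        if p != k:
            A[k], A[p] = A[p], A[k]
    return A

A = laswp([[1.0], [2.0], [3.0]], [2, 1, 2], 0, 2)  # swaps rows 0 and 2
```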
DLASY2 solves for the N1 by N2 matrix X, 1 <= N1,N2 <= 2, in
op(TL)*X + ISGN*X*op(TR) = SCALE*B,
where TL is N1 by N1, TR is N2 by N2, B is N1 by N2, and ISGN = 1 or -1.
op(T) = T or T', where T' denotes the transpose of T.
DLASYF computes a partial factorization of a real symmetric matrix A
using the Bunch-Kaufman diagonal pivoting method. The partial
factorization has the form:
DLATBS solves one of the triangular systems A*x = s*b or A'*x = s*b with
scaling to prevent overflow, where A is an upper or lower triangular
band matrix, A' denotes the transpose of A, x and b are n-element
vectors, and s is a scaling factor, usually less than or equal to 1,
chosen so that the components of x will be less than the overflow
threshold. If the unscaled problem will not cause overflow, the Level 2
BLAS routine DTBSV is called. If the matrix A is singular (A(j,j) = 0
for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is
returned.
DLATPS solves one of the triangular systems A*x = s*b or A'*x = s*b with
scaling to prevent overflow, where A is an upper or lower triangular
matrix stored in packed form, A' denotes the transpose of A, x and b are
n-element vectors, and s is a scaling factor, usually less than or equal
to 1, chosen so that the components of x will be less than the overflow
threshold. If the unscaled problem will not cause overflow, the Level 2
BLAS routine DTPSV is called. If the matrix A is singular (A(j,j) = 0
for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is
returned.
DLATRD reduces NB rows and columns of a real symmetric matrix A to
symmetric tridiagonal form by an orthogonal similarity transformation Q'
* A * Q, and returns the matrices V and W which are needed to apply the
transformation to the unreduced part of A.
DLATRS solves one of the triangular systems A*x = s*b or A'*x = s*b with
scaling to prevent overflow, where A is an upper or lower triangular
matrix, A' denotes the transpose of A, x and b are n-element vectors,
and s is a scaling factor, usually less than or equal to 1, chosen so
that the components of x will be less than the overflow threshold. If
the unscaled problem will not cause overflow, the Level 2 BLAS routine
DTRSV is called. If the matrix A is singular (A(j,j) = 0 for some j),
then s is set to 0 and a non-trivial solution to A*x = 0 is returned.
DLATZM applies a Householder matrix generated by DTZRQF to a matrix.
DLAUU2 computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A.
DLAUUM computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A.
DLAZRO initializes a 2-D array A to BETA on the diagonal and ALPHA on the
offdiagonals.
DOPGTR generates a real orthogonal matrix Q which is defined as the
product of n-1 elementary reflectors of order n, as returned by DSPTRD
using packed storage:
if UPLO = 'U', Q = H(n-1) . . . H(2) H(1),
DOPMTR overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DORG2L generates an m by n real matrix Q with orthonormal columns, which
is defined as the last n columns of a product of k elementary reflectors
of order m
DORG2R generates an m by n real matrix Q with orthonormal columns, which
is defined as the first n columns of a product of k elementary reflectors
of order m
DORGBR generates one of the matrices Q or P**T determined by DGEBRD when
reducing a real matrix A to bidiagonal form: A = Q * B * P**T. Q and
P**T are defined as products of elementary reflectors H(i) or G(i)
respectively.
DORGHR generates a real orthogonal matrix Q which is defined as the
product of IHI-ILO elementary reflectors of order N, as returned by
DGEHRD:
Q = H(ilo) H(ilo+1) . . . H(ihi-1).
DORGL2 generates an m by n real matrix Q with orthonormal rows, which is
defined as the first m rows of a product of k elementary reflectors of
order n
DORGLQ generates an M-by-N real matrix Q with orthonormal rows, which is
defined as the first M rows of a product of K elementary reflectors of
order N
DORGQL generates an M-by-N real matrix Q with orthonormal columns, which
is defined as the last N columns of a product of K elementary reflectors
of order M
DORGQR generates an M-by-N real matrix Q with orthonormal columns, which
is defined as the first N columns of a product of K elementary reflectors
of order M
DORGR2 generates an m by n real matrix Q with orthonormal rows, which is
defined as the last m rows of a product of k elementary reflectors of
order n
DORGRQ generates an M-by-N real matrix Q with orthonormal rows, which is
defined as the last M rows of a product of K elementary reflectors of
order N
DORGTR generates a real orthogonal matrix Q which is defined as the
product of n-1 elementary reflectors of order N, as returned by DSYTRD:
if UPLO = 'U', Q = H(n-1) . . . H(2) H(1),
DORM2L overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q' (depending on SIDE and TRANS), where Q is a real orthogonal
matrix defined as the product of k elementary reflectors
DORM2R overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q' (depending on SIDE and TRANS), where Q is a real orthogonal
matrix defined as the product of k elementary reflectors
If VECT = 'Q', DORMBR overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DORMHR overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DORML2 overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q' (depending on SIDE and TRANS), where Q is a real orthogonal
matrix defined as the product of k elementary reflectors
DORMLQ overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DORMQL overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DORMQR overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DORMR2 overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q' (depending on SIDE and TRANS), where Q is a real orthogonal
matrix defined as the product of k elementary reflectors
DORMRQ overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DORMTR overwrites the general real M-by-N matrix C with
                SIDE = 'L'     SIDE = 'R'
TRANS = 'N':      Q * C          C * Q
TRANS = 'T':      Q**T * C       C * Q**T
DPBCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric positive definite band matrix using the Cholesky
factorization A = U**T*U or A = L*L**T computed by DPBTRF.
DPBEQU computes row and column scalings intended to equilibrate a
symmetric positive definite band matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
DPBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite and banded,
and provides error bounds and backward error estimates for the solution.
DPBSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite band
matrix and X and B are N-by-NRHS matrices.
DPBSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to
compute the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite band
matrix and X and B are N-by-NRHS matrices.
DPBTF2 computes the Cholesky factorization of a real symmetric positive
definite band matrix A.
DPBTRF computes the Cholesky factorization of a real symmetric positive
definite band matrix A.
DPBTRS solves a system of linear equations A*X = B with a symmetric
positive definite band matrix A using the Cholesky factorization A =
U**T*U or A = L*L**T computed by DPBTRF.
DPOCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric positive definite matrix using the Cholesky
factorization A = U**T*U or A = L*L**T computed by DPOTRF.
DPOEQU computes row and column scalings intended to equilibrate a
symmetric positive definite matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
DPORFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite, and provides
error bounds and backward error estimates for the solution.
DPOSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix and
X and B are N-by-NRHS matrices.
DPOSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to
compute the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix and
X and B are N-by-NRHS matrices.
DPOTF2 computes the Cholesky factorization of a real symmetric positive
definite matrix A.
DPOTRF computes the Cholesky factorization of a real symmetric positive
definite matrix A.
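The factorization DPOTRF computes, A = L*L' (or U'*U), can be sketched in pure Python for a small matrix. This unblocked sketch assumes A is symmetric positive definite; the name `cholesky` is hypothetical and the LAPACK routine works in place on one triangle.

```python
import math

# Lower-triangular Cholesky sketch, A = L * L', for a small SPD matrix.
def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            # Subtract the already-computed part of the inner product.
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

L = cholesky([[4.0, 2.0], [2.0, 5.0]])  # L = [[2, 0], [1, 2]]
```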
DPOTRI computes the inverse of a real symmetric positive definite matrix
A using the Cholesky factorization A = U**T*U or A = L*L**T computed by
DPOTRF.
DPOTRS solves a system of linear equations A*X = B with a symmetric
positive definite matrix A using the Cholesky factorization A = U**T*U or
A = L*L**T computed by DPOTRF.
DPPCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric positive definite packed matrix using the Cholesky
factorization A = U**T*U or A = L*L**T computed by DPPTRF.
DPPEQU computes row and column scalings intended to equilibrate a
symmetric positive definite matrix A in packed storage and reduce its
condition number (with respect to the two-norm). S contains the scale
factors, S(i)=1/sqrt(A(i,i)), chosen so that the scaled matrix B with
elements B(i,j)=S(i)*A(i,j)*S(j) has ones on the diagonal. This choice
of S puts the condition number of B within a factor N of the smallest
possible condition number over all possible diagonal scalings.
DPPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite and packed,
and provides error bounds and backward error estimates for the solution.
DPPSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
DPPSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to
compute the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
DPPTRF computes the Cholesky factorization of a real symmetric positive
definite matrix A stored in packed format.
DPPTRI computes the inverse of a real symmetric positive definite matrix
A using the Cholesky factorization A = U**T*U or A = L*L**T computed by
DPPTRF.
DPPTRS solves a system of linear equations A*X = B with a symmetric
positive definite matrix A in packed storage using the Cholesky
factorization A = U**T*U or A = L*L**T computed by DPPTRF.
DPTCON computes the reciprocal of the condition number (in the 1-norm) of
a real symmetric positive definite tridiagonal matrix using the
factorization A = L*D*L**T or A = U**T*D*U computed by DPTTRF.
DPTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric positive definite tridiagonal matrix by first factoring the
matrix using DPTTRF, and then calling DBDSQR to compute the singular
values of the bidiagonal factor.
DPTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite and
tridiagonal, and provides error bounds and backward error estimates for
the solution.
DPTSV computes the solution to a real system of linear equations A*X = B,
where A is an N-by-N symmetric positive definite tridiagonal matrix, and
X and B are N-by-NRHS matrices.
DPTSVX uses the factorization A = L*D*L**T to compute the solution to a
real system of linear equations A*X = B, where A is an N-by-N symmetric
positive definite tridiagonal matrix and X and B are N-by-NRHS matrices.
DPTTRF computes the factorization of a real symmetric positive definite
tridiagonal matrix A.
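The A = L*D*L' factorization DPTTRF computes needs only one pass over the sub-diagonal. A minimal sketch, with the hypothetical name `pttrf`, overwriting the diagonal with D and the sub-diagonal with the multipliers of L:

```python
# L*D*L' sketch for a symmetric positive definite tridiagonal matrix:
# d holds the diagonal, e the sub-diagonal; return D and the
# sub-diagonal multipliers of the unit lower bidiagonal factor L.
def pttrf(d, e):
    d, e = d[:], e[:]
    for i in range(len(e)):
        m = e[i] / d[i]       # L(i+1,i)
        d[i + 1] -= m * e[i]  # D(i+1,i+1)
        e[i] = m
    return d, e

# tridiag(1, 2, 1) of order 3: D = (2, 3/2, 4/3), L sub = (1/2, 2/3).
d, e = pttrf([2.0, 2.0, 2.0], [1.0, 1.0])
```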
DPTTRS solves a system of linear equations A * X = B with a symmetric
positive definite tridiagonal matrix A using the factorization A =
L*D*L**T or A = U**T*D*U computed by DPTTRF. (The two forms are
equivalent if A is real.)
DRSCL multiplies an n-element real vector x by the real scalar 1/a.
This is done without overflow or underflow as long as the final result
x/a does not overflow or underflow.
DSBEV computes all the eigenvalues and, optionally, eigenvectors of a
real symmetric band matrix A.
DSBEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric band matrix A. Eigenvalues/vectors can be selected by
specifying either a range of values or a range of indices for the desired
eigenvalues.
DSBTRD reduces a real symmetric band matrix A to symmetric tridiagonal
form T by an orthogonal similarity transformation: Q**T * A * Q = T.
DSPCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric packed matrix A using the factorization A = U*D*U**T
or A = L*D*L**T computed by DSPTRF.
DSPEV computes all the eigenvalues and, optionally, eigenvectors of a
real symmetric matrix A in packed storage.
DSPEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric matrix A in packed storage. Eigenvalues/vectors can be
selected by specifying either a range of values or a range of indices for
the desired eigenvalues.
DSPGST reduces a real symmetric-definite generalized eigenproblem to
standard form, using packed storage.
DSPGV computes all the eigenvalues and, optionally, the eigenvectors of a
real generalized symmetric-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B
are assumed to be symmetric, stored in packed format, and B is also
positive definite.
DSPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite and packed, and
provides error bounds and backward error estimates for the solution.
DSPSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric matrix stored in packed
format and X and B are N-by-NRHS matrices.
DSPSVX uses the diagonal pivoting factorization A = U*D*U**T or A =
L*D*L**T to compute the solution to a real system of linear equations A *
X = B, where A is an N-by-N symmetric matrix stored in packed format and
X and B are N-by-NRHS matrices.
DSPTRD reduces a real symmetric matrix A stored in packed form to
symmetric tridiagonal form T by an orthogonal similarity transformation:
Q**T * A * Q = T.
DSPTRF computes the factorization of a real symmetric matrix A stored in
packed format using the Bunch-Kaufman diagonal pivoting method:
A = U*D*U**T or A = L*D*L**T
DSPTRI computes the inverse of a real symmetric indefinite matrix A in
packed storage using the factorization A = U*D*U**T or A = L*D*L**T
computed by DSPTRF.
DSPTRS solves a system of linear equations A*X = B with a real symmetric
matrix A stored in packed format using the factorization A = U*D*U**T or
A = L*D*L**T computed by DSPTRF.
DSTEBZ computes the eigenvalues of a symmetric tridiagonal matrix T. The
user may ask for all eigenvalues, all eigenvalues in the half-open
interval (VL, VU], or the IL-th through IU-th eigenvalues.
DSTEIN computes the eigenvectors of a real symmetric tridiagonal matrix T
corresponding to specified eigenvalues, using inverse iteration.
DSTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric tridiagonal matrix using the implicit QL or QR method. The
eigenvectors of a full or band symmetric matrix can also be found if
DSYTRD or DSPTRD or DSBTRD has been used to reduce this matrix to
tridiagonal form.
DSTERF computes all eigenvalues of a symmetric tridiagonal matrix using
the Pal-Walker-Kahan variant of the QL or QR algorithm.
DSTEV computes all eigenvalues and, optionally, eigenvectors of a real
symmetric tridiagonal matrix A.
DSTEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric tridiagonal matrix A. Eigenvalues/vectors can be selected
by specifying either a range of values or a range of indices for the
desired eigenvalues.
DSYCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric matrix A using the factorization A = U*D*U**T or A =
L*D*L**T computed by DSYTRF.
DSYEV computes all eigenvalues and, optionally, eigenvectors of a real
symmetric matrix A.
DSYEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric matrix A. Eigenvalues and eigenvectors can be selected by
specifying either a range of values or a range of indices for the desired
eigenvalues.
DSYGS2 reduces a real symmetric-definite generalized eigenproblem to
standard form.
DSYGST reduces a real symmetric-definite generalized eigenproblem to
standard form.
DSYGV computes all the eigenvalues, and optionally, the eigenvectors of a
real generalized symmetric-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x.  Here A and B
are assumed to be symmetric and B is also positive definite.
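The reduction to standard form that underlies DSYGV (performed by DSYGST) can be sketched with NumPy; this is an illustration on hypothetical 2x2 data, not the library's own code path:

```python
import numpy as np

# Hypothetical symmetric A and symmetric positive definite B.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0, 0.5], [0.5, 1.0]])

# Reduce A*x = lambda*B*x to standard form, as DSYGST does:
# factor B = L*L**T (Cholesky), then form C = inv(L) * A * inv(L)**T.
L = np.linalg.cholesky(B)
C = np.linalg.solve(L, np.linalg.solve(L, A.T).T)   # inv(L) @ A @ inv(L).T
w = np.linalg.eigvalsh(C)                           # standard eigenvalues

# The eigenvalues of C are the generalized eigenvalues of (A, B).
w_direct = np.linalg.eigvals(np.linalg.solve(B, A))
assert np.allclose(np.sort(w), np.sort(w_direct.real))
```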
DSYRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite, and provides error
bounds and backward error estimates for the solution.
DSYSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric matrix and X and B are N-
by-NRHS matrices.
DSYSVX uses the diagonal pivoting factorization to compute the solution
to a real system of linear equations A * X = B, where A is an N-by-N
symmetric matrix and X and B are N-by-NRHS matrices.
DSYTD2 reduces a real symmetric matrix A to symmetric tridiagonal form T
by an orthogonal similarity transformation: Q' * A * Q = T.
DSYTF2 computes the factorization of a real symmetric matrix A using the
Bunch-Kaufman diagonal pivoting method:
A = U*D*U' or A = L*D*L'
DSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal
form T by an orthogonal similarity transformation: Q**T * A * Q = T.
DSYTRF computes the factorization of a real symmetric matrix A using the
Bunch-Kaufman diagonal pivoting method.  The form of the factorization is
A = U*D*U**T or A = L*D*L**T.
DSYTRI computes the inverse of a real symmetric indefinite matrix A using
the factorization A = U*D*U**T or A = L*D*L**T computed by DSYTRF.
DSYTRS solves a system of linear equations A*X = B with a real symmetric
matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by
DSYTRF.
DTBCON estimates the reciprocal of the condition number of a triangular
band matrix A, in either the 1-norm or the infinity-norm.
DTBRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular band
coefficient matrix.
DTBTRS solves a triangular system of the form
     A * X = B  or  A**T * X = B,
where A is a triangular band matrix of order N, and B is an N-by-NRHS
matrix.  A check is made to verify that A is nonsingular.
DTGEVC computes selected left and/or right generalized eigenvectors of a
pair of real upper triangular matrices (A,B).  The j-th generalized left
and right eigenvectors are y and x, respectively, satisfying
y**T*(A - w*B) = 0 and (A - w*B)*x = 0, where w is the corresponding
generalized eigenvalue.
DTGSJA computes the generalized singular value decomposition (GSVD) of
two real upper triangular (or trapezoidal) matrices A and B.
DTPCON estimates the reciprocal of the condition number of a packed
triangular matrix A, in either the 1-norm or the infinity-norm.
DTPRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular packed
coefficient matrix.
DTPTRI computes the inverse of a real upper or lower triangular matrix A
stored in packed format.
DTPTRS solves a triangular system of the form
     A * X = B  or  A**T * X = B,
where A is a triangular matrix of order N stored in packed format, and B
is an N-by-NRHS matrix.  A check is made to verify that A is nonsingular.
DTRCON estimates the reciprocal of the condition number of a triangular
matrix A, in either the 1-norm or the infinity-norm.
DTREVC computes all or some right and/or left eigenvectors of a real
upper quasi-triangular matrix T.
DTREXC reorders the real Schur factorization of a real matrix A =
Q*T*Q**T, so that the diagonal block of T with row index IFST is moved to
row ILST.
DTRRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular coefficient
matrix.
DTRSEN reorders the real Schur factorization of a real matrix A =
Q*T*Q**T, so that a selected cluster of eigenvalues appears in the
leading diagonal blocks of the upper quasi-triangular matrix T, and the
leading columns of Q form an orthonormal basis of the corresponding right
invariant subspace.
DTRSNA estimates reciprocal condition numbers for specified eigenvalues
and/or right eigenvectors of a real upper quasi-triangular matrix T (or
of any matrix Q*T*Q**T with Q orthogonal).
DTRSYL solves the real Sylvester matrix equation:
     op(A)*X + X*op(B) = scale*C  or  op(A)*X - X*op(B) = scale*C,
where op(A) = A or A**T.
DTRTI2 computes the inverse of a real upper or lower triangular matrix.
DTRTRI computes the inverse of a real upper or lower triangular matrix A.
DTRTRS solves a triangular system of the form
     A * X = B  or  A**T * X = B,
where A is a triangular matrix of order N, and B is an N-by-NRHS matrix.
A check is made to verify that A is nonsingular.
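The back-substitution with nonsingularity check that DTRTRS performs can be sketched in a few lines; this is a didactic NumPy version on hypothetical 3x3 data, not the library routine:

```python
import numpy as np

def upper_tri_solve(A, b):
    """Back-substitution for upper triangular A*x = b, with the
    nonsingularity check DTRTRS performs (zero diagonal -> error)."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        if A[i, i] == 0.0:
            raise ZeroDivisionError(f"A is singular: A[{i},{i}] = 0")
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 10.0, 8.0])
x = upper_tri_solve(A, b)
assert np.allclose(A @ x, b)
```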
DTZRQF reduces the M-by-N ( M<=N ) real upper trapezoidal matrix A to
upper triangular form by means of orthogonal transformations.
DZSUM1 takes the sum of the absolute values of a complex vector and
returns a double precision result.
ICMAX1 finds the index of the element whose real part has maximum
absolute value.
ILAENV is called from the LAPACK routines to choose problem-dependent
parameters for the local environment. See ISPEC for a description of the
parameters.
LSAME returns .TRUE. if CA is the same letter as CB regardless of case.
LSAMEN tests if the first N letters of CA are the same as the first N
letters of CB, regardless of case. LSAMEN returns .TRUE. if CA and CB
are equivalent except for case and .FALSE. otherwise. LSAMEN also
returns .FALSE. if LEN( CA ) or LEN( CB ) is less than N.
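The behaviour of LSAMEN can be sketched in a few lines (a Python analogue for illustration, not the Fortran function itself):

```python
def lsamen(n, ca, cb):
    """Python analogue of LAPACK's LSAMEN: case-insensitive comparison
    of the first n characters; False if either string is shorter than n."""
    if len(ca) < n or len(cb) < n:
        return False
    return ca[:n].upper() == cb[:n].upper()

# First three letters match regardless of case.
assert lsamen(3, "SGETRF", "sgemm") is True
# Too short: LEN(CA) < N gives .FALSE.
assert lsamen(2, "D", "DGE") is False
```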
SBDSQR computes the singular value decomposition (SVD) of a real N-by-N
(upper or lower) bidiagonal matrix B: B = Q * S * P' (P' denotes the
transpose of P), where S is a diagonal matrix with non-negative diagonal
elements (the singular values of B), and Q and P are orthogonal matrices.
SCSUM1 takes the sum of the absolute values of a complex vector and
returns a single precision result.
SGBCON estimates the reciprocal of the condition number of a real general
band matrix A, in either the 1-norm or the infinity-norm, using the LU
factorization computed by SGBTRF.
SGBEQU computes row and column scalings intended to equilibrate an M by N
band matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
element in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
SGBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is banded, and provides error bounds and
backward error estimates for the solution.
SGBSV computes the solution to a real system of linear equations A * X =
B, where A is a band matrix of order N with KL subdiagonals and KU
superdiagonals, and X and B are N-by-NRHS matrices.
SGBSVX uses the LU factorization to compute the solution to a real system
of linear equations A * X = B, A**T * X = B, or A**H * X = B, where A is
a band matrix of order N with KL subdiagonals and KU superdiagonals, and
X and B are N-by-NRHS matrices.
SGBTF2 computes an LU factorization of a real m-by-n band matrix A using
partial pivoting with row interchanges.
SGBTRF computes an LU factorization of a real m-by-n band matrix A using
partial pivoting with row interchanges.
SGBTRS solves a system of linear equations
A * X = B or A' * X = B with a general band matrix A using the LU
factorization computed by SGBTRF.
SGEBAK forms the right or left eigenvectors of a real general matrix by
backward transformation on the computed eigenvectors of the balanced
matrix output by SGEBAL.
SGEBAL balances a general real matrix A. This involves, first, permuting
A by a similarity transformation to isolate eigenvalues in the first 1 to
ILO-1 and last IHI+1 to N elements on the diagonal; and second, applying
a diagonal similarity transformation to rows and columns ILO to IHI to
make the rows and columns as close in norm as possible. Both steps are
optional.
SGEBD2 reduces a real general m by n matrix A to upper or lower
bidiagonal form B by an orthogonal transformation: Q' * A * P = B.
SGEBRD reduces a general real M-by-N matrix A to upper or lower
bidiagonal form B by an orthogonal transformation: Q**T * A * P = B.
SGECON estimates the reciprocal of the condition number of a general real
matrix A, in either the 1-norm or the infinity-norm, using the LU
factorization computed by SGETRF.
SGEEQU computes row and column scalings intended to equilibrate an M-by-N
matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
entry in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
SGEES computes for an N-by-N real nonsymmetric matrix A, the eigenvalues,
the real Schur form T, and, optionally, the matrix of Schur vectors Z.
This gives the Schur factorization A = Z*T*(Z**T).
SGEESX computes for an N-by-N real nonsymmetric matrix A, the
eigenvalues, the real Schur form T, and, optionally, the matrix of Schur
vectors Z. This gives the Schur factorization A = Z*T*(Z**T).
SGEEV computes for an N-by-N real nonsymmetric matrix A, the eigenvalues
and, optionally, the left and/or right eigenvectors.
SGEEVX computes for an N-by-N real nonsymmetric matrix A, the eigenvalues
and, optionally, the left and/or right eigenvectors.
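For illustration (a NumPy analogue of the SGEEV driver on a hypothetical 2x2 matrix, not the complib.sgimath interface):

```python
import numpy as np

# Hypothetical rotation matrix: real entries, complex eigenvalue pair +/- i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
w, vr = np.linalg.eig(A)          # eigenvalues and right eigenvectors

# Each right eigenvector v satisfies A @ v = w * v.
for j in range(len(w)):
    assert np.allclose(A @ vr[:, j], w[j] * vr[:, j])
```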
SGEGS computes, for a pair of N-by-N real nonsymmetric matrices (A,B),
the generalized eigenvalues (alphar +/- alphai*i, beta) and the real
Schur form (A,B).
SGEGV computes, for a pair of N-by-N real nonsymmetric matrices (A,B),
the generalized eigenvalues (alphar +/- alphai*i, beta) and the left
and/or right generalized eigenvectors (VL and VR).
SGEHD2 reduces a real general matrix A to upper Hessenberg form H by an
orthogonal similarity transformation: Q' * A * Q = H .
SGEHRD reduces a real general matrix A to upper Hessenberg form H by an
orthogonal similarity transformation: Q' * A * Q = H .
SGELQ2 computes an LQ factorization of a real m by n matrix A: A = L *
Q.
SGELQF computes an LQ factorization of a real M-by-N matrix A: A = L *
Q.
SGELS solves overdetermined or underdetermined real linear systems
involving an M-by-N matrix A, or its transpose, using a QR or LQ
factorization of A. It is assumed that A has full rank.
SGELSS computes the minimum norm solution to a real linear least squares
problem:
Minimize 2-norm(| b - A*x |).
SGELSX computes the minimum-norm solution to a real linear least squares
problem:
minimize || A * X - B ||
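A least squares problem of this kind can be illustrated with NumPy's SVD-based driver (the same approach SGELSS takes); the data are hypothetical:

```python
import numpy as np

# Overdetermined system: fit a line to three points.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Minimum-norm least squares solution to minimize || b - A*x ||_2.
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# At the minimizer the residual is orthogonal to the column space of A.
assert np.allclose(A.T @ (b - A @ x), 0.0)
```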
SGEQL2 computes a QL factorization of a real m by n matrix A: A = Q * L.
SGEQLF computes a QL factorization of a real M-by-N matrix A: A = Q * L.
SGEQPF computes a QR factorization with column pivoting of a real M-by-N
matrix A: A*P = Q*R.
SGEQR2 computes a QR factorization of a real m by n matrix A: A = Q * R.
SGEQRF computes a QR factorization of a real M-by-N matrix A: A = Q * R.
SGERFS improves the computed solution to a system of linear equations and
provides error bounds and backward error estimates for the solution.
SGERQ2 computes an RQ factorization of a real m by n matrix A: A = R *
Q.
SGERQF computes an RQ factorization of a real M-by-N matrix A: A = R *
Q.
SGESV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
SGESVD computes the singular value decomposition (SVD) of a real M-by-N
matrix A, optionally computing the left and/or right singular vectors.
The SVD is written
A = U * SIGMA * transpose(V)
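The factorization SGESVD produces can be illustrated with NumPy's LAPACK-backed SVD (a hypothetical 2x2 example, for illustration only):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Full SVD:  A = U * diag(s) * V**T.
U, s, Vt = np.linalg.svd(A)

assert np.all(s[:-1] >= s[1:])      # singular values are non-increasing
assert np.allclose(U @ np.diag(s) @ Vt, A)
```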
SGESVX uses the LU factorization to compute the solution to a real system
of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
SGETF2 computes an LU factorization of a general m-by-n matrix A using
partial pivoting with row interchanges.
SGETRF computes an LU factorization of a general M-by-N matrix A using
partial pivoting with row interchanges.
SGETRI computes the inverse of a matrix using the LU factorization
computed by SGETRF.
SGETRS solves a system of linear equations
A * X = B or A' * X = B with a general N-by-N matrix A using the LU
factorization computed by SGETRF.
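The unblocked scheme that SGETF2 uses (SGETRF applies the same factorization blockwise) can be sketched as follows; this is a didactic NumPy version on a made-up matrix, not the library code:

```python
import numpy as np

def lu_partial_pivot(A):
    """Unblocked LU with partial pivoting, the scheme SGETF2 uses.
    Returns permutation P and factors L, U with P @ A = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot row
        if p != k:                            # row interchange
            A[[k, p]] = A[[p, k]]
            P[[k, p]] = P[[p, k]]
        A[k + 1:, k] /= A[k, k]               # multipliers (column of L)
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return P, L, U

M = np.array([[0.0, 2.0],
              [1.0, 3.0]])          # zero pivot forces an interchange
P, L, U = lu_partial_pivot(M)
assert np.allclose(P @ M, L @ U)
```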
SGGBAK forms the right or left eigenvectors of the generalized eigenvalue
problem by backward transformation on the computed eigenvectors of the
balanced matrix output by SGGBAL.
SGGBAL balances a pair of general real matrices (A,B) for the generalized
eigenvalue problem A*X = lambda*B*X. This involves, first, permuting A
and B by similarity transformations to isolate eigenvalues in the first 1
to ILO-1 and last IHI+1 to N elements on the diagonal; and second,
applying a diagonal similarity transformation to rows and columns ILO to
IHI to make the rows and columns as close in norm as possible.
SGGGLM solves a general Gauss-Markov linear model (GLM) problem:
        minimize y'*y  subject to  d = A*x + B*y
SGGHRD reduces a pair of real matrices (A,B) to generalized upper
Hessenberg form using orthogonal similarity transformations, where A is a
(generally non-symmetric) square matrix and B is upper triangular. More
precisely, SGGHRD simultaneously decomposes A into Q H Z' and B into
Q T Z' , where H is upper Hessenberg, T is upper triangular, Q and Z are
orthogonal, and ' means transpose.
SGGLSE solves the linear equality constrained least squares (LSE)
problem:
minimize || A*x - c ||_2 subject to B*x = d
SGGQRF computes a generalized QR factorization of an N-by-M matrix A and
an N-by-P matrix B:
A = Q*R, B = Q*T*Z,
SGGRQF computes a generalized RQ factorization of an M-by-N matrix A and
a P-by-N matrix B:
A = R*Q, B = Z*T*Q,
SGGSVD computes the generalized singular value decomposition (GSVD) of
the M-by-N matrix A and P-by-N matrix B:
     U'*A*Q = D1*( 0 R ),    V'*B*Q = D2*( 0 R )
where U, V and Q are orthogonal matrices, and Z' denotes the transpose of
Z.  Let K+L = the effective numerical rank of the matrix (A',B')'; then R
is a (K+L)-by-(K+L) nonsingular upper triangular matrix, and D1 and D2
are "diagonal" matrices.
SGGSVP computes orthogonal matrices U, V and Q that transform the pair
(A,B) so that the block A23 is upper trapezoidal.  K+L = the effective
rank of the (M+P)-by-N matrix (A',B')', and Z' denotes the transpose of
Z.
SGTCON estimates the reciprocal of the condition number of a real
tridiagonal matrix A using the LU factorization as computed by SGTTRF.
SGTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is tridiagonal, and provides error bounds and
backward error estimates for the solution.
SGTSV solves the equation
     A*X = B,
where A is an N-by-N tridiagonal matrix, by Gaussian elimination with
partial pivoting.
SGTSVX uses the LU factorization to compute the solution to a real system
of linear equations A * X = B or A**T * X = B, where A is a tridiagonal
matrix of order N and X and B are N-by-NRHS matrices.
SGTTRF computes an LU factorization of a real tridiagonal matrix A using
elimination with partial pivoting and row interchanges.
SGTTRS solves one of the systems of equations
A*X = B or A'*X = B, with a tridiagonal matrix A using the LU
factorization computed by SGTTRF.
SHGEQZ implements a single-/double-shift version of the QZ method for
finding the generalized eigenvalues of a matrix pair (A,B), where B is
upper triangular and A is block upper triangular with 1x1 or 2x2 diagonal
blocks, the 2x2 blocks having complex generalized eigenvalues (see the
description of the argument JOB).
If JOB='S', then the pair (A,B) is simultaneously reduced to Schur form
using one orthogonal transformation (usually called Q) on the left and
another (usually called Z) on the right. The 2x2 upper-triangular
diagonal blocks of B corresponding to 2x2 blocks of A will be reduced to
positive diagonal matrices. (I.e., if A(j+1,j) is non-zero, then
B(j+1,j)=B(j,j+1)=0 and B(j,j) and B(j+1,j+1) will be positive.)
SHSEIN uses inverse iteration to find specified right and/or left
eigenvectors of a real upper Hessenberg matrix H.
SHSEQR computes the eigenvalues of a real upper Hessenberg matrix H and,
optionally, the matrices T and Z from the Schur decomposition H = Z T
Z**T, where T is an upper quasi-triangular matrix (the Schur form), and Z
is the orthogonal matrix of Schur vectors.
SLABAD takes as input the values computed by SLAMCH for underflow and
overflow, and returns the square root of each of these values if the log
of LARGE is sufficiently large. This subroutine is intended to identify
machines with a large exponent range, such as the Crays, and redefine the
underflow and overflow limits to be the square roots of the values
computed by SLAMCH. This subroutine is needed because SLAMCH does not
compensate for poor arithmetic in the upper half of the exponent range,
as is found on a Cray.
SLABRD reduces the first NB rows and columns of a real general m by n
matrix A to upper or lower bidiagonal form by an orthogonal
transformation Q' * A * P, and returns the matrices X and Y which are
needed to apply the transformation to the unreduced part of A.
SLACON estimates the 1-norm of a square, real matrix A. Reverse
communication is used for evaluating matrix-vector products.
SLACPY copies all or part of a two-dimensional matrix A to another matrix
B.
SLADIV performs complex division in real arithmetic, using the algorithm
found in D. Knuth, The Art of Computer Programming, Vol. 2, p. 195.
SLAE2 computes the eigenvalues of a 2-by-2 symmetric matrix
[ A B ]
[ B C ]. On return, RT1 is the eigenvalue of larger absolute
value, and RT2 is the eigenvalue of smaller absolute value.
SLAEBZ contains the iteration loops which compute and use the function
N(w), which is the count of eigenvalues of a symmetric tridiagonal matrix
T less than or equal to its argument w.  It performs a choice of two
types of loop.
SLAEIN uses inverse iteration to find a right or left eigenvector
corresponding to the eigenvalue (WR,WI) of a real upper Hessenberg matrix
H.
SLAEV2 computes the eigendecomposition of a 2-by-2 symmetric matrix
[ A B ]
[ B C ]. On return, RT1 is the eigenvalue of larger absolute
value, RT2 is the eigenvalue of smaller absolute value, and (CS1,SN1) is
the unit right eigenvector for RT1, giving the decomposition
SLAEXC swaps adjacent diagonal blocks T11 and T22 of order 1 or 2 in an
upper quasi-triangular matrix T by an orthogonal similarity
transformation.
SLAG2 computes the eigenvalues of a 2 x 2 generalized eigenvalue problem
A - w B, with scaling as necessary to avoid over-/underflow.
SLAGS2 computes 2-by-2 orthogonal matrices U, V and Q such that the
transformed pair U'*A*Q and V'*B*Q has specified zero entries; the
pattern depends on the logical argument UPPER.
SLAGTF factorizes the matrix (T - lambda*I), where T is an n by n
tridiagonal matrix and lambda is a scalar, as
     (T - lambda*I) = P*L*U,
where P is a permutation matrix, L is a unit lower tridiagonal matrix
with at most one non-zero sub-diagonal element per column, and U is an
upper triangular matrix with at most two non-zero super-diagonal elements
per column.
SLAGTM performs a matrix-vector product of the form B := alpha*A*X +
beta*B, where A is a tridiagonal matrix of order N, and B and X are N by
NRHS matrices.
SLAGTS may be used to solve one of the systems of equations
     (T - lambda*I)*x = y  or  (T - lambda*I)'*x = y,
where T is an n by n tridiagonal matrix, for x, following the
factorization of (T - lambda*I) as (T - lambda*I) = P*L*U by routine
SLAGTF.
SLAHQR is an auxiliary routine called by SHSEQR to update the eigenvalues
and Schur decomposition already computed by SHSEQR, by dealing with the
Hessenberg submatrix in rows and columns ILO to IHI.
SLAHRD reduces the first NB columns of a real general n-by-(n-k+1) matrix
A so that elements below the k-th subdiagonal are zero. The reduction is
performed by an orthogonal similarity transformation Q' * A * Q. The
routine returns the matrices V and T which determine Q as a block
reflector I - V*T*V', and also the matrix Y = A * V * T.
SLAIC1 applies one step of incremental condition estimation in its
simplest version:
Let x, twonorm(x) = 1, be an approximate singular vector of a j-by-j
lower triangular matrix L, such that twonorm( L*x ) = sest.
SLALN2 solves a system of the form (ca A - w D ) X = s B or (ca A' - w
D) X = s B with possible scaling ("s") and perturbation of A. (A'
means A-transpose.)
A is an NA x NA real matrix, ca is a real scalar, D is an NA x NA real
diagonal matrix, w is a real or complex value, and X and B are NA x 1
matrices -- real if w is real, complex if w is complex. NA may be 1 or
2.
SLAMCH determines single precision machine parameters.
SLANGB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
band matrix A, with kl sub-diagonals and ku super-diagonals.
SLANGE returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
matrix A.
SLANGT returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
tridiagonal matrix A.
SLANHS returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
Hessenberg matrix A.
SLANSB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
symmetric band matrix A, with k super-diagonals.
SLANSP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
symmetric matrix A, supplied in packed form.
SLANST returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
symmetric tridiagonal matrix A.
SLANSY returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a real
symmetric matrix A.
SLANTB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
triangular band matrix A, with ( k + 1 ) diagonals.
SLANTP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
triangular matrix A, supplied in packed form.
SLANTR returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
trapezoidal or triangular matrix A.
SLANV2 computes the Schur factorization of a real 2-by-2 nonsymmetric
matrix in standard form:
     [ A  B ] = [ CS -SN ] [ AA  BB ] [ CS  SN ]
     [ C  D ]   [ SN  CS ] [ CC  DD ] [-SN  CS ]
SLAPLL measures the linear dependence of two vectors.  Given two column
vectors X and Y, let A = ( X Y ).
The subroutine first computes the QR factorization of A = Q*R, and then
computes the SVD of the 2-by-2 upper triangular matrix R. The smaller
singular value of R is returned in SSMIN, which is used as the
measurement of the linear dependency of the vectors X and Y.
SLAPMT rearranges the columns of the M by N matrix X as specified by the
permutation K(1),K(2),...,K(N) of the integers 1,...,N.  If FORWRD =
.TRUE., the forward permutation is applied; otherwise the backward
permutation.
SLAPY2 returns sqrt(x**2+y**2), taking care not to cause unnecessary
overflow.
SLAPY3 returns sqrt(x**2+y**2+z**2), taking care not to cause unnecessary
overflow.
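The overflow-safe behaviour SLAPY2 provides can be demonstrated with NumPy's hypot, which uses the same scaling idea (illustration only, with deliberately extreme values):

```python
import numpy as np

x = np.float64(1e200)
y = np.float64(1e200)

# The textbook formula overflows: x*x is larger than the largest float64.
naive = np.sqrt(x * x + y * y)      # -> inf (overflow)

# hypot scales internally, like SLAPY2, and stays finite.
safe = np.hypot(x, y)               # ~ sqrt(2) * 1e200
```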
SLAQGB equilibrates a general M by N band matrix A with KL subdiagonals
and KU superdiagonals using the row and column scaling factors in the
vectors R and C.
SLAQGE equilibrates a general M by N matrix A using the row and column
scaling factors in the vectors R and C.
SLAQSB equilibrates a symmetric band matrix A using the scaling factors
in the vector S.
SLAQSP equilibrates a symmetric matrix A using the scaling factors in the
vector S.
SLAQSY equilibrates a symmetric matrix A using the scaling factors in the
vector S.
SLAQTR solves the real quasi-triangular system op(T)*p = scale*c, if
LREAL = .TRUE., or the complex quasi-triangular system
op(T + iB)*(p+iq) = scale*(c+id), if LREAL = .FALSE.
SLAR2V applies a vector of real plane rotations from both sides to a
sequence of 2-by-2 real symmetric matrices, defined by the elements of
the vectors x, y and z. For i = 1,2,...,n
     ( x(i)  z(i) ) := ( c(i)  s(i) ) ( x(i)  z(i) ) ( c(i) -s(i) )
     ( z(i)  y(i) )    (-s(i)  c(i) ) ( z(i)  y(i) ) ( s(i)  c(i) )
SLARF applies a real elementary reflector H to a real m by n matrix C,
from either the left or the right. H is represented in the form
H = I - tau * v * v'
SLARFB applies a real block reflector H or its transpose H' to a real m
by n matrix C, from either the left or the right.
SLARFG generates a real elementary reflector H of order n, such that
     H * ( alpha ) = ( beta ),   H' * H = I.
         (   x   )   (  0   )
SLARFT forms the triangular factor T of a real block reflector H of order
n, which is defined as a product of k elementary reflectors.
SLARFX applies a real elementary reflector H to a real m by n matrix C,
from either the left or the right. H is represented in the form
SLARGV generates a vector of real plane rotations, determined by elements
of the real vectors x and y. For i = 1,2,...,n
     ( c(i)  s(i) ) ( x(i) ) = ( a(i) )
     (-s(i)  c(i) ) ( y(i) )   (  0   )
SLARNV returns a vector of n random real numbers from a uniform or normal
distribution.
SLARTG generates a plane rotation so that
     [  CS  SN ] . [ F ] = [ R ],   where CS**2 + SN**2 = 1.
     [ -SN  CS ]   [ G ]   [ 0 ]
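A plane rotation of this kind is easy to construct directly; this sketch shows the basic formula SLARTG implements (minus its careful edge-case and scaling handling), on hypothetical inputs:

```python
import numpy as np

def plane_rotation(f, g):
    """Construct cs, sn with [cs sn; -sn cs] @ [f; g] = [r; 0],
    the rotation SLARTG generates (without its edge-case handling)."""
    r = np.hypot(f, g)
    if r == 0.0:
        return 1.0, 0.0, 0.0
    return f / r, g / r, r

cs, sn, r = plane_rotation(3.0, 4.0)
assert np.isclose(cs * 3.0 + sn * 4.0, r)       # first component -> r
assert np.isclose(-sn * 3.0 + cs * 4.0, 0.0)    # second component zeroed
assert np.isclose(cs**2 + sn**2, 1.0)
```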
SLARTV applies a vector of real plane rotations to elements of the real
vectors x and y. For i = 1,2,...,n
     ( x(i) ) := ( c(i)  s(i) ) ( x(i) )
     ( y(i) )    (-s(i)  c(i) ) ( y(i) )
SLARUV returns a vector of n random real numbers from a uniform (0,1)
distribution (n <= 128).
SLAS2 computes the singular values of the 2-by-2 matrix
[ F G ]
[ 0 H ]. On return, SSMIN is the smaller singular value and SSMAX
is the larger singular value.
SLASCL multiplies the M by N real matrix A by the real scalar CTO/CFROM.
This is done without over/underflow as long as the final result
CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that A may be
full, upper triangular, lower triangular, upper Hessenberg, or banded.
SLASET initializes an m-by-n matrix A to BETA on the diagonal and ALPHA
on the offdiagonals.
SLASR performs the transformation consisting of a sequence of plane
rotations determined by the parameters PIVOT and DIRECT as follows ( z =
m when SIDE = 'L' or 'l' and z = n when SIDE = 'R' or 'r' ):
SLASSQ returns the values scl and smsq such that
     ( scl**2 )*smsq = x( 1 )**2 + ... + x( n )**2 + ( scale**2 )*sumsq,
where x( i ) = X( 1 + ( i - 1 )*INCX ).  The value of sumsq is assumed
to be non-negative and scl returns the value
     scl = max( scale, abs( x( i ) ) ).
SLASV2 computes the singular value decomposition of a 2-by-2 triangular
matrix
[ F G ]
[ 0 H ]. On return, abs(SSMAX) is the larger singular value,
abs(SSMIN) is the smaller singular value, and (CSL,SNL) and (CSR,SNR) are
the left and right singular vectors for abs(SSMAX), giving the
decomposition
[ CSL SNL ] [ F G ] [ CSR -SNR ] = [ SSMAX 0 ]
[-SNL CSL ] [ 0 H ] [ SNR CSR ] [ 0 SSMIN ].
SLASWP performs a series of row interchanges on the matrix A. One row
interchange is initiated for each of rows K1 through K2 of A.
SLASY2 solves for the N1 by N2 matrix X, 1 <= N1,N2 <= 2, in
     op(TL)*X + ISGN*X*op(TR) = SCALE*B,
where TL is N1 by N1, TR is N2 by N2, B is N1 by N2, and ISGN = 1 or -1.
op(T) = T or T', where T' denotes the transpose of T.
SLASYF computes a partial factorization of a real symmetric matrix A
using the Bunch-Kaufman diagonal pivoting method. The partial
factorization has the form:
SLATBS solves one of the triangular systems A*x = s*b or A'*x = s*b with
scaling to prevent overflow, where A is an upper or lower triangular band
matrix, A' denotes the transpose of A, x and b are n-element vectors, and
s is a scaling factor, usually less than or equal to 1, chosen so that
the components of x will be less than the overflow threshold.  If the
unscaled problem will not cause overflow, the Level 2 BLAS routine STBSV
is called.  If the matrix A is singular (A(j,j) = 0 for some j), then s
is set to 0 and a non-trivial solution to A*x = 0 is returned.
SLATPS solves one of the triangular systems A*x = s*b or A'*x = s*b with
scaling to prevent overflow, where A is an upper or lower triangular
matrix stored in packed form, A' denotes the transpose of A, x and b are
n-element vectors, and s is a scaling factor, usually less than or equal
to 1, chosen so that the components of x will be less than the overflow
threshold.  If the unscaled problem will not cause overflow, the Level 2
BLAS routine STPSV is called.  If the matrix A is singular (A(j,j) = 0
for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is
returned.
SLATRD reduces NB rows and columns of a real symmetric matrix A to
symmetric tridiagonal form by an orthogonal similarity transformation Q'
* A * Q, and returns the matrices V and W which are needed to apply the
transformation to the unreduced part of A.
SLATRS solves one of the triangular systems A*x = s*b or A'*x = s*b with
scaling to prevent overflow, where A is an upper or lower triangular
matrix, A' denotes the transpose of A, x and b are n-element vectors, and
s is a scaling factor, usually less than or equal to 1, chosen so that
the components of x will be less than the overflow threshold.  If the
unscaled problem will not cause overflow, the Level 2 BLAS routine STRSV
is called.  If the matrix A is singular (A(j,j) = 0 for some j), then s
is set to 0 and a non-trivial solution to A*x = 0 is returned.
SLATZM applies a Householder matrix generated by STZRQF to a matrix.
SLAUU2 computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A.
SLAUUM computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A.
SLAZRO initializes a 2-D array A to BETA on the diagonal and ALPHA on the
offdiagonals.
SOPGTR generates a real orthogonal matrix Q which is defined as the
product of n-1 elementary reflectors of order n, as returned by SSPTRD
using packed storage:
     if UPLO = 'U', Q = H(n-1) . . . H(2) H(1),
     if UPLO = 'L', Q = H(1) H(2) . . . H(n-1).
SOPMTR overwrites the general real M-by-N matrix C with
                     SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':      Q * C          C * Q
     TRANS = 'T':      Q**T * C       C * Q**T
SORG2L generates an m by n real matrix Q with orthonormal columns, which
is defined as the last n columns of a product of k elementary reflectors
of order m
SORG2R generates an m by n real matrix Q with orthonormal columns, which
is defined as the first n columns of a product of k elementary reflectors
of order m
SORGBR generates one of the matrices Q or P**T determined by SGEBRD when
reducing a real matrix A to bidiagonal form: A = Q * B * P**T. Q and
P**T are defined as products of elementary reflectors H(i) or G(i)
respectively.
SORGHR generates a real orthogonal matrix Q which is defined as the
product of IHI-ILO elementary reflectors of order N, as returned by
SGEHRD:
Q = H(ilo) H(ilo+1) . . . H(ihi-1).
SORGL2 generates an m by n real matrix Q with orthonormal rows, which is
defined as the first m rows of a product of k elementary reflectors of
order n
SORGLQ generates an M-by-N real matrix Q with orthonormal rows, which is
defined as the first M rows of a product of K elementary reflectors of
order N
SORGQL generates an M-by-N real matrix Q with orthonormal columns, which
is defined as the last N columns of a product of K elementary reflectors
of order M
SORGQR generates an M-by-N real matrix Q with orthonormal columns, which
is defined as the first N columns of a product of K elementary reflectors
of order M
SORGR2 generates an m by n real matrix Q with orthonormal rows, which is
defined as the last m rows of a product of k elementary reflectors of
order n
SORGRQ generates an M-by-N real matrix Q with orthonormal rows, which is
defined as the last M rows of a product of K elementary reflectors of
order N
SORGTR generates a real orthogonal matrix Q which is defined as the
product of n-1 elementary reflectors of order N, as returned by SSYTRD:
     if UPLO = 'U', Q = H(n-1) . . . H(2) H(1),
     if UPLO = 'L', Q = H(1) H(2) . . . H(n-1).
SORM2L overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q', depending on SIDE and TRANS, where Q is a real orthogonal matrix
defined as the product of k elementary reflectors.
SORM2R overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q', depending on SIDE and TRANS, where Q is a real orthogonal matrix
defined as the product of k elementary reflectors.
If VECT = 'Q', SORMBR overwrites the general real M-by-N matrix C with
                     SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':      Q * C          C * Q
     TRANS = 'T':      Q**T * C       C * Q**T
SORMHR overwrites the general real M-by-N matrix C with
                     SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':      Q * C          C * Q
     TRANS = 'T':      Q**T * C       C * Q**T
SORML2 overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q', depending on SIDE and TRANS, where Q is a real orthogonal matrix
defined as the product of k elementary reflectors.
SORMLQ overwrites the general real M-by-N matrix C with
                     SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':      Q * C          C * Q
     TRANS = 'T':      Q**T * C       C * Q**T
SORMQL overwrites the general real M-by-N matrix C with
                     SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':      Q * C          C * Q
     TRANS = 'T':      Q**T * C       C * Q**T
SORMQR overwrites the general real M-by-N matrix C with
                     SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':      Q * C          C * Q
     TRANS = 'T':      Q**T * C       C * Q**T
SORMR2 overwrites the general real m by n matrix C with Q*C, Q'*C, C*Q,
or C*Q' (depending on SIDE and TRANS), where Q is a real orthogonal
matrix defined as the product of k elementary reflectors
SORMRQ overwrites the general real M-by-N matrix C with Q*C, Q**T*C,
C*Q, or C*Q**T (depending on SIDE and TRANS).
SORMTR overwrites the general real M-by-N matrix C with Q*C, Q**T*C,
C*Q, or C*Q**T (depending on SIDE and TRANS).
SPBCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric positive definite band matrix using the Cholesky
factorization A = U**T*U or A = L*L**T computed by SPBTRF.
SPBEQU computes row and column scalings intended to equilibrate a
symmetric positive definite band matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
SPBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite and banded,
and provides error bounds and backward error estimates for the solution.
SPBSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite band
matrix and X and B are N-by-NRHS matrices.
SPBSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to
compute the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite band
matrix and X and B are N-by-NRHS matrices.
SPBTF2 computes the Cholesky factorization of a real symmetric positive
definite band matrix A.
SPBTRF computes the Cholesky factorization of a real symmetric positive
definite band matrix A.
SPBTRS solves a system of linear equations A*X = B with a symmetric
positive definite band matrix A using the Cholesky factorization A =
U**T*U or A = L*L**T computed by SPBTRF.
SPOCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric positive definite matrix using the Cholesky
factorization A = U**T*U or A = L*L**T computed by SPOTRF.
SPOEQU computes row and column scalings intended to equilibrate a
symmetric positive definite matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
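The scaling rule SPOEQU (and its banded and packed variants SPBEQU and
SPPEQU) describes can be sketched outside the Fortran interface; the
following NumPy snippet uses a made-up positive definite matrix to show
the unit diagonal and the reduced condition number:

```python
import numpy as np

# Made-up SPD matrix with a badly scaled diagonal (illustrative only).
A = np.array([[1e6, 1e2, 0.0],
              [1e2, 1.0, 1e-3],
              [0.0, 1e-3, 1e-4]])

# SPOEQU's choice of scale factors: S(i) = 1/sqrt(A(i,i)).
s = 1.0 / np.sqrt(np.diag(A))

# Scaled matrix B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal.
B = s[:, None] * A * s[None, :]

assert np.allclose(np.diag(B), 1.0)
assert np.linalg.cond(B) < np.linalg.cond(A)
```

The scaled matrix here is well conditioned even though A itself is not,
which is exactly the situation equilibration is meant to repair.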
SPORFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite, and provides
error bounds and backward error estimates for the solution.
SPOSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix and
X and B are N-by-NRHS matrices.
SPOSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to
compute the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix and
X and B are N-by-NRHS matrices.
SPOTF2 computes the Cholesky factorization of a real symmetric positive
definite matrix A.
SPOTRF computes the Cholesky factorization of a real symmetric positive
definite matrix A.
SPOTRI computes the inverse of a real symmetric positive definite matrix
A using the Cholesky factorization A = U**T*U or A = L*L**T computed by
SPOTRF.
SPOTRS solves a system of linear equations A*X = B with a symmetric
positive definite matrix A using the Cholesky factorization A = U**T*U or
A = L*L**T computed by SPOTRF.
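The SPOTRF/SPOTRS pair of steps can be sketched with NumPy's dense
Cholesky routine; the matrix and right-hand side below are made up, and
NumPy is standing in for the Fortran interface:

```python
import numpy as np

# Made-up symmetric positive definite system.
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([2.0, 4.0, 1.0])

L = np.linalg.cholesky(A)      # SPOTRF step: A = L * L**T, L lower triangular
y = np.linalg.solve(L, b)      # SPOTRS step 1: forward-substitute L*y = b
x = np.linalg.solve(L.T, y)    # SPOTRS step 2: back-substitute L**T*x = y

assert np.allclose(L @ L.T, A)
assert np.allclose(A @ x, b)
```

Factoring once and then solving in two triangular sweeps is what makes
the factor/solve split (SPOTRF then SPOTRS) cheap when many right-hand
sides share the same A.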
SPPCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric positive definite packed matrix using the Cholesky
factorization A = U**T*U or A = L*L**T computed by SPPTRF.
SPPEQU computes row and column scalings intended to equilibrate a
symmetric positive definite matrix A in packed storage and reduce its
condition number (with respect to the two-norm). S contains the scale
factors, S(i)=1/sqrt(A(i,i)), chosen so that the scaled matrix B with
elements B(i,j)=S(i)*A(i,j)*S(j) has ones on the diagonal. This choice
of S puts the condition number of B within a factor N of the smallest
possible condition number over all possible diagonal scalings.
SPPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite and packed,
and provides error bounds and backward error estimates for the solution.
SPPSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
SPPSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to
compute the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
SPPTRF computes the Cholesky factorization of a real symmetric positive
definite matrix A stored in packed format.
SPPTRI computes the inverse of a real symmetric positive definite matrix
A using the Cholesky factorization A = U**T*U or A = L*L**T computed by
SPPTRF.
SPPTRS solves a system of linear equations A*X = B with a symmetric
positive definite matrix A in packed storage using the Cholesky
factorization A = U**T*U or A = L*L**T computed by SPPTRF.
SPTCON computes the reciprocal of the condition number (in the 1-norm) of
a real symmetric positive definite tridiagonal matrix using the
factorization A = L*D*L**T or A = U**T*D*U computed by SPTTRF.
SPTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric positive definite tridiagonal matrix by first factoring the
matrix using SPTTRF, and then calling SBDSQR to compute the singular
values of the bidiagonal factor.
SPTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric positive definite and
tridiagonal, and provides error bounds and backward error estimates for
the solution.
SPTSV computes the solution to a real system of linear equations A*X = B,
where A is an N-by-N symmetric positive definite tridiagonal matrix, and
X and B are N-by-NRHS matrices.
SPTSVX uses the factorization A = L*D*L**T to compute the solution to a
real system of linear equations A*X = B, where A is an N-by-N symmetric
positive definite tridiagonal matrix and X and B are N-by-NRHS matrices.
SPTTRF computes the factorization of a real symmetric positive definite
tridiagonal matrix A.
SPTTRS solves a system of linear equations A * X = B with a symmetric
positive definite tridiagonal matrix A using the factorization A =
L*D*L**T or A = U**T*D*U computed by SPTTRF. (The two forms are
equivalent if A is real.)
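The A = L*D*L**T factorization that SPTTRF computes for a symmetric
positive definite tridiagonal matrix reduces to two short recurrences;
the sketch below writes them out in NumPy on a made-up matrix (this is
an illustration of the mathematics, not SGI's implementation):

```python
import numpy as np

d = np.array([4.0, 4.0, 4.0, 4.0])   # diagonal of A (made-up example)
e = np.array([1.0, 1.0, 1.0])        # subdiagonal of A

n = len(d)
D = np.empty(n)                      # diagonal of D
l = np.empty(n - 1)                  # subdiagonal of unit lower bidiagonal L
D[0] = d[0]
for i in range(n - 1):
    l[i] = e[i] / D[i]               # multiplier for row i+1
    D[i + 1] = d[i + 1] - l[i] * e[i]

# Rebuild A from the factors to check A = L * diag(D) * L**T.
L = np.eye(n) + np.diag(l, -1)
A = np.diag(d) + np.diag(e, -1) + np.diag(e, 1)
assert np.allclose(L @ np.diag(D) @ L.T, A)
```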
SRSCL multiplies an n-element real vector x by the real scalar 1/a. This
is done without overflow or underflow as long as the final result x/a
does not overflow or underflow.
SSBEV computes all the eigenvalues and, optionally, eigenvectors of a
real symmetric band matrix A.
SSBEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric band matrix A. Eigenvalues/vectors can be selected by
specifying either a range of values or a range of indices for the desired
eigenvalues.
SSBTRD reduces a real symmetric band matrix A to symmetric tridiagonal
form T by an orthogonal similarity transformation: Q**T * A * Q = T.
SSPCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric packed matrix A using the factorization A = U*D*U**T
or A = L*D*L**T computed by SSPTRF.
SSPEV computes all the eigenvalues and, optionally, eigenvectors of a
real symmetric matrix A in packed storage.
SSPEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric matrix A in packed storage. Eigenvalues/vectors can be
selected by specifying either a range of values or a range of indices for
the desired eigenvalues.
SSPGST reduces a real symmetric-definite generalized eigenproblem to
standard form, using packed storage.
SSPGV computes all the eigenvalues and, optionally, the eigenvectors of a
real generalized symmetric-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B
are assumed to be symmetric, stored in packed format, and B is also
positive definite.
SSPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite and packed, and
provides error bounds and backward error estimates for the solution.
SSPSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric matrix stored in packed
format and X and B are N-by-NRHS matrices.
SSPSVX uses the diagonal pivoting factorization A = U*D*U**T or A =
L*D*L**T to compute the solution to a real system of linear equations A *
X = B, where A is an N-by-N symmetric matrix stored in packed format and
X and B are N-by-NRHS matrices.
SSPTRD reduces a real symmetric matrix A stored in packed form to
symmetric tridiagonal form T by an orthogonal similarity transformation:
Q**T * A * Q = T.
SSPTRF computes the factorization of a real symmetric matrix A stored in
packed format using the Bunch-Kaufman diagonal pivoting method:
A = U*D*U**T or A = L*D*L**T
SSPTRI computes the inverse of a real symmetric indefinite matrix A in
packed storage using the factorization A = U*D*U**T or A = L*D*L**T
computed by SSPTRF.
SSPTRS solves a system of linear equations A*X = B with a real symmetric
matrix A stored in packed format using the factorization A = U*D*U**T or
A = L*D*L**T computed by SSPTRF.
SSTEBZ computes the eigenvalues of a symmetric tridiagonal matrix T. The
user may ask for all eigenvalues, all eigenvalues in the half-open
interval (VL, VU], or the IL-th through IU-th eigenvalues.
SSTEIN computes the eigenvectors of a real symmetric tridiagonal matrix T
corresponding to specified eigenvalues, using inverse iteration.
SSTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric tridiagonal matrix using the implicit QL or QR method. The
eigenvectors of a full or band symmetric matrix can also be found if
SSYTRD or SSPTRD or SSBTRD has been used to reduce this matrix to
tridiagonal form.
SSTERF computes all eigenvalues of a symmetric tridiagonal matrix using
the Pal-Walker-Kahan variant of the QL or QR algorithm.
SSTEV computes all eigenvalues and, optionally, eigenvectors of a real
symmetric tridiagonal matrix A.
SSTEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric tridiagonal matrix A. Eigenvalues/vectors can be selected
by specifying either a range of values or a range of indices for the
desired eigenvalues.
SSYCON estimates the reciprocal of the condition number (in the 1-norm)
of a real symmetric matrix A using the factorization A = U*D*U**T or A =
L*D*L**T computed by SSYTRF.
SSYEV computes all eigenvalues and, optionally, eigenvectors of a real
symmetric matrix A.
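What SSYEV computes can be illustrated with NumPy's symmetric
eigensolver; the matrix is a made-up example, and NumPy's `eigh` is a
stand-in for the Fortran routine:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Eigenvalues in ascending order; columns of V are the eigenvectors.
w, V = np.linalg.eigh(A)

# Each column satisfies A*v = lambda*v, and V is orthogonal.
assert np.allclose(A @ V, V * w)
assert np.allclose(V.T @ V, np.eye(3))
```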
SSYEVX computes selected eigenvalues and, optionally, eigenvectors of a
real symmetric matrix A. Eigenvalues and eigenvectors can be selected by
specifying either a range of values or a range of indices for the desired
eigenvalues.
SSYGS2 reduces a real symmetric-definite generalized eigenproblem to
standard form.
SSYGST reduces a real symmetric-definite generalized eigenproblem to
standard form.
SSYGV computes all the eigenvalues, and optionally, the eigenvectors of a
real generalized symmetric-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B
are assumed to be symmetric and B is also positive definite.
SSYRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite, and provides error
bounds and backward error estimates for the solution.
SSYSV computes the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric matrix and X and B are N-
by-NRHS matrices.
SSYSVX uses the diagonal pivoting factorization to compute the solution
to a real system of linear equations A * X = B, where A is an N-by-N
symmetric matrix and X and B are N-by-NRHS matrices.
SSYTD2 reduces a real symmetric matrix A to symmetric tridiagonal form T
by an orthogonal similarity transformation: Q' * A * Q = T.
SSYTF2 computes the factorization of a real symmetric matrix A using the
Bunch-Kaufman diagonal pivoting method:
A = U*D*U' or A = L*D*L'
SSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal
form T by an orthogonal similarity transformation: Q**T * A * Q = T.
SSYTRF computes the factorization of a real symmetric matrix A using the
Bunch-Kaufman diagonal pivoting method. The form of the factorization is
A = U*D*U**T or A = L*D*L**T.
SSYTRI computes the inverse of a real symmetric indefinite matrix A using
the factorization A = U*D*U**T or A = L*D*L**T computed by SSYTRF.
SSYTRS solves a system of linear equations A*X = B with a real symmetric
matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by
SSYTRF.
STBCON estimates the reciprocal of the condition number of a triangular
band matrix A, in either the 1-norm or the infinity-norm.
STBRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular band
coefficient matrix.
STBTRS solves a triangular system of the form A * X = B or
A**T * X = B, where A is a triangular band matrix of order N, and B is
an N-by-NRHS matrix. A check is made to verify that A is nonsingular.
STGEVC computes selected left and/or right generalized eigenvectors of a
pair of real upper triangular matrices (A,B). The j-th generalized left
and right eigenvectors are y and x, resp., such that:
STGSJA computes the generalized singular value decomposition (GSVD) of
two real upper triangular (or trapezoidal) matrices A and B.
STPCON estimates the reciprocal of the condition number of a packed
triangular matrix A, in either the 1-norm or the infinity-norm.
STPRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular packed
coefficient matrix.
STPTRI computes the inverse of a real upper or lower triangular matrix A
stored in packed format.
STPTRS solves a triangular system of the form A * X = B or
A**T * X = B, where A is a triangular matrix of order N stored in packed
format, and B is an N-by-NRHS matrix. A check is made to verify that A
is nonsingular.
STRCON estimates the reciprocal of the condition number of a triangular
matrix A, in either the 1-norm or the infinity-norm.
STREVC computes all or some right and/or left eigenvectors of a real
upper quasi-triangular matrix T.
STREXC reorders the real Schur factorization of a real matrix A =
Q*T*Q**T, so that the diagonal block of T with row index IFST is moved to
row ILST.
STRRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular coefficient
matrix.
STRSEN reorders the real Schur factorization of a real matrix A =
Q*T*Q**T, so that a selected cluster of eigenvalues appears in the
leading diagonal blocks of the upper quasi-triangular matrix T, and the
leading columns of Q form an orthonormal basis of the corresponding right
invariant subspace.
STRSNA estimates reciprocal condition numbers for specified eigenvalues
and/or right eigenvectors of a real upper quasi-triangular matrix T (or
of any matrix Q*T*Q**T with Q orthogonal).
STRSYL solves the real Sylvester matrix equation:
op(A)*X + X*op(B) = scale*C or op(A)*X - X*op(B) = scale*C,
where op(A) = A or A**T.
STRTI2 computes the inverse of a real upper or lower triangular matrix.
STRTRI computes the inverse of a real upper or lower triangular matrix A.
STRTRS solves a triangular system of the form A * X = B or
A**T * X = B, where A is a triangular matrix of order N, and B is an
N-by-NRHS matrix. A check is made to verify that A is nonsingular.
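The back-substitution behind a triangular solve of this kind, including
the nonsingularity check on the diagonal, can be written out in a few
lines; the NumPy sketch below handles the upper triangular case only and
is illustrative rather than SGI's code:

```python
import numpy as np

def upper_tri_solve(A, b):
    """Solve A*x = b for upper triangular A by back-substitution."""
    n = len(b)
    if np.any(np.diag(A) == 0.0):
        # Analogous to the routine's singularity check (INFO > 0).
        raise ZeroDivisionError("matrix is singular")
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [0.0, 0.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = upper_tri_solve(A, b)
assert np.allclose(A @ x, b)
```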
STZRQF reduces the M-by-N ( M<=N ) real upper trapezoidal matrix A to
upper triangular form by means of orthogonal transformations.
XERBLA is an error handler for the LAPACK routines. It is called by an
LAPACK routine if an input parameter has an invalid value. A message is
printed and execution stops.
DBDSQR computes the singular value decomposition (SVD) of a real N-by-N
(upper or lower) bidiagonal matrix B: B = Q * S * P' (P' denotes the
transpose of P), where S is a diagonal matrix with non-negative diagonal
elements (the singular values of B), and Q and P are orthogonal matrices.
ZDRSCL multiplies an n-element complex vector x by the real scalar 1/a.
This is done without overflow or underflow as long as the final result
x/a does not overflow or underflow.
ZGBCON estimates the reciprocal of the condition number of a complex
general band matrix A, in either the 1-norm or the infinity-norm, using
the LU factorization computed by ZGBTRF.
ZGBEQU computes row and column scalings intended to equilibrate an M by N
band matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
element in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
ZGBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is banded, and provides error bounds and
backward error estimates for the solution.
ZGBSV computes the solution to a complex system of linear equations A * X
= B, where A is a band matrix of order N with KL subdiagonals and KU
superdiagonals, and X and B are N-by-NRHS matrices.
ZGBSVX uses the LU factorization to compute the solution to a complex
system of linear equations A * X = B, A**T * X = B, or A**H * X = B,
where A is a band matrix of order N with KL subdiagonals and KU
superdiagonals, and X and B are N-by-NRHS matrices.
ZGBTF2 computes an LU factorization of a complex m-by-n band matrix A
using partial pivoting with row interchanges.
ZGBTRF computes an LU factorization of a complex m-by-n band matrix A
using partial pivoting with row interchanges.
ZGBTRS solves a system of linear equations
A * X = B, A**T * X = B, or A**H * X = B with a general band matrix
A using the LU factorization computed by ZGBTRF.
ZGEBAK forms the right or left eigenvectors of a complex general matrix
by backward transformation on the computed eigenvectors of the balanced
matrix output by ZGEBAL.
ZGEBAL balances a general complex matrix A. This involves, first,
permuting A by a similarity transformation to isolate eigenvalues in the
first 1 to ILO-1 and last IHI+1 to N elements on the diagonal; and
second, applying a diagonal similarity transformation to rows and columns
ILO to IHI to make the rows and columns as close in norm as possible.
Both steps are optional.
ZGEBD2 reduces a complex general m by n matrix A to upper or lower real
bidiagonal form B by a unitary transformation: Q' * A * P = B.
ZGEBRD reduces a general complex M-by-N matrix A to upper or lower
bidiagonal form B by a unitary transformation: Q**H * A * P = B.
ZGECON estimates the reciprocal of the condition number of a general
complex matrix A, in either the 1-norm or the infinity-norm, using the LU
factorization computed by ZGETRF.
ZGEEQU computes row and column scalings intended to equilibrate an M by N
matrix A and reduce its condition number. R returns the row scale
factors and C the column scale factors, chosen to try to make the largest
entry in each row and column of the matrix B with elements
B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.
ZGEES computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues, the Schur form T, and, optionally, the matrix of Schur
vectors Z. This gives the Schur factorization A = Z*T*(Z**H).
ZGEESX computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues, the Schur form T, and, optionally, the matrix of Schur
vectors Z. This gives the Schur factorization A = Z*T*(Z**H).
ZGEEV computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues and, optionally, the left and/or right eigenvectors.
ZGEEVX computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues and, optionally, the left and/or right eigenvectors.
ZGEGS computes, for a pair of N-by-N complex nonsymmetric matrices A, B,
the generalized eigenvalues (alpha, beta), the complex Schur form (A, B),
and, optionally, the left and/or right Schur vectors.
ZGEGV computes, for a pair of N-by-N complex nonsymmetric matrices A, B,
the generalized eigenvalues (alpha, beta), and, optionally, the left
and/or right generalized eigenvectors.
ZGEHD2 reduces a complex general matrix A to upper Hessenberg form H by a
unitary similarity transformation: Q' * A * Q = H .
ZGEHRD reduces a complex general matrix A to upper Hessenberg form H by a
unitary similarity transformation: Q' * A * Q = H .
ZGELQ2 computes an LQ factorization of a complex m by n matrix A: A = L
* Q.
ZGELQF computes an LQ factorization of a complex M-by-N matrix A: A = L
* Q.
ZGELS solves overdetermined or underdetermined complex linear systems
involving an M-by-N matrix A, or its conjugate-transpose, using a QR or
LQ factorization of A. It is assumed that A has full rank.
ZGELSS computes the minimum norm solution to a complex linear least
squares problem:
Minimize 2-norm(| b - A*x |).
ZGELSX computes the minimum-norm solution to a complex linear least
squares problem:
minimize || A * X - B ||
ZGEQL2 computes a QL factorization of a complex m by n matrix A: A = Q *
L.
ZGEQLF computes a QL factorization of a complex M-by-N matrix A: A = Q *
L.
ZGEQPF computes a QR factorization with column pivoting of a complex M-
by-N matrix A: A*P = Q*R.
ZGEQR2 computes a QR factorization of a complex m by n matrix A: A = Q *
R.
ZGEQRF computes a QR factorization of a complex M-by-N matrix A: A = Q *
R.
ZGERFS improves the computed solution to a system of linear equations and
provides error bounds and backward error estimates for the solution.
ZGERQ2 computes an RQ factorization of a complex m by n matrix A: A = R
* Q.
ZGERQF computes an RQ factorization of a complex M-by-N matrix A: A = R
* Q.
ZGESV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
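The complex solve ZGESV performs can be illustrated with NumPy's
LU-based solver on a small made-up system (NumPy here is a stand-in for
the Fortran interface):

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [2 + 0j, 1 + 1j]])
b = np.array([1 + 0j, 2 - 1j])

# Internally an LU factorization with partial pivoting, as in ZGESV.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```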
ZGESVD computes the singular value decomposition (SVD) of a complex M-
by-N matrix A, optionally computing the left and/or right singular
vectors. The SVD is written
A = U * SIGMA * conjugate-transpose(V)
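The decomposition ZGESVD produces can be sketched with NumPy's SVD on a
small made-up complex matrix; the shapes and ordering below mirror the
A = U * SIGMA * conjugate-transpose(V) form:

```python
import numpy as np

A = np.array([[1 + 1j, 2 - 1j, 0 + 0j],
              [0 + 2j, 1 + 0j, 1 - 1j]])

# Full SVD: U is 2x2, Vh is 3x3, sigma holds the two singular values,
# which are real, non-negative, and returned in descending order.
U, sigma, Vh = np.linalg.svd(A)

assert np.allclose(U @ np.diag(sigma) @ Vh[:len(sigma)], A)
assert np.all(sigma[:-1] >= sigma[1:]) and np.all(sigma >= 0)
```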
ZGESVX uses the LU factorization to compute the solution to a complex
system of linear equations
A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS
matrices.
ZGETF2 computes an LU factorization of a general m-by-n matrix A using
partial pivoting with row interchanges.
ZGETRF computes an LU factorization of a general M-by-N matrix A using
partial pivoting with row interchanges.
ZGETRI computes the inverse of a matrix using the LU factorization
computed by ZGETRF.
ZGETRS solves a system of linear equations
A * X = B, A**T * X = B, or A**H * X = B with a general N-by-N
matrix A using the LU factorization computed by ZGETRF.
ZGGBAK forms the right or left eigenvectors of the generalized eigenvalue
problem by backward transformation on the computed eigenvectors of the
balanced matrix output by ZGGBAL.
ZGGBAL balances a pair of general complex matrices (A,B) for the
generalized eigenvalue problem A*X = lambda*B*X. This involves, first,
permuting A and B by similarity transformations to isolate eigenvalues in
the first 1 to ILO-1 and last IHI+1 to N elements on the diagonal; and
second, applying a diagonal similarity
ZGGGLM solves a generalized linear regression model (GLM) problem:
minimize y'*y subject to d = A*x + B*y
ZGGHRD reduces a pair of complex matrices (A,B) to generalized upper
Hessenberg form using unitary similarity transformations, where A is a
(generally non-symmetric) square matrix and B is upper triangular. More
precisely, ZGGHRD simultaneously decomposes A into Q H Z* and B into
Q T Z* , where H is upper Hessenberg, T is upper triangular, Q and Z are
unitary, and * means conjugate transpose.
ZGGLSE solves the linear equality constrained least squares (LSE)
problem:
minimize || A*x - c ||_2 subject to B*x = d
ZGGQRF computes a generalized QR factorization of an N-by-M matrix A and
an N-by-P matrix B:
A = Q*R, B = Q*T*Z,
ZGGRQF computes a generalized RQ factorization of an M-by-N matrix A and
a P-by-N matrix B:
A = R*Q, B = Z*T*Q,
ZGGSVD computes the generalized singular value decomposition (GSVD) of
the M-by-N complex matrix A and P-by-N complex matrix B:
U'*A*Q = D1*( 0 R ), V'*B*Q = D2*( 0 R ) (1)
where U, V and Q are unitary matrices, R is an upper triangular matrix,
and Z' means the conjugate transpose of Z. If K+L is the numerical
effective rank of the matrix (A',B')', then D1 and D2 are M-by-(K+L) and
P-by-(K+L) "diagonal" matrices.
ZGGSVP computes unitary matrices U, V and Q such that A23 is upper
trapezoidal. K+L = the effective rank of the (M+P)-by-N matrix (A',B')'.
Z' denotes the conjugate transpose of Z.
ZGTCON estimates the reciprocal of the condition number of a complex
tridiagonal matrix A using the LU factorization as computed by ZGTTRF.
ZGTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is tridiagonal, and provides error bounds and
backward error estimates for the solution.
ZGTSV solves the equation A * X = B, where A is an N-by-N tridiagonal
matrix, by Gaussian elimination with partial pivoting.
ZGTSVX uses the LU factorization to compute the solution to a complex
system of linear equations A * X = B, A**T * X = B, or A**H * X = B,
where A is a tridiagonal matrix of order N and X and B are N-by-NRHS
matrices.
ZGTTRF computes an LU factorization of a complex tridiagonal matrix A
using elimination with partial pivoting and row interchanges.
ZGTTRS solves one of the systems of equations
A * X = B, A**T * X = B, or A**H * X = B, with a tridiagonal matrix
A using the LU factorization computed by ZGTTRF.
ZHBEV computes all the eigenvalues and, optionally, eigenvectors of a
complex Hermitian band matrix A.
ZHBEVX computes selected eigenvalues and, optionally, eigenvectors of a
complex Hermitian band matrix A. Eigenvalues/vectors can be selected by
specifying either a range of values or a range of indices for the desired
eigenvalues.
ZHBTRD reduces a complex Hermitian band matrix A to real symmetric
tridiagonal form T by a unitary similarity transformation: Q**H * A * Q
= T.
ZHECON estimates the reciprocal of the condition number of a complex
Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H
computed by ZHETRF.
ZHEEV computes all eigenvalues and, optionally, eigenvectors of a complex
Hermitian matrix A.
ZHEEVX computes selected eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix A. Eigenvalues and eigenvectors can be selected
by specifying either a range of values or a range of indices for the
desired eigenvalues.
ZHEGS2 reduces a complex Hermitian-definite generalized eigenproblem to
standard form.
ZHEGST reduces a complex Hermitian-definite generalized eigenproblem to
standard form.
ZHEGV computes all the eigenvalues, and optionally, the eigenvectors of a
complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B
are assumed to be Hermitian and B is also positive definite.
ZHERFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian indefinite, and provides error
bounds and backward error estimates for the solution.
ZHESV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian matrix and X and B are N-
by-NRHS matrices.
ZHESVX uses the diagonal pivoting factorization to compute the solution
to a complex system of linear equations A * X = B, where A is an N-by-N
Hermitian matrix and X and B are N-by-NRHS matrices.
ZHETD2 reduces a complex Hermitian matrix A to real symmetric tridiagonal
form T by a unitary similarity transformation: Q' * A * Q = T.
ZHETF2 computes the factorization of a complex Hermitian matrix A using
the Bunch-Kaufman diagonal pivoting method:
A = U*D*U' or A = L*D*L'
ZHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal
form T by a unitary similarity transformation: Q**H * A * Q = T.
ZHETRF computes the factorization of a complex Hermitian matrix A using
the Bunch-Kaufman diagonal pivoting method. The form of the
factorization is A = U*D*U**H or A = L*D*L**H.
ZHETRI computes the inverse of a complex Hermitian indefinite matrix A
using the factorization A = U*D*U**H or A = L*D*L**H computed by ZHETRF.
ZHETRS solves a system of linear equations A*X = B with a complex
Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H
computed by ZHETRF.
ZHGEQZ implements a single-shift version of the QZ method for finding
the generalized eigenvalues w(i) = ALPHA(i)/BETA(i) of the equation
det( A - w(i) B ) = 0. The diagonal elements of the reduced A are then
ALPHA(1),...,ALPHA(N), and of B are BETA(1),...,BETA(N).
ZHPCON estimates the reciprocal of the condition number of a complex
Hermitian packed matrix A using the factorization A = U*D*U**H or A =
L*D*L**H computed by ZHPTRF.
ZHPEV computes all the eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix in packed storage.
ZHPEVX computes selected eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix A in packed storage. Eigenvalues/vectors can be
selected by specifying either a range of values or a range of indices for
the desired eigenvalues.
ZHPGST reduces a complex Hermitian-definite generalized eigenproblem to
standard form, using packed storage.
ZHPGV computes all the eigenvalues and, optionally, the eigenvectors of a
complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B
are assumed to be Hermitian, stored in packed format, and B is also
positive definite.
ZHPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian indefinite and packed, and
provides error bounds and backward error estimates for the solution.
ZHPSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian matrix stored in packed
format and X and B are N-by-NRHS matrices.
ZHPSVX uses the diagonal pivoting factorization A = U*D*U**H or A =
L*D*L**H to compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian matrix stored in packed format
and X and B are N-by-NRHS matrices.
ZHPTRD reduces a complex Hermitian matrix A stored in packed form to real
symmetric tridiagonal form T by a unitary similarity transformation: Q**H
* A * Q = T.
ZHPTRF computes the factorization of a complex Hermitian packed matrix A
using the Bunch-Kaufman diagonal pivoting method:
A = U*D*U**H or A = L*D*L**H
ZHPTRI computes the inverse of a complex Hermitian indefinite matrix A in
packed storage using the factorization A = U*D*U**H or A = L*D*L**H
computed by ZHPTRF.
ZHPTRS solves a system of linear equations A*X = B with a complex
Hermitian matrix A stored in packed format using the factorization A =
U*D*U**H or A = L*D*L**H computed by ZHPTRF.
ZHSEIN uses inverse iteration to find specified right and/or left
eigenvectors of a complex upper Hessenberg matrix H.
ZHSEQR computes the eigenvalues of a complex upper Hessenberg matrix H,
and, optionally, the matrices T and Z from the Schur decomposition H = Z
T Z**H, where T is an upper triangular matrix (the Schur form), and Z is
the unitary matrix of Schur vectors.
ZLABRD reduces the first NB rows and columns of a complex general m by n
matrix A to upper or lower real bidiagonal form by a unitary
transformation Q' * A * P, and returns the matrices X and Y which are
needed to apply the transformation to the unreduced part of A.
ZLACGV conjugates a complex vector of length N.
ZLACON estimates the 1-norm of a square, complex matrix A. Reverse
communication is used for evaluating matrix-vector products.
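The matvec-only estimation idea behind ZLACON can be sketched with a simplified Hager-style estimator (onenorm_est is a hypothetical helper; the real routine organizes the products with A and A**H through reverse communication, and the estimate is a lower bound on the true 1-norm).

```python
import numpy as np

def onenorm_est(matvec, rmatvec, n, iters=5):
    x = np.full(n, 1.0 / n, dtype=complex)       # start with a uniform vector, norm1(x) = 1
    est = 0.0
    for _ in range(iters):
        y = matvec(x)
        est = np.abs(y).sum()                    # norm1 of A @ x, a lower bound on norm1(A)
        z = rmatvec(np.where(y != 0, y / np.abs(y), 1.0))  # A^H times sign(y)
        j = np.argmax(np.abs(z))
        x = np.zeros(n, dtype=complex)
        x[j] = 1.0                               # next trial vector: unit vector e_j
    return est

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
est = onenorm_est(lambda v: A @ v, lambda v: A.conj().T @ v, 6)
exact = np.abs(A).sum(axis=0).max()              # true 1-norm: max column sum
assert 0 < est <= exact + 1e-9
```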
ZLACPY copies all or part of a two-dimensional matrix A to another matrix
B.
ZLACRT applies a plane rotation, where the cos and sin (C and S) are
complex and the vectors CX and CY are complex.
ZLADIV computes the complex division X / Y, where X and Y are complex.
The computation of X / Y will not overflow on an intermediary step unless
the final result overflows.
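The overflow-safe division ZLADIV performs is essentially Smith's algorithm; a sketch (zladiv_sketch is an illustrative name, and the real routine's argument conventions may differ):

```python
# Smith's algorithm: divide using ratios chosen so no intermediate
# quantity squares the larger magnitude, avoiding the overflow the naive
# formula (a*c + b*d)/(c*c + d*d) can hit.
def zladiv_sketch(a, b, c, d):
    """Return (p, q) with p + i*q = (a + i*b) / (c + i*d)."""
    if abs(c) >= abs(d):
        r = d / c
        t = 1.0 / (c + d * r)
        return (a + b * r) * t, (b - a * r) * t
    else:
        r = c / d
        t = 1.0 / (c * r + d)
        return (a * r + b) * t, (b * r - a) * t

# Huge operands whose naive denominator c*c + d*d would overflow:
big = 1e200
p, q = zladiv_sketch(big, big, big, -big)   # (1+i)/(1-i) = i
assert p == 0.0 and abs(q - 1.0) < 1e-12
```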
ZLAEIN uses inverse iteration to find a right or left eigenvector
corresponding to the eigenvalue W of a complex upper Hessenberg matrix H.
ZLAESY computes the eigendecomposition of a 2-by-2 symmetric matrix
( ( A, B );( B, C ) ) provided the norm of the matrix of eigenvectors
is larger than some threshold value.
ZLAEV2 computes the eigendecomposition of a 2-by-2 Hermitian matrix
[ A B; CONJG(B) C ]. On return, RT1 is the eigenvalue of larger absolute
value, RT2 is the eigenvalue of smaller absolute value, and (CS1,SN1) is
the unit right eigenvector for RT1, so that the rotation defined by
(CS1,SN1) diagonalizes the matrix.
ZLAGS2 computes 2-by-2 unitary matrices U, V and Q such that the
transformed pair U'*A*Q and V'*B*Q has zeros in prescribed positions.
ZLAGTM performs a matrix-vector product of the form B := alpha*A*X +
beta*B, where A is a complex tridiagonal matrix, X and B are N-by-NRHS
matrices, and alpha and beta are real scalars.
ZLAHEF computes a partial factorization of a complex Hermitian matrix A
using the Bunch-Kaufman diagonal pivoting method.
ZLAHQR is an auxiliary routine called by ZHSEQR to update the eigenvalues
and Schur decomposition already computed by ZHSEQR, by dealing with the
Hessenberg submatrix in rows and columns ILO to IHI.
ZLAHRD reduces the first NB columns of a complex general n-by-(n-k+1)
matrix A so that elements below the k-th subdiagonal are zero. The
reduction is performed by a unitary similarity transformation Q' * A * Q.
The routine returns the matrices V and T which determine Q as a block
reflector I - V*T*V', and also the matrix Y = A * V * T.
ZLAIC1 applies one step of incremental condition estimation in its
simplest version: given an approximate singular vector x, with
twonorm(x) = 1, of a j-by-j lower triangular matrix L such that
twonorm(L*x) = sest, the routine estimates the corresponding singular
value of the matrix enlarged by one row and column.
ZLANGB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
band matrix A, with kl sub-diagonals and ku super-diagonals.
ZLANGE returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
matrix A.
ZLANGT returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
tridiagonal matrix A.
ZLANHB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
hermitian band matrix A, with k super-diagonals.
ZLANHE returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
hermitian matrix A.
ZLANHP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
hermitian matrix A, supplied in packed form.
ZLANHS returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
Hessenberg matrix A.
ZLANHT returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
Hermitian tridiagonal matrix A.
ZLANSB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
symmetric band matrix A, with k super-diagonals.
ZLANSP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
symmetric matrix A, supplied in packed form.
ZLANSY returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a complex
symmetric matrix A.
ZLANTB returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of an n by n
triangular band matrix A, with ( k + 1 ) diagonals.
ZLANTP returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
triangular matrix A, supplied in packed form.
ZLANTR returns the value of the one norm, or the Frobenius norm, or the
infinity norm, or the element of largest absolute value of a
trapezoidal or triangular matrix A.
ZLAPLL measures the linear dependence of two column vectors X and Y.
Given A = ( X Y ), the subroutine first computes the QR factorization A =
Q*R, and then computes the SVD of the 2-by-2 upper triangular matrix R.
The smaller singular value of R is returned in SSMIN, which is used as
the measure of the linear dependence of the vectors X and Y.
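The measurement ZLAPLL computes can be sketched with NumPy (an illustration of the mathematics, not the routine; smallest_sv is a hypothetical helper): parallel vectors give a smaller singular value near zero, independent vectors a clearly positive one.

```python
import numpy as np

def smallest_sv(x, y):
    # QR of the two-column matrix ( X Y ), then SVD of the 2-by-2 factor R;
    # the smaller singular value of R measures linear dependence of X and Y.
    _, R = np.linalg.qr(np.column_stack([x, y]))
    return np.linalg.svd(R, compute_uv=False)[-1]

x = np.array([1.0, 2.0, 3.0])
assert smallest_sv(x, 2.0 * x) < 1e-12                  # parallel: ssmin ~ 0
assert smallest_sv(x, np.array([0.0, 0.0, 1.0])) > 0.1  # independent: ssmin > 0
```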
ZLAPMT rearranges the columns of the M by N matrix X as specified by the
permutation K(1),K(2),...,K(N) of the integers 1,...,N. If FORWRD =
.TRUE., the forward permutation is applied; otherwise, the backward
(inverse) permutation is applied.
ZLAQGB equilibrates a general M by N band matrix A with KL subdiagonals
and KU superdiagonals using the row and scaling factors in the vectors R
and C.
ZLAQGE equilibrates a general M by N matrix A using the row and scaling
factors in the vectors R and C.
ZLAQSB equilibrates a symmetric band matrix A using the scaling factors
in the vector S.
ZLAQSP equilibrates a symmetric matrix A using the scaling factors in the
vector S.
ZLAQSY equilibrates a symmetric matrix A using the scaling factors in the
vector S.
ZLAR2V applies a vector of complex plane rotations with real cosines from
both sides to a sequence of 2-by-2 complex Hermitian matrices, defined by
the elements of the vectors x, y and z.
ZLARF applies a complex elementary reflector H to a complex M-by-N matrix
C, from either the left or the right. H is represented in the form H = I
- tau * v * v**H, where tau is a complex scalar and v is a complex
vector.
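The trick behind applying such a reflector can be sketched in NumPy (illustrative data; the explicit matrix H is formed only to check the cheap update): H*C costs one matrix-vector product and a rank-1 update, never forming the n-by-n matrix H.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 3
v = rng.standard_normal(m) + 1j * rng.standard_normal(m)
tau = 2.0 / np.vdot(v, v)              # this choice makes H unitary (Householder)
C = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

HC_cheap = C - tau * np.outer(v, v.conj() @ C)     # H*C via rank-1 update
H = np.eye(m) - tau * np.outer(v, v.conj())        # explicit H, for checking only
assert np.allclose(HC_cheap, H @ C)
assert np.allclose(H.conj().T @ H, np.eye(m))      # unitary when tau = 2/(v^H v)
```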
ZLARFB applies a complex block reflector H or its transpose H' to a
complex M-by-N matrix C, from either the left or the right.
ZLARFG generates a complex elementary reflector H of order n such that
H**H * ( alpha ; x ) = ( beta ; 0 ), where alpha and beta are scalars
(beta real) and x is an (n-1)-element complex vector.
ZLARFT forms the triangular factor T of a complex block reflector H of
order n, which is defined as a product of k elementary reflectors.
ZLARFX applies a complex elementary reflector H to a complex m by n
matrix C, from either the left or the right. H is represented in the form
H = I - tau * v * v**H, where tau is a complex scalar and v is a complex
vector.
ZLARGV generates a vector of complex plane rotations with real cosines,
determined by elements of the complex vectors x and y: for i = 1,2,...,n,
( c(i) s(i) ; -conjg(s(i)) c(i) ) * ( x(i) ; y(i) ) = ( r(i) ; 0 ).
ZLARNV returns a vector of n random complex numbers from a uniform or
normal distribution.
ZLARTG generates a plane rotation so that
( CS SN ; -CONJG(SN) CS ) * ( F ; G ) = ( R ; 0 ),
where CS is real and SN, F, G and R are complex.
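The rotation ZLARTG produces can be sketched in NumPy (zlartg_sketch is an illustrative name; the routine's exact conventions for the sign of R may differ): real cosine cs, complex sine sn, chosen so the second component is annihilated.

```python
import numpy as np

def zlartg_sketch(f, g):
    """Return (cs, sn, r) with [cs sn; -conj(sn) cs] @ [f; g] = [r; 0]."""
    if g == 0:
        return 1.0, 0.0 + 0j, f
    if f == 0:
        return 0.0, g.conjugate() / abs(g), abs(g)
    d = np.hypot(abs(f), abs(g))
    cs = abs(f) / d                       # real cosine
    sn = (f / abs(f)) * g.conjugate() / d # complex sine
    r = (f / abs(f)) * d
    return cs, sn, r

f, g = 3 + 4j, 1 - 2j
cs, sn, r = zlartg_sketch(f, g)
assert abs(-np.conj(sn) * f + cs * g) < 1e-12      # second component zeroed
assert abs(cs * f + sn * g - r) < 1e-12
assert abs(cs**2 + abs(sn)**2 - 1) < 1e-12         # rotation is unitary
```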
ZLARTV applies a vector of complex plane rotations with real cosines to
elements of the complex vectors x and y: for i = 1,2,...,n,
( x(i) ; y(i) ) := ( c(i) s(i) ; -conjg(s(i)) c(i) ) * ( x(i) ; y(i) ).
ZLASCL multiplies the M by N complex matrix A by the real scalar
CTO/CFROM. This is done without over/underflow as long as the final
result CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that A
may be full, upper triangular, lower triangular, upper Hessenberg, or
banded.
ZLASET initializes a 2-D array A to BETA on the diagonal and ALPHA on the
offdiagonals.
ZLASR performs the transformation consisting of a sequence of plane
rotations determined by the parameters PIVOT and DIRECT as follows ( z =
m when SIDE = 'L' or 'l' and z = n when SIDE = 'R' or 'r' ):
ZLASSQ returns the values scl and ssq such that
( scl**2 )*ssq = x( 1 )**2 + ... + x( n )**2 + ( scale**2 )*sumsq,
where x( i ) = abs( X( 1 + ( i - 1 )*INCX ) ). The value of sumsq is
assumed to be at least unity and the value of ssq will then satisfy
1.0 .le. ssq .le. ( sumsq + 2*n ).
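The scaled update ZLASSQ performs can be sketched in Python (lassq_sketch is an illustrative helper): maintain (scl, ssq) with scl**2 * ssq equal to the running sum of squares, rescaling so no |x_i|**2 is ever formed directly.

```python
import numpy as np

def lassq_sketch(x, scl=0.0, ssq=1.0):
    for xi in x:
        a = abs(xi)
        if a == 0.0:
            continue
        if scl < a:
            ssq = 1.0 + ssq * (scl / a) ** 2   # rescale old sum to new scale
            scl = a
        else:
            ssq += (a / scl) ** 2
    return scl, ssq

# |x_i|**2 would overflow if squared directly:
x = np.array([3e200 + 4e200j, 1e200])
scl, ssq = lassq_sketch(x)
norm = scl * np.sqrt(ssq)                      # two-norm, computed safely
assert np.isclose(norm, np.sqrt(26) * 1e200)
```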
ZLASWP performs a series of row interchanges on the matrix A. One row
interchange is initiated for each of rows K1 through K2 of A.
ZLASYF computes a partial factorization of a complex symmetric matrix A
using the Bunch-Kaufman diagonal pivoting method. The partial
factorization has the form:
ZLATBS solves one of the triangular systems A*x = s*b or A'*x = s*b,
with scaling to prevent overflow, where A is an upper or lower triangular
band matrix. Here A' denotes the transpose of A, x and b are n-element
vectors, and s is a scaling factor, usually less than or equal to 1,
chosen so that the components of x will be less than the overflow
threshold. If the unscaled problem will not cause overflow, the Level 2
BLAS routine ZTBSV is called. If the matrix A is singular (A(j,j) = 0
for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is
returned.
ZLATPS solves one of the triangular systems A*x = s*b, A**T*x = s*b, or
A**H*x = s*b, with scaling to prevent overflow, where A is an upper or
lower triangular
matrix stored in packed form. Here A**T denotes the transpose of A, A**H
denotes the conjugate transpose of A, x and b are n-element vectors, and
s is a scaling factor, usually less than or equal to 1, chosen so that
the components of x will be less than the overflow threshold. If the
unscaled problem will not cause overflow, the Level 2 BLAS routine ZTPSV
is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is
set to 0 and a non-trivial solution to A*x = 0 is returned.
ZLATRD reduces NB rows and columns of a complex Hermitian matrix A to
Hermitian tridiagonal form by a unitary similarity transformation Q' * A
* Q, and returns the matrices V and W which are needed to apply the
transformation to the unreduced part of A.
ZLATRS solves one of the triangular systems A*x = s*b, A**T*x = s*b, or
A**H*x = s*b, with scaling to prevent overflow. Here A is an upper or
lower triangular
matrix, A**T denotes the transpose of A, A**H denotes the conjugate
transpose of A, x and b are n-element vectors, and s is a scaling factor,
usually less than or equal to 1, chosen so that the components of x will
be less than the overflow threshold. If the unscaled problem will not
cause overflow, the Level 2 BLAS routine ZTRSV is called. If the matrix A
is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial
solution to A*x = 0 is returned.
ZLATZM applies a Householder matrix generated by ZTZRQF to a matrix.
ZLAUU2 computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A
(unblocked algorithm).
ZLAUUM computes the product U * U' or L' * L, where the triangular factor
U or L is stored in the upper or lower triangular part of the array A
(blocked algorithm).
ZLAZRO initializes a 2-D array A to BETA on the diagonal and ALPHA on the
offdiagonals.
ZPBCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex Hermitian positive definite band matrix using the Cholesky
factorization A = U**H*U or A = L*L**H computed by ZPBTRF.
ZPBEQU computes row and column scalings intended to equilibrate a
Hermitian positive definite band matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
ZPBRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite and banded,
and provides error bounds and backward error estimates for the solution.
ZPBSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite band
matrix and X and B are N-by-NRHS matrices.
ZPBSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite band
matrix and X and B are N-by-NRHS matrices.
ZPBTF2 computes the Cholesky factorization of a complex Hermitian
positive definite band matrix A (unblocked algorithm).
ZPBTRF computes the Cholesky factorization of a complex Hermitian
positive definite band matrix A (blocked algorithm).
ZPBTRS solves a system of linear equations A*X = B with a Hermitian
positive definite band matrix A using the Cholesky factorization A =
U**H*U or A = L*L**H computed by ZPBTRF.
ZPOCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex Hermitian positive definite matrix using the Cholesky
factorization A = U**H*U or A = L*L**H computed by ZPOTRF.
ZPOEQU computes row and column scalings intended to equilibrate a
Hermitian positive definite matrix A and reduce its condition number
(with respect to the two-norm). S contains the scale factors, S(i) =
1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) =
S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the
condition number of B within a factor N of the smallest possible
condition number over all possible diagonal scalings.
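The scaling ZPOEQU computes can be sketched in NumPy (illustrative data, not a call into complib): S(i) = 1/sqrt(A(i,i)) gives a scaled matrix B = diag(S)*A*diag(S) with unit diagonal and, typically, a much smaller condition number.

```python
import numpy as np

# Hermitian positive definite matrix with badly scaled diagonal.
A = np.array([[100.0, 0.5 + 0.5j], [0.5 - 0.5j, 0.01]])
s = 1.0 / np.sqrt(np.diag(A).real)     # scale factors S(i) = 1/sqrt(A(i,i))
B = A * np.outer(s, s)                 # B(i,j) = S(i)*A(i,j)*S(j)
assert np.allclose(np.diag(B).real, 1.0)

cond_before = np.linalg.cond(A)
cond_after = np.linalg.cond(B)
assert cond_after < cond_before        # equilibration reduced the condition number
```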
ZPORFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite, and provides
error bounds and backward error estimates for the solution.
ZPOSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix and
X and B are N-by-NRHS matrices.
ZPOSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix and
X and B are N-by-NRHS matrices.
ZPOTF2 computes the Cholesky factorization of a complex Hermitian
positive definite matrix A (unblocked algorithm).
ZPOTRF computes the Cholesky factorization of a complex Hermitian
positive definite matrix A (blocked algorithm).
ZPOTRI computes the inverse of a complex Hermitian positive definite
matrix A using the Cholesky factorization A = U**H*U or A = L*L**H
computed by ZPOTRF.
ZPOTRS solves a system of linear equations A*X = B with a Hermitian
positive definite matrix A using the Cholesky factorization A = U**H*U or
A = L*L**H computed by ZPOTRF.
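The factor-once, solve-cheaply pattern of this pair of routines can be sketched in NumPy (illustrative data; np.linalg.solve stands in for dedicated triangular solvers): factor A = L*L**H, then solve A*x = b with two triangular solves.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T + n * np.eye(n)      # Hermitian positive definite
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

L = np.linalg.cholesky(A)               # the factorization step
y = np.linalg.solve(L, b)               # forward substitution L*y = b
x = np.linalg.solve(L.conj().T, y)      # back substitution L^H*x = y
assert np.allclose(A @ x, b)
```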
ZPPCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex Hermitian positive definite packed matrix using the Cholesky
factorization A = U**H*U or A = L*L**H computed by ZPPTRF.
ZPPEQU computes row and column scalings intended to equilibrate a
Hermitian positive definite matrix A in packed storage and reduce its
condition number (with respect to the two-norm). S contains the scale
factors, S(i)=1/sqrt(A(i,i)), chosen so that the scaled matrix B with
elements B(i,j)=S(i)*A(i,j)*S(j) has ones on the diagonal. This choice
of S puts the condition number of B within a factor N of the smallest
possible condition number over all possible diagonal scalings.
ZPPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite and packed,
and provides error bounds and backward error estimates for the solution.
ZPPSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
ZPPSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N Hermitian positive definite matrix
stored in packed format and X and B are N-by-NRHS matrices.
ZPPTRF computes the Cholesky factorization of a complex Hermitian
positive definite matrix stored in packed format.
ZPPTRI computes the inverse of a complex Hermitian positive definite
matrix A using the Cholesky factorization A = U**H*U or A = L*L**H
computed by ZPPTRF.
ZPPTRS solves a system of linear equations A*X = B with a Hermitian
positive definite matrix A in packed storage using the Cholesky
factorization A = U**H*U or A = L*L**H computed by ZPPTRF.
ZPTCON computes the reciprocal of the condition number (in the 1-norm) of
a complex Hermitian positive definite tridiagonal matrix using the
factorization A = L*D*L**H or A = U**H*D*U computed by ZPTTRF.
ZPTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric positive definite tridiagonal matrix by first factoring the
matrix using DPTTRF and then calling ZBDSQR to compute the singular
values of the bidiagonal factor.
ZPTRFS improves the computed solution to a system of linear equations
when the coefficient matrix is Hermitian positive definite and
tridiagonal, and provides error bounds and backward error estimates for
the solution.
ZPTSV computes the solution to a complex system of linear equations A*X =
B, where A is an N-by-N Hermitian positive definite tridiagonal matrix,
and X and B are N-by-NRHS matrices.
ZPTSVX uses the factorization A = L*D*L**H to compute the solution to a
complex system of linear equations A*X = B, where A is an N-by-N
Hermitian positive definite tridiagonal matrix and X and B are N-by-NRHS
matrices.
ZPTTRF computes the factorization of a complex Hermitian positive
definite tridiagonal matrix A.
ZPTTRS solves a system of linear equations A * X = B with a Hermitian
positive definite tridiagonal matrix A using the factorization A =
U**H*D*U or A = L*D*L**H computed by ZPTTRF.
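The factorization ZPTTRF computes can be sketched in NumPy (pttrf_sketch is an illustrative helper; the routine's array conventions may differ): for a Hermitian positive definite tridiagonal matrix with real diagonal d and complex subdiagonal e, A = L*D*L**H with unit-bidiagonal L and positive diagonal D.

```python
import numpy as np

def pttrf_sketch(d, e):
    d = d.astype(float).copy()
    l = np.empty(len(e), dtype=complex)
    for i in range(len(e)):
        l[i] = e[i] / d[i]                    # subdiagonal of unit-bidiagonal L
        d[i + 1] -= (abs(e[i]) ** 2) / d[i]   # remaining pivot after elimination
    return d, l

d = np.array([4.0, 5.0, 6.0])                 # diagonal
e = np.array([1 + 1j, 2 - 1j])                # subdiagonal
D, l = pttrf_sketch(d, e)

# Rebuild A = L*D*L^H and compare with the original tridiagonal matrix.
L = np.eye(3, dtype=complex) + np.diag(l, -1)
A = np.diag(d).astype(complex) + np.diag(e, -1) + np.diag(e.conj(), 1)
assert np.allclose(L @ np.diag(D) @ L.conj().T, A)
```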
ZROT applies a plane rotation, where the cos (C) is real and the sin
(S) is complex, and the vectors CX and CY are complex.
ZSPCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex symmetric packed matrix A using the factorization A =
U*D*U**T or A = L*D*L**T computed by ZSPTRF.
ZSPMV performs the matrix-vector operation y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is
an n by n symmetric matrix, supplied in packed form.
ZSPR performs the symmetric rank 1 operation A := alpha*x*x**T + A,
where alpha is a complex scalar, x is an n element vector and A is an n
by n symmetric matrix, supplied in packed form.
ZSPRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite and packed, and
provides error bounds and backward error estimates for the solution.
ZSPSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N symmetric matrix stored in packed
format and X and B are N-by-NRHS matrices.
ZSPSVX uses the diagonal pivoting factorization A = U*D*U**T or A =
L*D*L**T to compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N symmetric matrix stored in packed format
and X and B are N-by-NRHS matrices.
ZSPTRF computes the factorization of a complex symmetric matrix A stored
in packed format using the Bunch-Kaufman diagonal pivoting method:
A = U*D*U**T or A = L*D*L**T
ZSPTRI computes the inverse of a complex symmetric indefinite matrix A in
packed storage using the factorization A = U*D*U**T or A = L*D*L**T
computed by ZSPTRF.
ZSPTRS solves a system of linear equations A*X = B with a complex
symmetric matrix A stored in packed format using the factorization A =
U*D*U**T or A = L*D*L**T computed by ZSPTRF.
ZSTEIN computes the eigenvectors of a real symmetric tridiagonal matrix T
corresponding to specified eigenvalues, using inverse iteration.
ZSTEQR computes all eigenvalues and, optionally, eigenvectors of a
symmetric tridiagonal matrix using the implicit QL or QR method. The
eigenvectors of a full or band complex Hermitian matrix can also be found
if ZSYTRD or ZSPTRD or ZSBTRD has been used to reduce this matrix to
tridiagonal form.
ZSYCON estimates the reciprocal of the condition number (in the 1-norm)
of a complex symmetric matrix A using the factorization A = U*D*U**T or A
= L*D*L**T computed by ZSYTRF.
ZSYMV performs the matrix-vector operation y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is
an n by n symmetric matrix.
ZSYR performs the symmetric rank 1 operation A := alpha*x*x**T + A, where
alpha is a complex scalar, x is an n element vector and A is an n by n
symmetric matrix.
ZSYRFS improves the computed solution to a system of linear equations
when the coefficient matrix is symmetric indefinite, and provides error
bounds and backward error estimates for the solution.
ZSYSV computes the solution to a complex system of linear equations
A * X = B, where A is an N-by-N symmetric matrix and X and B are N-
by-NRHS matrices.
ZSYSVX uses the diagonal pivoting factorization to compute the solution
to a complex system of linear equations A * X = B, where A is an N-by-N
symmetric matrix and X and B are N-by-NRHS matrices.
ZSYTF2 computes the factorization of a complex symmetric matrix A using
the Bunch-Kaufman diagonal pivoting method:
A = U*D*U' or A = L*D*L'
ZSYTRF computes the factorization of a complex symmetric matrix A using
the Bunch-Kaufman diagonal pivoting method. The form of the factorization
is A = U*D*U**T or A = L*D*L**T.
ZSYTRI computes the inverse of a complex symmetric indefinite matrix A
using the factorization A = U*D*U**T or A = L*D*L**T computed by ZSYTRF.
ZSYTRS solves a system of linear equations A*X = B with a complex
symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T
computed by ZSYTRF.
ZTBCON estimates the reciprocal of the condition number of a triangular
band matrix A, in either the 1-norm or the infinity-norm.
ZTBRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular band
coefficient matrix.
ZTBTRS solves a triangular system of the form A * X = B, A**T * X = B,
or A**H * X = B,
where A is a triangular band matrix of order N, and B is an N-by-NRHS
matrix. A check is made to verify that A is nonsingular.
ZTGEVC computes selected left and/or right generalized eigenvectors of a
pair of complex upper triangular matrices (A,B). The j-th generalized
left and right eigenvectors y and x, respectively, satisfy y**H*(A - w*B)
= 0 and (A - w*B)*x = 0, where w is the j-th generalized eigenvalue.
ZTGSJA computes the generalized singular value decomposition (GSVD) of
two complex upper triangular (or trapezoidal) matrices A and B.
ZTPCON estimates the reciprocal of the condition number of a packed
triangular matrix A, in either the 1-norm or the infinity-norm.
ZTPRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular packed
coefficient matrix.
ZTPTRI computes the inverse of a complex upper or lower triangular matrix
A stored in packed format.
ZTPTRS solves a triangular system of the form A * X = B, A**T * X = B,
or A**H * X = B,
where A is a triangular matrix of order N stored in packed format, and B
is an N-by-NRHS matrix. A check is made to verify that A is nonsingular.
ZTRCON estimates the reciprocal of the condition number of a triangular
matrix A, in either the 1-norm or the infinity-norm.
ZTREVC computes all or some right and/or left eigenvectors of a complex
upper triangular matrix T.
ZTREXC reorders the Schur factorization of a complex matrix A = Q*T*Q**H,
so that the diagonal element of T with row index IFST is moved to row
ILST.
ZTRRFS provides error bounds and backward error estimates for the
solution to a system of linear equations with a triangular coefficient
matrix.
ZTRSEN reorders the Schur factorization of a complex matrix A = Q*T*Q**H,
so that a selected cluster of eigenvalues appears in the leading
positions on the diagonal of the upper triangular matrix T, and the
leading columns of Q form an orthonormal basis of the corresponding right
invariant subspace.
ZTRSNA estimates reciprocal condition numbers for specified eigenvalues
and/or right eigenvectors of a complex upper triangular matrix T (or of
any matrix Q*T*Q**H with Q unitary).
ZTRSYL solves the complex Sylvester matrix equation op(A)*X + X*op(B) =
scale*C or op(A)*X - X*op(B) = scale*C, where op(A) = A or A**H.
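The equation ZTRSYL solves can be sketched via vectorization (only to show the equation; the routine itself works on triangular A and B and is far more efficient than this dense Kronecker method): A*X + X*B = C is the linear system (kron(I, A) + kron(B.T, I)) vec(X) = vec(C).

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 2
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# vec stacks X column by column (Fortran order), so
# vec(A*X) = kron(I, A) vec(X) and vec(X*B) = kron(B.T, I) vec(X).
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
X = np.linalg.solve(K, C.flatten(order="F")).reshape((m, n), order="F")
assert np.allclose(A @ X + X @ B, C)
```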
ZTRTI2 computes the inverse of a complex upper or lower triangular
matrix.
ZTRTRI computes the inverse of a complex upper or lower triangular matrix
A.
ZTRTRS solves a triangular system of the form A * X = B, A**T * X = B,
or A**H * X = B,
where A is a triangular matrix of order N, and B is an N-by-NRHS matrix.
A check is made to verify that A is nonsingular.
ZTZRQF reduces the M-by-N ( M<=N ) complex upper trapezoidal matrix A to
upper triangular form by means of unitary transformations.
ZUNG2L generates an m by n complex matrix Q with orthonormal columns,
which is defined as the last n columns of a product of k elementary
reflectors of order m
ZUNG2R generates an m by n complex matrix Q with orthonormal columns,
which is defined as the first n columns of a product of k elementary
reflectors of order m
ZUNGBR generates one of the matrices Q or P**H determined by ZGEBRD when
reducing a complex matrix A to bidiagonal form: A = Q * B * P**H.
ZUNGHR generates a complex unitary matrix Q which is defined as the
product of IHI-ILO elementary reflectors of order N, as returned by
ZGEHRD:
Q = H(ilo) H(ilo+1) . . . H(ihi-1).
ZUNGL2 generates an m-by-n complex matrix Q with orthonormal rows, which
is defined as the first m rows of a product of k elementary reflectors of
order n
ZUNGLQ generates an M-by-N complex matrix Q with orthonormal rows, which
is defined as the first M rows of a product of K elementary reflectors of
order N
ZUNGQL generates an M-by-N complex matrix Q with orthonormal columns,
which is defined as the last N columns of a product of K elementary
reflectors of order M
ZUNGQR generates an M-by-N complex matrix Q with orthonormal columns,
which is defined as the first N columns of a product of K elementary
reflectors of order M
ZUNGR2 generates an m by n complex matrix Q with orthonormal rows, which
is defined as the last m rows of a product of k elementary reflectors of
order n
ZUNGRQ generates an M-by-N complex matrix Q with orthonormal rows, which
is defined as the last M rows of a product of K elementary reflectors of
order N
ZUNGTR generates a complex unitary matrix Q which is defined as the
product of n-1 elementary reflectors of order N, as returned by ZHETRD:
if UPLO = 'U', Q = H(n-1) . . . H(2) H(1), and if UPLO = 'L', Q = H(1)
H(2) . . . H(n-1).
ZUNM2L overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q or C*Q**H, where Q is a complex unitary matrix defined as the product
of k elementary reflectors.
ZUNM2R overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q or C*Q**H, where Q is a complex unitary matrix defined as the product
of k elementary reflectors.
If VECT = 'Q', ZUNMBR overwrites the general complex M-by-N matrix C with
Q*C (SIDE = 'L', TRANS = 'N'), Q**H*C (SIDE = 'L', TRANS = 'C'), C*Q
(SIDE = 'R', TRANS = 'N'), or C*Q**H (SIDE = 'R', TRANS = 'C'). If VECT
= 'P', the matrix P or P**H is applied instead.
ZUNMHR overwrites the general complex M-by-N matrix C with Q*C, Q**H*C,
C*Q or C*Q**H.
ZUNML2 overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q or C*Q**H, where Q is a complex unitary matrix defined as the product
of k elementary reflectors.
ZUNMLQ overwrites the general complex M-by-N matrix C with Q*C, Q**H*C,
C*Q or C*Q**H.
ZUNMQL overwrites the general complex M-by-N matrix C with Q*C, Q**H*C,
C*Q or C*Q**H.
ZUNMQR overwrites the general complex M-by-N matrix C with Q*C, Q**H*C,
C*Q or C*Q**H.
ZUNMR2 overwrites the general complex m-by-n matrix C with Q*C, Q**H*C,
C*Q or C*Q**H, where Q is a complex unitary matrix defined as the product
of k elementary reflectors.
ZUNMRQ overwrites the general complex M-by-N matrix C with Q*C, Q**H*C,
C*Q or C*Q**H.
ZUNMTR overwrites the general complex M-by-N matrix C with Q*C, Q**H*C,
C*Q or C*Q**H.
ZUPGTR generates a complex unitary matrix Q which is defined as the
product of n-1 elementary reflectors of order n, as returned by ZHPTRD
using packed storage: if UPLO = 'U', Q = H(n-1) . . . H(2) H(1), and if
UPLO = 'L', Q = H(1) H(2) . . . H(n-1).
ZUPMTR overwrites the general complex M-by-N matrix C with Q*C, Q**H*C,
C*Q or C*Q**H.