To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
The original (infinite) integration range is mapped onto the interval (0,1), and (A,B) is a part of (0,1). The purpose is to compute I = Integral of the transformed integrand over (A,B) and J = Integral of ABS(Transformed Integrand) over (A,B).
To compute I = Integral of F*W over (A,B), with error estimate J = Integral of ABS(F*W) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
The original (infinite) integration range is mapped onto the interval (0,1), and (A,B) is a part of (0,1). The purpose is to compute I = Integral of the transformed integrand over (A,B) and J = Integral of ABS(Transformed Integrand) over (A,B).
To compute I = Integral of F*W over (A,B), with error estimate J = Integral of ABS(F*W) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F*W over (A,B) with error estimate, where W(X) = 1/(X-C)
To compute I = Integral of F*W over (BL,BR), with error estimate, where the weight function W has a singular behaviour of algebraic and/or logarithmic type at the points A and/or B. (BL,BR) is a part of (A,B).
To compute I = Integral of F*W over (A,B) with error estimate, where W(X) = 1/(X-C)
To compute I = Integral of F*W over (BL,BR), with error estimate, where the weight function W has a singular behaviour of algebraic and/or logarithmic type at the points A and/or B. (BL,BR) is a part of (A,B).
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
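The quadrature routines above all return an approximation together with an error estimate. A minimal Python sketch of the same idea, using adaptive Simpson with Richardson error estimation (names are illustrative, not the library's Fortran interface):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Approximate I = integral of f over (a, b) with an error estimate.

    Recursive Simpson: an interval is accepted when the difference
    between one Simpson panel and its two half-panels (a Richardson
    error estimate) falls below the tolerance.
    """
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        err = (left + right - whole) / 15.0   # Richardson estimate
        if abs(err) <= tol:
            return left + right + err, abs(err)
        li, le = recurse(a, m, fa, flm, fm, left, tol / 2)
        ri, re = recurse(m, b, fm, frm, fb, right, tol / 2)
        return li + ri, le + re

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    whole = simpson(fa, fm, fb, a, b)
    return recurse(a, b, fa, fm, fb, whole, tol)

result, err_est = adaptive_simpson(math.sin, 0.0, math.pi)  # exact value: 2
```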
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3), written here as (top row; bottom row), for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2), written here as (top row; bottom row), for all allowed values of M2, the other parameters being held fixed.
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3), written here as (top row; bottom row), for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2), written here as (top row; bottom row), for all allowed values of M2, the other parameters being held fixed.
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3), written here as (top row; bottom row), for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2), written here as (top row; bottom row), for all allowed values of M2, the other parameters being held fixed.
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3), written here as (top row; bottom row), for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2), written here as (top row; bottom row), for all allowed values of M2, the other parameters being held fixed.
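For illustration, a pure-Python evaluation of a single Wigner 3j symbol via the direct Racah sum (integer arguments only; the routines above instead use a numerically stable recurrence in L1 or M2 to generate whole sequences, which this sketch does not attempt):

```python
from math import factorial, prod, sqrt

def three_j(j1, j2, j3, m1, m2, m3):
    """Wigner 3j symbol by the Racah single-sum formula (integer args)."""
    if m1 + m2 + m3 != 0:
        return 0.0
    if (j3 < abs(j1 - j2) or j3 > j1 + j2
            or abs(m1) > j1 or abs(m2) > j2 or abs(m3) > j3):
        return 0.0
    f = factorial
    delta = sqrt(f(j1 + j2 - j3) * f(j1 - j2 + j3) * f(-j1 + j2 + j3)
                 / f(j1 + j2 + j3 + 1))
    pre = sqrt(f(j1 + m1) * f(j1 - m1) * f(j2 + m2) * f(j2 - m2)
               * f(j3 + m3) * f(j3 - m3))
    total = 0.0
    for t in range(j1 + j2 + j3 + 1):
        dens = (t, j3 - j2 + t + m1, j3 - j1 + t - m2,
                j1 + j2 - j3 - t, j1 - t - m1, j2 - t + m2)
        if min(dens) < 0:
            continue  # skip terms with a negative factorial argument
        total += (-1) ** t / prod(f(d) for d in dens)
    return (-1) ** (j1 - j2 - m3) * delta * pre * total
```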
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6}, written here as {top row; bottom row}, for all allowed values of L1, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6}, written here as {top row; bottom row}, for all allowed values of L1, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6}, written here as {top row; bottom row}, for all allowed values of L1, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6}, written here as {top row; bottom row}, for all allowed values of L1, the other parameters being held fixed.
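The 6j symbol likewise admits a direct Racah single-sum evaluation; the sketch below handles small integer arguments only (the routine above evaluates the whole sequence h(L1) by recurrence, which is preferable in practice):

```python
from math import factorial, sqrt

def _tri(a, b, c):
    """Triangle coefficient Delta(a,b,c) for an admissible triad."""
    f = factorial
    return f(a + b - c) * f(a - b + c) * f(-a + b + c) / f(a + b + c + 1)

def six_j(j1, j2, j3, j4, j5, j6):
    """Wigner 6j symbol {j1 j2 j3; j4 j5 j6} by the Racah sum (integers)."""
    f = factorial
    # Each of the four triads must satisfy the triangle inequality.
    for a, b, c in ((j1, j2, j3), (j1, j5, j6), (j4, j2, j6), (j4, j5, j3)):
        if c < abs(a - b) or c > a + b:
            return 0.0
    norm = sqrt(_tri(j1, j2, j3) * _tri(j1, j5, j6)
                * _tri(j4, j2, j6) * _tri(j4, j5, j3))
    t_min = max(j1 + j2 + j3, j1 + j5 + j6, j4 + j2 + j6, j4 + j5 + j3)
    t_max = min(j1 + j2 + j4 + j5, j2 + j3 + j5 + j6, j3 + j1 + j6 + j4)
    total = 0.0
    for t in range(t_min, t_max + 1):
        den = (f(t - j1 - j2 - j3) * f(t - j1 - j5 - j6)
               * f(t - j4 - j2 - j6) * f(t - j4 - j5 - j3)
               * f(j1 + j2 + j4 + j5 - t) * f(j2 + j3 + j5 + j6 - t)
               * f(j3 + j1 + j6 + j4 - t))
        total += (-1) ** t * f(t + 1) / den
    return norm * total
```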
Abort program execution and print error message.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithm of the absolute value of the Gamma function.
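Python's standard library exposes the same quantity as math.lgamma: the logarithm of the absolute value of the Gamma function, which stays representable even where Gamma itself would overflow a double:

```python
import math

# math.lgamma(x) = log(abs(Gamma(x))); the sign of Gamma(x) is dropped,
# exactly as in the log-abs-Gamma routines described above.
log_gamma_10 = math.lgamma(10.0)    # Gamma(10) = 9! = 362880
log_gamma_200 = math.lgamma(200.0)  # Gamma(200) itself would overflow

# Gamma is negative on parts of the negative axis, e.g. at -2.5,
# so only log(abs(.)) is well defined there.
sign_at_minus_2_5 = math.copysign(1.0, math.gamma(-2.5))
```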
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic cosine.
Approximate the solution at XOUT by evaluating the polynomial computed in DSTEPS. Must be used in conjunction with DSTEPS.
Integrate a system of first order ordinary differential equations one step.
Approximate the solution at XOUT by evaluating the polynomial computed in STEPS. Must be used in conjunction with STEPS.
Integrate a system of first order ordinary differential equations one step.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
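A fixed-step two-step Adams-Bashforth sketch in Python shows the multistep idea behind such codes; the production routines vary order and step size adaptively, so this constant-step version is only illustrative:

```python
import math

def ab2(f, t0, y0, t_end, h):
    """Fixed-step two-step Adams-Bashforth method for y' = f(t, y).

    One forward Euler step supplies the second starting value; after
    that, each step reuses the previous derivative value instead of
    taking extra f-evaluations per step.
    """
    n = round((t_end - t0) / h)   # assumes h evenly divides the interval
    y = y0
    f_prev = f(t0, y)
    y = y + h * f_prev            # bootstrap: forward Euler
    for k in range(1, n):
        f_curr = f(t0 + k * h, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 update
        f_prev = f_curr
    return y

# y' = -y, y(0) = 1; exact solution is exp(-t).
y1 = ab2(lambda t, y: -y, 0.0, 1.0, 1.0, 0.001)
```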
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
Integrate a function using a 7-point adaptive Newton-Cotes quadrature rule.
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
Integrate a function using a 7-point adaptive Newton-Cotes quadrature rule.
Evaluate the Airy function.
Compute the Airy function Ai(z) or its derivative dAi/dz for complex argument z. A scaling option is available to help avoid underflow and overflow.
Compute the Airy function Bi(z) or its derivative dBi/dz for complex argument z. A scaling option is available to help avoid overflow.
Evaluate the Airy modulus and phase.
Evaluate the Airy function.
Evaluate the Airy modulus and phase.
Compute the Airy function Ai(z) or its derivative dAi/dz for complex argument z. A scaling option is available to help avoid underflow and overflow.
Compute the Airy function Bi(z) or its derivative dBi/dz for complex argument z. A scaling option is available to help avoid overflow.
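A small-argument sketch of Ai(x) in Python, using the Maclaurin series of the defining equation y'' = x*y; the routines above additionally switch to asymptotic expansions and offer exponential scaling for large |z|, which this sketch omits:

```python
import math

def airy_ai(x, terms=60):
    """Ai(x) from the Maclaurin series of y'' = x*y (small |x| only).

    The series coefficients obey a[n+3] = a[n] / ((n+3)*(n+2)), seeded
    with the known initial values
        Ai(0)  =  3**(-2/3) / Gamma(2/3),
        Ai'(0) = -3**(-1/3) / Gamma(1/3).
    """
    a = [0.0] * (3 * terms + 3)
    a[0] = 1.0 / (3.0 ** (2.0 / 3.0) * math.gamma(2.0 / 3.0))
    a[1] = -1.0 / (3.0 ** (1.0 / 3.0) * math.gamma(1.0 / 3.0))
    for n in range(3 * terms):
        a[n + 3] = a[n] / ((n + 3.0) * (n + 2.0))
    value = 0.0
    for c in reversed(a):   # Horner evaluation of the polynomial
        value = value * x + c
    return value
```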
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
This function subprogram is used together with the routine DQAWS and defines the WEIGHT function.
This function subprogram is used together with the routine QAWS and defines the WEIGHT function.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram FC.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram DFC.
Rearrange a given array according to a prescribed permutation vector.
Rearrange a given array according to a prescribed permutation vector.
Rearrange a given array according to a prescribed permutation vector.
Compute the complex arc cosine.
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic tangent.
Compute the arc hyperbolic tangent.
Compute the arc hyperbolic tangent.
Compute the complex arc sine.
Compute the complex arc tangent.
Compute the complex arc tangent in the proper quadrant.
Evaluate DATAN(X) to first order relative accuracy, so that DATAN(X) = X + X**3*D9ATN1(X).
Evaluate ATAN(X) to first order relative accuracy, so that ATAN(X) = X + X**3*R9ATN1(X).
Compute the argument of a complex number.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic tangent.
Compute the arc hyperbolic tangent.
Compute the arc hyperbolic tangent.
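The inverse hyperbolic and complex arc functions have closed forms in terms of logarithms, and Python's math and cmath modules expose them directly; library versions rearrange the formulas near zero to avoid cancellation, which the naive forms below do not:

```python
import math
import cmath

# Real inverse hyperbolics via their log closed forms:
#   acosh(x) = log(x + sqrt(x*x - 1)),    x >= 1
#   asinh(x) = log(x + sqrt(x*x + 1))
#   atanh(x) = 0.5*log((1 + x)/(1 - x)),  |x| < 1
acosh_direct = math.log(2.0 + math.sqrt(2.0 * 2.0 - 1.0))
atanh_direct = 0.5 * math.log((1.0 + 0.25) / (1.0 - 0.25))

# The complex arc functions extend beyond the real domain:
# acos of an argument larger than 1 is purely imaginary.
z = cmath.acos(2.0 + 0.0j)
```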
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.EPSABS.
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.EPSABS.
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
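The infinite-range routines rely on mapping the integration range onto (0,1), as described earlier in this list. A Python sketch of that transformation combined with a composite Simpson rule (illustrative only; it assumes the integrand decays faster than 1/x**2, and real codes handle the mapped integrand adaptively):

```python
import math

def integral_zero_to_inf(f, n=2000):
    """Integrate f over (0, +infinity) by mapping onto (0, 1).

    The substitution x = t/(1-t), dx = dt/(1-t)**2 turns the infinite
    range into (0,1); the mapped integrand is then integrated with
    composite Simpson. The endpoint t = 1 is assigned its limit 0,
    which is valid when f decays faster than 1/x**2 (an assumption
    of this sketch).
    """
    def g(t):
        if t >= 1.0:
            return 0.0
        u = 1.0 - t
        return f(t / u) / (u * u)

    h = 1.0 / n                     # n must be even for Simpson's rule
    total = g(0.0) + g(1.0)
    for k in range(1, n):
        total += (4.0 if k % 2 else 2.0) * g(k * h)
    return total * h / 3.0

approx = integral_zero_to_inf(lambda x: math.exp(-x))   # exact value: 1
```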
Compute the B-representation of a cubic spline which interpolates given data.
Compute the B-representation of a spline which interpolates given data.
Documentation for BSPLINE, a package of subprograms for working with piecewise polynomial functions in B-representation.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Calculate the value of the spline and its derivatives from the B-representation.
Convert the B-representation of a B-spline to the piecewise polynomial (PP) form.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram FC.
Compute the B-representation of a cubic spline which interpolates given data.
Compute the B-representation of a spline which interpolates given data.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Calculate the value of the spline and its derivatives from the B-representation.
Convert the B-representation of a B-spline to the piecewise polynomial (PP) form.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram DFC.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X, where XT(*) is a subdivision of the X interval.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X, where XT(*) is a subdivision of the X interval.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
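The B-representation machinery above can be illustrated with a pure-Python Cox-de Boor evaluation; the recursive form below is slow but transparent (library implementations use the iterative de Boor algorithm and also return derivatives, which this sketch omits):

```python
def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(x)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        left = (x - knots[i]) / d1 * bspline_basis(i, p - 1, knots, x)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        right = ((knots[i + p + 1] - x) / d2
                 * bspline_basis(i + 1, p - 1, knots, x))
    return left + right

def bspline_value(coefs, p, knots, x):
    """Value of the spline sum_i coefs[i]*N_{i,p}(x) (the B-representation)."""
    return sum(c * bspline_basis(i, p, knots, x) for i, c in enumerate(coefs))

# Cubic (p = 3) on uniform knots 0..9: 10 knots give 6 basis functions.
# With all coefficients equal to 1 the basis functions sum to 1 on
# [knots[p], knots[6]] = [3, 6] (partition of unity).
knots = [float(k) for k in range(10)]
value = bspline_value([1.0] * 6, 3, knots, 4.5)
```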
Piecewise Cubic Hermite to B-Spline converter.
Piecewise Cubic Hermite to B-Spline converter.
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
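Why backward differentiation formulas suit stiff problems can be seen already in the one-step member, backward Euler, on the linear test equation y' = lam*y. This Python sketch is illustrative only (real BDF/DAE codes vary order and step size and solve the implicit equations by Newton iteration):

```python
def backward_euler_linear(lam, y0, h, n):
    """Backward Euler (the one-step BDF) for y' = lam*y.

    The implicit update y_{k+1} = y_k + h*lam*y_{k+1} solves to
    y_{k+1} = y_k / (1 - h*lam); for lam < 0 this is stable for ANY
    step size h, which is the property that makes BDF methods suit
    stiff problems.
    """
    y = y0
    for _ in range(n):
        y = y / (1.0 - h * lam)
    return y

def forward_euler_linear(lam, y0, h, n):
    """Explicit Euler for comparison: stable only if |1 + h*lam| <= 1."""
    y = y0
    for _ in range(n):
        y = y * (1.0 + h * lam)
    return y

# lam = -50 with h = 0.1 violates the explicit stability bound (h < 2/50):
stiff = backward_euler_linear(-50.0, 1.0, 0.1, 10)   # decays, as it should
blowup = forward_euler_linear(-50.0, 1.0, 0.1, 10)   # grows without bound
```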
Evaluate the Bairy function (the Airy function of the second kind).
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Evaluate the Bairy function (the Airy function of the second kind).
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a complex band matrix using the factors from CGBCO or CGBFA.
Factor a band matrix using Gaussian elimination.
Solve the complex band system A*X=B or CTRANS(A)*X=B using the factors computed by CGBCO or CGBFA.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by CNBCO or CNBFA.
Factor a band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a general nonsymmetric banded system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a complex band system using the factors computed by CNBCO or CNBFA.
Factor a complex Hermitian positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a complex Hermitian positive definite band matrix using the factors computed by CPBCO or CPBFA.
Factor a complex Hermitian positive definite matrix stored in band form.
Solve the complex Hermitian positive definite band system using the factors computed by CPBCO or CPBFA.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a band matrix using the factors computed by DGBCO or DGBFA.
Factor a band matrix using Gaussian elimination.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by DGBCO or DGBFA.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by DNBCO or DNBFA.
Factor a band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a real band system using the factors computed by DNBCO or DNBFA.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by DPBCO or DPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by DPBCO or DPBFA.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a band matrix using the factors computed by SGBCO or SGBFA.
Factor a band matrix using Gaussian elimination.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by SGBCO or SGBFA.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by SNBCO or SNBFA.
Factor a real band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a general nonsymmetric banded system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a real band system using the factors computed by SNBCO or SNBFA.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by SPBCO or SPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by SPBCO or SPBFA.
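The band factor/solve pattern above reduces, for bandwidth one, to the classical Thomas algorithm. A Python sketch without pivoting (the general band routines use partial pivoting, which this sketch omits, so it assumes a diagonally dominant matrix):

```python
def solve_tridiagonal(lower, diag, upper, b):
    """Solve a tridiagonal system by banded Gaussian elimination
    (the Thomas algorithm).

    lower: subdiagonal (length n-1), diag: main diagonal (length n),
    upper: superdiagonal (length n-1), b: right-hand side (length n).
    """
    n = len(diag)
    d = list(diag)
    rhs = list(b)
    for i in range(1, n):                  # forward elimination
        m = lower[i - 1] / d[i - 1]
        d[i] -= m * upper[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (rhs[i] - upper[i] * x[i + 1]) / d[i]
    return x

# -x[i-1] + 2*x[i] - x[i+1] system whose solution is [1, 2, 3, 4, 5].
x = solve_tridiagonal([-1.0] * 4, [2.0] * 5, [-1.0] * 4,
                      [0.0, 0.0, 0.0, 0.0, 6.0])
```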
Compute the LU factorization of a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve the least squares problem for a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Compute the LU factorization of a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve the least squares problem for a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Compute the principal value of the complex base 10 logarithm.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Evaluate the modulus and phase for the J0 and Y0 Bessel functions.
Evaluate the modulus and phase for the J1 and Y1 Bessel functions.
Compute Bessel functions EXP(X)*K-SUB-XNU(X) and EXP(X)*K-SUB-XNU+1(X) for 0.0 .LE. XNU .LT. 1.0.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Compute Bessel functions EXP(X)*K-SUB-XNU(X) and EXP(X)*K-SUB-XNU+1(X) for 0.0 .LE. XNU .LT. 1.0.
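A power-series sketch of J0 in Python, valid for small to moderate arguments only; the routines above switch to rational or asymptotic approximations for large X, and the scaled EXP(X)*K... variants exist precisely to avoid underflow/overflow, neither of which this sketch attempts:

```python
def bessel_j0_series(x, terms=40):
    """J0(x) from its power series sum_k (-1)**k * (x*x/4)**k / (k!)**2.

    Each term follows from the previous one by the ratio -q/k**2 with
    q = x*x/4, so no factorials are formed explicitly.
    """
    term = 1.0
    total = 1.0
    q = 0.25 * x * x
    for k in range(1, terms):
        term *= -q / (k * k)
        total += term
    return total

# 2.404825557695773 is the first positive zero of J0.
j0_near_zero = bessel_j0_series(2.404825557695773)
```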
Compute the Airy function Ai(z) or its derivative dAi/dz for complex argument z. A scaling option is available to help avoid underflow and overflow.
Compute the Airy function Bi(z) or its derivative dBi/dz for complex argument z. A scaling option is available to help avoid overflow.
Compute the Airy function Ai(z) or its derivative dAi/dz for complex argument z. A scaling option is available to help avoid underflow and overflow.
Compute the Airy function Bi(z) or its derivative dBi/dz for complex argument z. A scaling option is available to help avoid overflow.
Compute the Airy function Ai(z) or its derivative dAi/dz for complex argument z. A scaling option is available to help avoid underflow and overflow.
Compute the Airy function Bi(z) or its derivative dBi/dz for complex argument z. A scaling option is available to help avoid overflow.
Compute the Airy function Ai(z) or its derivative dAi/dz for complex argument z. A scaling option is available to help avoid underflow and overflow.
Compute the Airy function Bi(z) or its derivative dBi/dz for complex argument z. A scaling option is available to help avoid overflow.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions I(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions J(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions K(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions I(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions J(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions K(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions J(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions J(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
Compute repeated integrals of the K-zero Bessel function.
Compute repeated integrals of the K-zero Bessel function.
Preconditioned BiConjugate Gradient sparse Ax = b solver. Routine to solve a non-symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned BiConjugate Gradient Squared Ax = b solver. Routine to solve a non-symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Preconditioned BiConjugate Gradient sparse Ax = b solver. Routine to solve a non-symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned BiConjugate Gradient Squared Ax = b solver. Routine to solve a non-symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
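The structure of such preconditioned iterative solvers can be sketched in Python for the symmetric positive definite case (plain preconditioned conjugate gradient; the BiCG variants above extend similar recurrences to non-symmetric systems). As in such packages, the matrix and preconditioner are supplied only as matrix-vector product routines:

```python
def pcg(matvec, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite system A*x = b, with A and the preconditioner given as
    functions (only matrix-vector products are needed).
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                            # residual b - A*0
    z = precond(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# SPD tridiagonal test matrix (-1, 2, -1) with a diagonal (Jacobi)
# preconditioner; the exact solution of this system is [1, 2, 3, 4, 5].
def matvec(v):
    n = len(v)
    return [(-v[i - 1] if i > 0 else 0.0) + 2.0 * v[i]
            + (-v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

solution = pcg(matvec, [0.0, 0.0, 0.0, 0.0, 6.0],
               lambda r: [0.5 * ri for ri in r])
```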
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
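The zero finder above relies on a sign change bracketing the root. As a hedged illustration of the bracketing idea (a plain bisection sketch in Python, not the library's actual hybrid algorithm), the hypothetical helper `fzero` below shrinks (B,C) until the root is pinned down:

```python
def fzero(f, b, c, tol=1e-12, max_iter=200):
    """Find a zero of f in (b, c), assuming f(b) and f(c) have opposite signs."""
    fb, fc = f(b), f(c)
    if fb * fc > 0:
        raise ValueError("f(b) and f(c) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (b + c)          # midpoint of the current bracket
        fm = f(m)
        if fm == 0 or 0.5 * abs(c - b) < tol:
            return m
        if fb * fm < 0:            # root lies in (b, m)
            c, fc = m, fm
        else:                      # root lies in (m, c)
            b, fb = m, fm
    return 0.5 * (b + c)
```

For example, `fzero(lambda x: x*x - 2, 1.0, 2.0)` converges to the square root of 2.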
Compute a constant times a vector plus a vector.
Copy a vector.
Compute the inner product of two vectors with extended precision accumulation.
Dot product of two complex vectors using the complex conjugate of the first vector.
Compute the inner product of two vectors.
Construct a Givens transformation.
Multiply a vector by a constant.
Apply a plane Givens rotation.
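The Givens construct/apply pair above has a simple real-arithmetic form. This is a hedged Python sketch of the standard plane rotation (function names here are illustrative, not the library's): choose c, s so the rotation zeroes the second component of (a, b).

```python
import math

def givens(a, b):
    """Construct c, s so that the rotation [c s; -s c] maps (a, b) to (r, 0)."""
    if b == 0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

def apply_givens(c, s, x, y):
    """Rotate the pair (x, y) by the plane rotation (c, s)."""
    return c * x + s * y, -s * x + c * y
```

Applying the rotation built from (3, 4) to (3, 4) itself yields (5, 0), the Euclidean length in the first slot.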
Scale a complex vector.
Interchange two vectors.
Compute the sum of the magnitudes of the elements of a vector.
Compute a constant times a vector plus a vector.
Compute the inner product of two vectors with extended precision accumulation and result.
Copy a vector.
Copy the negative of a vector to a vector.
Compute the inner product of two vectors.
Compute the Euclidean length (L2 norm) of a vector.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
Multiply a vector by a constant.
Compute the inner product of two vectors with extended precision accumulation and result.
Interchange two vectors.
Find the smallest index of the component of a complex vector having the maximum sum of magnitudes of real and imaginary parts.
Copy a vector.
Find the smallest index of that component of a vector having the maximum magnitude.
Find the smallest index of that component of a vector having the maximum magnitude.
Interchange two vectors.
Compute the sum of the magnitudes of the elements of a vector.
Compute a constant times a vector plus a vector.
Compute the sum of the magnitudes of the real and imaginary elements of a complex vector.
Compute the unitary norm of a complex vector.
Copy a vector.
Copy the negative of a vector to a vector.
Compute the inner product of two vectors.
Compute the inner product of two vectors with extended precision accumulation.
Compute the Euclidean length (L2 norm) of a vector.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
Multiply a vector by a constant.
Interchange two vectors.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Solve a square system of nonlinear equations.
Solve a square system of nonlinear equations.
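Solvers for square nonlinear systems are typically built around Newton iteration. As a hedged sketch (not the library's trust-region method), the hypothetical `newton2` below applies Newton's method to a 2-by-2 system, solving each linear step by Cramer's rule:

```python
def newton2(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton iteration for a 2x2 system f(x, y) = 0.
    jac returns the Jacobian entries (a, b, c, d) = [[a, b], [c, d]]."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        a, b, c, d = jac(x, y)
        det = a * d - b * c
        dx = (d * f1 - b * f2) / det   # solve J * delta = f by Cramer's rule
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y
```

For instance, the system x**2 + y**2 = 1, x = y converges from (1, 0.5) to (sqrt(2)/2, sqrt(2)/2).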
Solve the standard seven-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
This function subprogram is used together with the routine DQAWC and defines the WEIGHT function.
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
This function subprogram is used together with the routine QAWC and defines the WEIGHT function.
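A Cauchy principal value of this form can be understood by subtracting out the singularity: PV Integral of F(X)/(X-C) equals the ordinary integral of (F(X)-F(C))/(X-C), which is regular at X=C, plus F(C)*LOG((B-C)/(C-A)). A hedged Python sketch (a simple midpoint-rule illustration, not the library's adaptive Clenshaw-Curtis scheme; the name `cauchy_pv` is hypothetical):

```python
import math

def cauchy_pv(f, a, b, c, n=2000):
    """PV integral of f(x)/(x-c) over (a, b), with c strictly inside (a, b).
    The regularized integrand (f(x)-f(c))/(x-c) is smooth at x=c and is
    integrated by the composite midpoint rule."""
    fc = f(c)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h       # midpoints never hit x == c exactly here
        total += (f(x) - fc) / (x - c)
    return total * h + fc * math.log((b - c) / (c - a))
```

For example, PV Integral of x/(x-1) over (0, 2) equals 2, since the regular part integrates to 2 and the log term vanishes by symmetry.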
Test two characters to determine if they are the same letter, except for case.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
This routine computes the Chebyshev series expansions of degrees 12 and 24 of a function using a fast Fourier transform method: F(X) = SUM(K=1,...,13) (CHEB12(K)*T(K-1,X)) and F(X) = SUM(K=1,...,25) (CHEB24(K)*T(K-1,X)), where T(K,X) is the Chebyshev polynomial of degree K.
This routine computes the Chebyshev series expansions of degrees 12 and 24 of a function using a fast Fourier transform method: F(X) = SUM(K=1,...,13) (CHEB12(K)*T(K-1,X)) and F(X) = SUM(K=1,...,25) (CHEB24(K)*T(K-1,X)), where T(K,X) is the Chebyshev polynomial of degree K.
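The Chebyshev coefficients in such an expansion come from cosine sums over function values at Chebyshev points. A hedged Python sketch (the direct O(n^2) cosine-sum analogue of the FFT-based method, with a Clenshaw evaluator; names are illustrative):

```python
import math

def cheb_coeffs(f, n):
    """First n Chebyshev coefficients of f on [-1, 1], via discrete
    orthogonality of cos(k*theta) at the points theta_j = pi*(j+1/2)/n."""
    theta = [math.pi * (j + 0.5) / n for j in range(n)]
    fx = [f(math.cos(t)) for t in theta]
    return [(2.0 / n) * sum(fx[j] * math.cos(k * theta[j]) for j in range(n))
            for k in range(n)]

def cheb_eval(c, x):
    """Evaluate c[0]/2 + sum_{k>=1} c[k]*T(k, x) by the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + 0.5 * c[0]
```

As a check, expanding f(x) = 2x**2 - 1 (which is exactly T(2, x)) yields a coefficient of 1 on the degree-2 term and essentially zero elsewhere.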
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
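The basic decomposition these routines build on is A = L*TRANS(L) with L lower triangular. A hedged Python sketch of the unpivoted algorithm (the library versions add pivoting and condition estimation; `cholesky` here is a hypothetical name):

```python
import math

def cholesky(a):
    """Lower-triangular L with L @ L.T == A, for symmetric positive
    definite A given as a list of row lists."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)   # diagonal entry
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]  # below-diagonal entry
    return l
```

For A = [[4, 2], [2, 3]] this gives L = [[2, 0], [1, sqrt(2)]].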
Evaluate the 3j symbol f(L1) = ( L1 L2 L3) (-M2-M3 M2 M3) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3 ) (M1 M2 -M1-M2) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3} {L4 L5 L6} for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol f(L1) = ( L1 L2 L3) (-M2-M3 M2 M3) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3 ) (M1 M2 -M1-M2) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3} {L4 L5 L6} for all allowed values of L1, the other parameters being held fixed.
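The "allowed values of L1" in the 3j descriptions above come from two selection rules: the triangle inequality for (L1, L2, L3) and the requirement L1 >= ABS(M1) with M1 = -M2-M3. A hedged Python sketch of just the range computation (integer angular momenta only; it does not evaluate the symbols themselves):

```python
def l1_range(l2, l3, m2, m3):
    """Allowed integer L1 for the 3j symbol (L1 L2 L3; -M2-M3 M2 M3):
    the triangle rule |L2-L3| <= L1 <= L2+L3 combined with L1 >= |M2+M3|."""
    lo = max(abs(l2 - l3), abs(m2 + m3))
    return list(range(lo, l2 + l3 + 1))
```

For L2 = L3 = 1 with M2 = M3 = 0 the allowed values are [0, 1, 2]; raising M2 = M3 = 1 restricts the range to [2].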
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points; see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points; see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points; see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points; see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
Compute the coefficients of the polynomial fit (including Hermite polynomial fits) produced by a previous call to POLINT.
Compute the coefficients of the polynomial fit (including Hermite polynomial fits) produced by a previous call to POLINT.
Compute the complementary incomplete Gamma function for A near a negative integer and X small.
Compute Tricomi's incomplete Gamma function for small arguments.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Calculate the complementary incomplete Gamma function.
Calculate Tricomi's form of the incomplete Gamma function.
Calculate the complementary incomplete Gamma function.
Calculate Tricomi's form of the incomplete Gamma function.
Compute the complementary incomplete Gamma function for A near a negative integer and for small X.
Compute Tricomi's incomplete Gamma function for small arguments.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the complete Beta function.
Compute the complete Beta function.
Compute the complete Beta function.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, DRD(X,Y,Z) = Integral from zero to infinity of (3/2)*(t+X)**(-1/2)*(t+Y)**(-1/2)*(t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z nonnegative and at most one of them zero, RF(X,Y,Z) = Integral from zero to infinity of (1/2)*(t+X)**(-1/2)*(t+Y)**(-1/2)*(t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z nonnegative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = Integral from zero to infinity of (3/2)*(t+X)**(-1/2)*(t+Y)**(-1/2)*(t+Z)**(-1/2)*(t+P)**(-1) dt.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, RD(X,Y,Z) = Integral from zero to infinity of (3/2)*(t+X)**(-1/2)*(t+Y)**(-1/2)*(t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z nonnegative and at most one of them zero, RF(X,Y,Z) = Integral from zero to infinity of (1/2)*(t+X)**(-1/2)*(t+Y)**(-1/2)*(t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z nonnegative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = Integral from zero to infinity of (3/2)*(t+X)**(-1/2)*(t+Y)**(-1/2)*(t+Z)**(-1/2)*(t+P)**(-1) dt.
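Carlson's symmetric integrals are commonly evaluated through the duplication theorem, which leaves RF invariant while driving its three arguments together, where RF(a,a,a) = a**(-1/2). A hedged Python sketch of that idea (a simplified iteration without the truncated Taylor correction the production routines use):

```python
import math

def rf(x, y, z, tol=1e-12):
    """Carlson's elliptic integral R_F via the duplication theorem:
    replace (x, y, z) by ((x+lam)/4, (y+lam)/4, (z+lam)/4) until the
    arguments coincide, then use RF(a, a, a) = 1/sqrt(a)."""
    while True:
        sx, sy, sz = math.sqrt(x), math.sqrt(y), math.sqrt(z)
        lam = sx * sy + sx * sz + sy * sz
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < tol * mu:
            return 1.0 / math.sqrt(mu)
```

As a check against a closed form, RF(0, 1, 1) equals pi/2 (the complete integral).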
Compute the logarithm of the absolute value of the Gamma function.
Compute the log gamma correction factor so that LOG(CGAMMA(Z)) = 0.5*LOG(2.*PI) + (Z-0.5)*LOG(Z) - Z + C9LGMC(Z).
Compute the complete Gamma function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the log Gamma correction factor so that LOG(DGAMMA(X)) = LOG(SQRT(2*PI)) + (X-0.5)*LOG(X) - X + D9LGMC(X).
Compute the minimum and maximum bounds for the argument in the Gamma function.
Compute the complete Gamma function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the minimum and maximum bounds for the argument in the Gamma function.
Compute the complete Gamma function.
Compute the log Gamma correction factor so that LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + R9LGMC(X).
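For large X the correction factor is the tail of Stirling's series. A hedged Python sketch using its first two terms (the library routines use a Chebyshev fit instead; names here are illustrative):

```python
import math

def lgam_corr(x):
    """Leading terms of the log Gamma correction: 1/(12x) - 1/(360x**3),
    i.e. LOG(GAMMA(X)) minus its Stirling part, valid for large x."""
    return 1.0 / (12.0 * x) - 1.0 / (360.0 * x ** 3)

def lgamma_stirling(x):
    """LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-0.5)*LOG(X) - X + correction."""
    return 0.5 * math.log(2 * math.pi) + (x - 0.5) * math.log(x) - x + lgam_corr(x)
```

At x = 10 this already matches LOG(GAMMA(10)) = LOG(362880) to better than 1e-7, since the next omitted series term is of order 1/(1260*x**5).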
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Solve a general system of linear equations.
Solve a general system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a general system of linear equations.
Solve a general system of linear equations.
Solve a general system of linear equations. Iterative refinement is used to obtain an error estimate.
Find the zeros of a polynomial with complex coefficients.
Find the zeros of a polynomial with real coefficients.
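One classical way to find all zeros of a polynomial simultaneously is the Durand-Kerner iteration, sketched here in Python as a hedged illustration (the library routines use different algorithms; `poly_roots` is a hypothetical name):

```python
def poly_roots(coeffs, tol=1e-12, max_iter=200):
    """All roots of the polynomial with coefficients coeffs (leading
    coefficient first) by Durand-Kerner simultaneous iteration."""
    c = [complex(ci) / coeffs[0] for ci in coeffs]   # normalize to monic
    n = len(c) - 1
    def p(z):
        v = 0j
        for ci in c:                                 # Horner evaluation
            v = v * z + ci
        return v
    roots = [(0.4 + 0.9j) ** k for k in range(n)]    # customary starting points
    for _ in range(max_iter):
        new = []
        for i, r in enumerate(roots):
            d = 1.0 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    d *= (r - s)                     # product over other iterates
            new.append(r - p(r) / d)
        done = max(abs(a - b) for a, b in zip(new, roots)) < tol
        roots = new
        if done:
            break
    return roots
```

For z**2 - 1 the iteration settles on the two roots +1 and -1.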
The function of CDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. CDRIV1 allows complex-valued differential equations.
The function of CDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. CDRIV2 allows complex-valued differential equations.
The function of CDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. CDRIV3 allows complex-valued differential equations.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors with extended precision accumulation and result.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Factor a complex Hermitian matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a complex Hermitian positive definite matrix stored in band form and estimate the condition number of the matrix.
Factor a complex Hermitian positive definite matrix and estimate the condition number of the matrix.
Factor a complex Hermitian positive definite matrix stored in packed form and estimate the condition number of the matrix.
Factor a complex symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Estimate the condition number of a triangular matrix.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Estimate the condition number of a triangular matrix.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Estimate the condition number of a triangular matrix.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram FC.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram DFC.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
The routine calculates an approximation RESULT to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation RESULT to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine determines the limit of a given sequence of approximations by means of the Epsilon algorithm of P. Wynn. An estimate of the absolute error is also given. The condensed Epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
The routine calculates an approximation RESULT to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation RESULT to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine determines the limit of a given sequence of approximations, by means of the Epsilon algorithm of P. Wynn. An estimate of the absolute error is also given. The condensed Epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
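Wynn's epsilon algorithm builds a table from the recurrence eps(k+1, n) = eps(k-1, n+1) + 1/(eps(k, n+1) - eps(k, n)), with eps(-1, n) = 0 and eps(0, n) = S(n); the even columns approximate the limit. A hedged Python sketch keeping only two columns at a time (in the spirit of the condensed table; `wynn_epsilon` is a hypothetical name):

```python
def wynn_epsilon(s):
    """Accelerate the sequence of partial sums s with Wynn's epsilon
    algorithm, returning the deepest even-column estimate."""
    n = len(s)
    prev2 = [0.0] * n          # epsilon_{-1} column (all zeros)
    prev1 = list(s)            # epsilon_0 column: the partial sums
    best = s[-1]
    for k in range(1, n):
        cur = [prev2[i + 1] + 1.0 / (prev1[i + 1] - prev1[i])
               for i in range(n - k)]
        if k % 2 == 0:         # even epsilon columns estimate the limit
            best = cur[0]
        prev2, prev1 = prev1, cur
    return best
```

On eleven partial sums of the alternating harmonic series, which by themselves are only accurate to about 0.04, the accelerated value agrees with LOG(2) to far better than 1e-6.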
Piecewise Cubic Hermite to B-Spline converter.
Piecewise Cubic Hermite to B-Spline converter.
Copy a vector.
Copy a vector.
Copy the negative of a vector to a vector.
Copy a vector.
Copy a vector.
Copy the negative of a vector to a vector.
Compute the log gamma correction factor so that LOG(CGAMMA(Z)) = 0.5*LOG(2.*PI) + (Z-0.5)*LOG(Z) - Z + C9LGMC(Z).
Compute the log Gamma correction factor so that LOG(DGAMMA(X)) = LOG(SQRT(2*PI)) + (X-5.)*LOG(X) - X + D9LGMC(X).
Compute the log Gamma correction factor so that LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + R9LGMC(X).
This function subprogram is used together with the routine DQAWF and defines the WEIGHT function.
This function subprogram is used together with the routine QAWF and defines the WEIGHT function.
Compute the cosine of an argument in degrees.
Compute the cosine of an argument in degrees.
Compute the forward cosine transform with odd wave numbers.
Initialize a work array for COSQF and COSQB.
Compute the cosine transform of a real, even sequence.
Initialize a work array for COST.
Calculate the covariance matrix for a nonlinear data fitting problem. It is intended to be used after a successful return from either DNLS1 or DNLS1E.
Calculate the covariance matrix for a nonlinear data fitting problem. It is intended to be used after a successful return from either SNLS1 or SNLS1E.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by PCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use CHFEV instead.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by DPCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use DCHFEV instead.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC. If only function values are required, use DPCHFE instead.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC. If only function values are required, use PCHFE instead.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by PCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use CHFEV instead.
Evaluate a cubic polynomial given in Hermite form at an array of points. While designed for use by PCHFE, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by DPCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use DCHFEV instead.
Evaluate a cubic polynomial given in Hermite form at an array of points. While designed for use by DPCHFE, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC. If only function values are required, use DPCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC. If only function values are required, use PCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC.
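The cubic-in-Hermite-form evaluators above combine the endpoint values and slopes through the four Hermite basis polynomials. A hedged single-point Python sketch (the library routines vectorize over arrays of points; `chfev` here is an illustrative stand-in, not the Fortran routine):

```python
def chfev(x1, x2, f1, f2, d1, d2, x):
    """Evaluate the cubic with values f1, f2 and derivatives d1, d2 at the
    interval ends x1, x2, using the standard Hermite basis on t in [0, 1]."""
    h = x2 - x1
    t = (x - x1) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2   # weight on f1
    h10 = t * (1 - t) ** 2             # weight on h*d1
    h01 = t * t * (3 - 2 * t)          # weight on f2
    h11 = t * t * (t - 1)              # weight on h*d2
    return h00 * f1 + h * h10 * d1 + h01 * f2 + h * h11 * d2
```

With f1 = 0, f2 = 1, d1 = 0, d2 = 3 on [0, 1] this reproduces x**3 exactly, e.g. 0.125 at x = 0.5.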
Piecewise Cubic Hermite to B-Spline converter.
Check a cubic Hermite function for monotonicity.
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See DPCHIC if user control is desired over boundary or switch conditions.)
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
Piecewise Cubic Hermite to B-Spline converter.
Check a cubic Hermite function for monotonicity.
Documentation for PCHIP, a Fortran package for piecewise cubic Hermite interpolation of data.
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See PCHIC if user control is desired over boundary or switch conditions.)
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by PCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use CHFEV instead.
Evaluate a cubic polynomial given in Hermite form at an array of points. While designed for use by PCHFE, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by DPCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use DCHFEV instead.
Evaluate a cubic polynomial given in Hermite form at an array of points. While designed for use by DPCHFE, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance.
Compute the B-representation of a cubic spline which interpolates given data.
Compute the B-representation of a cubic spline which interpolates given data.
Compute the LU factorization of a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve the least squares problem for a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram FC.
Compute the LU factorization of a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve the least squares problem for a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Evaluate the variance function of the curve obtained by the constrained B-spline fitting subprogram DFC.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Solve a least squares problem for banded matrices using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Use the coefficients generated by DPOLFT to evaluate the polynomial fit of degree L, along with the first NDER of its derivatives, at a specified point.
Convert the DPOLFT coefficients to Taylor series form.
Fit discrete data in a least squares sense by polynomials in one variable.
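As a rough illustration of what a least-squares polynomial fit does (the SLATEC routines above use orthogonal polynomials internally and are far more robust for high degrees; the direct normal-equations approach below is only a sketch):

```python
def polyfit_normal(x, y, degree):
    """Fit sum(c[j]*t**j) to (x, y) by solving the normal equations.
    Illustrative only -- not the orthogonal-polynomial algorithm of
    POLFIT/DPOLFT."""
    n = degree + 1
    # Build A^T A and A^T y for the Vandermonde matrix A[i][j] = x[i]**j.
    ata = [[sum(xi ** (j + k) for xi in x) for k in range(n)] for j in range(n)]
    aty = [sum(yi * xi ** j for xi, yi in zip(x, y)) for j in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (aty[r] - sum(ata[r][c] * coef[c]
                                for c in range(r + 1, n))) / ata[r][r]
    return coef

# Data on the exact parabola y = 1 + 2x + 3x^2 is recovered to rounding error.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1 + 2 * t + 3 * t * t for t in xs]
c = polyfit_normal(xs, ys, 2)
```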
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Solve a linear least squares problem by performing a QR factorization of the matrix using Householder transformations.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Convert the POLFIT coefficients to Taylor series form.
Fit discrete data in a least squares sense by polynomials in one variable.
Use the coefficients generated by POLFIT to evaluate the polynomial fit of degree L, along with the first NDER of its derivatives, at a specified point.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Solve a complex block tridiagonal linear system of equations by a cyclic reduction algorithm.
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in cylindrical coordinates.
Solve a standard finite difference approximation to the Helmholtz equation in cylindrical coordinates.
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
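A toy sketch of the implicit residual form these codes work with: backward Euler applied to a scalar G(T,Y,YPRIME) = 0, with a Newton iteration (finite-difference Jacobian) at each step. This only illustrates the problem formulation; the actual codes use variable-order BDF methods with full error control.

```python
import math

def backward_euler_dae(g, y0, t0, t1, n):
    """Integrate the scalar residual equation g(t, y, y') = 0 with
    backward Euler: at each step solve g(t_new, y, (y - y_old)/h) = 0
    for y by Newton's method with a forward-difference derivative."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        t_new = t + h
        yn = y                       # predictor: previous value
        for _ in range(20):          # Newton iteration on the residual
            res = g(t_new, yn, (yn - y) / h)
            eps = 1e-7
            dres = (g(t_new, yn + eps, (yn + eps - y) / h) - res) / eps
            step = res / dres
            yn -= step
            if abs(step) < 1e-12:
                break
        t, y = t_new, yn
    return y

# y' = -y written as the residual yp + y = 0; y(1) approximates exp(-1)
# to first order in the step size.
y1 = backward_euler_dae(lambda t, y, yp: yp + y, 1.0, 0.0, 1.0, 1000)
err = abs(y1 - math.exp(-1.0))
```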
Compute the B-representation of a cubic spline which interpolates given data.
Compute the B-representation of a spline which interpolates given data.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Calculate the value of the spline and its derivatives from the B-representation.
Compute the B-representation of a cubic spline which interpolates given data.
Compute the B-representation of a spline which interpolates given data.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Calculate the value of the spline and its derivatives from the B-representation.
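Once a B-representation (knots plus B-spline coefficients, as produced by the interpolation routines above) is in hand, evaluation is classically done with de Boor's algorithm. The sketch below is a plain-Python version of that algorithm, in the spirit of the evaluators above but not the SLATEC code itself:

```python
def de_boor(p, t, c, x):
    """Evaluate a spline of degree p with knot sequence t and B-spline
    coefficients c at x, using de Boor's algorithm."""
    # Locate the knot interval: largest k with t[k] <= x, kept in range.
    k = p
    while k < len(t) - p - 2 and t[k + 1] <= x:
        k += 1
    d = [c[j + k - p] for j in range(p + 1)]   # the p+1 active coefficients
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Degree-1 (piecewise linear) spline taking the values 0, 1, 0 at 0, 1, 2.
knots = [0.0, 0.0, 1.0, 2.0, 2.0]
coef = [0.0, 1.0, 0.0]
```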
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X where XT(*) is a subdivision of the X interval.
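The same interval search is a one-liner with the standard library's `bisect` module (0-based here, whereas the Fortran routine is 1-based and also remembers the previous interval to speed up repeated nearby lookups):

```python
import bisect

def intrv(xt, x):
    """Return the largest index i (0-based) with xt[i] <= x, clamped
    into [0, len(xt) - 1] -- a simple analogue of the interval search."""
    i = bisect.bisect_right(xt, x) - 1
    return max(0, min(i, len(xt) - 1))

knots = [0.0, 1.0, 2.0, 4.0, 8.0]
```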
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Convert the DPOLFT coefficients to Taylor series form.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline, (PP-representation).
Fit discrete data in a least squares sense by polynomials in one variable.
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X where XT(*) is a subdivision of the X interval.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Convert the POLFIT coefficients to Taylor series form.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline, (PP-representation).
Fit discrete data in a least squares sense by polynomials in one variable.
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Compute the cosine of an argument in degrees.
Compute the cosine of an argument in degrees.
Compute the sine of an argument in degrees.
Compute the sine of an argument in degrees.
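The straightforward way to express these in Python is via degree-to-radian conversion. Note the SLATEC routines additionally take care to return exact values at multiples of 90 degrees, which this direct conversion does not:

```python
import math

def sindg(deg):
    """Sine of an angle given in degrees (simple analogue of SINDG)."""
    return math.sin(math.radians(deg))

def cosdg(deg):
    """Cosine of an angle given in degrees (simple analogue of COSDG)."""
    return math.cos(math.radians(deg))
```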
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
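The solvers above are adaptive, variable-step codes; the fixed-step classical fourth-order Runge-Kutta step below only illustrates the basic idea of advancing an initial value problem one step at a time (it is not the Fehlberg embedded pair those routines use):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate y' = -y, y(0) = 1 to t = 1; the exact answer is exp(-1).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
err = abs(y - math.exp(-1.0))
```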
Approximate the solution at XOUT by evaluating the polynomial computed in DSTEPS at XOUT. Must be used in conjunction with DSTEPS.
Integrate a system of first order ordinary differential equations one step.
Approximate the solution at XOUT by evaluating the polynomial computed in STEPS at XOUT. Must be used in conjunction with STEPS.
Integrate a system of first order ordinary differential equations one step.
Compute the determinant of a complex band matrix using the factors from CGBCO or CGBFA.
Compute the determinant and inverse of a matrix using the factors computed by CGECO or CGEFA.
Compute the determinant, inertia and inverse of a complex Hermitian matrix using the factors obtained from CHIFA.
Compute the determinant, inertia and inverse of a complex Hermitian matrix stored in packed form using the factors obtained from CHPFA.
Compute the determinant of a band matrix using the factors computed by CNBCO or CNBFA.
Compute the determinant of a complex Hermitian positive definite band matrix using the factors computed by CPBCO or CPBFA.
Compute the determinant and inverse of a certain complex Hermitian positive definite matrix using the factors computed by CPOCO, CPOFA, or CQRDC.
Compute the determinant and inverse of a complex Hermitian positive definite matrix using factors from CPPCO or CPPFA.
Compute the determinant and inverse of a complex symmetric matrix using the factors from CSIFA.
Compute the determinant and inverse of a complex symmetric matrix stored in packed form using the factors from CSPFA.
Compute the determinant and inverse of a triangular matrix.
Compute the determinant of a band matrix using the factors computed by DGBCO or DGBFA.
Compute the determinant and inverse of a matrix using the factors computed by DGECO or DGEFA.
Compute the determinant of a band matrix using the factors computed by DNBCO or DNBFA.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by DPBCO or DPBFA.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by DPOCO, DPOFA or DQRDC.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from DPPCO or DPPFA.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from DSIFA.
Compute the determinant, inertia, and inverse of a real symmetric matrix stored in packed form using the factors from DSPFA.
Compute the determinant and inverse of a triangular matrix.
Compute the determinant of a band matrix using the factors computed by SGBCO or SGBFA.
Compute the determinant and inverse of a matrix using the factors computed by SGECO or SGEFA.
Compute the determinant of a band matrix using the factors computed by SNBCO or SNBFA.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by SPBCO or SPBFA.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by SPOCO, SPOFA or SQRDC.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from SPPCO or SPPFA.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from SSIFA.
Compute the determinant, inertia, and inverse of a real symmetric matrix stored in packed form using the factors from SSPFA.
Compute the determinant and inverse of a triangular matrix.
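The determinant-from-factors idea used throughout this group: once a matrix has been LU-factored with partial pivoting, the determinant is just the product of U's diagonal with a sign flip per row interchange. A small dense sketch (not the LINPACK code):

```python
def lu_factor(a):
    """In-place Doolittle LU with partial pivoting; returns the combined
    LU matrix and the pivot row chosen at each elimination step."""
    n = len(a)
    piv = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        a[k], a[p] = a[p], a[k]
        piv[k] = p
        for r in range(k + 1, n):
            a[r][k] /= a[k][k]
            for c in range(k + 1, n):
                a[r][c] -= a[r][k] * a[k][c]
    return a, piv

def lu_det(lu, piv):
    """Determinant from the LU factors: product of U's diagonal, with a
    sign change for each actual row interchange."""
    det = 1.0
    for i in range(len(lu)):
        det *= lu[i][i]
        if piv[i] != i:
            det = -det
    return det

A = [[0.0, 1.0], [2.0, 3.0]]          # det(A) = -2
lu, piv = lu_factor([row[:] for row in A])
d = lu_det(lu, piv)
```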
Printer Plot of SLAP Column Format Matrix. Routine to print out a SLAP Column format matrix in a "printer plot" graphical representation.
Read in SLAP Triad Format Linear System. Routine to read in a SLAP Triad format matrix and right hand side and solution to the system, if known.
Write out SLAP Triad Format Linear System. Routine to write out a SLAP Triad format matrix and right hand side and solution to the system, if known.
Printer Plot of SLAP Column Format Matrix. Routine to print out a SLAP Column format matrix in a "printer plot" graphical representation.
Read in SLAP Triad Format Linear System. Routine to read in a SLAP Triad format matrix and right hand side and solution to the system, if known.
Write out SLAP Triad Format Linear System. Routine to write out a SLAP Triad format matrix and right hand side and solution to the system, if known.
Diagonal Scaling Preconditioner SLAP Normal Eqns Set Up. Routine to compute the inverse of the diagonal of the matrix A*A', where A is stored in SLAP-Column format.
Diagonal Scaling Preconditioner SLAP Set Up. Routine to compute the inverse of the diagonal of a matrix stored in the SLAP Column format.
Diagonal Scaling of system Ax = b. This routine scales (and unscales) the system Ax = b by symmetric diagonal scaling.
Diagonal Scaling Preconditioner SLAP Normal Eqns Set Up. Routine to compute the inverse of the diagonal of the matrix A*A', where A is stored in SLAP-Column format.
Diagonal Scaling Preconditioner SLAP Set Up. Routine to compute the inverse of the diagonal of a matrix stored in the SLAP Column format.
Diagonal Scaling of system Ax = b. This routine scales (and unscales) the system Ax = b by symmetric diagonal scaling.
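Symmetric diagonal scaling replaces Ax = b by (D A D)(D^-1 x) = D b with D = diag(1/sqrt(a_ii)), so the scaled matrix has a unit diagonal; after solving, the solution is unscaled by x = D x_scaled. A dense sketch of the idea (the SLAP routine operates on its sparse column format):

```python
import math

def symmetric_diagonal_scale(a, b):
    """Return the symmetrically scaled matrix D*A*D, scaled right-hand
    side D*b, and the scale factors d[i] = 1/sqrt(a[i][i])."""
    n = len(a)
    d = [1.0 / math.sqrt(a[i][i]) for i in range(n)]
    a_s = [[d[i] * a[i][j] * d[j] for j in range(n)] for i in range(n)]
    b_s = [d[i] * b[i] for i in range(n)]
    return a_s, b_s, d

A = [[4.0, 2.0], [2.0, 9.0]]
b = [2.0, 3.0]
As, bs, d = symmetric_diagonal_scale(A, b)
```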
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
Calculate the value and all derivatives of order less than NDERIV of all basis functions which do not vanish at X.
Evaluate the B-representation of a B-spline at X for the function value or any of its derivatives.
Calculate the value and all derivatives of order less than NDERIV of all basis functions which do not vanish at X.
Evaluate the B-representation of a B-spline at X for the function value or any of its derivatives.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Compute the Psi (or Digamma) function.
Compute the Psi (or Digamma) function.
Compute the Psi (or Digamma) function.
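A compact way to compute the digamma function, sketched below: shift the argument upward with the recurrence psi(x+1) = psi(x) + 1/x, then apply the asymptotic series. This is only an illustration; the SLATEC routines use Chebyshev expansions.

```python
import math

def psi(x):
    """Digamma function for x > 0 via argument shifting plus the
    asymptotic series psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + ..."""
    r = 0.0
    while x < 6.0:            # shift up to where the series is accurate
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return (r + math.log(x) - 0.5 / x
            - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252)))
```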
SLATEC Common Mathematical Library disclaimer and version.
SLATEC Common Mathematical Library disclaimer and version.
Documentation for BSPLINE, a package of subprograms for working with piecewise polynomial functions in B-representation.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Documentation for FFTPACK, a collection of Fast Fourier Transform routines.
Documentation for FNLIB, a collection of routines for evaluating elementary and special functions.
Documentation for PCHIP, a Fortran package for piecewise cubic Hermite interpolation of data.
Documentation for QUADPACK, a package of subprograms for automatic evaluation of one-dimensional definite integrals.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Compute the inner product of two vectors with extended precision accumulation.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors with extended precision accumulation.
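In Python the same effect (accumulating an inner product more accurately than naive floating-point summation) can be shown with `math.fsum` over the products. This mirrors the idea of the routines above, not their algorithm:

```python
import math

def dot_naive(x, y):
    """Plain left-to-right floating-point inner product."""
    s = 0.0
    for a, b in zip(x, y):
        s += a * b
    return s

def dot_extended(x, y):
    """Inner product with extra-precision accumulation via math.fsum,
    which sums the products exactly before the final rounding."""
    return math.fsum(a * b for a, b in zip(x, y))

# Naive accumulation loses the middle term; exact accumulation keeps it.
x = [1.0e16, 1.0, -1.0e16]
ones = [1.0, 1.0, 1.0]
```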
The function of DDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. DDRIV1 uses double precision arithmetic.
The function of DDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. DDRIV2 uses double precision arithmetic.
The function of DDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. DDRIV3 uses double precision arithmetic.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Calculate a double precision approximation to DRC(X,Y) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, DRD(X,Y,Z) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z non-negative and at most one of them zero, RF(X,Y,Z) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z non-negative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) * (t+P)**(-1) dt.
Calculate an approximation to RC(X,Y) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, RD(X,Y,Z) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z non-negative and at most one of them zero, RF(X,Y,Z) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z non-negative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) * (t+P)**(-1) dt.
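The degenerate integral RC can be computed with a few lines of Carlson's duplication algorithm, sketched below (a standard formulation of the method these routines are based on, not the SLATEC source):

```python
import math

def rc(x, y):
    """Carlson's degenerate elliptic integral
    RC(x,y) = (1/2) * Integral_0^inf (t+x)**(-1/2) * (t+y)**(-1) dt
    for x >= 0, y > 0, via the duplication theorem."""
    while True:
        lam = 2.0 * math.sqrt(x) * math.sqrt(y) + y
        x = 0.25 * (x + lam)
        y = 0.25 * (y + lam)
        mu = (x + y + y) / 3.0
        s = (y - mu) / mu
        if abs(s) < 1.0e-4:
            break
    # Truncated Taylor series about the degenerate point; error is O(s**6).
    return (1.0 + s * s * (0.3 + s * (1.0 / 7.0
            + s * (0.375 + s * 9.0 / 22.0)))) / math.sqrt(mu)
```

Useful checks: RC(x,x) = 1/sqrt(x) and RC(0,y) = pi/(2*sqrt(y)).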
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
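The core Levenberg-Marquardt idea can be shown in miniature for a one-variable problem: solve (J^T J + lambda) dx = -J^T r at each step and adapt the damping parameter. This toy loop only illustrates the idea behind the codes above, not their algorithm:

```python
def lm_minimize(residuals, jac, x0, lam=1e-3, iters=50):
    """Minimal one-variable Levenberg-Marquardt loop."""
    x = x0

    def cost(x):
        return sum(ri * ri for ri in residuals(x))

    for _ in range(iters):
        r = residuals(x)
        j = jac(x)
        jtj = sum(ji * ji for ji in j)
        jtr = sum(ji * ri for ji, ri in zip(j, r))
        dx = -jtr / (jtj + lam)
        if cost(x + dx) < cost(x):
            x += dx
            lam *= 0.5       # good step: trust the Gauss-Newton model more
        else:
            lam *= 10.0      # bad step: fall back toward gradient descent
    return x

# Residuals r1 = x - 2, r2 = 2*(x - 1); the sum of squares is minimized
# at x = 1.2 (set the derivative 2(x-2) + 8(x-1) to zero).
xmin = lm_minimize(lambda x: [x - 2.0, 2.0 * (x - 1.0)],
                   lambda x: [1.0, 2.0], 0.0)
```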
Reduce a real symmetric band matrix to a symmetric tridiagonal matrix and, optionally, accumulate orthogonal similarity transformations.
Compute the eigenvalues of a symmetric tridiagonal matrix in a given interval using Sturm sequencing.
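The counting kernel behind such bisection-based eigensolvers is the Sturm sequence: the number of negative pivots in the LDL^T factorization of T - sigma*I equals the number of eigenvalues below sigma. A sketch of that kernel (not the EISPACK code):

```python
def sturm_count(d, e, sigma):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are strictly less than sigma,
    counted as sign changes (negative pivots) of the Sturm sequence."""
    count = 0
    q = 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 if i > 0 else 0.0
        if q == 0.0:
            q = 1e-300        # standard guard against a zero pivot
        q = d[i] - sigma - off / q
        if q < 0.0:
            count += 1
    return count

# The matrix [[2, 1], [1, 2]] has eigenvalues 1 and 3.
```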
Compute some of the eigenvalues of a real symmetric matrix using the QR method with shifts of origin.
Compute the eigenvalues and, optionally, the eigenvectors of a complex general matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex general matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Compute the eigenvectors of a complex upper Hessenberg matrix associated with specified eigenvalues using inverse iteration.
Form the eigenvectors of a complex general matrix from the eigenvectors of an upper Hessenberg matrix output from COMHES.
Reduce a complex general matrix to complex upper Hessenberg form using stabilized elementary similarity transformations.
Compute the eigenvalues of a complex upper Hessenberg matrix using the modified LR method.
Compute the eigenvalues and eigenvectors of a complex upper Hessenberg matrix using the modified LR method.
Compute the eigenvalues of a complex upper Hessenberg matrix using the QR method.
Compute the eigenvalues and eigenvectors of a complex upper Hessenberg matrix.
Form the eigenvectors of a complex general matrix from the eigenvectors of an upper Hessenberg matrix output from CORTH.
Reduce a complex general matrix to complex upper Hessenberg form using unitary similarity transformations.
Documentation for EISPACK, a collection of subprograms for solving matrix eigenproblems.
Form the eigenvectors of a real general matrix from the eigenvectors of the upper Hessenberg matrix output from ELMHES.
Reduce a real general matrix to upper Hessenberg form using stabilized elementary similarity transformations.
Accumulate the stabilized elementary similarity transformations used in the reduction of a real general matrix to upper Hessenberg form by ELMHES.
Transform a certain real non-symmetric tridiagonal matrix to a symmetric tridiagonal matrix.
Transform a certain real non-symmetric tridiagonal matrix to a symmetric tridiagonal matrix.
Compute the eigenvalues of a real upper Hessenberg matrix using the QR method.
Compute the eigenvalues and eigenvectors of a real upper Hessenberg matrix using the QR method.
Compute the eigenvectors of a complex Hermitian matrix from the eigenvectors of a real symmetric tridiagonal matrix output from HTRID3.
Form the eigenvectors of a complex Hermitian matrix from the eigenvectors of a real symmetric tridiagonal matrix output from HTRIDI.
Reduce a complex Hermitian (packed) matrix to a real symmetric tridiagonal matrix by unitary similarity transformations.
Reduce a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations.
Compute the eigenvalues of a symmetric tridiagonal matrix using the implicit QL method.
Compute the eigenvalues and eigenvectors of a symmetric tridiagonal matrix using the implicit QL method.
Compute the eigenvalues of a symmetric tridiagonal matrix using the implicit QL method. Eigenvectors may be computed later.
Compute the eigenvectors of a real upper Hessenberg matrix associated with specified eigenvalues by inverse iteration.
Compute the singular value decomposition of a rectangular matrix and solve the related linear least squares problem.
Form the eigenvectors of a general real matrix from the eigenvectors of the upper Hessenberg matrix output from ORTHES.
Reduce a real general matrix to upper Hessenberg form using orthogonal similarity transformations.
Accumulate orthogonal similarity transformations in the reduction of a real general matrix by ORTHES.
The first step of the QZ algorithm for solving generalized matrix eigenproblems. Accepts a pair of real general matrices and reduces one of them to upper Hessenberg and the other to upper triangular form using orthogonal transformations. Usually followed by QZIT, QZVAL, QZVEC.
The second step of the QZ algorithm for generalized eigenproblems. Accepts an upper Hessenberg and an upper triangular matrix and reduces the former to quasi-triangular form while preserving the form of the latter. Usually preceded by QZHES and followed by QZVAL and QZVEC.
The third step of the QZ algorithm for generalized eigenproblems. Accepts a pair of real matrices, one in quasi-triangular form and the other in upper triangular form and computes the eigenvalues of the associated eigenproblem. Usually preceded by QZHES, QZIT, and followed by QZVEC.
The optional fourth step of the QZ algorithm for generalized eigenproblems. Accepts a matrix in quasi-triangular form and another in upper triangular form, computes the eigenvectors of the triangular problem, and transforms them back to the original coordinates. Usually preceded by QZHES, QZIT, and QZVAL.
Compute the largest or smallest eigenvalues of a symmetric tridiagonal matrix using the rational QR method with Newton correction.
Form the eigenvectors of a generalized symmetric eigensystem from the eigenvectors of the derived matrix output from REDUC or REDUC2.
Form the eigenvectors of a generalized symmetric eigensystem from the eigenvectors of the derived matrix output from REDUC2.
Reduce a generalized symmetric eigenproblem to a standard symmetric eigenproblem using Cholesky factorization.
Reduce a certain generalized symmetric eigenproblem to a standard symmetric eigenproblem using Cholesky factorization.
Compute the eigenvalues and, optionally, the eigenvectors of a real general matrix.
Compute the eigenvalues and eigenvectors for a real generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric band matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix packed into a one-dimensional array.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric tridiagonal matrix.
Compute the eigenvalues and eigenvectors of a special real tridiagonal matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real general matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix stored in packed form.
Compute the eigenvalues and eigenvectors of a symmetric tridiagonal matrix.
Reduce a real symmetric matrix to a symmetric tridiagonal matrix using orthogonal similarity transformations.
Reduce a real symmetric matrix to a symmetric tridiagonal matrix using and accumulating orthogonal transformations.
Reduce a real symmetric matrix stored in packed form to a symmetric tridiagonal matrix using orthogonal transformations.
Find those eigenvalues of a symmetric tridiagonal matrix in a given interval and their associated eigenvectors by Sturm sequencing.
Compute the eigenvalues of a symmetric tridiagonal matrix in a given interval using Sturm sequencing.
Compute the eigenvalues of a symmetric tridiagonal matrix by the QL method.
Compute the eigenvalues of a symmetric tridiagonal matrix using a rational variant of the QL method.
Form the eigenvectors of a certain real non-symmetric tridiagonal matrix from a symmetric tridiagonal matrix output from FIGI.
Balance a real general matrix and isolate eigenvalues whenever possible.
Form the eigenvectors of a real general matrix from the eigenvectors of the matrix output from BALANC.
Reduce a real symmetric band matrix to a symmetric tridiagonal matrix and, optionally, accumulate orthogonal similarity transformations.
Form the eigenvectors of a real symmetric band matrix associated with a set of ordered approximate eigenvalues by inverse iteration.
Form the eigenvectors of a complex general matrix from the eigenvectors of the matrix output from CBAL.
Balance a complex general matrix and isolate eigenvalues whenever possible.
Compute the eigenvalues and, optionally, the eigenvectors of a complex general matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex general matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Compute the eigenvectors of a complex upper Hessenberg matrix associated with specified eigenvalues using inverse iteration.
Form the eigenvectors of a complex general matrix from the eigenvectors of an upper Hessenberg matrix output from COMHES.
Reduce a complex general matrix to complex upper Hessenberg form using stabilized elementary similarity transformations.
Compute the eigenvalues and eigenvectors of a complex upper Hessenberg matrix using the modified LR method.
Compute the eigenvalues of a complex upper Hessenberg matrix using the QR method.
Compute the eigenvalues and eigenvectors of a complex upper Hessenberg matrix.
Form the eigenvectors of a complex general matrix from the eigenvectors of an upper Hessenberg matrix output from CORTH.
Reduce a complex general matrix to complex upper Hessenberg form using unitary similarity transformations.
Documentation for EISPACK, a collection of subprograms for solving matrix eigenproblems.
Form the eigenvectors of a real general matrix from the eigenvectors of the upper Hessenberg matrix output from ELMHES.
Reduce a real general matrix to upper Hessenberg form using stabilized elementary similarity transformations.
Accumulate the stabilized elementary similarity transformations used in the reduction of a real general matrix to upper Hessenberg form by ELMHES.
Transform a certain real non-symmetric tridiagonal matrix to a symmetric tridiagonal matrix.
Transform a certain real non-symmetric tridiagonal matrix to a symmetric tridiagonal matrix.
Compute the eigenvalues of a real upper Hessenberg matrix using the QR method.
Compute the eigenvalues and eigenvectors of a real upper Hessenberg matrix using the QR method.
Compute the eigenvectors of a complex Hermitian matrix from the eigenvectors of a real symmetric tridiagonal matrix output from HTRID3.
Form the eigenvectors of a complex Hermitian matrix from the eigenvectors of a real symmetric tridiagonal matrix output from HTRIDI.
Reduce a complex Hermitian (packed) matrix to a real symmetric tridiagonal matrix by unitary similarity transformations.
Reduce a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations.
Compute the eigenvalues of a symmetric tridiagonal matrix using the implicit QL method.
Compute the eigenvalues and eigenvectors of a symmetric tridiagonal matrix using the implicit QL method.
Compute the eigenvalues of a symmetric tridiagonal matrix using the implicit QL method. Eigenvectors may be computed later.
Compute the eigenvectors of a real upper Hessenberg matrix associated with specified eigenvalues by inverse iteration.
Compute the singular value decomposition of a rectangular matrix and solve the related linear least squares problem.
Form the eigenvectors of a general real matrix from the eigenvectors of the upper Hessenberg matrix output from ORTHES.
Reduce a real general matrix to upper Hessenberg form using orthogonal similarity transformations.
Accumulate orthogonal similarity transformations in the reduction of a real general matrix by ORTHES.
The first step of the QZ algorithm for solving generalized matrix eigenproblems. Accepts a pair of real general matrices and reduces one of them to upper Hessenberg and the other to upper triangular form using orthogonal transformations. Usually followed by QZIT, QZVAL, QZVEC.
The second step of the QZ algorithm for generalized eigenproblems. Accepts an upper Hessenberg and an upper triangular matrix and reduces the former to quasi-triangular form while preserving the form of the latter. Usually preceded by QZHES and followed by QZVAL and QZVEC.
The third step of the QZ algorithm for generalized eigenproblems. Accepts a pair of real matrices, one in quasi-triangular form and the other in upper triangular form and computes the eigenvalues of the associated eigenproblem. Usually preceded by QZHES, QZIT, and followed by QZVEC.
The optional fourth step of the QZ algorithm for generalized eigenproblems. Accepts a matrix in quasi-triangular form and another in upper triangular form, computes the eigenvectors of the triangular problem, and transforms them back to the original coordinates. Usually preceded by QZHES, QZIT, and QZVAL.
Compute the largest or smallest eigenvalues of a symmetric tridiagonal matrix using the rational QR method with Newton correction.
Form the eigenvectors of a generalized symmetric eigensystem from the eigenvectors of the derived matrix output from REDUC or REDUC2.
Form the eigenvectors of a generalized symmetric eigensystem from the eigenvectors of the derived matrix output from REDUC2.
Reduce a generalized symmetric eigenproblem to a standard symmetric eigenproblem using Cholesky factorization.
Reduce a certain generalized symmetric eigenproblem to a standard symmetric eigenproblem using Cholesky factorization.
Compute the eigenvalues and, optionally, the eigenvectors of a real general matrix.
Compute the eigenvalues and eigenvectors for a real generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric band matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix packed into a one-dimensional array.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric tridiagonal matrix.
Compute the eigenvalues and eigenvectors of a special real tridiagonal matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real general matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix stored in packed form.
Compute the eigenvectors of a symmetric tridiagonal matrix corresponding to specified eigenvalues, using inverse iteration.
Compute the eigenvalues and eigenvectors of a symmetric tridiagonal matrix.
Reduce a real symmetric matrix to a symmetric tridiagonal matrix using orthogonal similarity transformations.
Reduce a real symmetric matrix to a symmetric tridiagonal matrix using and accumulating orthogonal transformations.
Reduce a real symmetric matrix stored in packed form to a symmetric tridiagonal matrix using orthogonal transformations.
Find those eigenvalues of a symmetric tridiagonal matrix in a given interval and their associated eigenvectors by Sturm sequencing.
Form the eigenvectors of a real symmetric matrix from the eigenvectors of a symmetric tridiagonal matrix formed by TRED1.
Form the eigenvectors of a real symmetric matrix from the eigenvectors of a symmetric tridiagonal matrix formed by TRED3.
Form the eigenvectors of a certain real non-symmetric tridiagonal matrix from a symmetric tridiagonal matrix output from FIGI.
Balance a real general matrix and isolate eigenvalues whenever possible.
Form the eigenvectors of a real general matrix from the eigenvectors of the matrix output from BALANC.
Reduce a real symmetric band matrix to a symmetric tridiagonal matrix and, optionally, accumulate orthogonal similarity transformations.
Form the eigenvectors of a real symmetric band matrix associated with a set of ordered approximate eigenvalues by inverse iteration.
Compute the eigenvalues of a symmetric tridiagonal matrix in a given interval using Sturm sequencing.
Compute some of the eigenvalues of a real symmetric matrix using the QR method with shifts of origin.
Form the eigenvectors of a complex general matrix from the eigenvectors of the matrix output from CBAL.
Balance a complex general matrix and isolate eigenvalues whenever possible.
Compute the eigenvalues and, optionally, the eigenvectors of a complex general matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Compute the eigenvectors of a complex upper Hessenberg matrix associated with specified eigenvalues using inverse iteration.
Form the eigenvectors of a complex general matrix from the eigenvectors of the upper Hessenberg matrix output from COMHES.
Reduce a complex general matrix to complex upper Hessenberg form using stabilized elementary similarity transformations.
Compute the eigenvalues of a complex upper Hessenberg matrix using the modified LR method.
Compute the eigenvalues and eigenvectors of a complex upper Hessenberg matrix using the modified LR method.
Compute the eigenvalues of a complex upper Hessenberg matrix using the QR method.
Compute the eigenvalues and eigenvectors of a complex upper Hessenberg matrix.
Form the eigenvectors of a complex general matrix from the eigenvectors of the upper Hessenberg matrix output from CORTH.
Reduce a complex general matrix to complex upper Hessenberg form using unitary similarity transformations.
Documentation for EISPACK, a collection of subprograms for solving matrix eigen-problems.
Form the eigenvectors of a real general matrix from the eigenvectors of the upper Hessenberg matrix output from ELMHES.
Reduce a real general matrix to upper Hessenberg form using stabilized elementary similarity transformations.
Accumulate the stabilized elementary similarity transformations used in the reduction of a real general matrix to upper Hessenberg form by ELMHES.
Transform a certain real non-symmetric tridiagonal matrix to a symmetric tridiagonal matrix.
Transform a certain real non-symmetric tridiagonal matrix to a symmetric tridiagonal matrix.
Compute the eigenvalues of a real upper Hessenberg matrix using the QR method.
Compute the eigenvalues and eigenvectors of a real upper Hessenberg matrix using the QR method.
Compute the eigenvectors of a complex Hermitian matrix from the eigenvectors of a real symmetric tridiagonal matrix output from HTRID3.
Form the eigenvectors of a complex Hermitian matrix from the eigenvectors of a real symmetric tridiagonal matrix output from HTRIDI.
Reduce a complex Hermitian (packed) matrix to a real symmetric tridiagonal matrix by unitary similarity transformations.
Reduce a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations.
Compute the eigenvalues of a symmetric tridiagonal matrix using the implicit QL method.
Compute the eigenvalues and eigenvectors of a symmetric tridiagonal matrix using the implicit QL method.
Compute the eigenvalues of a symmetric tridiagonal matrix using the implicit QL method. Eigenvectors may be computed later.
Compute the eigenvectors of a real upper Hessenberg matrix associated with specified eigenvalues by inverse iteration.
Compute the singular value decomposition of a rectangular matrix and solve the related linear least squares problem.
Form the eigenvectors of a general real matrix from the eigenvectors of the upper Hessenberg matrix output from ORTHES.
Reduce a real general matrix to upper Hessenberg form using orthogonal similarity transformations.
Accumulate orthogonal similarity transformations in the reduction of a real general matrix by ORTHES.
The first step of the QZ algorithm for solving generalized matrix eigenproblems. Accepts a pair of real general matrices and reduces one of them to upper Hessenberg and the other to upper triangular form using orthogonal transformations. Usually followed by QZIT, QZVAL, QZVEC.
The second step of the QZ algorithm for generalized eigenproblems. Accepts an upper Hessenberg and an upper triangular matrix and reduces the former to quasi-triangular form while preserving the form of the latter. Usually preceded by QZHES and followed by QZVAL and QZVEC.
The third step of the QZ algorithm for generalized eigenproblems. Accepts a pair of real matrices, one in quasi-triangular form and the other in upper triangular form and computes the eigenvalues of the associated eigenproblem. Usually preceded by QZHES, QZIT, and followed by QZVEC.
The optional fourth step of the QZ algorithm for generalized eigenproblems. Accepts a matrix in quasi-triangular form and another in upper triangular form and computes the eigenvectors of the triangular problem and transforms them back to the original coordinates. Usually preceded by QZHES, QZIT, and QZVAL.
Compute the largest or smallest eigenvalues of a symmetric tridiagonal matrix using the rational QR method with Newton correction.
Form the eigenvectors of a generalized symmetric eigensystem from the eigenvectors of the derived matrix output from REDUC or REDUC2.
Form the eigenvectors of a generalized symmetric eigensystem from the eigenvectors of the derived matrix output from REDUC2.
Reduce a generalized symmetric eigenproblem to a standard symmetric eigenproblem using Cholesky factorization.
Reduce a certain generalized symmetric eigenproblem to a standard symmetric eigenproblem using Cholesky factorization.
Compute the eigenvalues and, optionally, the eigenvectors of a real general matrix.
Compute the eigenvalues and eigenvectors for a real generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric band matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a symmetric generalized eigenproblem.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix packed into a one dimensional array.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric tridiagonal matrix.
Compute the eigenvalues and eigenvectors of a special real tridiagonal matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix stored in packed form.
Compute the eigenvectors of a symmetric tridiagonal matrix corresponding to specified eigenvalues, using inverse iteration.
Compute the eigenvalues of a symmetric tridiagonal matrix by the QL method.
Compute the eigenvalues and eigenvectors of a symmetric tridiagonal matrix.
Compute the eigenvalues of a symmetric tridiagonal matrix using a rational variant of the QL method.
Form the eigenvectors of a real symmetric matrix from the eigenvectors of a symmetric tridiagonal matrix formed by TRED1.
Form the eigenvectors of a real symmetric matrix from the eigenvectors of a symmetric tridiagonal matrix formed by TRED3.
Reduce a real symmetric matrix to a symmetric tridiagonal matrix using orthogonal similarity transformations.
Reduce a real symmetric matrix to a symmetric tridiagonal matrix using and accumulating orthogonal transformations.
Reduce a real symmetric matrix stored in packed form to a symmetric tridiagonal matrix using orthogonal transformations.
Compute the eigenvalues of a symmetric tridiagonal matrix in a given interval using Sturm sequencing.
Find those eigenvalues of a symmetric tridiagonal matrix in a given interval and their associated eigenvectors by Sturm sequencing.
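Several of the tridiagonal entries above (BISECT, TRIDIB, TSTURM) locate eigenvalues in an interval by Sturm sequencing: a sign-count over the leading-principal-minor recurrence tells how many eigenvalues of the matrix lie below a trial point, and bisection on that count brackets each one. A minimal Python sketch of the counting step (the function name and the tiny-pivot guard are illustrative, not SLATEC's actual code):

```python
def sturm_count(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are strictly less than x.
    Counts negative values in the Sturm sequence q(i) = d(i) - x - e(i-1)**2/q(i-1)."""
    count = 0
    q = d[0] - x
    if q < 0.0:
        count += 1
    for i in range(1, len(d)):
        # Guard a zero pivot with a tiny number so the recurrence continues.
        q = d[i] - x - (e[i - 1] ** 2) / (q if q != 0.0 else 1e-300)
        if q < 0.0:
            count += 1
    return count
```

Repeated bisection on `sturm_count` isolates every eigenvalue in a requested interval to any desired tolerance, which is exactly the service BISECT and TRIDIB advertise.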
Compute the arc hyperbolic cosine.
Evaluate ln(1+X) accurately in the sense of relative error.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic tangent.
Evaluate LOG(1+Z) with second order relative accuracy so that LOG(1+Z) = Z - Z**2/2 + Z**3*C9LN2R(Z).
Compute the complex arc cosine.
Compute the arc hyperbolic cosine.
Compute the argument of a complex number.
Compute the complex arc sine.
Compute the arc hyperbolic sine.
Compute the complex arc tangent.
Compute the complex arc tangent in the proper quadrant.
Compute the arc hyperbolic tangent.
Compute the cube root.
Compute the cube root.
Compute the complex hyperbolic cosine.
Compute the cotangent.
Calculate the relative error exponential (EXP(X)-1)/X.
Evaluate ln(1+X) accurately in the sense of relative error.
Compute the principal value of the complex base 10 logarithm.
Compute the cosine of an argument in degrees.
Compute the cotangent.
Compute the complex hyperbolic sine.
Compute the complex tangent.
Compute the complex hyperbolic tangent.
Evaluate DATAN(X) with first order relative accuracy so that DATAN(X) = X + X**3*D9ATN1(X).
Evaluate LOG(1+X) with second order relative accuracy so that LOG(1+X) = X - X**2/2 + X**3*D9LN2R(X).
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic tangent.
Compute the cube root.
Compute the cosine of an argument in degrees.
Compute the cotangent.
Calculate the relative error exponential (EXP(X)-1)/X.
Evaluate ln(1+X) accurately in the sense of relative error.
Calculate a double precision approximation to DRC(X,Y) = Integral from zero to infinity of (1/2)*(t+X)**(-1/2)*(t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Compute the sine of an argument in degrees.
Calculate the relative error exponential (EXP(X)-1)/X.
Documentation for FNLIB, a collection of routines for evaluating elementary and special functions.
Evaluate ATAN(X) with first order relative accuracy so that ATAN(X) = X + X**3*R9ATN1(X).
Evaluate LOG(1+X) with second order relative accuracy so that LOG(1+X) = X - X**2/2 + X**3*R9LN2R(X).
Calculate an approximation to RC(X,Y) = Integral from zero to infinity of (1/2)*(t+X)**(-1/2)*(t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Compute the sine of an argument in degrees.
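The "relative error" entries above exist because the naive formulas (EXP(X)-1)/X and LOG(1+X) lose most of their significant digits to cancellation when X is tiny: EXP(X) and 1 agree to many digits before subtraction. A short Python sketch of the same idea using the standard library's compensated primitives (the function names `exprel` and `ln1p` are illustrative, not SLATEC's calling sequences):

```python
import math

def exprel(x):
    """Relative error exponential (EXP(X)-1)/X, stable near x = 0."""
    if x == 0.0:
        return 1.0  # the limit of (exp(x)-1)/x as x -> 0
    # expm1 computes exp(x)-1 without the catastrophic cancellation
    # of math.exp(x) - 1.0 for small x.
    return math.expm1(x) / x

def ln1p(x):
    """ln(1+X) accurate in the sense of relative error."""
    return math.log1p(x)  # avoids forming 1+x and losing x's low digits
```

For x = 1e-12, `math.exp(x) - 1.0` retains only a few correct digits, while `exprel` returns 1.0 to full precision.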
Solve by a cyclic reduction algorithm the linear system of equations that results from a finite difference approximation to certain 2-D elliptic PDE's on a centered grid.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in cylindrical coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in polar coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve the standard seven-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve a finite difference approximation to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve a standard finite difference approximation to the Helmholtz equation in cylindrical coordinates.
Solve a finite difference approximation to the Helmholtz equation in polar coordinates.
Solve a finite difference approximation to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve a block tridiagonal system of linear equations that results from a staggered grid finite difference approximation to 2-D elliptic PDE's.
Discretize and solve a second and, optionally, a fourth order finite difference approximation on a uniform grid to the general separable elliptic partial differential equation on a rectangle with any combination of periodic or mixed boundary conditions.
Solve for either the second or fourth order finite difference approximation to the solution of a separable elliptic partial differential equation on a rectangle. Any combination of periodic or mixed boundary conditions is allowed.
Calculate a double precision approximation to DRC(X,Y) = Integral from zero to infinity of (1/2)*(t+X)**(-1/2)*(t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Calculate an approximation to RC(X,Y) = Integral from zero to infinity of (1/2)*(t+X)**(-1/2)*(t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Solve a block tridiagonal system of linear equations (usually resulting from the discretization of separable two-dimensional elliptic equations).
Solve a block tridiagonal system of linear equations (usually resulting from the discretization of separable two-dimensional elliptic equations).
Solve a complex block tridiagonal linear system of equations by a cyclic reduction algorithm.
Solve a three-dimensional block tridiagonal linear system which arises from a finite difference approximation to a three-dimensional Poisson equation using the Fourier transform package FFTPACK written by Paul Swarztrauber.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
This function subprogram is used together with the routine DQAWS and defines the WEIGHT function.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
This function subprogram is used together with the routine QAWS and defines the WEIGHT function.
The routine determines the limit of a given sequence of approximations, by means of the Epsilon algorithm of P. Wynn. An estimate of the absolute error is also given. The condensed Epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
The routine determines the limit of a given sequence of approximations, by means of the Epsilon algorithm of P. Wynn. An estimate of the absolute error is also given. The condensed Epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Symbolic dump (should be locally written).
Reset current error number to zero.
Allow user control over handling of errors.
Print the error tables and then clear them.
Abort program execution and print error message.
Set maximum number of times any error message is to be printed.
Record that an error has occurred.
Return the current value of the error control flag.
Return unit number(s) to which error messages are being sent.
Return the (first) output file to which error messages are being sent.
Set the error control flag.
Set logical unit numbers (up to 5) to which error messages are to be sent.
Set output file to which error messages are to be sent.
SLAP WORK/IWORK Array Bounds Checker. This routine checks the work array lengths and interfaces to the SLATEC error handler if a problem is found.
SLAP WORK/IWORK Array Bounds Checker. This routine checks the work array lengths and interfaces to the SLATEC error handler if a problem is found.
Error handler for the Level 2 and Level 3 BLAS Routines.
Process error messages for SLATEC and other libraries.
Save or recall global variables needed by error handling routines.
Print error messages processed by XERMSG.
Save or recall global variables needed by error handling routines.
Return the most recent error number.
Compute the Euclidean length (L2 norm) of a vector.
Compute the unitary norm of a complex vector.
Compute the Euclidean length (L2 norm) of a vector.
Compute the Euclidean length (L2 norm) of a vector.
Compute the unitary norm of a complex vector.
Compute the Euclidean length (L2 norm) of a vector.
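The Euclidean-norm entries above do more than take the square root of a sum of squares: they rescale on the fly so that vectors whose components are near the overflow or underflow thresholds still get an accurate norm. A Python sketch of that running-scale idea (illustrative only; the library routines use machine-dependent cutoff constants):

```python
import math

def enorm(v):
    """Overflow/underflow-resistant Euclidean length (L2 norm) of v.
    Maintains norm**2 = scale**2 * ssq, rescaling whenever a larger
    component is encountered, so intermediate squares never overflow."""
    scale, ssq = 0.0, 1.0
    for x in v:
        ax = abs(x)
        if ax == 0.0:
            continue
        if scale < ax:
            # Re-express the accumulated sum relative to the new scale.
            ssq = 1.0 + ssq * (scale / ax) ** 2
            scale = ax
        else:
            ssq += (ax / scale) ** 2
    return scale * math.sqrt(ssq)
```

With components of size 1e200, squaring naively would overflow a double, but `enorm` returns the correct value because each ratio squared stays near 1.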
Calculate the value and all derivatives of order less than NDERIV of all basis functions which do not vanish at X.
Calculate the value of all (possibly) nonzero basis functions at X.
Evaluate the B-representation of a B-spline at X for the function value or any of its derivatives.
Calculate the value and all derivatives of order less than NDERIV of all basis functions which do not vanish at X.
Calculate the value of all (possibly) nonzero basis functions at X.
Evaluate the B-representation of a B-spline at X for the function value or any of its derivatives.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Calculate the relative error exponential (EXP(X)-1)/X.
Calculate the relative error exponential (EXP(X)-1)/X.
Calculate the relative error exponential (EXP(X)-1)/X.
Compute repeated integrals of the K-zero Bessel function.
Compute repeated integrals of the K-zero Bessel function.
Compute the exponential integral E1(X).
Compute the exponential integral Ei(X).
Compute an M member sequence of exponential integrals E(N+K,X), K=0,1,...,M-1 for N .GE. 1 and X .GE. 0.
Compute the exponential integral E1(X).
Compute the exponential integral Ei(X).
Compute an M member sequence of exponential integrals E(N+K,X), K=0,1,...,M-1 for N .GE. 1 and X .GE. 0.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Calculate the Airy function for a negative argument and an exponentially scaled Airy function for a non-negative argument.
Calculate the Airy function for a negative argument and an exponentially scaled Airy function for a non-negative argument.
To provide double-precision floating-point arithmetic with an extended exponent range.
To provide double-precision floating-point arithmetic with an extended exponent range.
To provide double-precision floating-point arithmetic with an extended exponent range.
To provide double-precision floating-point arithmetic with an extended exponent range.
To provide double-precision floating-point arithmetic with an extended exponent range.
To provide double-precision floating-point arithmetic with an extended exponent range.
To provide single-precision floating-point arithmetic with an extended exponent range.
To provide single-precision floating-point arithmetic with an extended exponent range.
To provide single-precision floating-point arithmetic with an extended exponent range.
To provide single-precision floating-point arithmetic with an extended exponent range.
To provide single-precision floating-point arithmetic with an extended exponent range.
To provide single-precision floating-point arithmetic with an extended exponent range.
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities) are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine determines the limit of a given sequence of approximations, by means of the Epsilon algorithm of P. Wynn. An estimate of the absolute error is also given. The condensed Epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities) are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine determines the limit of a given sequence of approximations, by means of the Epsilon algorithm of P. Wynn. An estimate of the absolute error is also given. The condensed Epsilon table is computed. Only those elements needed for the computation of the next diagonal are preserved.
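The Epsilon-algorithm entries above accelerate a slowly convergent sequence by building Wynn's epsilon table, whose even-numbered columns are successively better estimates of the limit. A compact Python sketch of the full-table version (the condensed one-diagonal storage those routines use is omitted here for clarity; the function name is illustrative):

```python
import math

def wynn_epsilon(s):
    """Wynn's epsilon algorithm: extrapolate the limit of sequence s.
    eps(-1) column is zero, eps(0) is s itself, and
    eps(k+1)[j] = eps(k-1)[j+1] + 1/(eps(k)[j+1] - eps(k)[j]).
    Even columns approximate the limit; the last one computed is returned."""
    n = len(s)
    prev_prev = [0.0] * n   # eps(-1) column: all zeros
    prev = list(s)          # eps(0) column: the sequence itself
    best = s[-1]
    for k in range(1, n):
        cur = []
        for j in range(len(prev) - 1):
            diff = prev[j + 1] - prev[j]
            if diff == 0.0:          # table breakdown: keep last estimate
                return best
            cur.append(prev_prev[j + 1] + 1.0 / diff)
        if k % 2 == 0:               # even columns carry the estimates
            best = cur[-1]
        prev_prev, prev = prev, cur
    return best

# Partial sums of the alternating harmonic series, whose limit is ln 2.
s, t = [], 0.0
for i in range(1, 11):
    t += (-1.0) ** (i + 1) / i
    s.append(t)
est = wynn_epsilon(s)
```

On these ten partial sums the raw sequence is still off by several percent, while the extrapolated value agrees with ln 2 to many digits.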
This routine computes the Chebyshev series expansions of degrees 12 and 24 of a function using a fast Fourier transform method: F(X) = SUM(K=1,...,13) (CHEB12(K)*T(K-1,X)) and F(X) = SUM(K=1,...,25) (CHEB24(K)*T(K-1,X)), where T(K,X) is the Chebyshev polynomial of degree K.
Documentation for FFTPACK, a collection of Fast Fourier Transform routines.
This routine computes the Chebyshev series expansions of degrees 12 and 24 of a function using a fast Fourier transform method: F(X) = SUM(K=1,...,13) (CHEB12(K)*T(K-1,X)) and F(X) = SUM(K=1,...,25) (CHEB24(K)*T(K-1,X)), where T(K,X) is the Chebyshev polynomial of degree K.
Documentation for FFTPACK, a collection of Fast Fourier Transform routines.
Compute the unnormalized inverse of CFFTF.
Compute the unnormalized inverse of CFFTF1.
Compute the forward transform of a complex, periodic sequence.
Compute the forward transform of a complex, periodic sequence.
Initialize a work array for CFFTF and CFFTB.
Initialize a real and an integer work array for CFFTF1 and CFFTB1.
Compute the unnormalized inverse cosine transform.
Compute the unnormalized inverse of COSQF1.
Compute the forward cosine transform with odd wave numbers.
Compute the forward cosine transform with odd wave numbers.
Initialize a work array for COSQF and COSQB.
Compute the cosine transform of a real, even sequence.
Initialize a work array for COST.
Compute a simplified real, periodic, backward fast Fourier transform.
Compute a simplified real, periodic, fast Fourier forward transform.
Initialize a work array for EZFFTF and EZFFTB.
Compute the backward fast Fourier transform of a real coefficient array.
Compute the backward fast Fourier transform of a real coefficient array.
Compute the forward transform of a real, periodic sequence.
Compute the forward transform of a real, periodic sequence.
Initialize a work array for RFFTF and RFFTB.
Initialize a real and an integer work array for RFFTF1 and RFFTB1.
Compute the unnormalized inverse of SINQF.
Compute the forward sine transform with odd wave numbers.
Initialize a work array for SINQF and SINQB.
Compute the sine transform of a real, odd sequence.
Initialize a work array for SINT.
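The FFT entries above repeatedly say "unnormalized inverse": applying the forward transform and then the backward one returns the original sequence multiplied by its length N, and the caller divides by N. A tiny pure-Python DFT (illustrative only; not the fast algorithm those routines use) demonstrates the convention:

```python
import cmath

def dft(x, sign):
    """Unnormalized discrete Fourier transform of a complex sequence.
    sign=-1 is the forward transform, sign=+1 the unnormalized backward
    transform; neither applies a 1/N factor."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
# backward(forward(x)) scales every element by n = 4, not by 1.
y = dft(dft(x, -1), +1)
```

Dividing `y` by `len(x)` recovers `x`, which is exactly the normalization step left to the caller.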
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Calculate the relative error exponential (EXP(X)-1)/X.
Evaluate DATAN(X) with first order relative accuracy so that DATAN(X) = X + X**3*D9ATN1(X).
Calculate the relative error exponential (EXP(X)-1)/X.
Calculate a generalization of Pochhammer's symbol starting from first order.
Calculate the relative error exponential (EXP(X)-1)/X.
Calculate a generalization of Pochhammer's symbol starting from first order.
Evaluate ATAN(X) with first order relative accuracy so that ATAN(X) = X + X**3*R9ATN1(X).
Solve a block tridiagonal system of linear equations (usually resulting from the discretization of separable two-dimensional elliptic equations).
Solve a block tridiagonal system of linear equations (usually resulting from the discretization of separable two-dimensional elliptic equations).
Solve a complex block tridiagonal linear system of equations by a cyclic reduction algorithm.
Solve by a cyclic reduction algorithm the linear system of equations that results from a finite difference approximation to certain 2-D elliptic PDE's on a centered grid.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in cylindrical coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in polar coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve the standard seven-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve a finite difference approximation to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve a standard finite difference approximation to the Helmholtz equation in cylindrical coordinates.
Solve a finite difference approximation to the Helmholtz equation in polar coordinates.
Solve a finite difference approximation to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve a three-dimensional block tridiagonal linear system which arises from a finite difference approximation to a three-dimensional Poisson equation using the Fourier transform package FFTPAK written by Paul Swarztrauber.
Solve a block tridiagonal system of linear equations that results from a staggered grid finite difference approximation to 2-D elliptic PDE's.
Discretize and solve a second and, optionally, a fourth order finite difference approximation on a uniform grid to the general separable elliptic partial differential equation on a rectangle with any combination of periodic or mixed boundary conditions.
Solve for either the second or fourth order finite difference approximation to the solution of a separable elliptic partial differential equation on a rectangle. Any combination of periodic or mixed boundary conditions is allowed.
Compute the arc hyperbolic cosine.
Evaluate the Airy function.
Calculate the Airy function for a negative argument and an exponentially scaled Airy function for a non-negative argument.
Compute the natural logarithm of the complete Beta function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithmic integral.
Compute the logarithm of the absolute value of the Gamma function.
Evaluate ln(1+X) accurate in the sense of relative error.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic tangent.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Compute the complete Beta function.
Calculate the incomplete Beta function.
Evaluate the Bairy function (the Airy function of the second kind).
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Compute the binomial coefficients.
Evaluate (Z+0.5)*LOG((Z+1.)/Z) - 1.0 with relative accuracy.
Compute the log gamma correction factor so that LOG(CGAMMA(Z)) = 0.5*LOG(2.*PI) + (Z-0.5)*LOG(Z) - Z + C9LGMC(Z).
Evaluate LOG(1+Z) from second order relative accuracy so that LOG(1+Z) = Z - Z**2/2 + Z**3*C9LN2R(Z).
Compute the complex arc cosine.
Compute the arc hyperbolic cosine.
Compute the argument of a complex number.
Compute the complex arc sine.
Compute the arc hyperbolic sine.
Compute the complex arc tangent.
Compute the complex arc tangent in the proper quadrant.
Compute the arc hyperbolic tangent.
Compute the complete Beta function.
Compute the cube root.
Compute the cube root.
Compute the complex hyperbolic cosine.
Compute the cotangent.
Calculate the relative error exponential (EXP(X)-1)/X.
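The point of a relative-error exponential is to avoid cancellation near X = 0; a minimal Python sketch of the idea (the name `exprel` is ours, not the library's):

```python
import math

def exprel(x):
    """Relative error exponential (EXP(X)-1)/X, stable near x = 0."""
    if x == 0.0:
        return 1.0  # limit of (e**x - 1)/x as x -> 0
    # math.expm1 computes exp(x) - 1 without cancellation for small x
    return math.expm1(x) / x
```

For example, `exprel(1e-12)` stays accurate where the naive `(math.exp(1e-12) - 1) / 1e-12` loses most of its digits.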
Compute the complete Gamma function.
Compute the reciprocal of the Gamma function.
Compute the logarithmic confluent hypergeometric function.
Compute the natural logarithm of the complete Beta function.
Compute the logarithm of the absolute value of the Gamma function.
Evaluate ln(1+X) accurate in the sense of relative error.
Compute the principal value of the complex base 10 logarithm.
Compute the cosine of an argument in degrees.
Compute the cotangent.
Compute the Psi (or Digamma) function.
Evaluate a Chebyshev series.
Compute the complex hyperbolic sine.
Compute the complex tangent.
Compute the complex hyperbolic tangent.
Evaluate the Airy modulus and phase.
Evaluate DATAN(X) from first order relative accuracy so that DATAN(X) = X + X**3*D9ATN1(X).
Evaluate the modulus and phase for the J0 and Y0 Bessel functions.
Evaluate the modulus and phase for the J1 and Y1 Bessel functions.
Evaluate, for large Z, Z**A * U(A,B,Z), where U is the logarithmic confluent hypergeometric function.
Compute the complementary incomplete Gamma function for A near a negative integer and X small.
Compute Tricomi's incomplete Gamma function for small arguments.
Compute Bessel functions EXP(X)*K-SUB-XNU(X) and EXP(X)*K-SUB-XNU+1(X) for 0.0 .LE. XNU .LT. 1.0.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Compute the log Gamma correction factor so that LOG(DGAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + D9LGMC(X).
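The correction factor can be sanity-checked against the standard library's log-gamma; a hedged Python sketch (the name `lgmc` is ours):

```python
import math

def lgmc(x):
    """Log-gamma correction: LOG(GAMMA(X)) minus the leading Stirling terms.
    Meaningful for moderately large x (the library requires x >= 10)."""
    stirling = 0.5 * math.log(2.0 * math.pi) + (x - 0.5) * math.log(x) - x
    return math.lgamma(x) - stirling
```

For large x the correction behaves like 1/(12*x), which makes a quick consistency check possible.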
Evaluate LOG(1+X) from second order relative accuracy so that LOG(1+X) = X - X**2/2 + X**3*D9LN2R(X).
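The identity can be illustrated in Python (the name `ln2r` is ours; this naive rearrangement itself loses accuracy for very small X, which is exactly why a dedicated routine exists):

```python
import math

def ln2r(x):
    """Second-order relative-accuracy piece of log(1+x):
    log(1+x) = x - x**2/2 + x**3*ln2r(x)."""
    if x == 0.0:
        return 1.0 / 3.0  # series limit: 1/3 - x/4 + x**2/5 - ...
    # naive for tiny nonzero x (cancellation); fine for illustration
    return (math.log1p(x) - x + 0.5 * x * x) / x**3
```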
Pack a base 2 exponent into a floating point number.
Unpack a floating point number X so that X = Y*2**N.
Evaluate the Airy function.
Compute the arc hyperbolic cosine.
Calculate the Airy function for a negative argument and an exponentially scaled Airy function for a non-negative argument.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic tangent.
Compute Dawson's function.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Compute the complete Beta function.
Calculate the incomplete Beta function.
Evaluate the Bairy function (the Airy function of the second kind).
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Compute the binomial coefficients.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute the cube root.
Compute the logarithmic confluent hypergeometric function.
Compute the cosine of an argument in degrees.
Compute the cotangent.
Evaluate a Chebyshev series.
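Chebyshev series of this kind are evaluated by the Clenshaw recurrence; a Python sketch following the usual convention that the leading coefficient is halved (the name `csevl` echoes the routine, but this is our sketch):

```python
def csevl(x, cs):
    """Evaluate cs[0]/2 + sum cs[k]*T_k(x) for x in [-1, 1]
    by the backward Clenshaw recurrence."""
    b0 = b1 = b2 = 0.0
    twox = 2.0 * x
    for c in reversed(cs):
        b2, b1 = b1, b0
        b0 = twox * b1 - b2 + c
    return 0.5 * (b0 - b2)
```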
Compute Dawson's function.
Compute the exponential integral E1(X).
Compute the exponential integral Ei(X).
Compute the error function.
Compute the complementary error function.
Calculate the relative error exponential (EXP(X)-1)/X.
Compute the factorial function.
Evaluate the incomplete Gamma function.
Calculate the complementary incomplete Gamma function.
Calculate Tricomi's form of the incomplete Gamma function.
Compute the minimum and maximum bounds for the argument in the Gamma function.
Compute the complete Gamma function.
Compute the reciprocal of the Gamma function.
Compute the natural logarithm of the complete Beta function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithmic integral.
Compute the logarithm of the absolute value of the Gamma function.
Evaluate ln(1+X) accurate in the sense of relative error.
Evaluate a generalization of Pochhammer's symbol.
Calculate a generalization of Pochhammer's symbol starting from first order.
Compute the Psi (or Digamma) function.
Compute the sine of an argument in degrees.
Compute a form of Spence's integral due to K. Mitchell.
Compute the exponential integral E1(X).
Compute the exponential integral Ei(X).
Compute the error function.
Compute the complementary error function.
Calculate the relative error exponential (EXP(X)-1)/X.
Compute the factorial function.
Evaluate the incomplete Gamma function.
Calculate the complementary incomplete Gamma function.
Calculate Tricomi's form of the incomplete Gamma function.
Compute the minimum and maximum bounds for the argument in the Gamma function.
Compute the complete Gamma function.
Compute the reciprocal of the Gamma function.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
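A sketch of the idea: scan the coefficients from the tail, accumulating absolute values until the requested tolerance is exceeded (the name `inits` echoes the routine; the library version's error handling is not reproduced):

```python
def inits(cs, eta):
    """Number of leading Chebyshev coefficients needed so that the
    discarded tail (sum of |cs[k]| beyond them) stays below eta."""
    err = 0.0
    for i in range(len(cs) - 1, -1, -1):
        err += abs(cs[i])
        if err > eta:
            return i + 1  # keep coefficients cs[0..i]
    return len(cs)  # whole series needed (library would flag this)
```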
Compute the Psi (or Digamma) function.
Evaluate a generalization of Pochhammer's symbol.
Calculate a generalization of Pochhammer's symbol starting from first order.
Evaluate the Airy modulus and phase.
Evaluate ATAN(X) from first order relative accuracy so that ATAN(X) = X + X**3*R9ATN1(X).
Evaluate, for large Z, Z**A * U(A,B,Z), where U is the logarithmic confluent hypergeometric function.
Compute the complementary incomplete Gamma function for A near a negative integer and for small X.
Compute Tricomi's incomplete Gamma function for small arguments.
Generate a uniformly distributed random number.
Compute Bessel functions EXP(X)*K-SUB-XNU(X) and EXP(X)*K-SUB-XNU+1(X) for 0.0 .LE. XNU .LT. 1.0.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Compute the log Gamma correction factor so that LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + R9LGMC(X).
Evaluate LOG(1+X) from second order relative accuracy so that LOG(1+X) = X - X**2/2 + X**3*R9LN2R(X).
Pack a base 2 exponent into a floating point number.
Unpack a floating point number X so that X = Y*2**N.
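These pack/unpack operations correspond to the standard `frexp`/`ldexp` primitives, e.g. in Python:

```python
import math

# Unpack: x = y * 2**n with 0.5 <= |y| < 1.
y, n = math.frexp(48.0)   # 48.0 = 0.75 * 2**6

# Pack: rebuild the number from mantissa and base-2 exponent.
x = math.ldexp(y, n)
```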
Generate a normally distributed (Gaussian) random number.
Generate a uniformly distributed random number.
Compute the sine of an argument in degrees.
Compute a form of Spence's integral due to K. Mitchell.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
Compute the unnormalized inverse of CFFTF.
Compute the unnormalized inverse of CFFTF1.
Compute the forward transform of a complex, periodic sequence.
Compute the forward transform of a complex, periodic sequence.
Initialize a work array for CFFTF and CFFTB.
Initialize a real and an integer work array for CFFTF1 and CFFTB1.
Compute the unnormalized inverse of COSQF1.
Compute the forward cosine transform with odd wave numbers.
A simplified real, periodic, backward fast Fourier transform.
Compute a simplified real, periodic, fast Fourier forward transform.
Initialize a work array for EZFFTF and EZFFTB.
Compute the backward fast Fourier transform of a real coefficient array.
Compute the backward fast Fourier transform of a real coefficient array.
Compute the forward transform of a real, periodic sequence.
Compute the forward transform of a real, periodic sequence.
Initialize a work array for RFFTF and RFFTB.
Initialize a real and an integer work array for RFFTF1 and RFFTB1.
Compute the unnormalized inverse of SINQF.
Compute the forward sine transform with odd wave numbers.
Initialize a work array for SINQF and SINQB.
Compute the sine transform of a real, odd sequence.
Initialize a work array for SINT.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Evaluate (Z+0.5)*LOG((Z+1.)/Z) - 1.0 with relative accuracy.
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
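To illustrate the adaptive strategy (not the library's actual rule), a plain adaptive Simpson scheme with interval bisection stands in for the 8-point Legendre-Gauss rule:

```python
def adaptive_simpson(f, a, b, tol=1e-9):
    """Globally adaptive quadrature of f over (a, b) by bisecting any
    subinterval whose two-panel/one-panel Simpson estimates disagree."""
    def simpson(fa, fm, fb, h):
        return h * (fa + 4.0 * fm + fb) / 6.0

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        if abs(left + right - whole) <= 15.0 * tol:
            # accept, with Richardson correction
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2) +
                recurse(m, b, fm, frm, fb, right, tol / 2))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)
```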
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Generate a normally distributed (Gaussian) random number.
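One classical way to obtain Gaussian deviates from uniform ones is the Box-Muller transform; a sketch (not necessarily the method the library routine uses):

```python
import math
import random

def normal_pair(rng=random):
    """Return two independent standard normal deviates generated
    from two uniform deviates by the Box-Muller transform."""
    u1 = rng.random() or 1e-300  # guard against log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return (r * math.cos(2.0 * math.pi * u2),
            r * math.sin(2.0 * math.pi * u2))
```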
The function of CDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. CDRIV1 allows complex-valued differential equations.
The function of CDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. CDRIV2 allows complex-valued differential equations.
The function of CDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. CDRIV3 allows complex-valued differential equations.
The function of DDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. DDRIV1 uses double precision arithmetic.
The function of DDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. DDRIV2 uses double precision arithmetic.
The function of DDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. DDRIV3 uses double precision arithmetic.
The function of SDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. SDRIV1 uses single precision arithmetic.
The function of SDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. SDRIV2 uses single precision arithmetic.
The function of SDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. SDRIV3 uses single precision arithmetic.
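The non-stiff case of such drivers can be mimicked, very crudely, by a fixed-step classical Runge-Kutta integrator (the real drivers use variable-order, variable-step methods with error control):

```python
def rk4(f, y0, t0, t1, n):
    """Integrate dy/dt = f(y, t) from t0 to t1 in n classical
    fourth-order Runge-Kutta steps; returns y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(y, t)
        k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(y + h * k3, t + h)
        y += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += h
    return y
```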
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a complex general matrix.
Factor a matrix using Gaussian elimination.
Solve a general system of linear equations.
Solve a general system of linear equations. Iterative refinement is used to obtain an error estimate.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Factor a matrix using Gaussian elimination.
Solve a general system of linear equations.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the eigenvalues and, optionally, the eigenvectors of a real general matrix.
Factor a matrix using Gaussian elimination.
Solve a general system of linear equations.
Solve a general system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a general system of linear equations.
Solve a general system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a general system of linear equations.
Solve a general system of linear equations.
Solve a general system of linear equations. Iterative refinement is used to obtain an error estimate.
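The factor/solve pattern behind these routines is Gaussian elimination with partial pivoting followed by back-substitution; a compact Python sketch combining both phases (the name `lu_solve` is ours):

```python
def lu_solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    `a` (list of row lists) and `b` (list) are modified in place."""
    n = len(a)
    for k in range(n):
        # pivot: move the largest remaining |a[i][k]| onto the diagonal
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # back-substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```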
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Preconditioned GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for DGMRES.
Internal routine for DGMRES.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Internal routine for DGMRES.
Internal routine for DGMRES.
Internal routine for DGMRES.
Diagonally scaled GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Incomplete LU GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
Internal routine for DGMRES.
Preconditioned GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for SGMRES.
Internal routine for SGMRES.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Internal routine for SGMRES.
Internal routine for SGMRES.
Internal routine for SGMRES.
Diagonally Scaled GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Incomplete LU GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
Internal routine for SGMRES.
Construct a Givens transformation.
Apply a plane Givens rotation.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Construct a Givens transformation.
Apply a plane Givens rotation.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
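A plane Givens rotation zeroes the second component of a two-vector while preserving its norm; a Python sketch of the construct/apply pair (function names are ours):

```python
import math

def givens(a, b):
    """Construct c, s so that [c s; -s c] applied to (a, b) gives (r, 0)."""
    r = math.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0  # identity rotation for the zero vector
    return a / r, b / r

def apply_givens(c, s, x, y):
    """Apply the plane rotation to the pair (x, y)."""
    return c * x + s * y, -s * x + c * y
```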
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B) (where W shows a singular behaviour at the end points, see parameter INTEGR), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
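In its simplest ITOL setting the stop test above is a relative-residual check; a minimal numpy sketch (the helper name is hypothetical, not the routine itself):

```python
import numpy as np

def gmres_stop_test(A, x, b, tol):
    """Return True when ||b - A@x|| <= tol * ||b||, the usual
    relative-residual stopping criterion for GMRES-type iterations."""
    r = b - A @ x
    return np.linalg.norm(r) <= tol * np.linalg.norm(b)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(A, b)
assert gmres_stop_test(A, x_exact, b, 1e-8)          # converged iterate passes
assert not gmres_stop_test(A, np.zeros(2), b, 1e-8)  # zero iterate fails
```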
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
Documentation for QUADPACK, a package of subprograms for automatic evaluation of one-dimensional definite integrals.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
Compute a sequence of the Hankel functions H(m,a,z) for superscript m=1 or 2, real nonnegative orders a=b, b+1,... where b>0, and nonzero complex argument z. A scaling option is available to help avoid overflow.
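A sketch of the same Hankel-sequence computation via SciPy's wrappers, whose scaled variant `hankel1e` removes the exp(i*z) factor to help avoid overflow (SciPy names, not the routines above):

```python
import numpy as np
from scipy.special import hankel1, hankel1e, jv, yv

z = 2.0 + 1.0j
orders = 0.5 + np.arange(4)    # the sequence b, b+1, ... with b = 0.5

h = hankel1(orders, z)          # H(1, a, z) for each order
hs = hankel1e(orders, z)        # scaled by exp(-i*z) to avoid overflow

# H1 = J + i*Y, and the scaling relation hankel1e = hankel1 * exp(-i*z)
assert np.allclose(h, jv(orders, z) + 1j * yv(orders, z))
assert np.allclose(hs, h * np.exp(-1j * z))
```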
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in cylindrical coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in polar coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve the standard seven-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve a finite difference approximation to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve a standard finite difference approximation to the Helmholtz equation in cylindrical coordinates.
Solve a finite difference approximation to the Helmholtz equation in polar coordinates.
Solve a finite difference approximation to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve a three-dimensional block tridiagonal linear system which arises from a finite difference approximation to a three-dimensional Poisson equation using the Fourier transform package FFTPACK written by Paul Swarztrauber.
Solve a block tridiagonal system of linear equations that results from a staggered grid finite difference approximation to 2-D elliptic PDEs.
Discretize and solve a second and, optionally, a fourth order finite difference approximation on a uniform grid to the general separable elliptic partial differential equation on a rectangle with any combination of periodic or mixed boundary conditions.
Solve for either the second or fourth order finite difference approximation to the solution of a separable elliptic partial differential equation on a rectangle. Any combination of periodic or mixed boundary conditions is allowed.
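As an illustration of the five-point discretization the solvers above are built around, a small self-contained sketch that assembles the stencil with sparse Kronecker sums and checks it against a manufactured solution (numpy/scipy, with a hypothetical test value for the Helmholtz coefficient; the routines above use fast direct methods instead of a generic sparse solve):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Solve (Laplacian + lam) u = f on the unit square, u = 0 on the boundary,
# using the standard five-point difference stencil on an n x n interior grid.
n = 40
h = 1.0 / (n + 1)
lam = -5.0                      # Helmholtz coefficient (hypothetical value)

x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# 1-D second-difference matrix and its 2-D Kronecker-sum Laplacian
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
L = sp.kronsum(T, T) + lam * sp.eye(n * n)

# Manufactured solution u = sin(pi x) sin(pi y), so f = (lam - 2 pi^2) u
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = (lam - 2 * np.pi**2) * u_exact

u = spla.spsolve(L.tocsc(), f.ravel()).reshape(n, n)
assert np.max(np.abs(u - u_exact)) < 5e-3   # second-order accuracy
```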
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC. If only function values are required, use DPCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC. If only function values are required, use PCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC.
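SciPy's `PchipInterpolator` descends from the same PCHIP package (PCHIM and friends); a short sketch of function and derivative evaluation at an array of points:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data; PCHIP preserves monotonicity of the data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.5, 2.0, 2.1, 3.0])

p = PchipInterpolator(x, y)
xe = np.linspace(0.0, 4.0, 101)

fe = p(xe)                  # function values (PCHFE analogue)
de = p.derivative()(xe)     # first-derivative values (PCHFD analogue)

assert np.allclose(p(x), y)           # interpolates the data
assert np.all(np.diff(fe) >= -1e-12)  # monotone data -> monotone interpolant
```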
Factor a complex Hermitian matrix by elimination with symmetric pivoting and estimate the condition of the matrix.
Compute the determinant, inertia and inverse of a complex Hermitian matrix using the factors obtained from CHIFA.
Factor a complex Hermitian matrix by elimination (symmetric pivoting).
Solve the complex Hermitian system using factors obtained from CHIFA.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a complex Hermitian matrix stored in packed form using the factors obtained from CHPFA.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting.
Solve a complex Hermitian system using factors obtained from CHPFA.
Solve a positive definite symmetric complex system of linear equations.
Solve a positive definite Hermitian system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a positive definite symmetric system of linear equations.
Solve a positive definite symmetric system of linear equations.
Solve a positive definite symmetric system of linear equations. Iterative refinement is used to obtain an error estimate.
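The "factor once, solve, then refine" pattern behind the positive definite solvers above can be sketched with SciPy's Cholesky helpers (SciPy names, not the routines above); the size of the refinement correction serves as a cheap error estimate:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Symmetric positive definite system, solved via a Cholesky factorization.
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([2.0, 1.0, 3.0])

c, low = cho_factor(A)       # factor once ...
x = cho_solve((c, low), b)   # ... then solve (and reuse for more right sides)

# One step of iterative refinement: the correction norm estimates the error.
r = b - A @ x
dx = cho_solve((c, low), r)
assert np.linalg.norm(dx) < 1e-10 * np.linalg.norm(x)
assert np.allclose(A @ x, b)
```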
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
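The exponential scaling used by the scaled variants above follows fixed identities; a sketch checking them with SciPy's modified Bessel wrappers (SciPy names, not the routines above):

```python
import numpy as np
from scipy.special import i0, i0e, i1, i1e, k0, k0e, k1, k1e

x = 3.0
# Scaling relations used by the exponentially scaled routines:
#   i0e(x) = exp(-|x|) * i0(x),   k0e(x) = exp(x) * k0(x)
assert np.isclose(i0e(x), np.exp(-x) * i0(x))
assert np.isclose(i1e(x), np.exp(-x) * i1(x))
assert np.isclose(k0e(x), np.exp(x) * k0(x))
assert np.isclose(k1e(x), np.exp(x) * k1(x))

# The scaled form stays representable where the plain one would overflow.
assert np.isfinite(i0e(1000.0))
```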
Compute the complex hyperbolic cosine.
Compute the complex hyperbolic sine.
Compute the complex hyperbolic tangent.
Compute an N member sequence of Bessel functions I(ALPHA+K-1,X), K=1,...,N, or scaled Bessel functions EXP(-X)*I(ALPHA+K-1,X), K=1,...,N, for nonnegative ALPHA and X.
Compute an N member sequence of Bessel functions I(ALPHA+K-1,X), K=1,...,N, or scaled Bessel functions EXP(-X)*I(ALPHA+K-1,X), K=1,...,N, for nonnegative ALPHA and X.
Compute a sequence of the Bessel functions I(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions I(a,z) for complex argument z and real nonnegative orders a=b,b+1, b+2,... where b>0. A scaling option is available to help avoid overflow.
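A sketch of the same order-sequence computation with SciPy's `iv`/`ive`, where the scaled form corresponds to the overflow-avoiding option above (SciPy names, not the routines themselves):

```python
import numpy as np
from scipy.special import iv, ive

b = 0.5                      # starting order b > 0
orders = b + np.arange(4)    # the sequence b, b+1, b+2, ...
z = 2.0

seq = iv(orders, z)          # I(a, z) for each order in the sequence
scaled = ive(orders, z)      # exponentially scaled: ive = iv * exp(-|Re z|)

assert np.allclose(scaled, seq * np.exp(-abs(z)))
assert np.all(np.diff(seq) < 0)   # values decrease as the order increases
```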
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
This code solves a system of differential/algebraic equations of the form G(T,Y,YPRIME) = 0.
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incompl. Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Incompl. Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, DRD(X,Y,Z) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z nonnegative and at most one of them zero, RF(X,Y,Z) = Integral from zero to infinity of (1/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z nonnegative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2)(t+P)**(-1) dt.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, RD(X,Y,Z) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z nonnegative and at most one of them zero, RF(X,Y,Z) = Integral from zero to infinity of (1/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z nonnegative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2)(t+P)**(-1) dt.
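These are the Carlson symmetric forms RF, RD, RJ; SciPy >= 1.8 exposes direct analogues (`elliprf`, `elliprd`, `elliprj`), which makes the degenerate identities easy to check. A sketch, assuming that SciPy version:

```python
import numpy as np
from scipy.special import elliprf, elliprd, elliprj, ellipk

# Degenerate check: with all arguments equal each integrand collapses, so
#   RF(x,x,x) = x**-0.5,  RD(x,x,x) = x**-1.5,  RJ(x,x,x,x) = x**-1.5
x = 4.0
assert np.isclose(elliprf(x, x, x), x**-0.5)
assert np.isclose(elliprd(x, x, x), x**-1.5)
assert np.isclose(elliprj(x, x, x, x), x**-1.5)

# Complete Legendre integral via the symmetric form: K(m) = RF(0, 1-m, 1)
m = 0.3
assert np.isclose(ellipk(m), elliprf(0.0, 1.0 - m, 1.0))
```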
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Evaluate the incomplete Gamma function.
Evaluate the incomplete Gamma function.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill-in is allowed.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill-in is allowed.
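The action of such an incomplete factorization preconditioner is "apply the approximate inverse to a residual"; a sketch with SciPy's `spilu` (SciPy names; for this tridiagonal test matrix the incomplete factorization happens to be essentially exact, since a tridiagonal LU has no fill-in):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator

# Sparse nonsymmetric test matrix (tridiagonal, diagonally dominant).
n = 50
A = sp.diags([-1.0, 4.0, -2.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU; its solve method applies M^-1 ~ A^-1, the preconditioner action.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)

# Preconditioning a residual should leave only a tiny remainder here.
x0 = np.zeros(n)
r = b - A @ x0
assert np.linalg.norm(r - A @ M.matvec(r)) < 1e-3 * np.linalg.norm(r)
```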
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
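A bounded least squares problem of the kind the routines above handle can be sketched with SciPy's `lsq_linear` (SciPy's routine, not the ones above; simple bounds only, no general linear constraints):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Least squares E x ~= f with a bound on one variable.
rng = np.random.default_rng(0)
E = rng.standard_normal((20, 3))
x_true = np.array([0.5, -2.0, 1.5])
f = E @ x_true

# Require x[1] >= 0: the unconstrained optimum (-2.0) is infeasible,
# so the solution must land on the boundary x[1] = 0.
res = lsq_linear(E, f, bounds=([-np.inf, 0.0, -np.inf], np.inf))

assert res.success
assert res.x[1] >= -1e-12        # bound respected
assert abs(res.x[1]) < 1e-6      # constraint active at the boundary
```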
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The function of CDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. CDRIV1 allows complex-valued differential equations.
The function of CDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. CDRIV2 allows complex-valued differential equations.
The function of CDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. CDRIV3 allows complex-valued differential equations.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
The function of DDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. DDRIV1 uses double precision arithmetic.
The function of DDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. DDRIV2 uses double precision arithmetic.
The function of DDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. DDRIV3 uses double precision arithmetic.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
Approximate the solution at XOUT by evaluating the polynomial computed in DSTEPS at XOUT. Must be used in conjunction with DSTEPS.
Integrate a system of first order ordinary differential equations one step.
The function of SDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. SDRIV1 uses single precision arithmetic.
The function of SDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. SDRIV2 uses single precision arithmetic.
The function of SDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. SDRIV3 uses single precision arithmetic.
Approximate the solution at XOUT by evaluating the polynomial computed in STEPS at XOUT. Must be used in conjunction with STEPS.
Integrate a system of first order ordinary differential equations one step.
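The stiff/non-stiff split among the ODE drivers above mirrors SciPy's `solve_ivp` methods: BDF (backward differentiation formulas) for stiff problems, RK45 as the Runge-Kutta-Fehlberg descendant. A sketch on a classic stiff test problem (SciPy names, not the routines above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff test problem: y' = -1000*(y - cos(t)) - sin(t), y(0) = 1,
# whose exact solution is y = cos(t).
def f(t, y):
    return -1000.0 * (y - np.cos(t)) - np.sin(t)

bdf = solve_ivp(f, (0.0, 2.0), [1.0], method="BDF", rtol=1e-8, atol=1e-10)
rk = solve_ivp(f, (0.0, 2.0), [1.0], method="RK45", rtol=1e-8, atol=1e-10)

assert np.isclose(bdf.y[0, -1], np.cos(2.0), atol=1e-5)
assert bdf.t.size < rk.t.size   # BDF needs far fewer steps on a stiff problem
```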
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Compute the inner product of two vectors with extended precision accumulation.
Dot product of two complex vectors using the complex conjugate of the first vector.
Compute the inner product of two vectors.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the inner product of two vectors.
Compute the inner product of two vectors with extended precision accumulation.
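The idea behind the extended-precision accumulation above can be sketched in pure Python with `math.fsum`, which keeps the running sum exact until the final rounding (it does not protect the individual products, unlike the double-length routines in this catalog):

```python
import math

def dot_extended(x, y):
    """Dot product with extended-precision accumulation of the products."""
    return math.fsum(xi * yi for xi, yi in zip(x, y))

# A cancellation-heavy example where naive accumulation loses the answer.
x = [1e16, 1.0, -1e16]
y = [1.0, 1.0, 1.0]

naive = sum(xi * yi for xi, yi in zip(x, y))   # 1.0 is absorbed, result 0.0
assert dot_extended(x, y) == 1.0
assert naive != dot_extended(x, y)
```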
Compute the integral of a product of a function and a derivative of a B-spline.
Compute the integral of a product of a function and a derivative of a K-th order B-spline.
Compute the integral of a K-th order B-spline using the B-representation.
Compute the integral of a K-th order B-spline using the B-representation.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z nonnegative and at most one of them zero, RF(X,Y,Z) = Integral from zero to infinity of (1/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z nonnegative and at most one of them zero, RF(X,Y,Z) = Integral from zero to infinity of (1/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, DRD(X,Y,Z) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, RD(X,Y,Z) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z nonnegative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2)(t+P)**(-1) dt.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z nonnegative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = Integral from zero to infinity of (3/2)(t+X)**(-1/2)(t+Y)**(-1/2)(t+Z)**(-1/2)(t+P)**(-1) dt.
Compute repeated integrals of the K-zero Bessel function.
Compute repeated integrals of the K-zero Bessel function.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Integrate a function tabulated at arbitrarily spaced abscissas using overlapping parabolas.
Integrate a function tabulated at arbitrarily spaced abscissas using overlapping parabolas.
Integrate a function using a 7-point adaptive Newton-Cotes quadrature rule.
Integrate a function using a 7-point adaptive Newton-Cotes quadrature rule.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X)=COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
Interchange two vectors.
Interchange two vectors.
Interchange two vectors.
Interchange two vectors.
Compute the B-representation of a cubic spline which interpolates given data.
Compute the B-representation of a spline which interpolates given data.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Calculate the value of the spline and its derivatives from the B-representation.
Compute the B-representation of a cubic spline which interpolates given data.
Compute the B-representation of a spline which interpolates given data.
Use the B-representation to construct a divided difference table preparatory to a (right) derivative calculation.
Calculate the value of the spline and its derivatives from the B-representation.
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X where XT(*) is a subdivision of the X interval.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline, (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X where XT(*) is a subdivision of the X interval.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline, (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
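SciPy's B-spline objects cover the same evaluate/differentiate/integrate operations on the B-representation as the routines above; a sketch (SciPy names, not the routines themselves):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Cubic (order 4) interpolating spline in B-representation.
x = np.linspace(0.0, 2.0 * np.pi, 17)
y = np.sin(x)

spl = make_interp_spline(x, y, k=3)

assert np.allclose(spl(x), y)                      # interpolates the data
d1 = spl.derivative(1)                             # derivative spline
assert abs(d1(np.pi) - np.cos(np.pi)) < 1e-2
# Integral from the B-representation (the quadrature routines' analogue):
assert abs(spl.integrate(0.0, np.pi) - 2.0) < 1e-2
```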
Compute the determinant of a complex band matrix using the factors from CGBCO or CGBFA.
Compute the determinant and inverse of a matrix using the factors computed by CGECO or CGEFA.
Compute the determinant, inertia and inverse of a complex Hermitian matrix using the factors obtained from CHIFA.
Compute the determinant, inertia and inverse of a complex Hermitian matrix stored in packed form using the factors obtained from CHPFA.
Compute the determinant of a complex Hermitian positive definite band matrix using the factors computed by CPBCO or CPBFA.
Compute the determinant and inverse of a certain complex Hermitian positive definite matrix using the factors computed by CPOCO, CPOFA, or CQRDC.
Compute the determinant and inverse of a complex Hermitian positive definite matrix using factors from CPPCO or CPPFA.
Compute the determinant and inverse of a complex symmetric matrix using the factors from CSIFA.
Compute the determinant and inverse of a complex symmetric matrix stored in packed form using the factors from CSPFA.
Compute the determinant and inverse of a triangular matrix.
Compute the determinant of a band matrix using the factors computed by DGBCO or DGBFA.
Compute the determinant and inverse of a matrix using the factors computed by DGECO or DGEFA.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by DPBCO or DPBFA.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by DPOCO, DPOFA or DQRDC.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from DPPCO or DPPFA.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from DSIFA.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from DSPFA.
Compute the determinant and inverse of a triangular matrix.
Compute the determinant of a band matrix using the factors computed by SGBCO or SGBFA.
Compute the determinant and inverse of a matrix using the factors computed by SGECO or SGEFA.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by SPBCO or SPBFA.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by SPOCO, SPOFA or SQRDC.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from SPPCO or SPPFA.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from SSIFA.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from SSPFA.
Compute the determinant and inverse of a triangular matrix.
Compute the unnormalized inverse cosine transform.
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic cosine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic sine.
Compute the arc hyperbolic tangent.
Compute the arc hyperbolic tangent.
Compute the arc hyperbolic tangent.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Preconditioned GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for DGMRES.
Internal routine for DGMRES.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
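The L*D*L' backsolve in the entry above amounts to a forward solve with L, a diagonal scaling by D, and a back solve with L'. A minimal dense sketch (hypothetical name `ldlt_solve`; the SLAP routine works on sparse storage):

```python
def ldlt_solve(L, d, b):
    # Solve (L * D * L') x = b, with L unit lower triangular (dense,
    # row-major list of lists) and d the diagonal of D.
    n = len(b)
    y = list(b)                     # forward solve: L y = b
    for i in range(n):
        for j in range(i):
            y[i] -= L[i][j] * y[j]
    z = [y[i] / d[i] for i in range(n)]   # diagonal solve: D z = y
    x = list(z)                     # back solve: L' x = z
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= L[j][i] * x[j]
    return x
```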
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for DGMRES.
Internal routine for DGMRES.
Internal routine for DGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally scaled GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
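Gauss-Seidel iteration, as in the entry above, sweeps through the unknowns and updates each one in place using the newest values of the others. A dense toy sketch (hypothetical name `gauss_seidel`; converges e.g. for diagonally dominant A):

```python
import numpy as np

def gauss_seidel(A, b, iters=200):
    # Repeated Gauss-Seidel sweeps: x[i] is recomputed from row i of
    # A x = b using already-updated components x[0..i-1].
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
            x[i] = s / A[i, i]
    return x
```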
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incomplete Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill in is allowed.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
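Jacobi iteration, by contrast with Gauss-Seidel, updates all components simultaneously from the previous iterate: x_new = D**(-1) (b - (A - D) x). A dense toy sketch (hypothetical name `jacobi`):

```python
import numpy as np

def jacobi(A, b, iters=200):
    # Split A = D + R (D = diagonal, R = off-diagonal part) and
    # iterate x <- D^{-1} (b - R x) from x = 0.
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x
```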
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes L**(-1)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b , where L is a lower triangular matrix.
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDL')**(-1)*B = X.
Incomplete LU GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDU)**(-1)*B = X.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes (LDU)**(-T)*B = X.
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes [(LDU)*(LDU)']**(-1)*B = X.
Internal routine for DGMRES.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
Preconditioned GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for SGMRES.
Internal routine for SGMRES.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for SGMRES.
Internal routine for SGMRES.
Internal routine for SGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally Scaled GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incomplete Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill in is allowed.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes L**(-1)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b , where L is a lower triangular matrix.
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDL')**(-1)*B = X.
Incomplete LU GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDU)**(-1)*B = X.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes (LDU)**(-T)*B = X.
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes [(LDU)*(LDU)']**(-1)*B = X.
Internal routine for SGMRES.
Compute an N member sequence of J Bessel functions J/SUB(ALPHA+K-1)/(X), K=1,...,N for non-negative ALPHA and X.
Compute an N member sequence of J Bessel functions J/SUB(ALPHA+K-1)/(X), K=1,...,N for non-negative ALPHA and X.
Compute a sequence of the Bessel functions J(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions J(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
Implement forward recursion on the three term recursion relation for a sequence of non-negative order Bessel functions K/SUB(FNU+I-1)/(X), or scaled Bessel functions EXP(X)*K/SUB(FNU+I-1)/(X), I=1,...,N for real, positive X and non-negative orders FNU.
Implement forward recursion on the three term recursion relation for a sequence of non-negative order Bessel functions K/SUB(FNU+I-1)/(X), or scaled Bessel functions EXP(X)*K/SUB(FNU+I-1)/(X), I=1,...,N for real, positive X and non-negative orders FNU.
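The three-term recursion in the entries above is the standard forward recurrence for the modified Bessel functions K, which is stable in the forward direction: K(v+1, x) = K(v-1, x) + (2*v/x)*K(v, x). A sketch (hypothetical name `besk_forward`) that generates the sequence from its first two members:

```python
import math

def besk_forward(k_nu, k_nup1, fnu, x, n):
    # Forward three-term recurrence for modified Bessel K:
    #   K(v+1, x) = K(v-1, x) + (2*v/x) * K(v, x)
    # given K(fnu, x) and K(fnu+1, x); returns n sequence members.
    seq = [k_nu, k_nup1]
    for i in range(2, n):
        v = fnu + i - 1
        seq.append(seq[i - 2] + (2.0 * v / x) * seq[i - 1])
    return seq
```

For half-integer orders K has closed forms (e.g. K(1/2, x) = sqrt(pi/(2x))*exp(-x)), which makes the recurrence easy to check by hand.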
Compute a sequence of the Bessel functions K(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions K(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute repeated integrals of the K-zero Bessel function.
Compute repeated integrals of the K-zero Bessel function.
Compute the Euclidean length (L2 norm) of a vector.
Compute the unitary norm of a complex vector.
Compute the Euclidean length (L2 norm) of a vector.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the LU factorization of a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve the least squares problem for a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Compute the LU factorization of a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve the least squares problem for a banded matrix using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Solve a least squares problem for banded matrices using sequential accumulation of rows of the data matrix. Exactly one right-hand side vector is permitted.
Use the coefficients generated by DPOLFT to evaluate the polynomial fit of degree L, along with the first NDER of its derivatives, at a specified point.
Convert the DPOLFT coefficients to Taylor series form.
Fit discrete data in a least squares sense by polynomials in one variable.
Convert the POLFIT coefficients to Taylor series form.
Fit discrete data in a least squares sense by polynomials in one variable.
Use the coefficients generated by POLFIT to evaluate the polynomial fit of degree L, along with the first NDER of its derivatives, at a specified point.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Compute normalized Legendre polynomials and associated Legendre functions.
Compute normalized Legendre polynomials.
To compute the values of Legendre functions for DXLEGF. Method: backward mu-wise recurrence for P(-MU,NU,X) for fixed nu to obtain P(-MU2,NU1,X), P(-(MU2-1),NU1,X), ..., P(-MU1,NU1,X) and store in ascending mu order.
To compute the values of Legendre functions for DXLEGF. This subroutine transforms an array of Legendre functions of the first kind of negative order stored in array PQA into Legendre functions of the first kind of positive order stored in array PQA. The original array is destroyed.
To compute the values of Legendre functions for DXLEGF. This subroutine transforms an array of Legendre functions of the first kind of negative order stored in array PQA into normalized Legendre polynomials stored in array PQA. The original array is destroyed.
To compute the values of Legendre functions for DXLEGF. This subroutine calculates initial values of P or Q using power series, then performs forward nu-wise recurrence to obtain P(-MU,NU,X), Q(0,NU,X), or Q(1,NU,X). The nu-wise recurrence is stable for P for all mu and for Q for mu=0,1.
To compute the values of Legendre functions for DXLEGF. Method: forward mu-wise recurrence for Q(MU,NU,X) for fixed nu to obtain Q(MU1,NU,X), Q(MU1+1,NU,X), ..., Q(MU2,NU,X).
To compute the values of Legendre functions for DXLEGF. Method: backward nu-wise recurrence for Q(MU,NU,X) for fixed mu to obtain Q(MU1,NU1,X), Q(MU1,NU1+1,X), ..., Q(MU1,NU2,X).
Compute normalized Legendre polynomials and associated Legendre functions.
Compute normalized Legendre polynomials.
To compute the values of Legendre functions for XLEGF. Method: backward mu-wise recurrence for P(-MU,NU,X) for fixed nu to obtain P(-MU2,NU1,X), P(-(MU2-1),NU1,X), ..., P(-MU1,NU1,X) and store in ascending mu order.
To compute the values of Legendre functions for XLEGF. This subroutine transforms an array of Legendre functions of the first kind of negative order stored in array PQA into Legendre functions of the first kind of positive order stored in array PQA. The original array is destroyed.
To compute the values of Legendre functions for XLEGF. This subroutine transforms an array of Legendre functions of the first kind of negative order stored in array PQA into normalized Legendre polynomials stored in array PQA. The original array is destroyed.
To compute the values of Legendre functions for XLEGF. This subroutine calculates initial values of P or Q using power series, then performs forward nu-wise recurrence to obtain P(-MU,NU,X), Q(0,NU,X), or Q(1,NU,X). The nu-wise recurrence is stable for P for all mu and for Q for mu=0,1.
To compute the values of Legendre functions for XLEGF. Method: forward mu-wise recurrence for Q(MU,NU,X) for fixed nu to obtain Q(MU1,NU,X), Q(MU1+1,NU,X), ..., Q(MU2,NU,X).
To compute the values of Legendre functions for XLEGF. Method: backward nu-wise recurrence for Q(MU,NU,X) for fixed mu to obtain Q(MU1,NU1,X), Q(MU1,NU1+1,X), ..., Q(MU1,NU2,X).
Multiply a complex vector by a complex general band matrix.
Multiply a complex vector by a complex general matrix.
Perform conjugated rank 1 update of a complex general matrix.
Perform unconjugated rank 1 update of a complex general matrix.
Multiply a complex vector by a complex Hermitian band matrix.
Multiply a complex vector by a complex Hermitian matrix.
Perform Hermitian rank 1 update of a complex Hermitian matrix.
Perform Hermitian rank 2 update of a complex Hermitian matrix.
Perform the matrix-vector operation.
Perform the hermitian rank 1 operation.
Perform the hermitian rank 2 operation.
Multiply a complex vector by a complex triangular band matrix.
Solve a complex triangular banded system of equations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Multiply a complex vector by a complex triangular matrix.
Solve a complex triangular system of equations.
Perform one of the matrix-vector operations.
Perform one of the matrix-vector operations.
Perform the rank 1 operation.
Perform the matrix-vector operation.
Perform the matrix-vector operation.
Perform the symmetric rank 1 operation.
Perform the symmetric rank 2 operation.
Perform the matrix-vector operation.
Perform the symmetric rank 1 operation.
Perform the symmetric rank 2 operation.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Test two characters to determine if they are the same letter, except for case.
Multiply a real vector by a real general band matrix.
Multiply a real vector by a real general matrix.
Perform rank 1 update of a real general matrix.
Multiply a real vector by a real symmetric band matrix.
Perform the matrix-vector operation.
Perform the symmetric rank 1 operation.
Perform the symmetric rank 2 operation.
Multiply a real vector by a real symmetric matrix.
Perform symmetric rank 1 update of a real symmetric matrix.
Perform symmetric rank 2 update of a real symmetric matrix.
Multiply a real vector by a real triangular band matrix.
Solve a real triangular banded system of linear equations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Multiply a real vector by a real triangular matrix.
Solve a real triangular system of linear equations.
Multiply a complex general matrix by a complex general matrix.
Multiply a complex general matrix by a complex Hermitian matrix.
Perform Hermitian rank 2k update of a complex Hermitian matrix.
Perform Hermitian rank k update of a complex Hermitian matrix.
Multiply a complex general matrix by a complex symmetric matrix.
Perform symmetric rank 2k update of a complex symmetric matrix.
Perform symmetric rank k update of a complex symmetric matrix.
Multiply a complex general matrix by a complex triangular matrix.
Solve a complex triangular system of equations with multiple right-hand sides.
Perform one of the matrix-matrix operations.
Perform one of the matrix-matrix operations.
Perform one of the symmetric rank 2k operations.
Perform one of the symmetric rank k operations.
Perform one of the matrix-matrix operations.
Solve one of the matrix equations.
Test two characters to determine if they are the same letter, except for case.
Multiply a real general matrix by a real general matrix.
Multiply a real general matrix by a real symmetric matrix.
Perform symmetric rank 2k update of a real symmetric matrix.
Perform symmetric rank k update of a real symmetric matrix.
Multiply a real general matrix by a real triangular matrix.
Solve a real triangular system of equations with multiple right-hand sides.
Minimize the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
Minimize the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
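The Levenberg-Marquardt modification used by the entries above damps the Gauss-Newton step: each iteration solves (J'J + lam*I) dx = -J'r and adapts the damping parameter lam. A bare-bones dense sketch (hypothetical name `lm_fit`; the library routines use a trust-region formulation with scaling, not this simple update rule):

```python
import numpy as np

def lm_fit(residual, jacobian, x0, iters=50, lam=1e-3):
    # Minimal Levenberg-Marquardt: accept a step when it reduces the
    # sum of squares (and shrink lam), otherwise grow lam and retry.
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        g = J.T @ r
        H = J.T @ J + lam * np.eye(len(x))
        dx = np.linalg.solve(H, -g)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5
        else:
            lam *= 10.0
    return x
```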
Compute the minimum and maximum bounds for the argument in the Gamma function.
Compute the minimum and maximum bounds for the argument in the Gamma function.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Solve the bounded and constrained least squares problem consisting of solving the equation E*X = F (in the least squares sense) subject to the linear constraints C*X = Y.
Solve the problem E*X = F (in the least squares sense) with bounds on selected X values.
Compute a constant times a vector plus a vector.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
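The core of the factorization in the entry above can be sketched without the pivoting option: compute a triangular factor of a symmetric positive definite matrix column by column (shown here as a lower-triangular L with A = L*L'; the library routine produces the triangular factor with optional pivoting, which this sketch omits):

```python
import math

def cholesky(A):
    # Unpivoted dense Cholesky: returns lower-triangular L with A = L L'
    # for a symmetric positive definite matrix A (list of lists).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L
```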
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Copy a vector.
Compute the inner product of two vectors with extended precision accumulation.
Dot product of two complex vectors using the complex conjugate of the first vector.
Compute the inner product of two vectors.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a complex band matrix using the factors from CGBCO or CGBFA.
Factor a band matrix using Gaussian elimination.
Multiply a complex vector by a complex general band matrix.
Solve the complex band system A*X=B or CTRANS(A)*X=B using the factors computed by CGBCO or CGBFA.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant and inverse of a matrix using the factors computed by CGECO or CGEFA.
Factor a matrix using Gaussian elimination.
Multiply a complex general matrix by a complex general matrix.
Multiply a complex vector by a complex general matrix.
Perform conjugated rank 1 update of a complex general matrix.
Perform unconjugated rank 1 update of a complex general matrix.
Solve the complex system A*X=B or CTRANS(A)*X=B using the factors computed by CGECO or CGEFA.
Solve a tridiagonal linear system.
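Tridiagonal solvers like the entry above typically use the Thomas algorithm: one forward elimination pass and one back substitution pass, O(n) total. A sketch (hypothetical name `tridiag_solve`; no pivoting, so it assumes the elimination never hits a zero pivot):

```python
def tridiag_solve(sub, diag, sup, rhs):
    # Thomas algorithm: sub = sub-diagonal (sub[0] unused),
    # diag = main diagonal, sup = super-diagonal, rhs = right-hand side.
    n = len(diag)
    d, b = list(diag), list(rhs)
    for i in range(1, n):                 # forward elimination
        m = sub[i] / d[i - 1]
        d[i] -= m * sup[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n                         # back substitution
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - sup[i] * x[i + 1]) / d[i]
    return x
```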
Multiply a complex vector by a complex Hermitian band matrix.
Multiply a complex general matrix by a complex Hermitian matrix.
Multiply a complex vector by a complex Hermitian matrix.
Perform Hermitian rank 1 update of a complex Hermitian matrix.
Perform Hermitian rank 2 update of a complex Hermitian matrix.
Perform Hermitian rank 2k update of a complex Hermitian matrix.
Perform Hermitian rank k update of a complex Hermitian matrix.
Factor a complex Hermitian matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a complex Hermitian matrix using the factors obtained from CHIFA.
Factor a complex Hermitian matrix by elimination (symmetric pivoting).
Solve the complex Hermitian system using factors obtained from CHIFA.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a complex Hermitian matrix stored in packed form using the factors obtained from CHPFA.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting.
Perform the matrix-vector operation.
Perform the hermitian rank 1 operation.
Perform the hermitian rank 2 operation.
Solve a complex Hermitian system using factors obtained from CHPFA.
Factor a complex Hermitian positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a complex Hermitian positive definite band matrix using the factors computed by CPBCO or CPBFA.
Factor a complex Hermitian positive definite matrix stored in band form.
Solve the complex Hermitian positive definite band system using the factors computed by CPBCO or CPBFA.
Factor a complex Hermitian positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain complex Hermitian positive definite matrix using the factors computed by CPOCO, CPOFA, or CQRDC.
Factor a complex Hermitian positive definite matrix.
Solve the complex Hermitian positive definite linear system using the factors computed by CPOCO or CPOFA.
Factor a complex Hermitian positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex Hermitian positive definite matrix using factors from CPPCO or CPPFA.
Factor a complex Hermitian positive definite matrix stored in packed form.
Solve the complex Hermitian positive definite system using the factors computed by CPPCO or CPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of CQRDC to compute coordinate transformations, projections, and least squares solutions.
Construct a Givens transformation.
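A Givens transformation, as constructed by the entry above, is a 2x2 rotation [c s; -s c] chosen so that it zeroes the second component of a vector: it maps (a, b) to (r, 0). A simplified sketch (hypothetical name `givens`; the library routine additionally encodes sign conventions for later reconstruction, which this omits):

```python
import math

def givens(a, b):
    # Return (c, s, r) with  c*a + s*b = r  and  -s*a + c*b = 0,
    # where r = hypot(a, b).
    if b == 0.0:
        return 1.0, 0.0, a
    r = math.hypot(a, b)
    return a / r, b / r, r
```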
Multiply a vector by a constant.
Factor a complex symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex symmetric matrix using the factors from CSIFA.
Factor a complex symmetric matrix by elimination with symmetric pivoting.
Solve a complex symmetric system using the factors obtained from CSIFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex symmetric matrix stored in packed form using the factors from CSPFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a complex symmetric system using the factors obtained from CSPFA.
Apply a plane Givens rotation.
Scale a complex vector.
Perform the singular value decomposition of a rectangular matrix.
Interchange two vectors.
Multiply a complex general matrix by a complex symmetric matrix.
Perform symmetric rank 2k update of a complex symmetric matrix.
Perform symmetric rank k update of a complex symmetric matrix.
Multiply a complex vector by a complex triangular band matrix.
Solve a complex triangular banded system of equations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Multiply a complex general matrix by a complex triangular matrix.
Multiply a complex vector by a complex triangular matrix.
Solve a system of the form T*X=B or CTRANS(T)*X=B, where T is a triangular matrix. Here CTRANS(T) is the conjugate transpose.
Solve a complex triangular system of equations with multiple right-hand sides.
Solve a complex triangular system of equations.
Compute the sum of the magnitudes of the elements of a vector.
Compute a constant times a vector plus a vector.
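This "constant times a vector plus a vector" update is the Level 1 BLAS AXPY operation; a one-line Python sketch of what it computes:

```python
def axpy(a, x, y):
    """Return a*x + y elementwise (the BLAS *AXPY operation)."""
    return [a * xi + yi for xi, yi in zip(x, y)]

z = axpy(2.0, [1.0, 2.0], [10.0, 20.0])  # [12.0, 24.0]
```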
Compute the inner product of two vectors with extended precision accumulation and result.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
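The underlying factorization A = TRANS(R)*R can be sketched in a few lines of Python (dense, no pivoting; the pivoting option for condition and rank estimation described above is specific to the library routine):

```python
import math

def cholesky_upper(a):
    """Compute upper-triangular R with A = TRANS(R)*R for a small
    symmetric positive definite matrix given as a list of rows."""
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            if i == j:
                r[i][j] = math.sqrt(s)   # diagonal entry
            else:
                r[i][j] = s / r[i][i]    # off-diagonal entry
    return r

R = cholesky_upper([[4.0, 2.0], [2.0, 3.0]])
```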
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Copy a vector.
Compute the inner product of two vectors.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a band matrix using the factors computed by DGBCO or DGBFA.
Factor a band matrix using Gaussian elimination.
Perform one of the matrix-vector operations.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by DGBCO or DGBFA.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant and inverse of a matrix using the factors computed by DGECO or DGEFA.
Factor a matrix using Gaussian elimination.
Perform one of the matrix-matrix operations.
Perform one of the matrix-vector operations.
Perform the rank 1 operation.
Solve the real system A*X=B or TRANS(A)*X=B using the factors computed by DGECO or DGEFA.
Solve a tridiagonal linear system.
Compute the Euclidean length (L2 norm) of a vector.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by DPBCO or DPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by DPBCO or DPBFA.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by DPOCO, DPOFA or DQRDC.
Factor a real symmetric positive definite matrix.
Solve the real symmetric positive definite linear system using the factors computed by DPOCO or DPOFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from DPPCO or DPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by DPPCO or DPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of DQRDC to compute coordinate transformations, projections, and least squares solutions.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
Perform the matrix-vector operation.
Multiply a vector by a constant.
Compute the inner product of two vectors with extended precision accumulation and result.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from DSIFA.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from DSIFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from DSPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Perform the matrix-vector operation.
Perform the symmetric rank 1 operation.
Perform the symmetric rank 2 operation.
Solve a real symmetric system using the factors obtained from DSPFA.
Perform the singular value decomposition of a rectangular matrix.
Interchange two vectors.
Perform one of the matrix-matrix operations.
Perform the matrix-vector operation.
Perform the symmetric rank 1 operation.
Perform the symmetric rank 2 operation.
Perform one of the symmetric rank 2k operations.
Perform one of the symmetric rank k operations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Perform one of the matrix-matrix operations.
Perform one of the matrix-vector operations.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
Solve one of the matrix equations.
Solve one of the systems of equations.
Find the smallest index of the component of a complex vector having the maximum sum of magnitudes of real and imaginary parts.
Copy a vector.
Find the smallest index of that component of a vector having the maximum magnitude.
Find the smallest index of that component of a vector having the maximum magnitude.
Interchange two vectors.
Compute the sum of the magnitudes of the elements of a vector.
Compute a constant times a vector plus a vector.
Compute the sum of the magnitudes of the real and imaginary elements of a complex vector.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Compute the unitary norm of a complex vector.
Copy a vector.
Compute the inner product of two vectors.
Compute the inner product of two vectors with extended precision accumulation.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a band matrix using the factors computed by SGBCO or SGBFA.
Factor a band matrix using Gaussian elimination.
Multiply a real vector by a real general band matrix.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by SGBCO or SGBFA.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant and inverse of a matrix using the factors computed by SGECO or SGEFA.
Factor a matrix using Gaussian elimination.
Multiply a real general matrix by a real general matrix.
Multiply a real vector by a real general matrix.
Perform rank 1 update of a real general matrix.
Solve the real system A*X=B or TRANS(A)*X=B using the factors of SGECO or SGEFA.
Solve a tridiagonal linear system.
Compute the Euclidean length (L2 norm) of a vector.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by SPBCO or SPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by SPBCO or SPBFA.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by SPOCO, SPOFA or SQRDC.
Factor a real symmetric positive definite matrix.
Solve the real symmetric positive definite linear system using the factors computed by SPOCO or SPOFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from SPPCO or SPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by SPPCO or SPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of SQRDC to compute coordinate transformations, projections, and least squares solutions.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
Multiply a real vector by a real symmetric band matrix.
Multiply a vector by a constant.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from SSIFA.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from SSIFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from SSPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Perform the matrix-vector operation.
Perform the symmetric rank 1 operation.
Perform the symmetric rank 2 operation.
Solve a real symmetric system using the factors obtained from SSPFA.
Perform the singular value decomposition of a rectangular matrix.
Interchange two vectors.
Multiply a real general matrix by a real symmetric matrix.
Multiply a real vector by a real symmetric matrix.
Perform symmetric rank 1 update of a real symmetric matrix.
Perform symmetric rank 2 update of a real symmetric matrix.
Perform symmetric rank 2k update of a real symmetric matrix.
Perform symmetric rank k update of a real symmetric matrix.
Multiply a real vector by a real triangular band matrix.
Solve a real triangular banded system of linear equations.
Perform one of the matrix-vector operations.
Solve one of the systems of equations.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Multiply a real general matrix by a real triangular matrix.
Multiply a real vector by a real triangular matrix.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
Solve a real triangular system of equations with multiple right-hand sides.
Solve a real triangular system of linear equations.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by CNBCO or CNBFA.
Factor a band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a general nonsymmetric banded system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a complex band system using the factors computed by CNBCO or CNBFA.
Solve a positive definite symmetric complex system of linear equations.
Solve a positive definite Hermitian system of linear equations. Iterative refinement is used to obtain an error estimate.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by DNBCO or DNBFA.
Factor a band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a real band system using the factors computed by DNBCO or DNBFA.
Solve a positive definite symmetric system of linear equations.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by SNBCO or SNBFA.
Factor a real band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a general nonsymmetric banded system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a real band system using the factors computed by SNBCO or SNBFA.
Solve a positive definite symmetric system of linear equations.
Solve a positive definite symmetric system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve linear least squares problems by performing a QR factorization of the input matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
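A bare-bones Python sketch of this approach: Householder reflections reduce A to triangular form (also applied to the right-hand side), then back-substitution recovers the solution. Full column rank is assumed here, so the rank-deficiency detection these routines emphasize is omitted.

```python
import math

def householder_qr_lstsq(a, b):
    """Solve min ||A x - b|| for a tall m x n matrix A (full column
    rank assumed) via Householder QR and back-substitution."""
    m, n = len(a), len(a[0])
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        # Build the reflector that zeroes column k below the diagonal.
        norm = math.sqrt(sum(a[i][k] ** 2 for i in range(k, m)))
        alpha = -norm if a[k][k] >= 0 else norm
        v = [0.0] * m
        for i in range(k, m):
            v[i] = a[i][k]
        v[k] -= alpha
        vnorm2 = sum(vi * vi for vi in v)
        if vnorm2 == 0.0:
            continue
        # Apply I - 2*v*v'/(v'v) to the remaining columns and to b.
        for j in range(k, n):
            dot = sum(v[i] * a[i][j] for i in range(k, m))
            for i in range(k, m):
                a[i][j] -= 2.0 * dot * v[i] / vnorm2
        dot = sum(v[i] * b[i] for i in range(k, m))
        for i in range(k, m):
            b[i] -= 2.0 * dot * v[i] / vnorm2
    # Back-substitute R x = (Q' b)[:n].
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

x = householder_qr_lstsq([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]],
                         [1.0, 2.0, 3.0])
```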
Solve linear least squares problems by performing a QR factorization of the input matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve an underdetermined linear system of equations by performing an LQ factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve linear least squares problems by performing a QR factorization of the matrix using Householder transformations.
Solve linear least squares problems by performing a QR factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve linear least squares problems by performing a QR factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve an underdetermined linear system of equations by performing an LQ factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Read a Sparse Linear System in the Boeing/Harwell Format. The matrix is read in and, if the right-hand side is also present in the input file, it too is read in. The matrix is then modified to be in the SLAP Column format.
Printer Plot of SLAP Column Format Matrix. Routine to print out a SLAP Column format matrix in a "printer plot" graphical representation.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
Lower Triangle Preconditioner SLAP Set Up. Routine to store the lower triangle of a matrix stored in the SLAP Column format.
SLAP Triad to SLAP Column Format Converter. Routine to convert from the SLAP Triad to SLAP Column format.
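In modern terms this is a coordinate-to-compressed-column conversion. A hedged Python sketch using 0-based indices (the actual SLAP Column format is 1-based and stores each column's diagonal entry first, which this sketch does not reproduce):

```python
def triad_to_column(n, ia, ja, vals):
    """Convert (row, column, value) triples to column-oriented form:
    values and row indices sorted column by column, plus the starting
    offset of each column."""
    order = sorted(range(len(vals)), key=lambda k: (ja[k], ia[k]))
    col_vals = [vals[k] for k in order]
    col_rows = [ia[k] for k in order]
    col_ptr = [0] * (n + 1)
    for k in order:
        col_ptr[ja[k] + 1] += 1          # count entries per column
    for j in range(n):
        col_ptr[j + 1] += col_ptr[j]     # running offsets
    return col_vals, col_rows, col_ptr
```

For example, the triples (0,0,1.0), (1,1,2.0), (1,0,3.0) for a 2 by 2 matrix yield column pointers [0, 2, 3].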
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
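The iteration itself is simple to state; a dense Python sketch (convergence assumed, e.g. for a diagonally dominant A, while the library routine works on the sparse SLAP formats):

```python
def gauss_seidel(a, b, iters=50):
    """Solve A x = b by Gauss-Seidel sweeps: each component is
    updated in place using the newest available values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
    return x

x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0])
```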
Incomplete Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
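Jacobi iteration differs from Gauss-Seidel only in updating every component from the previous iterate; a dense Python sketch (diagonally dominant A assumed for convergence):

```python
def jacobi(a, b, iters=100):
    """Solve A x = b by Jacobi iteration: all components of the new
    iterate are computed from the old one."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i))
             / a[i][i] for i in range(n)]
    return x

x = jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0])
```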
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
Read in SLAP Triad Format Linear System. Routine to read in a SLAP Triad format matrix, the right-hand side, and the solution to the system, if known.
Write out SLAP Triad Format Linear System. Routine to write out a SLAP Triad format matrix, the right-hand side, and the solution to the system, if known.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
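The simplest of these tests, comparing the residual norm against TOL times the right-hand-side norm (the ITOL=1 style of check), can be sketched as:

```python
import math

def stop_test(r, b, tol):
    """Return True when ||r|| <= tol * ||b||, i.e. the relative
    residual has met the tolerance.  Sketch of one ITOL choice only;
    the library supports several other error estimates."""
    rnrm = math.sqrt(sum(ri * ri for ri in r))
    bnrm = math.sqrt(sum(bi * bi for bi in b))
    return rnrm <= tol * bnrm

done = stop_test([1.0e-8, 0.0], [1.0, 0.0], 1.0e-6)  # True
```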
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Read a Sparse Linear System in the Boeing/Harwell Format. The matrix is read in and, if the right-hand side is also present in the input file, it too is read in. The matrix is then modified to be in the SLAP Column format.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
Printer Plot of SLAP Column Format Matrix. Routine to print out a SLAP Column format matrix in a "printer plot" graphical representation.
Lower Triangle Preconditioner SLAP Set Up. Routine to store the lower triangle of a matrix stored in the SLAP Column format.
SLAP Triad to SLAP Column Format Converter. Routine to convert from the SLAP Triad to SLAP Column format.
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
Incomplete Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
Read in SLAP Triad Format Linear System. Routine to read in a SLAP Triad format matrix, the right-hand side, and the solution to the system, if known.
Write out SLAP Triad Format Linear System. Routine to write out a SLAP Triad format matrix, the right-hand side, and the solution to the system, if known.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes inverse(L)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b, where L is a lower triangular matrix.
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes inverse(LDL')*B = X.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes inverse(TRANS(LDU))*B = X.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes inverse((LDU)*(LDU)')*B = X.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes inverse(L)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b, where L is a lower triangular matrix.
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes inverse(LDL')*B = X.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes inverse(TRANS(LDU))*B = X.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes inverse((LDU)*(LDU)')*B = X.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a complex band matrix using the factors from CGBCO or CGBFA.
Factor a band matrix using Gaussian elimination.
Solve the complex band system A*X=B or CTRANS(A)*X=B using the factors computed by CGBCO or CGBFA.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant and inverse of a matrix using the factors computed by CGECO or CGEFA.
Factor a matrix using Gaussian elimination.
Solve the complex system A*X=B or CTRANS(A)*X=B using the factors computed by CGECO or CGEFA.
Solve a tridiagonal linear system.
Factor a complex Hermitian matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a complex Hermitian matrix using the factors obtained from CHIFA.
Factor a complex Hermitian matrix by elimination with symmetric pivoting.
Solve the complex Hermitian system using factors obtained from CHIFA.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a complex Hermitian matrix stored in packed form using the factors obtained from CHPFA.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting.
Solve a complex Hermitian system using factors obtained from CHPFA.
Factor a complex Hermitian positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a complex Hermitian positive definite band matrix using the factors computed by CPBCO or CPBFA.
Factor a complex Hermitian positive definite matrix stored in band form.
Solve the complex Hermitian positive definite band system using the factors computed by CPBCO or CPBFA.
Factor a complex Hermitian positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain complex Hermitian positive definite matrix using the factors computed by CPOCO, CPOFA, or CQRDC.
Factor a complex Hermitian positive definite matrix.
Solve the complex Hermitian positive definite linear system using the factors computed by CPOCO or CPOFA.
Factor a complex Hermitian positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex Hermitian positive definite matrix using factors from CPPCO or CPPFA.
Factor a complex Hermitian positive definite matrix stored in packed form.
Solve the complex Hermitian positive definite system using the factors computed by CPPCO or CPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of CQRDC to compute coordinate transformations, projections, and least squares solutions.
Factor a complex symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex symmetric matrix using the factors from CSIFA.
Factor a complex symmetric matrix by elimination with symmetric pivoting.
Solve a complex symmetric system using the factors obtained from CSIFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex symmetric matrix stored in packed form using the factors from CSPFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a complex symmetric system using the factors obtained from CSPFA.
Perform the singular value decomposition of a rectangular matrix.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Solve a system of the form T*X=B or CTRANS(T)*X=B, where T is a triangular matrix. Here CTRANS(T) is the conjugate transpose.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a band matrix using the factors computed by DGBCO or DGBFA.
Factor a band matrix using Gaussian elimination.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by DGBCO or DGBFA.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant and inverse of a matrix using the factors computed by DGECO or DGEFA.
Factor a matrix using Gaussian elimination.
Solve the real system A*X=B or TRANS(A)*X=B using the factors computed by DGECO or DGEFA.
Solve a tridiagonal linear system.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by DPBCO or DPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by DPBCO or DPBFA.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by DPOCO, DPOFA or DQRDC.
Factor a real symmetric positive definite matrix.
Solve the real symmetric positive definite linear system using the factors computed by DPOCO or DPOFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from DPPCO or DPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by DPPCO or DPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of DQRDC to compute coordinate transformations, projections, and least squares solutions.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from DSIFA.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from DSIFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from DSPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from DSPFA.
Perform the singular value decomposition of a rectangular matrix.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant of a band matrix using the factors computed by SGBCO or SGBFA.
Factor a band matrix using Gaussian elimination.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by SGBCO or SGBFA.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Compute the determinant and inverse of a matrix using the factors computed by SGECO or SGEFA.
Factor a matrix using Gaussian elimination.
Solve the real system A*X=B or TRANS(A)*X=B using the factors of SGECO or SGEFA.
Solve a tridiagonal linear system.
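A tridiagonal system can be solved in O(n) by the Thomas algorithm, sketched below without pivoting (so it assumes a diagonally dominant matrix; the library routine handles the general case):

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Solve T x = rhs where T has subdiagonal `sub` (length n-1),
    diagonal `diag` (length n), and superdiagonal `sup` (length n-1)."""
    n = len(diag)
    c, d, b = list(sup) + [0.0], list(diag), list(rhs)
    for i in range(1, n):                 # forward elimination
        m = sub[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n                         # back substitution
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x
```

With T = tridiag(1, 2, 1) of order 3 and rhs [3, 4, 3], the solution is [1, 1, 1].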
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by SPBCO or SPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by SPBCO or SPBFA.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by SPOCO, SPOFA or SQRDC.
Factor a real symmetric positive definite matrix.
Solve the real symmetric positive definite linear system using the factors computed by SPOCO or SPOFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from SPPCO or SPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by SPPCO or SPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of SQRDC to compute coordinate transformations, projections, and least squares solutions.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from SSIFA.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from SSIFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from SSPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from SSPFA.
Perform the singular value decomposition of a rectangular matrix.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
Compute the log gamma correction factor so that LOG(CGAMMA(Z)) = 0.5*LOG(2.*PI) + (Z-0.5)*LOG(Z) - Z + C9LGMC(Z).
Compute the log Gamma correction factor so that LOG(DGAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + D9LGMC(X).
Compute the log Gamma correction factor so that LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + R9LGMC(X).
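The correction term in the identity above can be recovered numerically from the standard library's lgamma, which makes the defining relation easy to check (illustration only; the library routines evaluate a Chebyshev series instead):

```python
import math

def lgmc(x):
    """Log-gamma correction implied by
    LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + lgmc(X)."""
    stirling = math.log(math.sqrt(2.0 * math.pi)) + (x - 0.5) * math.log(x) - x
    return math.lgamma(x) - stirling
```

The leading asymptotic behaviour is 1/(12*X), so lgmc(10) is close to 1/120 and the correction shrinks as X grows.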
Compute the logarithm of the absolute value of the Gamma function.
Evaluate ln(1+X) accurate in the sense of relative error.
Compute the log gamma correction factor so that LOG(CGAMMA(Z)) = 0.5*LOG(2.*PI) + (Z-0.5)*LOG(Z) - Z + C9LGMC(Z).
Evaluate LOG(1+Z) to second-order relative accuracy, so that LOG(1+Z) = Z - Z**2/2 + Z**3*C9LN2R(Z).
Compute the logarithm of the absolute value of the Gamma function.
Evaluate ln(1+X) accurate in the sense of relative error.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Compute the log Gamma correction factor so that LOG(DGAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + D9LGMC(X).
Evaluate LOG(1+X) to second-order relative accuracy, so that LOG(1+X) = X - X**2/2 + X**3*D9LN2R(X).
Compute the logarithm of the absolute value of the Gamma function.
Evaluate ln(1+X) accurate in the sense of relative error.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Compute the log Gamma correction factor so that LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + R9LGMC(X).
Evaluate LOG(1+X) to second-order relative accuracy, so that LOG(1+X) = X - X**2/2 + X**3*R9LN2R(X).
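The defining relation above pins down the correction term uniquely, and it can be recovered from the standard library's log1p (illustration only; near X = 0 this naive form loses digits to cancellation, which is exactly why the library evaluates a series):

```python
import math

def ln2r(x):
    """Second-order correction C(x) in
    LOG(1+X) = X - X**2/2 + X**3*C(X); C(x) -> 1/3 as x -> 0."""
    return (math.log1p(x) - x + x * x / 2.0) / x ** 3
```

For small X the series gives C(X) = 1/3 - X/4 + X**2/5 - ..., so ln2r(0.1) is about 0.3102.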
Compute the logarithm of the Gamma function.
Compute the logarithm of the Gamma function.
Compute the natural logarithm of the complete Beta function.
Compute the natural logarithm of the complete Beta function.
Compute the natural logarithm of the complete Beta function.
Compute the logarithmic confluent hypergeometric function.
Evaluate for large Z Z**A * U(A,B,Z) where U is the logarithmic confluent hypergeometric function.
Compute the logarithmic confluent hypergeometric function.
Evaluate for large Z Z**A * U(A,B,Z) where U is the logarithmic confluent hypergeometric function.
Lower Triangle Preconditioner SLAP Set Up. Routine to store the lower triangle of a matrix stored in the SLAP Column format.
Lower Triangle Preconditioner SLAP Set Up. Routine to store the lower triangle of a matrix stored in the SLAP Column format.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Solve a linear least squares problem by performing a QR factorization of the input matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve an underdetermined linear system of equations by performing an LQ factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve a linear least squares problem by performing a QR factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve an underdetermined linear system of equations by performing an LQ factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Compute the eigenvalues of a complex upper Hessenberg matrix using the modified LR method.
Compute the eigenvalues and eigenvectors of a complex upper Hessenberg matrix using the modified LR method.
Return floating point machine dependent constants.
Return integer machine dependent constants.
Return floating point machine dependent constants.
Return floating point machine dependent constants.
Return floating point machine dependent constants.
Return floating point machine dependent constants.
Return integer machine dependent constants.
Return integer machine dependent constants.
Return integer machine dependent constants.
Return integer machine dependent constants.
Return floating point machine dependent constants.
Return floating point machine dependent constants.
Return floating point machine dependent constants.
Return floating point machine dependent constants.
Return floating point machine dependent constants.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Compute the determinant of a complex band matrix using the factors from CGBCO or CGBFA.
Solve the complex band system A*X=B or CTRANS(A)*X=B using the factors computed by CGBCO or CGBFA.
Compute the determinant and inverse of a matrix using the factors computed by CGECO or CGEFA.
Solve the complex system A*X=B or CTRANS(A)*X=B using the factors computed by CGECO or CGEFA.
Solve a tridiagonal linear system.
Compute the determinant, inertia and inverse of a complex Hermitian matrix using the factors obtained from CHIFA.
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Solve the complex Hermitian system using factors obtained from CHIFA.
Compute the determinant, inertia and inverse of a complex Hermitian matrix stored in packed form using the factors obtained from CHPFA.
Solve a complex Hermitian system using factors obtained from CHPFA.
Compute the determinant of a complex Hermitian positive definite band matrix using the factors computed by CPBCO or CPBFA.
Solve the complex Hermitian positive definite band system using the factors computed by CPBCO or CPBFA.
Compute the determinant and inverse of a certain complex Hermitian positive definite matrix using the factors computed by CPOCO, CPOFA, or CQRDC.
Solve the complex Hermitian positive definite linear system using the factors computed by CPOCO or CPOFA.
Compute the determinant and inverse of a complex Hermitian positive definite matrix using factors from CPPCO or CPPFA.
Solve the complex Hermitian positive definite system using the factors computed by CPPCO or CPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of CQRDC to compute coordinate transformations, projections, and least squares solutions.
Compute the determinant and inverse of a complex symmetric matrix using the factors from CSIFA.
Solve a complex symmetric system using the factors obtained from CSIFA.
Compute the determinant and inverse of a complex symmetric matrix stored in packed form using the factors from CSPFA.
Solve a complex symmetric system using the factors obtained from CSPFA.
Perform the singular value decomposition of a rectangular matrix.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Compute the determinant of a band matrix using the factors computed by DGBCO or DGBFA.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by DGBCO or DGBFA.
Compute the determinant and inverse of a matrix using the factors computed by DGECO or DGEFA.
Solve the real system A*X=B or TRANS(A)*X=B using the factors computed by DGECO or DGEFA.
Solve a tridiagonal linear system.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by DPBCO or DPBFA.
Solve a real symmetric positive definite band system using the factors computed by DPBCO or DPBFA.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by DPOCO, DPOFA or DQRDC.
Solve the real symmetric positive definite linear system using the factors computed by DPOCO or DPOFA.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from DPPCO or DPPFA.
Solve the real symmetric positive definite system using the factors computed by DPPCO or DPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of DQRDC to compute coordinate transformations, projections, and least squares solutions.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from DSIFA.
Solve a real symmetric system using the factors obtained from DSIFA.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from DSPFA.
Solve a real symmetric system using the factors obtained from DSPFA.
Perform the singular value decomposition of a rectangular matrix.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Downdate an augmented Cholesky decomposition or the triangular factor of an augmented QR decomposition.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Compute the determinant of a band matrix using the factors computed by SGBCO or SGBFA.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by SGBCO or SGBFA.
Compute the determinant and inverse of a matrix using the factors computed by SGECO or SGEFA.
Solve the real system A*X=B or TRANS(A)*X=B using the factors of SGECO or SGEFA.
Solve a tridiagonal linear system.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by SPBCO or SPBFA.
Solve a real symmetric positive definite band system using the factors computed by SPBCO or SPBFA.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by SPOCO, SPOFA or SQRDC.
Solve the real symmetric positive definite linear system using the factors computed by SPOCO or SPOFA.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from SPPCO or SPPFA.
Solve the real symmetric positive definite system using the factors computed by SPPCO or SPPFA.
Solve a positive definite tridiagonal linear system.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of SQRDC to compute coordinate transformations, projections, and least squares solutions.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from SSIFA.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Solve a real symmetric system using the factors obtained from SSIFA.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from SSPFA.
Solve a real symmetric system using the factors obtained from SSPFA.
Perform the singular value decomposition of a rectangular matrix.
Compute the determinant and inverse of a triangular matrix.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Factor a band matrix using Gaussian elimination.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Factor a matrix using Gaussian elimination.
Factor a complex Hermitian matrix by elimination with symmetric pivoting and estimate the condition of the matrix.
Factor a complex Hermitian matrix by elimination (symmetric pivoting).
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Factor a band matrix by elimination.
Factor a complex Hermitian positive definite matrix stored in band form and estimate the condition number of the matrix.
Factor a complex Hermitian positive definite matrix stored in band form.
Factor a complex Hermitian positive definite matrix and estimate the condition number of the matrix.
Factor a complex Hermitian positive definite matrix.
Factor a complex Hermitian positive definite matrix stored in packed form and estimate the condition number of the matrix.
Factor a complex Hermitian positive definite matrix stored in packed form.
Factor a complex symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a complex symmetric matrix by elimination with symmetric pivoting.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Factor a band matrix using Gaussian elimination.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Factor a matrix using Gaussian elimination.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Factor a band matrix by elimination.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix stored in band form.
Factor a real symmetric positive definite matrix and estimate the condition of the matrix.
Factor a real symmetric positive definite matrix.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix stored in packed form.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Factor a band matrix by Gaussian elimination and estimate the condition number of the matrix.
Factor a band matrix using Gaussian elimination.
Factor a matrix using Gaussian elimination and estimate the condition number of the matrix.
Factor a matrix using Gaussian elimination.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Factor a real band matrix by elimination.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix stored in band form.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Factor a real symmetric positive definite matrix stored in packed form.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Read a Sparse Linear System in the Boeing/Harwell Format. The matrix is read in, and if the right-hand side is also present in the input file, it too is read in. The matrix is then modified to be in the SLAP Column format.
Read a Sparse Linear System in the Boeing/Harwell Format. The matrix is read in, and if the right-hand side is also present in the input file, it too is read in. The matrix is then modified to be in the SLAP Column format.
SLAP Column Format Sparse Matrix Transpose Vector Product. Routine to calculate the sparse matrix vector product: Y = A'*X, where ' denotes transpose.
SLAP Column Format Sparse Matrix Transpose Vector Product. Routine to calculate the sparse matrix vector product: Y = A'*X, where ' denotes transpose.
SLAP Column Format Sparse Matrix Vector Product. Routine to calculate the sparse matrix vector product: Y = A*X.
SLAP Column Format Sparse Matrix Vector Product. Routine to calculate the sparse matrix vector product: Y = A*X.
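Both products above follow directly from compressed-column storage, in which a column-pointer array delimits the nonzeros of each column (SLAP Column format additionally keeps each column's diagonal entry first, which does not matter for the sums below). A sketch for illustration only:

```python
def csc_matvec(n, vals, rows, colptr, x):
    """y = A*x: the nonzeros of column j, vals[colptr[j]:colptr[j+1]],
    are scattered into y at their row indices."""
    y = [0.0] * n
    for j in range(n):
        for k in range(colptr[j], colptr[j + 1]):
            y[rows[k]] += vals[k] * x[j]
    return y

def csc_matvec_t(n, vals, rows, colptr, x):
    """y = A'*x: columns of A are rows of A', so each column is
    reduced against x instead of scattered into y."""
    return [sum(vals[k] * x[rows[k]] for k in range(colptr[j], colptr[j + 1]))
            for j in range(n)]
```

For A = [[1, 2], [0, 3]] stored as vals = [1, 2, 3], rows = [0, 0, 1], colptr = [0, 1, 3], the products with x = [1, 1] are A*x = [3, 3] and A'*x = [1, 5].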
Find the smallest index of the component of a complex vector having the maximum sum of magnitudes of real and imaginary parts.
Find the smallest index of that component of a vector having the maximum magnitude.
Find the smallest index of that component of a vector having the maximum magnitude.
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
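The consistency check amounts to comparing the supplied gradients against difference quotients of the functions. A much simplified stand-in, using a fixed central-difference step (the library routine is more careful and avoids the step-size choice entirely):

```python
def check_gradient(f, grad, x, tol=1e-4):
    """Return True if the analytic gradient `grad` of scalar `f` agrees
    with central differences at x, componentwise, to relative tolerance."""
    h = 1e-6
    g = grad(x)
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        fd = (f(xp) - f(xm)) / (2.0 * h)
        if abs(fd - g[i]) > tol * max(1.0, abs(fd)):
            return False
    return True
```

A correct gradient for f(x) = x0**2 + 3*x1 passes; one with a wrong component fails.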
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute a sequence of the Bessel functions I(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions K(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions I(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions K(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
This routine computes modified Chebyshev moments. The K-th modified Chebyshev moment is defined as the integral over (-1,1) of W(X)*T(K,X), where T(K,X) is the Chebyshev polynomial of degree K.
This routine computes modified Chebyshev moments. The K-th modified Chebyshev moment is defined as the integral over (-1,1) of W(X)*T(K,X), where T(K,X) is the Chebyshev polynomial of degree K.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
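For orientation, a standard (unmodified) Givens rotation chooses c and s so that [c s; -s c] applied to (a, b) zeros the second component; the modified routines above implement a scaled variant that avoids the square root. A sketch of the plain form only, to illustrate the transformation itself:

```python
import math

def givens(a, b):
    """Return (c, s) with c*a + s*b = r = hypot(a, b) and -s*a + c*b = 0."""
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r
```

For (a, b) = (3, 4) this gives (c, s) = (0.6, 0.8), rotating the vector onto (5, 0).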
Evaluate the Airy modulus and phase.
Evaluate the modulus and phase for the J0 and Y0 Bessel functions.
Evaluate the modulus and phase for the J1 and Y1 Bessel functions.
Evaluate the Airy modulus and phase.
Check a cubic Hermite function for monotonicity.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See DPCHIC if user control is desired over boundary or switch conditions.)
Check a cubic Hermite function for monotonicity.
Documentation for PCHIP, a Fortran package for piecewise cubic Hermite interpolation of data.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See PCHIC if user control is desired over boundary or switch conditions.)
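The heart of such routines is choosing derivative values at the data points so the cubic Hermite pieces stay monotone. One standard recipe is the Fritsch-Butland weighted harmonic mean of the secant slopes, with a zero derivative wherever the slopes change sign; a sketch with crude end conditions, not the library's boundary handling:

```python
def monotone_derivs(x, y):
    """Derivative values at the data points for a monotonicity-preserving
    cubic Hermite interpolant (Fritsch-Butland formula)."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    s = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]  # secant slopes
    d = [0.0] * n
    d[0], d[-1] = s[0], s[-1]              # simple one-sided end conditions
    for i in range(1, n - 1):
        if s[i - 1] * s[i] > 0.0:          # adjacent slopes agree in sign
            d[i] = 3.0 * (h[i - 1] + h[i]) / (
                (2.0 * h[i] + h[i - 1]) / s[i - 1]
                + (h[i] + 2.0 * h[i - 1]) / s[i])
        # else d[i] stays 0: the interpolant gets an extremum there
    return d
```

Linear data reproduces its slope exactly, and data with a direction switch gets a zero derivative at the switch point.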
Solve a square system of nonlinear equations.
Solve a square system of nonlinear equations.
Integrate a function using a 7-point adaptive Newton-Cotes quadrature rule.
Integrate a function using a 7-point adaptive Newton-Cotes quadrature rule.
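The non-adaptive core of such a rule is a single application of the closed 7-point Newton-Cotes formula, which is exact for polynomials of degree up to 7; the library routines apply it adaptively with error control. A sketch of the single-panel rule only:

```python
def nc7(f, a, b):
    """Closed 7-point Newton-Cotes rule on [a, b]:
    (h/140) * (41 f0 + 216 f1 + 27 f2 + 272 f3 + 27 f4 + 216 f5 + 41 f6)
    with h = (b - a)/6 and equally spaced nodes."""
    w = (41.0, 216.0, 27.0, 272.0, 27.0, 216.0, 41.0)
    h = (b - a) / 6.0
    return h / 140.0 * sum(wi * f(a + i * h) for i, wi in enumerate(w))
```

For example, nc7 integrates x**7 over [0, 1] to exactly 1/8 up to rounding.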
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Preconditioned GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for DGMRES.
Internal routine for DGMRES.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for DGMRES.
Internal routine for DGMRES.
Internal routine for DGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally scaled GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
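The "diagonal scaling" in the three solvers above is Jacobi preconditioning: the iteration is run with M**-1 = diag(A)**-1 as a cheap approximate inverse. The idea can be sketched with plain preconditioned conjugate gradients on a small dense SPD system (not one of the routines above, which work on SLAP-format sparse matrices and offer several stop tests):

```python
def diag_scaled_cg(a, b, tol=1e-10, maxit=200):
    """Jacobi-preconditioned CG for a dense SPD matrix `a` (list of lists)."""
    n = len(b)
    minv = [1.0 / a[i][i] for i in range(n)]      # M^{-1} = diag(A)^{-1}
    x, r = [0.0] * n, list(b)                     # x0 = 0, r = b - A x0
    z = [minv[i] * r[i] for i in range(n)]
    p, rz = list(z), sum(r[i] * z[i] for i in range(n))
    for _ in range(maxit):
        ap = [sum(a[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

On the 2x2 system A = [[4, 1], [1, 3]], b = [1, 2], it converges to (1/11, 7/11) in two iterations.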
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill-in is allowed.
Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
Internal routine for DGMRES.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a nonzero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Preconditioned GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for SGMRES.
Internal routine for SGMRES.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for SGMRES.
Internal routine for SGMRES.
Internal routine for SGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally Scaled GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
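Diagonal scaling is the simplest preconditioner in this family: the preconditioner solve M z = r reduces to elementwise division by diag(A). A hedged Python sketch of diagonally scaled (Jacobi-preconditioned) conjugate gradient, on a dense toy matrix rather than the SLAP sparse format:

```python
import numpy as np

def jacobi_preconditioned_cg(A, b, tol=1e-10, maxiter=200):
    # Preconditioned CG where the M-solve is division by diag(A).
    d = np.diag(A).copy()
    x = np.zeros_like(b)
    r = b - A @ x
    z = r / d                      # z = M^{-1} r, M = diag(A)
    p = z.copy()
    for _ in range(maxiter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            return x
        z_new = r_new / d
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi_preconditioned_cg(A, b)
```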
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill-in is allowed.

Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
Internal routine for SGMRES.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes X = (LDU)**(-1) * B.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes X = (LDU)**(-1) * B.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
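The LDU backsolve amounts to a unit-lower-triangular forward solve, a diagonal scaling (recall that the inverse of D is what gets stored), and a unit-upper-triangular back solve. A dense Python sketch of the non-transposed variant:

```python
import numpy as np

def ldu_solve(L, dinv, U, b):
    # Solve L*D*U x = b, with L unit lower triangular, U unit upper
    # triangular, and dinv holding the reciprocals of diag(D).
    n = len(b)
    y = b.astype(float).copy()
    for i in range(n):                 # forward solve L y = b
        y[i] -= L[i, :i] @ y[:i]
    y *= dinv                          # diagonal solve D z = y
    for i in range(n - 1, -1, -1):     # back solve U x = z
        y[i] -= U[i, i + 1:] @ y[i + 1:]
    return y

# Build a small L*D*U product to solve against.
L = np.array([[1.0, 0.0], [0.5, 1.0]])
U = np.array([[1.0, 0.25], [0.0, 1.0]])
d = np.array([4.0, 2.75])
A = L @ np.diag(d) @ U
b = np.array([1.0, 2.0])
x = ldu_solve(L, 1.0 / d, U, b)
```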
The routine calculates an approximation RESULT to a given definite integral I = integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given definite integral I = integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
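The accuracy claim ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)) is a mixed absolute/relative tolerance. The sketch below mirrors only that stopping criterion inside a simple adaptive Simpson scheme; the actual routines use Gauss-Kronrod rules with extrapolation:

```python
import math

def adaptive_simpson(f, a, b, epsabs, epsrel):
    # Subdivide until the local error estimate meets the mixed
    # tolerance max(EPSABS, EPSREL * |estimate|).
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole):
        m = 0.5 * (a + b)
        flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        better = left + right
        if abs(better - whole) <= max(epsabs, epsrel * abs(better)):
            return better
        return (recurse(a, m, fa, flm, fm, left)
                + recurse(m, b, fm, frm, fb, right))

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a))

# Integral of sin over (0, pi) is exactly 2.
result = adaptive_simpson(math.sin, 0.0, math.pi, 1e-9, 1e-9)
```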
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
Check the gradients of M nonlinear functions in N variables, evaluated at a point X, for consistency with the functions themselves.
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
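When F(B) and F(C) have opposite signs, a bracketing method is guaranteed to converge. Plain bisection illustrates the idea (the actual routine accelerates this with secant steps):

```python
def bisect_zero(f, b, c, tol=1e-12):
    # Keep halving the interval while preserving the sign change.
    fb, fc = f(b), f(c)
    assert fb * fc < 0, "endpoints must bracket a zero"
    while abs(c - b) > tol:
        m = 0.5 * (b + c)
        fm = f(m)
        if fb * fm <= 0:
            c, fc = m, fm
        else:
            b, fb = m, fm
    return 0.5 * (b + c)

root = bisect_zero(lambda x: x * x - 2.0, 1.0, 2.0)
```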
Calculate the covariance matrix for a nonlinear data fitting problem. It is intended to be used after a successful return from either DNLS1 or DNLS1E.
Minimize the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
Calculate the covariance matrix for a nonlinear data fitting problem. It is intended to be used after a successful return from either SNLS1 or SNLS1E.
Minimize the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
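The Levenberg-Marquardt modification solves the damped normal equations (J'J + lambda*I) dx = -J'r and adjusts the damping according to whether the sum of squares decreased. A bare-bones Python sketch, omitting the scaling and trust-region safeguards of the library codes:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, iters=100):
    # Accept a step if it reduces the sum of squares; otherwise
    # increase the damping and try again next iteration.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5   # accept, trust the model more
        else:
            lam *= 10.0                   # reject, damp harder
    return x

# Illustrative fit of y = a*exp(b*t) to synthetic data with a=2, b=-1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 0.0])
```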
Solve a square system of nonlinear equations.
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
Solve a square system of nonlinear equations.
Calculate the covariance matrix for a nonlinear data fitting problem. It is intended to be used after a successful return from either DNLS1 or DNLS1E.
Minimize the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
Calculate the covariance matrix for a nonlinear data fitting problem. It is intended to be used after a successful return from either SNLS1 or SNLS1E.
Minimize the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
An easy-to-use code which minimizes the sum of the squares of M nonlinear functions in N variables by a modification of the Levenberg-Marquardt algorithm.
Find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
Find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
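Powell's hybrid method blends Newton steps with steepest-descent (dogleg) steps and maintains a quasi-Newton Jacobian estimate. The sketch below shows only the Newton core for a square system F(x) = 0, with an analytic Jacobian:

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-12, maxiter=50):
    # Plain Newton iteration: x <- x - J(x)^{-1} F(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(jac(x), fx)
    return x

# Solve x^2 + y^2 = 2 and x - y = 0; a root is x = y = 1.
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
sol = newton_system(f, J, [2.0, 0.5])
```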
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by CNBCO or CNBFA.
Factor a band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a general nonsymmetric banded system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a complex band system using the factors computed by CNBCO or CNBFA.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by DNBCO or DNBFA.
Factor a band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a real band system using the factors computed by DNBCO or DNBFA.
Factor a band matrix using Gaussian elimination and estimate the condition number.
Compute the determinant of a band matrix using the factors computed by SNBCO or SNBFA.
Factor a real band matrix by elimination.
Solve a general nonsymmetric banded system of linear equations.
Solve a general nonsymmetric banded system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve a real band system using the factors computed by SNBCO or SNBFA.
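For the tridiagonal special case of a band matrix, elimination reduces to the Thomas algorithm. A Python sketch (the library band routines handle general bandwidths and add partial pivoting, neither of which is shown):

```python
import numpy as np

def thomas(lower, diag, upper, b):
    # Forward elimination then back substitution on the three bands.
    n = len(diag)
    c = np.array(upper, dtype=float)
    d = np.array(diag, dtype=float)
    y = np.array(b, dtype=float)
    for i in range(1, n):                 # eliminate the subdiagonal
        m = lower[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        y[i] -= m * y[i - 1]
    x = np.empty(n)
    x[-1] = y[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (y[i] - c[i] * x[i + 1]) / d[i]
    return x

# Classic 1-D [-1, 2, -1] stencil matrix; solution is all ones.
A = (np.diag([2.0, 2.0, 2.0]) + np.diag([-1.0, -1.0], 1)
     + np.diag([-1.0, -1.0], -1))
x = thomas([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0])
```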
Generate a normally distributed (Gaussian) random number.
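One classical way to obtain a Gaussian deviate from uniform ones is the Box-Muller transformation, sketched below. (The library generator may use a different method; this only illustrates the transformation idea.)

```python
import math
import random

def box_muller():
    # Two uniforms on [0, 1) give one standard normal deviate.
    u1, u2 = random.random(), random.random()
    return math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)

random.seed(0)
sample = [box_muller() for _ in range(10000)]
mean = sum(sample) / len(sample)
var = sum((s - mean) ** 2 for s in sample) / len(sample)
```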
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
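In array-language terms, the permutation vector is what numpy.argsort returns: applying it to the array performs the rearrangement, and sorting on a negated key gives decreasing order.

```python
import numpy as np

a = np.array([30, 10, 20])
perm = np.argsort(a)        # permutation that sorts a increasingly
sorted_a = a[perm]          # applying the permutation rearranges a
perm_desc = np.argsort(-a)  # decreasing order via a negated key
```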
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
Integrate a real function of one variable over a finite interval using an adaptive 8-point Legendre-Gauss algorithm. Intended primarily for high accuracy integration or integration of smooth functions.
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
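A single application of the 8-point Legendre-Gauss rule looks like this in Python; the adaptive routine wraps the rule in interval subdivision with error estimation, which is omitted here:

```python
import numpy as np

def gauss_legendre_8(f, a, b):
    # Map the nodes from (-1, 1) onto (a, b) and take the weighted sum.
    x, w = np.polynomial.legendre.leggauss(8)
    xm = 0.5 * (a + b) + 0.5 * (b - a) * x
    return 0.5 * (b - a) * np.sum(w * f(xm))

# Integral of sin over (0, pi) is exactly 2; an 8-point rule is
# already extremely accurate for this smooth integrand.
val = gauss_legendre_8(np.sin, 0.0, np.pi)
```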
The function of CDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. CDRIV1 allows complex-valued differential equations.
The function of CDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. CDRIV2 allows complex-valued differential equations.
The function of CDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. CDRIV3 allows complex-valued differential equations.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
The function of DDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. DDRIV1 uses double precision arithmetic.
The function of DDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. DDRIV2 uses double precision arithmetic.
The function of DDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. DDRIV3 uses double precision arithmetic.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
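All of the ODE drivers above advance the solution one step at a time. Classical fixed-step RK4 shows the stepping idea (the library codes are adaptive, with Adams, BDF, or Runge-Kutta-Fehlberg formulas and error control):

```python
import numpy as np

def rk4(f, t0, y0, t1, n):
    # Classical fourth-order Runge-Kutta with n equal steps.
    t, y = t0, np.asarray(y0, dtype=float)
    h = (t1 - t0) / n
    for _ in range(n):
        k1 = f(y, t)
        k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(y + h * k3, t + h)
        y = y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return y

# dY/dT = -Y with Y(0) = 1, so Y(1) = exp(-1).
y1 = rk4(lambda y, t: -y, 0.0, [1.0], 1.0, 100)
err = abs(y1[0] - np.exp(-1.0))
```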
Approximate the solution at XOUT by evaluating the polynomial computed in DSTEPS at XOUT. Must be used in conjunction with DSTEPS.
Integrate a system of first order ordinary differential equations one step.
The function of SDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. SDRIV1 uses single precision arithmetic.
The function of SDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. SDRIV2 uses single precision arithmetic.
The function of SDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. SDRIV3 uses single precision arithmetic.
Approximate the solution at XOUT by evaluating the polynomial computed in STEPS at XOUT. Must be used in conjunction with STEPS.
Integrate a system of first order ordinary differential equations one step.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute the Bessel function of the second kind of order one.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the Bessel function of the first kind of order one.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the Bessel function of the second kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the Bessel function of the second kind of order zero.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the Bessel function of the second kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
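The modified Bessel function I0 has the simple power series I0(x) = sum over k of (x/2)**(2k) / (k!)**2, sketched below. (The library routines instead use Chebyshev expansions, and the exponentially scaled variants return exp(-x)*I0(x) or exp(x)*K0(x) to avoid overflow and underflow at large x.)

```python
def bessel_i0(x, terms=30):
    # Accumulate the power series term by term; each term is the
    # previous one times (x^2/4) / k^2.
    s, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x * x / 4.0) / (k * k)
        s += term
    return s

val = bessel_i0(1.0)   # reference value: I0(1) = 1.26606587775201...
```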
The function of CDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. CDRIV1 allows complex-valued differential equations.
The function of CDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. CDRIV2 allows complex-valued differential equations.
The function of CDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. CDRIV3 allows complex-valued differential equations.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
The function of DDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. DDRIV1 uses double precision arithmetic.
The function of DDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. DDRIV2 uses double precision arithmetic.
The function of DDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. DDRIV3 uses double precision arithmetic.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
Approximate the solution at XOUT by evaluating the polynomial computed in DSTEPS at XOUT. Must be used in conjunction with DSTEPS.
Integrate a system of first order ordinary differential equations one step.
The function of SDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. SDRIV1 uses single precision arithmetic.
The function of SDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. SDRIV2 uses single precision arithmetic.
The function of SDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. SDRIV3 uses single precision arithmetic.
Approximate the solution at XOUT by evaluating the polynomial computed in STEPS at XOUT. Must be used in conjunction with STEPS.
Integrate a system of first order ordinary differential equations one step.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of CQRDC to compute coordinate transformations, projections, and least squares solutions.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of DQRDC to compute coordinate transformations, projections, and least squares solutions.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Apply the output of SQRDC to compute coordinate transformations, projections, and least squares solutions.
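The factor-then-apply split is the same as computing A = QR once and then reusing Q and R: for least squares, solve R x = Q'b. In NumPy terms (numpy.linalg.qr also uses Householder reflections; column pivoting is not shown):

```python
import numpy as np

# Least squares line fit through (1,1), (2,2), (3,2).
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
Q, R = np.linalg.qr(A)            # reduced QR factorization
x = np.linalg.solve(R, Q.T @ b)   # the least-squares "apply" step
```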
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Solve a linear two-point boundary value problem using superposition coupled with an orthonormalization procedure and a variable-step integration scheme.
Solve a linear two-point boundary value problem using superposition coupled with an orthonormalization procedure and a variable-step integration scheme.
Pack a base 2 exponent into a floating point number.
Pack a base 2 exponent into a floating point number.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a complex Hermitian matrix stored in packed form using the factors obtained from CHPFA.
Factor a complex Hermitian matrix stored in packed form by elimination with symmetric pivoting.
Solve a complex Hermitian system using factors obtained from CHPFA.
Factor a complex Hermitian positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex Hermitian positive definite matrix using factors from CPPCO or CPPFA.
Factor a complex Hermitian positive definite matrix stored in packed form.
Solve the complex Hermitian positive definite system using the factors computed by CPPCO or CPPFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex symmetric matrix stored in packed form using the factors from CSPFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a complex symmetric system using the factors obtained from CSPFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from DPPCO or DPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by DPPCO or DPPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, and inverse of a real symmetric matrix stored in packed form using the factors from DSPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from DSPFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from SPPCO or SPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by SPPCO or SPPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, and inverse of a real symmetric matrix stored in packed form using the factors from SSPFA.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix stored in packed form.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from SSPFA.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Return the permutation vector generated by sorting a substring within a character array and, optionally, rearrange the elements of the array. The array may be sorted in forward or reverse lexicographical order. A slightly modified quicksort algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by PCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use CHFEV instead.
Evaluate a cubic polynomial given in Hermite form at an array of points. While designed for use by PCHFE, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance.
Evaluate a cubic polynomial given in Hermite form and its first derivative at an array of points. While designed for use by DPCHFD, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance. If only function values are required, use DCHFEV instead.
Evaluate a cubic polynomial given in Hermite form at an array of points. While designed for use by DPCHFE, it may be useful directly as an evaluator for a piecewise cubic Hermite function in applications, such as graphing, where the interval is known in advance.
Check a cubic Hermite function for monotonicity.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC. If only function values are required, use DPCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC.
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See DPCHIC if user control is desired over boundary or switch conditions.)
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
Check a cubic Hermite function for monotonicity.
Documentation for PCHIP, a Fortran package for piecewise cubic Hermite interpolation of data.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC. If only function values are required, use PCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC.
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See PCHIC if user control is desired over boundary or switch conditions.)
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
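Each interval of a piecewise cubic Hermite function is a cubic in Hermite form: two endpoint values and two endpoint slopes. Evaluation with the standard Hermite basis, as the CHFEV-style kernels do per interval, sketched in Python:

```python
import numpy as np

def chfev(x1, x2, f1, f2, d1, d2, xe):
    # Evaluate the cubic with values f1, f2 and slopes d1, d2 at the
    # interval ends x1, x2, using the standard Hermite basis in the
    # normalized variable t = (x - x1) / (x2 - x1).
    h = x2 - x1
    t = (np.asarray(xe, dtype=float) - x1) / h
    h00 = (1.0 + 2.0 * t) * (1.0 - t) ** 2
    h10 = t * (1.0 - t) ** 2
    h01 = t * t * (3.0 - 2.0 * t)
    h11 = t * t * (t - 1.0)
    return h00 * f1 + h * h10 * d1 + h01 * f2 + h * h11 * d2

# Cubic Hermite interpolation reproduces cubics exactly:
# f(x) = x^3 on [0, 1] has f(0)=0, f(1)=1, f'(0)=0, f'(1)=3.
vals = chfev(0.0, 1.0, 0.0, 1.0, 0.0, 3.0, [0.0, 0.5, 1.0])
```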
Solve by a cyclic reduction algorithm the linear system of equations that results from a finite difference approximation to certain 2-D elliptic PDE's on a centered grid.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in cylindrical coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in polar coordinates.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve the standard seven-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve the standard five-point finite difference approximation to the Helmholtz equation in Cartesian coordinates.
Solve a finite difference approximation to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve a standard finite difference approximation to the Helmholtz equation in cylindrical coordinates.
Solve a finite difference approximation to the Helmholtz equation in polar coordinates.
Solve a finite difference approximation to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve a block tridiagonal system of linear equations that results from a staggered grid finite difference approximation to 2-D elliptic PDE's.
Discretize and solve a second and, optionally, a fourth order finite difference approximation on a uniform grid to the general separable elliptic partial differential equation on a rectangle with any combination of periodic or mixed boundary conditions.
Solve for either the second or fourth order finite difference approximation to the solution of a separable elliptic partial differential equation on a rectangle. Any combination of periodic or mixed boundary conditions is allowed.
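As an illustration of the five-point discretization that the Helmholtz and Poisson solvers above operate on, here is a minimal sketch for the Poisson special case (Helmholtz with lambda = 0) on the unit square with homogeneous Dirichlet conditions. The grid size and right-hand side are illustrative, and a general sparse solver stands in for the cyclic reduction and FFT methods those routines actually use:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

# Standard five-point finite difference approximation to -Laplace(u) = f
# on the unit square with homogeneous Dirichlet boundary conditions.
n = 49                                   # interior grid points per direction
h = 1.0 / (n + 1)
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = identity(n)
A = (kron(I, T) + kron(T, I)) / h**2     # 2-D five-point Laplacian

f = np.ones(n * n)                       # right-hand side f = 1
u = spsolve(A.tocsr(), f)

# Centre value; the exact series solution gives about 0.0737 there.
center = u.reshape(n, n)[n // 2, n // 2]
print(center)
```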
Rearrange a given array according to a prescribed permutation vector.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Evaluate the Airy modulus and phase.
Evaluate the modulus and phase for the J0 and Y0 Bessel functions.
Evaluate the modulus and phase for the J1 and Y1 Bessel functions.
Evaluate the Airy modulus and phase.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC. If only function values are required, use DPCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for DPCHIM or DPCHIC.
Evaluate a piecewise cubic Hermite function and its first derivative at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC. If only function values are required, use PCHFE instead.
Evaluate a piecewise cubic Hermite function at an array of points. May be used by itself for Hermite interpolation, or as an evaluator for PCHIM or PCHIC.
Piecewise Cubic Hermite to B-Spline converter.
Check a cubic Hermite function for monotonicity.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See DPCHIC if user control is desired over boundary or switch conditions.)
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
Piecewise Cubic Hermite to B-Spline converter.
Check a cubic Hermite function for monotonicity.
Documentation for PCHIP, a Fortran package for piecewise cubic Hermite interpolation of data.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Set derivatives needed to determine a monotone piecewise cubic Hermite interpolant to given data. Boundary values are provided which are compatible with monotonicity. The interpolant will have an extremum at each point where monotonicity switches direction. (See PCHIC if user control is desired over boundary or switch conditions.)
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
Convert the B-representation of a B-spline to the piecewise polynomial (PP) form.
Convert the B-representation of a B-spline to the piecewise polynomial (PP) form.
Apply a plane Givens rotation.
Apply a plane Givens rotation.
Apply a plane Givens rotation.
Evaluate a generalization of Pochhammer's symbol.
Calculate a generalization of Pochhammer's symbol starting from first order.
Evaluate a generalization of Pochhammer's symbol.
Calculate a generalization of Pochhammer's symbol starting from first order.
Solve a three-dimensional block tridiagonal linear system which arises from a finite difference approximation to a three-dimensional Poisson equation using the Fourier transform package FFTPACK written by Paul Swarztrauber.
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in polar coordinates.
Solve a finite difference approximation to the Helmholtz equation in polar coordinates.
Compute the complex arc tangent in the proper quadrant.
Compute the coefficients of the polynomial fit (including Hermite polynomial fits) produced by a previous call to POLINT.
Compute the coefficients of the polynomial fit (including Hermite polynomial fits) produced by a previous call to POLINT.
Use the coefficients generated by DPOLFT to evaluate the polynomial fit of degree L, along with the first NDER of its derivatives, at a specified point.
Use the coefficients generated by POLFIT to evaluate the polynomial fit of degree L, along with the first NDER of its derivatives, at a specified point.
Calculate the value of a polynomial and its first NDER derivatives where the polynomial was produced by a previous call to DPLINT.
Calculate the value of a polynomial and its first NDER derivatives where the polynomial was produced by a previous call to POLINT.
Convert the DPOLFT coefficients to Taylor series form.
Fit discrete data in a least squares sense by polynomials in one variable.
Convert the POLFIT coefficients to Taylor series form.
Fit discrete data in a least squares sense by polynomials in one variable.
Produce the polynomial which interpolates a set of discrete data points.
Produce the polynomial which interpolates a set of discrete data points.
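The fit-then-evaluate workflow of the entries above (POLFIT followed by PVALUE, or DPOLFT followed by DP1VLU) corresponds to the following NumPy usage; the data are made up for illustration:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Least squares polynomial fit in one variable (cf. POLFIT), then
# evaluation of the fit and its first derivative (cf. PVALUE, NDER = 1).
# Data lie exactly on a quadratic, so the degree-2 fit reproduces it.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = x**2 + 1.0

p = Polynomial.fit(x, y, deg=2)
value = p(2.0)                  # fitted value at x = 2     -> 5
slope = p.deriv()(2.0)          # first derivative at x = 2 -> 4
print(value, slope)
```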
Find the zeros of a polynomial with complex coefficients.
Find the zeros of a polynomial with complex coefficients.
Find the zeros of a polynomial with real coefficients.
Find the zeros of a polynomial with real coefficients.
Find the zeros of a polynomial with complex coefficients.
Find the zeros of a polynomial with complex coefficients.
Find the zeros of a polynomial with real coefficients.
Find the zeros of a polynomial with real coefficients.
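Finding polynomial zeros as in the entries above can be sketched with NumPy's companion-matrix root finder; the polynomials are illustrative:

```python
import numpy as np

# Zeros of a polynomial with real coefficients:
# p(x) = x**3 - 6x**2 + 11x - 6 = (x - 1)(x - 2)(x - 3).
zeros = np.sort(np.roots([1.0, -6.0, 11.0, -6.0]))

# The same companion-matrix approach handles complex coefficients;
# x**2 + 1 has zeros +i and -i.
czeros = np.roots([1.0, 0.0, 1.0])
print(zeros, czeros)
```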
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Factor a complex Hermitian positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a complex Hermitian positive definite band matrix using the factors computed by CPBCO or CPBFA.
Factor a complex Hermitian positive definite matrix stored in band form.
Solve the complex Hermitian positive definite band system using the factors computed by CPBCO or CPBFA.
Factor a complex Hermitian positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain complex Hermitian positive definite matrix using the factors computed by CPOCO, CPOFA, or CQRDC.
Factor a complex Hermitian positive definite matrix.
Solve a positive definite symmetric complex system of linear equations.
Solve a positive definite Hermitian system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve the complex Hermitian positive definite linear system using the factors computed by CPOCO or CPOFA.
Factor a complex Hermitian positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex Hermitian positive definite matrix using factors from CPPCO or CPPFA.
Factor a complex Hermitian positive definite matrix stored in packed form.
Solve the complex Hermitian positive definite system using the factors computed by CPPCO or CPPFA.
Solve a positive definite tridiagonal linear system.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by DPBCO or DPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by DPBCO or DPBFA.
Factor a real symmetric positive definite matrix and estimate the condition of the matrix.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by DPOCO, DPOFA or DQRDC.
Factor a real symmetric positive definite matrix.
Solve a positive definite symmetric system of linear equations.
Solve the real symmetric positive definite linear system using the factors computed by DPOCO or DPOFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from DPPCO or DPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by DPPCO or DPPFA.
Solve a positive definite tridiagonal linear system.
Compute the Cholesky decomposition of a positive definite matrix. A pivoting option allows the user to estimate the condition number of a positive definite matrix or determine the rank of a positive semidefinite matrix.
Update the Cholesky factorization A=TRANS(R)*R of a positive definite matrix A of order P under diagonal permutations of the form TRANS(E)*A*E, where E is a permutation matrix.
Factor a real symmetric positive definite matrix stored in band form and estimate the condition number of the matrix.
Compute the determinant of a symmetric positive definite band matrix using the factors computed by SPBCO or SPBFA.
Factor a real symmetric positive definite matrix stored in band form.
Solve a real symmetric positive definite band system using the factors computed by SPBCO or SPBFA.
Factor a real symmetric positive definite matrix and estimate the condition number of the matrix.
Compute the determinant and inverse of a certain real symmetric positive definite matrix using the factors computed by SPOCO, SPOFA or SQRDC.
Factor a real symmetric positive definite matrix.
Solve a positive definite symmetric system of linear equations.
Solve a positive definite symmetric system of linear equations. Iterative refinement is used to obtain an error estimate.
Solve the real symmetric positive definite linear system using the factors computed by SPOCO or SPOFA.
Factor a symmetric positive definite matrix stored in packed form and estimate the condition number of the matrix.
Compute the determinant and inverse of a real symmetric positive definite matrix using factors from SPPCO or SPPFA.
Factor a real symmetric positive definite matrix stored in packed form.
Solve the real symmetric positive definite system using the factors computed by SPPCO or SPPFA.
Solve a positive definite tridiagonal linear system.
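The factor/solve pairs above (e.g. SPOFA/SPOSL and their double precision and complex counterparts) map onto SciPy's cho_factor and cho_solve; the matrix below is illustrative:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Factor a real symmetric positive definite matrix (cf. SPOFA) and solve
# A x = b with the factors (cf. SPOSL).
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])

factors = cho_factor(A)         # Cholesky factorization A = TRANS(R)*R
x = cho_solve(factors, b)

# A rough stand-in for the condition estimate reported by SPOCO:
cond = np.linalg.cond(A)
print(x, cond)
```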
Find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
Find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
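scipy.optimize.fsolve wraps MINPACK's HYBRD, the same modified Powell hybrid method named above, so the calling pattern can be sketched as follows (the system of equations is illustrative):

```python
import numpy as np
from scipy.optimize import fsolve

# Find a zero of a system of N nonlinear functions in N variables:
#   x0*cos(x1) - 4 = 0
#   x0*x1 - x1 - 5 = 0
def equations(v):
    x0, x1 = v
    return [x0 * np.cos(x1) - 4.0, x0 * x1 - x1 - 5.0]

root = fsolve(equations, [1.0, 1.0])     # starting guess (1, 1)
residual = equations(root)
print(root, residual)
```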
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Solve an initial value problem in ordinary differential equations using an Adams-Bashforth method.
Approximate the solution at XOUT by evaluating the polynomial computed in DSTEPS at XOUT. Must be used in conjunction with DSTEPS.
Integrate a system of first order ordinary differential equations one step.
Approximate the solution at XOUT by evaluating the polynomial computed in STEPS at XOUT. Must be used in conjunction with STEPS.
Integrate a system of first order ordinary differential equations one step.
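A modern analogue of the Adams-based drivers and one-step integrators above is SciPy's solve_ivp with the LSODA method, which applies Adams formulas on non-stiff problems; dense output plays the role of the interpolation entries above. The test problem is illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate y' = -y, y(0) = 1, then interpolate the solution between
# the integrator's own steps via dense_output.
sol = solve_ivp(lambda t, y: -y, (0.0, 2.0), [1.0],
                method="LSODA", rtol=1e-8, atol=1e-10, dense_output=True)
y_at_1 = sol.sol(1.0)[0]        # solution at t = 1, close to exp(-1)
print(y_at_1)
```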
Print error messages processed by XERMSG.
Compute the Psi (or Digamma) function.
Compute the Psi (or Digamma) function.
Compute derivatives of the Psi function.
To compute values of the Psi function for DXLEGF.
Compute derivatives of the Psi function.
Compute the Psi (or Digamma) function.
To compute values of the Psi function for XLEGF.
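The Psi (digamma) function and its derivatives are available in SciPy as psi and polygamma; two standard check values:

```python
from scipy.special import polygamma, psi

# Psi (digamma) and its derivatives:
euler_gamma = -psi(1.0)          # psi(1) = -gamma (Euler's constant)
trigamma_1 = polygamma(1, 1.0)   # psi'(1) = pi**2 / 6
print(euler_gamma, trigamma_1)
```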
Compute the eigenvalues of a symmetric tridiagonal matrix by the QL method.
Compute the eigenvalues of a symmetric tridiagonal matrix using a rational variant of the QL method.
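Eigenvalue computations for symmetric tridiagonal matrices, as in the QL entries above, can be sketched with SciPy's eigh_tridiagonal; the test matrix is the 1-D discrete Laplacian, whose spectrum is known in closed form:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Eigenvalues of a symmetric tridiagonal matrix.  The 1-D discrete
# Laplacian has the known spectrum 2 - 2*cos(k*pi/(n + 1)).
n = 5
d = 2.0 * np.ones(n)            # diagonal
e = -1.0 * np.ones(n - 1)       # off-diagonal
w = eigh_tridiagonal(d, e, eigvals_only=True)

exact = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(w)
```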
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Use Householder transformations to compute the QR factorization of an N by P matrix. Column pivoting is a user's option.
Solve linear least squares problems by performing a QR factorization of the input matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve linear least squares problems by performing a QR factorization of the input matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve a linear least squares problem by performing a QR factorization of the matrix using Householder transformations.
Solve a linear least squares problem by performing a QR factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve a linear least squares problem by performing a QR factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
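Rank-revealing Householder QR least squares, as described above, corresponds to SciPy's lstsq with the pivoted-QR based LAPACK driver GELSY; the rank-deficient matrix below is illustrative:

```python
import numpy as np
from scipy.linalg import lstsq

# QR with column pivoting solves the least squares problem and reports
# the detected rank.  The matrix is deliberately rank-deficient:
# column 3 equals column 1 + column 2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 2.0, 3.0, 5.0])

x, res, rank, sv = lstsq(A, b, lapack_driver="gelsy")
print(rank, x)
```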
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I))
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities) are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X)=COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
Calculate an approximation to a given definite integral I= Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points, see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points, see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
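SciPy's scipy.integrate.quad is a direct wrapper of the QUADPACK routines catalogued above (QAGS, QAGI, QAGP, QAWC and relatives), so their behaviour can be exercised as follows; the integrands are illustrative:

```python
import numpy as np
from scipy.integrate import quad

# Finite interval (QAGS):
val, err = quad(np.exp, 0.0, 1.0)                       # e - 1

# Infinite range (QAGI):
tail, _ = quad(lambda x: np.exp(-x * x), 0.0, np.inf)   # sqrt(pi)/2

# User-supplied break points at known local difficulties (QAGP):
kink, _ = quad(lambda x: abs(x - 0.5), 0.0, 1.0, points=[0.5])

# Cauchy principal value of F(X)*W(X), W(X) = 1/(X-C), C = 0.25 (QAWC):
pv, _ = quad(lambda x: 1.0, -1.0, 1.0, weight="cauchy", wvar=0.25)
print(val, tail, kink, pv)
```

Each returned pair is the approximation RESULT together with an estimate of ABS(I-RESULT), matching the accuracy claim in the entries above.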
To compute I = Integral of F*W over (A,B) with error estimate, where W(X) = 1/(X-C)
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
To compute I = Integral of F*W over (BL,BR), with error estimate, where the weight function W has a singular behaviour of ALGEBRAICO-LOGARITHMIC type at the points A and/or B. (BL,BR) is a part of (A,B).
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
The original infinite integration range is mapped onto the interval (0,1), and (A,B) is a part of (0,1). The purpose is to compute I = Integral of the transformed integrand over (A,B) and J = Integral of ABS(transformed integrand) over (A,B).
To compute I = Integral of F*W over (A,B), with error estimate J = Integral of ABS(F*W) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
This routine computes modified Chebyshev moments. The K-th modified Chebyshev moment is defined as the integral over (-1,1) of W(X)*T(K,X), where T(K,X) is the Chebyshev polynomial of degree K.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I))
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities) are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points, see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points, see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
To compute I = Integral of F*W over (A,B) with error estimate, where W(X) = 1/(X-C)
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
To compute I = Integral of F*W over (BL,BR), with error estimate, where the weight function W has a singular behaviour of ALGEBRAICO-LOGARITHMIC type at the points A and/or B. (BL,BR) is a part of (A,B).
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
The original infinite integration range is mapped onto the interval (0,1), and (A,B) is a part of (0,1). The purpose is to compute I = Integral of the transformed integrand over (A,B) and J = Integral of ABS(transformed integrand) over (A,B).
To compute I = Integral of F*W over (A,B), with error estimate J = Integral of ABS(F*W) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
This routine computes modified Chebyshev moments. The K-th modified Chebyshev moment is defined as the integral over (-1,1) of W(X)*T(K,X), where T(K,X) is the Chebyshev polynomial of degree K.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Documentation for QUADPACK, a package of subprograms for automatic evaluation of one-dimensional definite integrals.
Compute the complex arc tangent in the proper quadrant.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
Solve a linearly constrained least squares problem with equality and inequality constraints, and optionally compute a covariance matrix.
Solve a linearly constrained least squares problem with equality constraints and nonnegativity constraints on selected variables.
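The nonnegativity part of the problem these constrained least squares routines solve corresponds to SciPy's nnls, an implementation of the Lawson-Hanson algorithm (the equality-constraint handling of LSEI/WNNLS has no direct analogue here); the data are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

# Least squares with nonnegativity constraints on the variables.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([2.0, 1.0, -1.0])

x, rnorm = nnls(A, b)    # the unconstrained solution would be (2, -1)
print(x, rnorm)
```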
Integrate a function tabulated at arbitrarily spaced abscissas using overlapping parabolas.
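AVINT integrates tabulated data at arbitrarily spaced abscissas with overlapping parabolas; SciPy's simpson accepts the same kind of non-uniform samples and is likewise exact for quadratic data. The samples below are illustrative:

```python
import numpy as np
from scipy.integrate import simpson

# Integrate a function tabulated at arbitrarily spaced abscissas.
x = np.array([0.0, 0.3, 0.7, 1.2, 1.8, 2.5, 3.0])
y = x**2                        # exact integral over (0,3) is 9
approx = simpson(y, x=x)
print(approx)
```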
Compute the integral of a product of a function and a derivative of a B-spline.
Compute the integral of a K-th order B-spline using the B-representation.
Integrate a function tabulated at arbitrarily spaced abscissas using overlapping parabolas.
Compute the integral of a product of a function and a derivative of a K-th order B-spline.
Compute the integral of a K-th order B-spline using the B-representation.
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline, (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I))
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities) are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X)=COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points, see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
To compute I = Integral of F*W over (A,B) with error estimate, where W(X) = 1/(X-C)
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
To compute I = Integral of F*W over (BL,BR), with error estimate, where the weight function W has a singular behaviour of ALGEBRAICO-LOGARITHMIC type at the points A and/or B. (BL,BR) is a part of (A,B).
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
The original infinite integration range is mapped onto the interval (0,1), and (A,B) is a part of (0,1). The purpose is to compute I = Integral of the transformed integrand over (A,B) and J = Integral of ABS(Transformed Integrand) over (A,B).
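The mapping these infinite-range routines use can be illustrated with the substitution X = (1-T)/T, which carries (0,+INFINITY) onto (0,1). The helper names below are hypothetical, and a plain midpoint rule stands in for the library's quadrature; this is only a sketch of the transformation idea.

```python
import math

def transform_integrand(f):
    # Map F on (0, +inf) to G on (0, 1) via x = (1 - t) / t,
    # dx = -dt / t**2, so Integral(F, 0, inf) = Integral(G, 0, 1).
    def g(t):
        x = (1.0 - t) / t
        return f(x) / (t * t)
    return g

def integrate_01(g, n=20000):
    # Midpoint rule on (0,1); sample points avoid the endpoints,
    # where the transformed integrand may be singular.
    h = 1.0 / n
    return h * sum(g((i + 0.5) * h) for i in range(n))
```

With `f = lambda x: math.exp(-x)` the result should approximate Integral of EXP(-X) over (0,INFINITY) = 1.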
To compute I = Integral of F*W over (A,B), with error estimate J = Integral of ABS(F*W) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
This routine computes modified Chebyshev moments. The K-th modified Chebyshev moment is defined as the integral over (-1,1) of W(X)*T(K,X), where T(K,X) is the Chebyshev polynomial of degree K.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Evaluate the definite integral of a piecewise cubic Hermite function over an arbitrary interval.
Evaluate the definite integral of a piecewise cubic Hermite function over an interval whose endpoints are data points.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given integral I = Integral of F over (BOUND,+INFINITY) or I = Integral of F over (-INFINITY,BOUND) or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY) where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.EPSABS.
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), (where W shows a singular behaviour at the end points, see parameter INTEGR). Hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
To compute I = Integral of F*W over (A,B) with error estimate, where W(X) = 1/(X-C)
To compute the integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), and to compute J = Integral of ABS(F) over (A,B). For small values of OMEGA or small intervals (A,B) the 15-point GAUSS-KRONROD rule is used. Otherwise a generalized CLENSHAW-CURTIS method is used.
To compute I = Integral of F*W over (BL,BR), with error estimate, where the weight function W has a singular behaviour of ALGEBRAICO-LOGARITHMIC type at the points A and/or B. (BL,BR) is a part of (A,B).
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
The original infinite integration range is mapped onto the interval (0,1), and (A,B) is a part of (0,1). The purpose is to compute I = Integral of the transformed integrand over (A,B) and J = Integral of ABS(Transformed Integrand) over (A,B).
To compute I = Integral of F*W over (A,B), with error estimate J = Integral of ABS(F*W) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B), with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
To compute I = Integral of F over (A,B) with error estimate J = Integral of ABS(F) over (A,B)
This routine computes modified Chebyshev moments. The K-th modified Chebyshev moment is defined as the integral over (-1,1) of W(X)*T(K,X), where T(K,X) is the Chebyshev polynomial of degree K.
The routine calculates an approximation result to a given definite integral I = integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Documentation for QUADPACK, a package of subprograms for automatic evaluation of one-dimensional definite integrals.
Evaluate the 3j symbol f(L1) = ( L1 L2 L3) (-M2-M3 M2 M3) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3 ) (M1 M2 -M1-M2) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3} {L4 L5 L6} for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol f(L1) = ( L1 L2 L3) (-M2-M3 M2 M3) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3 ) (M1 M2 -M1-M2) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3} {L4 L5 L6} for all allowed values of L1, the other parameters being held fixed.
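For the special case L3 = 0 the 3j symbol has the closed form (J J 0; M -M 0) = (-1)**(J-M)/SQRT(2J+1), which makes a convenient sanity check for such evaluators. A hedged sketch (integer J and M only; the function name is hypothetical):

```python
import math

def three_j_l3_zero(j, m):
    # Closed form of the 3j symbol (j j 0; m -m 0); zero when |m| > j.
    if abs(m) > j:
        return 0.0
    return (-1.0) ** (j - m) / math.sqrt(2 * j + 1)
```

For instance, (1 1 0; 0 0 0) evaluates to -1/SQRT(3).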
Generate a uniformly distributed random number.
Generate a normally distributed (Gaussian) random number.
Generate a uniformly distributed random number.
Find the zeros of a polynomial with complex coefficients.
Find the zeros of a polynomial with real coefficients.
Rearrange a given array according to a prescribed permutation vector.
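Rearranging by a prescribed permutation vector amounts to gathering elements by index; a minimal out-of-place sketch (the Fortran routine works in place by following permutation cycles, which this does not attempt):

```python
def apply_permutation(a, perm):
    # Build the rearranged array: the element originally at index
    # perm[i] lands at index i (0-based analogue of the routine).
    return [a[p] for p in perm]
```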
Save or recall global variables needed by error handling routines.
Compute the reciprocal of the Gamma function.
Compute the reciprocal of the Gamma function.
Compute the reciprocal of the Gamma function.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
Compute the cube root.
Compute the cube root.
Compute the cube root.
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
Solve a square system of nonlinear equations.
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
Solve a square system of nonlinear equations.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
Solve an initial value problem in ordinary differential equations using a Runge-Kutta-Fehlberg scheme.
Save or recall global variables needed by error handling routines.
Multiply a vector by a constant.
Scale a complex vector.
Multiply a vector by a constant.
Multiply a vector by a constant.
The function of CDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. CDRIV1 allows complex-valued differential equations.
The function of CDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. CDRIV2 allows complex-valued differential equations.
The function of CDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. CDRIV3 allows complex-valued differential equations.
The function of DDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. DDRIV1 uses double precision arithmetic.
The function of DDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. DDRIV2 uses double precision arithmetic.
The function of DDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. DDRIV3 uses double precision arithmetic.
The function of SDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. SDRIV1 uses single precision arithmetic.
The function of SDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. SDRIV2 uses single precision arithmetic.
The function of SDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. SDRIV3 uses single precision arithmetic.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Evaluate LOG(1+Z) from second order relative accuracy so that LOG(1+Z) = Z - Z**2/2 + Z**3*C9LN2R(Z).
Evaluate LOG(1+X) from second order relative accuracy so that LOG(1+X) = X - X**2/2 + X**3*D9LN2R(X).
Evaluate LOG(1+X) from second order relative accuracy so that LOG(1+X) = X - X**2/2 + X**3*R9LN2R(X).
Discretize and solve a second and, optionally, a fourth order finite difference approximation on a uniform grid to the general separable elliptic partial differential equation on a rectangle with any combination of periodic or mixed boundary conditions.
Solve for either the second or fourth order finite difference approximation to the solution of a separable elliptic partial differential equation on a rectangle. Any combination of periodic or mixed boundary conditions is allowed.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
This routine maintains the descending ordering in the list of the local error estimates resulting from the interval subdivision process. At each call two error estimates are inserted using the sequential search method, top-down for the largest error estimate and bottom-up for the smallest error estimate.
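The sequential-search bookkeeping above amounts to ordered insertion into a descending list. A sketch using the standard `bisect` module, which searches ascending sequences, so the comparison key is negated:

```python
import bisect

def insert_descending(errors, estimate):
    # `errors` is kept sorted in descending order; bisect expects an
    # ascending sequence, so locate the slot using negated keys.
    keys = [-e for e in errors]
    errors.insert(bisect.bisect_left(keys, -estimate), estimate)
```

In the actual routine the two estimates from one subdivision are inserted per call, one searched top-down and one bottom-up.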
Subsidiary to QAGE, QAGIE, QAGPE, QAGSE, QAWCE, QAWOE and QAWSE.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Set derivatives needed to determine a piecewise monotone piecewise cubic Hermite interpolant to given data. User control is available over boundary conditions and/or treatment of points where monotonicity switches direction.
Solve a linear two-point boundary value problem using superposition coupled with an orthonormalization procedure and a variable-step integration scheme.
Solve a linear two-point boundary value problem using superposition coupled with an orthonormalization procedure and a variable-step integration scheme.
Compute the sine of an argument in degrees.
Compute the sine of an argument in degrees.
The function of SDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. SDRIV1 uses single precision arithmetic.
The function of SDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. SDRIV2 uses single precision arithmetic.
The function of SDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. SDRIV3 uses single precision arithmetic.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Return the permutation vector generated by sorting a substring within a character array and, optionally, rearrange the elements of the array. The array may be sorted in forward or reverse lexicographical order. A slightly modified quicksort algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an integer array, moving an integer and DP array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the double precision array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an integer array, moving an integer and real array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the real array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
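Making "the same interchanges in an auxiliary array" is equivalent to sorting an index vector by key and permuting both arrays with it. A sketch (Python's built-in sort is Timsort, not the modified quicksort these routines use):

```python
def sort_with_companion(a, b, descending=False):
    # Sort a, applying the same interchanges to the companion array b.
    order = sorted(range(len(a)), key=a.__getitem__, reverse=descending)
    return [a[i] for i in order], [b[i] for i in order]
```

The `order` list is exactly the permutation vector the IPSORT-style routines return.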
Perform the singular value decomposition of a rectangular matrix.
Perform the singular value decomposition of a rectangular matrix.
Perform the singular value decomposition of a rectangular matrix.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
The routine calculates an approximation result to a given definite integral I = Integral of F over (A,B), hopefully satisfying following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities, discontinuities), are provided by the user.
Approximate a given definite integral I = Integral of F over (A,B), hopefully satisfying the accuracy claim: ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)). Break points of the integration interval, where local difficulties of the integrand may occur (e.g. singularities or discontinuities), are provided by the user.
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
SLAP WORK/IWORK Array Bounds Checker. This routine checks the work array lengths and interfaces to the SLATEC error handler if a problem is found.
Preconditioned GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for DGMRES.
Internal routine for DGMRES.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for DGMRES.
Internal routine for DGMRES.
Internal routine for DGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally scaled GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
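Gauss-Seidel iteration uses each updated solution component as soon as it is available within a sweep. A dense-matrix sketch; the SLAP routine works on sparse column storage, and convergence is only guaranteed for suitable matrices (e.g. diagonally dominant):

```python
def gauss_seidel(A, b, iters=100):
    # Solve Ax = b by Gauss-Seidel sweeps on a dense list-of-lists A.
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Use already-updated x[j] for j < i within this sweep.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```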
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill in is allowed.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes L**(-1)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b , where L is a lower triangular matrix.
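Solving Lx = b with L lower triangular is one forward-substitution pass over the rows. A dense sketch (the SLAP version exploits sparse storage of L):

```python
def lower_solve(L, b):
    # Forward substitution for Lx = b, L lower triangular
    # (dense list-of-lists, nonzero diagonal assumed).
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x
```

The LDL' and LDU backsolves in the surrounding routines chain this step with a diagonal scaling and a corresponding backward substitution.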
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDL')**(-1)*B = X.
Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDU)**(-1)*B = X.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes (LDU)**(-T)*B = X.
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes [(LDU)*(LDU)']**(-1)*B = X.
SLAP Column Format Sparse Matrix Transpose Vector Product. Routine to calculate the sparse matrix vector product: Y = A'*X, where ' denotes transpose.
SLAP Column Format Sparse Matrix Vector Product. Routine to calculate the sparse matrix vector product: Y = A*X.
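In SLAP column format the matrix is held as a column-pointer array, a row-index array, and a value array, and Y = A*X is accumulated one column at a time. A hypothetical 0-based sketch of that product:

```python
def csc_matvec(n, colptr, rowind, vals, x):
    # y = A*x for A stored column-wise: column j holds the entries
    # vals[colptr[j]:colptr[j+1]], sitting at rows rowind[k].
    y = [0.0] * n
    for j in range(len(colptr) - 1):
        for k in range(colptr[j], colptr[j + 1]):
            y[rowind[k]] += vals[k] * x[j]
    return y
```

The transpose product Y = A'*X swaps the roles of the row and column indices in the accumulation.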
Internal routine for DGMRES.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
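Each of the stop tests above compares a norm-based error estimate against the user tolerance TOL, with ITOL selecting which estimate is used. A minimal Python sketch of an ITOL=1-style relative-residual test (not the SLATEC code; the function name and the choice of the 2-norm are illustrative):

```python
def stop_test(residual, b, tol):
    # ITOL=1-style test: stop when ||r|| / ||b|| <= TOL (2-norms here).
    norm = lambda v: sum(x * x for x in v) ** 0.5
    err = norm(residual) / norm(b)
    return 1 if err <= tol else 0  # nonzero signals convergence
```

A nonzero return tells the driving iteration it may stop.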
Sort an integer array, moving an integer and DP array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the double precision array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an integer array, moving an integer and real array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the real array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
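These companion-array sorts keep IA, JA, and A aligned while ordering on IA. A small Python sketch of the same idea (illustrative names; the SLATEC routines sort in place with a modified quicksort, while this sketch returns new lists):

```python
def sort_with_companions(ia, ja, a, descending=False):
    # Sort IA and apply the same interchanges to JA and A.
    order = sorted(range(len(ia)), key=lambda k: ia[k], reverse=descending)
    return ([ia[k] for k in order],
            [ja[k] for k in order],
            [a[k] for k in order])
```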
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
SLAP WORK/IWORK Array Bounds Checker. This routine checks the work array lengths and interfaces to the SLATEC error handler if a problem is found.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
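Iterative refinement with a matrix splitting M repeatedly applies x <- x + M**(-1)*(b - A*x). A hedged Python sketch (matvec and msolve are hypothetical callables standing in for the MATVEC/MSOLVE arguments of the Fortran routine):

```python
def iterative_refinement(matvec, msolve, b, x0, iters):
    # Splitting-based refinement: x <- x + M**(-1) * (b - A*x),
    # where msolve applies an approximate inverse of A (the splitting M).
    x = list(x0)
    for _ in range(iters):
        r = [bi - axi for bi, axi in zip(b, matvec(x))]
        dx = msolve(r)
        x = [xi + di for xi, di in zip(x, dx)]
    return x
```

With an exact msolve the iteration converges in one step; an approximate one trades work per step against step count.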
Preconditioned GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for SGMRES.
Internal routine for SGMRES.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for SGMRES.
Internal routine for SGMRES.
Internal routine for SGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally Scaled GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
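Gauss-Seidel updates each unknown using the most recent values of the others. A dense-matrix Python sketch of the sweep (illustrative only; the SLAP routine works on sparse storage):

```python
def gauss_seidel(A, b, x, sweeps):
    # Each sweep updates x[i] in place using the latest other unknowns.
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```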
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill in is allowed.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
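Jacobi iteration differs from Gauss-Seidel in that the new iterate is built entirely from the previous one, which makes every component update independent. A dense-matrix Python sketch (illustrative; the SLAP routine operates on sparse storage):

```python
def jacobi(A, b, x, sweeps):
    # Every component of the new iterate uses only the previous iterate.
    n = len(b)
    for _ in range(sweeps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```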
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes L**(-1)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b, where L is a lower triangular matrix.
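The backsolve here is plain forward substitution. A dense Python sketch (0-based; the Fortran routine works on SLAP sparse storage):

```python
def lower_backsolve(L, b):
    # Forward substitution for Lx = b with L lower triangular.
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        x[i] = (b[i] - sum(L[i][j] * x[j] for j in range(i))) / L[i][i]
    return x
```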
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDL')**(-1)*B = X.
Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDU)**(-1)*B = X.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes (LDU)**(-T)*B = X.
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes [(LDU)*(LDU)']**(-1)*B = X.
SLAP Column Format Sparse Matrix Transpose Vector Product. Routine to calculate the sparse matrix vector product: Y = A'*X, where ' denotes transpose.
SLAP Column Format Sparse Matrix Vector Product. Routine to calculate the sparse matrix vector product: Y = A*X.
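In a column-oriented sparse layout each column's nonzeros are stored contiguously with their row indices, so Y = A*X is an accumulation over columns. A Python sketch of the product for a CSC-like layout (0-based here, whereas the Fortran routines are 1-based; ja holds column pointers, ia row indices):

```python
def slap_column_matvec(n, ia, ja, a, x):
    # y = A*x: column j holds values a[ja[j]:ja[j+1]] at rows ia[...].
    y = [0.0] * n
    for j in range(n):
        for k in range(ja[j], ja[j + 1]):
            y[ia[k]] += a[k] * x[j]
    return y
```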
Internal routine for SGMRES.
Read a Sparse Linear System in the Boeing/Harwell Format. The matrix is read in and if the right hand side is also present in the input file then it too is read in. The matrix is then modified to be in the SLAP Column format.
Printer Plot of SLAP Column Format Matrix. Routine to print out a SLAP Column format matrix in a "printer plot" graphical representation.
Lower Triangle Preconditioner SLAP Set Up. Routine to store the lower triangle of a matrix stored in the SLAP Column format.
SLAP Triad to SLAP Column Format Converter. Routine to convert from the SLAP Triad to SLAP Column format.
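The Triad format stores unordered (row, column, value) triples; conversion to the Column format amounts to grouping entries by column. A Python sketch of the analogous COO-to-CSC conversion (0-based and illustrative; the SLAP converter works in place and places each column's diagonal entry first, which this sketch does not do):

```python
def triad_to_column(n, ia, ja, a):
    # Convert triads (ia[k], ja[k], a[k]) to a CSC-like layout:
    # row indices, column pointers, and values, sorted by (column, row).
    order = sorted(range(len(a)), key=lambda k: (ja[k], ia[k]))
    rows = [ia[k] for k in order]
    vals = [a[k] for k in order]
    colptr = [0] * (n + 1)
    for k in order:
        colptr[ja[k] + 1] += 1
    for j in range(n):
        colptr[j + 1] += colptr[j]
    return rows, colptr, vals
```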
Diagonal Scaling Preconditioner SLAP Normal Eqns Set Up. Routine to compute the inverse of the diagonal of the matrix A*A', where A is stored in SLAP-Column format.
Diagonal Scaling Preconditioner SLAP Set Up. Routine to compute the inverse of the diagonal of a matrix stored in the SLAP Column format.
Diagonal Scaling of system Ax = b. This routine scales (and unscales) the system Ax = b by symmetric diagonal scaling.
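Symmetric diagonal scaling replaces Ax = b by (D**(-1/2) A D**(-1/2)) y = D**(-1/2) b with D = diag(A), which preserves symmetry; the solution is recovered as x = D**(-1/2) y. A dense Python sketch (illustrative; the SLAP routine scales sparse storage in place and can also unscale):

```python
def symmetric_diag_scale(A, b):
    # Scale Ax = b symmetrically by D = diag(A):
    # solve (D^-1/2 A D^-1/2) y = D^-1/2 b, then x[i] = s[i]*y[i].
    n = len(b)
    s = [A[i][i] ** -0.5 for i in range(n)]
    As = [[s[i] * A[i][j] * s[j] for j in range(n)] for i in range(n)]
    bs = [s[i] * b[i] for i in range(n)]
    return As, bs, s
```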
Incompl. Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Read in SLAP Triad Format Linear System. Routine to read in a SLAP Triad format matrix and right hand side and solution to the system, if known.
Write out SLAP Triad Format Linear System. Routine to write out a SLAP Triad format matrix and right hand side and solution to the system, if known.
Read a Sparse Linear System in the Boeing/Harwell Format. The matrix is read in and if the right hand side is also present in the input file then it too is read in. The matrix is then modified to be in the SLAP Column format.
Printer Plot of SLAP Column Format Matrix. Routine to print out a SLAP Column format matrix in a "printer plot" graphical representation.
Lower Triangle Preconditioner SLAP Set Up. Routine to store the lower triangle of a matrix stored in the SLAP Column format.
SLAP Triad to SLAP Column Format Converter. Routine to convert from the SLAP Triad to SLAP Column format.
Diagonal Scaling Preconditioner SLAP Normal Eqns Set Up. Routine to compute the inverse of the diagonal of the matrix A*A', where A is stored in SLAP-Column format.
Diagonal Scaling Preconditioner SLAP Set Up. Routine to compute the inverse of the diagonal of a matrix stored in the SLAP Column format.
Diagonal Scaling of system Ax = b. This routine scales (and unscales) the system Ax = b by symmetric diagonal scaling.
Incompl. Cholesky Decomposition Preconditioner SLAP Set Up. Routine to generate the Incomplete Cholesky decomposition, L*D*L-trans, of a symmetric positive definite matrix, A, which is stored in SLAP Column format. The unit lower triangular matrix L is stored by rows, and the inverse of the diagonal matrix D is stored.
Read in SLAP Triad Format Linear System. Routine to read in a SLAP Triad format matrix and right hand side and solution to the system, if known.
Write out SLAP Triad Format Linear System. Routine to write out a SLAP Triad format matrix and right hand side and solution to the system, if known.
Compute the complementary incomplete Gamma function for A near a negative integer and X small.
Compute Tricomi's incomplete Gamma function for small arguments.
Compute the complementary incomplete Gamma function for A near a negative integer and for small X.
Compute Tricomi's incomplete Gamma function for small arguments.
The routine calculates an approximation, RESULT, to a given definite integral I = integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
The routine calculates an approximation, RESULT, to a given definite integral I = integral of F over (A,B), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
Approximate the solution at XOUT by evaluating the polynomial computed in DSTEPS at XOUT. Must be used in conjunction with DSTEPS.
Approximate the solution at XOUT by evaluating the polynomial computed in STEPS at XOUT. Must be used in conjunction with STEPS.
Solve a square system of nonlinear equations.
Solve a square system of nonlinear equations.
Solve the complex band system A*X=B or CTRANS(A)*X=B using the factors computed by CGBCO or CGBFA.
Solve the complex system A*X=B or CTRANS(A)*X=B using the factors computed by CGECO or CGEFA.
Solve a tridiagonal linear system.
Solve the complex Hermitian system using factors obtained from CHIFA.
Solve a complex Hermitian system using factors obtained from CHPFA.
Solve a complex band system using the factors computed by CNBCO or CNBFA.
Solve the complex Hermitian positive definite band system using the factors computed by CPBCO or CPBFA.
Solve the complex Hermitian positive definite linear system using the factors computed by CPOCO or CPOFA.
Solve the complex Hermitian positive definite system using the factors computed by CPPCO or CPPFA.
Solve a positive definite tridiagonal linear system.
Apply the output of CQRDC to compute coordinate transformations, projections, and least squares solutions.
Solve a complex symmetric system using the factors obtained from CSIFA.
Solve a complex symmetric system using the factors obtained from CSPFA.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by DGBCO or DGBFA.
Solve the real system A*X=B or TRANS(A)*X=B using the factors computed by DGECO or DGEFA.
Solve a tridiagonal linear system.
Solve a real band system using the factors computed by DNBCO or DNBFA.
Solve a real symmetric positive definite band system using the factors computed by DPBCO or DPBFA.
Solve the real symmetric positive definite linear system using the factors computed by DPOCO or DPOFA.
Solve the real symmetric positive definite system using the factors computed by DPPCO or DPPFA.
Solve a positive definite tridiagonal linear system.
Apply the output of DQRDC to compute coordinate transformations, projections, and least squares solutions.
Solve a real symmetric system using the factors obtained from DSIFA.
Solve a real symmetric system using the factors obtained from DSPFA.
Solve the real band system A*X=B or TRANS(A)*X=B using the factors computed by SGBCO or SGBFA.
Solve the real system A*X=B or TRANS(A)*X=B using the factors of SGECO or SGEFA.
Solve a tridiagonal linear system.
Solve a real band system using the factors computed by SNBCO or SNBFA.
Solve a real symmetric positive definite band system using the factors computed by SPBCO or SPBFA.
Solve the real symmetric positive definite linear system using the factors computed by SPOCO or SPOFA.
Solve the real symmetric positive definite system using the factors computed by SPPCO or SPPFA.
Solve a positive definite tridiagonal linear system.
Apply the output of SQRDC to compute coordinate transformations, projections, and least squares solutions.
Solve a real symmetric system using the factors obtained from SSIFA.
Solve a real symmetric system using the factors obtained from SSPFA.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
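Returning a permutation vector rather than sorting in place lets the caller reorder any number of parallel arrays afterwards. A one-line Python sketch of the idea (0-based and illustrative; the SLATEC routines use a modified quicksort with 1-based indices):

```python
def sort_permutation(x, descending=False):
    # Permutation p such that [x[i] for i in p] is sorted.
    return sorted(range(len(x)), key=lambda i: x[i], reverse=descending)
```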
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Return the permutation vector generated by sorting a substring within a character array and, optionally, rearrange the elements of the array. The array may be sorted in forward or reverse lexicographical order. A slightly modified quicksort algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an integer array, moving an integer and DP array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the double precision array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an integer array, moving an integer and real array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the real array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
Return the permutation vector generated by sorting a given array and, optionally, rearrange the elements of the array. The array may be sorted in increasing or decreasing order. A slightly modified quicksort algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an integer array, moving an integer and DP array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the double precision array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an integer array, moving an integer and real array. This routine sorts the integer array IA and makes the same interchanges in the integer array JA and the real array A. The array IA may be sorted in increasing order or decreasing order. A slightly modified QUICKSORT algorithm is used.
Sort an array and optionally make the same interchanges in an auxiliary array. The array may be sorted in increasing or decreasing order. A slightly modified QUICKSORT algorithm is used.
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Preconditioned GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for DGMRES.
Internal routine for DGMRES.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for DGMRES.
Internal routine for DGMRES.
Internal routine for DGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally scaled GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill in is allowed.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes L**(-1)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b, where L is a lower triangular matrix.
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDL')**(-1)*B = X.
Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU GMRES iterative sparse Ax=b solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDU)**(-1)*B = X.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes (LDU)**(-T)*B = X.
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes [(LDU)*(LDU)']**(-1)*B = X.
SLAP Column Format Sparse Matrix Transpose Vector Product. Routine to calculate the sparse matrix vector product: Y = A'*X, where ' denotes transpose.
SLAP Column Format Sparse Matrix Vector Product. Routine to calculate the sparse matrix vector product: Y = A*X.
Internal routine for DGMRES.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned CG on Normal Equations Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme applied to the normal equations. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero if the error estimate (the type of which is determined by ITOL) is less than the user specified tolerance TOL.
Preconditioned BiConjugate Gradient Sparse Ax = b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient method.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Preconditioned CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the Preconditioned Conjugate Gradient method applied to the normal equations AA'y = b, x=A'y.
Preconditioned BiConjugate Gradient Squared Ax=b Solver. Routine to solve a Non-Symmetric linear system Ax = b using the Preconditioned BiConjugate Gradient Squared method.
Preconditioned Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using iterative refinement with a matrix splitting.
Preconditioned GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with preconditioning to solve non-symmetric linear systems of the form: Ax = b.
Internal routine for SGMRES.
Internal routine for SGMRES.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix and D is a diagonal matrix and ' means transpose.
Preconditioned Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Preconditioned Orthomin method.
Internal routine for SGMRES.
Internal routine for SGMRES.
Internal routine for SGMRES.
Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with diagonal scaling.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's. Routine to solve a general linear system Ax = b using diagonal scaling with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, where x = A'y.
Diagonally Scaled CGS Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with diagonal scaling.
Diagonally Scaled GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with diagonal scaling to solve possibly non-symmetric linear systems of the form: Ax = b.
Diagonal Matrix Vector Multiply. Routine to calculate the product X = DIAG*B, where DIAG is a diagonal matrix.
Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with diagonal scaling.
Gauss-Seidel Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Gauss-Seidel iteration.
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Incomplete LU Iterative Refinement Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with iterative refinement.
Incomplete LU Decomposition Preconditioner SLAP Set Up. Routine to generate the incomplete LDU decomposition of a matrix. The unit lower triangular factor L is stored by rows and the unit upper triangular factor U is stored by columns. The inverse of the diagonal matrix D is stored. No fill in is allowed.
Jacobi's Method Iterative Sparse Ax = b Solver. Routine to solve a general linear system Ax = b using Jacobi iteration.
SLAP MSOLVE for Lower Triangle Matrix. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes L**(-1)*B = X.
SLAP Lower Triangle Matrix Backsolve. Routine to solve a system of the form Lx = b , where L is a lower triangular matrix.
SLAP MSOLVE for LDL' (IC) Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDL')**(-1)*B = X.
Incomplete LU BiConjugate Gradient Sparse Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient method with Incomplete LU decomposition preconditioning.
Incomplete LU CG Sparse Ax=b Solver for Normal Equations. Routine to solve a general linear system Ax = b using the incomplete LU decomposition with the Conjugate Gradient method applied to the normal equations, viz., AA'y = b, x = A'y.
Incomplete LU BiConjugate Gradient Squared Ax=b Solver. Routine to solve a linear system Ax = b using the BiConjugate Gradient Squared method with Incomplete LU decomposition preconditioning.
Incomplete LU GMRES Iterative Sparse Ax=b Solver. This routine uses the generalized minimum residual (GMRES) method with incomplete LU factorization for preconditioning to solve possibly non-symmetric linear systems of the form: Ax = b.
SLAP MSOLVE for LDU Factorization. This routine acts as an interface between the SLAP generic MSOLVE calling convention and the routine that actually computes (LDU)**(-1)*B = X.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form L*D*U X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix.
SLAP Backsolve for LDU Factorization. Routine to solve a system of the form (L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
Incomplete LU Orthomin Sparse Iterative Ax=b Solver. Routine to solve a general linear system Ax = b using the Orthomin method with Incomplete LU decomposition.
SLAP MTSOLV for LDU Factorization. This routine acts as an interface between the SLAP generic MTSOLV calling convention and the routine that actually computes (LDU)**(-T)*B = X.
SLAP Backsolve for LDU Factorization of Normal Equations. To solve a system of the form (L*D*U)*(L*D*U)' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix and ' denotes transpose.
SLAP MSOLVE for LDU Factorization of Normal Equations. This routine acts as an interface between the SLAP generic MMTSLV calling convention and the routine that actually computes [(LDU)*(LDU)']**(-1)*B = X.
SLAP Column Format Sparse Matrix Transpose Vector Product. Routine to calculate the sparse matrix vector product: Y = A'*X, where ' denotes transpose.
SLAP Column Format Sparse Matrix Vector Product. Routine to calculate the sparse matrix vector product: Y = A*X.
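The SLAP Column format stores each column's nonzeros contiguously, much like the common compressed-sparse-column (CSC) layout. A hedged 0-based Python sketch of the two products above (SLAP itself is 1-based Fortran; the function names here are illustrative):

```python
def csc_matvec(n, colptr, rowind, vals, x):
    """y = A*x for a square A stored column-by-column:
    colptr[j]..colptr[j+1] indexes the nonzeros of column j."""
    y = [0.0] * n
    for j in range(n):
        for k in range(colptr[j], colptr[j + 1]):
            y[rowind[k]] += vals[k] * x[j]
    return y

def csc_matvec_t(n, colptr, rowind, vals, x):
    """y = A'*x: each column of A becomes a dot product with x."""
    return [sum(vals[k] * x[rowind[k]]
                for k in range(colptr[j], colptr[j + 1]))
            for j in range(n)]

# A = [[1, 2], [0, 3]]: column 0 holds {1 at row 0},
# column 1 holds {2 at row 0, 3 at row 1}.
colptr, rowind, vals = [0, 1, 3], [0, 0, 1], [1.0, 2.0, 3.0]
y = csc_matvec(2, colptr, rowind, vals, [1.0, 1.0])
yt = csc_matvec_t(2, colptr, rowind, vals, [1.0, 1.0])
```

Note that the transpose product needs no extra data structure: scanning a column of A is the same as scanning a row of A'.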
Internal routine for SGMRES.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Solve linear programming problems involving at most a few thousand constraints and variables. Takes advantage of sparsity in the constraint matrix.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Sparse Linear Algebra Package Version 2.0.2 Documentation. Routines to solve large sparse symmetric and nonsymmetric positive definite linear systems, Ax = b, using preconditioned iterative methods.
Evaluate the Airy function.
Calculate the Airy function for a negative argument and an exponentially scaled Airy function for a non-negative argument.
Compute the natural logarithm of the complete Beta function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithmic integral.
Compute the logarithm of the absolute value of the Gamma function.
Compute an N member sequence of I Bessel functions I/SUB(ALPHA+K-1)/(X), K=1,...,N or scaled Bessel functions EXP(-X)*I/SUB(ALPHA+K-1)/(X), K=1,...,N for non-negative ALPHA and X.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute an N member sequence of J Bessel functions J/SUB(ALPHA+K-1)/(X), K=1,...,N for non-negative ALPHA and X.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Implement forward recursion on the three term recursion relation for a sequence of non-negative order Bessel functions K/SUB(FNU+I-1)/(X), or scaled Bessel functions EXP(X)*K/SUB(FNU+I-1)/(X), I=1,...,N for real, positive X and non-negative orders FNU.
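The forward three-term recursion for the K Bessel functions is numerically stable in the direction of increasing order. An illustrative Python sketch (not the SLATEC algorithm): the first two members are seeded from the integral representation K_nu(x) = Integral from 0 to infinity of EXP(-x*COSH(t))*COSH(nu*t) dt via a composite Simpson rule, then the recursion fills in the rest. The helper names are hypothetical:

```python
import math

def k_bessel(nu, x, n=4000, tmax=20.0):
    """K_nu(x) from its integral representation, composite Simpson rule.
    The integrand decays like EXP(-x*COSH(t)), so truncating at tmax=20
    is harmless for moderate x > 0."""
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return s * h / 3.0

def besk_sequence(fnu, x, n):
    """K_{fnu}(x), ..., K_{fnu+n-1}(x) by the forward recursion
    K_{nu+1}(x) = K_{nu-1}(x) + (2*nu/x)*K_nu(x)."""
    seq = [k_bessel(fnu, x), k_bessel(fnu + 1.0, x)]
    for i in range(2, n):
        nu = fnu + i - 1.0
        seq.append(seq[-2] + (2.0 * nu / x) * seq[-1])
    return seq[:n]
```

Forward recursion is safe here because the K functions grow with order; the same recursion run forward for the I functions would be unstable, which is why the I routines recur downward instead.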
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Implement forward recursion on the three term recursion relation for a sequence of non-negative order Bessel functions Y/SUB(FNU+I-1)/(X), I=1,...,N for real, positive X and non-negative orders FNU.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Compute the complete Beta function.
Calculate the incomplete Beta function.
Evaluate the Bairy function (the Airy function of the second kind).
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Compute the binomial coefficients.
Evaluate (Z+0.5)*LOG((Z+1.)/Z) - 1.0 with relative accuracy.
Compute the log gamma correction factor so that LOG(CGAMMA(Z)) = 0.5*LOG(2.*PI) + (Z-0.5)*LOG(Z) - Z + C9LGMC(Z).
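The same identity holds for real arguments (SLATEC's R9LGMC/D9LGMC), and the correction is just the remainder of Stirling's formula, which behaves like 1/(12*X) for large X. A quick check against the standard library's log-gamma (illustrative only; `lgamma_correction` is a hypothetical name):

```python
import math

def lgamma_correction(x):
    """Stirling remainder: LOG(GAMMA(X)) minus its leading terms
    LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X.  Approaches 1/(12*X)."""
    return math.lgamma(x) - (0.5 * math.log(2.0 * math.pi)
                             + (x - 0.5) * math.log(x) - x)
```

For example, `lgamma_correction(10.0)` is close to 1/120, matching the leading term of the correction series.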
Compute the complete Beta function.
Compute the complete Gamma function.
Compute the reciprocal of the Gamma function.
Compute the logarithmic confluent hypergeometric function.
Compute the natural logarithm of the complete Beta function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the Psi (or Digamma) function.
Evaluate a Chebyshev series.
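Chebyshev series are evaluated by the Clenshaw recurrence rather than by summing T_k(x) terms directly. A minimal sketch following the SLATEC-style convention in which the first coefficient is stored doubled (conventions vary between libraries):

```python
def csevl(x, cs):
    """Evaluate a Chebyshev series at x in [-1, 1] by the Clenshaw
    recurrence.  Returns cs[0]/2 + sum_{k>=1} cs[k]*T_k(x)."""
    b0 = b1 = b2 = 0.0
    for c in reversed(cs):
        b2, b1 = b1, b0
        b0 = 2.0 * x * b1 - b2 + c
    return 0.5 * (b0 - b2)
```

With cs = [2.0, 0.0, 1.0] the series is 1 + T_2(x) = 2*x**2, so `csevl(0.5, cs)` returns 0.5.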
Evaluate the Airy modulus and phase.
Evaluate the modulus and phase for the J0 and Y0 Bessel functions.
Evaluate the modulus and phase for the J1 and Y1 Bessel functions.
Evaluate, for large Z, Z**A * U(A,B,Z), where U is the logarithmic confluent hypergeometric function.
Compute the complementary incomplete Gamma function for A near a negative integer and X small.
Compute Tricomi's incomplete Gamma function for small arguments.
Compute Bessel functions EXP(X)*K-SUB-XNU(X) and EXP(X)*K-SUB-XNU+1(X) for 0.0 .LE. XNU .LT. 1.0.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Compute the log Gamma correction factor so that LOG(DGAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + D9LGMC(X).
Evaluate the Airy function.
Calculate the Airy function for a negative argument and an exponentially scaled Airy function for a non-negative argument.
Compute Dawson's function.
Compute an N member sequence of I Bessel functions I/SUB(ALPHA+K-1)/(X), K=1,...,N or scaled Bessel functions EXP(-X)*I/SUB(ALPHA+K-1)/(X), K=1,...,N for nonnegative ALPHA and X.
Compute the hyperbolic Bessel function of the first kind of order zero.
Compute the modified (hyperbolic) Bessel function of the first kind of order one.
Compute an N member sequence of J Bessel functions J/SUB(ALPHA+K-1)/(X), K=1,...,N for non-negative ALPHA and X.
Compute the Bessel function of the first kind of order zero.
Compute the Bessel function of the first kind of order one.
Implement forward recursion on the three term recursion relation for a sequence of non-negative order Bessel functions K/SUB(FNU+I-1)/(X), or scaled Bessel functions EXP(X)*K/SUB(FNU+I-1)/(X), I=1,...,N for real, positive X and non-negative orders FNU.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Implement forward recursion on the three term recursion relation for a sequence of non-negative order Bessel functions Y/SUB(FNU+I-1)/(X), I=1,...,N for real, positive X and non-negative orders FNU.
Compute the Bessel function of the second kind of order zero.
Compute the Bessel function of the second kind of order one.
Compute the complete Beta function.
Calculate the incomplete Beta function.
Evaluate the Bairy function (the Airy function of the second kind).
Calculate the Bairy function for a negative argument and an exponentially scaled Bairy function for a non-negative argument.
Compute the binomial coefficients.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the first kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute the logarithmic confluent hypergeometric function.
Evaluate a Chebyshev series.
Compute Dawson's function.
Compute the exponential integral E1(X).
Compute the exponential integral Ei(X).
Compute the error function.
Compute the complementary error function.
Compute an M member sequence of exponential integrals E(N+K,X), K=0,1,...,M-1 for N .GE. 1 and X .GE. 0.
Compute the factorial function.
Evaluate the incomplete Gamma function.
Calculate the complementary incomplete Gamma function.
Calculate Tricomi's form of the incomplete Gamma function.
Compute the minimum and maximum bounds for the argument in the Gamma function.
Compute the complete Gamma function.
Compute the reciprocal of the Gamma function.
Compute the natural logarithm of the complete Beta function.
Compute the logarithm of the absolute value of the Gamma function.
Compute the logarithmic integral.
Compute the logarithm of the absolute value of the Gamma function.
Evaluate a generalization of Pochhammer's symbol.
Calculate a generalization of Pochhammer's symbol starting from first order.
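The generalized Pochhammer symbol is POCH(A,X) = GAMMA(A+X)/GAMMA(A). For positive arguments this is a one-liner via log-gamma, which avoids overflow in the intermediate Gamma values (the SLATEC routines also handle negative and near-integer arguments, which this sketch does not):

```python
import math

def poch(a, x):
    """Pochhammer's generalized symbol GAMMA(A+X)/GAMMA(A),
    for positive A and A+X only, computed in log space."""
    return math.exp(math.lgamma(a + x) - math.lgamma(a))
```

For integer x this reduces to the rising factorial: `poch(2.0, 3.0)` is 2*3*4 = 24.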
Compute the Psi (or Digamma) function.
Compute a form of Spence's integral due to K. Mitchell.
Compute the exponential integral E1(X).
Compute the exponential integral Ei(X).
Compute the error function.
Compute the complementary error function.
Compute an M member sequence of exponential integrals E(N+K,X), K=0,1,...,M-1 for N .GE. 1 and X .GE. 0.
Compute the factorial function.
Documentation for FNLIB, a collection of routines for evaluating elementary and special functions.
Evaluate the incomplete Gamma function.
Calculate the complementary incomplete Gamma function.
Calculate Tricomi's form of the incomplete Gamma function.
Compute the minimum and maximum bounds for the argument in the Gamma function.
Compute the complete Gamma function.
Compute the reciprocal of the Gamma function.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Determine the number of terms needed in an orthogonal polynomial series so that it meets a specified accuracy.
Compute the Psi (or Digamma) function.
Evaluate a generalization of Pochhammer's symbol.
Calculate a generalization of Pochhammer's symbol starting from first order.
Evaluate the Airy modulus and phase.
Evaluate, for large Z, Z**A * U(A,B,Z), where U is the logarithmic confluent hypergeometric function.
Compute the complementary incomplete Gamma function for A near a negative integer and for small X.
Compute Tricomi's incomplete Gamma function for small arguments.
Generate a uniformly distributed random number.
Compute Bessel functions EXP(X)*K-SUB-XNU(X) and EXP(X)*K-SUB-XNU+1(X) for 0.0 .LE. XNU .LT. 1.0.
Compute the log complementary incomplete Gamma function for large X and for A .LE. X.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Compute the log Gamma correction factor so that LOG(GAMMA(X)) = LOG(SQRT(2*PI)) + (X-.5)*LOG(X) - X + R9LGMC(X).
Generate a normally distributed (Gaussian) random number.
Generate a uniformly distributed random number.
Compute a form of Spence's integral due to K. Mitchell.
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I= Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X)=SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), where W shows a singular behaviour at the end points (see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), where W shows a singular behaviour at the end points (see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a Cauchy principal value I = Integral of F*W over (A,B) (W(X) = 1/(X-C), C.NE.A, C.NE.B), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
Calculate an approximation to a given definite integral I = Integral of F(X)*W(X) over (A,B), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), where W shows a singular behaviour at the end points (see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given definite integral I = Integral of F*W over (A,B), where W shows a singular behaviour at the end points (see parameter INTEGR), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.MAX(EPSABS,EPSREL*ABS(I)).
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
The routine calculates an approximation result to a given Fourier integral I = Integral of F(X)*W(X) over (A,INFINITY), where W(X) = COS(OMEGA*X) or W(X) = SIN(OMEGA*X), hopefully satisfying the following claim for accuracy ABS(I-RESULT).LE.EPSABS.
Compute a form of Spence's integral due to K. Mitchell.
Compute a form of Spence's integral due to K. Mitchell.
Solve the standard five-point finite difference approximation on a staggered grid to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve the standard five-point finite difference approximation on a staggered grid to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Solve a finite difference approximation to the modified Helmholtz equation in spherical coordinates assuming axisymmetry (no dependence on longitude).
Solve a finite difference approximation to the Helmholtz equation in spherical coordinates and on the surface of the unit sphere (radius of 1).
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
Set derivatives needed to determine the Hermite representation of the cubic spline interpolant to given data, with specified boundary conditions.
Documentation for BSPLINE, a package of subprograms for working with piecewise polynomial functions in B-representation.
Calculate the value of the spline and its derivatives from the B-representation.
Calculate the value of the spline and its derivatives from the B-representation.
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X where XT(*) is a subdivision of the X interval.
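This knot search is a plain binary search. A Python sketch using the standard library (illustrative; the SLATEC routine also caches the previously found interval so that repeated nearby lookups are cheap, which this sketch omits):

```python
from bisect import bisect_right

def ileft(xt, x):
    """Largest 1-based index ILEFT with XT(ILEFT) .LE. X, for a
    nondecreasing knot sequence xt.  Returns 0 if x is below xt[0];
    callers must still clamp the result into 1..LXT as the routine
    description requires."""
    return bisect_right(xt, x)
```

`bisect_right` returns the count of elements not exceeding x, which is exactly the 1-based ILEFT.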
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline, (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
Compute the largest integer ILEFT in 1 .LE. ILEFT .LE. LXT such that XT(ILEFT) .LE. X where XT(*) is a subdivision of the X interval.
Compute the integral on (X1,X2) of a product of a function F and the ID-th derivative of a B-spline, (PP-representation).
Compute the integral on (X1,X2) of a K-th order B-spline using the piecewise polynomial (PP) representation.
Calculate the value of the IDERIV-th derivative of the B-spline from the PP-representation.
The function of CDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. CDRIV1 allows complex-valued differential equations.
The function of CDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. CDRIV2 allows complex-valued differential equations.
The function of CDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. CDRIV3 allows complex-valued differential equations.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
The function of DDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. DDRIV1 uses double precision arithmetic.
The function of DDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. DDRIV2 uses double precision arithmetic.
The function of DDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. DDRIV3 uses double precision arithmetic.
Solve an initial value problem in ordinary differential equations using backward differentiation formulas. It is intended primarily for stiff problems.
The function of SDRIV1 is to solve N (200 or fewer) ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. SDRIV1 uses single precision arithmetic.
The function of SDRIV2 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. SDRIV2 uses single precision arithmetic.
The function of SDRIV3 is to solve N ordinary differential equations of the form dY(I)/dT = F(Y(I),T), given the initial conditions Y(I) = YI. The program has options to allow the solution of both stiff and non-stiff differential equations. Other important options are available. SDRIV3 uses single precision arithmetic.
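The backward differentiation formulas mentioned above are implicit methods; the simplest member, backward Euler (BDF1), already shows why they suit stiff problems: the step size is limited by accuracy, not stability. A hedged sketch on the scalar stiff test equation y' = lam*y (the function name is hypothetical):

```python
def backward_euler(lam, y0, h, steps):
    """BDF1 on y' = lam*y.  Each step solves the implicit equation
    y_new = y_old + h*lam*y_new, i.e. y_new = y_old / (1 - h*lam)."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 - h * lam)
    return y

# With lam = -1000, explicit Euler is stable only for h < 2/1000.
# Backward Euler with h = 0.1 (50x larger) still decays like the
# true solution instead of blowing up.
y = backward_euler(-1000.0, 1.0, 0.1, 10)
```

For general systems each step requires solving a (possibly nonlinear) algebraic system, which is why the stiff drivers above ask the user for Jacobian information.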
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Stop Test. This routine calculates the stop test for the BiConjugate Gradient iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Conjugate Gradient Stop Test. This routine calculates the stop test for the Conjugate Gradient iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned BiConjugate Gradient Squared Stop Test. This routine calculates the stop test for the BiConjugate Gradient Squared iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Generalized Minimum Residual Stop Test. This routine calculates the stop test for the Generalized Minimum RESidual (GMRES) iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Iterative Refinement Stop Test. This routine calculates the stop test for the iterative refinement iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
Preconditioned Orthomin Stop Test. This routine calculates the stop test for the Orthomin iteration scheme. It returns a non-zero value if the error estimate (the type of which is determined by ITOL) is less than the user-specified tolerance TOL.
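One common form of the error estimate these stop tests compute is the residual norm relative to the right-hand side (the ITOL parameter selects among several such norms in SLAP). A dense illustrative sketch of that single choice (the function name is hypothetical):

```python
def is_stop(A, x, b, tol):
    """Relative-residual stop test: non-zero when
    ||b - A*x||_2 <= tol * ||b||_2."""
    n = len(b)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    norm = lambda v: sum(vi * vi for vi in v) ** 0.5
    return 1 if norm(r) <= tol * norm(b) else 0
```

Other ITOL settings replace the residual by a preconditioned residual or by an estimate of the error in x itself.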
Return the permutation vector generated by sorting a substring within a character array and, optionally, rearrange the elements of the array. The array may be sorted in forward or reverse lexicographical order. A slightly modified quicksort algorithm is used.
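Producing a permutation vector rather than moving the data is an argsort. A Python sketch of the same idea, keyed on a substring of each element (illustrative names; note Python's `sorted` uses a stable mergesort-based algorithm rather than the quicksort the entry mentions):

```python
def sort_permutation(arr, lo=0, hi=None, reverse=False):
    """Permutation that sorts a list of strings by the substring
    [lo:hi] of each element, leaving the list itself untouched."""
    return sorted(range(len(arr)),
                  key=lambda i: arr[i][lo:hi], reverse=reverse)

names = ["delta", "alpha", "echo", "bravo"]
perm = sort_permutation(names)
# Optionally rearrange the array by applying the permutation:
ordered = [names[i] for i in perm]
```

The permutation can later be applied to parallel arrays, which is the usual reason for returning it instead of sorting in place.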
Compute the sum of the magnitudes of the elements of a vector.
Compute the sum of the magnitudes of the elements of a vector.
Compute the sum of the magnitudes of the real and imaginary elements of a complex vector.
Documentation for QUADPACK, a package of subprograms for automatic evaluation of one-dimensional definite integrals.
Compute the eigenvalues and, optionally, the eigenvectors of a complex Hermitian matrix.
Solve a positive definite symmetric complex system of linear equations.
Solve a positive definite Hermitian system of linear equations. Iterative refinement is used to obtain an error estimate.
Factor a complex symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex symmetric matrix using the factors from CSIFA.
Factor a complex symmetric matrix by elimination with symmetric pivoting.
Solve a complex symmetric system using the factors obtained from CSIFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant and inverse of a complex symmetric matrix stored in packed form using the factors from CSPFA.
Factor a complex symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a complex symmetric system using the factors obtained from CSPFA.
Solve a positive definite symmetric system of linear equations.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from DSIFA.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from SSIFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from DSPFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from DSPFA.
Solve a positive definite symmetric system of linear equations.
Solve a positive definite symmetric system of linear equations. Iterative refinement is used to obtain an error estimate.
Factor a symmetric matrix by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia and inverse of a real symmetric matrix using the factors from SSIFA.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix.
Factor a real symmetric matrix by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from SSIFA.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting and estimate the condition number of the matrix.
Compute the determinant, inertia, inverse of a real symmetric matrix stored in packed form using the factors from SSPFA.
Compute the eigenvalues and, optionally, the eigenvectors of a real symmetric matrix stored in packed form.
Factor a real symmetric matrix stored in packed form by elimination with symmetric pivoting.
Solve a real symmetric system using the factors obtained from SSPFA.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
Preconditioned Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method.
Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the Preconditioned Conjugate Gradient method. The preconditioner is diagonal scaling.
Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver. Routine to solve a symmetric positive definite linear system Ax = b using the incomplete Cholesky Preconditioned Conjugate Gradient method.
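The diagonally scaled variant listed above is preconditioned CG with M = diag(A). A dense toy sketch in Python for a small SPD system (illustrative only; the SLAP routines work on sparse storage and expose the ITOL stop tests, which this sketch replaces with a plain residual check):

```python
def dsdcg(A, b, tol=1e-10, maxit=200):
    """Conjugate Gradient with diagonal (Jacobi) preconditioning
    for a dense symmetric positive definite A."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda v: [dot(row, v) for row in A]
    minv = [1.0 / A[i][i] for i in range(n)]   # M**(-1) = diag(A)**(-1)
    x = [0.0] * n
    r = b[:]                                   # residual of x = 0
    z = [minv[i] * r[i] for i in range(n)]     # z = M**(-1) r
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r, r) ** 0.5 <= tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

Swapping `minv` for an incomplete Cholesky solve gives the ICCG variants; the CG skeleton itself is unchanged.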
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and ' denotes transpose.
SLAP Backsolve routine for LDL' Factorization. Routine to solve a system of the form L*D*L' X = B, where L is a unit lower triangular matrix, D is a diagonal matrix, and ' denotes transpose.
Integrate a function tabulated at arbitrarily spaced abscissas using overlapping parabolas.
Integrate a function tabulated at arbitrarily spaced abscissas using overlapping parabolas.
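The overlapping-parabolas idea: fit a parabola through each consecutive triple of points, integrate it over the middle interval, and average the two parabolas that overlap each interior interval. A hedged Python sketch (illustrative, not the library routine; it requires at least three strictly increasing abscissas):

```python
def avint(x, f):
    """Integrate tabulated data by overlapping interpolating parabolas."""
    def parab_piece(i0, i1, i2, a, b):
        # Newton form of the parabola through points i0, i1, i2,
        # integrated exactly from a to b.
        x0, x1, x2 = x[i0], x[i1], x[i2]
        d1 = (f[i1] - f[i0]) / (x1 - x0)
        d2 = ((f[i2] - f[i1]) / (x2 - x1) - d1) / (x2 - x0)
        def F(t):  # antiderivative of f0 + d1*(t-x0) + d2*(t-x0)*(t-x1)
            return (f[i0] * t + d1 * (t - x0) ** 2 / 2.0
                    + d2 * (t ** 3 / 3.0 - (x0 + x1) * t ** 2 / 2.0
                            + x0 * x1 * t))
        return F(b) - F(a)

    n, total = len(x), 0.0
    for i in range(n - 1):
        pieces = []
        if i >= 1:                               # parabola ending here
            pieces.append(parab_piece(i - 1, i, i + 1, x[i], x[i + 1]))
        if i + 2 < n:                            # parabola starting here
            pieces.append(parab_piece(i, i + 1, i + 2, x[i], x[i + 1]))
        total += sum(pieces) / len(pieces)
    return total
```

Because each parabola interpolates the data exactly, the rule is exact for quadratics on arbitrarily spaced abscissas.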
Compute the complex tangent.
Calculate a double precision approximation to DRC(X,Y) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, DRD(X,Y,Z) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z non-negative and at most one of them zero, RF(X,Y,Z) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z non-negative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) * (t+P)**(-1) dt.
Calculate an approximation to RC(X,Y) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1) dt, where X is nonnegative and Y is positive.
Compute the incomplete or complete elliptic integral of the 2nd kind. For X and Y nonnegative, X+Y and Z positive, RD(X,Y,Z) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-3/2) dt. If X or Y is zero, the integral is complete.
Compute the incomplete or complete elliptic integral of the 1st kind. For X, Y, and Z non-negative and at most one of them zero, RF(X,Y,Z) = (1/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) dt. If X, Y or Z is zero, the integral is complete.
Compute the incomplete or complete (X or Y or Z is zero) elliptic integral of the 3rd kind. For X, Y, and Z non-negative, at most one of them zero, and P positive, RJ(X,Y,Z,P) = (3/2) * Integral from zero to infinity of (t+X)**(-1/2) * (t+Y)**(-1/2) * (t+Z)**(-1/2) * (t+P)**(-1) dt.
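Carlson's duplication theorem gives a compact algorithm for these symmetric integrals. A hedged Python sketch for R_F (the library versions add careful argument checking and rigorous error bounds; `rf` here is illustrative):

```python
import math

def rf(x, y, z, tol=1e-10):
    """Carlson's R_F(x,y,z) by the duplication theorem.

    R_F(x,y,z) = (1/2) * Integral_0^inf of
                 (t+x)**(-1/2) * (t+y)**(-1/2) * (t+z)**(-1/2) dt,
    for x, y, z nonnegative with at most one of them zero.
    """
    while True:
        lam = (math.sqrt(x) * math.sqrt(y) + math.sqrt(y) * math.sqrt(z)
               + math.sqrt(z) * math.sqrt(x))
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        dx, dy, dz = 1 - x / mu, 1 - y / mu, 1 - z / mu
        if max(abs(dx), abs(dy), abs(dz)) < tol:
            break
    # fifth-order Taylor correction in the small deviations
    e2 = dx * dy - dz * dz
    e3 = dx * dy * dz
    return (1 + e2 * (-1/10 + e2/24 - 3*e3/44) + e3/14) / math.sqrt(mu)
```

Each duplication step shrinks the deviations by roughly a factor of four, so convergence is fast even from widely separated arguments.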
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute the modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of modified Bessel functions of the third kind of fractional order.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order zero.
Compute the exponentially scaled modified (hyperbolic) Bessel function of the third kind of order one.
Compute a sequence of exponentially scaled modified Bessel functions of the third kind of fractional order.
The routine calculates an approximation RESULT to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
The routine calculates an approximation RESULT to a given integral I = Integral of F over (BOUND,+INFINITY), or I = Integral of F over (-INFINITY,BOUND), or I = Integral of F over (-INFINITY,+INFINITY), hopefully satisfying the following claim for accuracy: ABS(I-RESULT) .LE. MAX(EPSABS, EPSREL*ABS(I)).
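The infinite range is handled by a change of variable onto (0,1). A hedged sketch of that mapping for the (BOUND,+INFINITY) case, using a plain midpoint rule in place of the library's adaptive Gauss-Kronrod machinery (`integrate_semi_infinite` is illustrative only):

```python
def integrate_semi_infinite(f, a, n=2000):
    """Integral of f over (a, +infinity) via the substitution
    x = a + (1-t)/t, which maps t in (0,1] onto [a, +infinity)
    with dx = dt/t**2; then a composite midpoint rule on (0,1).
    """
    total = 0.0
    h = 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h          # midpoints avoid the endpoint t = 0
        x = a + (1 - t) / t
        total += f(x) / (t * t) * h
    return total
```

The transformed integrand is evaluated only at interior points, so the singular endpoint t = 0 (corresponding to x = +infinity) is never touched, exactly as in the library's open rules.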
Compute a constant times a vector plus a vector.
Compute a constant times a vector plus a vector.
Compute a constant times a vector plus a vector.
Compute the determinant and inverse of a triangular matrix.
Solve a system of the form T*X=B or CTRANS(T)*X=B, where T is a triangular matrix. Here CTRANS(T) is the conjugate transpose.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Solve a system of the form T*X=B or CTRANS(T)*X=B, where T is a triangular matrix. Here CTRANS(T) is the conjugate transpose.
Estimate the condition number of a triangular matrix.
Compute the determinant and inverse of a triangular matrix.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
Estimate the condition number of a triangular matrix.
Solve a system of the form T*X=B or TRANS(T)*X=B, where T is a triangular matrix.
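The triangular solves above reduce to forward or back substitution. A dense Python sketch for a lower triangular T (the library routines select upper/lower and transpose cases via a JOB flag; `trsl` and its interface are illustrative):

```python
import numpy as np

def trsl(T, b, trans=False):
    """Solve T*x = b (forward substitution) or TRANS(T)*x = b
    (back substitution) for a nonsingular lower triangular T."""
    n = len(b)
    x = np.array(b, dtype=float)
    if not trans:
        for i in range(n):
            x[i] = (x[i] - T[i, :i] @ x[:i]) / T[i, i]
    else:
        for i in range(n - 1, -1, -1):
            x[i] = (x[i] - T[i+1:, i] @ x[i+1:]) / T[i, i]
    return x
```
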
Compute Tricomi's incomplete Gamma function for small arguments.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Calculate Tricomi's form of the incomplete Gamma function.
Calculate Tricomi's form of the incomplete Gamma function.
Compute Tricomi's incomplete Gamma function for small arguments.
Compute the logarithm of Tricomi's incomplete Gamma function with Perron's continued fraction for large X and A .GE. X.
Solve a tridiagonal linear system.
Solve a positive definite tridiagonal linear system.
Solve a tridiagonal linear system.
Solve a positive definite tridiagonal linear system.
Solve, by a cyclic reduction algorithm, the linear system of equations that results from a finite difference approximation to certain 2-D elliptic PDEs on a centered grid.
Solve a block tridiagonal system of linear equations that results from a staggered grid finite difference approximation to 2-D elliptic PDEs.
Solve a tridiagonal linear system.
Solve a positive definite tridiagonal linear system.
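For the scalar tridiagonal solvers above, the classical elimination (Thomas algorithm) is easy to sketch. The library routines pivot for stability; this sketch does not, so it assumes a well-conditioned (e.g. diagonally dominant) matrix, and the argument layout is illustrative:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with subdiagonal a (a[0] unused),
    main diagonal b, superdiagonal c (c[-1] unused), right side d."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    x = [0.0] * n                          # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x
```

The cost is O(n), which is why tridiagonal (and block tridiagonal) structure is exploited rather than handed to a general solver.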
Solve a block tridiagonal system of linear equations (usually resulting from the discretization of separable two-dimensional elliptic equations).
Solve a block tridiagonal system of linear equations (usually resulting from the discretization of separable two-dimensional elliptic equations).
Solve a complex block tridiagonal linear system of equations by a cyclic reduction algorithm.
Compute the complex arc cosine.
Compute the complex arc sine.
Compute the complex arc tangent.
Compute the complex arc tangent in the proper quadrant.
Compute the cotangent.
Compute the cosine of an argument in degrees.
Compute the cotangent.
Compute the complex tangent.
Evaluate DATAN(X) to first order relative accuracy, so that DATAN(X) = X + X**3*D9ATN1(X).
Compute the cosine of an argument in degrees.
Compute the cotangent.
Compute the sine of an argument in degrees.
Evaluate ATAN(X) to first order relative accuracy, so that ATAN(X) = X + X**3*R9ATN1(X).
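The point of these kernels is to avoid the catastrophic cancellation in ATAN(X) - X for small X. A hedged Python sketch using the Taylor series directly (the library uses a Chebyshev expansion; `r9atn1` is illustrative):

```python
import math

def r9atn1(x):
    """(atan(x) - x) / x**3 without cancellation for small x,
    so that atan(x) == x + x**3 * r9atn1(x)."""
    if abs(x) > 0.5:                 # cancellation no longer severe
        return (math.atan(x) - x) / x**3
    # atan(x) - x = sum over k >= 1 of (-1)**k * x**(2k+1) / (2k+1)
    x2 = x * x
    s, p, k, sign = 0.0, 1.0, 1, -1.0
    while True:
        term = sign * p / (2 * k + 1)
        s += term
        if abs(term) < 1e-17:
            return s
        p *= x2
        sign = -sign
        k += 1
```

Note r9atn1(0) = -1/3, the leading Taylor coefficient of atan(x) - x divided by x**3.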
Compute the sine of an argument in degrees.
Solve a linear two-point boundary value problem using superposition coupled with an orthonormalization procedure and a variable-step integration scheme.
Solve a linear two-point boundary value problem using superposition coupled with an orthonormalization procedure and a variable-step integration scheme.
Solve an underdetermined linear system of equations by performing an LQ factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve an underdetermined linear system of equations by performing an LQ factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve a linear least squares problem by performing a QR factorization of the input matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Solve a linear least squares problem by performing a QR factorization of the matrix using Householder transformations. Emphasis is put on detecting possible rank deficiency.
Generate a uniformly distributed random number.
Generate a uniformly distributed random number.
Compute the Euclidean length (L2 norm) of a vector.
Compute the unitary norm of a complex vector.
Compute the Euclidean length (L2 norm) of a vector.
Unpack a floating point number X so that X = Y*2**N.
Unpack a floating point number X so that X = Y*2**N.
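Python's standard library performs the same unpacking, which makes the contract easy to illustrate; note the exact normalization range of Y in the SLATEC routines may differ from `math.frexp`'s convention of 0.5 <= |Y| < 1 for nonzero X:

```python
import math

# Unpack X = 12.0 into Y and N with X = Y * 2**N.
y, n = math.frexp(12.0)
# 12.0 = 0.75 * 2**4, so y is 0.75 and n is 4.
assert 12.0 == y * 2**n
```
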
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Update an augmented Cholesky decomposition of the triangular part of an augmented QR decomposition.
Check a cubic Hermite function for monotonicity.
Check a cubic Hermite function for monotonicity.
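The monotonicity test can be illustrated with the well-known Fritsch-Carlson sufficient condition on one cubic Hermite piece; the SLATEC checkers implement the exact necessary-and-sufficient test, so this Python sketch is conservative and its name is illustrative:

```python
def hermite_piece_monotone(f1, f2, d1, d2, h):
    """Sufficient test for monotonicity of a cubic Hermite piece with
    endpoint values f1, f2, endpoint slopes d1, d2, and width h > 0.

    Checks the Fritsch-Carlson condition: both slope ratios must be
    nonnegative and satisfy alpha**2 + beta**2 <= 9.  The exact test
    admits slightly more curves than this one.
    """
    delta = (f2 - f1) / h            # secant slope of the piece
    if delta == 0.0:
        return d1 == 0.0 and d2 == 0.0
    alpha, beta = d1 / delta, d2 / delta
    return alpha >= 0 and beta >= 0 and alpha**2 + beta**2 <= 9.0
```
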
Compute a constant times a vector plus a vector.
Copy a vector.
Compute the inner product of two vectors with extended precision accumulation.
Dot product of two complex vectors using the complex conjugate of the first vector.
Compute the inner product of two vectors.
Construct a Givens transformation.
Multiply a vector by a constant.
Apply a plane Givens rotation.
Scale a complex vector.
Interchange two vectors.
Compute a constant times a vector plus a vector.
Compute the inner product of two vectors with extended precision accumulation and result.
Copy a vector.
Copy the negative of a vector to a vector.
Compute the inner product of two vectors.
Compute the Euclidean length (L2 norm) of a vector.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
Multiply a vector by a constant.
Compute the inner product of two vectors with extended precision accumulation and result.
Interchange two vectors.
Find the smallest index of the component of a complex vector having the maximum sum of magnitudes of real and imaginary parts.
Copy a vector.
Find the smallest index of that component of a vector having the maximum magnitude.
Find the smallest index of that component of a vector having the maximum magnitude.
Interchange two vectors.
Compute a constant times a vector plus a vector.
Compute the unitary norm of a complex vector.
Copy a vector.
Copy the negative of a vector to a vector.
Compute the inner product of two vectors.
Compute the inner product of two vectors with extended precision accumulation.
Compute the Euclidean length (L2 norm) of a vector.
Apply a plane Givens rotation.
Construct a plane Givens rotation.
Apply a modified Givens transformation.
Construct a modified Givens transformation.
Multiply a vector by a constant.
Interchange two vectors.
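Several entries above construct and apply plane Givens rotations. A minimal Python sketch of the idea; the actual BLAS-level routines additionally rescale to avoid overflow and return a reconstruction parameter z, so this is not the library interface:

```python
import math

def rotg(a, b):
    """Construct a plane Givens rotation (c, s) with r = hypot(a, b):
        [ c  s] [a]   [r]
        [-s  c] [b] = [0]
    """
    r = math.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0, 0.0
    return a / r, b / r, r

def rot(x, y, c, s):
    """Apply the rotation to vectors x and y in place."""
    for i in range(len(x)):
        x[i], y[i] = c * x[i] + s * y[i], c * y[i] - s * x[i]
```

Givens rotations zero one element at a time, which is why they appear throughout the QR, least squares, and downdating routines in this index.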
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3) (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2) (upper row; lower row) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6} (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3) (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2) (upper row; lower row) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6} (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
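The "allowed values" in these entries are fixed by the triangle and projection selection rules. A hedged, integer-only Python sketch of the L1 range (the actual routines also handle half-integer arguments and compute the symbol values themselves by recursion; `allowed_l1` is illustrative):

```python
def allowed_l1(l2, l3, m2, m3):
    """Values of L1 for which the 3j symbol
    (L1 L2 L3; -M2-M3 M2 M3) can be nonzero:
    the triangle rule |L2-L3| <= L1 <= L2+L3, tightened by the
    projection constraint |M2+M3| <= L1.  Integer arguments only.
    """
    lo = max(abs(l2 - l3), abs(m2 + m3))
    hi = l2 + l3
    return list(range(lo, hi + 1))
```
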
SLATEC Common Mathematical Library disclaimer and version.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
This function subprogram is used together with the routine DQAWC and defines the WEIGHT function.
This function subprogram is used together with the routine DQAWS and defines the WEIGHT function.
This function subprogram is used together with the routine QAWC and defines the WEIGHT function.
This function subprogram is used together with the routine QAWS and defines the WEIGHT function.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense.
Fit a piecewise polynomial curve to discrete data. The piecewise polynomials are represented as B-splines. The fitting is done in a weighted least squares sense. Equality and inequality constraints can be imposed on the fitted curve.
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3) (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2) (upper row; lower row) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6} (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol f(L1) = (L1 L2 L3; -M2-M3 M2 M3) (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
Evaluate the 3j symbol g(M2) = (L1 L2 L3; M1 M2 -M1-M2) (upper row; lower row) for all allowed values of M2, the other parameters being held fixed.
Evaluate the 6j symbol h(L1) = {L1 L2 L3; L4 L5 L6} (upper row; lower row) for all allowed values of L1, the other parameters being held fixed.
SLAP WORK/IWORK Array Bounds Checker. This routine checks the work array lengths and interfaces to the SLATEC error handler if a problem is found.
SLAP WORK/IWORK Array Bounds Checker. This routine checks the work array lengths and interfaces to the SLATEC error handler if a problem is found.
Symbolic dump (should be locally written).
Save or recall global variables needed by error handling routines.
Return the most recent error number.
Reset current error number to zero.
Allow user control over handling of errors.
Print the error tables and then clear them.
Abort program execution and print error message.
Set maximum number of times any error message is to be printed.
Process error messages for SLATEC and other libraries.
Print error messages processed by XERMSG.
Record that an error has occurred.
Return the current value of the error control flag.
Return unit number(s) to which error messages are being sent.
Return the (first) output file to which error messages are being sent.
Set the error control flag.
Set logical unit numbers (up to 5) to which error messages are to be sent.
Set output file to which error messages are to be sent.
Implement forward recursion on the three-term recursion relation for a sequence of non-negative order Bessel functions Y(FNU+I-1, X), I=1,...,N, for real, positive X and non-negative orders FNU.
Implement forward recursion on the three-term recursion relation for a sequence of non-negative order Bessel functions Y(FNU+I-1, X), I=1,...,N, for real, positive X and non-negative orders FNU.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Compute a sequence of the Bessel functions Y(a,z) for complex argument z and real nonnegative orders a = b, b+1, b+2, ..., where b > 0. A scaling option is available to help avoid overflow.
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
Find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
Search for a zero of a function F(X) in a given interval (B,C). It is designed primarily for problems where F(B) and F(C) have opposite signs.
Find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
An easy-to-use code to find a zero of a system of N nonlinear functions in N variables by a modification of the Powell hybrid method.
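For the single-function zero finders above, the core idea can be sketched with plain bisection; the library routine mixes bisection with faster secant steps, so `fzero` here is an illustrative stand-in, not the SLATEC interface:

```python
def fzero(f, b, c, tol=1e-12, max_iter=200):
    """Find a zero of f in (b, c), assuming f(b) and f(c) have
    opposite signs, by repeated interval halving."""
    fb, fc = f(b), f(c)
    if fb * fc > 0:
        raise ValueError("f(b) and f(c) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (b + c)
        fm = f(m)
        if fm == 0.0 or abs(c - b) < tol:
            return m
        if fb * fm < 0:          # the root lies in (b, m)
            c, fc = m, fm
        else:                    # the root lies in (m, c)
            b, fb = m, fm
    return 0.5 * (b + c)
```

Bisection gains one binary digit per iteration; combining it with secant or inverse-quadratic steps, as the library does, keeps that guarantee while converging much faster on smooth functions.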