HLIBpro  2.5.1
Solvers

# Solver Classes

𝖧𝖫𝖨𝖡𝗉𝗋𝗈 implements several solvers for linear systems of equations:

• TLinearIteration: standard linear iteration x_{i+1} = x_i + W ( b - A x_i ) with iteration (preconditioner) matrix W,
• TCG: conjugate gradient method [1],
• TCGS: conjugate gradient squared method [2],
• TBiCGStab: biconjugate gradient stabilized method [3],
• TTFQMR: transpose-free quasi-minimal residual method [4],
• TMINRES: minimal residual method for symmetric matrices [5],
• TGMRES: generalized minimal residual method [6].
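To illustrate what such a Krylov solver computes, the following is a minimal, self-contained sketch of the unpreconditioned conjugate gradient method for a small dense symmetric positive definite system. It is purely educational and is not the 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 implementation, which operates on TLinearOperator and TVector objects instead of plain arrays:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector< double >;
using Mat = std::vector< Vec >;

// dense matrix-vector product y = A·x
Vec matvec ( const Mat & A, const Vec & x )
{
    Vec y( x.size(), 0.0 );
    for ( std::size_t i = 0; i < A.size(); ++i )
        for ( std::size_t j = 0; j < x.size(); ++j )
            y[i] += A[i][j] * x[j];
    return y;
}

double dot ( const Vec & a, const Vec & b )
{
    double s = 0.0;
    for ( std::size_t i = 0; i < a.size(); ++i )
        s += a[i] * b[i];
    return s;
}

// conjugate gradient iteration for SPD A, starting from x = 0;
// returns the number of iteration steps performed
int cg ( const Mat & A, Vec & x, const Vec & b, int max_iter, double tol )
{
    Vec    r   = b;   // residual r = b - A·x for x = 0
    Vec    p   = r;   // initial search direction
    double rho = dot( r, r );

    for ( int it = 0; it < max_iter; ++it )
    {
        if ( std::sqrt( rho ) < tol )
            return it;

        const Vec    Ap    = matvec( A, p );
        const double alpha = rho / dot( p, Ap );  // step length

        for ( std::size_t i = 0; i < x.size(); ++i )
        {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }

        const double rho_new = dot( r, r );
        const double beta    = rho_new / rho;     // direction update factor

        for ( std::size_t i = 0; i < p.size(); ++i )
            p[i] = r[i] + beta * p[i];

        rho = rho_new;
    }

    return max_iter;
}
```

For an n×n SPD matrix, CG terminates (in exact arithmetic) after at most n steps, which is why the restart parameter of GMRES discussed below has no analogue here.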

All solvers have the same interface function for solving the given system:

void solve ( const TLinearOperator *  A,
             TVector *                x,
             const TVector *          b,
             const TLinearOperator *  W,
             TSolverInfo *            info ) const

Here, W is an optional operator used as a (left) preconditioner for solving the equation system. Furthermore, info is an optional object for storing information about the iteration process, e.g., whether it converged, the residual norm, or the convergence rate.

Assuming a given matrix object A and vectors x and b, the minimal solving code looks as follows:

TGMRES solver( 20 );
solver.solve( A, x, b );

Remarks
The parameter 20 for GMRES defines the restart length of the method, i.e., the dimension of the local Krylov subspace.

By default, the initial guess x is initialised to some solver-dependent value before the iteration. To disable this, you may use the function initialise_start_value:

TGMRES solver( 20 );
solver.initialise_start_value( false );
solver.solve( A, x, b );

## TAutoSolver

Since not all solvers are suitable for all matrices, e.g., some are not applicable to unsymmetric matrices, TAutoSolver automatically decides which solver to use for a given combination of matrix and preconditioner.

## Stop Criterion

The iteration process is controlled using objects of type TSolver::TStopCriterion. The following parameters may be used to define when the iteration stops:

• max_iter: maximal number of iteration steps,
• abs_res_reduct: absolute residual norm (often also called tolerance), i.e., stop if ‖rᵢ‖ ≤ abs_res_reduct,
• rel_res_reduct: relative residual reduction with respect to the initial residual, i.e., stop if ‖rᵢ‖ ≤ rel_res_reduct · ‖r₀‖,
• rel_res_growth: relative residual growth, i.e., stop if ‖rᵢ‖ ≥ rel_res_growth · ‖r₀‖ (in case of divergence).
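The residual-based criteria above combine into a single check per iteration step. The following self-contained sketch shows this logic; the struct and function names are chosen here for illustration and do not reproduce 𝖧𝖫𝖨𝖡𝗉𝗋𝗈's internal code (the default values are also only examples):

```cpp
// possible states of the iteration after a residual check
enum class IterState { iterate, converged, diverged };

// mirror of the TStopCriterion parameters (illustrative defaults)
struct StopCriterion
{
    int    max_iter       = 100;
    double abs_res_reduct = 1e-12;  // stop if ‖rᵢ‖ ≤ abs_res_reduct
    double rel_res_reduct = 1e-8;   // stop if ‖rᵢ‖ ≤ rel_res_reduct·‖r₀‖
    double rel_res_growth = 1e6;    // diverged if ‖rᵢ‖ ≥ rel_res_growth·‖r₀‖
};

// decide the iteration state from the initial and current residual norms;
// the max_iter bound is enforced separately by the iteration loop itself
IterState check_stop ( const StopCriterion & sc, double res0, double res )
{
    if ( res <= sc.abs_res_reduct )        return IterState::converged;
    if ( res <= sc.rel_res_reduct * res0 ) return IterState::converged;
    if ( res >= sc.rel_res_growth * res0 ) return IterState::diverged;

    return IterState::iterate;
}
```

Note that the absolute criterion fires independently of ‖r₀‖, which is useful when the initial residual is already tiny and a relative reduction would be unreachable in floating point.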

All parameters may be specified during construction of TStopCriterion objects:

TStopCriterion ( uint  max_iter,
                 real  abs_res_red,
                 real  rel_res_red,
                 real  rel_res_growth );

However, reasonable default values are set for all parameters.

## Solver Information

To obtain more information about the solution process, an object of type TSolverInfo has to be provided to the solve function. Afterwards, the following functions may be used to query the corresponding data:

• has_converged() : did the iteration converge,
• has_diverged() : did the iteration diverge (based on TStopCriterion),
• has_failed() : did the iteration fail, e.g., due to a breakdown,
• n_iter() : return the number of iteration steps,
• conv_rate() : return the convergence rate,
• res_norm() : return the norm of the residual after the iteration stopped.

During construction of TSolverInfo, one may also specify whether to store the complete iteration history and whether to print this data during the iteration:

TSolverInfo ( bool store_hist, bool print );
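A common definition of the convergence rate is the average residual reduction factor per step, ρ = (‖rₙ‖ / ‖r₀‖)^(1/n). Whether conv_rate() uses exactly this formula is an assumption here; the following self-contained sketch merely shows how such a rate can be computed from a stored residual history:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// average residual reduction factor per iteration step,
// computed as (‖rₙ‖ / ‖r₀‖)^(1/n) from a residual history
// res_hist = { ‖r₀‖, ‖r₁‖, …, ‖rₙ‖ }
double conv_rate ( const std::vector< double > & res_hist )
{
    const std::size_t n = res_hist.size();

    if ( n < 2 || res_hist.front() == 0.0 )
        return 0.0;  // no iteration performed or exact initial solution

    return std::pow( res_hist.back() / res_hist.front(),
                     1.0 / double( n - 1 ) );
}
```

A rate close to 1 signals slow convergence and is usually the cue to improve the preconditioner rather than to raise max_iter.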

# Preconditioning

Beside various solvers, 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 also implements different preconditioning techniques for the iteration methods.

## ℋ-Matrix based Preconditioning

Of course, the most important of these preconditioners is based on some form of matrix inversion using ℋ-arithmetic, e.g., ℋ-LU factorization, ℋ-WAZ factorization, or ℋ-inversion. Simply provide the corresponding linear operator object to the solve function:

TMatrix *  A      = ...;
auto       A_copy = A->copy();
TTruncAcc  acc    = ...;
auto       A_inv  = factorise_inv( A_copy.get(), acc );
TVector *  x = ...;
TVector *  b = ...;

TGMRES solver( 20 );
solver.solve( A, x, b, A_inv.get() );

Remarks
A copy of the original matrix is used since it is modified during the factorization.

## Classical Preconditioners

Beside ℋ-based methods, 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 also implements some classical preconditioners:

• TJacobi: point-wise and block-wise Jacobi method, i.e., W = D⁻¹ with D being the (block) diagonal of A.
• TGaussSeidel: point-wise and block-wise forward/backward/symmetric Gauss-Seidel method, i.e., using the lower or upper triangular part of A.
• TSOR: point-wise forward/backward/symmetric SOR method.

However, the point-wise Gauss-Seidel and SOR methods may only be used (and only make sense) with a sparse matrix (TSparseMatrix). For an ℋ-matrix, only the block-wise Gauss-Seidel method is available.

In case of Jacobi, both the point-wise and the block-wise version may be used for all matrix types. Furthermore, TJacobi also allows defining the size of the diagonal blocks.

The example above using a block-wise Gauss-Seidel preconditioner would look like:

TMatrix *  A = ...;
TVector *  x = ...;
TVector *  b = ...;

TGaussSeidel gs( A, GS_SYMMETRIC );
TGMRES       solver( 20 );

solver.solve( A, x, b, & gs );
Remarks
Note that, when used together with TLinearIteration, the resulting iteration method corresponds to the Jacobi, Gauss-Seidel, or SOR iteration, respectively. Without any preconditioner, TLinearIteration is identical to the Richardson iteration.

# Bibliography

1. M.R. Hestenes and E.L. Stiefel, "Methods of conjugate gradients for solving linear systems", J. Research Nat. Bur. Standards Section B, 49, pp. 409-432, 1952.
2. P. Sonneveld, "CGS, a fast Lanczos-type solver for nonsymmetric linear systems", SIAM J. Sci. Statist. Comput., 10, pp. 36-52, 1989.
3. H.A. van der Vorst, "Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., 13(2), pp. 631-644, 1992.
4. R.W. Freund, "A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems", SIAM J. Sci. Comput., 14, pp. 470-482, 1993.
5. C.C. Paige and M.A. Saunders, "Solution of sparse indefinite systems of linear equations", SIAM J. Numer. Anal., 12, pp. 617-629, 1975.
6. Y. Saad and M.H. Schultz, "GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., 7, pp. 856-869, 1986.