HLIBpro 3.1
Solvers

Solver Classes

𝖧𝖫𝖨𝖡𝗉𝗋𝗈 implements several solvers for linear equation systems \(Ax = b\):

Solver             Description
TLinearIteration   standard linear iteration \(x_{i+1} \leftarrow x_i + N ( b - A x_i )\) with iteration matrix \(N\)
TCG                conjugate gradient method [1]
TCGS               conjugate gradient squared method [2]
TBiCGStab          biconjugate gradient stabilized method [3]
TTFQMR             transpose-free quasi-minimal residual method [4]
TMINRES            minimal residual method for symmetric matrices [5]
TGMRES             generalized minimal residual method [6]

All solvers have the same interface function for solving the given system:

void solve ( const TLinearOperator< value_t > *  A,
             TVector< value_t > *                x,
             const TVector< value_t > *          b,
             const TLinearOperator< value_t > *  W    = nullptr,
             TSolverInfo *                       info = nullptr ) const;

Here, W is an optional operator used as a (left) preconditioner for solving the equation system. Furthermore, info is an optional object for storing information about the iteration process, e.g., whether it converged, the residual norm or the convergence rate.

Assuming a given matrix object A and vectors x and b, the minimal solving code looks as follows:

TGMRES solver( 20 );
solver.solve( A, x, b );
Remarks
The parameter 20 for GMRES defines the restart of the method, i.e., the dimension of the local Krylov subspace.

By default, the initial guess \(x\) is initialized to some solver dependent value before the iteration. To disable this, you may use the function initialise_start_value:

TGMRES solver( 20 );
solver.initialise_start_value( false );
solver.solve( A, x, b );

TAutoSolver

Since not all solvers are suitable for all matrices, e.g., some are not applicable to unsymmetric matrices, TAutoSolver decides which solver to use for a given combination of matrix and preconditioner.
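In its simplest form, TAutoSolver is used just like the other solver classes (a minimal sketch, assuming default construction; the exact constructor arguments are documented in the class reference):

TAutoSolver solver;          // automatically chooses a suitable iteration
solver.solve( A, x, b );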

Stop Criterion

The iteration process is controlled using objects of type TStopCriterion. The following parameters may be used to define when the iteration stops:

  • max_iter: maximal number of iteration steps,
  • abs_res_reduct: absolute residual norm (often also called tolerance), i.e., stop if \(|r_i| \le \varepsilon\),
  • rel_res_reduct: relative residual norm with respect to initial residual, i.e., stop if \(|r_i| \le \varepsilon |r_0|\),
  • rel_res_growth: relative residual growth, i.e., stop if \(|r_i| \ge \varepsilon |r_0|\) (in case of divergence).

All parameters may be specified during construction of TStopCriterion objects:

TStopCriterion ( uint max_iter,
real abs_res_red,
real rel_res_red,
real rel_res_growth );

However, reasonable default values are set.

Alternatively, the functions max_steps, relative_reduction and absolute_reduction can be combined to construct TStopCriterion objects, e.g.,

auto stop = max_steps( 100 ) + relative_reduction( 1e-6 );

All parameters not explicitly defined are set to default values (see Hpro::CFG::Solver).
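The resulting object may then be handed to a solver, e.g., as a constructor argument (a sketch, assuming the solver classes accept a TStopCriterion after their specific parameters, here the GMRES restart):

auto   stop = max_steps( 100 ) + relative_reduction( 1e-6 );
TGMRES solver( 20, stop );   // restart 20; stop after 100 steps or if |r_i| <= 1e-6 |r_0|
solver.solve( A, x, b );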

Solver Information

To obtain more information about the solution process, an object of type TSolverInfo has to be provided to the solve function. Afterwards, the following functions may be used to gather the corresponding data:

  • has_converged() : did the iteration converge,
  • has_diverged() : did the iteration diverge (based on TStopCriterion),
  • has_failed() : did the iteration fail, e.g., due to a breakdown,
  • n_iter() : return the number of iteration steps,
  • conv_rate() : return the convergence rate,
  • res_norm() : return the norm of the residual after the iteration stopped.

During construction of TSolverInfo, one may also specify to store the complete iteration history and to print this data during iteration:

TSolverInfo ( bool store_hist, bool print );
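A minimal sketch of its usage, assuming the solve signature shown above (the preconditioner argument is left empty):

TSolverInfo info( false, true );   // no history, but print progress during iteration
TGMRES      solver( 20 );

solver.solve( A, x, b, nullptr, & info );

if ( info.has_converged() )
    std::cout << "converged in " << info.n_iter() << " steps"
              << " with residual " << info.res_norm() << std::endl;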

Preconditioning

Besides various solvers, 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 also implements different preconditioning techniques for the iteration methods.

H-Matrix based Preconditioning

Of course, the most important of these preconditioners is based on some form of matrix inversion using H-arithmetic, e.g., H-LU factorization, H-WAZ factorization or H-inversion. Simply provide the corresponding linear operator object to the solve function:

TTruncAcc acc    = ...;
auto      A_copy = A->copy();   // work on a copy, since A is modified during factorization
auto      A_inv  = factorise_inv( A_copy.get(), acc );
TGMRES    solver( 20 );

solver.solve( A, x, b, A_inv.get() );
Remarks
A copy of the original matrix is used since \(A\) is modified during factorization.

Classical Preconditioners

Besides H-based methods, 𝖧𝖫𝖨𝖡𝗉𝗋𝗈 also implements some classical preconditioners:

  • TJacobi: point-wise and block-wise Jacobi method, i.e., \(D^{-1}\), with \(D\) being the (block) diagonal of \(A\).
  • TGaussSeidel: point-wise and block-wise forward/backward/symmetric Gauss-Seidel method, i.e., using the lower or upper triangular part of \(A\).
  • TSOR: point-wise forward/backward/symmetric SOR method.

However, the point-wise Gauss-Seidel and SOR methods may only be used (and only make sense) for a sparse matrix (TSparseMatrix). For an H-matrix, only the block-wise Gauss-Seidel method is available.

In the case of Jacobi, both the point-wise and the block-wise version may be used for all matrix types. Furthermore, the size of the diagonal blocks may also be defined.

The example above using a block-wise Gauss-Seidel preconditioner would look like this:

TGaussSeidel< value_t > gs( A, GS_SYMMETRIC );
TGMRES solver( 20 );
solver.solve( A, x, b, & gs );
Remarks
Please note that, when used together with TLinearIteration, the resulting iteration method corresponds to the Jacobi, Gauss-Seidel or SOR iteration, respectively. Without any preconditioner, TLinearIteration is identical to the Richardson iteration.
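As an illustration, a Jacobi iteration could then be set up as follows (only a sketch: the exact constructor arguments of TJacobi and TLinearIteration should be taken from the class reference):

TJacobi< value_t > jacobi( A );     // (block) diagonal of A as preconditioner
TLinearIteration   iteration;       // plain linear iteration (Richardson without preconditioner)

iteration.solve( A, x, b, & jacobi );   // corresponds to the Jacobi iteration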

Mixed-Prec Preconditioners

The definition of the solve function shown above is not actually identical to the version in 𝖧𝖫𝖨𝖡𝗉𝗋𝗈: there, the preconditioner may have a different value type than the matrix and the vectors. With this, you may use, e.g., a single precision preconditioner for a double precision matrix and vectors.

auto      A_copy = convert< float >( *A );   // convert to single precision
TTruncAcc acc    = ...;
auto      A_inv  = factorise_inv( A_copy.get(), acc );
TGMRES solver( 20 );
solver.solve( *A, *x, *b, *A_inv );

Bibliography

  1. M.R. Hestenes and E.L. Stiefel: "Methods of conjugate gradients for solving linear systems", J. Research Nat. Bur. Standards Section B, 49, 409-432, 1952.
  2. P. Sonneveld, "CGS, a fast Lanczos-type solver for nonsymmetric linear systems", SIAM J. Sci. Statist. Comput. 10, pp. 36-52, 1989.
  3. H.A. van der Vorst, "BI-CGSTAB: A fast and smoothly converging variant of BI-CG for the solution of nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., Vol. 13, No. 2, pp. 631-644, 1992.
  4. R.W. Freund, "A transpose-free quasi-minimum residual algorithm for non-Hermitian linear systems", SIAM J. Sci. Comput. 14, pp. 470-482, 1993.
  5. C.C. Paige and M.A. Saunders, "Solution of sparse indefinite systems of linear equations", SIAM J. Numerical Analysis 12, pp. 617-629, 1975.
  6. Y. Saad and M.H. Schultz, "GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., 7:856-869, 1986.