HLIBpro  3.0
Solving a Sparse Equation System

Remarks
This example corresponds to the C++ example "Solving a Sparse Equation System".

Given a sparse matrix $$A$$ and a right-hand side $$b$$, the solution $$x$$ of

$Ax = b$

is sought, whereby H-LU factorisation shall be used to improve the convergence of an iterative solver. No geometrical data is assumed and hence, algebraic clustering is performed for converting $$A$$ into an H-matrix. Furthermore, nested dissection is applied to improve the efficiency of the H-LU factorisation.

Remarks
During the example, variables are defined when needed to make the presentation more structured. This is, of course, not valid in strict C89, where all declarations must appear at the start of a block. In the plain program at the end of the presentation, all variable definitions are therefore placed at the start of the code blocks.

The program starts with the inclusion of the needed header files and the definition of the error handling macro.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <hpro-c.h>
#define CHECK_INFO { if ( info != HPRO_NO_ERROR ) \
{ char buf[1024]; hpro_error_desc( buf, 1024 ); \
printf( "\n%s\n\n", buf ); exit(1); } }
/*
* main function
*/
int
main ( int argc, char ** argv )
{

The next step is the definition of the sparse matrix, which shall be read from a file. HLIBpro supports various foreign matrix formats, e.g. Matlab, SAMG, Harwell-Boeing or Matrix Market. In this example, the matrix is stored in Harwell-Boeing format. One also needs to know whether the file contains a real or a complex valued matrix and choose the desired floating point precision. For this example, we assume a real valued matrix and use double precision:

int             info = HPRO_NO_ERROR;
hpro_d_matrix_t S    = NULL;

hpro_init( & info ); CHECK_INFO;

S = hpro_d_load_matrix( "matrix.hb", & info ); CHECK_INFO;

The function hpro_d_load_matrix performs autodetection of the file format and is therefore suited for all supported formats.

Next, some properties of the matrix are queried:

int is_sym = 0;
is_sym = ( hpro_d_matrix_is_sym( S, & info ) || hpro_d_matrix_is_herm( S, & info ) ? 1 : 0 );
printf( " matrix has dimension %lu x %lu\n", hpro_d_matrix_rows( S, & info ),
hpro_d_matrix_cols( S, & info ) );
printf( " symmetric = %d\n", is_sym );

We also want to read the right-hand side of the equation system, which is also expected to be stored in Harwell/Boeing format:

hpro_d_vector_t b;
b = hpro_d_load_vector( "rhs.hb", & info ); CHECK_INFO;

# Conversion into H-Format

Having a right-hand side, the equation system above could now be solved using an iterative scheme. Unfortunately, most sparse matrices need some form of preconditioning for an acceptable convergence behaviour. The preconditioner shall have the form of an H-LU factorisation of $$A$$ and hence of S. For this, we need a cluster tree and a block cluster tree.
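Written out, the factorisation enters as a left preconditioner: instead of $$Ax = b$$, the iteration is applied to

$(LU)^{-1} A x = (LU)^{-1} b$

Since $$LU \approx A$$, the preconditioned operator is close to the identity and an iterative scheme converges within a few steps.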

Since, by assumption, no geometrical data is available, the clustering is performed purely algebraically by using the connectivity relation of the indices stored in the sparse matrix itself. Furthermore, nested dissection is to be used, which needs some special clustering.

Algebraic clustering uses graph partitioning as the underlying technique. HLIBpro implements two graph partitioning algorithms (HPRO_ALG_BFS and HPRO_ALG_ML) and supports external algorithms, e.g. METIS and Scotch (HPRO_ALG_METIS and HPRO_ALG_SCOTCH). Here, we use the BFS-based partitioning strategy together with nested dissection:

hpro_clustertree_t ct;
ct = hpro_d_clt_build_alg_nd( S, HPRO_ALG_BFS, 40, & info ); CHECK_INFO;

Having the cluster tree, the block cluster tree can be computed:

hpro_blockclustertree_t bct;
hpro_admcond_t          adm;
adm = hpro_admcond_alg( HPRO_ADM_WEAK, S,
hpro_clt_perm_i2e( ct, & info ),
hpro_clt_perm_i2e( ct, & info ),
& info ); CHECK_INFO;
bct = hpro_bct_build( ct, ct, adm, & info ); CHECK_INFO;

Here, the weak admissibility condition for graphs (HPRO_ADM_WEAK) is used. Since the block clusters to be tested are based on the internal ordering of the H-matrix while the sparse matrix is defined in the external ordering, the corresponding mappings between both orderings are needed by the admissibility condition.

The final step is the conversion of the sparse matrix into H-format. The accuracy can usually be neglected (set to some arbitrary value), since low-rank blocks are usually empty (clusters of positive distance usually have no overlapping basis functions).

hpro_d_matrix_t A;
hpro_acc_t acc = hpro_acc_fixed_eps( 0.0 );
A = hpro_d_matrix_build_sparse( bct, S, acc, & info ); CHECK_INFO;

# H-LU factorisation

The factorisation is performed in place, i.e. the data of A is overwritten by the corresponding factors. This implies that A is no longer a regular matrix object after the factorisation, since the stored data requires special handling. For this, linear operators are used, with special versions for each factorisation algorithm, to permit evaluation of the factorised matrix or of its inverse.
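Note that applying $$(LU)^{-1}$$ never forms an explicit inverse; as with any LU-type factorisation, it amounts to two triangular solves, here performed in H-arithmetic:

$Ly = b, \quad Ux = y \quad\Rightarrow\quad x = (LU)^{-1} b$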

hpro_d_linearoperator_t LU;
hpro_acc_t lu_acc = hpro_acc_fixed_eps( 1e-4 );
LU = hpro_d_matrix_factorise_inv( A, lu_acc, & info ); CHECK_INFO;

The factorisation is performed up to a block-wise accuracy of $$10^{-4}$$. Depending on the given matrix, this accuracy has to be adjusted to obtain either a direct solver (a single matrix-vector multiplication) or a good preconditioner (a small number of iterations).

Some properties of the factorisation are printed next, most notably the inversion error $$\|I-(LU)^{-1}S\|_2$$ and the condition estimate $$\|LU\mathbf{1}\|_2$$. However, to compare $$(LU)^{-1}$$ with $$S$$, both need to have the same ordering of the indices. Instead of reordering $$S$$, the factorised matrix may be wrapped in an object representing a permuted matrix:

hpro_d_linearoperator_t PLU;
PLU = hpro_d_perm_linearoperator( hpro_clt_perm_i2e( ct, & info ),
LU,
hpro_clt_perm_e2i( ct, & info ),
& info ); CHECK_INFO;

Here, an operator was built, which first reorders w.r.t. the H-matrix, then applies the factorised matrix and finally reorders back.
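With $$P_{e \to i}$$ and $$P_{i \to e}$$ denoting the permutations from external to internal ordering and back, the wrapped operator reads

$PLU = P_{i \to e} \, (LU)^{-1} \, P_{e \to i}$

and hence accepts and returns vectors in the external ordering of S.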

Now, the inversion error may be computed:

printf( " inversion error = %.4e\n",
hpro_d_linearoperator_norm_inv_approx( (hpro_d_linearoperator_t) S, PLU, & info ) ); CHECK_INFO;

# Solving the Equation System

Finally, the equation system can be solved using the inverse of the matrix factorisation as a preconditioner. The solution vector is created using the function hpro_d_matrix_col_vector, which returns a vector defined over the column index set of S.

hpro_d_vector_t x;
hpro_solver_t solver;
hpro_solve_info_t sinfo;
x = hpro_d_matrix_col_vector( S, & info ); CHECK_INFO;
solver = hpro_solver_auto( & info ); CHECK_INFO;
hpro_solver_initialise_start_value( solver, 1, & info ); CHECK_INFO;
hpro_d_solver_solve( solver, (hpro_d_linearoperator_t) S, x, b, PLU,
& sinfo, & info ); CHECK_INFO;
if ( sinfo.converged )
printf( " converged in %u steps with rate %.2e, |r| = %.2e\n",
sinfo.steps, sinfo.conv_rate, sinfo.res_norm );
else
printf( " not converged in %u steps\n", sinfo.steps );

The iterative solver is chosen automatically depending on S and the factorisation, e.g. MINRES for symmetric matrices and GMRES for unsymmetric matrices. The start value of the iteration is initialised by the solver instead of using the data provided by the user, since no initial guess is available.

The program is finished by freeing all objects and with the finalisation of HLIBpro:

hpro_solver_free( solver, & info ); CHECK_INFO;
hpro_d_vector_free( x, & info ); CHECK_INFO;
hpro_d_linearoperator_free( PLU, & info ); CHECK_INFO;
hpro_d_linearoperator_free( LU, & info ); CHECK_INFO;
hpro_d_vector_free( b, & info ); CHECK_INFO;
hpro_d_matrix_free( A, & info ); CHECK_INFO;
hpro_bct_free( bct, & info ); CHECK_INFO;
hpro_clt_free( ct, & info ); CHECK_INFO;
hpro_d_matrix_free( S, & info ); CHECK_INFO;
hpro_done( & info ); CHECK_INFO;
}

# The Plain Program

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <hpro-c.h>
#define CHECK_INFO { if ( info != HPRO_NO_ERROR ) \
{ char buf[1024]; hpro_error_desc( buf, 1024 ); \
printf( "\n%s\n\n", buf ); exit(1); } }
/*
* main function
*/
int
main ( int argc, char ** argv )
{
int                     info = HPRO_NO_ERROR;
hpro_d_matrix_t         S    = NULL;
int                     is_sym = 0;
hpro_d_vector_t         b;
hpro_clustertree_t      ct;
hpro_blockclustertree_t bct;
hpro_admcond_t          adm;
hpro_d_matrix_t         A;
hpro_acc_t              acc;
hpro_d_linearoperator_t LU;
hpro_acc_t              lu_acc;
hpro_d_linearoperator_t PLU;
hpro_d_vector_t         x;
hpro_solver_t           solver;
hpro_solve_info_t       sinfo;
hpro_init( & info ); CHECK_INFO;
S = hpro_d_load_matrix( "matrix.hb", & info ); CHECK_INFO;
is_sym = ( hpro_d_matrix_is_sym( S, & info ) || hpro_d_matrix_is_herm( S, & info ) ? 1 : 0 );
printf( " matrix has dimension %lu x %lu\n", hpro_d_matrix_rows( S, & info ),
hpro_d_matrix_cols( S, & info ) );
printf( " symmetric = %d\n", is_sym );
b = hpro_d_load_vector( "rhs.hb", & info ); CHECK_INFO;
ct = hpro_d_clt_build_alg_nd( S, HPRO_ALG_BFS, 40, & info ); CHECK_INFO;
adm = hpro_admcond_alg( HPRO_ADM_WEAK, S,
hpro_clt_perm_i2e( ct, & info ),
hpro_clt_perm_i2e( ct, & info ),
& info ); CHECK_INFO;
bct = hpro_bct_build( ct, ct, adm, & info ); CHECK_INFO;
acc = hpro_acc_fixed_eps( 0.0 );
A = hpro_d_matrix_build_sparse( bct, S, acc, & info ); CHECK_INFO;
lu_acc = hpro_acc_fixed_eps( 1e-4 );
LU = hpro_d_matrix_factorise_inv( A, lu_acc, & info ); CHECK_INFO;
PLU = hpro_d_perm_linearoperator( hpro_clt_perm_i2e( ct, & info ),
LU,
hpro_clt_perm_e2i( ct, & info ),
& info ); CHECK_INFO;
printf( " inversion error = %.4e\n",
hpro_d_linearoperator_norm_inv_approx( (hpro_d_linearoperator_t) S, PLU, & info ) ); CHECK_INFO;
x = hpro_d_matrix_col_vector( S, & info ); CHECK_INFO;
solver = hpro_solver_auto( & info ); CHECK_INFO;
hpro_solver_initialise_start_value( solver, 1, & info ); CHECK_INFO;
hpro_d_solver_solve( solver, (hpro_d_linearoperator_t) S, x, b, PLU,
& sinfo, & info ); CHECK_INFO;
if ( sinfo.converged )
printf( " converged in %u steps with rate %.2e, |r| = %.2e\n",
sinfo.steps, sinfo.conv_rate, sinfo.res_norm );
else
printf( " not converged in %u steps\n", sinfo.steps );
hpro_solver_free( solver, & info ); CHECK_INFO;
hpro_d_vector_free( x, & info ); CHECK_INFO;
hpro_d_linearoperator_free( PLU, & info ); CHECK_INFO;
hpro_d_linearoperator_free( LU, & info ); CHECK_INFO;
hpro_d_vector_free( b, & info ); CHECK_INFO;
hpro_d_matrix_free( A, & info ); CHECK_INFO;
hpro_bct_free( bct, & info ); CHECK_INFO;