Wednesday, November 23, 2011

A follow up on lower bounds on fill-in

This is a follow-up to my previous post on lower bounds on the fill-in when computing a sparse Cholesky factorization.

I posted my question to the NA-NET and below is a commented version of the replies I got.

Sivan Toledo pointed out the paper:

  • A Polynomial Approximation Algorithm for the Minimum Fill-In Problem
  • A. Natanzon, R. Shamir and R. Sharan
  • SIAM Journal on Computing, Vol. 30, No. 4, pp. 1067-1079 (2000)

and mentioned there might also be a relevant paper by Phil Klein. Most likely it is:
  • Cutting Down on Fill Using Nested Dissection: Provably Good Elimination Orderings
  • Ajit Agrawal, Philip N. Klein, and R. Ravi
  • Graph Theory and Sparse Matrix Computation, edited by A. George, J. Gilbert, and J. W. H. Liu, volume 56 of the IMA Volumes in Mathematics and its Applications, Springer-Verlag (1993), pp. 31-55.
Related to the above is work by Wee-Liang Heng titled "Approximately Optimal Elimination Orderings for Sparse Matrices". I have a printed copy somewhere but cannot locate it now.

The works mentioned above provide algorithms for computing a good symmetric permutation. However, to the best of my knowledge, they are not used in practice, but they are definitely something I should check out.
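To illustrate why the choice of symmetric permutation matters so much, here is a small Python sketch. This is my own toy example (not taken from any of the replies): an arrowhead matrix fills in completely when the dense row is eliminated first, but produces no fill-in at all when it is eliminated last.

```python
import numpy as np

n = 6
A = np.eye(n) * n          # well-conditioned diagonal
A[0, :] = 1.0              # dense first row ...
A[:, 0] = 1.0              # ... and first column
A[0, 0] = n                # "arrowhead" pointing at position 0

def cholesky_nnz(M, tol=1e-12):
    """Number of nonzeros in the Cholesky factor of M."""
    L = np.linalg.cholesky(M)
    return int(np.sum(np.abs(L) > tol))

# Eliminating the dense row first makes the factor completely dense.
print(cholesky_nnz(A))                 # 21 = n*(n+1)/2, full lower triangle

# Reversing the ordering (arrow last) avoids all fill-in.
p = np.arange(n)[::-1]
print(cholesky_nnz(A[np.ix_(p, p)]))   # 11 = 2*n - 1, no fill-in
```

The gap between the two counts grows quadratically with n, which is why ordering heuristics such as minimum degree and nested dissection are so important in practice.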

Esmond Ng replied that it is hard to come up with lower bounds for general matrices, but mentioned that bounds can be obtained for special matrices. The relevant papers are

  • Complexity Bounds for Regular Finite Difference and Finite Element Grids
  • A. J. Hoffman, M. S. Martin, and D. J. Rose
  • SIAM Journal on Numerical Analysis, Vol. 10, No. 2, pp. 364-369 (1973)
  • Nested Dissection of a Regular Finite Element Mesh
  • A. George
  • SIAM Journal on Numerical Analysis, Vol. 10, No. 2, pp. 345-363 (1973)
I am aware of this work, but I was mainly looking for information about general matrices, since the matrices I encounter in MOSEK almost never have grid structure. MOSEK is an optimization package, and hence the underlying applications are mostly from economics and planning, which give rise to very different matrices than those coming from physics applications.
Jeff Ovall pointed out a different line of research in his reply:
If you are willing to relax your notion of fill-in a bit, then I may be able to point you in a helpful direction.  Instead of thinking of fill-in in terms of putting non-zeros where there used to be zeros, one can also think of it as (slightly) increasing the rank of low-rank blocks.  For example, the inverse of the tridiagonal matrix with stencil (-1,2,-1) has no zero entries, but the rank of any off-diagonal block is precisely one, so the "fill-in" in this sense is small (requiring only O(n log n) storage instead of n^2).  Hierarchical matrices (Hackbusch, Grasedyck, Boerm, Bebendorf, ...) exploit this notion of low-rank fill-in not only to compress the "important" information in a matrix, but also to maintain a compressed format while performing factorizations (LU, Cholesky).  From the algebraic point of view, it is the Rank-Nullity Theorem which implies that inverses of sparse matrices will have large blocks which are of low rank.  If the LU-factorization is thought of block-wise, then this theorem also has something to say about the ranks of the blocks which appear, though it is not obvious to me how to get sharp lower bounds.  A relevant paper is:
  • The interplay of ranks of submatrices
  • Strang, G. & Nguyen, T. 
  • SIAM Rev., 2004, 46, 637-646 (electronic)
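The observation about the (-1,2,-1) matrix is easy to verify numerically. Here is a small sketch of my own in Python, checking that the inverse is completely dense while an off-diagonal block nevertheless has rank one.

```python
import numpy as np

n = 8
# Tridiagonal matrix with stencil (-1, 2, -1).
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Tinv = np.linalg.inv(T)

# The inverse has no zero entries at all ...
print(np.all(np.abs(Tinv) > 1e-12))           # True

# ... yet an off-diagonal block has rank exactly one.
print(np.linalg.matrix_rank(Tinv[:4, 4:]))    # 1
```

In fact the entries of the inverse satisfy (T^-1)_{ij} = (i+1)(n-j)/(n+1) for i <= j (0-based), which is a separable product of a function of i and a function of j; this is exactly why every off-diagonal block is rank one.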
To summarize, there does not seem to be any good lower bound on the minimal amount of fill-in possible when computing a sparse Cholesky factorization of a symmetrically permuted matrix.
