Thursday, November 17, 2011

Lower bounds on the minimal fill-in when factorizing a symmetric positive definite matrix. Any help out there?

When computing a Cholesky factorization of a positive definite
symmetric matrix A, it is well known that a suitable symmetric
reordering helps keep the number of fill-ins and the total number of
flops down.

Finding the optimal ordering is NP-hard, but good orderings can be
obtained with the minimum degree and nested dissection (i.e. graph
partitioning based) algorithms. Any such ordering provides an upper
bound on the minimal amount of fill-in. However, I wonder: is there any
way to compute a nontrivial lower bound on the minimal amount of
fill-in?
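
For concreteness, here is a minimal sketch (not MOSEK code) of how any given ordering yields an upper bound on the minimal fill-in: count the nonzeros created by symbolic elimination under that ordering. The grid Laplacian test matrix and SciPy's reverse Cuthill-McKee ordering are arbitrary choices for illustration; a real fill-reducing ordering would come from minimum degree or nested dissection.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def fill_in(A, perm):
        """Below-diagonal fill-in of the Cholesky factor of A[perm,:][:,perm],
        computed by symbolic elimination (structure only, no numerics)."""
        A = sp.csc_matrix(A)[perm, :][:, perm].tocsc()
        n = A.shape[0]
        # adjacency sets of the graph of A (diagonal excluded)
        adj = [set(A.indices[A.indptr[j]:A.indptr[j + 1]]) - {j} for j in range(n)]
        fill = 0
        for k in range(n):
            nbrs = sorted(j for j in adj[k] if j > k)  # not yet eliminated
            for a in range(len(nbrs)):
                for b in range(a + 1, len(nbrs)):
                    i, j = nbrs[a], nbrs[b]
                    if j not in adj[i]:                # new edge = one fill entry
                        adj[i].add(j)
                        adj[j].add(i)
                        fill += 1
        return fill

    if __name__ == "__main__":
        # 2-D Laplacian on a 10x10 grid as a small SPD test matrix.
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(10, 10))
        A = sp.kronsum(T, T).tocsc()
        natural = np.arange(A.shape[0])
        rcm = reverse_cuthill_mckee(A.tocsr(), symmetric_mode=True)
        print("fill-in, natural ordering:", fill_in(A, natural))
        print("fill-in, RCM ordering    :", fill_in(A, rcm))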

I have been searching the literature but have not found any good
reference.  Do you have any suggestions? The problem sounds hard, I
know.

Why is a lower bound important? Well, it would help evaluate the
quality of the ordering algorithms that we have implemented in MOSEK
(www.mosek.com).  Moreover, the minimum degree ordering is usually
computationally cheap, whereas nested dissection is expensive. A lower
bound could help me determine when the minimum degree ordering could
potentially be improved by using nested dissection.




Thursday, September 8, 2011

Conference in Honor of Etienne Loute

Yesterday, I was at a conference honoring Etienne Loute where I presented the talk "Convex Optimization: Conic Versus Functional Form". The message of the talk is that if you can formulate an optimization problem as a conic quadratic optimization problem (aka SOCP), then there are many good reasons to prefer this form over other forms. Some reasons are:
  • Conic quadratic problems are convex by construction.
  • They are almost as simple as LPs to deal with in software.
  • Duality theory is almost as simple as in the linear case.
  • The (Nesterov-Todd) primal-dual algorithm for conic quadratic problems is extremely good.

I had feared that the distinguished audience would consider my talk too simple. However, the speaker before me was Bob Fourer of AMPL. His talk was about checking convexity of general optimization problems formulated in AMPL, and it had two parts. The first part was about detecting convexity, and the second part was about how, in some cases, an optimization problem in functional form can automatically be converted to a conic quadratic optimization problem. So the two talks complemented each other very well, and that made me feel better about my own talk.

Finally, I would like to mention that Yurii Nesterov was one of the speakers. He must have enjoyed the day, since two speakers were talking about his baby: optimization over symmetric cones.

Monday, May 23, 2011

A report from SIAM Optimization 2011

Last week I was at the SIAM Optimization 2011 conference in Darmstadt. It was a very nice conference for the following reasons:
  • It was very easy to get there, since Darmstadt is close to the major airport in Frankfurt.
  • The conference center, the Darmstadtium in the center of Darmstadt, was excellent.
  • The hotel was excellent and only a 2-minute walk from the conference center.
  • The food was fairly cheap and very good. The same was true for the beer.
  • The scientific program was excellent but also very packed. I did not experience any no-shows. 
Some of the major topics at the conference were MINLP (mixed-integer nonlinear optimization) and SDP (semi-definite optimization). These two topics came together in the plenary talk by Jon Lee on the last day, because it seems that SDP will play an important role in MINLP. In fact, he concluded his talk by saying: "We need powerful SDP software". Since we plan to support SDP in MOSEK, this was a nice conclusion for us.

One of the most interesting talks I saw was a tutorial by my friend Etienne de Klerk. He discussed a preprocessing technique that can be used in semi-definite optimization to reduce the computational complexity of some problems. The idea is that a big semi-definite variable can be decomposed into a number of smaller semi-definite variables, provided some transformations are performed on the problem data.

Another major topic at the conference was sparse optimization, where by sparse is meant that the solution is sparse. There was a plenary talk about this topic by Steve Wright and a tutorial by Michael Friedlander. Michael presented, among other things, a framework that can be used to understand and evaluate the various algorithms suggested for solving optimization problems with structured sparse solutions. The topic was also addressed in a talk about robust support vector machines by Laurent El Ghaoui, which I liked quite a bit.

Finally, a couple of MOSEK guys gave a presentation in a session. The other speakers in that session were Joachim Lofberg, the author of YALMIP, and Christian Bliek of IBM. Christian talked about the conic interior-point optimizer in CPLEX. One slightly surprising announcement Christian made was that CPLEX will employ the homogeneous model in its interior-point optimizer as the default from version 12.3. In MOSEK the homogeneous model has always been the default.


I have definitely overlooked or forgotten something important from the conference, but with a 4-day conference running from early morning to late evening, that is unfortunately unavoidable.

Wednesday, April 27, 2011

Formulating linear programs is hard!

When I was younger I taught linear programming (LP) to business students. One lesson I learned is that formulating an LP is much harder than it appears when reading a standard textbook. Recently I came across the paper "Formulating Integer Linear Programs: A Rogues’ Gallery", which provides many ideas that can be useful when formulating LPs.

Wednesday, April 20, 2011

In need of an optimal basis improvement procedure in linear programming.

One of our MOSEK customers is solving an LP with the primal simplex optimizer. Next she does a sensitivity analysis, but that fails because the optimal basis is singular. This should not happen in an ideal world, but it can happen for at least three reasons:
  • A bug may cause the optimal basis to be singular.
  • The primal simplex optimizer works on a presolved and scaled problem, whereas the sensitivity analysis is performed on the original problem. The basis might be well-conditioned in the presolved and scaled "space" but not in the original space.
  • The simplex optimizer usually starts from a nicely conditioned basis. In each iteration an LU representation of the basis is updated using rank-1 updates. Using an idea by John Tomlin it is possible in some cases to discover during the rank-1 update that the basis has become ill-conditioned; if the update does not signal ill-conditioning, the iterations simply continue. Hence, it might very well be that an ill-conditioned basis is never discovered, particularly if it becomes ill-conditioned in the last simplex iteration.
Since most real-world LPs have multiple optimal basic solutions, looking for the best-conditioned (near-)optimal basis might be very useful before doing sensitivity analysis or even a hot start. Finding the best-conditioned optimal basis is most likely not computationally feasible, but maybe it can be done in an approximate way.
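
As a rough sketch (this is not MOSEK code, and the threshold is an arbitrary illustrative choice of mine), the kind of check I have in mind before sensitivity analysis or a hot start is simply to estimate the condition number of the basis matrix:

    import numpy as np

    def basis_condition(A, basis):
        """2-norm condition number of the basis matrix B = A[:, basis]."""
        B = np.asarray(A)[:, basis]
        return np.linalg.cond(B)

    def safe_for_sensitivity(A, basis, tol=1e10):
        kappa = basis_condition(A, basis)
        if kappa > tol:
            print("warning: basis condition number is %.1e; "
                  "sensitivity analysis may be meaningless" % kappa)
            return False
        return True

    if __name__ == "__main__":
        A = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1e-12, 1.0]])
        print(safe_for_sensitivity(A, [0, 1]))  # nearly singular basis -> False
        print(safe_for_sensitivity(A, [0, 2]))  # identity basis -> True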

At ISMP 2009 I saw a talk about cutting plane methods which showed some relation between the quality of the generated cuts and the conditioning of the basis.

Btw. the John Tomlin article I am referring to is "An Accuracy Test for Updating Triangular Factors", Mathematical Programming Study 4, pp. 142-145 (1975).

Wednesday, March 2, 2011

Is it safe to move lower bounds to zero?

Assume we have the problem

  min c'x
  st.   A x = b        (P0)
         x >= l

where l is large in absolute value, say l_j = -1000 for all j (l is short for lower bound). The dual problem is


   max b'y + l's
   st.    A'y + s = c  (D0)
           s >= 0

 It is very common to transform the problem, substituting x := x - l, into

  min c'x + c'l
  st.   A x = b - A l  (P1)
         x >= 0

for efficiency reasons. Indeed all interior-point optimizers will do that.
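
As a minimal sketch of that transformation in plain NumPy (this is of course not how any particular optimizer implements its presolve): shift the right-hand side, remember the objective constant c'l, and shift the solution back afterwards.

    import numpy as np

    def shift_lower_bounds(c, A, b, l):
        """Data of (P1): same c and A, right-hand side b - A l,
        plus the objective constant c'l that must be added back."""
        return c, A, b - A @ l, c @ l

    def unshift_solution(x_shifted, l):
        """Map a solution x of (P1) back to a solution of (P0)."""
        return x_shifted + l

    if __name__ == "__main__":
        c = np.array([1.0, 2.0])
        A = np.array([[1.0, 1.0]])
        b = np.array([0.0])
        l = np.array([-1000.0, -1000.0])
        c1, A1, b1, const = shift_lower_bounds(c, A, b, l)
        x_shifted = np.array([2000.0, 0.0])   # feasible in (P1): A1 x = b1 = 2000
        x = unshift_solution(x_shifted, l)    # feasible in (P0): A x = b, x >= l
        print(c @ x, c1 @ x_shifted + const)  # identical objective values: -1000.0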

The dual problem is

   max (b - A l)'y + c'l
   st.    A'y + s = c  (D1)
           s >= 0


Let us say we solve (P1) and (D1). Moreover, the equation

   A'y + s =  c

holds only approximately, which will definitely be the case for interior-point methods. To be precise, we have that

   A'y + s =  c + e

holds exactly, where 0 < ||e|| << 1.
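
Spelling out the consequence (just substituting the inexact equation into the dual objective of (D0)):

   b'y + l's = b'y + l'(c + e - A'y)
             = (b - A l)'y + c'l + l'e

That is, the objective of (D0) evaluated at (y,s) differs from the objective of (D1) by the term l'e, which can be large when l is large in absolute value even though ||e|| is tiny.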


This implies that if (y,s) from (D1) is reported as an optimal solution to (D0), then there can be a big error in the dual objective value. Note that this is not the case if l = 0. If we instead report (y, c - A'y) as the optimal dual solution to (D0), then the objective value will be correct, but s >= 0 may be violated.

The question is which dual solution to report. I guess the answer is that it depends on your priorities.

I will leave it as an exercise for the interested reader to construct a small example demonstrating this, since I just spent all day figuring out that this was what was happening on an instance with 100K variables.


       

Thursday, November 4, 2010

Which cones are needed to represent almost all convex optimization problems?

Recently I visited Professor Yinyu Ye at Stanford University, whom I worked with during my Ph.D. studies, where we did some nice work together. Now we hope we might have some new ideas to explore; more about that later. During my visit I also talked to Stephen Boyd and Jacob Mattingley. Jacob gave me a presentation of his interesting Ph.D. work, which he defended while I was at Stanford.

During a lunch with Yinyu, Stephen and some other Stanford guys, Stephen said something like: "Almost all convex optimization problems can be formulated using a combination of the linear, quadratic, semi-definite and exponential cones". It is a view I share. Given that it is true, it has an important practical ramification that I will return to shortly. Stephen is aware that his statement is hard to prove, but I think it is true in the same sense that almost all large-scale LPs are sparse: it is not universally true, but it is true in practice.

Please note that any convex problem can be formulated in conic form; Stephen's postulate is that only very few cone types are required. Moreover, currently the exponential cone is the only one of these we do not know how to deal with in practice, because it is a so-called nonsymmetric cone.

Occasionally at MOSEK support we get a question like: "I have a nonlinear model. How do I solve it with MOSEK?"  My first reply is always: "Is your model convex?", because MOSEK only deals with convex models. If the user knows what convexity is, then the user will usually reply yes. Maybe the user will add that it looks complicated to solve general convex models with MOSEK. That is particularly true if you do not use a modeling language like AMPL or GAMS. Here Stephen Boyd's observation comes in handy, because if your model does not include exponential terms (or logarithmic terms) or semi-definite terms, then most likely the problem can be formulated as a conic quadratic optimization problem. That is fortunate because
  • conic quadratic optimization problems are as easy as linear problems to deal with software-wise. The user does not have to bother with gradients and Hessians, for instance.
  • the optimizer for conic quadratic optimization problems is much more robust than the optimizer for general convex optimization problems.
Recently, after I came back from Stanford, a user wrote to MOSEK support: "I have this complicated convex problem that cannot be formulated as a conic quadratic optimization problem. How should I solve it with MOSEK?" After some back and forth I got him to reveal that it essentially had nonlinear constraints of the type


      max(x,0)^2 <= t

Now that is in fact conic quadratic representable as follows

     s^2 <= t
     x    <= s


So it turned out that the complicated convex model is conic quadratic representable, contrary to the initial statement.
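
To be completely explicit about the cone form (using the rotated quadratic cone convention 2*x_1*x_2 >= x_3^2 with x_1, x_2 >= 0), the representation above reads

     x <= s
     (t, 1/2, s) in the rotated quadratic cone, i.e. 2 * t * (1/2) >= s^2 and t >= 0

which is exactly s^2 <= t together with x <= s.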


The upshot of Stephen Boyd's statement is that if your model is convex and does not include exponential or logarithmic terms, then it is very likely that it can be represented as a conic quadratic or a semi-definite problem. I think that is a very useful observation in practice.

PS. Usually when reformulating a problem as a conic quadratic optimization problem, the number of constraints and variables is expanded. Some think that is inefficient. However, it should be noted that the structure introduced is very sparse and hence does not hurt performance much. There are even some cases (think QPs with a low-rank dense Hessian) where the conic quadratic representation is bigger but requires much less space when stored sparsely.