Monday, December 28, 2009

A simple example demonstrating the effect of scaling in optimization problems.

Now assume you have the constraints

      x <= 1000000
1000001 <= x

in an optimization model. Clearly the problem is infeasible. Now assume you have the constraints

1.0e-6 * x <= 1 + epsilon
   1000001 <= x

then they are still infeasible if epsilon = 0.0. Note that the second set of constraints is equivalent to the first set, except that the first constraint has been scaled by a factor of 1.0e-6 (assuming epsilon = 0.0).
 
However, for epsilon = 1.0e-6 the constraints are FEASIBLE, since x = 1000001 gives 1.0e-6 * x = 1.000001 <= 1 + epsilon. The status of the problem has changed!

Most optimizers for LPs and nonlinear problems will declare a solution feasible if it satisfies all the constraints within a small tolerance of, say, 1.0e-6. Hence they will report the second set of constraints as feasible even for epsilon = 0.0, whereas they will consider the first set of constraints infeasible.
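To make the tolerance argument concrete, here is a minimal, solver-independent sketch in Python. The tolerance value of 1.0e-6 is an assumption chosen for illustration; real solvers have their own defaults.

FEAS_TOL = 1.0e-6      # assumed feasibility tolerance, for illustration only
x = 1000001.0          # the smallest value allowed by 1000001 <= x

# Unscaled formulation: x <= 1000000. The violation is exactly 1.0,
# far beyond the tolerance, so the point is rejected.
viol_unscaled = x - 1000000.0

# Scaled formulation (epsilon = 0): 1.0e-6 * x <= 1. The violation is
# about 1.0e-6, i.e. right at the tolerance, so a solver using this
# (or any slightly looser) tolerance accepts the point.
viol_scaled = 1.0e-6 * x - 1.0

print("unscaled violation:", viol_unscaled, "accepted:", viol_unscaled <= FEAS_TOL)
print("scaled violation:  ", viol_scaled, "accepted:", viol_scaled <= FEAS_TOL)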


The following can be learned from this small example:
  • Scaling of constraints (and variables) has an effect.
  • The effect can be quite dramatic if the problem is close to the boundary between feasible and infeasible.
  • Different optimizers will scale differently, and hence it is quite likely they will reach different conclusions about whether the problem is feasible or not (a small solver experiment is sketched below).
By the way, most users of optimization software would prefer that the software says feasible. At least we are less likely to get complaints given that outcome.
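For readers who want to try this with an actual LP solver, the sketch below feeds both formulations to SciPy's linprog and prints the reported status. This is only an illustrative experiment: whether the scaled version is reported feasible depends on the solver's internal scaling and feasibility tolerances, so the outcome may differ between solvers, options, and versions.

# Illustrative experiment only: the reported status of the scaled problem
# depends on the solver's internal scaling and feasibility tolerance.
from scipy.optimize import linprog

c = [0.0]                   # pure feasibility problem, no real objective
bounds = [(1000001, None)]  # encodes 1000001 <= x

# Unscaled: x <= 1000000 together with 1000001 <= x -- plainly infeasible.
res_unscaled = linprog(c, A_ub=[[1.0]], b_ub=[1000000.0], bounds=bounds)

# Scaled (epsilon = 0): 1.0e-6 * x <= 1 together with 1000001 <= x -- the same
# problem mathematically, but the violation at x = 1000001 is only about 1.0e-6.
res_scaled = linprog(c, A_ub=[[1.0e-6]], b_ub=[1.0], bounds=bounds)

print("unscaled:", res_unscaled.status, res_unscaled.message)
print("scaled:  ", res_scaled.status, res_scaled.message)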
