## Reviewing Classical Literature in Operations Research – Lagrangian Relaxation

This post is a summary of Professor Marshall Fisher's "An applications oriented guide to Lagrangian relaxation", published in 1985. I found it an interesting paper. This is an open document; feel free to use it without permission and without citing this page.

*Summary*

Lagrangian relaxation comes from the idea that we can make a mathematical model easier by moving one of the hard constraints of the problem into the objective function as a penalty term, introducing a new dual decision variable (the multiplier).

Marshall Fisher’s paper explains the applications of this simple idea. He provides a very basic introduction to Lagrangian relaxation and works through a number of easy-to-understand numerical examples. He also explains why Lagrangian relaxation is increasingly being used in large-scale mathematical programming applications.

The article was written in 1985 and published in “Interfaces”. Fisher mentions that, at the time, there had been several publications on the theoretical foundations of Lagrangian relaxation, but it was not used much in practice simply because there was no established framework for using it (in other words, people did not know how to use it efficiently).

In Lagrangian relaxation, complicated side constraints are replaced by penalty terms added to the objective function. The Lagrangian problem is easier to solve and, for a maximization integer problem, provides an upper bound on the optimal value. This idea can replace the linear relaxation as the source of bounds in branch-and-bound methods.

On page 11, Fisher provides a numerical example of how a Lagrangian relaxation can be done, and explains why it gives an upper bound on the problem (because we only add a nonnegative term to the objective function).
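The flavor of that example can be reproduced with a tiny brute-force check. The instance below is my own reconstruction, not quoted from the paper (the coefficients are an assumption, chosen to be consistent with the (4-4u)x_{4} term Fisher discusses); the point is only that the Lagrangian value Z_{D}(u) is an upper bound on the optimum Z* for every u ≥ 0:

```python
import itertools

# Reconstructed small 0-1 example (coefficients are an assumption):
#   maximize 16x1 + 10x2 + 0x3 + 4x4
#   s.t.     8x1 + 2x2 + x3 + 4x4 <= 10   (the "hard" constraint)
#            x1 + x2 <= 1,  x3 + x4 <= 1,  x binary
c, a, b = [16, 10, 0, 4], [8, 2, 1, 4], 10

def feasible(x):
    return (sum(a[i] * x[i] for i in range(4)) <= b
            and x[0] + x[1] <= 1 and x[2] + x[3] <= 1)

# Optimal value of the original problem, by enumeration.
z_star = max(sum(c[i] * x[i] for i in range(4))
             for x in itertools.product([0, 1], repeat=4) if feasible(x))

def z_d(u):
    """Lagrangian bound: dualize the hard constraint with multiplier u >= 0,
    keeping only the two easy constraints in the subproblem."""
    return max(sum((c[i] - u * a[i]) * x[i] for i in range(4)) + u * b
               for x in itertools.product([0, 1], repeat=4)
               if x[0] + x[1] <= 1 and x[2] + x[3] <= 1)

print(z_star, z_d(1.0))  # z_d(u) >= z_star for every u >= 0
```

On this toy instance the brute-force optimum is 16, while the Lagrangian problem at u = 1 evaluates to 18, illustrating the upper-bound property.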

He then goes on to answer the following questions:

- Which constraints should be relaxed
- How to compute good multipliers (dual variable u)
- How to deduce a good feasible solution to the original problem, given a solution to the relaxed problem

**Which constraints should be relaxed?**

Relaxing them should make the problem significantly easier to solve.

**How to compute good multipliers u?**

Fisher provides a general-purpose procedure called “the subgradient method”, and explains that one can use this generic approach or a more ad-hoc, problem-specific one.

**How to deduce a good feasible solution to the original problem, given a solution to the relaxed problem?**

This is also problem specific.

**The subgradient method for setting the dual variables:**

The dual variable (sometimes called the Lagrangian multiplier) can be found by the procedure that Fisher explains. He draws an interesting contrast: in the Lagrangian problem, a term like (4-4u)x_{4} can be interpreted as a supply of 4 and a demand of 4u for the resource associated with x_{4}. The dual variable u can be interpreted as a price charged for the resource. It turns out that if we can discover a price at which supply and demand for the resource are equal, then this price also gives a tight upper bound. However, such a price might not exist.

On page 13, he first relaxes the integrality constraints, allowing the binary x to take any value, and shows that the optimal value of the dual variable minimizes the maximization problem (which means we are minimizing the penalty).

The whole idea behind an algorithm for finding the value of u comes from the fact that the cost function of the Lagrangian problem, Z_{D}(u), is convex. Therefore we can find the value of u by using gradient methods at the points where Z_{D}(u) is differentiable.
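Why Z_{D}(u) is convex can be seen in one line (a sketch, writing the relaxed constraints generically as Ax ≤ b for a maximization problem):

```latex
Z_{D}(u) \;=\; \max_{x \in X}\,\bigl[\, cx + u(b - Ax) \,\bigr]
```

For each fixed x the bracketed expression is affine in u, and a pointwise maximum of finitely many affine functions is piecewise linear and convex. It is non-differentiable exactly at the kinks where the maximizing x changes, which is why a subgradient method, rather than a plain gradient method, is the natural tool.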

**Fisher’s procedure generates a sequence of multipliers u with the following update rule** (written here for relaxed constraints Ax ≤ b in a maximization problem):

u^{k+1} = max{ 0, u^{k} - t_{k}(b - Ax^{k}) }

in which t_{k} is a scalar step size and x^{k} is an optimal solution to the Lagrangian problem at u^{k}.

Held, Wolfe and Crowder [1974] showed that if, as k → ∞, we get t_{k} → 0 and Σ_{i=1}^{k} t_{i} → ∞, then the objective value Z_{D}(u^{k}) converges to its optimal value.

For practical use, Fisher recommends the following value for t_{k}:

t_{k} = λ_{k} ( Z_{D}(u^{k}) - Z* ) / ‖b - Ax^{k}‖²

where Z* is the objective value of the best known feasible solution to the problem and λ_{k} is a scalar between 0 and 2. Readers can find more explanation of this choice in Held, Wolfe and Crowder [1974].
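A minimal, self-contained sketch of this subgradient scheme in Python. The small instance is my own reconstruction (an assumption, not taken verbatim from the paper), and the lambda-halving schedule is a common practical heuristic rather than Fisher's prescription:

```python
import itertools

# Reconstructed instance: maximize 16x1 + 10x2 + 0x3 + 4x4
# s.t. 8x1 + 2x2 + x3 + 4x4 <= 10 (dualized), x1+x2 <= 1, x3+x4 <= 1, x binary.
c, a, b = [16, 10, 0, 4], [8, 2, 1, 4], 10

def solve_lagrangian(u):
    """Return (Z_D(u), argmax x) for the relaxed problem at multiplier u."""
    best_val, best_x = None, None
    for x in itertools.product([0, 1], repeat=4):
        if x[0] + x[1] <= 1 and x[2] + x[3] <= 1:
            val = sum((c[i] - u * a[i]) * x[i] for i in range(4)) + u * b
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

def subgradient(z_star, iters=50, lam=2.0):
    """Subgradient method with Fisher's recommended step size;
    z_star is the objective value of the best known feasible solution."""
    u, best_bound = 0.0, float("inf")
    for k in range(iters):
        zd, x = solve_lagrangian(u)
        best_bound = min(best_bound, zd)             # tightest bound seen
        g = sum(a[i] * x[i] for i in range(4)) - b   # Ax^k - b
        if g == 0:
            break                                    # supply meets demand
        t = lam * (zd - z_star) / g ** 2             # recommended step size
        u = max(0.0, u + t * g)                      # project onto u >= 0
        if k % 5 == 4:
            lam /= 2                                 # heuristic lambda decay
    return best_bound

bound = subgradient(z_star=16)  # 16 is the optimum found by enumeration
print(bound)                    # a valid upper bound, near the dual optimum
```

On this instance the returned bound settles near 18, the minimum of Z_{D}(u) (attained around u = 1).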

*Other procedures:*

There are procedures other than the subgradient method for finding the dual variables; these are called “multiplier adjustment methods”. They typically exploit the special structure of the dual problem.

Fisher gives an overview of the literature on the following problem-specific method:

- Erlenkotter [1978]: Algorithm for the uncapacitated location problem

**Comparing with linear programming bounds:** He shows that for his numerical example the upper bound obtained from the Lagrangian problem matches the LP upper bound. Moreover, the optimal LP dual value is exactly equal to the Lagrangian multiplier (duh!).

Geoffrion [1974] states that Z_{D} ≤ Z_{LP} for any Lagrangian relaxation of a maximization problem.

Z_{D} = Z_{LP} if the Lagrangian problem is unaffected by removing the integrality requirement on x (the so-called integrality property).

On page 16, Fisher uses a different relaxation, dualizing other constraints, and obtains a knapsack problem. The knapsack problem is known to be difficult (NP-hard), but there are effective algorithms for solving it (e.g., dynamic programming, which runs in pseudo-polynomial time).

He shows that by carefully choosing the constraints to dualize, one can get upper bounds that are significantly tighter than the LP bounds.

On page 17, he provides a figure describing a generic Lagrangian relaxation algorithm.

The following articles have used Lagrangian relaxation successfully:

- Bean [1984] maximizes the diversity of a portfolio
- Fisher, Jaikumar, Greenfield and Kedia [1982] use it for scheduling a fleet of trucks for bulk delivery
- Bell [1983] uses it to reduce expenses in chemical products
- Shepard [1984] used it for Exxon tank truck scheduling
- Fisher, Jaikumar, Greenfield and Kedia [1982] used it for DuPont for vehicle routing
- Glover, Karney and Klingman [1979] used it for job assignment problems
- Graves [1982] uses it for product planning
- Manero [1984] has used it for a check processing operation for a bank
- Mulvey [1980] has used it for condensing large databases

It is not always trivial to find a Lagrangian relaxation that is computationally practical. More research needs to be done, and we will see in the future whether Lagrangian relaxation continues to be used.

Interesting! I was going to make a comment on using distance integrals in the penalty term – as opposed to explicit functions of the type described – but I don’t know exactly what to write. Well, commenting is cheap, so I’ll write this anyway.

F Pait said this on November 29, 2009 at 2:51 am