Chapter 5 / Constrained Optimization when Calculus Works

5.4 The Lagrange Multiplier Method


We just showed that, for the case of two goods, under certain conditions the optimal bundle is characterized by two conditions: the tangency condition, which sets the MRS equal to the slope of the constraint, and the constraint condition, which requires that the bundle lie along the constraint itself.

It turns out that this is a special case of a more general optimization tool called the Lagrange multiplier method.

The Lagrange Multiplier Method: General Formula

The Lagrange multiplier method (or just “Lagrange” for short) says that to solve the constrained optimization problem of maximizing some objective function of $n$ variables \(f(x_1, x_2, ..., x_n)\) subject to some constraint on those variables \(g(x_1, x_2, ..., x_n) = k\), we can convert it to an unconstrained optimization problem and find the critical points of an expression called a Lagrangian, which is a function not only of the original variables $x_1, x_2, …, x_n$ but also of a new variable called the “Lagrange multiplier,” $\lambda$: \(\mathcal{L}(x_1, x_2, ..., x_n,\lambda) = f(x_1, x_2, ..., x_n) + \lambda(k - g(x_1, x_2, ..., x_n))\) Note that this is constructed by adding to the objective function an expression which is equal to zero at any point along the constraint, multiplied by the new variable $\lambda$.

Under certain conditions, the solution to the constrained problem will be a critical point of this function, characterized by the first-order conditions (or FOCs): setting the partial derivative of the Lagrangian with respect to each of the $n + 1$ variables equal to zero: \(\begin{aligned} {\partial \mathcal{L} \over \partial x_1} &= 0 &\Rightarrow {\partial f \over \partial x_1} - \lambda {\partial g \over \partial x_1} = 0 &\Rightarrow \lambda = {\partial f/\partial x_1 \over \partial g/\partial x_1}\\ \\ {\partial \mathcal{L} \over \partial x_2} &= 0 &\Rightarrow {\partial f \over \partial x_2} - \lambda {\partial g \over \partial x_2} = 0 &\Rightarrow \lambda = {\partial f/\partial x_2 \over \partial g/\partial x_2}\\ \\ \vdots\\ \\ {\partial \mathcal{L} \over \partial x_n} &= 0 &\Rightarrow {\partial f \over \partial x_n} - \lambda {\partial g \over \partial x_n} = 0 &\Rightarrow \lambda = {\partial f/\partial x_n \over \partial g/\partial x_n}\\ \\ {\partial \mathcal{L} \over \partial \lambda} &= 0 &\Rightarrow k - g(x_1,x_2,...,x_n) = 0 &\Rightarrow g(x_1,x_2,...,x_n) = k \end{aligned}\)
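This recipe is mechanical enough to sketch in code. Below is a minimal illustration using Python's sympy library (the helper name `lagrange_focs` and the toy problem of maximizing $f = xy$ subject to $x + y = 10$ are my own, not from the text):

```python
import sympy as sp

def lagrange_focs(f, g, k, xs):
    """Return the FOCs of L = f + lam*(k - g): one partial derivative
    per choice variable, plus one for the multiplier lam."""
    lam = sp.symbols('lam')
    L = f + lam * (k - g)
    return [sp.diff(L, v) for v in list(xs) + [lam]], lam

# Hypothetical illustration: maximize f = x*y subject to x + y = 10
x, y = sp.symbols('x y', positive=True)
focs, lam = lagrange_focs(x * y, x + y, 10, [x, y])
solution = sp.solve(focs, [x, y, lam], dict=True)[0]
# solution[x] == 5, solution[y] == 5, solution[lam] == 5
```

Note that the FOCs here are $y - \lambda = 0$, $x - \lambda = 0$, and $x + y = 10$, so the symmetric answer $x = y = 5$ follows immediately.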

Applying Lagrange to our example

For example, if we take the numerical example from the last section, the objective function was \(u(x_1,x_2) = 16 \ln x_1 + 9 \ln x_2\) and the constraint was \({x_1^2 \over 100} + {x_2^2 \over 36} = 100\) Therefore the Lagrangian for this problem would be \(\mathcal{L}(x_1,x_2,\lambda) = 16 \ln x_1 + 9 \ln x_2 + \lambda \left(100 - {x_1^2 \over 100} - {x_2^2 \over 36}\right)\) and the first-order conditions would be \(\begin{aligned} {\partial \mathcal{L} \over \partial x_1} &= 0 &\Rightarrow {16 \over x_1} - \lambda {x_1 \over 50} = 0 &\Rightarrow \lambda = {800 \over x_1^2}\\ \\ {\partial \mathcal{L} \over \partial x_2} &= 0 &\Rightarrow {9 \over x_2} - \lambda {x_2 \over 18} = 0 &\Rightarrow \lambda = {162 \over x_2^2}\\ \\ {\partial \mathcal{L} \over \partial \lambda} &= 0 &\Rightarrow 100 - {x_1^2 \over 100} - {x_2^2 \over 36} = 0 &\Rightarrow {x_1^2 \over 100} + {x_2^2 \over 36} = 100 \end{aligned}\) If we set the two values of $\lambda$ equal to one another, we get \({800 \over x_1^2} = {162 \over x_2^2} \Rightarrow x_2 = {9 \over 20}x_1\) which is our tangency condition; and the last condition just gives us our constraint back. In other words, the Lagrange method is really just a fancy (and more general) way of deriving the tangency condition.
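As a check on the algebra, this system of FOCs can be handed to a computer algebra system. A minimal sketch using Python's sympy library (the code itself is not part of the text):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', positive=True)

# Objective and constraint from the example
u = 16 * sp.log(x1) + 9 * sp.log(x2)
g = x1**2 / 100 + x2**2 / 36

# Lagrangian and its first-order conditions
L = u + lam * (100 - g)
focs = [sp.diff(L, v) for v in (x1, x2, lam)]

solution = sp.solve(focs, [x1, x2, lam], dict=True)[0]
# solution[x1] == 80, solution[x2] == 36, solution[lam] == 1/8
```

The solver returns the bundle $(80, 36)$, which indeed satisfies the tangency condition ($36 = \tfrac{9}{20} \times 80$) and the constraint ($\tfrac{6400}{100} + \tfrac{1296}{36} = 64 + 36 = 100$).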

The meaning of the Lagrange multiplier

In addition to being able to handle situations with more than two choice variables, though, the Lagrange method has another advantage: the $\lambda$ term has a real economic meaning.

To see why, let’s take a closer look at the Lagrangian in our example. Recall that we got the equation of the PPF by plugging the labor requirement functions $L_1(x_1)$ and $L_2(x_2)$ into the resource constraint $L_1 + L_2 = \overline L$. So, we could write the Lagrangian for a general utility maximization subject to a PPF as \(\mathcal{L}(x_1,x_2,\lambda) = u(x_1,x_2) + \lambda (\overline L - L_1(x_1) - L_2(x_2))\) In this case, the first FOC becomes \({\partial \mathcal{L} \over \partial x_1} = {\partial u \over \partial x_1} - \lambda {dL_1 \over dx_1} = 0 \Rightarrow \lambda = {\partial u \over \partial x_1} \times {dx_1 \over dL_1} = MU_1 \times MP_{L1}\) But this is exactly the expression which, when we derived the “gravitational pull” argument, we said represented the marginal utility from an additional hour of fishing! In short, $\lambda$ represents an “exchange rate” between the units of the objective function (utils) and the units of the constraint (hours of labor). Indeed, you need this “exchange rate” to make the units of the Lagrangian consistent.

Another way of thinking of $\lambda$ is that it’s the amount by which the objective function would change if the constraint were “relaxed” by one unit: that is, if Chuck were to increase his total labor $\overline L$ from 100 to 101, the Lagrangian function would increase by approximately $\lambda$. You can check that this is the amount by which his optimized utility would increase: that is, it’s the amount by which his happiness from consuming fish and coconuts would go up if he worked for one more hour.

This seems hard. Do we have to use it all the time?

Because the Lagrange method is used widely in economics, it’s important to get some good practice with it. The live class for this chapter will be spent entirely on the Lagrange multiplier method, and the homework will have several exercises for getting used to it.

For a two-variable problem, however, it’s generally sufficient to just write down the tangency condition and the constraint condition and solve for the optimal bundle, rather than pulling out all the machinery of the Lagrangian. In particular, on an exam, you do not need to write down the Lagrangian unless you are explicitly asked to; and if you’re simply asked what bundle the Lagrange method would find, it’s sufficient to use the tangency condition-budget constraint method.
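For the numerical example above, that two-equation shortcut looks like the following sympy sketch (the MRS and constraint-slope expressions are derived from the example's utility function and constraint; the code is my own illustration):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)

# Tangency: MRS = (16/x1)/(9/x2) equals the magnitude of the
# constraint's slope, (x1/50)/(x2/18) = 9*x1/(25*x2)
tangency = sp.Eq(16 * x2 / (9 * x1), 9 * x1 / (25 * x2))
constraint = sp.Eq(x1**2 / 100 + x2**2 / 36, 100)

solution = sp.solve([tangency, constraint], [x1, x2], dict=True)[0]
# solution[x1] == 80, solution[x2] == 36
```

Cross-multiplying the tangency equation gives $400 x_2^2 = 81 x_1^2$, i.e. $x_2 = \tfrac{9}{20}x_1$, so this finds the same bundle as the full Lagrangian, with one less variable to carry around.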

Copyright (c) Christopher Makler / econgraphs.org