The key principle of optimization is this:
A quantity u can only take a maximum (or minimum) value when its differential du is zero or undefined. If you write u as f(x, y), where f is a fixed differentiable function of (say) 2 variables, and x and y are quantities whose range of possible values you already understand (typically intervals), then du = D1f(x, y) dx + D2f(x, y) dy, or equivalently, du = ∂u/∂x dx + ∂u/∂y dy.
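As a concrete illustration, here is a minimal sketch (assuming SymPy is available; the sample function is a hypothetical choice, not one from these notes) of computing such a total differential symbolically:

    from sympy import symbols, sin, diff

    # Treat dx and dy as formal symbols so that du can be written out explicitly.
    x, y, dx, dy = symbols('x y dx dy')
    u = x**2 * y + sin(y)   # an arbitrary sample function f(x, y)

    # du = (∂u/∂x) dx + (∂u/∂y) dy
    du = diff(u, x) * dx + diff(u, y) * dy
    print(du)   # 2*x*y*dx + (x**2 + cos(y))*dy (printed ordering may differ)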
So one way that u might conceivably take an extreme value is if either (or both) of its partial derivatives are undefined. Another way is if both (not just one) of its partial derivatives are zero. If you can vary x and y smoothly however you please (essentially, if you are in the interior of the domain of f and you are free to access the entire domain), then these are the only possibilities. However, if you cannot vary them smoothly (essentially, if you are on the boundary of the domain of f or if the situation is otherwise constrained so that you cannot access the entire domain of f), then there are more possibilities!
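Here is a minimal sketch (again assuming SymPy, with a hypothetical sample function) of finding the interior critical points by setting both partial derivatives to zero simultaneously:

    from sympy import symbols, solve, diff

    x, y = symbols('x y', real=True)
    f = x**3 - 3*x + y**2   # an arbitrary sample function

    # Both partial derivatives must vanish at an interior critical point.
    critical_points = solve([diff(f, x), diff(f, y)], [x, y], dict=True)
    print(critical_points)   # [{x: -1, y: 0}, {x: 1, y: 0}]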
If your constraint (essentially, the boundary of f) can be written as an equation g(x, y) = 0 (or really, with any constant on the right-hand side), then as long as the gradient ∇g is never zero on this boundary, you can use the method of Lagrange multipliers. Here, you set up an equation ∇f(x, y) = λ ∇g(x, y), combine this with the equation g(x, y) = 0, and try to solve for x, y, and λ. (Since a vector equation is equivalent to 2 scalar equations, this amounts to a system of 3 equations in 3 variables, so there is hope of solving it.) If you're working in 3 variables, then you might need two equations to specify the constraint, in which case there are two functions in the place of g and two Lagrange multipliers, making 5 equations in 5 variables. (But you can also have just one g even in 3 dimensions, making 4 equations in 4 variables; it's a question of whether the boundary in question is a surface or a curve.) While λ ultimately doesn't matter, the solutions that you get for the original variables give you additional critical points to check for extreme values.
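Here is a minimal sketch (assuming SymPy) of that Lagrange-multiplier setup; the objective u = x*y and the constraint g = x**2 + y**2 − 1 are illustrative choices, not from these notes:

    from sympy import symbols, solve, diff

    x, y, lam = symbols('x y lambda', real=True)
    f = x * y
    g = x**2 + y**2 - 1   # constraint g(x, y) = 0, here the unit circle

    equations = [
        diff(f, x) - lam * diff(g, x),   # ∂f/∂x = λ ∂g/∂x
        diff(f, y) - lam * diff(g, y),   # ∂f/∂y = λ ∂g/∂y
        g,                               # the constraint itself
    ]
    solutions = solve(equations, [x, y, lam], dict=True)
    print(solutions)   # four points (±1/√2, ±1/√2), with λ = ±1/2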
On the other hand, you don't strictly need Lagrange multipliers. Writing v for g(x, y), if the constraint is v = 0 (or v equals any constant), then differentiate this to get dv = 0. (In fact, you could take any equation and just differentiate both sides.) Then if you try to solve the system of equations consisting of du = 0 and dv = 0 for the differentials dx and dy, you should immediately see that dx = 0 and dy = 0 is a solution. However, if you actually go through the steps of solving this as a system of linear equations (which you can always do, because differentials are always linear in the differentials of the independent variables), you'll find that at some point you need to divide by some quantity involving x and y, which is invalid if that quantity is zero! So, setting whatever you divide by to zero and combining that with the constraint equation v = 0, you get two equations to solve for the two variables x and y. This will give you the other critical points to check for extreme values.
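One way to make this concrete is the following sketch (assuming SymPy, and reusing the same illustrative u and constraint as in the previous sketch); packaging "the quantity you end up dividing by" as the determinant of the coefficient matrix of the linear system is my way of organizing the calculation, not a step spelled out in the notes:

    from sympy import symbols, solve, diff, Matrix

    x, y = symbols('x y', real=True)
    u = x * y             # same illustrative objective as before
    v = x**2 + y**2 - 1   # same illustrative constraint, v = 0

    # Coefficient matrix of the linear system du = 0, dv = 0 in the unknowns dx, dy.
    coeffs = Matrix([[diff(u, x), diff(u, y)],
                     [diff(v, x), diff(v, y)]])
    det = coeffs.det()    # here 2*y**2 - 2*x**2

    # A nonzero solution for (dx, dy) is possible exactly where this determinant
    # vanishes; combined with v = 0, this recovers the same four boundary points
    # as the Lagrange-multiplier approach.
    print(solve([det, v], [x, y], dict=True))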
Be careful, because u might not have a maximum or minimum value! Assuming that u varies continuously (which it must if Calculus is to be useful at all), then it must have a maximum and minimum value whenever the domain of the function (including any constraints) is both closed and bounded (which is called compact); this means that if you pass continuously through the possibilities in any way, then you are always approaching some limiting possibility. However, if the range of possibilities heads off to infinity in some way, then you also have to take a limit to see what value u is approaching, which can be very difficult to do in more than one dimension. Or if there is a boundary that's not included in the domain, then you have to take a limit approaching that boundary, although in that case you can hope that you can check the boundary as if it were included, the same way as above. If any such limit is larger than every value that u actually reaches (which includes the possibility that a limit is ∞), then u has no maximum value; if any such limit is smaller than every value that u actually reaches (which includes the possibility that a limit is −∞), then u has no minimum value.
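For instance, here is a minimal sketch (assuming SymPy; the function is an illustrative choice, not from these notes) of checking limiting behaviour on an unbounded domain:

    from sympy import symbols, limit, oo

    x, y, t = symbols('x y t', real=True)
    u = 1 / (1 + x**2 + y**2)   # defined on the whole plane, which is unbounded

    # Value at the only interior critical point (both partials vanish at the origin).
    print(u.subs({x: 0, y: 0}))                 # 1, the maximum value

    # Limit along one ray heading off to infinity; here every direction gives 0,
    # a value that u approaches but never reaches, so u has no minimum value.
    print(limit(u.subs({x: t, y: t}), t, oo))   # 0

(Checking a single path like this only produces a candidate limiting value; as the paragraph warns, different paths can behave differently in more than one dimension.)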
So in the end, you look at these possibilities:
*  points where both partial derivatives of u are zero;
*  points where either partial derivative of u is undefined;
*  points on the boundary (or other constraint), found with Lagrange multipliers or with the differential method above;
*  limits as the variables head off to infinity or approach any part of the boundary that is not included in the domain.
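Finally, here is a minimal sketch (assuming SymPy) that runs through this checklist for one illustrative problem, the extreme values of u = x*y on the closed unit disk x**2 + y**2 ≤ 1; neither the function nor the region comes from these notes:

    from sympy import symbols, solve, diff

    x, y, lam = symbols('x y lambda', real=True)
    u = x * y
    g = x**2 + y**2 - 1   # the boundary of the disk, where g = 0

    # 1. Interior critical points: both partial derivatives zero
    #    (neither partial derivative is ever undefined for this u).
    interior = solve([diff(u, x), diff(u, y)], [x, y], dict=True)

    # 2. Boundary critical points, here via Lagrange multipliers.
    boundary = solve([diff(u, x) - lam*diff(g, x),
                      diff(u, y) - lam*diff(g, y),
                      g], [x, y, lam], dict=True)

    # 3. The disk is closed and bounded, so the extreme values are simply the
    #    largest and smallest of the candidate values; no limits are needed.
    candidates = [u.subs(s) for s in interior + boundary]
    print(min(candidates), max(candidates))   # -1/2 and 1/2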