## Convex Global Underestimator

The convex global underestimator technique can be used if the search space roughly resembles a 'mountain range': many local maxima whose values tend to increase in the direction of the global maximum. Its basic idea is to evaluate the function at randomly chosen points and to fit a concave function (respectively a convex one for minimization problems) that comes close to these points but stays above them (respectively below) and has a simple structure, so that its maximum (minimum) can easily be determined. This hypothetical global maximum is then used to shrink the region of the search space in which new points are iteratively computed.

For example, a concave function g can be defined by

$$g(x) = \sum_i \left( c_i x_i - d_i x_i^2 \right), \qquad d_i \ge 0,$$

where the $c_i$ and $d_i$ are constants that are yet to be found.
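Assuming the separable quadratic form $g(x) = \sum_i (c_i x_i - d_i x_i^2)$ with $d_i \ge 0$, the maximum of $g$ is available in closed form: setting $\partial g / \partial x_i = c_i - 2 d_i x_i = 0$ gives $x_i^* = c_i / (2 d_i)$. A minimal sketch (the coefficient values are illustrative, not from the text):

```python
def g(x, c, d):
    """Separable concave g(x) = sum_i (c_i*x_i - d_i*x_i**2); concave when d_i >= 0."""
    return sum(ci * xi - di * xi * xi for xi, ci, di in zip(x, c, d))

def argmax_g(c, d):
    """Closed-form maximizer: dg/dx_i = c_i - 2*d_i*x_i = 0  =>  x_i = c_i / (2*d_i)."""
    return [ci / (2 * di) for ci, di in zip(c, d)]

c = [4.0, 2.0]  # illustrative coefficients (assumed for this demo)
d = [1.0, 0.5]
x_star = argmax_g(c, d)
```

This simple structure is exactly why the quadratic form is chosen: no numerical search is needed to locate the maximum of $g$.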

The optimization problem for the constants (so that g stays above, but as close as possible to, the randomly computed solutions) is linear. Hence, the simplex algorithm described before can be used. Usually, the region around the maximum of g is shrunk just enough that the best solution computed so far is still included. Of course, the algorithm works better if a local search routine has first been applied to the previously computed set of solutions, for example a gradient-based technique or simulated annealing.
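The fitting step can be sketched as a tiny linear program in one dimension. Everything concrete below is an assumption for illustration: the objective `f`, the sampling grid, and the shortcut of enumerating pairs of tight constraints (valid here because there are only two unknowns, so an LP optimum sits at a vertex) in place of a full simplex solver.

```python
import math

def f(x):
    """Hypothetical multimodal objective (assumed for this demo)."""
    return math.sin(3 * x) - 0.1 * (x - 2) ** 2

def g(x, c, d):
    """Candidate concave majorant g(x) = c*x - d*x**2 (concave for d >= 0)."""
    return c * x - d * x * x

# Evaluate f at sample points in the current search region [0, 5].
xs = [0.2 * k for k in range(26)]
fs = [f(x) for x in xs]

# LP: choose (c, d) with g(x_j) >= f(x_j) for every sample while
# sum_j g(x_j) is minimal.  With two unknowns, the optimum lies at a
# vertex where two constraints are tight, so we enumerate pairs of
# tight constraints instead of calling a simplex routine.
best = None
for j in range(len(xs)):
    for k in range(j + 1, len(xs)):
        # Solve c*x_j - d*x_j**2 = f_j and c*x_k - d*x_k**2 = f_k.
        det = xs[j] * xs[k] * (xs[j] - xs[k])
        if abs(det) < 1e-12:
            continue
        c = (xs[j] ** 2 * fs[k] - xs[k] ** 2 * fs[j]) / det
        d = (xs[j] * fs[k] - xs[k] * fs[j]) / det
        if d < 0:
            continue  # g would not be concave
        if any(g(x, c, d) < fx - 1e-9 for x, fx in zip(xs, fs)):
            continue  # g must stay above every sampled value
        obj = sum(g(x, c, d) for x in xs)
        if best is None or obj < best[0]:
            best = (obj, c, d)

obj, c, d = best
x_star = c / (2 * d)  # closed-form maximizer of the fitted g
print(f"g(x) = {c:.3f}*x - {d:.3f}*x^2, hypothetical maximum at x* = {x_star:.3f}")
```

In the full method, `x_star` would then anchor the reduced search region (kept just large enough to contain the best sample found so far), and the sample-fit-shrink cycle would repeat.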
