Random optimization

From Wikipedia, the free encyclopedia

Random optimization (RO) is a family of numerical optimization methods that do not require the gradient of the objective function, and which can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.

The name random optimization is attributed to Matyas,[1] who gave an early presentation of RO along with a basic mathematical analysis. RO works by iteratively moving to better positions in the search-space, which are sampled using, for example, a normal distribution centred on the current position.

Algorithm


Let f: Rⁿ → R be the fitness or cost function which must be minimized. Let x ∈ Rⁿ designate a position or candidate solution in the search-space. The basic RO algorithm can then be described as:

  • Initialize x with a random position in the search-space.
  • Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
    • Sample a new position y by adding a normally distributed random vector to the current position x
    • If (f(y) < f(x)) then move to the new position by setting x = y
  • Now x holds the best-found position.

This algorithm corresponds to a (1+1) evolution strategy with constant step-size.
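The steps above can be sketched in Python as follows. This is a minimal illustration, not a reference implementation: the function name, the constant `step_size` parameter (the standard deviation of the Gaussian perturbation), and the iteration budget are all choices made here for the example.

```python
import numpy as np

def random_optimization(f, x0, step_size=0.1, max_iters=10_000, rng=None):
    """Basic (1+1) random optimization with a constant step size.

    f         : cost function to minimize
    x0        : initial position (array-like)
    step_size : std. deviation of the Gaussian perturbation (illustrative choice)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iters):
        # Sample a new position by adding a normally distributed
        # random vector to the current position x.
        y = x + rng.normal(scale=step_size, size=x.shape)
        fy = f(y)
        if fy < fx:
            # Move only if the candidate improves the cost.
            x, fx = y, fy
    return x, fx

# Example: minimize the sphere function f(x) = sum(x_i^2),
# a simple unimodal test function, starting away from its optimum at 0.
best_x, best_f = random_optimization(lambda v: float(np.sum(v**2)),
                                     x0=[2.0, -3.0], step_size=0.2)
```

Because the step size is constant, this corresponds exactly to the (1+1) evolution strategy mentioned above; adaptive variants would adjust `step_size` during the run.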

Convergence and variants


Matyas showed that the basic form of RO converges to the optimum of a simple unimodal function, using a limit proof which shows that convergence to the optimum is certain if a potentially infinite number of iterations are performed. However, this proof is of little practical use, since only a finite number of iterations can ever be executed. Indeed, the same kind of limit proof also shows that purely random sampling of the search-space will inevitably yield a sample arbitrarily close to the optimum.

Mathematical analyses were also conducted by Baba [2] and by Solis and Wets [3] to establish that convergence to a region surrounding the optimum is inevitable under some mild conditions for RO variants using other probability distributions for the sampling. An estimate of the number of iterations required to approach the optimum was derived by Dorea.[4] These analyses were criticized through empirical experiments by Sarma,[5] who applied the optimizer variants of Baba and Dorea to two real-world problems, showing that the optimum was approached very slowly and, moreover, that the methods were unable to locate a solution of adequate fitness unless the process was started sufficiently close to the optimum to begin with.

References

  1. ^ Matyas, J. (1965). "Random optimization". Automation and Remote Control. 26 (2): 246–253.
  2. ^ Baba, N. (1981). "Convergence of a random optimization method for constrained optimization problems". Journal of Optimization Theory and Applications. 33 (4): 451–461. doi:10.1007/bf00935752.
  3. ^ Solis, Francisco J.; Wets, Roger J.-B. (1981). "Minimization by random search techniques". Mathematics of Operations Research. 6 (1): 19–30. doi:10.1287/moor.6.1.19.
  4. ^ Dorea, C.C.Y. (1983). "Expected number of steps of a random optimization method". Journal of Optimization Theory and Applications. 39 (3): 165–171. doi:10.1007/bf00934526.
  5. ^ Sarma, M.S. (1990). "On the convergence of the Baba and Dorea random optimization methods". Journal of Optimization Theory and Applications. 66 (2): 337–343. doi:10.1007/bf00939542.