Simulated annealing is an extension of hill climbing, which uses randomness to avoid getting stuck in local maxima and plateaux.
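For reference, the textbook-style procedure being described can be sketched roughly as follows. The helper names `neighbor`, `value`, and `schedule` are assumptions for illustration, not anything from the course materials:

```python
import math
import random

def simulated_annealing(initial_state, neighbor, value, schedule):
    """Minimal simulated-annealing sketch.

    neighbor(s) -> a random successor of state s (assumed helper)
    value(s)    -> measure of goodness of s (higher is better)
    schedule(t) -> temperature at time step t; a value <= 0 ends the search
    """
    current = initial_state
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current  # schedule exhausted: return the current state
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        # Always accept improvements; accept a downhill move with
        # probability exp(delta / T), which shrinks as T cools.
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
        t += 1
```

Note that, as part (a) hints, this sketch returns whatever state it happens to hold when the schedule runs out, regardless of the values it observed along the way.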
(a) As defined in your textbook, simulated annealing returns the current state when the end of the annealing schedule is reached, provided the schedule cools slowly enough. Given that we know the value (measure of goodness) of each state we visit, is there anything smarter we could do?
(b) Simulated annealing requires only a very small amount of memory: enough to store two states, the current state and the proposed next state. Suppose we had enough memory to hold two million states. Propose a modification to simulated annealing that makes productive use of the additional memory. In particular, suggest something that will likely perform better than simply running simulated annealing a million times consecutively with random restarts. [Note: there are multiple correct answers here.]
(c) Gradient ascent search is prone to local optima, just like hill climbing. Describe how you might adapt the randomness used in simulated annealing to help gradient ascent search avoid the trap of local maxima.