The Wang-Landau algorithm is a method which converges upon the density of
states, $\Omega(E)$, to within a given tolerance. First, we must understand
what energy levels are accessible to our system; let us call this set
$\{E_i\},\ i = 1,\ldots,M$, where $M$ is the number of
levels. Let us imagine conducting a random walk in energy space such
that the observed histogram of visited levels, $h(E)$, is uniform, or
``flat.'' This random walk must be guided by some underlying
probability distribution, $\mathcal{P}(E)$, which tells us whether to
accept or reject a proposed step,
$E_1 \rightarrow E_2$, in the walk.
Generally, $\mathcal{P}$ is incorporated in a Metropolis acceptance
rule:

$$ P_{\rm acc}(1 \rightarrow 2) = \min\left[1,\ \frac{\mathcal{P}(E_2)}{\mathcal{P}(E_1)}\right] \qquad (344) $$
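In code, this acceptance test is a single comparison against a uniform random number. Below is a minimal Python sketch; the function name metropolis_accept is my own convention, and the routine assumes the values of $\mathcal{P}$ at the old and new levels are supplied directly:

    import random

    def metropolis_accept(p_old, p_new):
        # Accept a proposed step E1 -> E2 with probability
        # min(1, P(E2)/P(E1)), per Eq. 344.
        return random.random() < min(1.0, p_new / p_old)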
Now consider that each energy level $E$ has $\Omega(E)$ allowable
microstates, each with the same probability, by
the fundamental hypothesis of statistical mechanics. The probability of observing the system
in energy level $E$ is the composite probability of observing it
in one of the $\Omega(E)$ microstates with energy $E$. If we want the probability
of observing the system in energy level $E$ to be uniform, then the
probability of observing any particular microstate should be
proportional to $1/\Omega(E)$. So, the probability distribution
we should use is
$$ \mathcal{P}(E) \propto \frac{1}{\Omega(E)} \qquad (345) $$
Now, here is why we want to construct a flat histogram. If we can, then we know that we have the ``correct'' $\Omega(E)$. But we don't know $\Omega(E)$ to begin with. So we need a way to begin with a guess for $\Omega(E)$ and then systematically refine it until we achieve a flat histogram in energy space. An efficient technique for successively refining a guess for $\Omega(E)$ over a sequence of Monte Carlo simulations was the major contribution of Wang and Landau [22,23].
In this algorithm, we start with the initial guess $\Omega(E) = 1$ for all $E$,
and the histogram is initialized as $h(E) = 0$. Then, we begin making
trial moves, accepting or rejecting them based on the acceptance rule
obtained by substituting Eq. 345 into Eq. 344:

$$ P_{\rm acc}(1 \rightarrow 2) = \min\left[1,\ \frac{\Omega(E_1)}{\Omega(E_2)}\right] \qquad (346) $$

After a move, we find ourselves in energy level
$E$ (whether the move was accepted or not), and we update both the histogram and the density of
states:
$$ h(E) \rightarrow h(E) + 1 \qquad (347) $$
$$ \Omega(E) \rightarrow f\,\Omega(E) \qquad (348) $$

Here $f > 1$ is a modification factor, customarily initialized as
$f = e \approx 2.71828$. Whenever the running histogram $h(E)$ becomes
sufficiently flat, we reset $h(E) = 0$, reduce $f$ according to

$$ f \rightarrow \sqrt{f} \qquad (349) $$

and continue the walk, stopping when $f$ is very close to unity.
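A minimal Python sketch of this inner loop follows; it assumes the energies have been discretized so that a level index doubles as an array index, and it assumes a user-supplied propose_move routine (both conventions are mine, not part of the algorithm itself). It works in direct $\Omega$-space for clarity; the log-space form used in practice appears below.

    import random

    def wl_step(E, omega, hist, f, propose_move):
        # One Wang-Landau move in direct Omega-space.
        E_new = propose_move(E)
        # Eq. 346: accept with probability min(1, Omega(E)/Omega(E_new)).
        if random.random() < min(1.0, omega[E] / omega[E_new]):
            E = E_new
        hist[E] += 1       # Eq. 347: histogram update at the current level
        omega[E] *= f      # Eq. 348: density-of-states update
        return E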
By ``sufficiently flat,'' Wang and Landau suggest a histogram whose minimum bin count is within 95% of the mean number of hits per bin. This is a particularly strong criterion, and more recent work suggests that it is adequate merely to require that each bin accumulate a minimum number of hits, regardless of the mean value [26].
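Both criteria are simple to express in code. In the sketch below, the 95% tolerance and the visit threshold n_min are adjustable parameters of my choosing, not prescribed values:

    def is_flat(hist, tol=0.95):
        # Wang-Landau criterion: minimum bin count within tol of the mean.
        mean = sum(hist) / len(hist)
        return min(hist) >= tol * mean

    def enough_visits(hist, n_min=1000):
        # Alternative criterion [26]: every bin visited at least n_min times.
        return min(hist) >= n_min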
The factor update relation (Eq. 349) is also somewhat
arbitrary. What is required is that $f$ approach unity in a scheduled
manner. A more general rule might be
$$ f_{i+1} = f_i^{1/n}, \qquad n > 1 \qquad (350) $$
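In code the schedule is a one-liner, and the outer loop of the method amounts to applying it each time the histogram passes the flatness test. A schematic sketch, assuming the wl_step and is_flat routines above and a stopping tolerance of my own choosing:

    import math

    def next_f(f, n=2.0):
        # Eq. 350: f_{i+1} = f_i**(1/n); n = 2 recovers the original
        # square-root rule of Eq. 349.
        return f ** (1.0 / n)

    # Outer loop (schematic):
    # f = math.e                    # customary initial value
    # while f - 1.0 > 1e-8:         # until f is essentially unity
    #     run wl_step repeatedly until is_flat(hist)
    #     hist = [0] * len(hist)    # reset the histogram
    #     f = next_f(f)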
It is important to note that, because the acceptance rule is changing during the simulation, detailed balance is not strictly obeyed. However, it is very nearly obeyed when $f \approx 1$. How does one choose a proper initial value and update schedule for $f$? There is unfortunately no general guidance on this in the current literature, other than the original observation of Wang and Landau that too large a value of $f$ introduces large statistical fluctuations in $\Omega(E)$ which can become ``frozen in,'' while too small a value of $f$ requires far too many visits to produce a flat histogram.
Also, the magnitude of $\Omega(E)$ is normally so large that we cannot tabulate
it directly; it is advantageous instead to tabulate its logarithm. So the
array used to store $\Omega(E)$ actually stores $\ln\Omega(E)$, and the
update of a bin becomes
$$ \ln\Omega(E) \rightarrow \ln\Omega(E) + \ln f \qquad (351) $$

while the acceptance rule (Eq. 346) is evaluated as

$$ P_{\rm acc}(1 \rightarrow 2) = \min\left\{1,\ \exp\left[\ln\Omega(E_1) - \ln\Omega(E_2)\right]\right\} \qquad (352) $$
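The log-space bookkeeping then looks like the following Python sketch (same conventions as wl_step above); the guard on the exponential avoids overflow when $\ln\Omega(E_1) - \ln\Omega(E_2)$ is large and positive:

    import math
    import random

    def wl_step_log(E, ln_omega, hist, ln_f, propose_move):
        E_new = propose_move(E)
        d = ln_omega[E] - ln_omega[E_new]
        # Eq. 352: accept with probability min(1, exp[d]).
        if d >= 0.0 or random.random() < math.exp(d):
            E = E_new
        hist[E] += 1           # Eq. 347, unchanged
        ln_omega[E] += ln_f    # Eq. 351: ln Omega <- ln Omega + ln f
        return E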
Finally, because the acceptance rule involves a ratio of probabilities, $\Omega(E)$ is not determined absolutely, but only to within a constant multiplicative factor. Unless we know the exact $\Omega$ for some reference state (as we do for certain lattice models, such as the Ising magnet and the Potts glass), we cannot obtain absolute thermal quantities (entropies and free energies) from the $\Omega(E)$ obtained using this method. This is usually not a problem, either because we can ``pin down'' $\Omega(E)$ at a known reference, or because we don't care about absolute values, only differences.
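When a reference value is available, the pinning is a simple shift in log-space. For the Ising ferromagnet, for example, the ground level has exactly two microstates (all spins up or all spins down), so $\ln\Omega = \ln 2$ there. A sketch, with pin_ln_omega a name of my own:

    import math

    def pin_ln_omega(ln_omega, ref_bin, ln_omega_ref):
        # Shift the whole ln Omega array so the reference bin takes its
        # exactly known value; differences between bins are unchanged.
        shift = ln_omega_ref - ln_omega[ref_bin]
        return [x + shift for x in ln_omega]

    # e.g., pin the ground-level bin of an Ising magnet to ln 2:
    # ln_omega = pin_ln_omega(ln_omega, ref_bin=0, ln_omega_ref=math.log(2.0))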