Monte Carlo Methods applied to the Ising model
© 2010
Bachelor's thesis
99 pages

Abstract

The thermodynamic observables of the classical one- and two-dimensional ferromagnetic and antiferromagnetic Ising models on a square lattice are simulated with the classical Metropolis Monte Carlo algorithm, with particular attention to the phase transitions (where present). Finite-size effects and the influence of an external magnetic field are described. The critical temperature of the 2d ferromagnetic Ising model is obtained using finite-size scaling.
Adler, Michael: Monte Carlo Methods applied to the Ising model, Hamburg, Diplomica Verlag GmbH 2014
PDF eBook ISBN: 9783956366581
Production: Diplomica Verlag GmbH, Hamburg, 2014
Also: Johann Wolfgang Goethe-Universität Frankfurt am Main, Frankfurt am Main, Germany, Bachelor's thesis, 2010
All rights reserved
© Diplom.de, imprint of Diplomica Verlag GmbH
Hermannstal 119k, 22119 Hamburg
http://www.diplom.de, Hamburg 2014
Printed in Germany
For my parents
Contents

1. Statistical mechanics: a short review
   1.1. Description of the state of a system
      1.1.1. Phase space
      1.1.2. Statistical ensembles
   1.2. Transition between different states
   1.3. Thermodynamic observables
   1.4. Phase transitions
   1.5. n-vector models
      1.5.1. Ising model
2. Monte Carlo simulation
   2.1. Fundamentals
      2.1.1. Crude Monte Carlo
      2.1.2. Markov chain Monte Carlo
      2.1.3. Markov chains
   2.2. Monte Carlo simulations in statistical mechanics
      2.2.1. Ergodicity and detailed balance
      2.2.2. Acceptance ratios
      2.2.3. Metropolis algorithm
      2.2.4. Initialization bias
      2.2.5. Autocorrelation in equilibrium
   2.3. Analyzing the data: T_C and more
      2.3.1. Finite size scaling and the Binder ratio
3. Selected results
   3.1. One-dimensional Ising model
   3.2. Two-dimensional Ising models
      3.2.1. The ferromagnetic Ising model
      3.2.2. Antiferromagnetic Ising model
   3.3. The phase transition
   3.4. The critical temperature and critical exponents
      3.4.1. Binder cumulant
      3.4.2. Critical exponents
A. Source code of MC integrators
   A.1. Crude Monte Carlo
   A.2. Markov chain MC integrator
B. Implementation of the Ising models
   B.1. Skeleton of the code
   B.2. Source codes of the Ising models
      B.2.1. One-dimensional Ising model
   B.3. Two-dimensional Ising model
Bibliography
List of Figures
Chapter 1. Statistical mechanics: a short review
Before presenting the Ising model we will review the basic concepts of statistical mechanics. For more information on this important topic the reader is referred to (9, 22).

In thermodynamics we are interested in the properties of systems consisting of a large number of particles. However, the sheer number of particles makes it impossible to write down and solve about 10²³ coupled equations of motion. We would also need the same number of initial conditions, and it is obviously impossible to obtain all the required initial conditions experimentally. But we are not interested in the microscopic properties (the equations of motion of the particles of our system) but rather in macroscopic observables like the energy, temperature, heat capacity etc. Hence it is sufficient to have information only about the statistical properties of our system. Henceforth we will use the terms microscopic and macroscopic systems. A macroscopic system consisting of N particles¹ fulfills

1/N ≪ 1.

Any system that does not fulfill this condition is microscopic. Applying statistical methods to such a system would entail unreasonable errors.

¹ N is typically 6 · 10²³ or more.
1.1. Description of the state of a system
1.1.1. Phase space
As stated above we are interested in the statistical properties of our system. Thus it is sufficient to have knowledge about the average motion of the particles in our system. Instead of considering the motion of only one system we can consider an ensemble consisting of copies of the system we are interested in. The copies differ only in the initial conditions of their motion.

As an alternative we could follow the system for a sufficiently long time and determine the time average of the quantity of interest. We will, however, follow the way proposed by the American physicist Gibbs. Instead of the time average he took the average over many similar systems, all having the same macroscopic properties like internal energy, pressure, etc. This means that all elements of the ensemble are in states accessible to the system. The fact that we can decide which average (time or ensemble average) to consider is far from trivial; we'll give the details later.
Classically the state of a system is fully described by the generalized coordinates and momenta q_i and p_i of every particle i, with p = (p_x, p_y, p_z) and analogously for q. The description is only complete if we know the equations of motion of the particles. In the six-dimensional phase space Γ = span{p, q}, i.e. the space spanned by the vectors of momentum and position, every state of our macrosystem is represented by a point in phase space:

x = (x_1, x_2, ..., x_{2s}) = (q, p)

Due to errors in the measurement of p and q the system is specified by a cell of area h_0 in phase space. Thus:

Δp · Δq ≥ h_0

Quantum mechanics sets the lower boundary for h_0: h_0 = ħ/2. In quantum mechanics the state of the system is specified by a wavefunction at time t:

ψ(q, s, t)

where s denotes internal degrees of freedom.
s can be, for instance, the spin of the particle. Now we introduce the probability density ρ, i.e. the probability per volume in phase space of finding the system at time t_0 in state x:

p_x = ρ_x(t_0) · (Δp)^s (Δq)^s    (1.1)

In the classical regime we have (9):

ρ_x := (1/I) Σ_i δ(x − x^{(i)}) δ(p − p^{(i)})    (1.2)
1.1.2. Statistical ensembles
In general we specify certain macroscopic properties like the total internal energy E, the temperature T, the pressure p, etc. of our system. Of course the values of the observables are subject to fluctuations due to the finite precision of all possible measurements. In general, however, the observables are time-dependent. If our system is in equilibrium its observables are no longer time-dependent. The time until the system has reached equilibrium is called the equilibration time τ_eq. The fundamental postulate of statistical mechanics, the principle of equal a priori probabilities, states that:

The system² is equally likely to be found in any of its accessible states.

This holds only for macroscopic systems in equilibrium. This postulate cannot be proven rigorously.

There are different possible constraints on the system. The simplest is to specify that the internal energy E_n has to be within the interval (E − ΔE, E]. If we require an isolated system, i.e. N = const. and V = const., we get the microcanonical ensemble. Ω denotes the number of accessible microstates. The chance of finding a system chosen at random from the ensemble in a given microstate is 1/Ω. Thus the probability density ρ(E_n), which depends on the internal energy E_n of the macrosystem, is:

ρ(E_n) = 1/Ω  if E − ΔE ≤ E_n ≤ E,
         0    else.    (1.3)

² We refer here to an isolated system.
The number of possible states at constant E, N and V is called the microcanonical partition function Z_m:

Z_m = Σ_{E_n(N,V) < E} 1    (1.4)

E_n is the energy of the n-th state at given E, N and V. In a later section we'll see that knowledge of an analytical partition function is tantamount to knowledge of the thermodynamic observables.
In experiment, however, the situation of an isolated system is rather the exception. If we allow exchange of thermal energy with a heat reservoir we get a more practical ensemble, the canonical ensemble. The number of particles N′ in the heat bath has to be much greater than the number of particles N in the system of interest. It should be noted that the volume and temperature of our system of interest, as well as of the heat reservoir (obviously), are constant. Since we are considering a system that exchanges heat with a reservoir we'll have a more complicated partition function. The probability density is:

ρ_n = (1/Z) e^{−βE_n}    (1.5)

with the partition function

Z_c = Σ_n e^{−βE_n}    (1.6)

We call exp(−βE_n) the Boltzmann factor and β = 1/kT the inverse temperature. In terms of the Hamilton operator we may write ρ_n = (1/Z) exp(−βH) and Z_c = tr(exp(−βH)).
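Eqs. (1.5) and (1.6) are easy to evaluate for a system whose states can be enumerated. A minimal sketch (not from the thesis; the three energy levels are made up for illustration):

```python
import math

def canonical_weights(energies, beta):
    """Return the partition function Z_c = sum_n exp(-beta*E_n) (eq. 1.6)
    and the probabilities rho_n = exp(-beta*E_n)/Z_c (eq. 1.5)."""
    boltzmann = [math.exp(-beta * e) for e in energies]  # Boltzmann factors
    z = sum(boltzmann)
    return z, [w / z for w in boltzmann]

# toy system: three states with energies 0, 1, 2 (in units where k = 1)
z, rho = canonical_weights([0.0, 1.0, 2.0], beta=1.0)
print(z)         # Z_c = 1 + e^-1 + e^-2
print(sum(rho))  # the probabilities are normalized
```

Lower-energy states always receive the larger weight, and raising β (lowering T) concentrates the probability further on the ground state.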
Going one step further and loosening the constraints on the system of interest, we also allow the exchange of particles with the reservoir. Only the total energy and the total number of particles of both systems together are conserved. We find:

ρ_n(E_n, N_n) = (1/Z) e^{−β(E_n − μN_n)}    (1.7)
We expect a more general partition function containing the canonical partition function:

Z = Σ_n e^{−β(E_n − μN_n)}    (1.8)

The change in energy due to adding a particle to the system while entropy and volume are held fixed is given by the chemical potential μ. Unless indicated differently we'll use the canonical ensemble. It has to be stressed, however, that the ensembles are equivalent in the thermodynamic limit, i.e. for N → ∞.
1.2. Transition between different states
Now we'll introduce the important concept of the master equation. Suppose our system is in a state i at time t. We denote the probability of a transition to another state j a time Δt later by T_{ij}. We'll assume that T_{ij} is time-independent. The time-dependent probability of finding the system in a state i is w_i(t). Since every system is in some state:

Σ_i w_i(t) = 1    (1.9)

Now we can write down the master equation:

dw_i/dt = Σ_j (T_{ji} w_j − T_{ij} w_i)    (1.10)

The master equation describes the time-dependent change of the probability of finding our system in a state i. The first term on the right-hand side describes the rate of transitions from other states into state i. The transitions from state i to other states are represented by the second term on the right-hand side. In equilibrium the left-hand side of the master equation vanishes and we get the equilibrium occupation probability p_i:

p_i = lim_{t→∞} w_i(t)    (1.11)
We can now state the equation for the average of an observable O:

⟨O⟩ = Σ_i O_i p_i    (1.12)

O_i is the value of O if the system is in state i; the probability for this is p_i. A detailed exposition of the important topic of master equations is given in (9).
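The relaxation described by eqs. (1.9)-(1.12) can be illustrated numerically. The following sketch (the transition rates are invented for illustration and are not taken from the thesis) integrates the master equation with simple Euler steps and recovers the equilibrium occupation probabilities:

```python
def master_step(w, T, dt):
    """One Euler step of the master equation
    dw_i/dt = sum_j (T[j][i]*w[j] - T[i][j]*w[i])  (eq. 1.10)."""
    n = len(w)
    dw = [sum(T[j][i] * w[j] - T[i][j] * w[i] for j in range(n)) for i in range(n)]
    return [w[i] + dt * dw[i] for i in range(n)]

# hypothetical 2-state system: T[i][j] is the rate for the transition i -> j
T = [[0.0, 0.3],
     [0.1, 0.0]]
w = [1.0, 0.0]                 # start entirely in state 0
for _ in range(5000):
    w = master_step(w, T, dt=0.01)

print(round(sum(w), 10))       # total probability stays 1 (eq. 1.9)
# stationarity requires 0.1*w_1 = 0.3*w_0, i.e. p_0 = 0.25, p_1 = 0.75
print(round(w[0], 3), round(w[1], 3))

O = [2.0, -1.0]                # observable values O_i
avg = sum(Oi * wi for Oi, wi in zip(O, w))   # <O> = sum_i O_i p_i (eq. 1.12)
```

The equilibrium distribution is a fixed point of the step function, so the long-time limit of eq. (1.11) is reached regardless of the initial occupations.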
1.3. Thermodynamic observables
Once we have obtained the partition function it is straightforward to calculate thermodynamic observables. They describe the macroscopic properties of our system and we are primarily interested in them. In most cases we won't be lucky enough to know the partition function. Ways to calculate thermodynamic observables in this case will be explained in later chapters. As far as macrosystems are concerned, all three partition functions mentioned may be used.

The expectation value of an observable O is defined as:

⟨O⟩ = (1/Z) Σ_i exp(−βE_i) O_i    (1.13)
Internal energy: This is the most important observable; it is denoted by U. Since we are interested in ⟨E⟩, in later sections dealing only with our results, and therefore only with averages, we'll drop the brackets and use E to mean the average energy.

U = ⟨E⟩ = (1/Z) Σ_n E_n exp(−βE_n) = −(1/Z) ∂Z/∂β = −∂ ln Z/∂β    (1.14)
Now we may introduce the heat capacity C:

C = ∂U/∂T = −kβ² ∂U/∂β = kβ² ∂² ln Z/∂β²    (1.15)

C is a measure for the amount of heat ΔQ required to change a body's temperature by a given ΔT. Since we can write

⟨E²⟩ = (1/Z) Σ_n E_n² exp(−βE_n) = (1/Z) ∂²Z/∂β²    (1.16)

and

⟨E²⟩ − ⟨E⟩² = ∂² ln Z/∂β²    (1.17)

it follows that

C = kβ² (⟨E²⟩ − ⟨E⟩²).    (1.18)
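Eq. (1.18) is the relation a Monte Carlo simulation actually uses to measure C from energy fluctuations. As a sketch (a hypothetical two-level system, not thesis code), one can check it against the thermodynamic definition C = ∂U/∂T:

```python
import math

def canonical_C(energies, beta, k=1.0):
    """Heat capacity via the fluctuation relation C = k*beta^2*(<E^2> - <E>^2) (eq. 1.18)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    e_avg = sum(e * w for e, w in zip(energies, weights)) / z
    e2_avg = sum(e * e * w for e, w in zip(energies, weights)) / z
    return k * beta ** 2 * (e2_avg - e_avg ** 2)

def canonical_U(energies, beta):
    """Internal energy U = <E> (eq. 1.14, evaluated by direct summation)."""
    weights = [math.exp(-beta * e) for e in energies]
    return sum(e * w for e, w in zip(energies, weights)) / sum(weights)

# cross-check against C = dU/dT by a central finite difference (k = 1, so T = 1/beta)
E = [0.0, 1.0]          # hypothetical two-level system
T0, dT = 0.5, 1e-5
c_fluct = canonical_C(E, beta=1.0 / T0)
c_diff = (canonical_U(E, 1.0 / (T0 + dT)) - canonical_U(E, 1.0 / (T0 - dT))) / (2 * dT)
print(abs(c_fluct - c_diff) < 1e-5)   # True: both routes agree
```

The fluctuation route needs only samples of E, which is why it is so convenient in a simulation: no numerical differentiation of noisy data is required.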
The next observable of interest is the total magnetization M:

M = dμ/dV.    (1.19)

If we use the equation for the potential magnetic energy,

E = −μ · B,    (1.20)

it follows that

M_n = −∂E_n/∂B.    (1.21)

We can calculate the average magnetization directly by summing over the spins s_i:

M = Σ_i s_i    (1.22)
The average magnetization per spin is just M/N, with N being the number of spins. It characterizes the strength of magnetism in a given substance. Another very important observable connected with M is the magnetic susceptibility χ_m:⁴

χ = ∂M/∂B    (1.23)

It measures the change of magnetization due to an external magnetic field. In the most general case it cannot be described by a scalar; a tensor has to be used, due to the anisotropy of the material.

⁴ We'll use χ without the subscript m since we are only dealing with magnetic susceptibilities. In the general case, however, a susceptibility χ = ∂X/∂Y describes the strength of the response of X to a change in Y.

Entropy S:

S = k_B ln Ω    (1.24)
Ω is the number of accessible microstates. So the entropy is a measure for the number of accessible microstates of a given macroscopic system in a specified state. Because of the postulate of equal a priori probabilities the number of accessible states is maximized for an isolated system in equilibrium. Thus S is maximal in equilibrium for an isolated system (or a sufficiently large system). Now we can define the free energy F:⁵

F = U − TS = −k_B T ln Z    (1.25)

It describes the maximal work that may be obtained from a system with V = const. and T = const.
1.4. Phase transitions⁶

The transition between the solid, liquid and gaseous phases, the transition between normal conductors and superconductors, or the transition between ferromagnetism and paramagnetism are well-known examples of phase transitions, i.e. transitions from a normal phase to an ordered phase. They can be classified according to their order. First order transitions involve latent heat: the system interacts with its environment and thereby absorbs or releases a fixed amount of energy without a change of its temperature. First order phase transitions are associated with mixed-phase regimes, i.e. only one part of the substance has undergone the phase transition. An example is boiling water. Second order phase transitions do not entail latent heat, which is why they are also referred to as continuous phase transitions. It is this type of phase transition we are interested in. The free energy has a singularity at the transition; this can be seen from the power-law behavior of observables calculated from F. Another feature is the diverging correlation length ξ. It is a measure for the order and correlation in a system. The correlation function G(x) itself measures the order in a system, i.e. it describes the way microscopic variables at different places are correlated, e.g. spins:

G(Δx) = ⟨s(x)s(x + Δx)⟩ − ⟨s(x)⟩²    (1.26)
⁵ Note that the IUPAP recommends the name Helmholtz energy, connected with the letter A instead of F.
⁶ For more details see for instance (18).
For T → T_C we find typically⁷

G ∝ (1/r^{d−2+η}) · g(r/ξ)  with  g(r/ξ) ∝ exp(−r/ξ)  for r ≫ ξ.    (1.27)
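The definition (1.26) translates directly into an estimator on a given spin configuration. A small sketch with hand-made configurations (illustrative only, not thesis data):

```python
def correlation(spins, d):
    """Estimate G(d) = <s(x) s(x+d)> - <s(x)>^2 (eq. 1.26) on a periodic chain."""
    n = len(spins)
    mean = sum(spins) / n
    pair = sum(spins[i] * spins[(i + d) % n] for i in range(n)) / n
    return pair - mean * mean

up = [1] * 8            # fully ordered configuration
alt = [1, -1] * 4       # perfectly alternating configuration
print(correlation(up, 3))   # 0.0: subtracting <s>^2 removes the uniform background
print(correlation(alt, 1))  # -1.0: perfect anticorrelation at distance 1
```

In a simulation one would additionally average G(d) over many sampled configurations; the single-configuration estimator above shows only the spatial average.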
To measure the degree of order in a system, e.g. magnetic order, the order parameter Ψ is introduced:

Ψ ≠ 0  for T < T_C,
Ψ = 0  for T ≥ T_C.    (1.28)

In magnetic systems the order parameter is the (total) magnetization of the system. In the ferromagnetic regime it has a value M ≠ 0. Below the Curie temperature, which is the critical temperature of ferromagnetic systems, ferromagnetism occurs because permanent magnetic moments line up in parallel. At higher temperatures thermal fluctuations destroy the magnetic order and M = 0. Close to T_C observables O can be described by a power law to first order, since higher-order contributions are negligible:

O ≈ a|ε|^κ  with  ε = T/T_C − 1    (1.29)

Here κ stands for the corresponding critical exponent, and ε is the reduced distance from the critical temperature. Near phase transitions the following relations hold:

1. C = C_0 |ε|^{−α},
2. m = m_0 |ε|^{β},
3. χ = χ_0 |ε|^{−γ},
4. ξ = ξ_0 |ε|^{−ν}.

The fact that the critical exponents are independent of whether T < T_C or T > T_C is justified by the scaling relations. Using these relations it can be shown that there are in fact only two independent exponents for the two-dimensional Ising model introduced later.
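The statement that only two exponents are independent can be made concrete with the exactly known 2d Ising values (α = 0, β = 1/8, γ = 7/4, ν = 1). The Rushbrooke relation α + 2β + γ = 2 and the hyperscaling relation dν = 2 − α (standard scaling relations, not derived in this chapter) then close exactly:

```python
from fractions import Fraction as F

# exactly known critical exponents of the 2d Ising model (literature values)
alpha, beta, gamma, nu, d = F(0), F(1, 8), F(7, 4), F(1), 2

# Rushbrooke scaling relation: alpha + 2*beta + gamma = 2
print(alpha + 2 * beta + gamma)   # 2
# hyperscaling relation: d*nu = 2 - alpha
print(d * nu == 2 - alpha)        # True
```

Exact rational arithmetic via `fractions` makes the check free of rounding issues; given any two of the exponents, the relations fix the others.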
Another interesting feature of phase transitions is universality (18): the critical exponents we introduced above depend on only three parameters of a given system:

⁷ Note that there are two correlation lengths and also two functions g(r), depending on whether T > T_C or T < T_C. This general case is only realized in anisotropic systems, which is why we will ignore it and assume just one value.
1. the dimension d of the system,
2. the range of the interaction,
3. the dimension of the spin.

The second and third parameters require explanation. The range of the interaction follows

r^{−(d+2+x)},    (1.30)

where x is a constant.⁸ The term spin dimension will be explained in the next section. Though there are strong arguments for the universality of critical exponents, it has not (yet?) been proven rigorously.
1.5. n-vector models
The Ising model that this thesis examines belongs to the class of n-vector models with n = 1. They are all defined on a lattice and may, under certain circumstances, be used to describe phenomena like ferromagnetism. In the following Hamiltonian the indices i and j are used to refer to different spins on the lattice. In the most general Hamiltonian we assume a pair interaction between all spins. The Hamiltonian for the general spin model is:⁹

H = −Σ_{i,j} J_{ij} s_i s_j − H Σ_i s_i    (1.31)

with the coupling constant J_{ij}:

J_{ij} > 0: ferromagnetic,
J_{ij} < 0: antiferromagnetic,
J_{ij} = 0: non-interacting.    (1.32)
We'll deal with ferromagnetic systems, i.e. J_{ij} > 0; furthermore we assume that the coupling constant does not depend on the position on the lattice. Later we'll also examine antiferromagnetic systems. For the sake of notation we set J_{ij} ≡ J ≡ 1. Additionally we restrict the interaction of spins to nearest neighbors. This assumption is essential if we want to solve the model. It is indicated by using ⟨i, j⟩ or sometimes (i, j) in the sum:

H = −Σ_{⟨i,j⟩} s_i s_j − H_0 Σ_i s_i    (1.33)

⁸ x > 0 is referred to as long-ranged interaction; x < d/2 − 2 < 0 is called short-ranged interaction.
⁹ See (18).
Depending on the dimension n¹⁰ of the spin s we get different n-vector models:

n = 1: Ising model,
n = 2: XY model,
n = 3: Heisenberg model.    (1.34)

The first and the second model have been solved exactly for nearest-neighbor interaction and B_0 = 0. The first has also been solved for B_0 ≠ 0. For the Heisenberg model no analytical solution is known so far; it may, however, be tackled numerically with very high precision.
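The nearest-neighbor Hamiltonian (1.33) can be evaluated directly on a spin configuration. A sketch of such an energy function (a plain illustration with J = 1 and external-field coefficient H_0; this is not the thesis' appendix code) for a periodic square lattice:

```python
def ising_energy(spins, h0=0.0):
    """Energy H = -sum_<i,j> s_i s_j - H_0 * sum_i s_i (eq. 1.33, J = 1)
    on a periodic square lattice; each nearest-neighbor bond is counted once."""
    n = len(spins)
    e = 0.0
    for i in range(n):
        for j in range(n):
            s = spins[i][j]
            e -= s * spins[(i + 1) % n][j]   # bond to the lower neighbor
            e -= s * spins[i][(j + 1) % n]   # bond to the right neighbor
            e -= h0 * s
    return e

# ground state of the ferromagnet: all spins parallel
n = 4
all_up = [[1] * n for _ in range(n)]
print(ising_energy(all_up))   # -32.0: every one of the 2*n^2 bonds contributes -1
```

Counting only the lower and right neighbor of each site visits every bond exactly once, which avoids the factor-of-two error of a naive sum over all four neighbors.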
1.5.1. Ising model
The Ising model was originally invented by Wilhelm Lenz (14). Ernst Ising solved this model in one dimension in his PhD thesis (11), thereby making it known as the Ising model. The one-dimensional Ising model has a phase transition from paramagnetism to ferromagnetism at T = 0. Lars Onsager solved the two-dimensional Ising model, finding a phase transition at T ≠ 0 (19).

The Hamiltonian in one dimension on a finite lattice is

H = −Σ_{i=1}^{N−1} s_i s_{i+1} − μB_0 Σ_i s_i    (1.35)
if we assume periodic boundary conditions, i.e. s_1 = s_N. Thereby the equivalence of all sites is ensured and the system is translationally invariant (4). s is one-dimensional, thus s = ±1, and it is straightforward to analyze the model.

¹⁰ This is the n of the n-vector model.

The general relation for M is (18):
M(T, B_0) = Nμ sinh(βμB_0) / [cosh²(βμB_0) − 2e^{−2βJ} sinh(2βJ)]^{1/2}    (1.36)
N is the number of spins and μ is the magnetic moment. M → Nμ for B_0 → ∞; this is the saturation of M. We see from eqn. (1.36) that M = 0 if B_0 = 0 and T ≠ 0. We can calculate the partition function for large systems with B_0 = 0 (4):

Z_N(T) = (2 cosh(βJ))^N  for T ≠ 0.    (1.37)
It can be shown (18) that the heat capacity is

C_B = k_B (βJ)² / cosh²(βJ).    (1.38)

The magnetic susceptibility is

χ(T) = βμ_0² / (1 − tanh(βJ)).    (1.39)

Furthermore we know that the internal energy for a chain of N spins is

E = −(N − 1) J tanh(βJ).    (1.40)
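The closed forms (1.37) and (1.38) are consistent with the general relation (1.15), C = kβ² ∂² ln Z/∂β². A quick numerical cross-check per spin (illustrative only, with k_B = J = 1):

```python
import math

def ln_z_per_spin(beta, j=1.0):
    """ln(Z_N)/N for the large-N 1d Ising chain at B_0 = 0, from Z_N = (2 cosh(beta*J))^N (eq. 1.37)."""
    return math.log(2.0 * math.cosh(beta * j))

def c_per_spin(beta, j=1.0, k=1.0):
    """Heat capacity per spin, C_B = k_B (beta*J)^2 / cosh^2(beta*J) (eq. 1.38)."""
    return k * (beta * j) ** 2 / math.cosh(beta * j) ** 2

# C = k*beta^2 * d^2(ln Z)/d(beta)^2 (eq. 1.15), via a central finite difference
beta, db = 0.7, 1e-4
second = (ln_z_per_spin(beta + db) - 2 * ln_z_per_spin(beta)
          + ln_z_per_spin(beta - db)) / db ** 2
print(abs(beta ** 2 * second - c_per_spin(beta)) < 1e-6)   # True: both routes agree
```

The same differentiation applied once instead of twice reproduces the internal energy per spin, −∂ ln Z/∂β = −J tanh(βJ), matching eq. (1.40) for large N.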
The Hamiltonian of the two-dimensional Ising model is

H = −Σ_{⟨i,j⟩} s_i s_j − H Σ_i s_i.    (1.41)
This model has been solved on a square lattice. It has been found that T_C = 2.2692 J k_B⁻¹ (18). The magnetization function is:

M_C(T) = (1 − sinh⁻⁴(2βJ))^{1/8}  if T < T_C,
         0                        if T > T_C.    (1.42)
We do not know the general solution of the two-dimensional Ising model with an arbitrary external magnetic field. The magnetization is the order parameter in these systems. Below the known critical temperature we expect ferromagnetism; above T_C paramagnetism is observed. The analytic solution of the three-dimensional Ising model is still a subject of research. The ground state of both models described so far is twofold degenerate. It consists of the spin configuration with all spins aligned in parallel in one direction. This direction depends on the external magnetic field.
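With k_B = J = 1 the Onsager results quoted above can be evaluated numerically. A short sketch (not thesis code; T_C is computed from the exact expression 2/ln(1 + √2), which equals the quoted 2.2692):

```python
import math

T_C = 2.0 / math.log(1.0 + math.sqrt(2.0))   # exact Onsager value, = 2.2692 J/k_B

def magnetization(t, j=1.0):
    """Spontaneous magnetization per spin of the 2d Ising model (eq. 1.42), k_B = 1."""
    if t >= T_C:
        return 0.0
    beta = 1.0 / t
    return (1.0 - math.sinh(2.0 * beta * j) ** -4) ** 0.125

print(round(T_C, 4))        # 2.2692
print(magnetization(0.5))   # close to 1: deep in the ordered phase
print(magnetization(3.0))   # 0.0: paramagnetic phase
```

The curve rises steeply just below T_C (the exponent 1/8 is the β quoted for the 2d Ising model) and saturates toward 1 as T → 0, which is the behavior the simulations in chapter 3 should reproduce.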
If J < 0 we are dealing with the antiferromagnetic case. Antiparallel alignment of the spins is thus preferred. Hence we find that the ground state is a checkerboard (1). In the case of a vanishing external magnetic field we get the same energy, and therefore the same heat capacity, as in the ferromagnetic case. Because of the antiferromagnetic coupling we of course get different M and χ. If we have a bipartite lattice (i.e. one that can be divided into two sublattices A and B such that a site in A has only B neighbors and vice versa) we can consider these two sublattices separately.¹¹ For nearest-neighbor coupling we can define new spins (1):

s′_j = +s_j  if j ∈ A,
       −s_j  if j ∈ B.    (1.43)

Since s′_i s′_j = −s_i s_j, introducing the new spins changes the sign of J and we retain the results of the ferromagnetic case. If H ≠ 0 then H has to be reversed on the B lattice. So we obtain the thermodynamic properties of the ferromagnet if we switch the sign of J and introduce the staggered field H_A = H and H_B = −H. The problem may be tackled using variational methods for the sublattices. We find two competing states in the phase diagram: the ferromagnetic state with m_A = m_B and the antiferromagnetic state with m_A = −m_B (1).

¹¹ A square lattice, for instance, is bipartite; we are only considering this lattice type here.
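The sublattice mapping (1.43) is easy to demonstrate: flipping the spins on sublattice B turns the antiferromagnetic checkerboard ground state into the ferromagnetic all-up state, and at H = 0 the energies agree after J → −J. A sketch (the energy helper is an illustration, not the thesis' appendix code):

```python
def energy(spins, j):
    """H = -J * sum_<i,j> s_i s_j on a periodic square lattice (H = 0), each bond once."""
    n = len(spins)
    return -j * sum(spins[i][k] * (spins[(i + 1) % n][k] + spins[i][(k + 1) % n])
                    for i in range(n) for k in range(n))

n = 4
# checkerboard ground state of the antiferromagnet (J < 0)
checker = [[1 if (i + k) % 2 == 0 else -1 for k in range(n)] for i in range(n)]
# eq. (1.43): keep spins on sublattice A (i+k even), flip them on sublattice B
mapped = [[s if (i + k) % 2 == 0 else -s for k, s in enumerate(row)]
          for i, row in enumerate(checker)]

print(mapped == [[1] * n for _ in range(n)])               # True: checkerboard -> all up
print(energy(checker, j=-1.0) == energy(mapped, j=1.0))    # True: same energy after J -> -J
```

Note that the lattice side n must be even for the checkerboard to be compatible with the periodic boundary conditions; on odd lattices the antiferromagnet is frustrated by the boundary.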
Details

- Pages: 99
- Edition: first edition
- Year: 2010
- ISBN (eBook): 9783956363146
- ISBN (paperback): 9783956366581
- File size: 1014 KB
- Language: English
- Institution / university: Johann Wolfgang Goethe-Universität Frankfurt am Main – Institut für theoretische Physik
- Publication date: July 2014
- Grade: 1.3
- Keywords: Monte Carlo, Ising model, statistical physics, Uni Frankfurt