GitHub Repository: afeiguin/comp-phys
Path: blob/master/11_02_Ising2D.ipynb
Kernel: Python 3

Simulation of the 2D Ising model

One of the most interesting phenomena in nature is ferromagnetism. A ferromagnetic (FM) material exhibits a non-zero spontaneous magnetization in the absence of an applied magnetic field. This occurs below a well-defined critical temperature $T_c$, known as the Curie temperature. For $T > T_c$ the magnetization vanishes. Hence $T_c$ separates two phases: a disordered one for $T > T_c$, and a ferromagnetic one for $T < T_c$.

Although the Ising model is very simple, it already contains much of the physics of the FM phase transition. In order to explore the properties of this model, we need to calculate some physical quantities of interest, including the mean energy $\langle E \rangle$, the mean magnetization $\langle M \rangle$, the heat capacity $C$, and the magnetic susceptibility $\chi$.

The Ising model

Consider a lattice with $N$ sites, where each site $i$ can assume two possible states $s_i = +1, -1$, or spin “up” and spin “down”. A particular configuration or microstate of the lattice is specified by the set of variables $\{s_1, s_2, \ldots, s_N\}$ for all lattice sites.

Now we need to know how the energy $E$ of a given microstate depends on the configuration of spins. The total energy in the presence of a uniform magnetic field is given by the “Ising model”: $$E = -J\sum_{\langle ij \rangle} s_i s_j - h\sum_{i=1}^{N} s_i,$$ where the first summation is over all nearest-neighbor pairs and the second summation is over all the spins of the lattice. The “exchange constant” $J$ is a measure of the strength of the interaction between nearest-neighbor spins. If $J > 0$, the states with the spins aligned ($\uparrow\uparrow$ and $\downarrow\downarrow$) are energetically favored, while for $J < 0$ the configurations with the spins antiparallel ($\uparrow\downarrow$ and $\downarrow\uparrow$) are the ones that are preferred. In the first case, we expect the state with lower energy to be “ferromagnetic”, while in the second case, we expect it to be “antiferromagnetic”. If we subject the system to a uniform magnetic field $h$ directed upward, the spins $\uparrow$ and $\downarrow$ possess an additional energy $-h$ and $+h$ respectively. Note that we choose the units of $h$ such that the magnetic moment per spin is unity.
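To make the energy formula concrete, here is a minimal sketch (not part of the notebook's code) that evaluates $E$ for a small lattice with periodic boundaries; `ising_energy` is a hypothetical helper, and each nearest-neighbor bond is counted once by pairing every site with its right and top neighbors:

```python
import numpy as np

def ising_energy(spin, J=1.0, h=0.0):
    """Energy of a 2D spin configuration (+1/-1 entries) with
    periodic boundaries. Each nearest-neighbor pair is counted
    once by summing bonds to the right and to the top."""
    right = np.roll(spin, -1, axis=1)  # neighbor in +x, wrapping around
    top = np.roll(spin, -1, axis=0)    # neighbor in +y, wrapping around
    return -J*np.sum(spin*(right + top)) - h*np.sum(spin)

# All spins up on a 4x4 lattice: 2N bonds, each contributing -J
allup = np.ones((4, 4), dtype=int)
print(ising_energy(allup))  # -32.0
```

With $h=0$ the fully aligned state gives $E = -2JN$, the ground-state energy per spin of $-2J$ quoted for the ferromagnetic case.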

Instead of obeying Newton’s laws, the dynamics of the Ising model corresponds to “spin flip” processes: a spin is chosen randomly, and the trial change corresponds to a flip of the spin, $\uparrow \rightarrow \downarrow$ or $\downarrow \rightarrow \uparrow$.

Physical quantities

The net magnetic moment, or “magnetization”, $M$ is given by $$M = \sum_{i=1}^{N} s_i.$$ Usually we are interested in the average $\langle M \rangle$ and the fluctuations $\langle M^2 \rangle - \langle M \rangle^2$ as a function of the temperature of the system and the applied magnetic field. Notice that in the absence of a magnetic field, the value of $M$ should average to zero, because the system is as likely to be found with all the spins pointing up as pointing down. Therefore, one typically measures $m = \sqrt{\langle M^2 \rangle}$ or $m = \langle |M| \rangle$, which is always a positive quantity.

The heat capacity

One way to measure the heat capacity at constant external field is from the definition: $$C = \frac{\partial \langle E \rangle}{\partial T}.$$ Another way is to use the statistical fluctuations of the total energy in the canonical ensemble: $$C = \frac{1}{k_B T^2}\left(\langle E^2 \rangle - \langle E \rangle^2\right).$$
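As a minimal sketch (with $k_B = 1$, as in the code below, and invented sample data), the fluctuation formula turns a sequence of energy measurements into a heat-capacity estimate:

```python
import numpy as np

def heat_capacity(E_samples, T):
    # C = (<E^2> - <E>^2) / (k_B T^2), with k_B = 1
    E_samples = np.asarray(E_samples, dtype=float)
    return (np.mean(E_samples**2) - np.mean(E_samples)**2) / T**2

# Hypothetical energy measurements at T = 2
print(heat_capacity([-120., -118., -122., -119., -121.], 2.0))  # 0.5
```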

The magnetic susceptibility

The magnetic susceptibility $\chi$ is an example of a “response function”, since it measures the ability of a spin to “respond”, or flip, with a change in the external magnetic field. The zero-field isothermal magnetic susceptibility is defined by the thermodynamic derivative $$\chi = \lim_{H \rightarrow 0} \frac{\partial \langle M \rangle}{\partial H}.$$ The zero-field susceptibility can be related to the magnetization fluctuations in the system: $$\chi = \frac{1}{k_B T}\left(\langle M^2 \rangle - \langle M \rangle^2\right),$$ where $\langle M^2 \rangle$ and $\langle M \rangle^2$ are zero-field values.
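Analogously (again a sketch with $k_B = 1$ and invented sample data), the zero-field susceptibility can be estimated from magnetization samples:

```python
import numpy as np

def susceptibility(M_samples, T):
    # chi = (<M^2> - <M>^2) / (k_B T), with k_B = 1
    M_samples = np.asarray(M_samples, dtype=float)
    return (np.mean(M_samples**2) - np.mean(M_samples)**2) / T

# Hypothetical magnetization measurements at T = 2
print(susceptibility([60., 64., 56., 62., 58.], 2.0))  # 4.0
```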

Metropolis algorithm

Moves

One typically picks a random spin, which is then flipped or not by calculating the energy difference between the considered spin and its 4 nearest neighbors using the formula:

$$U = J\,\mathrm{spin}(x,y)\,[\mathrm{spin}(x+1,y) + \mathrm{spin}(x-1,y) + \mathrm{spin}(x,y+1) + \mathrm{spin}(x,y-1)]$$

The energy change is $\Delta U = 2U$. The spin is flipped directly if $\Delta U \le 0$. Otherwise, it is flipped only if a number chosen randomly between 0 and 1 is smaller than the Boltzmann factor $\exp(-\Delta U/k_B T)$.

Notice that the sum in brackets can run from -4 to +4, so we can easily store the values for all the possible configurations in lookup tables.
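Since $\Delta U$ depends only on the spin $s$ and its neighbor sum, the acceptance probabilities can be tabulated once per temperature. A minimal sketch of such a table (independent of the class implementation below, with $k_B = 1$ by default):

```python
import numpy as np

def boltzmann_table(J, T, kB=1.0):
    """Acceptance probability min(1, exp(-dU/(kB*T))) for every
    (spin s, neighbor sum nnsum) pair, with dU = 2*J*s*nnsum."""
    table = {}
    for s in (-1, 1):
        for nnsum in range(-4, 5):
            dU = 2.0*J*s*nnsum
            table[(s, nnsum)] = min(1.0, np.exp(-dU/(kB*T)))
    return table

w = boltzmann_table(J=1.0, T=2.0)
print(w[(1, 4)])   # most costly flip: exp(-8/T) = exp(-4)
print(w[(1, -4)])  # energy-lowering flip: accepted with probability 1.0
```

During the sweep, each trial move then costs one dictionary (or array) lookup and one random number instead of an exponential evaluation.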

Boundary conditions

Since we are interested in the properties of an infinite system, we have to consider the boundary conditions. The simplest case corresponds to “free boundary conditions”. In a 1D chain, the spins at sites $1$ and $N$ are open ends and have one interaction each. In general a better choice is periodic boundary conditions, where sites $1$ and $N$ interact with each other, closing a loop. In this situation, the chain has the topology of a ring, and all the spins have the same number of interactions. We also say that there is translational invariance, since the origin can be picked arbitrarily.

As discussed earlier, the use of PBC minimizes the finite-size effects. However, a disadvantage is that they reduce the maximum separation between two spins to half the length of the system. In a 2D system we need to account for the periodicity in both the $x$ and $y$ directions, and our system will have the topology of a torus.
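The neighbor table built in the code below hard-codes the wrap-around at the edges; equivalently (a sketch), the periodic neighbors on an $L \times L$ torus follow from modular arithmetic on the flattened site index:

```python
def pbc_neighbors(n, L):
    """Right, left, top, bottom neighbors of the flattened site
    n = iy*L + ix on an L x L torus (periodic in both directions)."""
    ix, iy = n % L, n // L
    right = iy*L + (ix + 1) % L
    left = iy*L + (ix - 1) % L
    top = ((iy + 1) % L)*L + ix
    bottom = ((iy - 1) % L)*L + ix
    return right, left, top, bottom

# The corner site 0 of a 4x4 lattice wraps to the opposite edges
print(pbc_neighbors(0, 4))  # (1, 3, 4, 12)
```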

Initial conditions and equilibration

We can pick a random initial configuration. However, as we shall see, in some simulations the equilibration process can account for a substantial fraction of the total computer time. The most practical choice of an initial condition is an “equilibrium” configuration of a previous run which is at a temperature close to the desired temperature.

Tricks

It is convenient to store all the transition probabilities in lookup tables, so we do not have to calculate them at each step. Another trick consists of storing all the positions of the spins and their neighbors to avoid calculating many random numbers. If you need to perform several runs for different values of the temperature $T$, you can do them at the same time using the same random numbers.

Exercise 11.2: Equilibration of the 2D Ising model

  1. Run your simulation with $L=8$ and $T=2$ and choose the initial spins to be all up. Plot the variation of the energy and the magnetization with time. How much time is necessary for the system to reach equilibrium?

  2. Visually inspect several “equilibrium” configurations. Is the system ordered or disordered?

  3. Run the program with $T=1.5$ and choose the same initial configuration with all the spins up. How long does it take for the system to reach equilibrium?

  4. Visually inspect several equilibrium configurations with $T=1.5$. Are they more or less ordered than those in part 2?

  5. Is the acceptance ratio an increasing or decreasing function of $T$? Does the Metropolis algorithm become more or less efficient at low temperatures?

%matplotlib inline
import numpy as np
from matplotlib import pyplot
import matplotlib.animation as animation

class BoundaryCondition:
    RBC, PBC = range(2)

class Direction:
    RIGHT, TOP, LEFT, BOTTOM = range(4)

class Ising(object):

    def __init__(self, L, J, T):
        self.L = L
        self.N = L*L
        self.TWOJ = 2.*J
        self.T = T
        self.beta = 1./T

        # Initialize site positions
        # Initialize neighbors table for boundary conditions
        self.nn = np.zeros(shape=(self.N, 4), dtype=np.int16)
        self.position = np.zeros(shape=(L, L), dtype=np.int16)
        self.x = np.zeros(self.N, dtype=np.int16)
        self.y = np.zeros(self.N, dtype=np.int16)

        # Periodic boundary conditions
        n = 0
        for iy in range(L):
            for ix in range(L):
                self.position[iy, ix] = n
                self.x[n] = ix
                self.y[n] = iy
                self.nn[n, Direction.LEFT] = n-1
                self.nn[n, Direction.RIGHT] = n+1
                self.nn[n, Direction.TOP] = n+L
                self.nn[n, Direction.BOTTOM] = n-L
                if(ix == 0): self.nn[n, Direction.LEFT] = n+L-1
                if(ix == L-1): self.nn[n, Direction.RIGHT] = n-(L-1)
                if(iy == 0): self.nn[n, Direction.BOTTOM] = n+(L-1)*L
                if(iy == L-1): self.nn[n, Direction.TOP] = n-(L-1)*L
                n += 1

        # Initialize spins
        r = np.random.random(self.N)*2-1
        self.spin = np.ones(self.N, dtype=np.int16)
        for i in range(self.N):
            if(r[i] < 0): self.spin[i] *= -1

        self.Mtot = np.sum(self.spin)
        self.E = 0.
        for i in range(self.N):
            self.E += -J*self.spin[i]*(self.spin[self.nn[i, Direction.RIGHT]] + self.spin[self.nn[i, Direction.TOP]])

        # Transition probabilities
        self.de = np.zeros(shape=(3, 9))
        self.w = np.zeros(shape=(3, 9))
        self.set_temp(self.T)

    def set_temp(self, T):
        self.T = T
        self.beta = 1./T
        # Lookup tables for transition probabilities
        for i in range(-4, 5):
            self.de[0, i+4] = -self.TWOJ*i
            self.de[2, i+4] = self.TWOJ*i
            p = np.exp(-self.beta*self.de[0, i+4])
            self.w[0, i+4] = min(p, 1.)
            self.w[2, i+4] = min(1./p, 1.)
    def metropolis(self):
        nchanges = 0
        for n in range(self.N):
            # Trial spin change: pick a random particle
            i = int(np.random.random()*self.N)
            # Change in energy from the four nearest neighbors
            iright = self.nn[i, Direction.RIGHT]
            ileft = self.nn[i, Direction.LEFT]
            itop = self.nn[i, Direction.TOP]
            ibottom = self.nn[i, Direction.BOTTOM]
            spin_sum = self.spin[ileft] + self.spin[iright] + self.spin[itop] + self.spin[ibottom]
            s = self.spin[i]
            deltaE = self.de[s+1, spin_sum+4]
            if(deltaE <= 0. or np.random.random() < self.w[s+1, spin_sum+4]):
                self.spin[i] *= -1
                self.Mtot += 2*(-s)
                self.E += deltaE
                nchanges += 1
        return nchanges
L = 10
Nwarmup = 100
Nsteps = 1000
Ndecorr = 3
Temp = 3
J = 1.

S = Ising(L, J, Temp)
E = np.zeros(Nsteps)
M = np.zeros(Nsteps)

for i in range(Nwarmup):
    S.metropolis()

naccept = 0
for i in range(Nsteps):
    for n in range(Ndecorr):
        naccept += S.metropolis()
    E[i] = S.E
    M[i] = abs(S.Mtot)

E /= S.N
M /= S.N
Et = np.sum(E)/Nsteps
E2t = np.sum(E**2)/Nsteps
Mt = np.sum(M)/Nsteps
M2t = np.sum(M**2)/Nsteps

print("T = ", Temp)
print("<E>/N = ", Et)
print("<E^2>/N = ", E2t)
print("<M>/N = ", Mt)
print("<M^2>/N = ", M2t)
print("C=", (E2t-Et*Et)/Temp/Temp)
print("chi=", (M2t-Mt*Mt)/Temp)
print("Acceptance ratio = ", float(naccept)/S.N/Nsteps/Ndecorr)

pyplot.plot(np.arange(0, Nsteps, 1), E, ls='-', c='blue')
pyplot.xlabel("Iteration")
pyplot.ylabel("Energy");
T =  3
<E>/N =  -0.82472
<E^2>/N =  0.7169248
<M>/N =  0.2751
<M^2>/N =  0.10981480000000002
C= 0.0040846357333333356
chi= 0.011378263333333338
Acceptance ratio =  0.45854333333333336
Image in a Jupyter notebook
pyplot.plot(np.arange(0, Nsteps, 1), M, ls='-', c='red')
pyplot.xlabel("Iteration")
pyplot.ylabel("Magnetization");
Image in a Jupyter notebook
T = np.arange(0.2, 8, 0.2)
Mt = np.zeros(T.size)
Et = np.zeros(T.size)
M2t = np.zeros(T.size)
E2t = np.zeros(T.size)

S = Ising(L, J, 0.2)
Nsteps = 1000
Nwarmup = 10000
n = 0
for t in T:
    S.set_temp(t)
    for i in range(Nwarmup):
        S.metropolis()
    for i in range(Nsteps):
        for j in range(Ndecorr):
            S.metropolis()
        Et[n] += S.E
        Mt[n] += abs(S.Mtot)
        E2t[n] += S.E**2
        M2t[n] += abs(S.Mtot)**2
    print(t, Mt[n], Mt[n]/Nsteps/S.N)
    n += 1

Mt /= float(Nsteps*S.N)
Et /= float(Nsteps*S.N)
E2t /= float(Nsteps*S.N*S.N)
M2t /= float(Nsteps*S.N*S.N)
ErrorE = np.sqrt((E2t-Et**2)/Nsteps)
ErrorM = np.sqrt((M2t-Mt**2)/Nsteps)
0.2 100000.0 1.0
0.4 100000.0 1.0
0.6000000000000001 100000.0 1.0
0.8 99992.0 0.99992
1.0 99942.0 0.99942
1.2 99702.0 0.99702
1.4000000000000001 99128.0 0.99128
1.6 98086.0 0.98086
1.8 95720.0 0.9571999999999999
2.0 90848.0 0.90848
2.2 80178.0 0.8017799999999999
2.4000000000000004 66042.0 0.66042
2.6000000000000005 45392.0 0.45392000000000005
2.8000000000000003 36088.0 0.36088000000000003
3.0000000000000004 27252.0 0.27252
3.2 23986.0 0.23986000000000002
3.4000000000000004 21472.0 0.21472000000000002
3.6000000000000005 18714.0 0.18713999999999997
3.8000000000000003 18534.0 0.18533999999999998
4.0 16042.0 0.16042
4.2 16450.0 0.16449999999999998
4.4 15248.0 0.15248
4.6000000000000005 14750.0 0.1475
4.800000000000001 14170.0 0.1417
5.000000000000001 13548.0 0.13548
5.2 12864.0 0.12864
5.4 12714.0 0.12714
5.6000000000000005 12526.0 0.12526
5.800000000000001 12084.0 0.12084
6.000000000000001 12092.0 0.12092
6.2 12198.0 0.12198
6.4 11662.0 0.11662
6.6000000000000005 11182.0 0.11182
6.800000000000001 11772.0 0.11772
7.000000000000001 11436.0 0.11436
7.2 11260.0 0.11259999999999999
7.4 10568.0 0.10568
7.6000000000000005 10802.0 0.10801999999999999
7.800000000000001 10558.0 0.10558
pyplot.plot(T, Mt, ls='-', c='blue')
pyplot.errorbar(T, Mt, yerr=[ErrorM, ErrorM], fmt='--o')
pyplot.ylabel("Magnetization")
pyplot.xlabel("Temperature");
Image in a Jupyter notebook
pyplot.plot(T, Et, ls='-', c='blue')
pyplot.errorbar(T, Et, yerr=[ErrorE, ErrorE], fmt='--o')
pyplot.ylabel("Energy")
pyplot.xlabel("Temperature");
Image in a Jupyter notebook
pyplot.plot(T, (E2t-Et**2)/T/T, marker='o', ls='-', c='blue')
pyplot.ylabel("Cv")
pyplot.xlabel("Temperature");
Image in a Jupyter notebook
pyplot.plot(T, (M2t-Mt**2)/T, marker='o', ls='-', c='blue')
pyplot.ylabel("Chi")
pyplot.xlabel("Temperature");
Image in a Jupyter notebook
x, y = np.meshgrid(np.arange(0, L, 1), np.arange(0, L, 1))
s = S.spin.reshape(L, L)
pyplot.pcolormesh(x, y, s)
pyplot.ylabel("Y")
pyplot.xlabel("X");
Image in a Jupyter notebook
from IPython.display import HTML

# Animation near the critical temperature
S.set_temp(2.27)  # T close to T_c of the 2D Ising model
S.spin.fill(1)

fig, ax = pyplot.subplots()
ax = pyplot.axes(xlim=(0, L), ylim=(0, L))
Sdata = S.spin.reshape(L, L)
cont = ax.contourf(x, y, Sdata)

def evolve(i):
    global x, y, S
    S.metropolis()
    Sdata = S.spin.reshape(L, L)
    cont = ax.contourf(x, y, Sdata)
    return cont,

anim = animation.FuncAnimation(fig, evolve, frames=30, interval=100, blit=False)
HTML(anim.to_jshtml())
Image in a Jupyter notebook

Challenge 11.2

Modify the code to calculate the correlation function

$$C(r) = \langle s_0 s_r \rangle - \langle s_0 \rangle \langle s_r \rangle$$

Plot the correlations along the $x$ direction across the phase transition and describe their behavior.
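As a starting point (a sketch, not a full solution), the correlations at distance $r$ along $x$ can be accumulated from equilibrium configurations using the periodic shift of the lattice; `corr_x` is a hypothetical helper that takes an $L \times L$ spin array:

```python
import numpy as np

def corr_x(spin, r):
    """Site-averaged <s_0 s_r> - <s_0><s_r> along x for one
    periodic L x L configuration."""
    shifted = np.roll(spin, -r, axis=1)  # s_r: neighbor r sites away in x
    return np.mean(spin*shifted) - np.mean(spin)*np.mean(shifted)

# Fully ordered configuration: connected correlations vanish
print(corr_x(np.ones((8, 8), dtype=int), 3))  # 0.0
```

In the simulation, the site average over a single configuration must still be averaged over many decorrelated Monte Carlo samples at each temperature, in the same way the energy and magnetization are accumulated above.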