GitHub Repository: afeiguin/comp-phys
Path: blob/master/16_02_VMC.ipynb
Kernel: Python 3

Variational Monte Carlo

The main ingredient is a trial wave function $|\psi_T(\overline{\eta})\rangle$ that depends on a set of parameters $\overline{\eta}$. This wave function is represented in terms of a basis of orthogonal states $|x\rangle$:
$$|\psi_T(\overline{\eta})\rangle = \sum_x \langle x|\psi_T(\overline{\eta})\rangle\,|x\rangle = \sum_x C_x(\overline{\eta})\,|x\rangle,$$
where the coefficients of the parametrization are known functions of $\overline{\eta}$. We would like the wave function to be a good representation of the actual ground state of a model. Finding the best wave function means finding the set of parameters $\overline{\eta}$ that maximizes the overlap with the actual ground state. In practice this is impossible, since we do not know the ground state *a priori*, and some physical insight is needed to derive a good analytical approximation. We then apply the variational principle, which states that the variational energy of the trial state is always greater than or equal to the exact energy of the ground state:
$$E_{var} = \frac{\langle\psi_T|H|\psi_T\rangle}{\langle\psi_T|\psi_T\rangle} \geq E_0,$$
and we use the criterion of minimizing the variational energy. To do so we need to calculate this quantity for different sets of parameters $\overline{\eta}$; once we have found a proper wave function, we can calculate the physical quantities of interest.
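The variational bound is easy to verify numerically. The sketch below (not part of the original notebook; the random symmetric matrix is just an illustrative stand-in for a Hamiltonian) checks that the Rayleigh quotient of any trial vector never falls below the smallest eigenvalue:

```python
import numpy as np

# Numerical illustration of the variational principle: for any trial
# vector psi, <psi|H|psi>/<psi|psi> >= E0, the smallest eigenvalue.
# The random symmetric H here is an illustrative stand-in, not a model.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                      # random real symmetric "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]          # exact ground-state energy

for _ in range(1000):
    psi = rng.normal(size=6)           # random trial state
    E_var = psi @ H @ psi / (psi @ psi)
    assert E_var >= E0 - 1e-12         # variational bound always holds
print("E0 =", E0)
```

Equality is reached only when the trial vector is the exact ground eigenvector, which is why minimizing $E_{var}$ over the parameters is a sensible strategy.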

The expectation value of an arbitrary operator $O$ is
$$\langle O\rangle = \frac{\langle\psi_T|O|\psi_T\rangle}{\langle\psi_T|\psi_T\rangle} = \sum_x \frac{|\langle x|\psi_T\rangle|^2}{\sum_{x'}|\langle x'|\psi_T\rangle|^2}\,\frac{\langle x|O|\psi_T\rangle}{\langle x|\psi_T\rangle} = \sum_x P_x O_x,$$
with
$$P_x = \frac{|\langle x|\psi_T\rangle|^2}{\sum_{x'}|\langle x'|\psi_T\rangle|^2} \qquad \text{and} \qquad O_x = \frac{\langle x|O|\psi_T\rangle}{\langle x|\psi_T\rangle}.$$
This expression has precisely the form of a mean value in statistical mechanics, with $P_x$ playing the role of the Boltzmann factor:
$$P_x \geq 0;\quad \sum_x P_x = 1.$$
The first step in calculating it consists of generating a collection of configurations distributed according to this probability. For that purpose we employ the Metropolis algorithm: starting from a configuration $|x\rangle$, we accept a new configuration $|x'\rangle$ with probability $\min(1,R)$, where $R = |\langle\psi_T|x'\rangle|^2 / |\langle\psi_T|x\rangle|^2$, or with probability $R/(1+R)$ if we use the heat-bath approach.
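The two acceptance rules can be sketched in a few lines (an illustrative sketch, not part of the original notebook; the function names are ours), given the weight ratio $R$ of the proposed and current configurations:

```python
import numpy as np

# Sketch of the two acceptance rules for a proposed move, given
# R = |<psi_T|x'>|^2 / |<psi_T|x>|^2.  Names are illustrative.
rng = np.random.default_rng(1)

def accept_metropolis(R):
    # Metropolis: accept with probability min(1, R)
    return rng.random() < min(1.0, R)

def accept_heat_bath(R):
    # Heat bath: accept with probability R / (1 + R)
    return rng.random() < R / (1.0 + R)

# For an unfavorable move (R < 1) Metropolis accepts with probability R,
# heat bath with the smaller probability R/(1+R); both satisfy detailed balance.
R = 0.5
n = 100000
rate = sum(accept_metropolis(R) for _ in range(n)) / n
print(rate)   # ≈ 0.5
```

Both rules leave the distribution $P_x$ invariant; Metropolis simply accepts favorable moves more aggressively.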

The variational simulations are simple to perform and very stable. Since the probabilities do not depend on the statistics of the particles involved, they do not suffer from the sign problem. However, the results depend decisively on the quality of the variational wave function, because they are completely pre-determined by it and by the physical arguments that define it. In the particular situation in which the trial function coincides with the exact ground state, the matrix elements $O_x$ for $O=H$ are all equal to $E_0$:
$$E_x = \frac{\langle x|H|\psi_T\rangle}{\langle x|\psi_T\rangle} = E_0.$$
This is the property called "zero variance": the more the wave function resembles the actual ground state, the more rapidly the variational energy converges with the number of iterations.
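The zero-variance property can be checked explicitly on the smallest nontrivial case. The sketch below (our addition, not part of the original notebook) builds the Marshall-rotated two-site Heisenberg Hamiltonian in the basis $\{|\!\uparrow\uparrow\rangle, |\!\uparrow\downarrow\rangle, |\!\downarrow\uparrow\rangle, |\!\downarrow\downarrow\rangle\}$ and verifies that, when $\psi_T$ is the exact ground state, the local energy is the same for every configuration:

```python
import numpy as np

# Zero-variance check: with the exact ground state as trial state, the
# local energy E_x = <x|H|psi>/<x|psi> is constant over configurations.
# H is the Marshall-rotated two-site Heisenberg Hamiltonian,
# H = Sz1*Sz2 - (S+1 S-2 + S-1 S+2)/2, in the basis {uu, ud, du, dd}.
H = np.array([[ 0.25,  0.0 ,  0.0 ,  0.0 ],
              [ 0.0 , -0.25, -0.5 ,  0.0 ],
              [ 0.0 , -0.5 , -0.25,  0.0 ],
              [ 0.0 ,  0.0 ,  0.0 ,  0.25]])
E, V = np.linalg.eigh(H)
psi = V[:, 0]          # exact ground state, E0 = E[0] = -0.75

# Local energies over the configurations with nonzero amplitude
Eloc = [(H @ psi)[x] / psi[x] for x in range(4) if abs(psi[x]) > 1e-12]
print(Eloc)            # every entry equals -0.75
```

Since $H|\psi\rangle = E_0|\psi\rangle$, every ratio $(H\psi)_x/\psi_x$ collapses to $E_0$ exactly, so the Monte Carlo estimate has zero statistical variance.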

In general, the computation becomes more complicated, or numerically more expensive, when the number of variational parameters $\overline{\eta}$ that define the trial state is large. Thus, we always try to keep the form of the wave function simple, with only a few variational degrees of freedom.

Quantum Spin Chain

We seek a good representation of the ground state wave function for $L$ spins interacting via a Heisenberg Hamiltonian:
$$H = \sum_{i=1}^{L-1} S^z_i S^z_{i+1} - \frac{1}{2}\left[S^+_i S^-_{i+1} + S^-_i S^+_{i+1}\right].$$
Notice the minus sign in the off-diagonal term. This results from a unitary transformation called the "Marshall rotation". As a consequence, all coefficients in the wave function are positive and real, and we do not have to worry about complex phases or signs. As we learned in previous lectures, we can only solve the problem exactly for a small number of spins (in reality, there is an exact analytical solution called the "Bethe Ansatz"). We will now learn how to solve the problem using a variational approximation to the ground state. Leaving aside the justification for our choice of wave function, we introduce the so-called Jastrow wave function:
$$|\Psi\rangle = \sum_{\bar{\sigma}} \exp\left(\alpha \sum_{i>j} \frac{S^z_i S^z_j}{|i-j|}\right)|\bar{\sigma}\rangle,$$

where $\sigma = \pm 1/2$ represents the orientation of the spins along the $z$ axis. This expression enjoys an elegant simplicity and depends on only one free variational parameter $\alpha$. We start our simulation by choosing a pair of spins; if they are anti-parallel, we attempt a spin flip. The acceptance probability is determined by the ratio $|\Psi(\bar{\sigma}')|^2 / |\Psi(\bar{\sigma})|^2$, where $\Psi(\bar{\sigma}) = \exp\left(\alpha \sum_{i>j} \frac{\sigma_i \sigma_j}{|i-j|}\right)$. Notice that this probability reminds us of a Boltzmann factor with inverse temperature $\alpha$ and a classical Ising-like Hamiltonian $H' = \sum_{i>j} \frac{S^z_i S^z_j}{|i-j|}$. Therefore, in order to implement the code, we only have to introduce minor modifications to our previous Monte Carlo code for the Ising model. There are some important changes: (i) since we are only interested in the sector with zero total spin $S^z=0$, we pick spins in pairs and attempt spin flips that preserve the total magnetization; (ii) the interactions in the effective classical Hamiltonian are long-range; (iii) more importantly, the expression for the energy involves the actual quantum Hamiltonian, and we need to account for all possible spin flips to evaluate the "local energy":
$$E_{\bar{\sigma}} = \frac{\langle\bar{\sigma}|H|\Psi\rangle}{\langle\bar{\sigma}|\Psi\rangle} = \sum_i \sigma_i\sigma_{i+1} - \frac{1}{2}{\sum_i}' \frac{\Psi(\bar{\sigma}_i')}{\Psi(\bar{\sigma})},$$
where the primed sum runs over anti-parallel nearest-neighbor pairs, and $\bar{\sigma}_i'$ denotes the configuration $\bar{\sigma}$ with spins $i$ and $i+1$ flipped. The last term involves all the allowed spin flips between nearest neighbors, and has at most $L$ non-zero contributions.

%matplotlib inline
import numpy as np
from matplotlib import pyplot
import matplotlib.animation as animation

class Direction:
    RIGHT, LEFT = range(2)

def distance(r, L):
    # Shortest distance between two sites on a ring of length L
    return abs(abs(r) - L*int(2*abs(r)/L))

class VMC(object):

    def __init__(self, L, alpha):
        self.L = L
        self.alpha = alpha

        # Table for probabilities
        self.psi2 = np.zeros(shape=(3, int(L/2+1)))

        # Initialize neighbors table for boundary conditions
        self.nn = np.zeros(shape=(self.L, 2), dtype=np.int16)

        # Periodic boundary conditions
        for ix in range(L):
            self.nn[ix, Direction.LEFT] = ix-1
            self.nn[ix, Direction.RIGHT] = ix+1
        self.nn[0, Direction.LEFT] = L-1
        self.nn[L-1, Direction.RIGHT] = 0

        # Initialize spins: we pick L/2 unique random positions and flip
        # them, so the total magnetization is zero
        self.spin = np.ones(self.L, dtype=np.int16)
        pos = np.random.choice(range(L), int(L/2), replace=False)
        self.spin[pos] = -1

        self.set_alpha(alpha)

    def set_alpha(self, alpha):
        self.alpha = alpha
        # Lookup tables for transition probabilities:
        # pair-wise contribution to the argument of the exponential
        for i in range(-1, 2, 2):
            for l in range(1, int(self.L/2)+1):
                self.psi2[i+1, l] = self.alpha*i*0.25/l

    def metropolis(self):
        nchanges = 0
        # Trial spin change: pick a random pair of anti-parallel spins
        i = 0
        j = 0
        while(self.spin[i] == self.spin[j]):
            i = int(np.random.random()*self.L)
            j = int(np.random.random()*self.L)
        si = self.spin[i]
        sj = self.spin[j]
        arg = 0.
        for k in range(self.L):
            sk = self.spin[k]
            if(i != k and k != j):
                arg += self.psi2[si*sk+1, distance(i-k, self.L)]
                arg += self.psi2[sj*sk+1, distance(j-k, self.L)]
        p = np.exp(-4*arg)
        if(p >= 1 or np.random.random() < p):
            self.spin[i] *= -1
            self.spin[j] *= -1
            nchanges += 1
        return nchanges

    def Psi(self, spins):
        arg = 0.
        for i in range(self.L):
            for j in range(i+1, self.L):
                si = spins[i]
                sj = spins[j]
                arg += self.psi2[si*sj+1, distance(i-j, self.L)]
        return np.exp(arg)

    def Eloc(self):
        Ediag = 0.
        Eoff = 0.
        for i in range(self.L):
            j = self.nn[i, Direction.RIGHT]
            si = self.spin[i]
            sj = self.spin[j]
            # Diagonal contribution
            Ediag += si*sj*0.25
            # Off-diagonal contribution
            if(si != sj):
                arg_flip = 0.
                for k in range(self.L):
                    sk = self.spin[k]
                    if(i != k and k != j):
                        arg_flip += self.psi2[si*sk+1, distance(i-k, self.L)]
                        arg_flip += self.psi2[sj*sk+1, distance(k-j, self.L)]
                Eoff -= 0.5*np.exp(-2*arg_flip)
        return Ediag+Eoff

    def Eloc2(self):
        Ediag = 0.
        Eoff = 0.
        for i in range(self.L):
            j = self.nn[i, Direction.RIGHT]
            si = self.spin[i]
            sj = self.spin[j]
            # Diagonal contribution
            Ediag += si*sj*0.25
            # Off-diagonal contribution: flip the pair and evaluate the
            # wave-function ratio explicitly (slow but transparent)
            if(si != sj):
                aux_spin = np.array(self.spin)
                aux_spin[i] *= -1
                aux_spin[j] *= -1
                Eoff -= 0.5*self.Psi(aux_spin)/self.Psi(self.spin)
        return Ediag+Eoff
system = VMC(4, -2.2)
for i in range(10):
    system.metropolis()
print(system.spin)
[ 1 -1 1 -1]
print(system.Eloc(),system.Eloc2())
-1.665742167396159 -1.665742167396159
%matplotlib inline
from matplotlib import pyplot

alphas = np.linspace(-3, 3, 30)
e0 = []
for a in alphas:
    Etot = 0.
    Nmeas = 0
    system.set_alpha(a)
    for i in range(50000):
        system.metropolis()
        if(i%10 == 0):
            Etot += system.Eloc()
            Nmeas += 1
    print(a, Etot/Nmeas)
    e0.append(Etot/Nmeas)
-3.0 -1.729769365550574
-2.793103448275862 -1.7872177163931446
-2.586206896551724 -1.8379791156568903
-2.3793103448275863 -1.8696395603489673
-2.1724137931034484 -1.910439458847357
-1.9655172413793103 -1.9620126001598128
-1.7586206896551724 -1.9789666950170304
-1.5517241379310345 -1.9961183813033483
-1.3448275862068966 -1.999650165178745
-1.1379310344827587 -1.987005771799387
-0.9310344827586206 -1.9625380620435842
-0.7241379310344827 -1.9227976490302667
-0.5172413793103448 -1.8632282313135518
-0.31034482758620685 -1.7999406222654926
-0.10344827586206895 -1.7213720848004934
0.10344827586206895 -1.6448720516273017
0.31034482758620685 -1.4959659072383031
0.5172413793103448 -1.4394582730080288
0.7241379310344827 -1.3531059365830724
0.9310344827586206 -1.216992805825818
1.137931034482759 -1.1071777515003158
1.3448275862068968 -1.006753258119081
1.5517241379310347 -0.9087150450610522
1.7586206896551726 -0.8624710936668375
1.9655172413793105 -0.8100401757427019
2.1724137931034484 -0.6840510660230883
2.3793103448275863 -0.6444484464544109
2.586206896551724 -0.5613096623068099
2.793103448275862 -0.5372378299383832
3.0 -0.4802727068343488
pyplot.xlabel("alpha")
pyplot.ylabel("Energy")
pyplot.plot(alphas, e0, 'bo-', lw=3);
(Plot: variational energy as a function of alpha, with a minimum near alpha ≈ -1.3.)
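The variational minimum found above (E ≈ -2.0 near α ≈ -1.3) can be benchmarked against exact diagonalization. The sketch below (our addition, not part of the original notebook) builds the Marshall-rotated Hamiltonian for the same L = 4 ring, with the periodic boundary conditions used in the code, from Kronecker products of single-site operators:

```python
import numpy as np

# Exact diagonalization of the Marshall-rotated Heisenberg ring, L = 4,
# H = sum_i Sz_i Sz_{i+1} - (S+_i S-_{i+1} + S-_i S+_{i+1})/2, PBC.
# This benchmarks the variational minimum E ≈ -2.0 found above.
L = 4
sz = 0.5 * np.array([[1., 0.], [0., -1.]])
sp = np.array([[0., 1.], [0., 0.]])   # S^+
sm = sp.T                             # S^-
I2 = np.eye(2)

def site_op(op, i):
    # Embed a single-site operator at site i in the full 2^L Hilbert space
    mats = [op if k == i else I2 for k in range(L)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = np.zeros((2**L, 2**L))
for i in range(L):                    # periodic boundary conditions
    j = (i + 1) % L
    H += site_op(sz, i) @ site_op(sz, j)
    H -= 0.5 * (site_op(sp, i) @ site_op(sm, j) + site_op(sm, i) @ site_op(sp, j))

E0 = np.linalg.eigvalsh(H)[0]
print(E0)   # -2.0, the exact ground-state energy of the 4-site ring
```

The one-parameter Jastrow ansatz reaches E ≈ -1.9996 at its optimum, i.e., within about 0.02% of the exact value; the Marshall rotation does not change the spectrum, so this is also the ground-state energy of the original Heisenberg ring.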