Quantum Spins
We are going to consider a magnetic insulator, a Mott insulator. In this problem we have one valence electron per atom, with very localized wave-functions. The Coulomb repulsion between electrons on the same orbital is so strong that the electrons are bound to their host atoms, and cannot move. For this reason, charge disappears from the equation, and the only remaining degree of freedom is the spin of the electrons. The corresponding local state can therefore be either $|\uparrow\rangle$ or $|\downarrow\rangle$. The only interaction taking place is a process that “flips” two anti-aligned neighboring spins, $|\uparrow\downarrow\rangle \leftrightarrow |\downarrow\uparrow\rangle$.
Let us now consider a collection of spins residing on the sites of a –one-dimensional, for simplicity– lattice. An arbitrary state of $N$ spins can be described by using the projection $S^z_i$ ($\uparrow$ or $\downarrow$) of each spin as $|S^z_1 S^z_2 \cdots S^z_N\rangle$, e.g. $|\uparrow\downarrow\downarrow\uparrow\cdots\rangle$. As we can easily see, there are $2^N$ of such configurations.
We shall describe the interactions between neighboring spins using the so-called Heisenberg Hamiltonian:
$$H = J \sum_i \vec{S}_i \cdot \vec{S}_{i+1},$$
where $\vec{S}_i = (S^x_i, S^y_i, S^z_i)$ is the spin operator acting on the spin on site $i$.
Since we are concerned with spins one-half, $S = 1/2$, all these operators have a $2 \times 2$ matrix representation, related to the well-known Pauli matrices ($\hbar = 1$):
$$S^\alpha = \frac{1}{2}\sigma^\alpha, \qquad \sigma^x = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}, \quad \sigma^y = \begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}, \quad \sigma^z = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}.$$
These matrices act on two-dimensional vectors defined by the basis states $|\uparrow\rangle = \binom{1}{0}$ and $|\downarrow\rangle = \binom{0}{1}$. It is useful to introduce the identities:
$$S^x = \frac{1}{2}\left(S^+ + S^-\right), \qquad S^y = \frac{1}{2i}\left(S^+ - S^-\right),$$
where $S^+$ and $S^-$ are the spin raising and lowering operators. It is intuitively easy to see why by looking at how they act on the basis states: $S^+|\downarrow\rangle = |\uparrow\rangle$, $S^+|\uparrow\rangle = 0$, and $S^-|\uparrow\rangle = |\downarrow\rangle$, $S^-|\downarrow\rangle = 0$. Their corresponding matrix representations are:
$$S^+ = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}, \qquad S^- = \begin{pmatrix}0 & 0\\ 1 & 0\end{pmatrix}.$$
We can now re-write the Hamiltonian ([heisenberg]) as:
$$H = J\sum_i \left[ S^z_i S^z_{i+1} + \frac{1}{2}\left(S^+_i S^-_{i+1} + S^-_i S^+_{i+1}\right)\right].$$
The first term in this expression is diagonal and does not flip spins. This is the so-called Ising term. The second term is off-diagonal, and involves lowering and raising operators on neighboring spins, and is responsible for flipping anti-aligned spins. This is the “$XY$” part of the Hamiltonian.
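These operator identities are easy to check numerically. A minimal sketch in NumPy (the variable names are our own):

```python
import numpy as np

# spin-1/2 operators in the {|up>, |down>} basis (hbar = 1)
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sp = np.array([[0, 1], [0, 0]], dtype=complex)   # raising operator S+
Sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator S-

# verify S^x = (S+ + S-)/2 and S^y = (S+ - S-)/(2i)
assert np.allclose(Sx, (Sp + Sm) / 2)
assert np.allclose(Sy, (Sp - Sm) / (2j))

# S+ flips |down> = (0,1)^T into |up> = (1,0)^T
assert np.allclose(Sp @ np.array([0, 1]), np.array([1, 0]))
```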
The Heisenberg spin chain is a paradigmatic model in condensed matter. Not only is it attractive due to its relative simplicity, but it can also describe real materials that can be studied experimentally. The Heisenberg chain is also a prototypical integrable system that can be solved exactly by the Bethe Ansatz, and can be studied using bosonization techniques and conformal field theory.
In these lectures, we will be interested in obtaining the ground state properties of this model by numerically solving the time-independent Schrödinger equation:
$$H|\psi_n\rangle = E_n|\psi_n\rangle,$$
where $H$ is the Hamiltonian of the problem and $|\psi_n\rangle$ its eigenstates, with the corresponding eigenvalues, or energies, $E_n$.
Exact diagonalization

Pictorial representation of the Hamiltonian-building recursion explained in the text. At each step, the block size is increased by adding one spin at a time.
In this section we introduce a technique that will allow us to calculate the ground state, and even excited states, of small Heisenberg chains. Exact Diagonalization (ED) is a conceptually simple technique which basically consists of diagonalizing the Hamiltonian matrix by brute force. Just as the spin operators, the Hamiltonian also has a corresponding matrix representation. In principle, if we are able to compute all the matrix elements, we can use a linear algebra package to diagonalize it and obtain all the eigenvalues and eigenvectors @lapack.
In these lectures we are going to follow a rather unconventional procedure to describe how this technique works. It is important to point out that this is an inefficient and impractical way to diagonalize the Hamiltonian, and more sophisticated techniques are generally used in practice.
Two-spin problem
The Hilbert space for the two-spin problem consists of four possible configurations of two spins: $|\uparrow\uparrow\rangle$, $|\uparrow\downarrow\rangle$, $|\downarrow\uparrow\rangle$, and $|\downarrow\downarrow\rangle$.
The problem is described by the Hamiltonian:
$$H_{12} = J\,\vec{S}_1 \cdot \vec{S}_2 = J\left[S^z_1 S^z_2 + \frac{1}{2}\left(S^+_1 S^-_2 + S^-_1 S^+_2\right)\right].$$
The corresponding matrix will have dimensions $4 \times 4$. In order to compute this matrix we shall use some simple matrix algebra to first obtain the single-site operators in the expanded Hilbert space. This is done by following a simple scheme: an operator acting on the left spin will have the following matrix form:
$$\tilde{S}^\alpha_1 = S^\alpha \otimes \mathbb{1}_2.$$
Similarly, for an operator acting on the right spin:
$$\tilde{S}^\alpha_2 = \mathbb{1}_2 \otimes S^\alpha,$$
where we introduced the $2 \times 2$ identity matrix $\mathbb{1}_2$. The product of two operators acting on different sites can be obtained as:
$$\tilde{S}^\alpha_1 \tilde{S}^\beta_2 = S^\alpha \otimes S^\beta.$$
It is easy to see that the Hamiltonian matrix will be given by:
$$H_{12} = J\left[S^z \otimes S^z + \frac{1}{2}\left(S^+ \otimes S^- + S^- \otimes S^+\right)\right],$$
where we used the single spin ($2 \times 2$) matrices $S^z$, $S^+$ and $S^-$. We leave as an exercise for the reader to show that the final form of the matrix, in the basis $\{|\uparrow\uparrow\rangle, |\uparrow\downarrow\rangle, |\downarrow\uparrow\rangle, |\downarrow\downarrow\rangle\}$, is:
$$H_{12} = J\begin{pmatrix} 1/4 & 0 & 0 & 0 \\ 0 & -1/4 & 1/2 & 0 \\ 0 & 1/2 & -1/4 & 0 \\ 0 & 0 & 0 & 1/4 \end{pmatrix}.$$
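The Kronecker-product construction can be reproduced in a few lines of NumPy, which also serves to check the exercise (a sketch; the names are our own):

```python
import numpy as np

J = 1.0
Sz = 0.5 * np.array([[1, 0], [0, -1.0]])
Sp = np.array([[0, 1.0], [0, 0]])   # raising operator
Sm = Sp.T                           # lowering operator

# H12 = J [ Sz (x) Sz + (S+ (x) S- + S- (x) S+)/2 ]
H12 = J * (np.kron(Sz, Sz) + 0.5 * (np.kron(Sp, Sm) + np.kron(Sm, Sp)))
print(H12)
# spectrum: one singlet at -3J/4 and a threefold degenerate triplet at J/4
print(np.linalg.eigvalsh(H12))   # [-0.75  0.25  0.25  0.25]
```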
Obtaining the eigenvalues and eigenvectors is also a straightforward exercise: two of them are already given, and the entire problem now reduces to diagonalizing a two-by-two matrix. We therefore obtain the well-known result: the ground state, the singlet $\frac{1}{\sqrt{2}}\left(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\right)$, has energy $E_0 = -3J/4$, and the other three eigenstates form a triplet multiplet with energy $E = J/4$.
Many spins
Imagine now that we add a third spin to the right of our two spins. We can use the previous result to obtain the new Hamiltonian matrix as:
$$H_{123} = H_{12} \otimes \mathbb{1}_2 + J\left[\tilde{S}^z_2 \otimes S^z + \frac{1}{2}\left(\tilde{S}^+_2 \otimes S^- + \tilde{S}^-_2 \otimes S^+\right)\right].$$
Here we used the single spin $S^z$, $S^\pm$, and the `tilde` matrices defined in Eqs.([tildeL]) and ([tildeR]): $\tilde{S}^z_2 = \mathbb{1}_2 \otimes S^z$ and $\tilde{S}^\pm_2 = \mathbb{1}_2 \otimes S^\pm$.
It is easy to see that this leads to a recursion scheme to construct the Hamiltonian matrix at the step $i+1$ as:
$$H_{i+1} = H_i \otimes \mathbb{1}_2 + J\left[\tilde{S}^z_i \otimes S^z + \frac{1}{2}\left(\tilde{S}^+_i \otimes S^- + \tilde{S}^-_i \otimes S^+\right)\right],$$
with $\tilde{S}^\alpha_i = \mathbb{1}_{2^{i-1}} \otimes S^\alpha$ and $H_1 = 0$.
This recursion algorithm can be visualized as a left ‘block’, to which we add new ‘sites’ or spins to the right, one at a time, as shown in Fig.[fig:block]. The block has a ‘block Hamiltonian’, $H_i$, that is iteratively built by connecting to the new spins through the corresponding interaction terms.
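The recursion can be sketched directly in NumPy; the helper below grows an open Heisenberg chain one spin at a time (function and variable names are our own):

```python
import numpy as np

J = 1.0
Sz = 0.5 * np.array([[1, 0], [0, -1.0]])
Sp = np.array([[0, 1.0], [0, 0]])
Sm = Sp.T

def build_chain(n):
    """Open Heisenberg chain of n spins, grown one site at a time."""
    H = np.zeros((2, 2))            # single spin: no interaction yet
    tSz, tSp, tSm = Sz, Sp, Sm      # operators of the block's rightmost spin
    for _ in range(n - 1):
        d = H.shape[0]
        # enlarge the block Hamiltonian and couple it to the new spin
        H = (np.kron(H, np.eye(2))
             + J * (np.kron(tSz, Sz)
                    + 0.5 * (np.kron(tSp, Sm) + np.kron(tSm, Sp))))
        # rightmost-spin operators in the enlarged space
        tSz = np.kron(np.eye(d), Sz)
        tSp = np.kron(np.eye(d), Sp)
        tSm = np.kron(np.eye(d), Sm)
    return H

# two spins reproduce the singlet energy -3J/4
print(np.linalg.eigvalsh(build_chain(2))[0])   # -0.75
```

For three spins this gives the open-chain ground energy $-J$, which can be checked analytically by writing $H = \vec{S}_2\cdot(\vec{S}_1+\vec{S}_3)$.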
Challenge 12.1:
Compute the energy and specific heat of the spin chain as a function of temperature.
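A possible starting point for this challenge: once the full spectrum $\{E_n\}$ is known, the thermal energy follows from $E(T) = \sum_n E_n e^{-E_n/T}/Z$ and the specific heat from $C = (\langle E^2\rangle - \langle E\rangle^2)/T^2$ (with $k_B = 1$). A sketch for a small open chain (the chain length $N=6$ is our own choice for illustration):

```python
import numpy as np

J, N = 1.0, 6
Sz = 0.5 * np.array([[1, 0], [0, -1.0]])
Sp = np.array([[0, 1.0], [0, 0]])
Sm = Sp.T

# build the open-chain Hamiltonian by the block recursion
H, tSz, tSp, tSm = np.zeros((2, 2)), Sz, Sp, Sm
for _ in range(N - 1):
    d = H.shape[0]
    H = (np.kron(H, np.eye(2))
         + J * (np.kron(tSz, Sz) + 0.5 * (np.kron(tSp, Sm) + np.kron(tSm, Sp))))
    tSz, tSp, tSm = (np.kron(np.eye(d), S) for S in (Sz, Sp, Sm))

E = np.linalg.eigvalsh(H)          # full spectrum

def thermal(T):
    """Energy and specific heat at temperature T (k_B = 1)."""
    w = np.exp(-(E - E.min()) / T)  # shift to avoid overflow
    Z = w.sum()
    e1 = (E * w).sum() / Z
    e2 = (E**2 * w).sum() / Z
    return e1, (e2 - e1**2) / T**2

for T in (0.1, 0.5, 1.0, 2.0):
    print(T, *thermal(T))
```

At low temperature the energy approaches the ground-state energy, and at high temperature it approaches the average of the spectrum, with the specific heat vanishing in both limits.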
A practical exact diagonalization algorithm
1. Initialization: topology of the lattice, neighbors, and signs.
2. Construction of a basis suitable for the problem.
3. Construction of the matrix elements of the Hamiltonian.
4. Diagonalization of the matrix.
5. Calculation of observables or expectation values.
As we shall see below, we are going to need the concept of a “binary word”. A binary word is the binary representation of an integer (in powers of ‘two’, i.e. $n = \sum_k b_k 2^k$), and consists of a sequence of ‘bits’ $b_k$. A bit in the binary basis corresponds to a digit in the decimal system, and can assume the values ‘zero’ or ‘one’. At this stage we consider it appropriate to introduce the logical operators AND, OR, and XOR. These binary operators act between bits, and their truth tables are listed below:
| AND | 0 | 1 |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| OR | 0 | 1 |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 1 | 1 |
| XOR | 0 | 1 |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 1 | 0 |
Bit manipulation is a very useful tool, and most programming languages provide the necessary operators.
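In Python, for instance, these operators act in parallel on all the bits of an integer and are written `&`, `|`, and `^`:

```python
# bitwise operators act in parallel on all bits of an integer
a, b = 0b0101, 0b1100          # the integers 5 and 12
assert a & b == 0b0100         # AND -> 4
assert a | b == 0b1101         # OR  -> 13
assert a ^ b == 0b1001         # XOR -> 9

# shifts and masks select individual bits
assert 1 << 3 == 0b1000        # bit 3 set
assert (a >> 2) & 1 == 1       # read bit 2 of a
print(bin(a ^ b))              # 0b1001
```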
Initialization and definitions
In the program, it is convenient to store in arrays all those quantities that will be used more frequently. In particular, we must determine the topology of the cluster, labeling the sites, and storing the components of the lattice vectors. We must also generate arrays with the nearest neighbors, and the next-nearest neighbors, according to our needs.
Construction of the basis
Memory limitations impose severe restrictions on the size of the clusters that can be studied with this method. To understand this point, note that although the lowest energy state can be written in the basis as $|\psi_0\rangle = \sum_n c_n |\phi_n\rangle$, this expression is of no practical use unless $|\psi_0\rangle$ itself is expressed in a convenient basis to which the Hamiltonian can be easily applied. A natural orthonormal basis for fermion systems is the occupation number representation, describing all the possible distributions of the electrons over the sites, while for spin systems it is convenient to work in a basis where $S^z$ is defined at every site, schematically represented as $|\uparrow\downarrow\downarrow\uparrow\cdots\rangle$. The size of this type of basis set grows exponentially with the system size. In practice this problem can be considerably alleviated by the use of symmetries of the Hamiltonian that reduce the matrix to a block form. The most obvious symmetry is the number of particles in the problem, which is usually conserved, at least for fermionic problems. The total projection of the spin, $S^z_{\text{total}}$, is also a good quantum number. For translationally invariant problems, the total momentum of the system is also conserved, introducing a reduction of $1/N$ in the number of states (this does not hold for models with open boundary conditions or explicit disorder). In addition, several Hamiltonians have additional symmetries. On a square lattice, rotations in $\pi/2$ about a given site, spin inversion, and reflections with respect to the lattice axes are good quantum numbers (although care must be taken in their implementation, since some of these operations are combinations of others and thus not independent).
In the following we shall consider a spin-1/2 Heisenberg chain as a practical example. In this model it is useful to represent the spins pointing in the ‘up’ direction by a digit ‘1’, and the down-spins by a ‘0’. Following this rule, a state in the basis can be represented by a sequence of ones and zeroes, i.e., a “binary word”. Thus, the two Néel configurations in the 4-site chain can be seen as $|\uparrow\downarrow\uparrow\downarrow\rangle = (1010)$ and $|\downarrow\uparrow\downarrow\uparrow\rangle = (0101)$. Once the up-spins have been placed, the whole configuration has been uniquely determined, since the remaining sites can only be occupied by down-spins. The resulting binary number can be easily converted into an integer $I = \sum_i 2^i n_i$, where the summation is over all the sites of the lattice, and $n_i$ can be $0$ or $1$. For example: $(1010) = 2^3 + 2^1 = 10$ and $(0101) = 2^2 + 2^0 = 5$.
Using the above convention, we can construct the whole basis for the given problem. However, we must consider the memory limitations of our computer, introducing some symmetries to make the problem more tractable. The symmetries are operations that commute with the Hamiltonian, allowing us to divide the basis into subspaces with well defined quantum numbers. By means of similarity transformations we can generate a sequence of smaller matrices along the diagonal (i.e. the Hamiltonian matrix is “block diagonal”), and each of them can be diagonalized independently. For fermionic systems, the simplest symmetries are associated to the conservation of the number of particles and the projection of the total spin in the $z$ direction. In the spin-1/2 Heisenberg model, a matrix with $2^N \times 2^N$ elements can be reduced to smaller matrices corresponding to the projections $m = -N/2, -N/2+1, \ldots, N/2$.
The dimension of each of these subspaces is obtained from the combinatorial number $\binom{N}{N_\uparrow}$, where $N_\uparrow = N/2 + m$ is the number of up-spins.
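These dimensions are easy to tabulate with the standard library (a quick sketch):

```python
from math import comb

N = 4
# number of states with N_up = N/2 + m up spins, for m = -N/2 ... N/2
dims = [comb(N, n_up) for n_up in range(N + 1)]
print(dims)                    # [1, 4, 6, 4, 1]
assert sum(dims) == 2**N       # the subspaces exhaust the full basis
```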
Example: 4-site Heisenberg chain
The total basis has $2^4 = 16$ states that can be grouped into subspaces with well defined values of the quantum number $m$. The dimensions of these subspaces are:
| m | dimension |
|---|---|
| -2 | 1 |
| -1 | 4 |
| 0 | 6 |
| 1 | 4 |
| 2 | 1 |
Since we know that the ground state of the Heisenberg chain is a singlet, we are only interested in the subspace with $m = 0$. For our example, this subspace is given by (in increasing order): $\{|0011\rangle, |0101\rangle, |0110\rangle, |1001\rangle, |1010\rangle, |1100\rangle\}$, or, as integers, $\{3, 5, 6, 9, 10, 12\}$.
Generating the possible configurations of $N_\uparrow$ up-spins (or spinless fermions) on $N$ sites is equivalent to the problem of generating the corresponding combinations of zeroes and ones in lexicographic order.
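A simple (if not the most efficient) way to do this is to scan all integers and keep those with the right number of set bits; increasing integer order coincides with lexicographic order of the binary words:

```python
N, n_up = 4, 2

# keep the integers 0..2^N-1 whose binary word has exactly n_up ones
basis = [s for s in range(2**N) if bin(s).count('1') == n_up]
print(basis)   # [3, 5, 6, 9, 10, 12]

# lookup table: state (integer) -> position in the basis
index = {s: i for i, s in enumerate(basis)}
```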
Construction of the Hamiltonian matrix elements
We now have to address the problem of generating all the Hamiltonian matrix elements $H_{mn}$. These are obtained by the application of the Hamiltonian on each state of the basis, $H|\phi_n\rangle = \sum_m H_{mn}|\phi_m\rangle$, generating all the values $H_{mn}$. We illustrate this procedure in the following pseudocode:
for each term of the Hamiltonian
    for all the states |phi_n> in the basis
        apply the term to |phi_n>, obtaining the resulting state |phi_m>
        store the corresponding matrix element H_mn
    end for
end for
As the states are represented by binary words, the spin operators (as well as the creation and destruction operators) can be easily implemented using the logical operators AND, OR, XOR.
As a practical example let us consider the spin-1/2 Heisenberg Hamiltonian on the 4-site chain:
$$H = J\sum_{i=1}^{4}\left[S^z_i S^z_{i+1} + \frac{1}{2}\left(S^+_i S^-_{i+1} + S^-_i S^+_{i+1}\right)\right],$$
with site $5 \equiv 1$, due to the periodic boundary conditions. To evaluate the matrix elements in the basis ([basis0]) we must apply each term of $H$ on such states. The first term is called the Ising term, and is diagonal in this basis (also called the Ising basis). The last terms, or fluctuation terms, give strictly off-diagonal contributions to the matrix in this representation (when symmetries are considered, they can also have diagonal contributions). These fluctuations cause an exchange in the spin orientation between neighbors with opposite spins, e.g. $|\cdots\uparrow\downarrow\cdots\rangle \to |\cdots\downarrow\uparrow\cdots\rangle$. The way to implement spin-flip operations of this kind on the computer is by defining ‘masks’. A mask for this operation is a binary number with ‘zeroes’ everywhere, except in the positions of the spins to be flipped, e.g. $(00..0110...00)$. Then, the logical operator XOR is used between the initial state and the mask to invert the bits (spins) at the positions indicated by the mask. For example, $(0101)\ \mathrm{XOR}\ (1100) = (1001)$, i.e., $5\ \mathrm{XOR}\ 12 = 9$. It is useful to store all the masks for a given geometry in memory, generating them immediately after the tables for the sites and neighbors.
In the Heisenberg model the spin flips can be implemented using masks, with ‘zeroes’ everywhere, except in the positions of the spins to be flipped. To illustrate this, let us show the effect of the off-diagonal terms on one of the Néel configurations, where every bond is anti-aligned and therefore every bond mask produces a flip:
$$\sum_{i=1}^{4}\frac{J}{2}\left(S^+_i S^-_{i+1} + S^-_i S^+_{i+1}\right)|1010\rangle = \frac{J}{2}\left(|0011\rangle + |0110\rangle + |1001\rangle + |1100\rangle\right).$$
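The mask mechanics can be sketched in a few lines (with the convention that bit $i$ encodes site $i$; names are our own):

```python
N = 4
bonds = [(i, (i + 1) % N) for i in range(N)]          # periodic chain
masks = [(1 << i) | (1 << j) for (i, j) in bonds]     # bits i and j set

s = 0b1010                                            # Neel state, integer 10
flipped = []
for (i, j), m in zip(bonds, masks):
    if ((s >> i) & 1) != ((s >> j) & 1):              # anti-aligned pair?
        flipped.append(s ^ m)                         # XOR flips both spins
print(sorted(flipped))    # [3, 6, 9, 12]

assert (0b0101 ^ 0b1100) == 0b1001                    # 5 XOR 12 = 9
```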
After applying the Hamiltonian ([h4]) on our basis states, the reader can verify as an exercise that we obtain, for instance,
$$H|0101\rangle = -J|0101\rangle + \frac{J}{2}\left(|0011\rangle + |0110\rangle + |1001\rangle + |1100\rangle\right).$$
The resulting matrix for the operator $H$ in the $m = 0$ representation, with the basis ordered as $\{|3\rangle, |5\rangle, |6\rangle, |9\rangle, |10\rangle, |12\rangle\}$, is:
$$H = J\begin{pmatrix} 0 & 1/2 & 0 & 0 & 1/2 & 0 \\ 1/2 & -1 & 1/2 & 1/2 & 0 & 1/2 \\ 0 & 1/2 & 0 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 1/2 & -1 & 1/2 \\ 0 & 1/2 & 0 & 0 & 1/2 & 0 \end{pmatrix}.$$
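Putting the pieces together, the whole $m=0$ block can be built and diagonalized with the bit tricks above (a sketch; the ground-state energy of the 4-site periodic chain is $-2J$):

```python
import numpy as np

N, J = 4, 1.0
bonds = [(i, (i + 1) % N) for i in range(N)]   # periodic boundary conditions
basis = [s for s in range(2**N) if bin(s).count('1') == N // 2]   # m = 0
index = {s: n for n, s in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for s in basis:
    n = index[s]
    for (i, j) in bonds:
        if ((s >> i) & 1) == ((s >> j) & 1):
            H[n, n] += 0.25 * J                # Ising term, parallel spins
        else:
            H[n, n] -= 0.25 * J                # Ising term, anti-parallel
            f = index[s ^ ((1 << i) | (1 << j))]   # spin-flip (XOR with mask)
            H[f, n] += 0.5 * J

print(np.linalg.eigvalsh(H)[0])   # -2.0
```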
Obtaining the ground-state: Lanczos diagonalization
Once we have the Hamiltonian matrix, we can apply a library routine to obtain its ground state $|\psi_0\rangle$. The two algorithms widely used for this purpose are the Lanczos and Davidson diagonalizations. Both are explained at length in Ref.@noack, so we refer the reader to this material for further information. In these notes we will briefly explain the Lanczos procedure.
The basic idea of the Lanczos method is that a special basis can be constructed in which the Hamiltonian has a tridiagonal representation. This is carried out iteratively, as shown below. First, it is necessary to select an arbitrary seed vector $|f_0\rangle$ in the Hilbert space of the model being studied. If we are seeking the ground-state of the model, then it is necessary that the overlap between the actual ground-state $|\psi_0\rangle$ and the initial state $|f_0\rangle$ be nonzero. If no “a priori” information about the ground state is known, this requirement is usually easily satisfied by selecting an initial state with randomly chosen coefficients in the working basis that is being used. If some other information about the ground state is known, like its total momentum and spin, then it is convenient to initiate the iterations with a state already belonging to the subspace having those quantum numbers (and still with random coefficients within this subspace).
After $|f_0\rangle$ is selected, define a new vector by applying the Hamiltonian $H$ over the initial state. Subtracting the projection over $|f_0\rangle$, we obtain
$$|f_1\rangle = H|f_0\rangle - \frac{\langle f_0|H|f_0\rangle}{\langle f_0|f_0\rangle}\,|f_0\rangle,$$
that satisfies $\langle f_0|f_1\rangle = 0$. Now, we can construct a new state that is orthogonal to the previous two as,
$$|f_2\rangle = H|f_1\rangle - \frac{\langle f_1|H|f_1\rangle}{\langle f_1|f_1\rangle}\,|f_1\rangle - \frac{\langle f_1|f_1\rangle}{\langle f_0|f_0\rangle}\,|f_0\rangle.$$
It can be easily checked that $\langle f_0|f_2\rangle = \langle f_1|f_2\rangle = 0$. The procedure can be generalized by defining an orthogonal basis recursively as,
$$|f_{n+1}\rangle = H|f_n\rangle - a_n|f_n\rangle - b_n^2|f_{n-1}\rangle,$$
where $n = 0, 1, 2, \ldots$, and the coefficients are given by
$$a_n = \frac{\langle f_n|H|f_n\rangle}{\langle f_n|f_n\rangle}, \qquad b_n^2 = \frac{\langle f_n|f_n\rangle}{\langle f_{n-1}|f_{n-1}\rangle},$$
supplemented by $b_0 = 0$, $|f_{-1}\rangle = 0$. In this basis, it can be shown that the Hamiltonian matrix becomes,
$$H = \begin{pmatrix} a_0 & b_1 & 0 & 0 & \cdots \\ b_1 & a_1 & b_2 & 0 & \cdots \\ 0 & b_2 & a_2 & b_3 & \cdots \\ 0 & 0 & b_3 & a_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$
i.e. it is tridiagonal, as expected. Once in this form the matrix can be diagonalized easily using standard library subroutines. However, note that to diagonalize completely a Hamiltonian on a finite cluster, a number of iterations equal to the size of the Hilbert space (or the subspace under consideration) is needed. In practice this would demand a considerable amount of CPU time. However, one of the advantages of this technique is that accurate enough information about the ground state of the problem can be obtained after a small number of iterations (typically of the order of a hundred or less).
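The recursion translates almost line by line into code. A minimal sketch of the procedure with normalized vectors and no re-orthogonalization (the function name is our own), tested here on the two-spin matrix, whose singlet lies at $-3J/4$:

```python
import numpy as np

def lanczos_ground_state(H, n_iter=100, seed=0):
    """Plain Lanczos: build the tridiagonal matrix of a_n, b_n
    coefficients and return its lowest eigenvalue."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    v = rng.standard_normal(n)            # random seed vector |f_0>
    v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    a, b = [], []
    beta = 0.0
    for _ in range(min(n_iter, n)):
        w = H @ v - beta * v_prev         # H|f_n> - b_n |f_{n-1}>
        alpha = v @ w                     # a_n = <f_n|H|f_n>
        w -= alpha * v
        a.append(alpha)
        beta = np.linalg.norm(w)
        if beta < 1e-10:                  # Krylov space exhausted
            break
        b.append(beta)
        v_prev, v = v, w / beta
    T = (np.diag(a) + np.diag(b[:len(a) - 1], 1)
         + np.diag(b[:len(a) - 1], -1))
    return np.linalg.eigvalsh(T)[0]

H2 = np.array([[0.25,  0.0,   0.0,  0.0],
               [0.0,  -0.25,  0.5,  0.0],
               [0.0,   0.5,  -0.25, 0.0],
               [0.0,   0.0,   0.0,  0.25]])
print(lanczos_ground_state(H2))   # approx -0.75
```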
Another way to formulate the problem is by obtaining the tridiagonal form of the Hamiltonian starting from the Krylov basis, which is spanned by the vectors $\{|f_0\rangle, H|f_0\rangle, H^2|f_0\rangle, \ldots\}$, and asking that each new vector be orthogonal to the previous two. Notice that each new iteration of the process requires one application of the Hamiltonian. Most of the time this simple procedure works for practical purposes, but attention must be paid to the possibility of losing orthogonality between the basis vectors. This may happen due to the finite machine precision. In that case, a re-orthogonalization procedure may be required.
Notice that the Hamiltonian matrix can be very large. In state-of-the-art simulations with a large number of states, one does not build this matrix in memory explicitly, but applies the operators to the state directly inside the diagonalization routine.
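This matrix-free strategy can be sketched with SciPy's `LinearOperator`, which only needs a function that applies $H$ to a vector; here we reuse the bitwise representation of the 4-site periodic chain (whose ground energy is $-2J$):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

N, J = 4, 1.0                                  # small chain for illustration
bonds = [(i, (i + 1) % N) for i in range(N)]   # periodic boundary conditions
dim = 2 ** N

def matvec(v):
    """Apply H to a vector without ever storing the matrix."""
    v = np.asarray(v).ravel()
    w = np.zeros(dim)
    for s in range(dim):
        for (i, j) in bonds:
            if ((s >> i) & 1) == ((s >> j) & 1):
                w[s] += 0.25 * J * v[s]        # Ising term, parallel spins
            else:
                w[s] -= 0.25 * J * v[s]        # Ising term, anti-parallel
                w[s ^ ((1 << i) | (1 << j))] += 0.5 * J * v[s]  # spin flip
    return w

H = LinearOperator((dim, dim), matvec=matvec, dtype=float)
E0 = eigsh(H, k=1, which='SA')[0][0]           # smallest algebraic eigenvalue
print(E0)   # approx -2.0
```

Internally `eigsh` uses an implicitly restarted Lanczos iteration, so only matrix-vector products are ever required.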