Why the Ising model: three reasons for relevance

  • Studying the Ising model helps us understand phase transitions in a wide variety of systems.
  • Neural network models such as the Hopfield network and the Boltzmann machine are generalized forms of the Ising model.
  • The Ising model is also useful as a statistical model in its own right.

Ising model

Let $\boldsymbol{x}$, the state of an Ising model with $N$ spins, be a vector in which each component $x_n$ takes the value $-1$ or $+1$.

If two spins $m$ and $n$ are neighbors, we write $(m,n) \in {\cal N}$. The coupling between neighboring spins is $J$: set $J_{mn} = J$ if $m$ and $n$ are neighbors and $J_{mn} = 0$ otherwise. The energy of a state $\boldsymbol{x}$ is

$$ E(\boldsymbol{x};J,H) = - \left[ \frac{1}{2} \sum_{m,n} J_{mn} x_m x_n + \sum_{n} H x_n \right] , $$

where $H$ is the applied field. If $J > 0$ then the model is ferromagnetic and if $J < 0$ it is antiferromagnetic.
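As a concrete reference, here is a minimal sketch of this energy function on a periodic square lattice. The function name `ising_energy` and the vectorized bond counting are my own illustration, not part of the original code:

import numpy as np

def ising_energy(s, J=1.0, H=0.0):
    # s is a 2-D array of +/-1 spins with periodic boundaries.
    # Each nearest-neighbour bond is counted once via the down/right
    # neighbours, which plays the role of the 1/2 double-counting factor.
    bonds = np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))
    return -(J * bonds + H * np.sum(s))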

The ferromagnetic case is simply a model of a magnet. Consider a metal suitable for becoming a magnet (I omit the details of why it is suitable). When the temperature is high, the spins of the metal's atoms point in random directions. As the metal cools, the local interactions between nearby spins become dominant, and groups of aligned spins form. As the temperature keeps dropping and time passes, these groups grow larger until most spins point in one direction, and the resulting material is a magnet. The energy function (Hamiltonian) above is thus the law governing the neighboring spins. The spin interaction is not too strong, so the approximation that considers only nearest-neighbor interactions is valid.

We use a $64 \times 64$ lattice with periodic boundary conditions. The probability that a given spin in the lattice takes the value $+1$ is

$$ P(+1 | b_n)= \frac{1}{1+\exp(- 2 \beta b_n)}, $$

where $\beta = 1/k_{\rm B}T$ and $b_n$ is the local field

$$ b_n = \sum_{m:(m,n) \in {\cal N}} J x_m + H. $$

This probability has the form of the Fermi distribution. Atomic and subatomic particles with half-integer spin are governed by that distribution: two or more fermions cannot occupy the same quantum state. Because a single spin here has only two states, its partition function is very simple, and the probability of one state is as above. The two spin states are $\{+1, -1\}$ rather than $\{+1, 0\}$.
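To see where this form comes from: with all the other spins held fixed, the two states $x_n = \pm 1$ contribute energies $\mp b_n$, so the conditional Boltzmann distribution of the single spin is

$$ P(+1 \mid b_n) = \frac{e^{\beta b_n}}{e^{\beta b_n} + e^{-\beta b_n}} = \frac{1}{1 + e^{-2\beta b_n}} . $$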

In Gibbs sampling, spin $n$ is set to $+1$ with this probability and to $-1$ otherwise; then the next spin to update is selected at random. After many iterations, this procedure converges to the equilibrium distribution.
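A minimal sketch of this single-site Gibbs (heat-bath) update, assuming $J = 1$, $H = 0$, and a spin array `s` like the one used below (the function name `gibbs_update` is mine):

import random
import numpy as np

def gibbs_update(s, beta, l):
    # pick a random site and resample it from its conditional distribution
    i, j = random.randrange(l), random.randrange(l)
    # local field from the four neighbours (J = 1, H = 0, periodic boundaries)
    b = s[(i-1)%l, j] + s[(i+1)%l, j] + s[i, (j-1)%l] + s[i, (j+1)%l]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * b))
    s[i, j] = 1.0 if random.random() < p_up else -1.0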

Alternatively, we can use Metropolis-Hastings sampling. Flipping spin $n$ changes the energy by

$$ \Delta E = 2 x_n b_n , $$

and the flip is accepted with probability

$$ P( {\rm accept} ; \Delta E , \beta ) = \left\{ \begin{array}{cc} 1 & \Delta E \leq 0 \\ \exp( - \beta \Delta E ) & \Delta E > 0 . \end{array} \right. $$

I use Metropolis-Hastings sampling in the following Python code.

from __future__ import division
import numpy as np
import random
import matplotlib.pyplot as plt

%matplotlib inline
plt.style.use('ggplot')

Low temperature, $J = 1$, ferromagnetism

l = 64        # lattice side length
T = 2.0       # low temperature (below the critical point)
beta = 1/T

s = np.ones((l,l))

# randomize the initial spins
for i in range(l):
    for j in range(l):
        if random.random() > 0.5:
            s[i,j] = -s[i,j]
plt.subplot(1,2,1)
plt.pcolor(s)
plt.ylabel('Ising model')
plt.xlabel('random')

dE = np.zeros((l,l))
for turns in range(300):   # 300 Metropolis sweeps over the lattice
    for i in range(l):
        for j in range(l):
            # dE stores x_n * b_n = (Delta E)/2 for J = 1, H = 0, so the
            # acceptance exp(-2*beta*dE) below equals exp(-beta*Delta E)
            dE[i,j] = s[i,j]*(s[(i-1)%l,j]+s[(i+1)%l,j]+s[i,(j-1)%l]+s[i,(j+1)%l])
            if dE[i,j] < 0 or random.random() < np.exp(-2*beta*dE[i,j]):
                s[i,j] = -s[i,j]
plt.subplot(1,2,2)
plt.pcolor(s)
plt.xlabel('low temperature after enough time')

(figure: random initial spins on the left; spin configuration after 300 sweeps at low temperature on the right)

The left figure shows the randomized initial spins, and the right figure shows the ferromagnetic result at low temperature. Most of the spins are aligned in the same direction, so the material will act as a magnet.

Mean energy and mean square magnetization

Using the obtained samples, we can calculate various physical observables. The mean magnetization is just the average of the spins over the lattice, and the mean energy is the expectation value of the energy under the probability distribution. For a single two-state system with energy gap $\epsilon$,

$$ E(\beta) = \epsilon \frac{ \exp( - \beta \epsilon ) }{ 1 + \exp( - \beta \epsilon ) } = \epsilon \frac{ 1}{ 1 + \exp( \beta \epsilon ) } $$
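To spell out the step: taking the lower state to have energy $0$ and the upper state $\epsilon$, the two-state partition function gives

$$ Z = 1 + \exp(-\beta\epsilon), \qquad E(\beta) = \frac{0 \cdot 1 + \epsilon \exp(-\beta\epsilon)}{Z} = \epsilon \, \frac{\exp(-\beta\epsilon)}{1+\exp(-\beta\epsilon)} , $$

which reduces to the rightmost expression above after dividing through by $\exp(-\beta\epsilon)$.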

Let us draw the plot as a function of temperature.

def meanEm(T, turns=300, l=64):
    beta = 1/T

    s = np.ones((l,l))

    # randomize the initial spins
    for i in range(l):
        for j in range(l):
            if random.random() > 0.5:
                s[i,j] = -s[i,j]

    dE = np.zeros((l,l))
    for _ in range(turns):
        for i in range(l):
            for j in range(l):
                # dE stores x_n * b_n = (Delta E)/2 for J = 1, H = 0
                dE[i,j] = s[i,j]*(s[(i-1)%l,j]+s[(i+1)%l,j]+s[i,(j-1)%l]+s[i,(j+1)%l])
                if dE[i,j] < 0 or random.random() < np.exp(-2*beta*dE[i,j]):
                    s[i,j] = -s[i,j]
    # mean energy per spin (-x_n b_n / 2 avoids double counting the bonds)
    # and the absolute mean magnetization
    return np.mean(-dE/2), abs(np.sum(s))/l**2

Ts = np.arange(0.2,15,0.2)
len(Ts), Ts[0]
(74, 0.20000000000000001)
meanEs = []
meanMs = []
for T in Ts:
    E, M = meanEm(T)   # one run per temperature; use both outputs
    meanEs.append(E)
    meanMs.append(M)
plt.figure(figsize=(6,8))

plt.subplot(2,1,1)
plt.plot(np.log(Ts[4:50]),meanEs[4:50],'.')
plt.ylabel('Mean energy')
plt.subplot(2,1,2)
plt.plot(np.log(Ts[4:50]),meanMs[4:50],'.')
plt.xlabel('log scale temperature')
plt.ylabel('Mean square magnetization')


(figure: mean energy and mean square magnetization vs. log temperature)

In both graphs, the physical properties change dramatically around $\log T \sim 0.8$, i.e. $T \approx 2.2$. We can say that this is the turning point at which the metal becomes a magnet; indeed, the exact critical temperature of the 2D Ising model (Onsager's solution, with $J = k_{\rm B} = 1$) is $T_c = 2/\ln(1+\sqrt{2}) \approx 2.27$, so $\log T_c \approx 0.82$.

An attempt to obtain the energy fluctuation

Based on the probability distribution and the samples, we can try to calculate the energy fluctuation, using the fluctuation-dissipation relation $\langle \Delta E^2 \rangle = T^2 \, \partial \bar{E} / \partial T$ (with $k_{\rm B} = 1$).



var = []
for i in range(len(Ts)-1):
    DE = meanEs[i+1] - meanEs[i]
    DT = Ts[i+1] - Ts[i]
    # finite-difference estimate of <Delta E^2> = T^2 * dE/dT
    var.append(DE/DT * Ts[i]**2)
plt.plot(Ts[4:-1], var[4:], '.-')

(figure: finite-difference estimate of the energy fluctuation vs. temperature)

Of course, the result is very noisy. The mean-energy curve is not smooth, so it does not provide good derivatives.
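An alternative, not used in the code above, is to estimate the fluctuation $\langle E^2 \rangle - \langle E \rangle^2$ directly from the energy samples of a single run instead of differentiating the noisy mean-energy curve. A minimal sketch (the function name `energy_fluctuation`, the burn-in length, and the vectorized energy evaluation are my own choices):

def energy_fluctuation(T, sweeps=300, burn_in=100, l=64):
    beta = 1/T
    s = np.where(np.random.rand(l, l) < 0.5, 1.0, -1.0)  # random initial spins
    energies = []
    for t in range(sweeps):
        for i in range(l):
            for j in range(l):
                # full Delta E = 2 x_n b_n for J = 1, H = 0
                dE = 2*s[i,j]*(s[(i-1)%l,j]+s[(i+1)%l,j]+s[i,(j-1)%l]+s[i,(j+1)%l])
                if dE < 0 or random.random() < np.exp(-beta*dE):
                    s[i,j] = -s[i,j]
        if t >= burn_in:
            # total energy; each bond counted once via the down/right neighbours
            E = -np.sum(s*(np.roll(s,1,axis=0)+np.roll(s,1,axis=1)))
            energies.append(E)
    return np.var(energies)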

Antiferromagnetic

# low T, J = -1

l = 64
T = 2.0
beta = 1/T

s = np.ones((l,l))

# randomize the initial spins
for i in range(l):
    for j in range(l):
        if random.random() > 0.5:
            s[i,j] = -s[i,j]
plt.subplot(1,2,1)
plt.pcolor(s)
plt.ylabel('Ising model')
plt.xlabel('random')

dE = np.zeros((l,l))
for _ in range(300):
    for i in range(l):
        for j in range(l):
            # dE stores x_n * b_n = (Delta E)/2 for J = -1, H = 0 (note the sign)
            dE[i,j] = -s[i,j]*(s[(i-1)%l,j]+s[(i+1)%l,j]+s[i,(j-1)%l]+s[i,(j+1)%l])
            if dE[i,j] < 0 or random.random() < np.exp(-2*beta*dE[i,j]):
                s[i,j] = -s[i,j]
plt.subplot(1,2,2)
plt.pcolor(s)
plt.xlabel('low temperature after enough time')

(figure: random initial spins on the left; low-temperature antiferromagnetic configuration on the right)

Mean energy and mean square magnetization

# J = -1

def meanEm(T, turns=300, l=64):
    beta = 1/T

    s = np.ones((l,l))

    # randomize the initial spins
    for i in range(l):
        for j in range(l):
            if random.random() > 0.5:
                s[i,j] = -s[i,j]

    dE = np.zeros((l,l))
    for _ in range(turns):
        for i in range(l):
            for j in range(l):
                # dE stores x_n * b_n = (Delta E)/2 for J = -1, H = 0
                dE[i,j] = -s[i,j]*(s[(i-1)%l,j]+s[(i+1)%l,j]+s[i,(j-1)%l]+s[i,(j+1)%l])
                if dE[i,j] < 0 or random.random() < np.exp(-2*beta*dE[i,j]):
                    s[i,j] = -s[i,j]
    return np.mean(-dE/2), abs(np.sum(s))/l**2

Ts = np.arange(0.2,15,0.2)

meanEs = []
meanMs = []
for T in Ts:
    E, M = meanEm(T)
    meanEs.append(E)
    meanMs.append(M)

plt.figure(figsize=(6,8))

plt.subplot(2,1,1)
plt.plot(np.log(Ts[4:50]),meanEs[4:50],'.')
plt.ylabel('Mean energy')
plt.subplot(2,1,2)
plt.plot(np.log(Ts[4:50]),meanMs[4:50],'.')
plt.xlabel('log scale temperature')
plt.ylabel('Mean square magnetization')    

(figure: mean energy and mean square magnetization vs. log temperature for J = -1)

The magnetization graph is really bad. We could improve it with more samples and error bars, but I won't do that here. (For an antiferromagnet the net magnetization is also not the right order parameter; the staggered magnetization, which measures the checkerboard pattern, is.) My purpose is not to understand the metallic properties or the phase transition of the model, but to see how to use the samples to find physical quantities of the model.

J = -1, and a lattice with odd side length


def meanEm(T, turns=30, l=9):
    beta = 1/T

    s = np.ones((l,l))

    # randomize the initial spins
    for i in range(l):
        for j in range(l):
            if random.random() > 0.5:
                s[i,j] = -s[i,j]

    # keep 100 flip attempts per site so the energy is averaged over more samples
    dE = np.zeros((l,l,100))
    for _ in range(turns):
        for i in range(l):
            for j in range(l):
                for k in range(100):
                    # dE stores x_n * b_n = (Delta E)/2 for J = -1, H = 0
                    dE[i,j,k] = -s[i,j]*(s[(i-1)%l,j]+s[(i+1)%l,j]+s[i,(j-1)%l]+s[i,(j+1)%l])
                    if dE[i,j,k] < 0 or random.random() < np.exp(-2*beta*dE[i,j,k]):
                        s[i,j] = -s[i,j]
    return np.mean(-dE/2), abs(np.sum(s))/l**2

Ts = np.arange(0.2,15,0.2)

meanEs = []
meanMs = []
for T in Ts:
    E, M = meanEm(T)
    meanEs.append(E)
    meanMs.append(M)

plt.figure(figsize=(6,8))

plt.subplot(2,1,1)
plt.plot(np.log(Ts[4:50]),meanEs[4:50],'.')
plt.ylabel('Mean energy')
plt.subplot(2,1,2)
plt.plot(np.log(Ts[4:50]),meanMs[4:50],'.')
plt.xlabel('log scale temperature')
plt.ylabel('Mean square magnetization')    

(figure: mean energy and mean square magnetization vs. log temperature for J = -1 on a 9 × 9 lattice)

The minimum energy near $T = 0$ is higher than in the $J = 1$ case because of the boundary conditions: on a periodic lattice with odd side length, a perfect antiferromagnetic checkerboard pattern cannot wrap around, so some bonds are necessarily frustrated.
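We can check this frustration by brute force on a tiny odd lattice. The enumeration below is my own illustration; for $J = 1$, or for an even side length, the same loop returns the unfrustrated value of $-2$ per spin:

import itertools

l = 3       # odd side length
J = -1.0    # antiferromagnetic coupling
best = np.inf
for bits in itertools.product([-1.0, 1.0], repeat=l*l):
    s = np.array(bits).reshape(l, l)
    # periodic-boundary energy; each bond counted once via the down/right neighbours
    E = -J * np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))
    best = min(best, E)
print(best / l**2)   # ground-state energy per spin, higher than -2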

Variational method: mean-field approximation of the Ising model

The Ising model is a very good model on which to apply the mean-field approximation. We approximate the true distribution with a separable approximating distribution

$$ Q(\boldsymbol{x}; \boldsymbol{a}) = \frac{1}{Z_Q} \exp \left({ \sum_n a_n x_n }\right). $$

As a result, we obtain two self-consistent equations:

$$ a_m = \beta \left( \sum_{n} J_{mn} \bar{x}_n + H \right) $$

and

$$ \bar{x}_n = \tanh( a_n ) . $$
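A minimal sketch of solving these two equations by fixed-point iteration on the lattice. The assumptions here are mine: $J = 1$, $H = 0$, synchronous updates, and the name `mean_field`:

def mean_field(beta, l=64, iters=200, J=1.0, H=0.0):
    # xbar[i,j] approximates the mean spin <x_n> under the separable Q
    xbar = 0.01 * np.random.randn(l, l)   # tiny random start to break symmetry
    for _ in range(iters):
        # mean local field from the four neighbours (periodic boundaries)
        b = J * (np.roll(xbar, 1, 0) + np.roll(xbar, -1, 0) +
                 np.roll(xbar, 1, 1) + np.roll(xbar, -1, 1)) + H
        xbar = np.tanh(beta * b)          # a_n = beta * b_n and xbar_n = tanh(a_n)
    return xbar

print(np.mean(np.abs(mean_field(beta=1/2.0))))   # nonzero below the mean-field T_c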

This mean-field calculation is also one of the first things physics students do after learning the Hamiltonian (energy) of a system. I omit the detailed procedure. You can consult the book, but as I mentioned above, the book does not kindly explain the fermionic partition function, which is essential for computing the probability distribution of the spins and also the entropy. If there is any request, I will write it up.