Hartree method for Helium ground state


After long preparation, I am ready to do this problem.

The two electrons in the helium ground state occupy the same spatial orbital but have different spins. Thus, the total wavefunction is

\displaystyle \Psi(x,y) = \frac{1}{\sqrt{2}}(\uparrow \downarrow - \downarrow \uparrow) \psi(x) \psi(y)

Since the Coulomb potential is spin-independent, the Hartree-Fock method reduces to the Hartree method. The Hartree operator is

F(x) = H(x) + \langle \psi(y)|G(x,y) |\psi(y) \rangle

where the single-particle Hamiltonian and mutual interaction are

\displaystyle H(x) = -\frac{\hbar^2}{2m} \nabla^2 - \frac{Ze^2}{4\pi\epsilon_0 x} = -\frac{1}{2}\nabla^2 - \frac{Z}{x}

\displaystyle G(x,y) = \frac{e^2}{4\pi\epsilon_0|x-y|} = \frac{1}{|x-y|}

In the last step, we used atomic units, such that \hbar = 1, m=1, e^2 = 4\pi\epsilon_0. The energy is then in units of Hartree, 1 \textrm{H} = 27.2114 \textrm{eV}.


We are going to use hydrogen-like orbitals as a basis set.

\displaystyle b_i(r) = R_{nl}(r)Y_{lm}(\Omega) \\= \sqrt{\frac{(n-l-1)!Z}{n^2(n+l)!}}e^{-\frac{Z}{n}r} \left( \frac{2Z}{n}r \right)^{l+1} L_{n-l-1}^{2l+1}\left( \frac{2Z}{n} r \right) \frac{1}{r} Y_{lm}(\Omega)

I like to leave the 1/r, because in the integration \int b^2 r^2 dr, the r^2 is cancelled. Also, i = nlm is a compact index for the orbital.

Using basis set expansion, we need to calculate the matrix elements of

\displaystyle H_{ij}=\langle b_i(x) |H(x)|b_j(x)\rangle = -\delta_{ij} \frac{Z^2}{2n^2}

\displaystyle G_{ij}^{hk} = \langle b_i(x) b_h(y) |G(x,y) |b_j(x) b_k(y) \rangle


Now, we will concentrate on evaluating the mutual interaction integral.

Using the well-known expansion,

\displaystyle G(x,y) = \frac{1}{|x-y|}=\frac{1}{r_{12}} = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} \frac{4\pi}{2l+1} \frac{r_<^l}{r_>^{l+1}} Y_{lm}^{*}(\Omega_1)Y_{lm}(\Omega_2)

The angular integral

\displaystyle \langle Y_i(x) Y_h(y)| Y_{lm}^{*}(x) Y_{lm}(y) | Y_j(x) Y_k(y) \rangle \\ = \big( \int Y_i^{*}(x) Y_{lm}^{*}(x) Y_j(x) dx \big) \big( \int Y_h^{*}(y) Y_{lm}(y) Y_k(y) dy \big)

where the integral \int dx = \int_{0}^{\pi} \int_{0}^{2\pi} \sin(\theta_x) d\theta_x d\phi_x .

From this post, the triple integral of spherical harmonics is easy to compute.

\displaystyle \int Y_h^{*}(y) Y_{lm}(y) Y_k(y) dy = \sqrt{\frac{(2l+1)(2l_k+1)}{4\pi (2l_h+1)}} C_{l0l_k0}^{l_h0} C_{lm l_km_k}^{l_hm_h}

The Clebsch-Gordan coefficients impose a restriction on l, m.

The radial part,

\displaystyle \langle R_i(x) R_h(y)| \frac{r_<^l}{r_>^{l+1}} | R_j(x) R_k(y) \rangle \\ = \int_0^{\infty} \int_{0}^{\infty} R_i(x) R_h(y) \frac{r_<^l}{r_>^{l+1}} R_j(x) R_k(y) y^2 x^2 dy dx \\ = \int_0^{\infty} R_i(x) R_j(x) \\ \left( \int_{0}^{x} R_h(y) R_k(y) \frac{y^l}{x^{l+1}} y^2dy  + \int_{x}^{\infty} R_h(y)R_k(y) \frac{x^l}{y^{l+1}}  y^2 dy   \right) x^2 dx

The algebraic calculation of the integral is complicated, but after the restriction on l from the Clebsch-Gordan coefficients, only a few terms need to be calculated.
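As a numerical sanity check, the split form above can be compared against a direct 2-D integral over r_< and r_>. A sketch in Python with SciPy (the hydrogen 1s radial function with Z = 1 is used as an arbitrary test case; for the 1s-1s, l = 0 combination the result is the known Slater integral 5/8):

```python
import numpy as np
from scipy.integrate import quad, dblquad

R = lambda r: 2 * np.exp(-r)   # hydrogen 1s radial function (Z = 1), as a test case
l = 0

# direct 2-D integral with r_< = min(x, y), r_> = max(x, y)
direct = dblquad(lambda y, x: R(x)**2 * R(y)**2
                 * min(x, y)**l / max(x, y)**(l + 1) * x**2 * y**2,
                 0, 30, lambda x: 0, lambda x: 30)[0]

# split form: inner integral over y in [0, x] and [x, infinity)
def inner(x):
    lo = quad(lambda y: R(y)**2 * y**l / x**(l + 1) * y**2, 0, x)[0]
    hi = quad(lambda y: R(y)**2 * x**l / y**(l + 1) * y**2, x, 30)[0]
    return lo + hi

split = quad(lambda x: R(x)**2 * inner(x) * x**2, 0, 30)[0]
print(direct, split)   # both should be 5/8 = 0.625 for this case
```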


The general consideration is done. Now, we use the first two s-states as a basis set.

\displaystyle b_{1s}(r) = R_{10}(r)Y_{00}(\Omega) = 2Z^{3/2}e^{-Zr}Y_{00}(\Omega)

\displaystyle b_{2s}(r) = R_{20}(r)Y_{00}(\Omega) = \frac{1}{\sqrt{8}}Z^{3/2}(2-Zr)e^{-Zr/2}Y_{00}(\Omega)

These are both s-state orbitals. Thus, the Clebsch-Gordan coefficient is

\displaystyle C_{lm l_k m_k}^{l_h m_h} = C_{lm00}^{00}

This forces l = m = 0, so the sum has only one term, and the mutual interaction becomes

\displaystyle G(x,y) = \frac{1}{|x-y|}=\frac{1}{r_{12}} = 4\pi \frac{1}{r_>} Y_{00}^{*}(\Omega_1)Y_{00}(\Omega_2)

The angular part

\displaystyle \langle Y_i(x) Y_h(y)| Y_{00}^{*}(x) Y_{00}(y) | Y_j(x) Y_k(y) \rangle = \frac{1}{4\pi}

Thus, the mutual interaction matrix element is

G_{ij}^{hk} = \displaystyle \langle b_i(x) b_h(y) |G(x,y) |b_j(x) b_k(y) \rangle = \langle R_i(x) R_h(y)| \frac{1}{r_>} | R_j(x) R_k(y) \rangle

The radial part

G_{ij}^{hk} = \displaystyle \langle R_i(x) R_h(y)| \frac{1}{r_>} | R_j(x) R_k(y) \rangle \\ \begin{pmatrix} G_{11}^{hk} & G_{12}^{hk} \\ G_{21}^{hk} & G_{22}^{hk} \end{pmatrix} = \begin{pmatrix} \begin{pmatrix} G_{11}^{11} & G_{11}^{12} \\ G_{11}^{21} & G_{11}^{22} \end{pmatrix} & \begin{pmatrix} G_{12}^{11} & G_{12}^{12} \\ G_{12}^{21} & G_{12}^{22} \end{pmatrix} \\ \begin{pmatrix} G_{21}^{11} & G_{21}^{12} \\ G_{21}^{21} & G_{21}^{22} \end{pmatrix} & \begin{pmatrix} G_{22}^{11} & G_{22}^{12} \\ G_{22}^{21} & G_{22}^{22} \end{pmatrix} \end{pmatrix} \\= \begin{pmatrix} \begin{pmatrix} 1.25 & 0.17871 \\ 0.17871 & 0.419753 \end{pmatrix} & \begin{pmatrix} 0.17871 & 0.0438957 \\ 0.0438957 & 0.0171633 \end{pmatrix} \\ \begin{pmatrix} 0.17871 & 0.0438957 \\ 0.0438957 & 0.0171633 \end{pmatrix} & \begin{pmatrix} 0.419753 & 0.0171633 \\ 0.0171633 & 0.300781 \end{pmatrix} \end{pmatrix}

We can easily see that G_{ij}^{hk} = G_{ji}^{hk} = G_{ij}^{kh} = G_{hk}^{ij} = G_{ji}^{kh} . Thus, if we flatten the matrix of matrices, it is Hermitian, or symmetric.

We also notice that G_{ij}^{hk} is a matrix of matrices. The inner matrix runs over the hk indices, while the bigger matrix runs over the ij indices. This will simplify the bookkeeping.
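The entries above can be reproduced by direct numerical integration. A sketch in Python with SciPy (the post used Mathematica), assuming Z = 2 and the 1s, 2s radial functions given earlier; the diagonal entries match the known closed forms 5Z/8 and 77Z/512:

```python
import numpy as np
from scipy.integrate import dblquad

Z = 2.0
R = {1: lambda r: 2 * Z**1.5 * np.exp(-Z * r),                         # R_10
     2: lambda r: Z**1.5 / np.sqrt(8) * (2 - Z*r) * np.exp(-Z*r/2)}    # R_20

def G(i, j, h, k):
    """<R_i(x) R_h(y) | 1/r_> | R_j(x) R_k(y)>; only the l = 0 term survives."""
    f = lambda y, x: (R[i](x) * R[j](x) * R[h](y) * R[k](y)
                      / max(x, y) * x**2 * y**2)
    return dblquad(f, 0, 30, lambda x: 0, lambda x: 30)[0]

print(G(1, 1, 1, 1))   # G_11^11 = 5Z/8 = 1.25
print(G(1, 1, 2, 2))   # G_11^22 = 17Z/81 ~ 0.419753
print(G(2, 2, 2, 2))   # G_22^22 = 77Z/512 ~ 0.300781
```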


Now, we can start doing the Hartree method.

The general solution of the wave function is

\psi(x) = a_1 b_{1s}(x) + a_2 b_{2s}(x)

The Hartree matrix is

F_{ij} = H_{ij} + \sum_{h,k} a_h a_k G_{ij}^{hk}

In this simple 2-state example, the indices h, k run from 1 to 2. Remembering that G_{ij}^{hk} is a matrix of matrices and using the identity G_{ij}^{hk} = G_{hk}^{ij}, the sum becomes:

\sum_{h,k} a_h a_k G_{ij}^{hk} = a_1^2 G_{11} + a_1 a_2 (G_{12} + G_{21}) + a_2^2 G_{22},

where G_{11}, G_{12}, G_{21}, G_{22} are matrices over the ij indices with the hk indices fixed; by the identity above, these are just the blocks of the bigger matrix.

The first trial wave function is the hydrogen-like orbital,

\psi^{(0)}(x) = b_{1s}(x)

F_{ij}^{(0)} = \begin{pmatrix} -2 & 0 \\ 0 & -0.5 \end{pmatrix}  + \begin{pmatrix} 1.25 & 0.17871 \\ 0.17871 & 0.419753 \end{pmatrix}

Solving the eigen system, we have the energy after the 1st trial,

\epsilon^{(1)} = -0.794702 , (a_1^{(1)}, a_2^{(1)}) = (-0.970112, 0.242659)

After the 13th trial,

\epsilon^{(13)} = -0.880049 , (a_1^{(13)}, a_2^{(13)}) = (-0.981015, 0.193931)

F_{ij}^{(13)} = \begin{pmatrix} -2 & 0 \\ 0 & -0.5 \end{pmatrix}  + \begin{pmatrix} 1.15078 & 0.155932 \\ 0.155932 & 0.408748 \end{pmatrix}

Thus, the mixing of the 2s state is only 3.7%.

Since the eigen energy contains both the 1-body and the 2-body energy, the total energy for 2 electrons is

E_2 = 2 \epsilon^{(13)} - G = -2.82364 \textrm{H} = -76.835 \textrm{eV}

where

G = \langle \psi(x) \psi(y) |G(x,y) |\psi(x) \psi(y) \rangle = 1.06354 \textrm{H} = 28.9403 \textrm{eV}
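The whole iteration can be reproduced in a few lines. A minimal sketch in Python (the post used Mathematica), hard-coding the H and G matrix elements quoted above:

```python
import numpy as np

H = np.diag([-2.0, -0.5])
# two-electron integrals G[i, j, h, k] from the matrix of matrices above (Z = 2)
G = np.zeros((2, 2, 2, 2))
G[0, 0] = [[1.25, 0.17871], [0.17871, 0.419753]]
G[0, 1] = [[0.17871, 0.0438957], [0.0438957, 0.0171633]]
G[1, 0] = G[0, 1]
G[1, 1] = [[0.419753, 0.0171633], [0.0171633, 0.300781]]

a = np.array([1.0, 0.0])                 # first trial: pure 1s
for _ in range(50):
    F = H + np.einsum('h,k,ijhk->ij', a, a, G)   # Hartree matrix
    eps, v = np.linalg.eigh(F)
    a = v[:, 0]                          # lowest eigenvector

g = np.einsum('i,j,h,k,ijhk->', a, a, a, a, G)   # <psi psi|G|psi psi>
E2 = 2 * eps[0] - g
print(eps[0], a, E2)   # eps ~ -0.880049, E2 ~ -2.82364 H
```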

So the energies are:

From He to He++:  E_2 = -2.82364 \textrm{H} = -76.835 \textrm{eV}
From He+ to He++:  E_1^+ = -Z^2/2 = -2 \textrm{H} = -54.422 \textrm{eV}
From He to He+:  E_1 = E_2 - E_1^+ = -0.823635 \textrm{H} =  -22.4123 \textrm{eV}

The experimental values for the helium atom are

E_1(exp) = -0.90357 \textrm{H} = -24.587 \textrm{eV}
E_1^+(exp) = -1.99982 \textrm{H} = -54.418 \textrm{eV}
E_2(exp) = -2.90339 \textrm{H} = -79.005 \textrm{eV}

The difference of E_1 from the experimental value is 2.175 eV. The following plot shows the Coulomb potential, the screening due to the presence of the other electron, the resultant mean field, the energy, and r \psi(x) .

Helium.PNG


Usually, the Hartree method underestimates the binding energy, because it neglects correlations, for example, pairing and spin dependence. In our calculation, the magnitude of E_2 is indeed underestimated.

From (a_1^{(13)}, a_2^{(13)}) = (-0.981015, 0.193931) , we can see that the mutual interaction between the 1s and 2s states is attractive, while the interactions between 1s-1s and 2s-2s are repulsive. The repulsion can be easily understood, but I am not sure how to explain the attraction between the 1s and 2s states.

Since the mass correction and the fine-structure correction are of order 10^{-3} \textrm{eV} , the missing 0.2 eV should be due to something else, for example, the incomplete basis set.

If the basis set only contains the 1s orbital, the mutual interaction is 1.25 Hartree = 34.014 eV. Thus, the mixing reduces the interaction by 5.07 eV, for just 3.7% mixing.

I then included the 3s state,

\epsilon^{(13)} = -0.888475 , (a_1^{(13)}, a_2^{(13)}, a_3^{(13)}) = (0.981096, -0.181995, -0.06579)

The mutual energy is further reduced to 1.05415 Hartree = 28.6848 eV, and E_2 = -77.038 \textrm{eV} . If the 4s orbital is included, E_2 = -77.1058 \textrm{eV} . We can expect that, as more orbitals are included, E_2 will approach E_2(exp).

Hartree-Fock method for 1D infinite potential well


In the previous post, I tested my understanding of the Hartree method. Now, I move on to the Hartree-Fock method. The “mean field” energy of the Hartree-Fock method is

\displaystyle G_{\alpha \beta}= \sum_{j=i+1}^{N} \langle \psi_{\alpha}(i) \psi_{\nu} (j) | G(i,j) \left(| \psi_{\beta}(i) \psi_{\nu}(j) \rangle  - |\psi_{\nu}(i) \psi_{\beta}(j) \rangle \right) \\ = \langle \alpha \nu | \beta \nu \rangle - \langle \alpha \nu | \nu \beta \rangle

I also use the same method, in which, the trial wave function is replaced every iteration and the integration is calculated when needed.

Since the total wave function must be antisymmetric under permutation, one state can be occupied by only one particle. Thus, if we use the ground state in the mean field, the “meaningful” wave functions are the other states.

It is interesting that the mean field energy is zero when \alpha = \nu ; the consequence is that there is no mean field for the same state. Suppose the mean field is constructed using the ground state, and we only use 3 states. The direct term is

G_D = \begin{pmatrix} \langle 11|11 \rangle & \langle 11|21 \rangle & \langle 11|31 \rangle \\  \langle 21|11 \rangle & \langle 21|21 \rangle & \langle 21|31 \rangle \\ \langle 31|11 \rangle & \langle 31|21 \rangle & \langle 31|31 \rangle \end{pmatrix}

The exchange term is

G_E = \begin{pmatrix} \langle 11|11 \rangle & \langle 11|12 \rangle & \langle 11|13 \rangle \\  \langle 21|11 \rangle & \langle 21|12 \rangle & \langle 21|13 \rangle \\ \langle 31|11 \rangle & \langle 31|12 \rangle & \langle 31|13 \rangle \end{pmatrix}

Due to the symmetry of the mutual interaction, some off-diagonal terms cancel. For example,

\displaystyle \langle 1 1 | 3 1 \rangle = \int \psi_1^*(x) \psi_1^*(y) \cos(x-y) \psi_3(x) \psi_1(y) dy dx

\displaystyle \langle 1 1 | 1 3 \rangle = \int \psi_1^*(x) \psi_1^*(y) \cos(x-y) \psi_1(x) \psi_3(y) dy dx

These two integrals are the same. In fact,

\langle \alpha \nu | \beta \nu \rangle - \langle \alpha \nu | \nu \beta \rangle = 0

whenever \alpha = \nu , since

\displaystyle \langle \nu \nu | \beta \nu \rangle = \int \psi_\nu^*(x) \psi_\nu^*(y) \cos(x-y) \psi_\beta(x) \psi_\nu(y) dy dx

\displaystyle \langle \nu \nu | \nu \beta \rangle = \int \psi_\nu^*(x) \psi_\nu^*(y) \cos(x-y) \psi_\nu(x) \psi_\beta(y) dy dx

We can see that, interchanging x \leftrightarrow y , the direct term and the exchange term are identical, and the mean field energy is zero. Also, when \beta = \nu the mean field energy is zero.

Due to the zero mean field, the off-diagonal terms of the Hamiltonian H_{\alpha \beta} with \alpha = \nu or \beta = \nu are zero. Then, the eigen energy is the same as the diagonal term and the eigen vector is unchanged.


Back to the case, the direct matrix at the 1st trial is,

G_D = \begin{pmatrix} 0.720506 & 0 & -0.144101 \\  0 & 0.576405 & 0 \\ -0.144101 & 0 & 0.555819 \end{pmatrix}

The exchange matrix is

G_E = \begin{pmatrix} 0.720506 & 0 & -0.144101 \\  0 & 0.25 & 0 \\ -0.144101 & 0 & 0.0288202 \end{pmatrix}

Thus, the Fock matrix is

F = \begin{pmatrix} 1 & 0 & 0  \\  0 & 4.3264 & 0 \\ 0 & 0 & 9.527 \end{pmatrix}
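These matrices can be checked by direct integration. A sketch in Python with SciPy, assuming the infinite-well basis \sqrt{2/\pi}\sin(nx) on [0, \pi] and the interaction \cos(x-y) with g_0 = 1 (as in the accompanying Hartree post):

```python
import numpy as np
from scipy.integrate import dblquad

psi = lambda n, x: np.sqrt(2/np.pi) * np.sin(n * x)
nu = 1   # mean field built from the ground state

def tb(a, n1, b, n2):
    """Two-body integral <a n1 | cos(x-y) | b n2>."""
    f = lambda y, x: psi(a, x) * psi(n1, y) * np.cos(x - y) * psi(b, x) * psi(n2, y)
    return dblquad(f, 0, np.pi, lambda x: 0, lambda x: np.pi)[0]

N = 3
GD = np.array([[tb(a, nu, b, nu) for b in range(1, N+1)] for a in range(1, N+1)])
GE = np.array([[tb(a, nu, nu, b) for b in range(1, N+1)] for a in range(1, N+1)])
F = np.diag([n**2 for n in range(1, N+1)]) + GD - GE   # kinetic part is n^2
print(np.round(F, 4))   # diagonal ~ (1, 4.3264, 9.527), off-diagonal ~ 0
```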

Therefore, the eigen states are the basis states, unchanged:

\displaystyle \psi_1(x) = \sqrt{\frac{2}{\pi}} \sin(x)

\displaystyle \psi_2(x) = \sqrt{\frac{2}{\pi}} \sin(2x)

\displaystyle \psi_3(x) = \sqrt{\frac{2}{\pi}} \sin(3x)

Only the eigen energies are changed: \epsilon_1 = 1, \epsilon_2 = 4.3264, \epsilon_3 = 9.527 .

The total wave function for 2 particles in state 1 and state \mu is

\Psi(x,y) = \frac{1}{\sqrt{2}} ( \psi_1(x) \psi_\mu(y) - \psi_\mu(x) \psi_1(y) )


I found that the “mean field” is not as trivial as in the Hartree case, because of the exchange term. In principle, the mean field for particle i at state \mu is

G(i) = \int \phi_\nu^*(j) G(i,j) \phi_\nu(j) dj -  \int \phi_\nu^*(j) G(i,j) \phi_\mu(j) dj

However, the direct term multiplies \psi_\mu(i) , while the exchange term multiplies \psi_\nu(i) , which are two different functions. I.e., the “mean field” affects two functions, or the “mean field” is shared by two states.

Although the mean field for a single state can be defined symbolically using the exchange operator, I don’t know how to implement it in an actual calculation. Thus, I don’t know how to cross-check the result.

Methods of solving the differential equation


The foundation of using the Hartree-Fock method to get the self-consistent wave function and potential is solving the Hartree-Fock equation, which is a kind of differential equation.

The idea of the Hartree-Fock method is

  1. Assume a trial wave function \psi^{(0)}_\mu(i)
  2. Put the trial wave function into the Hartree-Fock operator \displaystyle F(\psi_{\nu}^{(0)}(j))
  3. Solve the equation and obtain a new wave function \psi^{(1)}_\mu(i)
  4. Go to step 2 until converged.

In this post, we will discuss two methods for step 3.


One method is to solve the Hartree-Fock equation numerically, for example with the Runge-Kutta method or Numerov’s method. These methods can also obtain the energy by imposing some conditions, for example, the behaviour of the wave function at long range.


The 2nd method is basis expansion. We assume the wave function is a superposition of some basis,

\displaystyle \psi_\mu(i) = \sum_{\alpha}^{N_\alpha} a_{\mu\alpha} \phi_\alpha(i)

Substituting into the Hartree equation (the Hartree-Fock equation is similar),

\displaystyle F \left(\sum_{j=i+1}^{N}\psi_\nu(j) \right) \psi_\mu(i) = H(i) \psi_\mu(i) + \sum_{j=i+1}^{N}\langle \psi_\nu(j) | G(i,j) | \psi_\nu(j) \rangle \psi_\mu(i)

\displaystyle \sum_{\alpha} a_{\mu\alpha} F |\phi_{\alpha}(i) \rangle \\ = \sum_{\alpha} a_{\mu\alpha} \left(  H(i) |\phi_{\alpha}(i) \rangle + \sum_{k,l} \sum_{j=i+1}^N a_{\nu k}^{*} a_{\nu l}\langle \phi_{k}(j) | G(i,j) | \phi_{l} (j) \rangle |\phi_{\alpha}(i) \rangle \right)  \\ = \sum_{\alpha} a_{\mu\alpha} \left( H(i)   + G(i) \right) |\phi_{\alpha}(i) \rangle \\ = \sum_{\alpha} a_{\mu \alpha} E |\phi_{\alpha}(i) \rangle

Acting with \langle \phi_{\beta}(i)| on the left,

\displaystyle \sum_{\alpha} a_{\mu \alpha} \langle \phi_{\beta} (i) | F | \phi_{\alpha} (i) \rangle = \sum_{\alpha} F_{\beta \alpha} a_{\mu \alpha} = E a_{\mu \beta}

Thus, we can solve for the coefficients a_{\mu \alpha} , which form the eigen vector. Then, new trial functions with the new coefficients are used.


In the 2nd method, we can see that the integrals

H_{\beta \alpha} = \langle \phi_\beta(i) | H(i)| \phi_\alpha (i) \rangle

G_{\beta \alpha}^{k l} = \langle \phi_\beta(i) \phi_k(j) | G(i,j) | \phi_l(j) \phi_\alpha(i) \rangle

can be pre-calculated. The calculation then becomes iterating the coefficients a_{\mu \alpha} .

In method 2, the basis does not change, only the coefficients. The advantage, and also the disadvantage, of this method is that all possible integrals are pre-calculated, even though some of them will never be used. Once the integrals are calculated, they can be reused for solving many similar problems. But the calculation of the integrals may take long and consume memory.


In the previous post, I did not pre-calculate the integrals; I simply put in the new trial function and integrated again.

This method is very similar to method 2. However, the basis is the wave function itself. After one calculation, a new basis, which is the wave function, will be used, and the old basis will be replaced. In method 2, only the coefficients are replaced.

This method may be somewhat faster than method 2, because only the necessary integrals are calculated. In Mathematica, method 2 is generally slower, especially for a larger basis.


How about perturbation?

Some comments on Hartree-Fock method


The most confusing thing about the method is whether an index represents a particle or a state.

Looking back at the method, the total wave function is a product of the wave functions of each particle.

\Psi = \phi_1(1) \phi_2(2) .. \phi_N(N)

Here, we fix our notation: the subscript is the state, and the number in parentheses is the particle ID.

The single particle energy is,

\displaystyle \langle \Psi |\sum_{i}^{N} H(i)|\Psi \rangle = \sum_{i}^{N} \langle \phi_\mu(i) | H(i)|\phi_\mu(i) \rangle

Here, particle i is in state \mu .

The mutual interaction energy is

\displaystyle \langle \Psi |\sum_{i}^{N-1}\sum_{j=i+1}^{N} G(i,j)|\Psi \rangle = \sum_{i}^{N-1}\sum_{j=i+1}^{N} \langle \phi_\mu(i) \phi_\nu(j) | G(i,j)|\phi_\mu(i) \phi_\nu(j) \rangle

In both energy terms, we can factor out the sum over particles, and we get the Hartree equation

\displaystyle \left( H(i) + \sum_{j=i+1}^{N} \langle \phi_\nu(j) | G(i,j)| \phi_\nu(j) \rangle  \right) |\phi_\mu(i) \rangle = \epsilon_\mu | \phi_\mu(i) \rangle

Now, we can see that the sum is over the number of particles, not states.


When solving the Hartree equation using a given basis b_k(x) , we set the first trial solution to be

| \phi_\mu(i) \rangle = | b_\mu(i) \rangle

Then, in the first trial, the Hartree matrix is

\displaystyle \hat{F}_{\alpha \beta} = \langle b_\alpha(i) | \left( H(i) + \sum_{j=i+1}^{N} \langle b_\nu(j) |G(i,j) | b_\nu(j) \rangle \right) | b_\beta(i) \rangle

Here, we keep the particle ID. Inside the mean field integral, the trial state b_\nu(j) is used for particle j.

After a trial, we get the eigen vector for each state v_\mu , then we construct a new trial function and iterate until convergence.

We can see that, for a multi-particle wave function, the sum is over the particles, even though some particles share the same state. In fact, since each particle should be in a state, the state should be a function of the particle ID. So, the general Hartree matrix is

\displaystyle \hat{F}_{\alpha \beta} = \langle \psi_\alpha(i)|\left( H(i) + \sum_{j=i+1}^{N} \langle \psi_{\nu(j)}(j) | G(i,j)| \psi_{\nu(j)}(j) \rangle  \right) |\psi_\beta(i) \rangle

where \alpha, \beta = 1,2,3,..., k , and k is the number of states to use.

In the Hartree-Fock method, the sum is also over all other particles, not states.

 

Hartree method for 1D infinite potential well


To test my understanding of the Hartree method, I played with a simple case. The potential is

V(x) = \left\{ \begin{matrix} 0 , 0 \leq x \leq \pi  \\  \infty , else \end{matrix} \right\}

For simplicity of calculation, the mutual interaction between the 2 particles is

G(x_1, x_2) = g_0 \cos(x_1 - x_2)

This mutual interaction is repulsive, so we can expect the eigen wave function to be more spread out.

We use the eigen states of the potential V(x) as the basis,

\displaystyle \phi_n(x) = \sqrt{\frac{2}{\pi}} \sin(nx)

The Hartree method is basically solving the Hartree equation

\displaystyle \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx_1^2} + V(x_1) + \int G(x_1, x_2) (\psi_k(x_2))^2 dx_2 \right) \psi_n(x_1) = \epsilon_n \psi_n(x_1)

We assume the two particles are both in the ground state; thus, k = n = 1 . We assume

\psi_1(x) = \sum a_i \phi_i(x)


The algorithm is as follows. The one-particle matrix is

\displaystyle F_{ij} = \int \psi_i(x) \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x)  \right) \psi_j(x) dx

Define

\displaystyle G_0(x) = \int G(x, x_2) (\psi_1(x_2))^2 dx_2

The 2-particle matrix

\displaystyle G_{ij} = \int \psi_i(x) G_0(x) \psi_j(x) dx

For the 1st trial, we set

\psi_i^{(0)}(x) = \phi_{(2i-1)}(x)

We only use “even” functions with respect to x = \pi/2 , because mixing parity gives an unreasonable result. (Why?)

Then we get the matrix of the Hartree Hamiltonian H_{ij} = F_{ij} + G_{ij} , and solve for the eigen values and eigen vectors,

\vec{v}^{(0)} = (v_1^{(0)}, v_2^{(0)}, .... v_n^{(0)})

The next trial is

\displaystyle \psi^{(1)}(x) = \sum_j v_j^{(0)} \psi_j^{(0)}

The process is repeated until convergence. Convergence can be examined using the Hartree Hamiltonian: the off-diagonal terms vanish, i.e.

F_{ij} + G_{ij} = 0, i\neq j
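The iteration above can be sketched as follows (Python with SciPy; the 2-state basis, g_0 = 1 , and \hbar^2/2m = 1 are assumed):

```python
import numpy as np
from scipy.integrate import dblquad

basis = [1, 3]   # "even" states with respect to x = pi/2
phi = lambda n, x: np.sqrt(2/np.pi) * np.sin(n * x)

# one-body matrix: diagonal n^2 in this basis (hbar^2/2m = 1)
H1 = np.diag([n**2 for n in basis])

def Gmat(c):
    """G_ij built from the mean field of psi = sum_n c_n phi_n."""
    psi2 = lambda y: sum(cn * phi(n, y) for cn, n in zip(c, basis))**2
    out = np.zeros((2, 2))
    for i, ni in enumerate(basis):
        for j, nj in enumerate(basis):
            f = lambda y, x: phi(ni, x) * phi(nj, x) * np.cos(x - y) * psi2(y)
            out[i, j] = dblquad(f, 0, np.pi, lambda x: 0, lambda x: np.pi)[0]
    return out

c = np.array([1.0, 0.0])     # first trial: pure ground state
for _ in range(20):
    eps, v = np.linalg.eigh(H1 + Gmat(c))
    c = v[:, 0]
print(eps[0])   # ~ 1.7125
```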


I tried with 2 basis states and 6 basis states, setting \hbar^2/2m = 1 for simplicity; the Hartree ground state energy is 1.7125 for both bases. The eigen wavefunction for the 2-state basis is

\psi(x) = 0.797752 \sin(x) + 0.0145564 \sin(3x)

with the mean field

V_m(x) = 0.842569 \sin(x)

For 6 basis, the wave function is

\psi(x) = 0.797751 \sin(x) + 0.0145729 \sin(3 x) + 0.000789143 \sin(5 x) + 0.000125871 \sin(7 x) + 0.0000336832 \sin(9 x) + 0.0000119719 \sin(11 x)

we can see that the contributions of the higher energy states get smaller and smaller; the higher energy states contribute only 0.03%. The following plot compares \phi_1(x) (orange) and \psi(x) (blue). We can see that the wave function becomes a little more spread out.

psi.PNG

If we increase the strength of the mutual interaction by 10 times, the wave function becomes more spread out.

psi3.PNG

To cross-check the result, we can use the mean field and solve the Schrodinger equation using the variational method. The method gives the ground state energy as 1.7125 and converges.

vm.PNG

It is interesting that the variational method needs 4 or 5 basis states to converge, while the Hartree method only needs 2.

At last, we substitute the wave function back into the full Schrodinger equation; it turns out the equation is not satisfied exactly. The total wave function is

\Psi(x_1, x_2) = \psi(x_1) \psi(x_2)

The Schrodinger equation is

\displaystyle  H \Psi = -\frac{\hbar^2}{2m}\left(\frac{d^2}{dx_1^2} + \frac{d^2}{dx_2^2}\right) \Psi(x_1,x_2) + G(x_1,x_2)\Psi(x_1,x_2) = E \Psi(x_1,x_2)

However, the left side and the right side are not equal, but the integrated equation

\langle \Psi |H|\Psi\rangle  = E \langle \Psi|\Psi \rangle

holds.

I plot \Psi H\Psi (orange) and E \Psi^2 (blue); their difference is the following.

diff.PNG

We can imagine that the integral of the bottom plot is zero. I don’t know the reason, but I guess it is not because of the number of basis states, since the higher energy states contribute only 0.03%. I guess it is because the energy is the quantity that is optimized, rather than the wave function. And I am not sure that the reverse of the following is true:

H|\Psi\rangle = E |\Psi\rangle  \rightarrow \langle \Psi|H|\Psi \rangle = E \langle \Psi|\Psi \rangle


For the case of many particles, G_0 has to sum over all the other particles. When all particles are in the same state, the sum is simple. For particles distributed among different states, in general, the sum is

\displaystyle G_0 =\sum_{k\neq i} \int G(x_1,x_2) \psi_k(x_2)^2 dx_2

Here, the sum excludes the particle at x_1 , which is in state i. For example, if the system consists of 2 particles in state-1 and 2 particles in state-2, the mean field for a particle in state-1 is constructed using 1 particle in state-1 and 2 particles in state-2. Thus the energy for state-2 may not be correct.

For a particle in state-2, the mean field should be constructed from 2 particles in state-1 and 1 particle in state-2.

For higher states without a particle, the mean field is built from 2 particles in state-1 and 2 particles in state-2.

In general, the mean field is constructed using other particles.

This fixed mean field method is simple, but we have to define the mean field for each state and calculate it again and again.

Variational method on 1D potential well using plane wave basis


In this post, we explained the principle of the variational method; now, we apply it to a simple 1-D potential.

The potential is

V(x) = \left \{ \begin{matrix} - V_0 & , -L/2 \leq x \leq L/2 \\ 0 & , else \end{matrix} \right \}

The exact solution can be found by matching boundary conditions.

\tan\left(\sqrt{\frac{2m}{\hbar^2}(E+V_0)} \frac{L}{2}\right) = \sqrt{\frac{|E|}{E+V_0}}
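This transcendental equation can be solved numerically; a sketch in Python with SciPy, using the parameters chosen later in this post (\hbar = m = 1, L = 2, V_0 = 1):

```python
import numpy as np
from scipy.optimize import brentq

hbar = m = 1.0
L, V0 = 2.0, 1.0

# boundary-matching condition for the lowest even bound state
f = lambda E: (np.tan(np.sqrt(2*m*(E + V0)/hbar**2) * L/2)
               - np.sqrt(-E/(E + V0)))
E0 = brentq(f, -0.99, -0.01)   # bracket inside -V0 < E < 0
print(E0)   # ~ -0.603898
```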


The plane wave basis is

b_n(x) =\frac{1}{\sqrt{a}} \exp\left( i \frac{2\pi}{a} n x \right)

The matrix of the Hamiltonian is

\displaystyle H_{ij} = \langle b_i | \frac{P^2}{2m} + V(x) | b_j \rangle

\displaystyle \langle b_i | \frac{P^2}{2m} | b_j \rangle = \delta_{ij} \frac{\hbar^2}{2m} \left(\frac{2\pi}{a}\right)^2 i^2

\displaystyle \langle b_i | V(x) | b_j \rangle = -V_0 \frac{L}{a} \frac{\sin( \pi (i-j) L/a)}{\pi (i-j) L/a}

Then, we solve for the eigen system.


I use Mathematica for solving the eigen system.

For simplicity, I set \hbar = 1, m= 1,  L = 2, V_0 = 1. The exact solution is E = -0.603898 .

The parameter a controls the wavelength of the plane waves. By increasing the number of waves n , we effectively control the minimum wavelength \lambda_0 = a/n . Therefore, the larger the n, the closer the energy approaches the actual value.

I generated a from 10 to 200 with step 10, and n from 5 to 60 with step 5. Here is the result. As we can see, the larger the a, the more waves we need to approach the actual energy. For small a and large n , the result quickly converges to the actual energy. For n = 60, a = 40 , the energy is E = -0.603792 ; the difference is \Delta E = 0.000106 , or 0.02%.
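The matrix construction can be sketched in Python (the post used Mathematica); NumPy's np.sinc(x) is sin(\pi x)/(\pi x), exactly the factor in the potential matrix element:

```python
import numpy as np

hbar = m = 1.0
L, V0 = 2.0, 1.0
a, nmax = 40.0, 60
n = np.arange(-nmax, nmax + 1)     # plane waves exp(i 2 pi n x / a)

# kinetic part: diagonal
T = np.diag((hbar**2 / (2*m)) * (2*np.pi*n/a)**2)
# potential part: -V0 (L/a) sinc((i-j) L / a), with np.sinc(x) = sin(pi x)/(pi x)
d = n[:, None] - n[None, :]
V = -V0 * (L/a) * np.sinc(d * L / a)

E = np.linalg.eigvalsh(T + V)
print(E[0])   # ~ -0.6038 (exact: -0.603898)
```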

planeWave1DWell.PNG

The following plot shows a = 40 .

planeWave1DWell_n.PNG

However, when a is too small, the plane waves cannot describe the long wavelength behaviour of the actual wave function, and convergence fails. The following plot shows n = 60 .

planeWave1DWell_a.PNG

To investigate, I plot the case for a = 3 and a = 10.

planeWave1DWell_psi.PNG

We can see that, since the well width is 2, if the maximum wavelength is only 3, the basis cannot capture a longer wavelength component. And we can see the calculated wave functions repeat themselves.

Variational method & Eigenvalue Problem


I haven’t stated this relationship explicitly, so here it is.


In solving the Schrodinger equation (or a general Hamiltonian problem)

H |\psi \rangle = E|\psi \rangle

The variational method is a golden method to find the minimum energy. For the minimum energy solution \psi_0 ,

\displaystyle \frac{\langle \psi_0|H|\psi_0\rangle}{\langle \psi_0|\psi_0 \rangle} = E_0

For any other solution,

\displaystyle \frac{\langle \psi|H|\psi\rangle}{\langle \psi|\psi \rangle} \geq E_0

This implies the diagonal elements of H are always larger than or equal to the minimum energy.
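A quick numerical illustration of the bound (a Python sketch, with a random symmetric matrix standing in for H):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
H = (A + A.T) / 2                  # a random symmetric "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]      # exact minimum energy

# Rayleigh quotients of random trial vectors never go below E0
trials = rng.normal(size=(1000, 5))
q = (np.einsum('ni,ij,nj->n', trials, H, trials)
     / np.einsum('ni,ni->n', trials, trials))
print(q.min() >= E0 - 1e-12)   # True
```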


If we expand the solution in an orthonormal, complete basis \phi_i ,

\displaystyle |\psi\rangle = \sum_i a_i |\phi_i\rangle

The Schrodinger equation

\displaystyle \sum_i a_i H|\phi_i\rangle = \sum_i a_i E |\phi_i\rangle

Since the basis is orthonormal, multiplying by \langle \phi_j | ,

\displaystyle \sum_i a_i \langle \phi_j |H|\phi_i \rangle = \sum_i a_i E \langle \phi_j |\phi_i \rangle

\displaystyle \sum_i \langle \phi_j | H|\phi_i\rangle a_i = E a_j

\displaystyle \sum_i H_{ji} a_i = E a_j

In matrix form

 H \cdot \vec{a} = E \vec{a}

this is the eigenvalue problem.

The eigenvalue problem solves not only the ground state, but also the excited states.

See also this post.

Review on rotation


The rotation of a vector in a vector space can be done by either rotating the basis vectors or the coordinates of the vector. Here, we always use a fixed basis for rotation.

For a rigid body, its rotation can be accomplished using Euler rotation, or rotation around an axis.

Whenever a transform preserves the norm of vectors, it is a unitary transform. Rotation preserves the norm, so it is a unitary transform and can be represented by a unitary matrix. As for any unitary matrix, the eigen states are a convenient basis for the vector space.

We will start from 2-D space. Within 2-D space, we discuss rotations of vectors and then of functions. Vector functions are not discussed explicitly, but they are touched on in the discussion of functions. Throughout, the eigen state is a key concept, as it gives a convenient basis. We skip the discussion of 3-D space; the connection between 2-D and 3-D space was already discussed in a previous post. At the end, we talk about direct product spaces.


In 2-D space, a 2-D vector is rotated by a transform R, and the representation matrix of R has eigenvalues

\exp(\pm i \omega)

and eigenvectors

\displaystyle \hat{e}_\pm = \mp \frac{ \hat{e}_x \pm i \hat{e}_y}{\sqrt{2}}

If every vector is expanded as a linear combination of the eigenvectors, then the rotation can be done by simply multiplying by the eigenvalues.

Now, for a 2-D function, the rotation is done by a change of coordinates. However, the functional space is also a vector space, such that

  1. a f_1 + b f_2 is still in the space,
  2. a unit and an inverse of addition exist,
  3. the norm can be defined on a suitable domain by \int |f(x,y)|^2 dxdy

For example, for the two functions \phi_1(x,y) = x, \phi_2(x,y) = y , the rotation can be done by the rotation matrix

\displaystyle R = \begin{pmatrix} \cos(\omega) & -\sin(\omega) \\ \sin(\omega) & \cos(\omega) \end{pmatrix}

And the products x^2, y^2, xy also form a basis. The rotation on this new basis is induced from the original rotation.

\displaystyle R_2 = \begin{pmatrix} c^2 & s^2 & -2cs \\ s^2 & c^2 & 2cs \\ cs & -cs & c^2 - s^2 \end{pmatrix}

where c = \cos(\omega), s = \sin(\omega) . The space is “3-dimensional” because xy = yx; otherwise, it would be “4-dimensional”.
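A quick numerical check of the induced matrix (a Python sketch; x', y' are the rotated coordinates):

```python
import numpy as np

w = 0.7
c, s = np.cos(w), np.sin(w)
R2 = np.array([[c*c,  s*s, -2*c*s],
               [s*s,  c*c,  2*c*s],
               [c*s, -c*s,  c*c - s*s]])

x, y = 1.3, -0.4
xp, yp = c*x - s*y, s*x + c*y          # rotated coordinates (x', y')
v  = np.array([x*x, y*y, x*y])
vp = np.array([xp*xp, yp*yp, xp*yp])
print(np.allclose(R2 @ v, vp))         # True: R2 reproduces (x'^2, y'^2, x'y')
```

Note also that the first two rows sum to (1, 1, 0): the combination x^2 + y^2 is left invariant, which is used again at the end of this post.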

The 2-D function can also be expressed in polar coordinates, f(r, \theta) , and further decomposed into g(r) h(\theta) .


How can we find the eigen function for the angular part?

One way is to use an operator that commutes with rotation, so that the eigen functions of the operator are also the eigen functions of the rotation. An example is the Laplacian.

The eigen functions for the 2-D Laplacian are the Fourier series.

Therefore, if we can express the function as a polynomial in r^n (\exp(i n \theta), \exp(-i n \theta)) , the rotation of the function is simply multiplication by the rotation matrix.

The eigen function is

\displaystyle \phi_{nm}(\theta) = e^{i m \theta}, m = \pm n

The D-matrix of a rotation \omega (D for Darstellung, representation in German) is

D^n_{mm'}(\omega) = \delta_{mm'} e^{i m \omega}

The delta function in m, m' indicates that a rotation does not mix the spaces. The transformation of the eigen function is

\displaystyle \phi_{nm}(\theta') = \sum_{m'} \phi_{nm'}(\theta) D^n_{m'm}(\omega)

for example,

f(x,y) = x^2 + k y^2

written in polar coordinates,

\displaystyle f(r, \theta) = r^2 (\cos^2(\theta) + k \sin^2(\theta)) = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta)

where a_0 = 2 + 2k, a_{2+} = a_{2-} = 1-k, a_{other} = 0 .

The rotation is

\displaystyle f(r, \theta' = \theta + \omega ) = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta) D^n_{mm}(\omega)  = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta + \omega)

If we write the rotated function in Cartesian form,

f(x',y') = x'^2 + k y'^2 = (c^2 + k s^2)x^2 + (s^2 + k c^2)y^2 + 2(k-1) c s x y

where c = \cos(\omega), s = \sin(\omega) .


In 3-D space, the same logic is still applicable.

The spherical harmonics Y_{lm} serve as the basis for eigenvalue l(l+1) ; the eigen spaces for different l are orthogonal. This is an extension of the 2-D eigen functions \exp(\pm i n \theta) .

A 3-D function can be expressed in spherical harmonics, and the rotation is simply multiplication by the Wigner D-matrix.


Above, we showed an example of a higher order rotation induced by a product space. I call it the induced space (I am not sure whether this is the correct name), because the space is the same, but the order is higher.

For a two-particle system, the direct product space is formed by the product of the bases from two distinct spaces (which could be identical).


Some common direct product spaces are

  • combining two spins
  • combining two orbital angular momenta
  • two particles system

Whether induced space or direct product space, their structures are very similar. In 3-D rotation, the two spaces and the direct product space are related by the Clebsch-Gordan coefficients. In 2-D rotation, as we can see from the above discussion, the coefficient is simply 1.

Let’s use the 2-D space to show the “induced product” space. For order n=1, the primary basis contains only x, y.

For n=2, the space has x^2, y^2, xy, but the linear combination x^2 + y^2 is unchanged after rotation. Thus, the size of the space reduces to 3-1 = 2.

For n = 3, the space has x^3, y^3, x^2y, xy^2 . This time, the linear combination x^3 + xy^2 = x(x^2+y^2) behaves like x and y^3 + x^2y behaves like y, thus the size of the space reduces to 4 - 2 = 2.

For higher order, the total number of combinations x^ay^b, a+b = n is C^{n+1}_1 = n+1 , and we can find n-1 repeated combinations, thus the size of the irreducible space of order n is always 2.

For 3-D space, the number of combinations x^ay^bz^c, a + b + c = n is C^{n+2}_2 = (n+1)(n+2)/2 . We can find n(n-1)/2 repeated combinations, thus the size of the irreducible space of order n is always 2n+1.
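This counting can be verified directly: the irreducible space of order n is the space of harmonic homogeneous polynomials, i.e. the kernel of the Laplacian acting on degree-n monomials, while the n(n-1)/2 “repeated” combinations are the multiples of x^2+y^2+z^2. A minimal sketch, assuming numpy:

```python
import numpy as np

def monomials(n):
    # exponent triples (a, b, c) with a + b + c = n, i.e. the monomials x^a y^b z^c
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n - a + 1)]

def laplacian_matrix(n):
    # matrix of the Laplacian from degree-n to degree-(n-2) homogeneous polynomials
    src, dst = monomials(n), monomials(n - 2)
    idx = {e: i for i, e in enumerate(dst)}
    M = np.zeros((len(dst), len(src)))
    for j, (a, b, c) in enumerate(src):
        if a >= 2: M[idx[(a - 2, b, c)], j] = a * (a - 1)
        if b >= 2: M[idx[(a, b - 2, c)], j] = b * (b - 1)
        if c >= 2: M[idx[(a, b, c - 2)], j] = c * (c - 1)
    return M

for n in range(2, 8):
    # dimension of the harmonic (irreducible) space = #monomials - rank of Laplacian
    dim = len(monomials(n)) - np.linalg.matrix_rank(laplacian_matrix(n))
    print(n, dim)    # dim = 2n + 1
```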

Spherical Harmonics and Fourier Series


Recently, I read a very interesting article on the origin of spherical harmonics. I would like to summarize it here and add some personal comments.


Starting from Laplace equation

\nabla^2 \phi(\vec{r}) = 0

The Laplacian can be separated into radial and spherical part.

\nabla^2 = \nabla_r^2 + \frac{1}{r^2}\nabla_\Omega^2

The solution is called harmonics, and it can be separated into radial part and angular part too,

\phi(\vec{r}) = R(r) \Theta(\Omega)

Since the Laplacian is coordinate-free, the solution space is also coordinate-free and rotationally invariant. We will come back to this point later.


A homogeneous function of degree n has property,

f(t\vec{r}) = t^n f(\vec{r})

In the case of homogeneous harmonics of degree n,

\phi_n(\vec{r}) = r^n \Theta_n(\Omega)

Here, the radial part is R_n(r) = r^n

Substituting this homogeneous harmonic into the Laplace equation, the \nabla_r^2 produces a coefficient related to the order, and the radial part can be extracted,

0 = f(r) ( \nabla_\Omega^2 - g(n) ) \Theta(\Omega)

we have an eigenvalue problem for the angular part

 \nabla_\Omega^2 \Theta = g(n) \Theta

The eigen functions for the 2-D Laplacian are the Fourier series, and those for 3-D are the spherical harmonics. In this sense, the Fourier series is a “polar harmonics”.


In 3-D, the angular part of the Laplacian is proportional to the angular momentum operator, -\hbar^2 \nabla_\Omega^2 = L^2 , where \hbar is the reduced Planck constant, which has the dimension of angular momentum.

L^2 Y_{lm}(\theta, \phi) = l(l+1) \hbar^2 Y_{lm}(\theta, \phi)

Here, from the previous discussion, before we solve the equation, we already know that the harmonic has a maximum order of l . The m labels the degeneracy for the same eigenvalue l(l+1) .

As we mentioned before, the harmonics should be rotationally invariant, such that all directions should be equal. However, when we look at the spherical harmonics, the poles are clearly two special points, and the rotation around the “z-axis” has a limited rotational symmetry of degree l. How come?

According to the article, the solution does not necessarily have to be separated into \theta, \phi, such that

\displaystyle Y_{lm}(\theta,\phi) = \sqrt{\frac{2l+1}{4\pi} \frac{(l-m)!}{(l+m)!}}P_{lm}(\cos\theta) e^{im\phi}

I quote the original,

“It is not immediately obvious that we can separate variables and assume exponential functions in the φ direction. We are able to do this essentially because the lines of fixed θ are circles. We could also simply assume this form and show the construction succeeds. This organization is not forced, but separating the variables is so useful that there are no competitive options. A disadvantage of this organization is that it makes the poles into special points.”

The limited rotational symmetry of degree l is due to the limited “band-width” restricted by the order of the homogeneous function. The relation between the band-width and the order of the harmonics can be understood as follows: the number of “sectors” or “nodes” on the circle/sphere is proportional to the order; thus, the “resolution” is also limited by the order and hence the “band-width”.

The Platonic solids are coordinate-free in the sense that they are the most symmetric. In the next post, I will show the relation between spherical harmonics and the Platonic solids. This is related to section 3.2 in the article,

“One would like to have an uniform discretization for the sphere, with all portions equally represented. From such an uniform discretization we could construct a platonic solid. It is known, however, that there are only a few platonic solids, and the largest number of faces is 20 (icosohedron) and largest number of vertices is 20 (dodecahedron). If we want to discretize the sphere with many points, we cannot do it uniformly. Instead we set the goal of using the fewest points to resolve the Spherical Harmonics up to some degree. Since the Spherical Harmonics themselves are “fair” and “uniform”, this gives a good representation for functions on the sphere. “


As the Fourier series and spherical harmonics are closely related, they should share many properties. For instance, they are orthonormal and form a basis. This leads to the discrete Fourier transform and also a “spherical transform”,

\displaystyle f(\Omega) = \sum_{\alpha} a_\alpha \Theta_\alpha(\Omega)

where \alpha is the id of the basis. One can use the Parseval theorem,

\displaystyle \int |f(\Omega)|^2 d\Omega = \sum_{\alpha} |a_\alpha|^2
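As a numerical illustration of the Parseval theorem on the sphere (a sketch assuming numpy and scipy; the band-limited test function and its coefficients a_\alpha are made up for illustration), using Gauss-Legendre quadrature in \cos\theta and a uniform grid in \phi:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv
from numpy.polynomial.legendre import leggauss

def Ylm(l, m, theta, phi):
    # complex spherical harmonic for m >= 0; lpmv includes the Condon-Shortley phase
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

# quadrature on the sphere: Gauss-Legendre in cos(theta), uniform grid in phi
nt, nph = 32, 64
x, wt = leggauss(nt)
theta = np.arccos(x)
phi = 2 * np.pi * np.arange(nph) / nph
TH, PH = np.meshgrid(theta, phi, indexing="ij")
W = wt[:, None] * (2 * np.pi / nph)            # solid-angle weights

# a band-limited test function with made-up coefficients a_alpha
coeff = {(1, 0): 0.5, (2, 1): 0.3 - 0.2j}
f = sum(a * Ylm(l, m, TH, PH) for (l, m), a in coeff.items())

lhs = np.sum(W * np.abs(f) ** 2)               # integral of |f|^2 over the sphere
rhs = sum(abs(a) ** 2 for a in coeff.values()) # sum of |a_alpha|^2
assert abs(lhs - rhs) < 1e-10
print(lhs, rhs)
```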

Also, convolution via the discrete Fourier transform can be applied to the spherical harmonics as well.

Notice that the discrete Fourier transform can “translate” to the continuous Fourier transform. However, the order of the spherical harmonics is always discrete.

 

Solving the generalized eigenvalue problem


Given a matrix H , we can find its eigenvalues on a given basis set |\phi_i\rangle . Suppose the eigenvector is

|\psi \rangle = \sum_i \alpha_i |\phi_i\rangle

Put in the eigen equation

H|\psi\rangle = \epsilon |\psi\rangle \implies  \sum_i H|\phi_i\rangle \alpha_i = \epsilon \sum_i |\phi_i\rangle \alpha_i

We can act \langle \phi_k| on the left, but in general, the basis vectors are not orthogonal.

H_{ij} \alpha_j = \epsilon S_{ij} \alpha_j \to H\cdot \alpha = \epsilon S\cdot \alpha

H_{ij} = \langle \phi_i|H|\phi_j\rangle , S_{ij} =  \langle \phi_i|\phi_j\rangle

This is the generalized eigenvalue problem.


One way to solve the problem is to “reconfigure” the basis so that it is orthogonal. However, in computation, a non-orthogonal basis can give great advantages. So, the other way is to split the problem: first solve S\cdot v = \sigma v .

The eigen system of S is

S = P \cdot \sigma \cdot P^T, P\cdot P^T = P^T \cdot P = I

Here, \sigma is the diagonal matrix of eigenvalues, and the columns of P are the eigenvectors. Now, we define a new non-unitary matrix

\displaystyle R_{ij} = \frac{P_{ij}}{\sqrt{\sigma_j}}

Notice that R^T \neq R^{-1} .

Thus,

R^T \cdot S\cdot R = I \implies S = R^{-T} \cdot R^{-1}


We know that the form R^T \cdot Q \cdot R transforms an operator from one basis to another, i.e.

|\hat{\phi}_i \rangle = R|\phi_i\rangle

|\psi \rangle = \sum_i \alpha_i |\phi_i\rangle = \sum_i \alpha_i R^{-1} |\hat{\phi}_i\rangle = \sum_i \beta_i |\hat{\phi}_i \rangle \implies \alpha = R\cdot \beta

and for any operator,

\hat{Q} = R^T\cdot Q\cdot R


We put this back to the general problem

H\cdot\alpha = \epsilon S\cdot \alpha  = \epsilon R^{-T} \cdot R^{-1} \alpha

\implies R^T\cdot H \cdot \alpha = \epsilon R^{-1} \alpha

\implies \hat{H}\cdot \beta = R^T\cdot H \cdot R \cdot \beta = \epsilon \beta

Thus, we can solve \hat{H} , get its eigen system, and then use \alpha = R\cdot \beta .
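The whole procedure can be sketched in a few lines (assuming numpy and scipy; the random H and S here are just test inputs) and checked against scipy's built-in generalized eigensolver scipy.linalg.eigh(H, S):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4

# a random symmetric H and a random positive-definite overlap S (test inputs)
H = rng.standard_normal((n, n))
H = H + H.T
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)

sigma, P = np.linalg.eigh(S)    # S = P sigma P^T
R = P / np.sqrt(sigma)          # R_ij = P_ij / sqrt(sigma_j), so R^T S R = I

eps, beta = np.linalg.eigh(R.T @ H @ R)   # ordinary problem for H_hat = R^T H R
alpha = R @ beta                          # back-transform: alpha = R beta

# compare with scipy's direct generalized eigensolver
assert np.allclose(eps, eigh(H, S, eigvals_only=True))
# each column of alpha satisfies H alpha = eps S alpha
assert np.allclose(H @ alpha, S @ alpha * eps)
```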


For example,

\displaystyle \phi_1 = (1,0), \phi_2 = \frac{1}{\sqrt{2}}(1,1)

S = \begin{pmatrix} 1 & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & 1 \end{pmatrix}

The eigen system is

P =  \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}

\sigma =  \begin{pmatrix} 1 + \frac{1}{\sqrt{2}} & 0 \\ 0 & 1 - \frac{1}{\sqrt{2}} \end{pmatrix}

The matrix R is

R =  \begin{pmatrix} \frac{1}{\sqrt{2+\sqrt{2}}} & -\frac{1}{\sqrt{2-\sqrt{2}}} \\ \frac{1}{\sqrt{2+\sqrt{2}}} & \frac{1}{\sqrt{2-\sqrt{2}}} \end{pmatrix}

We can verify that

R^T \cdot S \cdot R = I

R^{-T} \cdot  R^{-1} = S

R^T \cdot R = \begin{pmatrix} 2-\sqrt{2} & 0 \\ 0 & 2+\sqrt{2} \end{pmatrix} = \sigma^{-1}

R \cdot R^T = \begin{pmatrix} 2 & -\sqrt{2} \\ -\sqrt{2} & 2 \end{pmatrix} = S^{-1}

Let the Hamiltonian be

H = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} 

The Hamiltonian in the basis is

[H_S]_{ij} = \phi_i \cdot H\cdot \phi_j \implies H_S = \begin{pmatrix} 2 & \sqrt{2} \\ \sqrt{2} & \frac{3}{2} \end{pmatrix} 

The eigenvalues of H_S , solved naively as an ordinary eigenvalue problem, are

\displaystyle \sigma_s = \frac{1}{4}(7 \pm \sqrt{33})

which are not the correct eigenvalues.

Now, we transform the Hamiltonian into orthogonal basis

\displaystyle H_R = R^T \cdot H_S \cdot R = \begin{pmatrix} \frac{3}{2} + \frac{\sqrt{2}}{4} & -\frac{1}{2\sqrt{2}} \\ -\frac{1}{2\sqrt{2}} & \frac{3}{2} - \frac{\sqrt{2}}{4} \end{pmatrix}

The eigen values are

\sigma_\pm = 2, 1

which are the correct pair. The un-normalized eigenvectors are

\displaystyle  \beta_{\pm} = \left(1, 1 \mp \sqrt{2}\right)

We can verify that

\alpha_{\pm} = R\cdot \beta_{\pm}

H_S \cdot \alpha_\pm = \sigma_\pm S\cdot \alpha_\pm
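As a numerical check of this example (assuming numpy; numpy may pick a different ordering or sign for the eigenvectors of S than the text, but the final eigenvalues are unaffected):

```python
import numpy as np

# basis vectors phi_1 = (1, 0) and phi_2 = (1, 1)/sqrt(2) as rows
B = np.array([[1.0, 0.0],
              [1.0 / np.sqrt(2), 1.0 / np.sqrt(2)]])

H = np.diag([2.0, 1.0])       # Hamiltonian in the Cartesian frame

S = B @ B.T                   # overlap matrix  S_ij = phi_i . phi_j
H_S = B @ H @ B.T             # Hamiltonian in the non-orthogonal basis

sigma, P = np.linalg.eigh(S)  # S = P sigma P^T
R = P / np.sqrt(sigma)        # R_ij = P_ij / sqrt(sigma_j)
assert np.allclose(R.T @ S @ R, np.eye(2))

eps, beta = np.linalg.eigh(R.T @ H_S @ R)
alpha = R @ beta              # back-transform the eigenvectors
print(eps)                    # the correct eigenvalues, 1 and 2

assert np.allclose(H_S @ alpha, S @ alpha * eps)
```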

 
