Hartree method for 1D infinite potential well


To test my understanding of the Hartree method, I tried a simple case. The potential is

V(x) = \begin{cases} 0, & 0 \leq x \leq \pi \\ \infty, & \text{otherwise} \end{cases}

For simplicity of calculation, the mutual interaction between the 2 particles is taken to be

G(x_1, x_2) = g_0 \cos(x_1 - x_2)

Since this mutual interaction is repulsive, we can expect the eigen wave function to be more spread out.

We use the eigenstates of the potential V(x) as the basis,

\displaystyle \phi_n(x) = \sqrt{\frac{2}{\pi}} \sin(nx)

The Hartree method is basically solving the Hartree equation

\displaystyle \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx_1^2} + V(x_1) + \int G(x_1, x_2) (\psi_k(x_2))^2 dx_2 \right) \psi_n(x_1) = \epsilon_n \psi_n(x_1)

We assume the two particles are in the same ground state, thus k = n = 1, and expand

\psi_1(x) = \sum a_i \phi_i(x)


The algorithm is as follows. The 1-particle matrix is

\displaystyle F_{ij} = \int  \psi_i(x) \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x)  \right) \psi_j(x) dx

Define

\displaystyle G_0(x) = \int G(x, x_2) (\psi_1(x_2))^2 dx_2

The 2-particle matrix

\displaystyle G_{ij} = \int \psi_i(x) G_0(x) \psi_j(x) dx

For the 1st trial, we set

\psi_i^{(0)}(x) = \phi_{(2i+1)}(x)

We only use functions that are “even” with respect to x = \pi/2, because mixing parity gives unreasonable results. (Why?)

Then we get the matrix of the Hartree Hamiltonian, H_{ij} = F_{ij} + G_{ij}, and solve for its eigenvalues and eigenvectors,

\vec{v}^{(0)} = (v_1^{(0)}, v_2^{(0)}, \dots, v_n^{(0)})

The next trial is

\displaystyle \psi_1^{(1)}(x) = \sum_i v_i^{(0)} \psi_i^{(0)}(x)

The process is repeated until convergence. Convergence can be checked with the Hartree Hamiltonian: the off-diagonal terms should vanish, i.e.

F_{ij} + G_{ij} = 0, i\neq j
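The iteration above can be sketched in a few lines of numpy. This is a minimal sketch, not the author's code: it assumes g_0 = 1 and \hbar^2/2m = 1, keeps the basis fixed to \phi_1, \phi_3, \dots, and simply re-diagonalizes F + G in that fixed basis instead of rotating the trial functions.

```python
import numpy as np

# Minimal self-consistent Hartree sketch (assumes g0 = 1, hbar^2/2m = 1).
g0, N = 1.0, 6
x  = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
ns = 2*np.arange(N) + 1                      # n = 1, 3, 5, ... (even about pi/2)
phi = np.sqrt(2/np.pi) * np.sin(np.outer(ns, x))

def integ(f):                                # trapezoidal rule along the last axis
    return (f[..., :-1] + f[..., 1:]).sum(axis=-1) * dx / 2

F = np.diag(ns.astype(float)**2)             # kinetic term is diagonal: n^2
a = np.zeros(N); a[0] = 1.0                  # first trial: psi_1 = phi_1
for _ in range(100):
    rho = (a @ phi)**2                       # |psi_1(x)|^2
    # mean field G0(x) = g0 * int cos(x - y) rho(y) dy
    G0x = g0*(np.cos(x)*integ(np.cos(x)*rho) + np.sin(x)*integ(np.sin(x)*rho))
    G = integ(phi[:, None, :] * G0x * phi[None, :, :])   # G_ij
    eps, v = np.linalg.eigh(F + G)           # Hartree Hamiltonian H = F + G
    a_new = v[:, 0] * np.sign(v[0, 0])       # ground state, fixed overall sign
    if np.max(np.abs(a_new - a)) < 1e-10:
        a = a_new
        break
    a = a_new

print(round(eps[0], 4))                      # converges to ~1.7125
```

With N = 2, the same loop should reproduce the two-basis numbers quoted below.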


I tried with 2 and 6 basis functions, setting \hbar^2/2m = 1 for simplicity; the Hartree ground-state energy is 1.7125 in both cases. The eigen wavefunction for the 2-basis case is

\psi(x) = 0.797752 \sin(x) + 0.0145564 \sin(3x)

with the mean field

V_m(x) = 0.842569 \sin(x)

For 6 basis, the wave function is

\psi(x) = 0.797751 \sin(x) + 0.0145729 \sin(3 x) + 0.000789143 \sin(5 x) + 0.000125871 \sin(7 x) + 0.0000336832 \sin(9 x) + 0.0000119719 \sin(11 x)

We can see that the contributions of the higher-energy states get smaller and smaller; together they contribute only 0.03%. The following plot compares \phi_1(x) (orange) and \psi(x) (blue). We can see that the wave function becomes a little more spread out.

psi.PNG

If we increase the strength of the mutual interaction by 10 times, the wave function becomes more spread out.

psi3.PNG

To cross-check the result, we can take the mean field and solve the Schrödinger equation using the variational method. The method gives the ground-state energy 1.7125 and converges.

vm.PNG

It is interesting that the variational method needs 4 or 5 basis functions to converge, while the Hartree method only needs 2.

Finally, we substitute the wave function back into the Schrödinger equation; it turns out the equation is not satisfied exactly. The total wave function is

\Psi(x_1, x_2) = \psi(x_1) \psi(x_2)

The Schrodinger equation is

\displaystyle  H \Psi = -\frac{\hbar^2}{2m}\left(\frac{d^2}{dx_1^2} + \frac{d^2}{dx_2^2}\right) \Psi(x_1,x_2) + G(x_1,x_2)\Psi(x_1,x_2) = E \Psi(x_1,x_2)

However, the left side and the right side are not equal pointwise, but the integrated relation

\langle \Psi |H|\Psi\rangle = E \langle \Psi|\Psi \rangle

holds.

I plot \Psi H \Psi (orange) and E \Psi^2 (blue); their difference is the following.

diff.PNG

We can imagine that the integral of the bottom plot is zero. I don’t know the reason, but I guess it is not the number of basis functions, since the higher-energy states contribute only 0.03%. I guess it is because the energy, rather than the wave function, is the quantity being optimized. And I am not sure that the converse of the following is true:

H|\Psi\rangle = E |\Psi\rangle  \rightarrow \langle \Psi|H|\Psi \rangle = E \langle \Psi|\Psi \rangle


For the case of many particles, G_0 has to be summed over all the other particles. When all particles are in the same state, the sum is simple. For particles distributed among different states, in general, the sum is

\displaystyle G_0(x_1) = \sum_{k\neq i} \int G(x_1,x_2) (\psi_k(x_2))^2 dx_2

Here, the sum excludes the particle at x_1, which is in state i. For example, suppose the system consists of 2 particles in state-1 and 2 particles in state-2. The mean field for a particle in state-1 is constructed from 1 particle in state-1 and 2 particles in state-2; if the same mean field were used for state-2, the energy of state-2 would not be correct.

For a particle in state-2, the mean field should instead be constructed from 2 particles in state-1 and 1 particle in state-2.

For higher states without any particle, the mean field is built from 2 particles in state-1 and 2 particles in state-2.

In general, the mean field is constructed using the other particles.

This fixed-mean-field method is simple, but we have to define the mean field for each state and calculate again and again.


Algorithm of Wavelet Transform (with Qt class)


There are many kinds of wavelet transforms, and I think the names are quite confusing.

For instance, there are continuous and discrete wavelet transforms, in which “continuous” and “discrete” refer to the wavelet parameters, not to the data itself. Therefore, for discrete data there are “continuous” and “discrete” wavelet transforms, and for functions there are also “continuous” and “discrete” wavelet transforms.

Here, we will focus on the discrete wavelet transform for functions first. This discrete wavelet transform is also called the wavelet series, which expresses a compactly supported function as a series of wavelets.

For simplicity, we also focus on orthonormal wavelet.

As the wavelets span the entire space, any compactly supported function can be expressed as

\displaystyle f(t) = \sum_{j,k} \left<f(t)|\psi_{j,k}(t)\right> \psi_{j,k}(t)

\psi_{j,k}(t) = 2^{j/2} \psi(2^j t - k)

where j, k are integers.


Now we move to the discrete wavelet transform for discrete data. Since the data are discrete, we can imagine that only the points t_n = t_0 + n \Delta are known, with finitely many n.

\displaystyle f_n = f(t_n) = \sum_{j,k} \left<f_n|\psi_{j,k}(t_n) \right> \psi_{j,k}(t_n)

the integration becomes a finite sum.

Without loss of generality, we can set t_0 = 0, \Delta = 1, and the time axis becomes an integer axis. We find that j \leq 0, as the wavelet can only be expanded, not shrunk. Because there is a finite number of data points, i.e. n < \infty, we have -\log_2(n) < j \leq 0.

However, this double summation for each f_n is very time consuming. There is a fast discrete wavelet transform. Before we continue, we must study the wavelet.


From the last post, we know that a scaling function that generates an MRA must satisfy:

\displaystyle \phi(t) = \sum_{k} g_0(k) \phi(2t-k)

\left<\phi(t-k) | \phi(t-k') \right> = \delta_{kk'}

where k is an integer. The set of shifted scaling functions spans a space V_0. For the wavelet,

\displaystyle \psi(t) = \sum_{k} g_1(k) \phi(2t-k)

\left<\psi(t-k) | \psi(t-k') \right> = \delta_{kk'}

The set of shifted wavelets spans a space W_0 with W_0 \perp V_0, so that

\left<\phi(t-k)|\psi(t-k') \right> = 0

Since the wavelet is generated from the scaling function, we expect the coefficients g_0(k) and g_1(k) to be related. In fact, the relationship for an orthonormal scaling function and wavelet is

g_1(k) = (-1)^k g_0(1-k)


Discrete data x_i can be decomposed into the MRA spaces. We start from the largest space V_0, where the wavelet is most shrunken,

\displaystyle x_i = \sum_{k} v_{0,k} \phi(i-k)

To decompose into the V_{-1} and W_{-1} spaces, we can use the nested property of the MRA spaces: \phi(2t) can be decomposed into \phi(t-k) and \psi(t-k),

\displaystyle \phi(2t-l) = \sum_{k} h_0(2k-l) \phi(t-k) + h_1(2k-l) \psi(t-k)

where (given that \phi(t) and \psi(t) are orthonormal),

h_0(2k-l) = \left< \phi(2t-l) | \phi(t-k) \right>

h_1(2k-l) = \left< \phi(2t-l) | \psi(t-k) \right>

Therefore, using the coefficients h_0 and h_1, the coefficients v_{s,k} can be decomposed as

\displaystyle v_{s-1,k} = \sum_{l} h_0(2k-l) v_{s,l}  

\displaystyle w_{s-1,k} = \sum_{l} h_1(2k-l) v_{s,l}

in graphic representation

h1.PNG

This is a fast discrete wavelet transform.


Due to the nested spaces of the MRA, we also expect the coefficients h_0 and h_1 to be related to g_0. For an orthonormal wavelet,

\displaystyle h_0(k) = \frac{1}{2} g_0(-k)

\displaystyle h_1(k) = \frac{1}{2} (-1)^{k} g_0 (k+1)

Since g_0 has only finitely many nonzero coefficients, so do g_1, h_0, and h_1. That greatly reduces the computational cost of the discrete wavelet transform.


To reconstruct the discrete data x_i, we do not need to use

\displaystyle v_{s+1,l} = \sum_{k} v_{s,k} \phi(l - k) + w_{s,k} \psi(l-k)

Using the nesting of the MRA spaces, \psi(t) = \sum_{k} g_1(k) \phi(2t-k),

\displaystyle v_{s+1,l} = \sum_{k} g_0(l-2k) v_{s,k} + g_1(l-2k) w_{s,k}

in graphical representation,

h2.PNG
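The decomposition and reconstruction steps can be made concrete with the Haar wavelet, where g_0 = (1, 1); this choice is my own illustration, not from the post. With the relations h_0(k) = g_0(-k)/2, h_1(k) = (-1)^k g_0(k+1)/2, and g_1(k) = (-1)^k g_0(1-k), the sums over l reduce to pairwise averages and differences (treating the samples as already living in V_0, as the attached code does):

```python
import numpy as np

def decompose(v):
    """One level of the fast DWT: v_{s,l} -> (v_{s-1,k}, w_{s-1,k})."""
    # For Haar, h0 = (1/2, 1/2) and h1 = (1/2, -1/2), so the convolution
    # with downsampling becomes pairwise averages and differences.
    v = np.asarray(v, float)
    return (v[0::2] + v[1::2]) / 2, (v[0::2] - v[1::2]) / 2

def reconstruct(v, w):
    """Inverse step: v_{s+1,l} = sum_k g0(l-2k) v_k + g1(l-2k) w_k."""
    out = np.empty(2 * len(v))
    out[0::2] = v + w                 # g0(0) = g1(0) = 1
    out[1::2] = v - w                 # g0(1) = 1, g1(1) = -1
    return out

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 7.0, 3.0, 3.0])
v1, w1 = decompose(x)                 # local averages and details
assert np.allclose(reconstruct(v1, w1), x)   # perfect reconstruction
```

Applying `decompose` again on `v1` walks down to V_{-2} and W_{-2}, which is exactly the cascade in the diagram above.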


I attached the wavelet transform class for Qt; feel free to modify it.

https://drive.google.com/file/d/0BycN9tiDv9kmMVRfcVFMbDFWS0k/view?usp=sharing

https://drive.google.com/file/d/0BycN9tiDv9kmUmlmNk1kaVJCbEU/view?usp=sharing

In the code, the data are not transformed into the MRA space; the code treats the data as already being in the MRA space. Some people say this is a “crime”, but for the sake of speed, there is no need to map the original discrete data into the MRA space. I agree, though, that for a continuous function we must map into the MRA space.

 

 

Wavelet Analysis or MRA


Although the Fourier transform is a very powerful tool for data analysis, it has limitations due to its lack of time information. From a physics point of view, any time-series data lives in time-frequency space. Since the Fourier transform has very narrow frequency resolution, according to the uncertainty principle the time resolution must be very poor; therefore, no time information can be given by the Fourier transform.

Usually, such a limitation is not a problem. However, when analyzing music, the long-term performance of a device, or a seismic survey, time information is crucial.

To overcome this difficulty, the short-time Fourier transform (STFT) was developed. The idea is to apply a time window (a piecewise uniform function, or a Gaussian) to the data first, then take the FT. By applying the time window at different times of the data (i.e. shifting the window), we get the time information. However, since the frequency range of the time window always covers the low frequencies, the high-frequency signal is hard to extract.

To improve the STFT, the time window can be scaled (usually by 2). When the time window is shrunk by a factor of 2, its frequency range is expanded by a factor of 2. If we subtract the frequency range of the time window from that of the shrunken time window, the high-frequency range is isolated.

To be clearer, let the time-window function be

\phi_{[0,1)}(t) = 1 , 0 \leq t < 1

its FT is

\hat{\phi}(\omega) = sinc(\pi \omega)

Let us also define a dilation operator

Df(t) = \sqrt{2} f(2t)

the factor \sqrt{2} is for normalization.

The FT of D\phi(t) has a wider frequency range, as in the following graph.

Capture.PNG

We can subtract the orange and blue curve to get the green curve. Then FT back the green curve to get the high-frequency time-window.

We can repeat the dilation, or anti-dilation, infinitely many times. Because of this, we can drop the FT basis \exp(-2\pi i t \omega) and use only the low-pass time window to see the low-frequency behaviour of the data, and the high-pass time window to see the high-frequency behaviour. Now we have stepped into multi-resolution analysis (MRA).

In MRA, the low-pass time-window is called scaling function \phi(t) , and the high-pass time-window is called wavelet \psi(t).

Since the scaling function is created by dilation, it has the property

\phi(t) = \sum_{k} g_{0}(k) \phi(2t-k)

where k is an integer. This means the vector space spanned by \{\phi(t-k)\}_{k} = V_0 is a subspace of the dilated space DV_0 = V_1. The dilation can go on forever, so that the whole frequency domain is covered by V_{\infty}.

Also, the space spanned by the wavelets, \{\psi(t-k)\} = W_0, is a subspace of V_1. Thus, we can picture the structure of the MRA as:

Capture.PNG

Therefore, any function f(t) can also be expanded in the wavelet spaces, i.e.

f(t) = \sum_{j,k} w_{j,k} 2^{j/2}\psi(2^j t - k)

where j, k are integers.

I know this introduction is very rough, but it gives a relatively smooth transition from the FT to the WT (wavelet transform), compared to the material available on the web.

 

 

 

 

Phase shift of elastic scattering


I found that the derivations in most “Google results” are not clear enough, so here is my derivation. Before proceeding, readers may want to review the previous post.

Most people start from,

u_l = A( \hat{h}_l^- - S_l \hat{h}_l^+ )

which annoys me because it is somehow not “natural”. Why is there a “minus” sign? Why is \hat{h}_l^- the first term? For myself, a more natural form is

u_l = a \hat{h}_l^+ + b \hat{h}_l^-

where a, b are complex numbers. But that is still not so natural, because in a numerical calculation, for simplicity, there are no complex numbers; we only have

u_l = \alpha \hat{j}_l + \beta \hat{n}_l

The first term is always there, as it is the free solution, bounded at r = 0; the second term is caused by the potential.


The goal is to find a solution take the form

\displaystyle \psi = A \left( e^{i \vec{k} \cdot \vec{r}} + f(\theta) \frac{e^{ikr}}{r} \right)

where the first term is the free wave and the second term is the scattered wave. The solution for elastic scattering is

\displaystyle \psi = \sum C_l P_l (\cos\theta) \frac{u_l}{kr} \rightarrow \sum C_l P_l(\cos\theta) (\alpha \hat{j}_l + \beta \hat{n}_l)

we used the substitution,

\displaystyle R_l(r) = \frac{u_l(\rho)}{\rho},  \rho = kr.

The radial function can be solved using the Runge-Kutta method on the equation,

\displaystyle \frac{d^2}{d\rho^2} u_l = \frac{2 m_\mu}{\hbar^2 k^2} (V-E) u_l + \frac{l(l+1)}{\rho^2} u_l

and the solution for u_l far away is,

u_l \rightarrow  \alpha \hat{j}_l + \beta \hat{n}_l.

The arrow means r \rightarrow \infty. So the problem is how to rewrite the solution; along the way, we will see how the phase shift and the S-matrix are found.


The free solution can be expanded in spherical waves,

\displaystyle e^{i \vec{k} \cdot \vec{r}} = \sum_l (2l+1) i^l P_l(\cos\theta) j_l(kr)

The spherical Bessel function j_l(kr) can be expressed in terms of the Hankel functions,

\hat{h}_l^{\pm} = \hat{n}_l \pm i \hat{j}_l \rightarrow e^{\pm i (kr - l \frac{\pi}{2})}

The + sign corresponds to the outgoing wave.


\displaystyle u_l \rightarrow (\alpha \hat{j}_l + \beta \hat{n}_l)

\displaystyle = \frac{\alpha}{2i} (\hat{h}_l^{+} - \hat{h}_l^{-}) + \frac{\beta}{2}(\hat{h}_l^{+} + \hat{h}_l^{-})

\displaystyle = \frac{\alpha - i \beta}{2i} (\hat{h}_l^{+} - \hat{h}_l^{-}) + \beta \hat{h}_l^{+}

\displaystyle = (\alpha - i \beta ) \left( \frac{\hat{h}_l^+ - \hat{h}_l^-}{2i} + \frac{\beta}{\alpha - i \beta} \hat{h}_l^+\right)

\displaystyle = (\alpha - i \beta ) \left( \hat{j}_l + \frac{\beta}{\alpha - i \beta} \hat{h}_l^+\right)

Since u_l should be normalized, we can set \alpha = \cos \delta and \beta = \sin\delta.

\displaystyle \frac{\beta}{\alpha - i \beta } = \sin(\delta) e^{i\delta}

Putting u_l back,

\displaystyle \psi \rightarrow \sum_l C_l P_l (\cos\theta)(\alpha - i \beta ) \left( \hat{j}_l + \sin(\delta) e^{i\delta} \hat{h}_l^+\right)

By setting

\displaystyle C_l = A i^l \frac{2l+1}{\alpha - i \beta} ,

we have that the first term is the free wave function. In the second term, \hat{h}_l^+ \rightarrow e^{i(kr - l \frac{\pi}{2})} / kr. Notice that

e^{i l \frac{\pi}{2}} = i^{-l}

That cancels the i^l term in C_l, and we have

 \displaystyle f(\theta) = \sum (2l+1) P_l (\cos\theta) \frac{\sin(\delta) e^{i\delta}}{k}


Some people write u_l in terms of \hat{h}_l^{\pm} and the S-matrix,

\displaystyle u_l = \frac{\alpha + i \beta} {2i} \hat{h}_l^+ - \frac{\alpha - i \beta}{2i} \hat{h}_l^-

\displaystyle = -\frac{\alpha - i \beta}{2i} \left( \hat{h}_l^- - \frac{\alpha + i \beta}{\alpha - i \beta} \hat{h}_l^+ \right)

\displaystyle = A' (\hat{h}_l^- - S_l \hat{h}_l^+)

where

\displaystyle S_l =\frac{\alpha + i \beta}{\alpha - i \beta} = e^{2i\delta} .

Remember that this is the S-matrix for elastic scattering.
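As a sketch of how \alpha, \beta, \delta, and S_l come out of a numerical u_l: integrate the l = 0 radial equation outward, then match u = \alpha \hat{j}_0 + \beta \hat{n}_0 = \alpha \sin(kr) + \beta \cos(kr) at two radii outside the potential. The square well, its depth, and the energy below are illustrative assumptions of mine (with \hbar^2/2m = 1), chosen because the s-wave phase shift is also known analytically there.

```python
import numpy as np

# Illustrative assumptions: hbar^2/2m = 1, so k = sqrt(E); attractive
# square well of depth V0 and radius a (not the values from the post).
E, V0, a = 2.0, 5.0, 1.0
k = np.sqrt(E)

def V(r):
    return -V0 if r < a else 0.0

def upp(r, u):                        # u'' = (V - E) u for l = 0
    return (V(r) - E) * u

# integrate outward by RK4 with u(0) = 0; u'(0) only sets the norm
h, r, u, du = 1e-3, 0.0, 0.0, 1.0
rs, us = [r], [u]
while r < 9.0:
    k1u, k1d = du,           upp(r, u)
    k2u, k2d = du + h*k1d/2, upp(r + h/2, u + h*k1u/2)
    k3u, k3d = du + h*k2d/2, upp(r + h/2, u + h*k2u/2)
    k4u, k4d = du + h*k3d,   upp(r + h,   u + h*k3u)
    u  += h*(k1u + 2*k2u + 2*k3u + k4u)/6
    du += h*(k1d + 2*k2d + 2*k3d + k4d)/6
    r  += h
    rs.append(r); us.append(u)

# match u = alpha sin(kr) + beta cos(kr) at two radii outside the well
r1, r2 = 8.0, 8.9
u1, u2 = np.interp([r1, r2], rs, us)
M = np.array([[np.sin(k*r1), np.cos(k*r1)],
              [np.sin(k*r2), np.cos(k*r2)]])
alpha, beta = np.linalg.solve(M, [u1, u2])
delta = np.arctan2(beta, alpha)       # tan(delta) = beta / alpha
S = np.exp(2j * delta)                # |S| = 1 for elastic scattering

# analytic check for the square well: K cot(Ka) = k cot(ka + delta)
K = np.sqrt(E + V0)
delta_true = np.arctan(k * np.tan(K*a) / K) - k*a
print(delta % np.pi, delta_true % np.pi)   # these agree (mod pi)
```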

 

Solving radial SE numerically


The time-independent Schrödinger equation is

(-\frac{\hbar^2}{2m}\nabla^2 + V ) \Psi = E \Psi

Using the Laplacian in spherical coordinates and setting \Psi = R Y,

\nabla^2 R Y - \frac{2m}{\hbar^2}(V-E) R Y = 0

\nabla^2 = \frac{1}{r^2}\frac{d}{dr}(r^2 \frac{d}{dr}) - \frac{1}{r^2} L^2

The angular part,

L^2 Y = l(l+1) Y

The radial part,

\frac{d}{dr}(r^2\frac{dR}{dr}) - l(l+1)R - \frac{2mr^2}{\hbar^2}(V-E) R = 0

To simplify the first term,

R = \frac{u}{r}

\frac{d}{dr}(r^2 \frac{dR}{dr})= r \frac{d^2u}{dr^2}

An easier form of the radial equation is,

\frac{d^2u}{dr^2} - \frac{l(l+1)}{r^2} u - \frac{2m}{\hbar^2} (V-E) u = 0

The effective potential is

U = V + \frac{\hbar^2}{2m} \frac{l(l+1)}{r^2}

\frac{d^2u}{dr^2} + \frac{2m}{\hbar^2} (E - U) u = 0

We can use the Runge-Kutta method to solve the equation numerically.

RK4.PNG

The initial condition has to be u(0) = 0. (Homework: why?)

I used Excel to calculate a scattering state with L = 0 at an energy of 30 MeV. The potential is a Woods-Saxon of depth 50 MeV, radius 3.5 fm, and diffuseness 0.8 fm.

e2.PNG
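The same integration is easy to sketch outside a spreadsheet. The value \hbar^2/2m = 20.7 MeV fm^2 (a single nucleon) and the step size are my assumptions; the Woods-Saxon parameters are those quoted above.

```python
import numpy as np

# RK4 for u'' = [(V - E)/(hbar^2/2m) + l(l+1)/r^2] u, here with L = 0.
hbar2_2m = 20.7                      # MeV fm^2, assumed single-nucleon value
E, L = 30.0, 0

def V(r):                            # Woods-Saxon: depth 50, R = 3.5, a = 0.8
    return -50.0 / (1.0 + np.exp((r - 3.5) / 0.8))

def upp(r, u):                       # u''(r)
    cen = L*(L + 1)/r**2 if r > 0 else 0.0
    return ((V(r) - E)/hbar2_2m + cen) * u

h, r, u, du = 0.01, 0.0, 0.0, 1.0    # u(0) = 0; u'(0) only sets the norm
rs, us = [r], [u]
while r < 20.0:
    k1u, k1d = du,           upp(r, u)
    k2u, k2d = du + h*k1d/2, upp(r + h/2, u + h*k1u/2)
    k3u, k3d = du + h*k2d/2, upp(r + h/2, u + h*k2u/2)
    k4u, k4d = du + h*k3d,   upp(r + h,   u + h*k3u)
    u  += h*(k1u + 2*k2u + 2*k3u + k4u)/6
    du += h*(k1d + 2*k2d + 2*k3d + k4d)/6
    r  += h
    rs.append(r); us.append(u)       # u oscillates: a scattering state
```

For the bound-state case below, the same loop is run at trial energies E < 0 until u stays flat at large r.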

Another example is a bound state with L = 0. I have to search for the energy so that the wavefunction stays flat at large distance. The outermost eigenenergy is -7.27 MeV. From the radial function, we know it is a 2s orbit.

ex1.PNG

A reminder for the use of Dirac notation and Translation


How the momentum operator \hat{P} acts on a position ket or bra is tricky.

\left<x\right|\hat{P} A B = -i\frac{d}{dx} (\left<x\right|A) B

B A \hat{P}\left|x\right> = B i\frac{d}{dx} ( A \left|x\right>)

I added A and B to emphasize the tricky point, where A or B can be an operator, a ket, a bra, or a variable. We can see that, after first quantization, the position bra or ket is always waiting for something, and the derivative differentiates the whole thing, not only the bra or ket. Misuse will give an incorrect result. For example, consider the non-commuting property of the position and momentum operators,

\left<x\right|\hat{P} \hat{X} \left|x\right>= -i\frac{d}{dx} (\left<x\right|X)\left|x\right> =-i\frac{d}{dx} (\left<x\right| x)\left|x\right> 

=-i x \frac{d}{dx} (\left<x\right|)\left|x\right> - i  = x \left<x\right|\hat{P}\left|x\right> - i = \left<x\right|(\hat{X}\hat{P} - i)\left|x\right>

If you use it wrongly, you may have something like

\left<x\right|\hat{P} \hat{X}\left|x\right> = -i\frac{d\left<x\right|}{dx} \hat{X}\left|x\right>

or

\left<x\right|\hat{P} \hat{X}\left|x\right> = -i\frac{d}{dx} (\left<x\right|\hat{X}\left|x\right>) = -i\frac{d}{dx}(x) = -i


The translation operator.

In many text book, the translation operator is stated as

D(\epsilon) = 1 - i \hat{P} \epsilon,

where \hat{P} is momentum operator or the generator of displacement.

*********************

Actually, there is a reason. Notice that the translation operator is a unitary operator,

D(\epsilon)D^{\dagger}(\epsilon) = 1

because

\langle x|D^{\dagger} D |x\rangle = 1

Then, since any unitary operator can be written as

D(\epsilon) = \exp(-i \epsilon \hat{Q}) \sim 1 - i \epsilon \hat{Q} +....

where \hat{Q} is a Hermitian operator.

But what is \hat{Q}? Since \hat{Q} is Hermitian, it has eigenkets and eigenvalues,

\hat{Q} \left|q\right> = \left|q\right> q.

Then, we can check

\left<x\right|D(\epsilon) \left|q\right> = \left<x\right|\exp(-i \epsilon \hat{Q}) \left|q\right> = \left<x\right|\exp(-i \epsilon q) \left|q\right> =\exp(-i \epsilon q) \left<x|q\right>

also

\left<x\right|D(\epsilon) \left|q\right> = \left<x-\epsilon|q\right> = \exp(-i \epsilon q) \left<x|q\right>

We define the wave function \psi_q(x) = \left<x|q\right>,

\psi_q(x-\epsilon) = \exp(-i\epsilon q ) \psi_q(x)

Then, we can have a derivative of the wave function

\frac{d}{dx}\psi_q(x) =\lim_{\epsilon\to 0} \frac{\exp(i\epsilon q ) -1 }{\epsilon} \psi_q(x) = i q \psi_q(x)

The solution is \psi_q(x) = \exp(iqx), which is a plane wave with momentum q. Therefore, the operator \hat{Q} is the momentum operator, and it really is the generator of a translation of the wave function, \psi_q(x) \rightarrow \psi_q(x-\epsilon). To find the form of the momentum operator acting on a position bra,

\left<x|\hat{Q}|q\right> = q \exp(i q x) = -i \frac{d}{dx}( \exp(iqx)) = -i\frac{d}{dx} (\left<x|q\right>)

\left<x\right|\hat{Q} A= -i\frac{d}{dx} (\left<x\right| A)

*********************************
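As a quick numerical illustration (my own, not from the derivation above): the translation operator \exp(-i\epsilon\hat{P}) can be applied in Fourier space, where \hat{P} acts as multiplication by q, and it indeed shifts a wave packet.

```python
import numpy as np

# Apply D(eps) = exp(-i*eps*P) in Fourier space: P -> q there, so the
# operator is just a phase factor exp(-i*eps*q) on each Fourier mode.
N, Lbox = 1024, 40.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
q = 2*np.pi*np.fft.fftfreq(N, d=Lbox/N)      # momentum grid

psi = np.exp(-(x - 1.0)**2)                  # Gaussian packet at x = 1

eps = 2.0
shifted = np.fft.ifft(np.exp(-1j*eps*q) * np.fft.fft(psi))

# (D(eps) psi)(x) = psi(x - eps): the Gaussian has moved from x = 1 to x = 3
```

The imaginary part of `shifted` stays at round-off level, since the phase factor preserves the Hermitian symmetry of the spectrum of a real signal.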

Another, more mathematical, way is to use a Taylor series,

D(\epsilon) \left|x\right> = \left|x+\epsilon\right> = \left|x\right> + \epsilon \frac{d}{dx}\left|x\right> + ...

Comparing with the exponential, then

\hat{Q}\left|x\right> = i\frac{d}{dx} ( \left|x\right>) .

\left<x|\hat{Q}|q\right> = -i \frac{d}{dx}(\left<x|q\right>) =\left<x|q\right> q.

Time propagation and rotation can be done in the same way. The rotation around the z-axis is trivial; a general rotation is not so easy in detail, but the idea generalizes: the rotation generator is \vec{\hat{J}}\cdot \vec{n}.

Why is the general translation operator in exponential form? Because it is a unitary operator. Why is it unitary? Because of symmetry: translating back and forth results in no change. In general, if we have a quantity that can be “translated” and the translation is a symmetry, then the generator must take the same exponential form.

It is interesting that the translation generator \hat{Q} is a derivative with respect to the position \alpha, where \alpha is a generalized coordinate. When I started QM, I always felt insecure about changing position this way, because classically a change of position means motion in time, and there is no time in \hat{P}. However, look at the rotation generator. The same derivation (between the *****) applies to the rotation about the z-axis if we replace x \rightarrow \phi, the azimuthal angle. At the end, we recognize the generator \hat{Q} = -i\frac{d}{d\phi} = L_z, the angular momentum operator along the z-axis; classically, the operator \frac{d}{d\phi} is already familiar from EM in spherical coordinates (the wave equation). The key point is that the rotation about the z-axis takes the form \frac{d}{d\phi}!

There are still some things to clarify: for example, the time-translation operator is unitary, but the time-reversal operator is anti-unitary. Also, the details of the general rotation operator \hat{D}_n(\epsilon) \left|\vec{r}\right> = \left|\vec{r'}\right>: we know the general rotation operator takes a matrix form, but how?

 

 

Gamma Transition


 

Gamma decay brings an excited nucleus to a lower-energy state by emitting a photon. The photon carries angular momentum, with an intrinsic spin of 1 and negative intrinsic parity. Thus, we immediately have constraints from the conservation laws:

E_i = E_f + \hbar \omega

J_f^{\pi_f} = J_i^{\pi_i} + L

where E is the energy of the nucleus, and \hbar \omega and L are the photon energy and angular momentum, respectively. We also have to consider that the parities of electric and magnetic transitions are different.


 

To calculate the transition rate, we can start from the classical equation for the power emitted by an antenna; since the photon energy is quantized, the transition rate [the number of photons emitted per unit time] is the power divided by the photon energy.

T(qL) = \frac{2(L+1)}{\epsilon_0 L [(2L+1)!!]^2 \hbar} \left(\frac{\omega}{c}\right)^{2L+1} B(qL)

where qL is the electromagnetic multipole with angular momentum L, and B(qL) is the reduced transition probability, equal to the squared magnitude of the transition matrix element M_{fi}(qL).


In the electric transition, the multipole is

qL = e r^L

We assume the transition is conducted by a single nucleon while the rest of the nucleus is unaffected. The transition matrix element can then be written as

M_{fi}(qL) = \left<j_f m_f|e r^L Y_{LM}(\Omega)| j_i m_i\right>

The single particle wave function can be written as

\left|j m\right>= R_{nl}(r) [Y_l \times \chi_{1/2}]_{jm}

The matrix elements becomes,

M_{fi}(qL) = e \int_{0}^{\infty} R_{n_f l_f}^* r^L R_{n_i l_i} r^2 dr \times \left<Y_{LM}\right>

To evaluate the radial integral, we make another assumption that the nucleus is a sphere of uniform density with radius R=r_0 A^{1/3},

R_{nl}(r) = \frac{\sqrt{3}}{R^{3/2}}, for r<R, so that \int_{0}^{R} |R_{nl}(r)|^2 r^2 dr = 1

Then the radial integral is

\left<r^L \right>=\frac{3}{R^3}\int_{0}^{R} r^{L+2} dr = \frac{3}{L+3} r_0^L A^{L/3}

The reduced transitional probability

B_{sp}(qL)=\sum \limits_{M m_f} |\left<j_f m_f|e r^L Y_{LM}|j_i m_i\right>|^2

= e^{2} \left< r^{L} \right> ^{2} \sum \limits_{M m_f} |\left<Y_{LM}\right>|^2

The angular part can be assumed to be 1/4\pi, as the total solid angle is 4\pi. Thus, with these three assumptions, we have the Weisskopf single-particle estimate for the L-pole reduced electric transition probability,

B_W(EL) = \frac{1}{4\pi}(\frac{3}{L+3})^2 r_0^{2L} A^{2L/3} [e^2 fm^{2L}]


For the magnetic transition, we have to take into account the spin and orbital angular momentum. The single-particle reduced transition probability is

B_{sp}(ML) = \sum \limits_{M m_f} |\left<j_f m_f| qL |j_i m_i\right>|^2

the result,

B_{sp}(ML)=L(2L+1) \left< r^{(L-1)} \right> ^2

\sum \limits_{M m_f} ((g_s - \frac{2g_l}{L+1}) \left< [ Y_{L-1} \times \vec{s} ]_{LM} \right> \left< [ Y_{L-1} \times \vec{j} ]_{LM} \right> )^2

The term

L(2L+1)\left(g_s - \frac{2g_l}{L+1}\right)^2 \sim 10

and the rest of the angular part is assumed to be 1/4\pi again; then

B_W(ML) = \frac{10}{\pi}(\frac{3}{L+3})^2 r_0^{(2L-2)} A^{(2L-2)/3} \mu_N^2 fm^{2L-2}

and notice that \mu_N = e\hbar / (2m_p).


 

Some results can be deduced from the calculation:

B_{sp}(ML)/B_{sp}(EL) \sim 0.3 A^{-2/3}

B_{sp}(EL)/B_{sp}(E(L-1)) \sim \frac{1}{7} \times 10^7 A^{-2/3} E_\gamma^{-2}

T(E1) = 1.0 \times 10^{14} A^{2/3} E_\gamma^3

T(E2) = 7.3 \times 10^{7} A^{4/3} E_\gamma^5

T(E3) = 34 A^{2} E_\gamma^7

T(M1) = 3.1 \times 10^{13} A^{0} E_\gamma^3

T(M2) = 2.2 \times 10^{7} A^{2/3} E_\gamma^5

T(M3) = 10 A^{4/3} E_\gamma^7

A deviation from the single-particle limit indicates a strong collective state.
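The two Weisskopf estimates are easy to tabulate. A small sketch, assuming r_0 = 1.2 fm (the derivation above leaves r_0 symbolic):

```python
import math

def B_W_EL(L, A, r0=1.2):
    """Weisskopf estimate B_W(EL) in e^2 fm^(2L), assuming r0 = 1.2 fm."""
    return (1/(4*math.pi)) * (3/(L + 3))**2 * r0**(2*L) * A**(2*L/3)

def B_W_ML(L, A, r0=1.2):
    """Weisskopf estimate B_W(ML) in mu_N^2 fm^(2L-2)."""
    return (10/math.pi) * (3/(L + 3))**2 * r0**(2*L - 2) * A**((2*L - 2)/3)

for L in (1, 2, 3):
    print(L, B_W_EL(L, 100), B_W_ML(L, 100))   # A = 100 as an example
```

For instance, B_W(E1) works out to 0.0645 A^{2/3} e^2 fm^2 and B_W(M1) to 1.79 \mu_N^2, independent of A.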


 

0 \rightarrow 0, forbidden

1^+ \rightarrow 0^+, M1

2^+ \rightarrow 0^+,  E2

 

 

 

 
