Momentum Matching in Transfer Reaction


In a transfer reaction, the kinematics determines the scattering angles and momenta, but it does not tell us the probability of the reaction. Not all angles have the same cross section, because when a particle is transferred to or picked up from a nucleus, the momentum has to be matched. It is like a spaceship that needs the proper angle and speed to orbit the Earth; if the incident angle or speed is not right, the spaceship will either fly away or crash into the Earth.

Suppose our reaction is A(B,1)2, where A is the incident particle, B is the target, and 1, 2 are the outgoing particles. The momentum transfer from A via particle 2 to B or 1 is

\vec{q} = \vec{p}_2 - \vec{p}_A = \vec{p}_f - \vec{p}_i

where \vec{p}_i is initial momentum and \vec{p}_f is final momentum.

Assume the reaction takes place at the surface of nucleus B. The maximum angular momentum it can create is

\vec{L} = \vec{r} \times \vec{q} \rightarrow L = r q

In QM, the angular momentum must be L = \sqrt{l(l+1)} \hbar . Here we pause for unit conversion. In this blog we usually use nuclear units: momentum in [MeV/c] and \hbar = 197.327 \textrm{ MeV fm/c}. Thus, to calculate the angular momentum, simply multiply the momentum in MeV/c by the nuclear radius in fm. The radius of a nucleus is roughly r = 1.25 A^{1/3} \textrm{ fm}.

So, if \sqrt{(l+1)(l+2)} \hbar > r|q| > \sqrt{l(l+1)} \hbar , the possible angular momentum is \sqrt{l(l+1)} \hbar , i.e. the l-orbit.

In the transfer reaction, the equations for \vec{p}_f and \vec{p}_i are known. It is straightforward to calculate r q / \hbar, plot it against the scattering angle, and look for the values \sqrt{l(l+1)} .

Or, we can scale the y-axis by

\displaystyle y' = \sqrt{\left(q\frac{r}{\hbar}\right)^2 + \frac{1}{4}} - \frac{1}{2}

The new y-axis then gives l directly.
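
In code, this rescaling is just the inversion of l(l+1) = (qr/\hbar)^2, in the nuclear units above (a minimal sketch):

import numpy as np

def l_transfer(q, r, hbarc=197.327):
    # q in MeV/c, r in fm; returns the l-value solving l(l+1) = (q r / hbar)^2
    x = q * r / hbarc
    return np.sqrt(x**2 + 0.25) - 0.5

print(l_transfer(q=150.0, r=1.25 * 16**(1.0 / 3.0)))   # illustrative q and A = 16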

[Figure: q r / \hbar (rescaled to l) versus scattering angle]


We can also plot the contours of momentum matching on a p_f versus scattering angle plot. The momentum-matching condition is

\displaystyle q^2 = \frac{l(l+1)\hbar^2}{r^2} = p_f^2 + p_i^2 - 2 p_f p_i \cos(\theta)

where \theta is the scattering angle of the outgoing particle (particle 2). Solving for p_f,

\displaystyle p_f = p_i \cos(\theta) \pm \sqrt{ \left( \frac{l(l+1) \hbar^2}{r^2} \right) - p_i^2 \sin^2(\theta) }

We can plot these contours for different l and then overlay them with the momentum and scattering angle of the outgoing particle 2.
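
Here is a minimal sketch of such a contour calculation. The incident momentum p_i = 200 MeV/c and mass number A = 16 are purely illustrative choices, not taken from a particular reaction; NaN marks angles where the momentum cannot be matched for that l.

import numpy as np

hbarc = 197.327                              # MeV fm
p_i   = 200.0                                # illustrative incident momentum (MeV/c)
r     = 1.25 * 16**(1.0 / 3.0)               # illustrative matching radius (fm)

def p_f_contour(theta, l):
    # both roots of q^2 = l(l+1) hbar^2 / r^2 at scattering angle theta (rad)
    q2   = l * (l + 1) * hbarc**2 / r**2
    disc = q2 - p_i**2 * np.sin(theta)**2
    root = np.sqrt(np.where(disc >= 0.0, disc, np.nan))
    return p_i * np.cos(theta) - root, p_i * np.cos(theta) + root

theta = np.radians(np.linspace(0.0, 30.0, 7))
for l in (1, 2, 3, 4):
    print(l, np.round(p_f_contour(theta, l)[1], 1))   # the "+" branch for each l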

[Figure: momentum-matching contours for different l]

[Figure: contours overlaid with the outgoing momentum versus scattering angle]

A correction: the y-axis should be p_2, the momentum of particle 2.

 


Kinematics of Transfer Reaction


Surprisingly, there was no post yet on the kinematics of transfer reactions!


Suppose the reaction is A(B, 1)2, where A is the incident particle, B is the target nucleus at rest, and 1, 2 are the outgoing particles.

The Q-value of the reaction is the difference between the initial and final masses,

Q = m_A+ m_B - m_1-m_2

When the incident energy is too low, below a threshold set by the Q-value (we will see the relationship later), the reaction is impossible, because nuclei 1 and 2 cannot be created.

The equation for the reaction is

P_A + P_B = P_1 + P_2

where the P_i are 4-momenta. The number of degrees of freedom is 2: the total number of unknowns is 8 from P_1 and P_2, the masses of particles 1 and 2 give 2 constraints, and the conservation equations give 4 constraints.

P_A = ( E_A , 0, 0, k_A) = ( m_A + T_A, 0, 0, \sqrt{2 m_A T_A + T_A^2})\\ P_B = (m_B, 0, 0, 0)

where T_A is the incident kinetic energy. We first transform the system into the center of momentum (CM) frame. Since the total momentum is zero in the CM frame, we construct the total 4-momentum

P_{cm} = P_A+P_B = ( m_A+ m_B + T_A, 0, 0, k_A)

The Lorentz beta is \beta = k_A/(m_A+m_B+T_A) .

The invariant mass of the total 4-momentum is unchanged by the boost and is equal to the total energy of the particles in the CM frame. We can check:

P_A' = ( \gamma E_A - \gamma \beta k_A, 0, 0, -\gamma \beta E_A + \gamma k_A) \\ P_B' = ( \gamma m_B , 0, 0, -\gamma \beta m_B )

The total energy in CM frame is

E_{tot} = \gamma( m_A+m_B + T_A) - \gamma \beta k_A = \sqrt{(m_A+m_B+T_A)^2 - k_A^2}


After the reaction, the momenta of particles 1 and 2 in the CM frame have the same magnitude and opposite directions. Balancing energy and momentum, the momentum is

\displaystyle k  = \frac{1}{2 E_{tot}} \sqrt{(E_{tot}^2 - (m_1+m_2)^2)(E_{tot}^2 - (m_1-m_2)^2)}

In this equation, if either bracket is negative, the momentum is not real and the reaction cannot happen. The smaller bracket is

E_{tot}^2 - (m_1+m_2)^2 = (m_A+m_B+T_A)^2 - k_A^2 - (m_1+m_2)^2  \\ =(m_A+m_B+T_A)^2 - k_A^2 - (m_A+m_B-Q)^2 \\ = 2m_BT_A + 2(m_A + m_B)Q - Q^2

Setting this quantity to zero,

\displaystyle 2m_BT_A + 2(m_A + m_B)Q - Q^2 = 0 \\ T_A = \frac{Q^2 - 2(m_A+m_B)Q}{2m_B}  \\ = \frac{1}{2m_B} ( (Q-m_A-m_B)^2 - (m_A+m_B)^2) \\ = \frac{1}{2m_B} ( (m_1+m_2)^2 - (m_A+m_B)^2)

Thus, if T_A is smaller than that value, the reaction is impossible.
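
A quick numerical form of this threshold (a sketch; masses in MeV, and the example numbers are purely illustrative):

def threshold_TA(mA, mB, m1, m2):
    # T_A at which E_tot^2 - (m1 + m2)^2 = 0; below this the CM momentum k is not real
    return ((m1 + m2)**2 - (mA + mB)**2) / (2.0 * mB)

# illustrative masses with Q = -5 MeV: the threshold comes out positive
print(threshold_TA(mA=2000.0, mB=10000.0, m1=1000.0, m2=11005.0))   # about 6.0 MeV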


The 4-momenta of the outgoing particles can be constructed.

P_1' = ( \sqrt{m_1^2 + k^2} , k \cos(\phi) \sin(\theta) , k \sin(\phi) \sin(\theta) , k \cos(\theta)) \\ P_2' = ( \sqrt{m_2^2 + k^2} , -k \cos(\phi) \sin(\theta) ,- k \sin(\phi) \sin(\theta) , -k \cos(\theta))

Back to Lab frame,

P_1 = \begin{pmatrix} \gamma \sqrt{m_1^2 + k^2}  + \gamma \beta k \cos(\theta) \\ k \cos(\phi) \sin(\theta) \\ k \sin(\phi) \sin(\theta) \\ \gamma \beta \sqrt{m_1^2 + k^2}  + \gamma k \cos(\theta)\end{pmatrix}

For simplicity, set \phi = 0,

P_1 = \begin{pmatrix} \gamma \sqrt{m_1^2 + k^2}  + \gamma \beta k \cos(\theta) \\ k \sin(\theta) \\ 0 \\ \gamma \beta \sqrt{m_1^2 + k^2}  + \gamma k \cos(\theta) \end{pmatrix}

We can see that the locus in momentum space is

( k \sin(\theta) , 0, \gamma \beta \sqrt{m_1^2 + k^2}  + \gamma k \cos(\theta))

This is an ellipse with semi-axes k (1, \gamma) , centered at (0, \gamma \beta \sqrt{m_1^2 + k^2}) . If the relativistic effect is small, it is a circle.

When the long semi-axis \gamma k of the ellipse is longer than the center position \gamma \beta \sqrt{m_1^2 + k^2} , the lab scattering angle can span the whole circle, i.e.

\displaystyle \gamma \beta \sqrt{m_1^2 + k^2} < \gamma k \rightarrow \gamma \beta m_1 < k

Here we can define \gamma \beta m_1 = k_\beta(1) ; this is the momentum "gained" from shifting from the CM frame to the lab frame. By comparing k_\beta(1) or k_\beta(2) with k , we know the span of the scattering angle.
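
Putting the whole construction together, here is a minimal sketch that builds the CM momentum and boosts particle 1 back to the lab frame for a chosen CM angle. All masses and the beam energy below are purely illustrative numbers (MeV, MeV/c), not taken from a particular reaction.

import numpy as np

def two_body_lab(mA, mB, m1, m2, TA, theta_cm):
    # incident momentum, invariant mass, and boost of the CM frame
    kA    = np.sqrt(2.0 * mA * TA + TA**2)
    Etot  = np.sqrt((mA + mB + TA)**2 - kA**2)
    beta  = kA / (mA + mB + TA)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    # CM momentum of the outgoing pair
    k = np.sqrt((Etot**2 - (m1 + m2)**2) * (Etot**2 - (m1 - m2)**2)) / (2.0 * Etot)
    # boost particle 1 back to the lab frame (phi = 0)
    E1cm = np.sqrt(m1**2 + k**2)
    pz   = gamma * (k * np.cos(theta_cm) + beta * E1cm)
    px   = k * np.sin(theta_cm)
    return np.hypot(px, pz), np.degrees(np.arctan2(px, pz))

# purely illustrative masses and beam energy; theta_cm in radians
print(two_body_lab(mA=1875.6, mB=15000.0, m1=938.3, m2=15937.0,
                   TA=100.0, theta_cm=np.radians(30.0)))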

Here is an example,

[Figure: example of the outgoing momenta versus lab scattering angle]

 

Hartree method for 1D infinite potential well


In order to test my understanding of the Hartree method, I played with a simple case. The potential is

V(x) = \left\{ \begin{matrix} 0 , 0 \leq x \leq \pi  \\  \infty , else \end{matrix} \right\}

The mutual interaction between the two particles is taken, for simplicity of calculation, to be

G(x_1, x_2) = g_0 \cos(x_1 - x_2)

This mutual interaction is repulsive, so we can expect the eigen wave function to be more spread out.

We use the eigenstates of the potential V(x) as the basis,

\displaystyle \phi_n(x) = \sqrt{\frac{2}{\pi}} \sin(nx)

The Hartree method is basically solving the Hartree equation

\displaystyle \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx_1^2} + V(x_1) + \int G(x_1, x_2) (\psi_k(x_2))^2 dx_2 \right) \psi_n(x_1) = \epsilon_n \psi_n(x_1)

We assume the two particles are in the same ground state, thus k = n = 1. We expand

\psi_1(x) = \sum a_i \phi_i(x)


The algorithm is as follows. The 1-particle matrix is

\displaystyle F_{ij} = \int  \psi_i(x) \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x)  \right) \psi_j(x) dx

Define

\displaystyle G_0(x) = \int G(x, x_2) (\psi_1(x_2))^2 dx_2

The 2-particle (mean-field) matrix is

\displaystyle G_{ij} = \int \psi_i(x) G_0(x) \psi_j(x) dx

For the 1st trial, we set

\psi_i^{(0)}(x) = \phi_{(2i+1)}(x)

We only use "even" functions with respect to x = \pi/2, because mixing parity would give an unreasonable result. (Why?)

Then we build the matrix of the Hartree Hamiltonian, H_{ij} = F_{ij} + G_{ij}, and solve for the eigenvalues and eigenvectors,

\vec{v}^{(0)} = (v_1^{(0)}, v_2^{(0)}, .... v_n^{(0)})

The next trial wave function (for the occupied state) is

\displaystyle \psi_1^{(1)}(x) = \sum_i v_i^{(0)} \psi_i^{(0)}(x)

The process is repeated until convergence. Convergence can be checked using the Hartree Hamiltonian: the off-diagonal terms vanish, i.e.

F_{ij} + G_{ij} = 0, \, i\neq j
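
Below is a minimal numerical sketch of this iteration (my own illustration, not the original code). It assumes \hbar^2/2m = 1 as in this post; the interaction strength g_0 is not quoted here, so g_0 = 1 is an assumption. For simplicity the matrices are built in the fixed sin(nx) basis (odd n only) rather than in the rotated trial basis; the self-consistent solution is the same.

import numpy as np

g0     = 1.0                     # assumed interaction strength (not given in the post)
nbasis = 6                       # odd-n basis states: sin(x), sin(3x), ..., sin(11x)
x      = np.linspace(0.0, np.pi, 2001)
ns     = np.arange(1, 2 * nbasis, 2)
phi    = np.sqrt(2.0 / np.pi) * np.sin(np.outer(ns, x))   # phi_n(x), orthonormal on [0, pi]

a = np.zeros(nbasis)
a[0] = 1.0                       # first trial: the pure ground state of the well

for it in range(100):
    psi = a @ phi                                       # current trial psi_1(x)
    # mean field G0(x) = g0 * integral of cos(x - x2) psi(x2)^2 dx2
    c   = np.trapz(np.cos(x) * psi**2, x)
    s   = np.trapz(np.sin(x) * psi**2, x)
    G0x = g0 * (np.cos(x) * c + np.sin(x) * s)
    # Hartree Hamiltonian: kinetic term n^2 (diagonal) + mean-field matrix G_ij
    H = np.diag(ns.astype(float)**2)
    for i in range(nbasis):
        for j in range(nbasis):
            H[i, j] += np.trapz(phi[i] * G0x * phi[j], x)
    eps, vec = np.linalg.eigh(H)
    a_new = vec[:, 0]
    if a_new[0] < 0:
        a_new = -a_new                                  # fix the arbitrary overall sign
    if abs(abs(a_new @ a) - 1.0) < 1e-12:               # converged: eigenvector unchanged
        a = a_new
        break
    a = a_new

print("Hartree ground-state eigenvalue:", eps[0])
print("coefficients of sin(nx):", a * np.sqrt(2.0 / np.pi))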


I tried with 2 and 6 basis states, setting \hbar^2/2m = 1 for simplicity; the Hartree ground-state energy is 1.7125 for both. The eigen wave function for the 2-state basis is

\psi(x) = 0.797752 \sin(x) + 0.0145564 \sin(3x)

with the mean field

V_m(x) = 0.842569 \sin(x)

For the 6-state basis, the wave function is

\psi(x) = 0.797751 \sin(x) + 0.0145729 \sin(3 x) + 0.000789143 \sin(5 x) + 0.000125871 \sin(7 x) + 0.0000336832 \sin(9 x) + 0.0000119719 \sin(11 x)

We can see that the contributions of the higher-energy states get smaller and smaller; together they contribute only about 0.03%. The following plot compares \phi_1(x) (orange) and \psi(x) (blue). We can see that the wave function becomes a little more spread out.

[Figure: comparison of \phi_1(x) (orange) and \psi(x) (blue)]

If we increase the strength of the mutual interaction by 10 times, the wave function becomes more spread out.

[Figure: \psi(x) with the mutual interaction 10 times stronger]

To cross-check the result, we can take the mean field and solve the Schrödinger equation using the variational method. The method gives the ground-state energy as 1.7125 and converges.

[Figure: variational calculation with the mean field]

It is interesting that the variational method needs 4 or 5 basis states to converge, while the Hartree method only needs 2.

At last, we substitute the wave function back into the two-particle Schrödinger equation; it turns out the equation is not satisfied exactly. The total wave function is

\Psi(x_1, x_2) = \psi(x_1) \psi(x_2)

The Schrodinger equation is

\displaystyle  H \Psi = -\frac{\hbar^2}{2m}\left(\frac{d^2}{dx_1^2} + \frac{d^2}{dx_2^2}\right) \Psi(x_1,x_2) + G(x_1,x_2)\Psi(x_1,x_2) = E \Psi(x_1,x_2)

However, the left side and the right side are not equal point by point; only the integrated relation

\langle \Psi |H|\Psi\rangle  = E \langle \Psi|\Psi \rangle

holds.

I plot \Psi H\Psi (orange) and E \Psi^2 (blue); their difference is shown below.

[Figure: \Psi H \Psi (orange), E \Psi^2 (blue), and their difference]

We can imagine that the integral of the bottom plot is zero. I don't know the exact reason, but I guess it is not because of the number of basis states, since the higher-energy states contribute only 0.03%. I guess it is because the energy is the quantity being optimized, rather than the wave function. And I am not sure that the reverse of the following is true:

H|\Psi\rangle = E |\Psi\rangle  \rightarrow \langle \Psi|H|\Psi \rangle = E \langle \Psi|\Psi \rangle


For the case of many particles, G_0 has to be summed over all the other particles. When all particles are in the same state, the sum is simple. For particles distributed among different states, the sum is, in general,

\displaystyle G_0 =\sum_{k\neq i} \int G(x_1,x_2) \psi_k(x_2)^2 dx_2

Here, the sum excludes the particle at x_1, which is in state i. For example, suppose the system consists of 2 particles in state 1 and 2 particles in state 2. The mean field for a particle in state 1 is constructed from 1 particle in state 1 and 2 particles in state 2; using this mean field for state 2 would not give the correct energy.

For a particle in state 2, the mean field should instead be constructed from 2 particles in state 1 and 1 particle in state 2.

For higher, unoccupied states, the mean field is built from 2 particles in state 1 and 2 particles in state 2.

In general, the mean field seen by a particle is constructed from all the other particles.

This fixed mean-field method is simple, but we have to define the mean field for each state and repeat the calculation again and again.

Algorithm of Wavelet Transform (with Qt class)


There are many kinds of wavelet transforms, and I think the names are quite confusing.

For instance, there are continuous and discrete wavelet transforms, in which "continuous" and "discrete" refer to the wavelet parameters, not to the data itself. Therefore, for discrete data there are both "continuous" and "discrete" wavelet transforms, and for functions there are also "continuous" and "discrete" wavelet transforms.

Here we will focus on the discrete wavelet transform for functions first. This discrete wavelet transform is also called the wavelet series, which expresses a compactly supported function as a series of wavelets.

For simplicity, we also focus on orthonormal wavelets.

As the wavelets span the entire space, any compactly supported function can be expressed as

\displaystyle f(t) = \sum_{j,k} \left<f(t)|\psi_{j,k}(t)\right> \psi_{j,k}(t)

\psi_{j,k}(t) = 2^{j/2} \psi(2^j t - k)

where j, k are integers.


Now we move to the discrete wavelet transform of discrete data. Since the data is discrete, we can imagine that only the points t_n = t_0 + n \Delta are known, with finite n .

\displaystyle f_n = f(t_n) = \sum_{j,k} \left<f_n|\psi_{j,k}(t_n) \right> \psi_{j,k}(t_n)

the integration becomes a finite sum.

Without loss of generality, we can set t_0 = 0, \Delta = 1, and the time axis becomes an integer axis. We find that j \leq 0, as the wavelet can only be expanded, not shrunk. Because there is a finite number of data points, i.e. n < \infty, we have -\log_2(n) < j \leq 0 .

However, this double summation for each f_n is very time consuming. There is a fast discrete wavelet transform. Before we continue, we must study the wavelet further.


From the last post, we know that the scaling function that generates an MRA must satisfy:

\displaystyle \phi(t) = \sum_{k} g_0(k) \phi(2t-k)

\left<\phi(t-k) | \phi(t-k') \right> = \delta_{kk'}

where the k are integers. The set of shifted scaling functions spans a space V_0 . For the wavelet,

\displaystyle \psi(t) = \sum_{k} g_1(k) \phi(2t-k)

\left<\psi(t-k) | \psi(t-k') \right> = \delta_{kk'}

The set of shifted wavelets spans a space W_0 with W_0 \perp V_0, so that

\left<\phi(t-k)|\psi(t-k') \right> = 0

Since the wavelet is generated from the scaling function, we expect the coefficients g_0(k) and g_1(k) to be related. In fact, the relationship for an orthonormal scaling function and wavelet is

g_1(k) = (-1)^k g_0(1-k)


Discrete data x_i can be decomposed in the MRA spaces. We start from the largest space V_0 , where the basis functions are most shrunken:

\displaystyle x_i = \sum_{k} v_{0,k} \phi(i-k)

To decompose this into the V_{-1} and W_{-1} spaces, we use the nested property of the MRA spaces: \phi(2t-l) can be decomposed into \phi(t-k) and \psi(t-k) ,

\displaystyle \phi(2t-l) = \sum_{k} h_0(2k-l) \phi(t-k) + h_1(2k-l) \psi(t-k)

where (given that \phi(t) and \psi(t) are orthonormal),

h_0(2k-l) = \left< \phi(2t-l) | \phi(t-k) \right>

h_1(2k-l) = \left< \phi(2t-l) | \psi(t-k) \right>

Therefore, using the coefficients h_0 and h_1, the coefficients v_{s,l} at level s can be decomposed as

\displaystyle v_{s-1,k} = \sum_{l} h_0(2k-l) v_{s,l}  

\displaystyle w_{s-1,k} = \sum_{l} h_1(2k-l) v_{s,l}

In graphical representation:

[Diagram: decomposition of the coefficients v_s into v_{s-1} and w_{s-1}]

This is a fast discrete wavelet transform.


Due to the nested spaces of the MRA, we also expect the coefficients h_0 and h_1 to be related to g_0 . For an orthonormal wavelet,

\displaystyle h_0(k) = \frac{1}{2} g_0(-k)

\displaystyle h_1(k) = \frac{1}{2} (-1)^{k} g_0 (k+1)

Since g_0 has finitely many non-zero coefficients, so do g_1, h_0, h_1. That greatly reduces the computational cost of the discrete wavelet transform.


To reconstruct the discrete data x_i, we don’t need to use

\displaystyle v_{s+1,l} = \sum_{k} v_{s,k} \phi(l - k) + w_{s,k} \psi(l-k)

Instead, using the two-scale relations \phi(t) = \sum_{k} g_0(k) \phi(2t-k) and \psi(t) = \sum_{k} g_1(k) \phi(2t-k) ,

\displaystyle v_{s+1,l} = \sum_{k} g_0(l-2k) v_{s,k} + g_1(l-2k) w_{s,k}

in graphical representation,

[Diagram: reconstruction of v_{s+1} from v_s and w_s]
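
As a concrete example, here is a minimal sketch of one level of this fast transform, taking the Haar scaling filter g_0(0) = g_0(1) = 1 as an assumed choice of wavelet. It applies the decomposition sums above and then the reconstruction sum, recovering the input exactly.

import numpy as np

g0 = {0: 1.0, 1: 1.0}                                   # Haar: phi(t) = phi(2t) + phi(2t-1)
g1 = {k: (-1)**k * g0.get(1 - k, 0.0) for k in g0}      # g1(k) = (-1)^k g0(1-k)
h0 = {-k: 0.5 * g0[k] for k in g0}                      # h0(k) = g0(-k)/2
h1 = {k - 1: 0.5 * (-1)**(k - 1) * g0[k] for k in g0}   # h1(k) = (-1)^k g0(k+1)/2

def decompose(v):
    # v_{s-1,k} = sum_l h0(2k-l) v_{s,l} ;  w_{s-1,k} = sum_l h1(2k-l) v_{s,l}
    n = len(v) // 2
    vd, wd = np.zeros(n), np.zeros(n)
    for k in range(n):
        for l in range(len(v)):
            vd[k] += h0.get(2 * k - l, 0.0) * v[l]
            wd[k] += h1.get(2 * k - l, 0.0) * v[l]
    return vd, wd

def reconstruct(vd, wd):
    # v_{s+1,l} = sum_k g0(l-2k) v_{s,k} + g1(l-2k) w_{s,k}
    v = np.zeros(2 * len(vd))
    for l in range(len(v)):
        for k in range(len(vd)):
            v[l] += g0.get(l - 2 * k, 0.0) * vd[k] + g1.get(l - 2 * k, 0.0) * wd[k]
    return v

data = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0])
vd, wd = decompose(data)
print(vd)                     # pairwise averages
print(wd)                     # pairwise differences (details)
print(reconstruct(vd, wd))    # the original data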


I attached the wavelet transform class for Qt; feel free to modify it.

https://drive.google.com/file/d/0BycN9tiDv9kmMVRfcVFMbDFWS0k/view?usp=sharing

https://drive.google.com/file/d/0BycN9tiDv9kmUmlmNk1kaVJCbEU/view?usp=sharing

In the code, the data is not transformed into the MRA space; the code treats the data as if it were already in the MRA space. Some people say this is a "crime", but for the sake of speed there is no need to map the original discrete data into the MRA space. I agree, though, that for a continuous function we must map it into the MRA space.

 

 

Wavelet Analysis or MRA


Although the Fourier transform is a very powerful tool for data analysis, it has some limitations due to the lack of time information. From a physics point of view, any time series lives in a time-frequency space. Since the Fourier transform has very narrow frequency resolution, by the uncertainty principle the time resolution is very large; therefore, no time information is given by the Fourier transform.

Usually such a limitation is not a problem. However, when analysing music, the long-term performance of a device, or a seismic survey, the time information is crucial.

To overcome this difficulty, the short-time Fourier transform (STFT) was developed. The idea is to apply a time window (a piecewise uniform function, or a Gaussian) to the data first, and then Fourier transform. By applying the time window at different times of the data (i.e. shifting the window), we get the time information. However, since the frequency range of the time window always covers the low frequencies, the high-frequency signal is hard to extract.

To improve on the STFT, the time window can be scaled (usually by a factor of 2). When the time window is shrunk by a factor of 2, the frequency range is expanded by a factor of 2. If we subtract the frequency range of the time window from that of the shrunken time window, the high-frequency range is isolated.

To be more concrete, let the time-window function be

\phi_{[0,1)}(t) = 1 , 0 \leq t < 1

its FT is

\hat{\phi}(\omega) = sinc(\pi \omega)

Let's also define a dilation operator

Df(t) = \sqrt{2} f(2t)

the factor \sqrt{2} is for normalization.

The FT of D\phi(t) has a wider frequency range, like the following graph.

[Figure: Fourier transforms of the time window and its dilation]

We can subtract the orange and blue curves to get the green curve. Then we transform the green curve back to get the high-frequency time window.

We can repeat the dilation, or anti-dilation, infinitely many times. Because of this, we can drop the FT basis \exp(-2\pi i t \omega), only use the low-pass time window to see the low-frequency behaviour of the data, and use the high-pass time window to see the high-frequency behaviour of the data. Now we have stepped into multi-resolution analysis (MRA).

In MRA, the low-pass time window is called the scaling function \phi(t) , and the high-pass time window is called the wavelet \psi(t).

Since the scaling function is created by dilation, it has the property

\phi(t) = \sum_{k} g_{0}(k) \phi(2t-k)

where k is an integer. This means the vector space spanned by \{\phi(t-k)\}_{k}=V_0 is a subspace of the dilated space DV_0 =V_1. The dilation can go on forever, so that the whole frequency domain is covered by V_{\infty}.
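
For example, the box time window defined earlier is itself a scaling function (the Haar case): with g_0(0) = g_0(1) = 1,

\phi_{[0,1)}(t) = \phi_{[0,1)}(2t) + \phi_{[0,1)}(2t-1)

i.e. the unit box is just the sum of its two half-width copies.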

The space spanned by the wavelets, \{\psi(t-k)\}_k = W_0, is also a subspace of V_1. Thus, we can picture the structure of the MRA as:

[Diagram: nested structure of the MRA spaces V_j and W_j]

Therefore, any function f(t) can also be expanded in the wavelet spaces, i.e.

f(t) = \sum_{j,k} w_{j,k} 2^{j/2}\psi(2^j t - k)

where j, k are integers.

I know this introduction is very rough, but it gives a relatively smooth transition from the FT to the WT (wavelet transform), compared to the material available on the web.


Phase shift of elastic scattering


I found that the derivations in most Google results are not clear enough, so here is my derivation. Before proceeding, readers may want to review the previous post.

Most people start from,

u_l = A( \hat{h}_l^- - S_l \hat{h}_l^+ )

That annoys me because it is somehow not "natural". Why is there a minus sign? Why is \hat{h}_l^- the first term? For me, a more natural way is

u_l = a \hat{h}_l^+ + b \hat{h}_l^-

where a, b are complex numbers. But that is still not so natural, because in a numerical calculation, for simplicity, there are no complex numbers; we only have

u_l = \alpha \hat{j}_l + \beta \hat{n}_l

The first term is always there, as it is the free solution and it is regular at r = 0; the second term is caused by the potential.


The goal is to find a solution of the form

\displaystyle \psi = A \left( e^{i \vec{k} \cdot \vec{r}} + f(\theta) \frac{e^{ikr}}{r} \right)

where the first term is the free wave and the second term is the scattered wave. The solution for elastic scattering is

\displaystyle \psi = \sum C_l P_l (\cos\theta) \frac{u_l}{kr} \rightarrow \sum C_l P_l(\cos\theta) \frac{\alpha \hat{j}_l + \beta \hat{n}_l}{kr}

we used the substitution,

\displaystyle R_l(r) = \frac{u_l(\rho)}{\rho},  \rho = kr.

The radial function can be solved using the Runge-Kutta method on the equation

\displaystyle \frac{d^2}{d\rho^2} u_l = \frac{2 m_\mu}{\hbar^2 k^2} (V-E) u_l + \frac{l(l+1)}{\rho^2} u_l

and the solution for u_l far away is

u_l \rightarrow  \alpha \hat{j}_l + \beta \hat{n}_l.

The arrow means r \rightarrow \infty. So the problem is how to rewrite the solution; along the way, we will see how the phase shift and the S-matrix appear.


The free solution is the spherical wave,

\displaystyle e^{i \vec{k} \cdot \vec{r}} = \sum_l (2l+1) i^l P_l(\cos\theta) j_l(kr)

The spherical Bessel function j_l(kr) can be expressed in terms of the Hankel functions; for the Riccati functions,

\hat{h}_l^{\pm} = \hat{n}_l \pm i \hat{j}_l \rightarrow e^{\pm i (kr - l \frac{\pi}{2})}

The + sign is the outgoing wave.


\displaystyle u_l \rightarrow (\alpha \hat{j}_l + \beta \hat{n}_l)

\displaystyle = \frac{\alpha}{2i} (\hat{h}_l^{+} - \hat{h}_l^{-}) + \frac{\beta}{2}(\hat{h}_l^{+} + \hat{h}_l^{-})

\displaystyle = \frac{\alpha - i \beta}{2i} (\hat{h}_l^{+} - \hat{h}_l^{-}) + \beta \hat{h}_l^{+}

\displaystyle = (\alpha - i \beta ) \left( \frac{\hat{h}_l^+ - \hat{h}_l^-}{2i} + \frac{\beta}{\alpha - i \beta} \hat{h}_l^+\right)

\displaystyle = (\alpha - i \beta ) \left( \hat{j}_l + \frac{\beta}{\alpha - i \beta} \hat{h}_l^+\right)

Since u_l should be normalized, we can set \alpha = \cos \delta and \beta = \sin\delta.

\displaystyle \frac{\beta}{\alpha - i \beta } = \sin(\delta) e^{i\delta}

We put u_l back

\displaystyle \psi \rightarrow \sum_l C_l P_l (\cos\theta)(\alpha - i \beta ) \left( \hat{j}_l + \sin(\delta) e^{i\delta} \hat{h}_l^+\right) \frac{1}{kr}

By setting

\displaystyle C_l = A i^l \frac{2l+1}{\alpha - i \beta} ,

we have the first term giving the free wave function, since \hat{j}_l(kr)/(kr) = j_l(kr). In the second term, \hat{h}_l^+ \rightarrow e^{i(kr - l \frac{\pi}{2})}, so \hat{h}_l^+/(kr) \rightarrow e^{i(kr - l \frac{\pi}{2})}/(kr). Notice that

e^{-i l \frac{\pi}{2}} = i^{-l}

That cancels the i^l term in C_l , and we have

 \displaystyle f(\theta) = \sum (2l+1) P_l (\cos\theta) \frac{\sin(\delta) e^{i\delta}}{k}
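
As a quick numerical illustration of this result, here is a minimal sketch that evaluates f(\theta) for a small set of phase shifts (the values \delta_0, \delta_1, \delta_2 and k below are purely illustrative):

import numpy as np
from scipy.special import eval_legendre

def f_theta(theta, deltas, k):
    # f(theta) = sum_l (2l+1) P_l(cos theta) sin(delta_l) exp(i delta_l) / k
    f = np.zeros_like(theta, dtype=complex)
    for l, d in enumerate(deltas):
        f += (2 * l + 1) * eval_legendre(l, np.cos(theta)) * np.sin(d) * np.exp(1j * d) / k
    return f

theta = np.radians(np.linspace(1.0, 179.0, 90))
f = f_theta(theta, deltas=[0.5, 0.2, 0.05], k=1.0)      # illustrative delta_l, k in fm^-1
print(np.abs(f)**2)                                     # differential cross section |f|^2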


Some people write u_l in terms of \hat{h}_l^{\pm} and the S-matrix,

\displaystyle u_l = \frac{\alpha + i \beta} {2i} \hat{h}_l^+ - \frac{\alpha - i \beta}{2i} \hat{h}_l^-

\displaystyle = -\frac{\alpha - i \beta}{2i} \left( \hat{h}_l^- - \frac{\alpha + i \beta}{\alpha - i \beta} \hat{h}_l^+ \right)

\displaystyle = A' (\hat{h}_l^- - S_l \hat{h}_l^+)

where

\displaystyle S_l =\frac{\alpha + i \beta}{\alpha - i \beta} = e^{2i\delta} .

Remember that this is the S-matrix for elastic scattering.
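
As a small numerical illustration (my own sketch, not from the post): once u_l is integrated numerically out to a region where the potential vanishes, \alpha and \beta can be obtained by matching u_l = \alpha \hat{j}_l + \beta \hat{n}_l at two exterior radii, and \delta and S_l then follow from the formulas above. Here \hat{j}_l(kr) = kr j_l(kr) and \hat{n}_l(kr) = -kr y_l(kr), so that \hat{n}_l \rightarrow \cos(kr - l\pi/2).

import numpy as np
from scipy.special import spherical_jn, spherical_yn

def jhat(l, x): return x * spherical_jn(l, x)            # Riccati-Bessel (regular)
def nhat(l, x): return -x * spherical_yn(l, x)           # Riccati-Neumann (irregular)

def phase_shift(l, k, r1, u1, r2, u2):
    # solve u(r_i) = alpha jhat_l(k r_i) + beta nhat_l(k r_i) for alpha, beta
    A = np.array([[jhat(l, k * r1), nhat(l, k * r1)],
                  [jhat(l, k * r2), nhat(l, k * r2)]])
    alpha, beta = np.linalg.solve(A, np.array([u1, u2]))
    S = (alpha + 1j * beta) / (alpha - 1j * beta)        # S_l = e^{2 i delta}; the overall
    return 0.5 * np.angle(S), S                          # normalization of u cancels here

# sanity check with a free wave, u_l = jhat_l: the phase shift should be 0
k, l, r1, r2 = 1.0, 0, 10.0, 12.0
print(phase_shift(l, k, r1, jhat(l, k * r1), r2, jhat(l, k * r2)))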

 

Solving radial SE numerically


The time-independent Schrödinger equation is

(-\frac{\hbar^2}{2m}\nabla^2 + V ) \Psi = E \Psi

Using the Laplacian in spherical coordinates and setting \Psi = R Y,

\nabla^2 R Y - \frac{2m}{\hbar^2}(V-E) R Y = 0

\nabla^2 = \frac{1}{r^2}\frac{d}{dr}(r^2 \frac{d}{dr}) - \frac{1}{r^2} L^2

The angular part,

L^2 Y = l(l+1) Y

The radial part,

\frac{d}{dr}(r^2\frac{dR}{dr}) - l(l+1)R - \frac{2mr^2}{\hbar^2}(V-E) R = 0

To simplify the first term,

R = \frac{u}{r}

\frac{d}{dr}(r^2 \frac{dR}{dr})= r \frac{d^2u}{dr^2}

A simpler form of the radial equation is

\frac{d^2u}{dr^2} - \frac{l(l+1)}{r^2} u - \frac{2m}{\hbar^2} (V-E) u = 0

The effective potential U is

U = V + \frac{\hbar^2}{2m} \frac{l(l+1)}{r^2}

\frac{d^2u}{dr^2} + \frac{2m}{\hbar^2} (E - U) u = 0

We can use the Runge-Kutta method to solve the equation numerically.

[Figure: the 4th-order Runge-Kutta scheme]

The initial condition of u has to be u(0) = 0 (homework).

I used Excel to calculate a scattering state with L = 0 at an energy of 30 MeV. The potential is a Woods-Saxon with depth 50 MeV, radius 3.5 fm, and diffuseness 0.8 fm.
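
Here is a minimal sketch of that calculation in code (not the original Excel sheet). The Woods-Saxon parameters are the ones quoted above; the mass is an assumption, taken as a single nucleon with m c^2 = 939 MeV and \hbar c = 197.327 MeV fm.

import numpy as np

mc2, hbarc = 939.0, 197.327        # assumed mass (MeV) and hbar c (MeV fm)
V0, R0, a0 = -50.0, 3.5, 0.8       # Woods-Saxon depth, radius, diffuseness from the post
E, l       = 30.0, 0               # scattering state, L = 0

def U(r):                          # effective potential: Woods-Saxon + centrifugal term
    ws = V0 / (1.0 + np.exp((r - R0) / a0))
    cf = hbarc**2 / (2.0 * mc2) * l * (l + 1) / r**2 if l > 0 else 0.0
    return ws + cf

def f(r, y):                       # y = (u, u');  u'' = -2m/hbar^2 (E - U) u
    u, up = y
    return np.array([up, -2.0 * mc2 / hbarc**2 * (E - U(r)) * u])

h, r = 0.01, 1e-6
y = np.array([0.0, 1.0])           # u(0) = 0; the initial slope only fixes the norm
while r < 20.0:                    # classic 4th-order Runge-Kutta
    k1 = f(r, y)
    k2 = f(r + h / 2, y + h / 2 * k1)
    k3 = f(r + h / 2, y + h / 2 * k2)
    k4 = f(r + h, y + h * k3)
    y, r = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), r + h

print(y[0])                        # u(20 fm); for a scattering state u keeps oscillating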

[Figure: scattering-state radial wave function for L = 0 at 30 MeV]

Another example is a bound state with L = 0. I have to search for the energy so that the wave function stays flat (does not blow up) at large distance. The outermost eigen energy is -7.27 MeV. From the radial function, we know it is a 2s orbit.

[Figure: bound-state radial wave function of the 2s orbit at -7.27 MeV]
