Review on rotation


The rotation of a vector in a vector space can be done either by rotating the basis vectors or by rotating the coordinates of the vector. Here, we always use a fixed basis and rotate the coordinates.

For a rigid body, its rotation can be accomplished using Euler rotations or a rotation around a single axis.

Whenever a transform preserves the norm of a vector, it is a unitary transform. Rotation preserves the norm, so it is a unitary transform and can be represented by a unitary matrix. For a unitary matrix, the eigenstates form a convenient basis for the vector space.

We will start from 2-D space. Within 2-D space, we discuss the rotation of vectors first and then of functions. Vector-valued functions are not discussed explicitly, but they are touched on when discussing functions. Throughout, the eigenstate is a key concept, as it provides a convenient basis. We skip the detailed discussion of 3-D space, since the connection between 2-D and 3-D space was already discussed in a previous post. At the end, we talk about direct product spaces.


In 2-D space, a 2-D vector is rotated by a transform R, and the representation matrix of R has eigenvalues

\exp(\pm i \omega)

and eigenvectors

\displaystyle \hat{e}_\pm = \mp \frac{ \hat{e}_x \pm i \hat{e}_y}{\sqrt{2}}

If every vector is expanded as a linear combination of the eigenvectors, then the rotation can be done by simply multiplying each component by the corresponding eigenvalue.
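
As a quick numerical sketch, we can verify this with numpy. (Which of \hat{e}_\pm carries e^{+i\omega} and which carries e^{-i\omega} depends on the sign convention of the rotation; for the counter-clockwise matrix below, \hat{e}_+ carries e^{-i\omega} .)

```python
import numpy as np

w = 0.7                                     # rotation angle omega
R = np.array([[np.cos(w), -np.sin(w)],
              [np.sin(w),  np.cos(w)]])     # counter-clockwise rotation

# eigenvectors  e_pm = -+ (e_x +- i e_y) / sqrt(2)
e_plus  = np.array([-1.0, -1.0j]) / np.sqrt(2)
e_minus = np.array([ 1.0, -1.0j]) / np.sqrt(2)

# for this R, e_plus carries exp(-i w) and e_minus carries exp(+i w)
print(np.allclose(R @ e_plus,  np.exp(-1j * w) * e_plus))    # True
print(np.allclose(R @ e_minus, np.exp(+1j * w) * e_minus))   # True

# rotating a vector = expand in the eigenbasis, multiply each component by its phase
v = np.array([2.0, 1.0])
c_plus, c_minus = np.vdot(e_plus, v), np.vdot(e_minus, v)
v_rot = np.exp(-1j * w) * c_plus * e_plus + np.exp(+1j * w) * c_minus * e_minus
print(np.allclose(v_rot, R @ v))                             # True
```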

Now, for a 2-D function, the rotation is done by a change of coordinates. The function space is itself a vector space, in which

  1. a f_1 + b f_2 is still in the space,
  2. the unit (zero function) and the additive inverse exist,
  3. the norm can be defined on a suitable domain by \int |f(x,y)|^2 dxdy

For example, for the two functions \phi_1(x,y) = x, \phi_2(x,y) = y , the rotation can be done by the rotation matrix,

\displaystyle R = \begin{pmatrix} \cos(\omega) & -\sin(\omega) \\ \sin(\omega) & \cos(\omega) \end{pmatrix}

The products x^2, y^2, xy also form a basis, and the rotation on this new basis is induced from the original rotation,

\displaystyle R_2 = \begin{pmatrix} c^2 & s^2 & -2cs \\ s^2 & c^2 & 2cs \\ cs & -cs & c^2 - s^2 \end{pmatrix}

where c = \cos(\omega), s = \sin(\omega) . The space is “3-dimensional” because xy = yx; otherwise, it would be “4-dimensional”.
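
A short sympy check that substituting the rotated coordinates into x^2, y^2, xy reproduces the matrix R_2 above:

```python
import sympy as sp

x, y, u, v, w = sp.symbols('x y u v omega', real=True)
c, s = sp.cos(w), sp.sin(w)

def rotate(expr):
    """Substitute x -> c*x - s*y, y -> s*x + c*y (via temporary symbols u, v)."""
    return sp.expand(expr.subs([(x, c*u - s*v), (y, s*u + c*v)]).subs([(u, x), (v, y)]))

basis = [x**2, y**2, x*y]
R2 = sp.Matrix([[sp.Poly(rotate(b), x, y).coeff_monomial(m) for m in basis]
                for b in basis])

R2_expected = sp.Matrix([[c**2, s**2, -2*c*s],
                         [s**2, c**2,  2*c*s],
                         [c*s, -c*s,  c**2 - s**2]])
print(sp.simplify(R2 - R2_expected))    # zero matrix
```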

The 2-D function can also be expressed in polar coordinates, f(r, \theta) , and further decomposed into g(r) h(\theta) .


How can we find the eigenfunctions for the angular part?

One way is to use an operator that commutes with the rotation, so that the eigenfunctions of the operator are also eigenfunctions of the rotation. An example is the Laplacian.

The eigenfunctions of the angular part of the 2-D Laplacian are the Fourier modes \exp(\pm i n \theta) .

Therefore, if we can expand the function in terms of r^n (\exp(i n \theta)  , \exp(-i n \theta)) , the rotation of the function is simply a multiplication by the (diagonal) rotation matrix.

The eigenfunctions are

\displaystyle \phi_{nm}(\theta) = e^{i m \theta}, \quad m = \pm n

The D-matrix for a rotation by \omega (D for Darstellung, representation in German) is

D^n_{mm'}(\omega) = \delta_{mm'} e^{i m \omega}

The Kronecker delta in m, m' indicates that a rotation does not mix different m components. The transformation of the eigenfunctions is

\displaystyle \phi_{nm}(\theta') = \sum_{m'} \phi_{nm'}(\theta) D^n_{m'm}(\omega)

for example,

f(x,y) = x^2 + k y^2

written in polar coordinates,

\displaystyle f(r, \theta) = r^2 (\cos^2(\theta) + k \sin^2(\theta)) = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta)

where a_{00} = 2 + 2k, a_{2,\pm 2} = 1-k, and all other a_{nm} = 0.

The rotation is

\displaystyle f(r, \theta' = \theta + \omega ) = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta) D^n_{mm}(\omega)  = \frac{r^2}{4} \sum_{nm} a_{nm} \phi_{nm}(\theta + \omega)

If we write the rotated function in Cartesian form,

f(x',y') = x'^2 + k y'^2 = (c^2 + k s^2)x^2 + (s^2 + k c^2)y^2 + 2(k-1) c s x y

where c = \cos(\omega), s = \sin(\omega) .
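
As a quick sympy check (numerical, at arbitrarily chosen values of \theta, \omega, k ), the two routes agree: rotating through the Fourier components a_{nm} , each picking up e^{im\omega} , gives the same function as the Cartesian substitution above.

```python
import sympy as sp

r, th, w, k = sp.symbols('r theta omega k', real=True)
c, s = sp.cos(w), sp.sin(w)

# Fourier route: f(r, theta + omega) = (r^2/4) * sum_m a_m e^{i m (theta + omega)}
a = {0: 2 + 2*k, 2: 1 - k, -2: 1 - k}
f_fourier = sp.Rational(1, 4) * r**2 * sum(
    am * sp.exp(sp.I * m * (th + w)) for m, am in a.items())

# Cartesian route: (c^2 + k s^2) x^2 + (s^2 + k c^2) y^2 + 2(k-1) c s x y
X, Y = r * sp.cos(th), r * sp.sin(th)
f_cartesian = (c**2 + k*s**2)*X**2 + (s**2 + k*c**2)*Y**2 + 2*(k - 1)*c*s*X*Y

diff = (f_fourier - f_cartesian).subs({r: 1, th: sp.Rational(1, 3),
                                       w: sp.Rational(2, 5), k: sp.Rational(7, 4)})
print(abs(sp.N(diff)) < 1e-12)    # True
```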


In 3-D space, the same logic is still applicable.

The spherical harmonics Y_{lm} serve as a basis: they are eigenfunctions of L^2 with eigenvalue l(l+1), and the eigenspaces for different l are orthogonal. This is an extension of the 2-D eigenfunctions \exp(\pm i n \theta) .

A 3-D function can be expanded in spherical harmonics, and the rotation is simply a multiplication by the Wigner D-matrix.


Above, we showed an example of a higher-order rotation induced on a product of the space with itself. I call it the induced space (I am not sure whether that is the correct name), because the space is the same, but the order is higher.

For a two-particle system, the direct product space is formed by products of the basis states from two distinct spaces (which could be identical).


Some common direct product spaces are

  • combining two spins
  • combining two orbital angular momenta
  • a two-particle system

Whether it is an induced space or a direct product space, their structures are very similar. In 3-D rotations, the two spaces and the direct product space are related by the Clebsch-Gordan coefficients, while in 2-D rotations, as we can see from the above discussion, the coefficient is simply 1.

Let's use 2-D space to show the “induced product” space. For order n = 1, the primary basis contains only x, y.

For n = 2, the space has x^2, y^2, xy, but the linear combination x^2 + y^2 is unchanged after rotation. Thus, the size of the space reduces to 3 - 1 = 2.

For n = 3, the space has x^3, y^3, x^2y, xy^2 ; this time, the linear combination x^3 + xy^2 = x(x^2+y^2) behaves like x and y^3 + x^2y = y(x^2+y^2) behaves like y, thus the size of the space reduces to 4 - 2 = 2.

For higher order, the total number of combinations x^ay^b, a+b = n is C^{n+1}_1 = n+1 , and we can find n-1 repeated combinations, thus the size of the irreducible space of order n is always 2.

For 3-D space, the number of combinations x^ay^bz^c, a + b + c = n is C^{n+2}_2 = (n+1)(n+2)/2 . We can find n(n-1)/2 repeated combinations, thus the size of the irreducible space of order n is always 2n+1.
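
A small sympy check of this counting in 3-D: the homogeneous polynomials of degree n annihilated by the Laplacian (the “irreducible” ones) indeed form a space of dimension 2n+1 .

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def harmonic_dim_3d(n):
    """Dimension of the degree-n homogeneous polynomials in (x, y, z) killed by the Laplacian."""
    monos = [x**a * y**b * z**(n - a - b)
             for a in range(n + 1) for b in range(n - a + 1)]
    coeffs = list(sp.symbols(f'c0:{len(monos)}'))
    p = sum(c * m for c, m in zip(coeffs, monos))
    lap = sp.expand(sp.diff(p, x, 2) + sp.diff(p, y, 2) + sp.diff(p, z, 2))
    if lap == 0:                                  # n = 0, 1: everything is harmonic
        return len(coeffs)
    eqs = sp.Poly(lap, x, y, z).coeffs()          # every coefficient must vanish
    A, _ = sp.linear_eq_to_matrix(eqs, coeffs)
    return len(coeffs) - A.rank()

for n in range(6):
    total = (n + 1) * (n + 2) // 2                # all monomials x^a y^b z^c with a+b+c = n
    print(n, total, harmonic_dim_3d(n), 2*n + 1)  # the last two columns agree
```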

Product of Spherical Harmonics


One mistake I made is that

\displaystyle Y_{LM} = \sum_{m_1 m_2} C_{j_1m_1j_2 m_2}^{LM} Y_{j_1m_1} Y_{j_2m_2}

because

\displaystyle |j_1j_2JM\rangle = \sum_{m_1m_2} C_{j_1m_1j_2 m_2}^{JM} |j_1m_1\rangle |j_2m_2\rangle

but this application is wrong.

The main reason is that |j_1j_2JM\rangle is “living” in a tensor product space, while |jm \rangle lives in an ordinary single-particle space; the product Y_{j_1m_1} Y_{j_2m_2} evaluated at a single direction is not the tensor product state.

We can also see that the norm of the left side is 1, but the norm of the right side is not.


Using the Clebsch-Gordan series, we can deduce the product of spherical harmonics.

First, we need to know the relationship between the Wigner D-matrix and spherical harmonics. Using the equation

\displaystyle Y_{lm}(R(\hat{r})) = \sum_{m'} Y_{lm'}(\hat{r}) D_{m'm}^{l}(R)

We can set \hat{r} = \hat{z} , with R defined by R(\hat{z}) = \hat{r} , and use

Y_{lm}(\hat{z}) = Y_{lm}(0, 0) = \sqrt{\frac{2l+1}{4\pi}} \delta_{m0}

Thus,

\displaystyle Y_{lm}(\hat{r}) = \sqrt{\frac{2l+1}{4\pi}} D_{0m}^{l}(R)

\Rightarrow D_{0m}^{l} = \sqrt{\frac{4\pi}{2l+1}} Y_{lm}(\hat{r})

Now, recall the Clebsch-Gordan series,

\displaystyle D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2} = \sum_{jm} \sum_{M} C_{j_1m_1j_2m_2}^{jM} C_{j_1N_1j_2N_2}^{jm} D_{Mm}^{j}

Set m_1 = m_2 = 0 (which forces M = 0):

\displaystyle D_{0N_1}^{j_1} D_{0 N_2}^{j_2} = \sum_{jm} C_{j_10j_20}^{j0} C_{j_1N_1j_2N_2}^{jm} D_{0m}^{j}

Renaming some labels and substituting D_{0m}^{l} = \sqrt{4\pi/(2l+1)}\, Y_{lm} ,

\displaystyle Y_{l_1m_1} Y_{l_2m_2} = \sum_{lm} \sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2l+1)}} C_{l_10l_20}^{l0} C_{l_1m_1l_2m_2}^{lm} Y_{lm}


We can multiply both sides by C_{l_1m_1l_2m_2}^{LM} and sum over m_1, m_2 , using

\displaystyle \sum_{m_1m_2} C_{l_1m_1l_2m_2}^{lm}C_{l_1m_1l_2m_2}^{LM} = \delta_{mM} \delta_{lL}

\displaystyle \sum_{m_1m_2} C_{l_1m_1l_2m_2}^{LM} Y_{l_1m_1} Y_{l_2m_2} = \sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}} C_{l_10l_20}^{L0} Y_{LM}
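
A numeric sanity check of the product formula at an arbitrarily chosen direction (sympy's Ynm and clebsch_gordan use the same Condon-Shortley phase convention as here):

```python
import sympy as sp
from sympy.physics.wigner import clebsch_gordan

th, ph = sp.Rational(3, 7), sp.Rational(1, 3)       # arbitrary test angles (radians)
l1, m1, l2, m2 = 2, 1, 1, -1
M = m1 + m2

lhs = sp.Ynm(l1, m1, th, ph) * sp.Ynm(l2, m2, th, ph)

rhs = 0
for L in range(abs(l1 - l2), l1 + l2 + 1):
    if abs(M) > L:
        continue                                     # the Clebsch-Gordan coefficient vanishes
    factor = sp.sqrt(sp.Rational((2*l1 + 1) * (2*l2 + 1), 2*L + 1) / (4 * sp.pi))
    rhs += (factor * clebsch_gordan(l1, l2, L, 0, 0, 0)
                   * clebsch_gordan(l1, l2, L, m1, m2, M)
                   * sp.Ynm(L, M, th, ph))

print(sp.N(sp.expand_func(lhs - rhs), 15))           # ~ 0
```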

 

 

Clebsch-Gordan Series


One of the important identities in angular momentum theory is the Clebsch-Gordan series, which involves the Wigner D-matrix.

The series is deduced by evaluating the following quantity in two ways:

\langle j_1 m_1 j_2 m_2 | U(R) |j m \rangle

Letting the rotation operator act on |jm\rangle (to the right), we insert

\displaystyle \sum_{M} |jM\rangle \langle jM| = 1

\displaystyle \sum_{M} \langle j_1 m_1 j_2 m_2|jM\rangle \langle jM| U(R) |jm\rangle = \sum_{M} C_{j_1m_1j_2m_2}^{jM} D_{Mm}^{j}

Letting the rotation operator act on \langle j_1 m_1 j_2 m_2| (to the left), we insert

\displaystyle \sum_{N_1 N_2 } |j_1 N_1 j_2 N_2\rangle \langle  j_1 N_1 j_2 N_2| = 1

\displaystyle \sum_{N_1 N_2} \langle j_1 m_1 j_2 m_2|U(R) | j_1 N_1 j_2 N_2\rangle \langle j_1 N_1 j_2 N_2| jm\rangle

\displaystyle = \sum_{N_1N_2} C_{j_1N_1j_2N_2}^{jm} D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2}

Thus,

\displaystyle \sum_{N_1N_2} C_{j_1N_1j_2N_2}^{jm} D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2} = \sum_{M} C_{j_1m_1j_2m_2}^{jM} D_{Mm}^{j}

We can multiply both sides by C_{j_1 N_1' j_2 N_2'}^{jm} and sum over j, m

using the orthogonality relation

\displaystyle \sum_{jm} C_{j_1 N_1 j_2 N_2}^{jm} C_{j_1N_1'j_2N_2'}^{jm} = \delta_{N_1 N_1'} \delta_{N_2 N_2'}

to obtain (after renaming N_i' back to N_i),

\displaystyle D_{m_1N_1}^{j_1} D_{m_2 N_2}^{j_2} = \sum_{jm} \sum_{M} C_{j_1m_1j_2m_2}^{jM} C_{j_1N_1j_2N_2}^{jm} D_{Mm}^{j}
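
A quick sympy check of the two orthogonality relations used above (here for j_1 = 1, j_2 = 1/2 , chosen just for illustration):

```python
import sympy as sp
from sympy.physics.wigner import clebsch_gordan

j1, j2 = sp.S(1), sp.Rational(1, 2)
half = sp.Rational(1, 2)
j_list = [half, sp.Rational(3, 2)]                     # |j1 - j2| ... j1 + j2
m_list = lambda j: [j - k for k in range(int(2*j) + 1)]

def ortho_jm(N1, N2, N1p, N2p):
    """sum over (j, m) of C^{jm}_{j1 N1 j2 N2} C^{jm}_{j1 N1' j2 N2'}"""
    return sp.simplify(sum(
        clebsch_gordan(j1, j2, j, N1, N2, m) * clebsch_gordan(j1, j2, j, N1p, N2p, m)
        for j in j_list for m in m_list(j)))

def ortho_m1m2(j, m, L, M):
    """sum over (m1, m2) of C^{jm}_{j1 m1 j2 m2} C^{LM}_{j1 m1 j2 m2}"""
    return sp.simplify(sum(
        clebsch_gordan(j1, j2, j, m1, m2, m) * clebsch_gordan(j1, j2, L, m1, m2, M)
        for m1 in m_list(j1) for m2 in m_list(j2)))

print(ortho_jm(1, -half, 1, -half))                    # 1  (N = N')
print(ortho_jm(1, -half, 0,  half))                    # 0  (N != N', same total m)
print(ortho_m1m2(sp.Rational(3, 2), half, sp.Rational(3, 2), half))   # 1
print(ortho_m1m2(sp.Rational(3, 2), half, half, half))                # 0
```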

 

Wigner-Eckart theorem


The mathematical form of the theorem is: given a tensor operator of rank k, T^{(k)} , its matrix element between eigenstates \left|j,m\right> of the total angular momentum J is,

\left<j m\right|T_q^{(k)} \left|j' m'\right> = \left<j' m' k q| j m \right> \left<j||T^{(k)}||j'\right>

where \left<j||T^{(k)}||j'\right> is the reduced matrix element. The power of the theorem is that, once the reduced matrix element is calculated for a particular (maybe the simplest) case, all other matrix elements can be calculated.

The theorem works only in spherical symmetry. The states are eigenstates of the total angular momentum. We can imagine that, when the system is rotated, something is unchanged (the reduced matrix element). The quantum numbers m, m' define particular orientations of the states, and these “orientations” contribute an additional factor, which is the Clebsch-Gordan coefficient.


Another application is the Replacement theorem.

For any two spherical tensors A^{(k)}, B^{(k)} of rank k, using the theorem we have,

\displaystyle \left<j m|A^{(k)}|j' m' \right> = \frac{\left<j||A^{(k)}||j'\right>}{\left<j||B^{(k)}||j'\right>} \left<j m|B^{(k)}|j' m' \right>


This can be used to prove the Projection theorem, which is about rank-1 tensors.

L , J are the orbital and total angular momentum, respectively. The projection of L on J involves

L\cdot J = L_z J_z + \frac{1}{2}\left( L_+ J_- + L_- J_+ \right)

The expectation value in the state \left|j m\right> is,

\left< L\cdot J\right> = \left< L_z J_z\right> + \frac{1}{2}\left< L_+ J_-\right> + \frac{1}{2}\left<L_- J_+\right>

Using the Wigner-Eckart theorem, the right side becomes,

\left< L \cdot J \right> = c_j \left<j||L||j\right>

where the coefficient c_j depends only on j, since the dot product is a scalar and therefore isotropic. Similarly,

\left< J \cdot J \right> = c_j \left<j||J||j\right> ,

Using the Replacement theorem,

\displaystyle \left< L \right> = \frac{\left<j||L||j\right>}{\left<j||J||j\right>} \left<J \right>

Thus, we have,

\displaystyle \left< L \right> = \frac{\left< L\cdot J \right>}{\left<J\cdot J\right>} \left<J \right>

Since this holds for every state \left|j m\right> within a given j multiplet,

\displaystyle L = \frac{L\cdot J}{J\cdot J} J

This is the same as the classical vector projection.
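
A numerical check of the projection theorem for one concrete case (the choice l = 1 , s = 1/2 , coupled to j = 3/2 , is just for illustration): for every m, \left<j m|L_z|j m\right> should equal (\left<L\cdot J\right> / \left<J\cdot J\right>) \left<j m|J_z|j m\right> , and here the ratio is (5/2)/(15/4) = 2/3 .

```python
import numpy as np
from sympy import S
from sympy.physics.wigner import clebsch_gordan

def angular_momentum(j):
    """(Jz, J+, J-) matrices in the basis |j, j>, |j, j-1>, ..., |j, -j>."""
    dim = int(round(2*j)) + 1
    ms = [j - k for k in range(dim)]
    Jz = np.diag(ms)
    Jp = np.zeros((dim, dim))
    for a, m in enumerate(ms[1:], start=1):      # J+|j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>
        Jp[a - 1, a] = np.sqrt(j*(j + 1) - m*(m + 1))
    return Jz, Jp, Jp.T

l, s = 1.0, 0.5
Lz, Lp, Lm = angular_momentum(l)
Sz, Sp, Sm = angular_momentum(s)
Il, Is = np.eye(Lz.shape[0]), np.eye(Sz.shape[0])

# operators on the product space |l ml> (x) |s ms>
Lz_, Lp_, Lm_ = np.kron(Lz, Is), np.kron(Lp, Is), np.kron(Lm, Is)
Sz_, Sp_, Sm_ = np.kron(Il, Sz), np.kron(Il, Sp), np.kron(Il, Sm)
Jz_ = Lz_ + Sz_
LdotJ = Lz_ @ Jz_ + 0.5*(Lp_ @ (Lm_ + Sm_) + Lm_ @ (Lp_ + Sp_))

mls, mss = [S(1), S(0), S(-1)], [S(1)/2, -S(1)/2]    # same ordering as the kron above

def coupled(j, m):
    """|j m> = sum of Clebsch-Gordan coefficients times |l ml> (x) |s ms>."""
    vec = np.zeros(len(mls) * len(mss))
    for a, ml in enumerate(mls):
        for b, ms in enumerate(mss):
            vec[a*len(mss) + b] = float(clebsch_gordan(S(1), S(1)/2, j, ml, ms, m))
    return vec

j = S(3)/2
for m in [j, j - 1, j - 2, j - 3]:
    v = coupled(j, m)
    ratio = (v @ LdotJ @ v) / float(j*(j + 1))       # <L.J> / <J.J>, with <J.J> = j(j+1)
    print(float(m), np.isclose(v @ Lz_ @ v, ratio * (v @ Jz_ @ v)))   # all True
```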

 

 

Angular distribution of emission that carries angular momentum


Before the decay, the nucleus is in a state with total angular momentum J and projection M along the quantization (symmetry) axis:

\Phi_{JM}

Say the emitted radiation (an EM wave or a particle) carries angular momentum l with projection m; its wave function is:

\phi_{lm}

and the daughter nucleus has angular momentum j with projection m_j ; its wave function is

\Psi_{j m_j}

their relation is:

\Phi_{JM} = \sum_{m, m_j}{\phi_{lm} \Psi_{j m_j} \left< l m j m_j | JM \right>}

where \left< l m j m_j | JM \right> is the Clebsch-Gordan coefficient.

The wave function of the emitted radiation from a central interaction takes the form:

\phi_{lm} = A_0 u_{nl}(r) Y_l^m(\theta,\phi)

The angular part of the probability density is obtained by integrating out the radial part:

\int |\phi_{lm}|^2 r^2 dr = A_0^2 \left( \int |u_{nl}|^2 r^2 dr \right) |Y_l^m|^2

For a detector at a fixed distance, the radial part is a constant. Moreover, not every spherical harmonic contributes the same weight; there is a weighting factor given by the Clebsch-Gordan coefficient. Thus, the angular distribution is proportional to:

W(\theta) \propto \sum_{m}{|Y_l^m|^2 |\left<l m j m_j |JM\right>|^2 }, \quad m_j = M - m

For example, for JM=00, the possible (l, j) are (0,0), (1,1), (2,2) and so on, with m = -m_j . The C-G coefficients are,

\langle 0000 | 00 \rangle = 1

\langle lml-m | 00 \rangle = \frac{(-1)^{l-m}}{\sqrt{2l+1}}

 

thus,

Y_0^0 = \frac{1}{\sqrt{4\pi}}

\displaystyle \sum_{m}{\left|Y_l^m\right|^2  \left|\langle l m l -m |0 0 \rangle \right|^2 } =  \sum_m|Y_l^m|^2 \frac{1}{2l+1}= \frac{1}{4\pi}

Thus, the angular distribution is isotropic.
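
A quick sympy check of this isotropy: the weighted sum \sum_m |Y_l^m|^2 / (2l+1) comes out the same, 1/4\pi , at any direction.

```python
import sympy as sp

l = 2
for th, ph in [(sp.Rational(1, 3), sp.Rational(2, 5)),
               (sp.Rational(7, 5), sp.Rational(1, 7))]:     # two arbitrary directions
    W = sum(sp.Abs(sp.expand_func(sp.Ynm(l, m, th, ph)))**2
            for m in range(-l, l + 1)) / (2*l + 1)
    print(sp.N(W), sp.N(1/(4*sp.pi)))                        # both ~ 0.0795775
```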

Clebsch – Gordan Coefficient II


As discussed in the last post, finding the CG coefficients by recursion is not as straightforward as textbooks suggest.
However, there is another way around, which is by diagonalization of J^2 .

first we use the identity:

J^2 = J_1^2+J_2^2 + 2 J_{1z} J_{2_z} + J_{1+} J_{2-} + J_{1-} J_{2+}

When we “matrix-lize” the operator, we have 2 choices of basis. One is \left| j_1,m_1;j_2,m_2 \right> , which gives a non-diagonal matrix because of the J_{\pm} terms. The other is \left|j,m\right>, which gives a diagonal matrix.

Thus, we have 2 matrices, and we can diagonalize the non-diagonal one. This gives the unitary transform P from the 2-j basis to the j basis, and that is our CG coefficient.

Oh, don't forget to normalize the unitary matrix.

I found this one much easier to compute.
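
Here is a minimal numpy sketch of the method for j_1 = j_2 = 1/2 . One practical wrinkle: the J^2 eigenvalues are degenerate within a multiplet, so I diagonalize J^2 + J_z (whose eigenvalues j(j+1)+m are all distinct here) to pin down each coupled state; the eigenvector columns are then the CG coefficients, up to an overall sign.

```python
import numpy as np

def angular_momentum(j):
    """(Jz, J+, J-) in the basis |j, j>, ..., |j, -j>."""
    dim = int(round(2*j)) + 1
    ms = [j - k for k in range(dim)]
    Jz = np.diag(ms)
    Jp = np.zeros((dim, dim))
    for a, m in enumerate(ms[1:], start=1):
        Jp[a - 1, a] = np.sqrt(j*(j + 1) - m*(m + 1))
    return Jz, Jp, Jp.T

j1 = j2 = 0.5
Jz1, Jp1, Jm1 = angular_momentum(j1)
Jz2, Jp2, Jm2 = angular_momentum(j2)
I1, I2 = np.eye(int(2*j1) + 1), np.eye(int(2*j2) + 1)
dim = len(I1) * len(I2)

# J^2 = J1^2 + J2^2 + 2 J1z J2z + J1+ J2- + J1- J2+   (uncoupled basis |m1> (x) |m2>)
Jsq = (j1*(j1 + 1) + j2*(j2 + 1)) * np.eye(dim) \
      + 2*np.kron(Jz1, Jz2) + np.kron(Jp1, Jm2) + np.kron(Jm1, Jp2)
Jz = np.kron(Jz1, I2) + np.kron(I1, Jz2)

vals, vecs = np.linalg.eigh(Jsq + Jz)      # break the m-degeneracy inside each j
print(np.round(vals, 6))                   # 0, 1, 2, 3  =  j(j+1) + m
print(np.round(vecs, 6))                   # columns = CG coefficients (up to sign)
```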

Clebsch – Gordan Coefficient


I am kind of stupid, so for most textbook algebra examples, I easily get lost in the middle.

Thus, I am now going to present a detailed calculation based on the recursion relations.

We just need one equation and a few observations to calculate everything. I like to use the J- relation:

K(j,-m-1) C_{m_1 m_2}^{j m}= K(j_1,m_1) C_{m_1+1 m_2}^{j m+1}+ K(j_2,m_2) C_{m_1 m_2+1}^ {j m+1}

K(j,m) = \sqrt{ j(j+1) - m(m+1)}

C_{m_1 m_2}^{j m} is the coefficient.

Notice that the relation works at fixed j; thus we have an (m_1, m_2) plane for each fixed j, so we have many planes, from j = j_1+j_2 down to j = |j_1-j_2| .

We have 2 observations:

  1. C_{j_1, j_2}^{j_1+j_2 , j_1+j_2} = 1 , which is the maximum (stretched) state; the minimum state C_{-j_1, -j_2}^{j_1+j_2 , -(j_1+j_2)} also equals 1.
  2. For m \ne m_1+m_2 the coefficient is ZERO.

Thus, on the j = j_1 + j_2 plane, the upper-right corner is 1. Then, using the relation, we can get all elements down and to the left, and hence every element on the plane.

The problem comes when we consider the j = j_1 + j_2 - 1 plane: the relation gives no starting value, and no book tells us how to find it!

Let's take a super easy example: j_1 = 1/2 , j_2 = 1/2 . The possible j = 0, 1 , so we have 2 planes.

The j = 1 plane is no big deal.

But on the j = 0 plane, there are only 2 coefficients. We can only relate them and learn that they differ by a sign; we have to use the orthonormality condition to find their values.

See? I really doubt that anybody really does the actual calculation. J. J. Sakurai just skips the j = l - 1/2 case. He cheats!

When going to higher j_1 + j_2 cases, we have to use the J_- relation to evaluate all coefficients. The way is to start from the lower-left corner and use the J_- relation to find the relationship between the coefficients along the lowest diagonal. Then, since all coefficients on that diagonal have the same m value, their squares must sum to 1 (normalization). That gives us our base line, and we use the J_+ relation to find the rest.

I will add a graph and another example later, say j_1 = 3, j_2 = 1.
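
In the meantime, here is a sympy sanity check of the J_- relation for that case, j_1 = 3, j_2 = 1 (using K(j, -m-1) = K(j, m) ; coefficients whose m indices fall out of range are zero and are skipped):

```python
import sympy as sp
from sympy.physics.wigner import clebsch_gordan

def K(j, m):
    return sp.sqrt(j*(j + 1) - m*(m + 1))

j1, j2 = 3, 1
ok = True
for j in range(abs(j1 - j2), j1 + j2 + 1):
    for m1 in range(-j1, j1 + 1):
        for m2 in range(-j2, j2 + 1):
            m = m1 + m2
            if not (-j <= m <= j - 1):                  # need |j, m+1> on the right side
                continue
            lhs = K(j, m) * clebsch_gordan(j1, j2, j, m1, m2, m)
            rhs = 0
            if m1 < j1:                                 # otherwise K(j1, m1) = 0 anyway
                rhs += K(j1, m1) * clebsch_gordan(j1, j2, j, m1 + 1, m2, m + 1)
            if m2 < j2:
                rhs += K(j2, m2) * clebsch_gordan(j1, j2, j, m1, m2 + 1, m + 1)
            ok = ok and sp.simplify(lhs - rhs) == 0
print(ok)    # True
```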