Euler angle


With the help of the post on changing frame, we are now ready to use the Euler angles.

Recall that

V_R = R_n ( - \theta ) V_S

when the rotating frame's axes are rotated in the positive sense with respect to the static frame.

The Euler rotation is performed in 3 steps:

  1. Rotate about Z_S , the static z-axis, by angle \alpha ; this is R_{zS} ( - \alpha ) . The x-axis and the y-axis are now different; we denote this new frame by 1.
  2. Rotate about Y_1 , the y-axis of frame 1, by angle \beta ; this is R_{y1} ( - \beta ) . The new frame is denoted by 2.
  3. Rotate about Z_2 , the z-axis of frame 2, by angle \gamma ; this is R_{z2} ( - \gamma ) . The new frame is denoted by R.

The rotating frame is related to the static frame by:

V_R = R_{z2} ( - \gamma ) R_{y1} ( - \beta ) R_{zS} ( - \alpha ) V_S

or

R_R ( \alpha, \beta, \gamma ) = R_{z2} ( - \gamma )  R_{y1} ( - \beta ) R_{zS} ( - \alpha )

Since each rotation is performed about an axis of a new frame, the computation gets ugly: after each rotation, we would have to express the next rotation matrix in the new coordinates.

There is another representation. Notice that:

R_{y1} ( -\beta ) = R_{zS} ( - \alpha )  R_{yS} ( - \beta )  R_{zS} ( \alpha)

which means that rotating about the y1-axis by \beta is the same as first rotating the y1-axis back to the yS-axis (about the zS-axis), then rotating by \beta about the yS-axis, and finally rotating the yS-axis back to the y1-axis (about the zS-axis again).

In general, a rotation about a new axis b is a similarity transform of the rotation about the old axis a : R_b ( \theta ) = T R_a ( \theta ) T^{-1} , where T is the transform relating the two frames; I use a and b for the axes before and after the transform.

and we have the analogous relation for the z2-axis:

R_{z2} ( -\gamma ) = R_{y1} ( - \beta ) R_{z1} ( - \gamma ) R_{y1} ( \beta )

Using these 2 equations, and noticing that the z1-axis is the same as the zS-axis, we get:

R_R ( \alpha , \beta, \gamma ) = R_{zS} ( - \alpha ) R_{yS} (- \beta ) R_{zS} ( - \gamma )

which acts only on the same (static) frame.
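As a quick numerical check of this result (a minimal numpy sketch of my own, not from the original post; the rotation-matrix conventions below are the standard ones), we can build the three rotations step by step through the intermediate frames and compare with the single static-frame product:

```python
import numpy as np

def Rz(t):
    # rotation matrix about the z-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(t):
    # rotation matrix about the y-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

alpha, beta, gamma = 0.3, 1.1, -0.7   # arbitrary test angles

# step-by-step construction: each rotation about a new-frame axis is a
# similarity transform of the corresponding static-frame rotation
R_zS = Rz(-alpha)
R_y1 = Rz(-alpha) @ Ry(-beta) @ Rz(alpha)        # rotation about the y1-axis
R_z2 = R_y1 @ Rz(-gamma) @ np.linalg.inv(R_y1)   # rotation about the z2-axis (z1 = zS)

step_by_step = R_z2 @ R_y1 @ R_zS
static_frame = Rz(-alpha) @ Ry(-beta) @ Rz(-gamma)   # all rotations about static axes

print(np.allclose(step_by_step, static_frame))   # True
```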

Rotation operator on x, y in Matrix form


In J.J. Sakurai's book, the formalism for finding the matrix representation of the rotation operator is general, but quite long and detailed. A general treatment is necessary for understanding the topic, but I think: who will use an arbitrary rotation? So, here I give a simple and direct calculation for J_x and J_y , ready to use.

The method is diagonalization, since we already know the matrix form of the angular momentum operators; this approach is not given in J.J. Sakurai's book.

Recall the formalism:

f(M) = P \cdot f(D) \cdot P^{-1}

Since D is a diagonal matrix,

f(D)_{ij} = f(\lambda_i) \delta_{ij}

So, we have to find the P for J_x and J_y .
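Before that, here is the recipe itself in code (a minimal numpy sketch of my own, not from the book): diagonalize M , apply f entry-wise to the eigenvalues, and transform back.

```python
import math
import numpy as np

def matrix_function(M, f):
    # f(M) = P f(D) P^{-1}: apply f to the eigenvalues, then change basis back
    lam, P = np.linalg.eig(M)    # columns of P are the eigenvectors of M
    return P @ np.diag(f(lam)) @ np.linalg.inv(P)

# sanity check on a small matrix: exp(M) from the eigendecomposition
# versus a truncated Taylor series
M = np.array([[0.0, 1.0], [1.0, 0.0]])
exp_eig = matrix_function(M, np.exp)
exp_taylor = sum(np.linalg.matrix_power(M, k) / math.factorial(k) for k in range(20))

print(np.allclose(exp_eig, exp_taylor))   # True
```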

I am still trying to obtain the general equation, but…

Anyway, using a program can solve it without headache (but typing the LaTeX is one). Here are some results.

R_x^{(1/2)} ( \theta ) = \begin {pmatrix} \cos \left( \frac {\theta}{2} \right) & - i \sin \left( \frac{\theta}{2} \right) \\ -i \sin \left( \frac{\theta}{2} \right) & \cos \left( \frac {\theta}{2} \right) \end {pmatrix}

R_y^{(1/2)} ( \theta ) = \begin {pmatrix} \cos \left( \frac {\theta}{2} \right) & - \sin \left( \frac{\theta}{2} \right) \\ \sin \left( \frac{\theta}{2} \right) & \cos \left( \frac {\theta}{2} \right) \end {pmatrix}
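These matrices can be checked numerically. The sketch below assumes the convention R(\theta) = \exp(-i \theta \sigma / 2) with \hbar = 1 (my assumption, chosen so that it matches the matrices above), and obtains the exponential by diagonalizing the Pauli matrices, i.e. by finding exactly the P discussed above.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rotation(sigma, theta):
    # exp(-i*theta*sigma/2), computed via diagonalization: P exp(-i*theta*D/2) P^{-1}
    lam, P = np.linalg.eig(sigma)
    return P @ np.diag(np.exp(-1j * theta * lam / 2)) @ np.linalg.inv(P)

theta = 1.2
Rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])

print(np.allclose(rotation(sigma_x, theta), Rx))   # True
print(np.allclose(rotation(sigma_y, theta), Ry))   # True
```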

on Diagonalization (reminder)


Since I don't have a linear algebra book on hand, this is just a reminder of some very basic things.

Any matrix M can be diagonalized by its eigenvalues \lambda_i and eigenvectors v_i , provided that its eigenvectors span the whole space. In that case the transform represented by the matrix is not contractive, which is to say, the dimension of the transformed space is equal to the dimension of the original space.

Let D denote the diagonal matrix whose elements are the eigenvalues:

D_{ij} = \lambda_i \delta_{ij}

and P be the matrix that collects the eigenvectors as columns:

P_{ij} = \left( v_j \right)_i , \quad P = \begin {pmatrix} v_1 & v_2 & ... & v_n \end {pmatrix}

Thus, the matrix M is :

M = P \cdot D \cdot P^{-1}
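In code, the whole reminder is a couple of lines of numpy (a minimal sketch; the matrix M below is just an arbitrary example of mine):

```python
import numpy as np

M = np.array([[4.0, 1.0], [2.0, 3.0]])   # an arbitrary diagonalizable matrix
lam, P = np.linalg.eig(M)                # eigenvalues, and eigenvectors as columns of P
D = np.diag(lam)

print(np.allclose(M, P @ D @ np.linalg.inv(P)))   # True: M = P D P^{-1}
```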

There are some special cases. Since any matrix can be written as the sum of a symmetric matrix S and an anti-symmetric matrix A , we turn our focus to these 2 kinds of matrices.

For a symmetric matrix S , the transpose of P also works:

S =P \cdot D \cdot P^{-1} = (P^T)^{-1} \cdot D \cdot P^T

which indicates that P^T = P^{-1} . This is because, for a symmetric matrix, M = M^T , the eigenvectors can always be chosen to be orthonormal, thus P^T \cdot P = 1 .
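A quick numerical illustration (my own example matrix; numpy's eigh is used because it returns an orthonormal set of eigenvectors for a symmetric matrix):

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 3.0]])   # a symmetric matrix, S = S^T
lam, P = np.linalg.eigh(S)               # eigenvectors come out orthonormal
D = np.diag(lam)

print(np.allclose(P.T @ P, np.eye(2)))   # True: P^T P = 1
print(np.allclose(S, P @ D @ P.T))       # True: S = P D P^T = P D P^{-1}
```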

For an anti-symmetric matrix A :

A = P \cdot D \cdot P^{-1}

Interchanging the rows or columns of P , with the corresponding exchange of the eigenvalues in D , still keeps the formula working; thus, the case P = P^T is never considered.

___________________________________________________________

For example, the Lorentz Transform

L = \gamma \begin {pmatrix} 1 & \beta \\ \beta & 1 \end {pmatrix}

which has eigenvalues:

D = \gamma \begin {pmatrix} 1-\beta & 0 \\ 0 & 1+\beta \end {pmatrix}

P = \begin {pmatrix} -1 & 1 \\ 1 & 1 \end {pmatrix}

The eigenvectors lie along the light cone, because only the light cone is preserved by the Lorentz transform.

and it is interesting that

L = P \cdot D \cdot P^{-1} = P^{-1} \cdot D \cdot P = P^T \cdot D \cdot (P^T)^{-1} = (P^T)^{-1} \cdot D \cdot P^T
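A quick numpy check of this example (β = 0.6 is just a sample value of mine):

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)

L = gamma * np.array([[1.0, beta], [beta, 1.0]])
D = gamma * np.diag([1.0 - beta, 1.0 + beta])
P = np.array([[-1.0, 1.0], [1.0, 1.0]])   # columns are the light-cone directions

print(np.allclose(L, P @ D @ np.linalg.inv(P)))   # True
print(np.allclose(L, np.linalg.inv(P) @ D @ P))   # True, since P^2 = 2 I
print(np.allclose(P, P.T))                        # True, so all four forms agree
```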

another example is the Rotation Matrix

R = \begin {pmatrix} \cos(\theta) & - \sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}

D = \begin {pmatrix} e^{ - i \theta} & 0 \\ 0 & e^{i \theta} \end {pmatrix}

P = \begin {pmatrix} -i & i \\ 1 & 1 \end{pmatrix}
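Here the matrix is real but its eigenvalues are complex, so np.linalg.eig (not eigh) is needed. A short check (θ is just a sample value of mine):

```python
import numpy as np

theta = 0.9
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
D = np.diag([np.exp(-1j * theta), np.exp(1j * theta)])
P = np.array([[-1j, 1j], [1.0, 1.0]])   # complex eigenvectors as columns

print(np.allclose(R, P @ D @ np.linalg.inv(P)))   # True
```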

The last example is J_x for the spin-½ angular momentum (in units of \hbar / 2 , i.e. the Pauli matrix \sigma_x ):

J_x = \begin {pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

D = \begin {pmatrix} -1 & 0 \\ 0 & 1 \end {pmatrix}

P = \begin {pmatrix} -1 & 1 \\ 1 & 1 \end{pmatrix}
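And the same check for this last example (a small numpy sketch):

```python
import numpy as np

Jx = np.array([[0.0, 1.0], [1.0, 0.0]])   # J_x in units of hbar/2, i.e. sigma_x
D = np.diag([-1.0, 1.0])
P = np.array([[-1.0, 1.0], [1.0, 1.0]])   # eigenvectors (-1, 1) and (1, 1) as columns

print(np.allclose(Jx, P @ D @ np.linalg.inv(P)))   # True
```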