## Changing of frame II

A few things have to be said in advance.

1. A vector is NOT its coordinates.
2. A vector can only be given coordinates when there is a frame.
3. A frame is a set of “reference” vectors, which span the whole space. Those reference vectors are called the basis of the frame.
4. A transformation acts on a vector or on its coordinates, and it can be represented by a matrix.
5. A matrix should act on coordinates or on a basis, but not on a vector.

where

$\hat{\alpha} = \begin {pmatrix} \hat{\alpha}_1 \\ \vdots \\ \hat{\alpha}_n \end{pmatrix}$ is the column vector of the basis reference vectors.

$\vec{u}_{\alpha}$ is the coordinate column vector in the $\alpha$ basis.

$\vec{U}$ is the vector in space.

$\vec{V}$ is the transformed vector in space.

$G$ and $H$ are the matrices of the transforms.

$G \cdot H \cdot G^{-1}$ has the same meaning as $H$; only the matrix representation of the transform is different, due to the different basis.
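A tiny numerical sketch of this statement, using numpy; the matrices $H$ and $G$ below are illustrative choices of my own, not taken from the text:

```python
import numpy as np

# H: a transform written in the alpha basis (here: rotation by 90 degrees).
# G: an invertible change-of-basis matrix (an arbitrary illustrative choice).
H = np.array([[0.0, -1.0],
              [1.0,  0.0]])
G = np.array([[2.0, 1.0],
              [1.0, 1.0]])

u_alpha = np.array([1.0, 2.0])   # coordinates of U in the alpha basis
u_beta = G @ u_alpha             # coordinates of the same U in the new basis

# Route 1: transform in the alpha basis, then change basis.
v_beta_1 = G @ (H @ u_alpha)
# Route 2: change basis first, then apply the transformed matrix G H G^-1.
v_beta_2 = (G @ H @ np.linalg.inv(G)) @ u_beta

print(np.allclose(v_beta_1, v_beta_2))   # True: same transform, different representation
```

Both routes land on the same coordinates for the transformed vector $\vec{V}$, which is exactly why $G H G^{-1}$ and $H$ describe the same transform.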

Euler’s rotation can be illustrated by a series of diagrams. Each rotation of the frame can be made by a $G$. But when doing a real calculation, after we apply the matrix $G$ to the coordinates, the basis has changed. Using the fact that a matrix can be regarded as either a frame transform or a vector transform, we have the following:

This diagram can be extended to any series of frame rotations, and the $V_s \rightarrow X_s \rightarrow V_2 \rightarrow V_s$ triangle demonstrates how a 2-step frame transform can be reduced to a vector transform in the same frame.

I finally feel that I understand Euler angles and changing of frame fully. :D

HERE is a note on vector transforms and frame transforms.

## Rotation operator on x, y in Matrix form

In J.J. Sakurai’s book, the formalism for finding the matrix representation of the rotation operator is general, but quite long and detailed. A general treatment is necessary for understanding the topic, but, I think, who will use an arbitrary rotation? So here I give a simple and direct calculation on $J_x$ and $J_y$, ready for use.

The method is diagonalization, since we already know the matrix form of the angular momentum operators (which is not given in J.J. Sakurai’s book).

Recall the formalism:

$f(M) = P \cdot f(D) \cdot P^{-1}$

Since $D$ is a diagonal matrix,

$f(D)_{ij} = f(\lambda_i) \delta_{ij}$

So we have to find the $P$ for $J_x$ and $J_y$.

I am still trying to obtain the equation, but no luck so far.

Anyway, a program can solve it without headache (but typing the LaTeX is one). Here are some results, writing $J_x(j)$ for the rotation operator $e^{-i J_x \theta / \hbar}$ in the spin-$j$ representation:

$J_x(\frac{1}{2}) = \begin {pmatrix} \cos \left( \frac {\theta}{2} \right) & - i \sin \left( \frac{\theta}{2} \right) \\ -i \sin \left( \frac{\theta}{2} \right) & \cos \left( \frac {\theta}{2} \right) \end {pmatrix}$

$J_y(\frac{1}{2}) = \begin {pmatrix} \cos \left( \frac {\theta}{2} \right) & - \sin \left( \frac{\theta}{2} \right) \\ \sin \left( \frac{\theta}{2} \right) & \cos \left( \frac {\theta}{2} \right) \end {pmatrix}$
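These two matrices can be checked with the diagonalization recipe itself. A minimal numpy sketch, assuming $\hbar = 1$ so that the spin-$\frac{1}{2}$ operators are half the Pauli matrices:

```python
import numpy as np

# Rotation operator exp(-i J theta) computed via f(M) = P f(D) P^-1.
def rotation(J, theta):
    lam, P = np.linalg.eig(J)                      # diagonalize J
    return P @ np.diag(np.exp(-1j * theta * lam)) @ np.linalg.inv(P)

# Spin-1/2 angular momentum operators (hbar = 1): half the Pauli matrices.
Jx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Jy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)

theta = 0.7
c, s = np.cos(theta / 2), np.sin(theta / 2)

# The closed forms quoted in the text.
Rx = np.array([[c, -1j * s], [-1j * s, c]])
Ry = np.array([[c, -s], [s, c]])

print(np.allclose(rotation(Jx, theta), Rx))   # True
print(np.allclose(rotation(Jy, theta), Ry))   # True
```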

## on Diagonalization (reminder)

Since I don’t have an algebra book on hand, this is just a reminder of some very basic things.

Any matrix $M$ can be diagonalized by its eigenvalues $\lambda_i$ and eigenvectors $v_i$, given that its eigenvectors span the whole space — that is, the eigenvectors form a complete, linearly independent set of the same dimension as the original space.

Let $D$ be the diagonal matrix whose elements are the eigenvalues:

$D_{ij} = \lambda_i \delta_{ij}$

and $P$ be the matrix that collects the eigenvectors as its columns:

$P = \begin {pmatrix} v_1 & v_2 & ... & v_n \end {pmatrix}, \quad P_{ij} = \left( v_j \right)_i$

Thus, the matrix $M$ is :

$M = P \cdot D \cdot P^{-1}$
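As a minimal numpy check of this formula (the matrix below is an arbitrary non-symmetric but diagonalizable example):

```python
import numpy as np

# Verify M = P D P^-1 for an illustrative diagonalizable matrix.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])

lam, P = np.linalg.eig(M)     # columns of P are the eigenvectors v_i
D = np.diag(lam)              # D_ij = lambda_i * delta_ij

print(np.allclose(P @ D @ np.linalg.inv(P), M))   # True
```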

There are some special cases. Since any matrix can be rewritten as the sum of a symmetric matrix $S$ and an anti-symmetric matrix $A$, we turn our focus to these two kinds of matrices.

For a symmetric matrix $S$, the transpose of $P$ also works:

$S =P \cdot D \cdot P^{-1} = (P^T)^{-1} \cdot D \cdot P^T$

which indicates that $P^T = P^{-1}$. This is because, for a symmetric matrix, $M = M^T$, eigenvectors with different eigenvalues are orthogonal; normalizing them gives $P^T \cdot P = 1$.
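A quick numpy check: `numpy.linalg.eigh` returns normalized eigenvectors for a symmetric matrix, so the $P$ it produces is orthogonal, $P^T = P^{-1}$. The symmetric matrix below is an arbitrary example:

```python
import numpy as np

# An arbitrary symmetric matrix.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, P = np.linalg.eigh(S)    # eigh: eigen-decomposition for symmetric/Hermitian

print(np.allclose(P.T @ P, np.eye(3)))         # True: P^T = P^-1
print(np.allclose(P @ np.diag(lam) @ P.T, S))  # True: S = P D P^T
```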

For anti-symmetric matrix $A$

$A = P \cdot D \cdot P^{-1}$

Since interchanging the rows or columns of $P$, with the corresponding exchange of eigenvalues in $D$, still keeps the formula working, the case $P = P^T$ is never considered.

___________________________________________________________

For example, take the Lorentz transform:

$L = \gamma \begin {pmatrix} \beta & 1 \\ 1 & \beta \end {pmatrix}$

which is diagonalized by:

$D = \gamma \begin {pmatrix} \beta-1 & 0 \\ 0 & \beta+1 \end {pmatrix}$

$P = \begin {pmatrix} -1 & 1 \\ 1 & 1 \end {pmatrix}$

The eigenvectors are the light-cone directions, because only light is preserved (as a direction) by the Lorentz transform.

and it is interesting that

$L = P \cdot D \cdot P^{-1} = P^{-1} \cdot D \cdot P = P^T \cdot D \cdot (P^T)^{-1} = (P^T)^{-1} \cdot D \cdot P^T$
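These four equalities hold here because this particular $P$ is symmetric and $P^2 = 2$, so $P^{-1} = \frac{1}{2} P$. A quick numerical check for a sample $\beta$:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# The L, D, P of the Lorentz example in the text.
L = gamma * np.array([[beta, 1.0], [1.0, beta]])
D = gamma * np.array([[beta - 1.0, 0.0], [0.0, beta + 1.0]])
P = np.array([[-1.0, 1.0], [1.0, 1.0]])
Pinv = np.linalg.inv(P)
PTinv = np.linalg.inv(P.T)

# All four products reproduce L.
for A in (P @ D @ Pinv, Pinv @ D @ P, P.T @ D @ PTinv, PTinv @ D @ P.T):
    print(np.allclose(A, L))   # True (four times)
```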

Another example is the rotation matrix:

$R = \begin {pmatrix} \cos(\theta) & - \sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$

$D = \begin {pmatrix} e^{- i \theta} & 0 \\ 0 & e^{i \theta} \end {pmatrix}$

$P = \begin {pmatrix} -i & i \\ 1 & 1 \end{pmatrix}$

The last example is the $J_x$ of spin-$\frac{1}{2}$ angular momentum, in units of $\frac{\hbar}{2}$:

$J_x = \begin {pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

$D = \begin {pmatrix} -1 & 0 \\ 0 & 1 \end {pmatrix}$

$P = \begin {pmatrix} -1 & 1 \\ 1 & 1 \end{pmatrix}$

## angular momentum operator in matrix form

In order to find out the matrix elements, we just need to know two equations:

$J_z \left| j,m\right> = m \hbar \left|j,m\right>$

$J_\pm \left| j, m \right>=\hbar \sqrt{ j(j+1) -m ( m\pm 1) } \left |j,m \pm 1 \right >$

The calculation is straightforward, but be careful of the signs.

I define the coefficient:

$K_j(m) = \frac{\hbar}{2} \sqrt {j(j+1) -m(m+1) }$

and the matrix coefficients are:

$J_z ^{\mu\nu} (j) = \hbar (j -\mu +1) \delta_{\mu \nu}$

$J_x ^{\mu\nu} (j) = K_j(j-\mu) \delta_{\mu (\nu-1)}+K_j(j-\nu)\delta_{(\mu-1)\nu}$

$J_y ^{\mu\nu} (j) = -i K_j(j-\mu) \delta_{\mu (\nu-1)}+ i K_j (j-\nu)\delta_{(\mu-1)\nu}$

where

$\mu , \nu = 1,2,...,2j+1$

The Kronecker deltas indicate that only the first upper and lower diagonal elements are non-zero.

$\delta_{\mu (\nu-1)}$ means the first upper diagonal elements, since $\nu = \mu+1$ is needed to make it non-zero.

For example:

$J_x (1) = \frac {\hbar }{2} \begin {pmatrix} 0 & \sqrt {2} & 0 \\ \sqrt{2} & 0 &\sqrt{2} \\ 0 & \sqrt{2} & 0\end{pmatrix}$

$J_x (\frac {3}{2}) = \frac {\hbar }{2} \begin {pmatrix} 0 & \sqrt {3} & 0 & 0 \\ \sqrt{3} & 0 & 2 & 0 \\ 0 & 2 & 0 & \sqrt{3} \\ 0 & 0 & \sqrt{3} & 0 \end{pmatrix}$

To compute $J_y$, we just need to multiply the upper diagonal by $-i$ and the lower diagonal by $i$.
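The matrix elements above can be turned into a short program. A numpy sketch, taking $\hbar = 1$, which also checks the commutation relation $[J_x, J_y] = i J_z$ and reproduces the $J_x(1)$ example:

```python
import numpy as np

# K_j(m) = (1/2) sqrt(j(j+1) - m(m+1)), with hbar = 1.
def K(j, m):
    return 0.5 * np.sqrt(j * (j + 1) - m * (m + 1))

# Build J_x, J_y, J_z for arbitrary j from the matrix elements above.
# mu runs over 0 .. 2j (0-based here), i.e. m = j, j-1, ..., -j.
def angular_momentum(j):
    n = int(round(2 * j + 1))
    Jz = np.diag([j - mu for mu in range(n)]).astype(complex)
    Jx = np.zeros((n, n), dtype=complex)
    Jy = np.zeros((n, n), dtype=complex)
    for mu in range(n - 1):            # first upper / lower diagonals
        k = K(j, j - (mu + 1))         # K_j(j - mu) with 1-based mu
        Jx[mu, mu + 1] = Jx[mu + 1, mu] = k
        Jy[mu, mu + 1] = -1j * k       # upper diagonal gets -i
        Jy[mu + 1, mu] = 1j * k        # lower diagonal gets +i
    return Jx, Jy, Jz

Jx, Jy, Jz = angular_momentum(1)
# The commutation relation [J_x, J_y] = i J_z holds:
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))   # True
# And J_x(1) matches the example above:
print(np.allclose(Jx, 0.5 * np.array([[0, np.sqrt(2), 0],
                                      [np.sqrt(2), 0, np.sqrt(2)],
                                      [0, np.sqrt(2), 0]])))   # True
```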

The coefficient $K_j(j-\mu)$ is very interesting. Look only at the first upper diagonal and take the square of each element, dropping the $\frac{\hbar}{2}$ factor:

$J_x(\frac {1}{2}) = 1$

$J_x(1) = \begin{pmatrix} 2 & 2 \end{pmatrix}$

$J_x(\frac{3}{2}) = \begin{pmatrix} 3 & 4 & 3 \end{pmatrix}$

and we use this to form a bigger matrix

$\begin {pmatrix} ... & 5 & 4 & 3 & 2 & 1 \\ ... & 10 & 8 & 6 & 4 & 2 \\ ... & ... & 12 & 9 & 6 & 3 \\ ... & ... & ...& 12 & 8 & 4 \\ ... & ... & ... & ... & 10 & 5 \end {pmatrix}$

If we read from the top right-hand corner and take the anti-diagonal elements, we can see that they fit the first upper diagonal elements of $J_x(j)$, once we take the square root of each one.

And the pattern is just the multiplication table! How nice~

So I don’t have to compute it for $j = 5/2$:

$J_x ( \frac{5}{2} ) = \frac {\hbar}{2} \begin {pmatrix} 0 & \sqrt{5} & 0 & 0& 0 & 0 \\ \sqrt{5} & 0 & \sqrt{8} & 0 & 0 & 0 \\ 0 & \sqrt{8} & 0 & \sqrt{9} & 0 & 0 \\ 0 & 0 & \sqrt{9} & 0 & \sqrt{8} & 0 \\ 0& 0 &0 & \sqrt{8} &0 & \sqrt{5} \\ 0 & 0 & 0 & 0 & \sqrt{5} & 0 \end {pmatrix}$
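The pattern can also be verified algebraically: the squared first-upper-diagonal element at position $\mu$ is, in units of $(\hbar/2)^2$, $j(j+1) - (j-\mu)(j-\mu+1) = \mu\,(2j+1-\mu)$, which is precisely an entry of the multiplication table. A quick numerical confirmation:

```python
import numpy as np

# Squared first-upper-diagonal elements of J_x(j), in units of (hbar/2)^2,
# compared with the multiplication-table entries mu * (2j + 1 - mu).
for j in (0.5, 1.0, 1.5, 2.0, 2.5):
    n = int(round(2 * j + 1))
    squares = [j * (j + 1) - (j - mu) * (j - mu + 1) for mu in range(1, n)]
    table = [mu * (2 * j + 1 - mu) for mu in range(1, n)]
    print(j, np.allclose(squares, table))   # True for each j
```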

But I don’t know the physical reason for this trick. For Pascal’s triangle, we understand the reason for each element – it is the multiplicity.