On finding the shortest distance between 2 lines


Given 2 lines with equations:

\vec{r} = \vec{r_0}+s \vec{n_0}

\vec{r} = \vec{r_1}+ z \vec{n_1}

the shortest distance between them can be found by 3 methods.

***** method 1 ******

\hat{n} = \frac{\vec{n_0}\times \vec{n_1}}{|\vec{n_0}\times \vec{n_1}|}

d = |(\vec{r_1}-\vec{r_0})\cdot \hat{n}|
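Since \hat{n} is perpendicular to both lines, projecting \vec{r_1}-\vec{r_0} onto it gives the shortest distance. A minimal C++ sketch of method 1 (the example lines and values are my own, not from the original):

#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}
double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

int main() {
    Vec3 r0{0, 0, 0}, n0{1, 0, 0};   // line 0: r0 + s n0
    Vec3 r1{0, 1, 1}, n1{0, 1, 0};   // line 1: r1 + z n1

    Vec3 c = cross(n0, n1);                              // n0 x n1
    Vec3 dr{r1[0]-r0[0], r1[1]-r0[1], r1[2]-r0[2]};      // r1 - r0
    double d = std::fabs(dot(dr, c)) / norm(c);          // method 1
    std::printf("d = %g\n", d);                          // 1 for this example
    return 0;
}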

***** method 2 ******

The line of shortest distance makes a right angle with both given lines. Thus, by simple geometry, we have

\vec{r_0}+s \vec{n_0} + d \hat{n} = \vec{r_1}+ z \vec{n_1}

where \hat{n} is the unit vector from method 1 and d is the distance between the 2 lines. There are 3 unknowns: s, z, d, and the 3 components of the vector equation give 3 equations. Thus, we solve the matrix equation:

N\cdot (s, z, d)^T = \vec{r_1}-\vec{r_0}

where the columns of N are \vec{n_0}, -\vec{n_1} and \hat{n}. Notice that N is formed by 3 linearly independent vectors when the 2 given lines are not parallel; therefore it has an inverse and there is always a unique solution.
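A sketch of this simultaneous solve in C++, using Cramer's rule for the 3x3 system (again, the example lines are my own choice):

#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}
double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
double det3(const Vec3& c0, const Vec3& c1, const Vec3& c2) {
    return dot(c0, cross(c1, c2));   // determinant of the matrix with columns c0, c1, c2
}

int main() {
    Vec3 r0{0, 0, 0}, n0{1, 0, 0};   // line 0: r0 + s n0
    Vec3 r1{0, 1, 1}, n1{0, 1, 0};   // line 1: r1 + z n1

    Vec3 c = cross(n0, n1);
    double cn = std::sqrt(dot(c, c));
    Vec3 nhat{c[0]/cn, c[1]/cn, c[2]/cn};            // unit vector from method 1
    Vec3 mn1{-n1[0], -n1[1], -n1[2]};                // second column of N is -n1
    Vec3 b{r1[0]-r0[0], r1[1]-r0[1], r1[2]-r0[2]};   // right-hand side r1 - r0

    double D = det3(n0, mn1, nhat);                  // nonzero when the lines are not parallel
    double s = det3(b, mn1, nhat) / D;
    double z = det3(n0, b, nhat) / D;
    double d = det3(n0, mn1, b) / D;
    std::printf("s = %g, z = %g, d = %g\n", s, z, d);   // d is the shortest distance
    return 0;
}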

***** method 3 ******

Another method is to minimize the distance between the 2 lines. The distance between a point on each line is:

d= |\vec{r_0} + s \vec{n_0} - \vec{r_1} - z \vec{n_1}|

Minimizing the squared distance with respect to the parameters s and z (taking \vec{n_0} and \vec{n_1} to be unit vectors and dropping the overall factor of 2), i.e.

\frac{\partial d^2}{\partial s} = (\vec{r_0}-\vec{r_1})\cdot \vec{n_0} + s - z \vec{n_0}\cdot \vec{n_1} = 0

\frac{\partial d^2}{\partial z} = (\vec{r_0}-\vec{r_1}) \cdot \vec{n_1} +s \vec{n_0}\cdot \vec{n_1} -z =0

by solving the matrix equation:

\left(  \begin{array}{cc}  1 & -\vec{n_0}\cdot \vec{n_1} \\  \vec{n_0}\cdot \vec{n_1} & -1  \end{array}  \right) \cdot    \left(  \begin{array}{c}  s \\  z  \end{array}  \right)=    \left(  \begin{array}{c}  \left(\vec{r_1}-\vec{r_0}\right)\cdot \vec{n_0} \\  \left(\vec{r_1}-\vec{r_0}\right)\cdot \vec{n_1}  \end{array}  \right)

Solving gives s and z; substituting them back into the distance formula gives d.
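A sketch of this method in C++: solve the 2x2 system for s and z (assuming \vec{n_0}, \vec{n_1} are unit vectors, as in the derivative equations above) and then substitute back to get d. The example lines are the same ones I used above.

#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;
double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

int main() {
    Vec3 r0{0, 0, 0}, n0{1, 0, 0};     // line 0: r0 + s n0
    Vec3 r1{0, 1, 1}, n1{0, 1, 0};     // line 1: r1 + z n1
    Vec3 dr{r1[0]-r0[0], r1[1]-r0[1], r1[2]-r0[2]};   // r1 - r0

    double c = dot(n0, n1);            // n0 . n1
    double b0 = dot(dr, n0), b1 = dot(dr, n1);

    // the 2x2 system:  s - c z = b0,  c s - z = b1
    double det = c * c - 1.0;          // nonzero when the lines are not parallel
    double s = (-b0 + c * b1) / det;
    double z = (b1 - c * b0) / det;

    // distance between the two closest points
    Vec3 p{r0[0]+s*n0[0]-r1[0]-z*n1[0], r0[1]+s*n0[1]-r1[1]-z*n1[1], r0[2]+s*n0[2]-r1[2]-z*n1[2]};
    std::printf("s = %g, z = %g, d = %g\n", s, z, std::sqrt(dot(p, p)));
    return 0;
}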

*********** remark *****************

It is interesting that the minimization method splits the problem into 2 parts (first solve for s and z, then compute d) and reduces a 3-dimensional problem to a 2-dimensional one, while method 2 solves for all three unknowns at once.


Annual Report for my department


This report only covers some of the results because of the page limit.

The report covers April 2011 to March 2012. After the earthquake in March 2011, I started the experiment in June and got the system running in September. The data were collected from September to December. After that, I worked on C-13 polarization and on moving the lab, and then on preparing a scattering experiment.

report2011_1

This is not the final version for publication in the official report.

magnetic field of a single coil


Using the Biot-Savart law directly, the calculation is terrible, since it involves the inverse cube of the distance. However, using the vector potential gives an analytic solution. First, in cylindrical coordinates, the current density is:

\vec{J} = J_\phi \hat{\phi} = I \delta(z) \delta(\rho-a) \hat{\phi}

For the vector potential, without loss of generality, we put the field point on the x-z plane, and the vector potential then points in the \hat{\phi} direction.

I am tired of typing the equations in LaTeX. Anyway, the final result can be found in J.D. Jackson's book, and it takes the form of elliptic integrals:

A_\phi = \frac{\mu_0 I}{2\pi } \sqrt{\frac{a}{\rho}}\left( \frac{2-k^2}{k} K(k^2) - \frac{2}{k} E(k^2) \right)

where K(m) is the complete elliptic integral of the first kind, and E(m) is the complete elliptic integral of the second kind.

K(m) = \int_0^{\pi/2} \frac{1}{\sqrt{1-m sin^2(\theta)}}d\theta

E(m) = \int_0^{\pi/2} \sqrt{1-m sin^2(\theta)} d\theta

and 

k^2 = \frac{ 4 a \rho }{ (a+\rho)^2+z^2 }

Using the derivative properties of the elliptic integrals, we can then get the magnetic field in analytic form. Here is the PDF for the detailed calculation: Field of Single Coil
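As a quick numerical cross-check of the closed form above, here is a C++ sketch (my own, with \mu_0 = I = 1 and an arbitrary field point) that evaluates K(m) and E(m) by Simpson's rule and compares the elliptic-integral expression for A_\phi with a direct numerical integration of the vector potential of the loop. The two numbers should agree to several digits.

#include <cmath>
#include <cstdio>

const double PI = std::acos(-1.0);

// composite Simpson's rule on [0, b] with n (even) subintervals
template <class F>
double simpson(F f, double b, int n = 2000) {
    double h = b / n, s = f(0.0) + f(b);
    for (int i = 1; i < n; ++i) s += f(i * h) * (i % 2 ? 4.0 : 2.0);
    return s * h / 3.0;
}

// complete elliptic integrals of the first and second kind, parameter m
double K(double m) {
    return simpson([m](double t) { double st = std::sin(t); return 1.0 / std::sqrt(1.0 - m * st * st); }, PI / 2);
}
double E(double m) {
    return simpson([m](double t) { double st = std::sin(t); return std::sqrt(1.0 - m * st * st); }, PI / 2);
}

int main() {
    double a = 1.0, rho = 0.5, z = 0.3;                        // coil radius and field point
    double m = 4 * a * rho / ((a + rho) * (a + rho) + z * z);  // m = k^2
    double k = std::sqrt(m);

    // closed form quoted above, with mu_0 I = 1
    double A_closed = (1.0 / (2 * PI)) * std::sqrt(a / rho)
                      * ((2 - m) * K(m) - 2 * E(m)) / k;

    // direct integral: A_phi = (a / 4 pi) Int_0^{2 pi} cos(p) dp / sqrt(a^2 + rho^2 + z^2 - 2 a rho cos(p))
    double A_direct = (a / (4 * PI)) * simpson([&](double p) {
        return std::cos(p) / std::sqrt(a * a + rho * rho + z * z - 2 * a * rho * std::cos(p));
    }, 2 * PI);

    std::printf("A_phi (elliptic form)   = %.8f\n", A_closed);
    std::printf("A_phi (direct integral) = %.8f\n", A_direct);
    return 0;
}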

 

 

 

Compile C++ in windows, Linux & Mac


To compile a C++ program in Windows, as far as I know, we need some IDE (Integrated Development Environment) and compile from there. Two free IDEs are:

  1. Dev-C++ from Bloodshed Software (http://www.bloodshed.net/)
  2. Eclipse IDE for C/C++ Developers (http://www.eclipse.org/)
On Linux and Mac, since both are UNIX-like systems, the procedure is the same.
in the terminal, use the command “g++”:
        g++ file.cpp -o output.file
To execute the output file, type:
        ./output.file
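For example, a minimal file.cpp to try the commands with (just a placeholder program, not anything from this post) could be:

        // file.cpp : minimal example program
        #include <iostream>

        int main() {
            std::cout << "Hello, world!" << std::endl;
            return 0;
        }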

There is another way on Mac: Xcode. You can either download it (paid) or install it from the application CD that comes with your Mac.

Finding a 90 degree pulse of the NMR system


Tuning of NMR System: check the PDF… too tired to rewrite it again….

 

 

 

[Pol. p target] Meeting report (June 8th)


Done

  1. Calibrating the NMR system with water
  • by changing the NMR level
  • after finding the peak at level = 150, measured the FID area successively.

Result:

  • the pulse may not be optimized. The 1st pulse gave FID area = 25, the 2nd pulse gave FID area = 7, and the 3rd pulse gave 5.

  2. Observed that the coil relaxation signal depends on the coil impedance. There is a characteristic peak that indicates the change in impedance; it can be used to measure the impedance.
  3. The coil was wrapped with Teflon tape and the copper wire fastened. Adhesive was used to fix the joint of the coil and the cable. The insertion mechanism was fixed with an optics mount.

Result:

  • the characteristic peak does not change much. Thus, we are confident that the variation in the NMR signal is NOT caused by the coil.

  4. Optimization
  • the microwave delay time was measured. Since it is not easy to trigger at -10 us after the laser pulse ends, we use 0 us instead.
  • the microwave power is optimized at 1.0 W
  • laser pulse duty and chopper frequency
  5. Laser polarization angle
  • the change is smaller than the signal fluctuation.

Wakui-san's comments

  1. Laser mode
  • the laser is running in multiline mode, but the power detector is set at 514.5 nm
  2. Crystal expired
  • the crystal we are using is about 5 years old

ToDo

  1. Measure the statistics of the data, given the improvement of the coil.
  2. Crystal orientation
  3. Optimization
  • laser pulse duty
  • NMR pulse calibration by water
  • T1 and T2 measurement
  • Q-factor of the coil
  • more detailed measurement of each parameter
  • thermal polarization

Discussion

  1. Since the coil is now fixed, we should be able to get more reliable data, and we have to find out the statistics. We can compare with the previous data at Puw = 1.0 on June 3rd, where we collected 15 data points for the same setting and the s.d. was 30 units.
  2. To have an absolute polarization measurement:
  • we have to lower the variation of the signal
  • we have to lower the noise level
  • we have to keep the same settings for the NMR system: impedance, level, pulse time, gain, and forward and backward power
  3. The change of FID area due to a change of the external H-field:
  • the data showed the angular frequency is pi per 30 us, about 0.1 rad per 1 us
  • the angular frequency for the proton is 267.5 rad per us per Tesla
  • if the field changes by 1%, the change of the angular frequency will be 2.67 rad per us
  • in other words, the fluctuation of the field should be less than 0.05%
  4. In order to perform a Fourier transform, or a finite-time Fourier transform, we can use wavelet analysis.
  5. Before polarization transfer to 13C, we have to optimize the system.
  6. The sample NMR signal is not following a pure sine or cosine function
  • it is due to the crystal field
Statistics on Data Analysis


Suppose the data follow a normal distribution with mean μ and standard deviation σ; in an experiment, we call the s.d. the error or fluctuation, and μ is the hypothetical true mean.

Now we take several data points, and from them we want to know:

  1. How many data points should be taken so that the difference between the data mean and the true mean is less than a certain value, say ½ s.d., with 95.5% probability?
  2. What is the confidence interval around the data mean from n data points at 2 s.d., or 95.5%?

First, let us establish the statistics of the data. Say we have n data points: X1, X2, … , Xn, all drawn from the same normal distribution (for example). We write:

X_i ~ N(\mu, \sigma^2)

and that the data mean

\bar{X} = \frac{1}{n} \sum_i X_i

should follow the usual rule for the sum of independent normal random variables:

a X+b Y ~ N( a \mu_X + b \mu_Y , a^2 \sigma_X^2 + b^2 \sigma_Y^2 )

thus,

\bar{X} ~ N ( \mu, \sigma^2 / n )
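A quick Monte-Carlo check of this in C++ (my own sketch, with arbitrary example values of μ, σ and n): the sample mean should scatter around μ with standard deviation σ/√n.

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double mu = 2.0, sigma = 3.0;    // example true mean and s.d.
    const int n = 16, trials = 100000;     // samples per mean, number of repetitions
    std::mt19937 gen(1);
    std::normal_distribution<double> gauss(mu, sigma);

    double sum = 0.0, sum2 = 0.0;
    for (int t = 0; t < trials; ++t) {
        double m = 0.0;
        for (int i = 0; i < n; ++i) m += gauss(gen);
        m /= n;                            // data mean of n samples
        sum += m;
        sum2 += m * m;
    }
    double mean = sum / trials;
    double sd = std::sqrt(sum2 / trials - mean * mean);
    std::printf("mean of X-bar = %.3f (expect %.3f)\n", mean, mu);
    std::printf("s.d. of X-bar = %.3f (expect %.3f)\n", sd, sigma / std::sqrt((double)n));
    return 0;
}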

Therefore, let our calculated data mean be \hat{\theta} ; the probability that the data mean is within a certain distance \alpha of the true mean is:

P( |\hat{\theta} - \mu| < \alpha)

using standardized normal distribution:

z = ( \hat{theta} - \mu ) / (\sigma /\sqrt{n} )<\beta=\alpha\sqrt{n}/\sigma

To make this probability larger than 95.5%, or 2 s.d., the number of data points we should take requires:

\beta > 2 => n > 4 \sigma^2 / \alpha^2

or in general, with the number of s.d. equal to γ,

\beta > \gamma => n > ( \gamma \sigma / \alpha ) ^2

This means that for a higher probability or a smaller difference we need more data. In fact, (\hat{\theta} -\alpha , \hat{\theta} + \alpha ) is what we call the confidence interval at 95.5%, which is directly related to the number of s.d. ( γ ).

Now there are a few parameters: the data mean \hat{\theta} , the sample s.d. \sigma , the number of data points n, the confidence level (the number of s.d. γ), and the absolute difference from the true mean α. We can rephrase the questions:

  1.  given \hat{\theta} , \sigma, \alpha , \gamma , what is n?
  2. given \hat{\theta} , \sigma, \gamma , n , what is α ?

The 2nd question simply reverses the process; thus,

\alpha > \gamma \sigma /\sqrt{n}

Thus, the larger the probability, the larger the confidence interval, which is quite obvious: a larger interval has more chance to cover the true mean.
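The two questions can be answered with one-line formulas; here is a small C++ sketch (assuming σ is known and the normal approximation holds, with example values of my own):

#include <cmath>
#include <cstdio>

int main() {
    double sigma = 1.0;   // sample s.d. (example value)
    double gamma = 2.0;   // number of s.d. (about 95.5%)

    // question 1: n needed for a given half-width alpha
    double alpha = 0.3 * sigma;   // e.g. 0.3 s.d.
    int n_needed = (int)std::ceil(std::pow(gamma * sigma / alpha, 2.0));
    std::printf("n > (gamma sigma / alpha)^2  ->  n >= %d\n", n_needed);     // 45 here

    // question 2: half-width alpha obtained from a given n
    int n = 10;
    double alpha_from_n = gamma * sigma / std::sqrt((double)n);
    std::printf("alpha = gamma sigma / sqrt(n) = %.3f\n", alpha_from_n);     // about 0.63
    return 0;
}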

Hypothesis testing

In the above discussion, we assumed that the standard deviation σ is known.

**************************** example ****************

For coin tossing: we want to know whether the coin is unbiased, i.e., whether the expected value differs from 0.5. If we want the difference to be less than 0.1 at 2 s.d. (95.5%) confidence, we have to toss the coin at least (2 x 0.5 / 0.1)^2 = 100 times for the expected value to lie within (0.4, 0.6). If instead we toss the coin only 25 times and the data mean is 0.6, then σ/√n = 0.1 and we are 95.5% confident that the true mean lies within (0.4, 0.8). This is the same kind of testing as in a casino.

If we have tossed a coin only 10 times, what is the interval around the data mean that contains the true mean with 95.5% (2 s.d.) confidence? The answer is ( x - 0.32 , x + 0.32 ), since 2 x 0.5 / √10 ≈ 0.32.

Now, for example, I tossed a coin and got 11 heads and 14 tails (n = 25, σ/√n = 0.1). At 95.5% confidence, the true chance of a head lies in ( 0.24, 0.64 ) and the true chance of a tail in ( 0.36, 0.76 ), so the two intervals overlap.

However, in the above discussion we assumed the standard deviation of the coin is 0.5, which comes from the assumption that the coin is unbiased. So it is circular logic: we assume the coin is unbiased in order to test whether it is unbiased.

To solve this, hypothesis testing is required. It runs like this:

We assume the coin is unbiased, μ = 0.5. Under this hypothesis we know the theoretical distribution of the data mean: for n = 25, σ/√n = 0.1, so the acceptance region at the 5% significance level (z = 1.96 for the standardized normal distribution) is about (0.3, 0.7). We then do the experiment, and say after the 25 tosses we have 11 heads, so the data mean is 0.44 and the standardized z is:

z = ( 0.44 - 0.5) / ( 0.5 / 5) = -0.6

Since |z| = 0.6 < 1.96, the result is accepted and we keep the hypothesis that the coin is unbiased. Equivalently, the data mean lies inside the acceptance region, so we accept the hypothesis. However, there is still a chance that the true mean is not 0.5 and the data mean nevertheless falls inside the acceptance region; accepting the hypothesis in that case is called a Type II error, or a false acceptance. In more concrete terms:

P( 0.3 < \hat{\theta} < 0.7 | \mu \neq 0.5 )

There is also a chance that we reject the hypothesis even though it is true, because the data mean happens to fall outside the acceptance region. This is called a Type I error, or a false rejection, and its probability is the chosen significance level:

P( \hat{\theta} < 0.3 or \hat{\theta} > 0.7 | \mu = 0.5 ) = 5%
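A minimal C++ sketch of this z-test (normal approximation, with σ = 0.5 under the unbiased hypothesis, using the 25-toss, 11-head numbers from the example above):

#include <cmath>
#include <cstdio>

int main() {
    int n = 25, heads = 11;
    double mu0 = 0.5, sigma = 0.5;      // hypothesis: unbiased coin
    double mean = (double)heads / n;    // data mean = 0.44
    double z = (mean - mu0) / (sigma / std::sqrt((double)n));

    double z_crit = 1.96;               // two-sided 5% significance level
    std::printf("z = %.2f, so we %s the hypothesis\n", z,
                std::fabs(z) < z_crit ? "accept" : "reject");
    return 0;
}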

****Remark ****

Hypothesis testing does not prove that the hypothesis is true; it only tells us whether the data are consistent with it at the chosen significance level.
