
Tuesday 22 September 2015

Mechanics of a system of Particles

It is just an extension of the familiar Newton's laws of motion to real objects which have size, shape, etc. [whereas Newton's laws are defined only for point particles].

When you go from one particle to a system of particles, the concept of net external force gets modified, because you now need to define first what the system is and what is meant by external to that system. 


Why am I saying this? Because we know that the most common forces we deal with in the motion of macroscopic objects are the electrostatic and gravitational forces, both of which are central in nature. Now you are in a situation to classify all these central forces into external and internal. 


It is not predefined what is external and what is internal. For example, when playing with marbles, if you take your hands and a marble together as the whole system, then there is no external force; but if you consider the marble alone, then the force exerted by your hands on striking the marble is external to the marble. 


Thus Newton's second law for a system of particles is written as,

$$ \frac{d^2}{dt^2}\sum_i m_i \vec{r_i} = \sum_i \vec{F_{i(e)}} + \sum_{i,j \,(i\neq j)\,} \vec{F_{ij}} \,\,\,\ldots...eq.(1)$$
Using the action-reaction relation, $ \vec{F_{ij}} = -\vec{F_{ji}} $, the second term, i.e. the internal forces, cancels out. 

It leads to, $$ \sum_i\vec{F_{i(e)}} = M\,\frac{d^2}{dt^2}\left(\frac{\sum_i m_i \vec{r_i}}{M}\right) $$ where $ M = \sum_i m_i $ is the total mass.


The quantity on the right-hand side defines the center of mass of the system of particles, $ \vec{R_{cm}} = \frac{\sum_i m_i\vec{r_i}}{M} \,\,\,\ldots...eq.(2)$


 eq.(1) gives, $$ M\,\frac{d^2}{dt^2} \vec{R_{cm}} = \sum_i \vec{F_{i(e)}} \,\,\,\ldots...eq.(3) $$ 


From this we understand that Newton's laws, in their point-particle form, apply exactly to the center of mass: the point where the center of mass is located is the one point that moves exactly as Newton's laws dictate. All the other particles can have motion other than linear translational motion, but the center of mass cannot. 


From our definition, the total linear momentum is easily derived as,


$$ \vec{P} = \sum_i m_i \dot{\vec{r_i}} = M \frac{d\vec{R_{cm}}}{dt} \,\,\,\ldots...eq.(4) $$ 

where $ \dot{\vec{r_i}} = \frac{d\vec{r_i}}{dt} $

In a similar way, we can describe rotational quantities like angular momentum and torque when the object is in rotational motion. 
We can start by finding the angular momentum of a system of particles, $$ \sum_i L_i = \sum_i \vec{r_i}\times \vec{p_i} \,\,\,\ldots...eq.(5)$$
  

Again making use of the center of mass, the position vector of each particle from the origin is written in terms of the center of mass coordinate and the particle's position relative to the center of mass, $$ \vec{r_i} = \vec{R_{cm}} + \vec{r'_i} $$
where $\vec{r'_i} $ is the position coordinate measured from the center of mass. 
Differentiating this gives $$ \vec{v_i} = \vec{v_{cm}} + \vec{v'_i} \,\,\,\ldots...eq.(6) $$

Angular momentum from eq.(5) becomes,

$$ \sum_i \vec{L_i} = \sum_i \vec{r_i}\times\vec{p_i} = \sum_i m_i (\vec{R_{cm}}+\vec{r'_i}) \times (\vec{v_{cm}}+\vec{v'_i}) $$
Expanding with vector multiplication,
$$ \sum_i \vec{r_i}\times\vec{p_i} = \sum_i m_i \vec{R_{cm}}\times\vec{v_{cm}} + \sum_i m_i \vec{r'_i}\times\vec{v'_i} + \sum_i m_i \vec{r'_i}\times\vec{v_{cm}} + \sum_i m_i \vec{R_{cm}}\times\vec{v'_i} $$

From the definition of the center of mass, the position of the center of mass relative to the center of mass itself is zero: $ \sum_i m_i \vec{r'_i} = 0 $, which kills the third term. Likewise, $ \sum_i m_i \vec{R_{cm}} \times \vec{v'_i} = \vec{R_{cm}} \times \frac{d}{dt} \sum_i m_i \vec{r'_i} = 0 $, which kills the fourth.

We are left with, 
$$ \sum_i \vec{L_i} = \sum_i m_i \vec{R_{cm}} \times\vec{v_{cm}} + \sum_i m_i \vec{r'_i}\times\vec{v'_i} $$

This says that the total angular momentum of a system of particles is the angular momentum of the center of mass with respect to the origin, plus the angular momentum of all the particles with respect to the center of mass. 

In the same way, the kinetic energy of system of particles (with eq.(6)), 

$$ \frac{1}{2}\sum_i m_i \vec{v_i}^2 = \frac{1}{2}\sum_i m_i (\vec{v_{cm}}+ \vec{v'_i})^2 $$
$$ \frac{1}{2}\sum_i m_i \vec{v_i}^2 = \frac{1}{2}\left[\sum_i m_i \vec{v_{cm}}^2 + 2 \sum_i m_i \vec{v_{cm}}\cdot\vec{v'_i} + \sum_i m_i \vec{v'_i}^2 \right]$$
By the same center-of-mass argument, the cross term vanishes: $\,\, 2 \sum_i m_i \vec{v_{cm}}\cdot\vec{v'_i} = 2 \vec{v_{cm}}\cdot \frac{d}{dt} \sum_i m_i \vec{r'_i} = 0 $ 

Hence, Kinetic energy of the system of particles is equal to the sum of kinetic energy of the center of mass and the kinetic energy about the center of mass. 

$$ \frac{1}{2} \sum_im_i\vec{v_i}^2 = \frac{1}{2} \sum_i m_i \vec{v_{cm}}^2 + \frac{1}{2} \sum_i m_i \vec{v'_i}^2 $$

We can use the same procedure to derive the expression for other quantities as well. 
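These decompositions are easy to check numerically. Below is a quick sketch with NumPy (the random system is my own made-up example, not part of the derivation) verifying that the total angular momentum and kinetic energy split into a center-of-mass part plus a relative part, as derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
m = rng.uniform(1.0, 3.0, N)           # masses
r = rng.normal(size=(N, 3))            # positions
v = rng.normal(size=(N, 3))            # velocities

M = m.sum()
R_cm = (m[:, None] * r).sum(axis=0) / M
V_cm = (m[:, None] * v).sum(axis=0) / M

# coordinates relative to the center of mass
r_p = r - R_cm
v_p = v - V_cm

# total angular momentum: direct sum vs. the decomposition derived above
L_total = (m[:, None] * np.cross(r, v)).sum(axis=0)
L_split = M * np.cross(R_cm, V_cm) + (m[:, None] * np.cross(r_p, v_p)).sum(axis=0)
print(np.allclose(L_total, L_split))   # True

# total kinetic energy: direct sum vs. the decomposition
T_total = 0.5 * (m * (v**2).sum(axis=1)).sum()
T_split = 0.5 * M * V_cm @ V_cm + 0.5 * (m * (v_p**2).sum(axis=1)).sum()
print(np.allclose(T_total, T_split))   # True
```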


Saturday 19 September 2015

Free Upgrade to Windows 10 pro failed - Solution

If you have a genuine copy of Windows, you can choose to upgrade your old version to the new Windows 10. 
Of course, if you don't want to, there is no need to worry. But suppose you want to access Windows apps, which is not possible with Windows 7: you can upgrade to Windows 10 and get all your favourite apps. Just follow these steps.

If you are a genuine Windows user connected to the internet, you will automatically get a Windows 10 alert, with the Windows 10 symbol, on the right side of the taskbar. 

If you click it, you will be asked to reserve your upgrade which is completely free.  

Once the reservation is confirmed, Windows will ask for your permission to download Windows 10 when it becomes available. 

Once you start the download, you needn't wait for it to complete in one sitting. If you are only half done, there is no problem in shutting down your computer. The download will automatically resume each time you turn your computer on.




You can check your download progress by going to 
Control Panel - System and Security - Windows Update (in Windows 7) 



Once the download is finished, you will get the notification. 



When you click the upgrade, it automatically starts to upgrade. It may take a while, so you need to be patient.



But here is where I got stuck, and failed to update. 

When the download finished, I couldn't make the Windows 10 update work. It showed an error code like the one below,






I tried the whole process again, re-downloading the 2 GB update. It failed twice and wasted a lot of my time. 

Solution:

I searched the internet and found that there is another way of downloading the same update file. With it, you can install at any time, using a media creation tool. 

You can download the media creation tool from the Windows website.



Now, do the same download using this media creation tool. But you need to be careful, because it is a one-time download: you cannot shut down your PC in the middle of the download as you could in the previous method. [If the power supply is interrupted in the middle, just use your computer's Hibernate mode.]




Now do the same upgrade procedure once the download is finished. This time, you will be able to run the installation.




You need to wait for the installation procedure to finish. [Now you don't need to worry about the power supply at all: even if the installation is interrupted, it will continue without any error the next time you turn on your PC.]

Then follow the on-screen steps of the installation. 








When you have finished all these steps, you will have your new Windows 10 upgrade. 

Note: If you are not comfortable with Windows 10, you can restore your old Windows at any time. So you can give Windows 10 a try. 

Sunday 13 September 2015

Rigid Body Motion - Euler Angles

A rigid body is defined through the idealized requirement that the distance between any two of its mass points remains constant throughout the motion. 
     Since this is satisfied by most of the objects we use in real life (not absolutely, but to a very good approximation), the kinematics of rigid bodies plays a significant role in many areas. 

The special property of a rigid body is that we need only 6 independent coordinates to completely define its state in 3-dimensional space. No matter how many particles it contains, this always holds, entirely due to the constraints. 

Other than the distance constraints, it is also possible to add additional constraints to a rigid body motion, in which case the number of independent coordinates is reduced further. 

It is customary to use the first set of 3 independent coordinates as the "Space fixed coordinates" and the second set of 3 coordinates as the "Body fixed coordinates" but with the same origin. 

     So that, you can explain the state of the body (general position) with the Space fixed coordinates and its orientation relative to this Space fixed coordinates using the Body fixed coordinates. 

Specifying one set of coordinates relative to another needs the basic rules of "Coordinate Transformation".   


Let us say the $(x_1,x_2,x_3)$ and $(x'_1,x'_2,x'_3)$ are the components of same vector in two sets of orthogonal coordinate system with same origin. If $ \{ \hat{e_1}, \hat{e_2}, \hat{e_3} \}$ and $\{ \hat{e'_1} , \hat{e'_2} , \hat{e'_3}\}$ are the corresponding unit vectors,


Direction cosines are defined by,


$$ \cos\theta_{ij} = \hat{e'_i}\cdot\hat{e_j} \,\,\,\ldots...eq.(1)$$
where $\theta_{ij}$ is the angle between $\hat{e'_i}$ and $\hat{e_j}$.


Using the direction cosines, the new primed unit vectors are written in terms of the old unprimed unit vectors as, 

$$ \hat{e'_i} = \sum_j (\hat{e'_i}\cdot\hat{e_j}) \hat{e_j} = \sum_j \cos\theta_{ij} \hat{e_j} \,\,\ldots...eq.(2)$$

Let me write the general vector in cartesian coordinates as,

$$ \vec{r} = x_1\hat{e_1} + x_2 \hat{e_2} + x_3\hat{e_3} = x'_1\hat{e'_1} + x'_2\hat{e'_2} + x'_3\hat{e'_3} \,\,\ldots...eq.(3) $$ 

Then each of the new coordinates in general can be written using eq.(2),


$$ x'_i = \vec{r}\cdot\hat{e'_i} = \sum_j x_j\hat{e_j} \cdot \hat{e'_i} = \sum_j x_j \cos\theta_{ij} \,\,\ldots...eq.(4)$$


Though written here for the position vector, the same transformation rule applies to any general vector. 


Note: In three dimensions, all indices run from 1 to 3.


Accordingly, in 3 dimensions we need 9 direction cosines to make the transformation from one system to the other, namely
$$ \cos\theta_{ij} = \hat{e'_i}\cdot\hat{e_j} $$ where i and j each run from 1 to 3, which gives 9 components. 

But only three of them are needed to specify the orientation. What about the others?

It so happens that quite a few extra relations exist among them, due to the orthogonality property of the coordinates. 

The orthogonality relations are given by, 

$$ \hat{e_i}\cdot\hat{e_j} = \delta_{ij} \\~\\ \hat{e'_i}\cdot\hat{e'_j} = \delta_{ij} \,\,\,\ldots...eq.(5) $$
Expanding the unit vector in one system in terms of the unit vectors of other system using eq.(2) and making use of direction cosines, it can be rewritten as, 

$$ \sum_{j=1}^3 \cos\theta_{i'j} \cos\theta_{ij} = \delta_{ii'} \,\,\,\ldots...eq.(6)$$  

where $\delta $ is the Kronecker-delta symbol.    

To make life simpler, we use the Einstein summation convention, and we denote the direction cosines by $$ \cos\theta_{ij} = a_{ij} $$


Now, eq.(4) becomes,

$$ x'_i = a_{ij} x_j \,\,\,\,\,\,\ldots...eq.(7) $$
where summation is assumed.
The magnitude of the vector $\vec{r} $ is the same in both systems. Using that property with the Pythagorean relation for orthogonal coordinates, we can write,

$$ x'_{i} x'_i = (a_{ij} x_j)(a_{ik} x_k) = a_{ij} a_{ik}\, x_j x_k = x_j x_j $$

therefore,  $$  a_{ij} a_{ik} = \delta_{jk} \,\,\,\,\,\ldots...eq.(8)$$
where j and k each run from 1 to 3.

This is the exact same condition obtained from orthogonality relation but in a new representation. 


Since both i and j run from 1 to 3 in a discrete sense, we can write the set of direction cosines in a general matrix form, with i as the row index and j as the column index. Let me call the matrix A,

$$ A = \left[\begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right] $$

The general transformation from one system to the other can be thought of as a matrix operating on the coordinates of one system, which yields the coordinates of the other system.


Using matrices, 

$$ \left[\begin{matrix} x'_1 \\ x'_2 \\ x'_3 \end{matrix}\right] = \left[\begin{matrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right] \left[\begin{matrix} x_1 \\ x_2 \\ x_3 \end{matrix} \right] \,\,\,\ldots...eq.(9)$$ 
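As a small illustration of eq.(8) and eq.(9), here is a NumPy sketch (my own addition, using a rotation about the $x_3$ axis as an example direction cosine matrix). The orthogonality condition $a_{ij}a_{ik} = \delta_{jk}$ is just $A^T A = I$, and the transformation preserves the magnitude of any vector.

```python
import numpy as np

phi = 0.7  # an arbitrary rotation angle about the x3 axis
A = np.array([[ np.cos(phi), np.sin(phi), 0],
              [-np.sin(phi), np.cos(phi), 0],
              [ 0,           0,           1]])

# orthogonality condition eq.(8): a_ij a_ik = delta_jk, i.e. A^T A = I
print(np.allclose(A.T @ A, np.eye(3)))   # True

# the transformation eq.(9) preserves the magnitude of any vector
x = np.array([1.0, 2.0, 3.0])
x_primed = A @ x
print(np.isclose(x @ x, x_primed @ x_primed))  # True
```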

Euler Angles: 


Lagrangian formalism is created based on the concept of degrees of freedom and constraints. Lagrangian itself is defined in terms of Independent coordinates. 


Finding the independent coordinates for a system in motion is essential for solving the Lagrangian. 


From orthogonality relations, we can be sure that, we don't need all the 9 direction cosines as independent coordinates. All we need is some three independent functions of these direction cosines. 


It will therefore be necessary to define a set of three new independent functions to describe the orientation of a rigid body in space using the Lagrangian. 

[But there is also another condition on our matrices, which says the determinant of the matrix should be +1. Otherwise it would be an inversion of the coordinates.]  

There are a number of different sets of independent functions, obtained in a number of different ways, that could be used in the Lagrangian. But the Euler angles are the customary choice, where we make our transformation from one system to the other by three successive rotations performed in a specific sequence. 


Each step in the sequence can be thought of as a matrix operating on the coordinate system as it stands at that particular instant. 


Let me start the initial transformation from the $\{ \hat{e_1},\hat{e_2},\hat{e_3}\}$ coordinate system, rotating counterclockwise by an angle $\phi$ about the $\hat{e_3}$ axis. And let me denote the resultant coordinate system by $\{\hat{e_1}^1, \hat{e_2}^1,\hat{e_3}^1\}$.  


In the second stage, we transform $\{\hat{e_1}^1, \hat{e_2}^1,\hat{e_3}^1\}$ by rotating it counterclockwise about the $\hat{e_1}^1 $ axis by an angle $\theta$. 

Consequently we arrive at a newer system denoted by $ \{\hat{e_1}^2, \hat{e_2}^2, \hat{e_3}^2\}$. 

Now, we make the final transformation by rotating $\{\hat{e_1}^2,\hat{e_2}^2,\hat{e_3}^2\}$ by an angle $\psi$ with respect to $\hat{e_3}^2$ axis.    

Thus, we finally arrive at our desired transformation, that is $$\{\hat{e_1}^3,\hat{e_2}^3,\hat{e_3}^3\} = \{\hat{e'_1},\hat{e'_2},\hat{e'_3}\} $$

In Matrix Notation, each transformation can be written as follows,

First transformation, $$ E_1 = A_1 E \,\,\,\,\ldots...eq.(10)$$

where "E" is the set of coordinate elements and "A" is the transformation Matrix.

Second transformation, $$ E_2 = A_2 E_1 \,\,\,\,\ldots...eq.(11)$$

Final transformation, $$ E_3 = A_3 E_2 = E' \,\,\,\ldots...eq.(12)$$

Hence, the combined transformation is given by the product of the respective matrices,
$$ E' = A_3 A_2 A_1 E = R E \,\,\,\,\ldots...eq.(13) $$
where $ A_3 A_2 A_1 = R $

Writing each transformation in terms of its matrix element values i.e. direction cosine values, 
Since $A_1$ represents the counterclockwise rotation of "E" by an angle $\phi$ about the $\hat{e_3}$ axis, it can be written in matrix form as,

$$ A_1 = \left[ \begin{matrix} \cos\phi &\sin\phi &0\\ -\sin\phi & \cos\phi &0 \\ 0 & 0 & 1 \end{matrix}\right] \,\,\,\ldots...eq.(14) $$

Similarly, $A_2$ is the rotation of $E_1$ by an angle $\theta$ with respect to $\hat{e_1}^1$ axis, 

$$ A_2 = \left[ \begin{matrix} 1&0&0\\ 0& \cos\theta &\sin\theta\\ 0 & -\sin\theta & \cos\theta\end{matrix} \right] \,\,\,\ldots...eq.(15) $$

Finally, $A_3 $ is the rotation of $ E_2$ by angle $\psi$ with respect to $\hat{e_3}^2 $ axis,

$$ A_3 = \left[ \begin{matrix} \cos\psi &\sin\psi & 0\\ -\sin\psi & \cos\psi & 0 \\ 0 &0&1 \end{matrix}\right] \,\,\,\ldots...eq.(16)$$

Combining them, the three transformations can be written using the single matrix R as follows, 
$$ R = A_3A_2A_1 $$ 
Substituting for $ A_1, A_2, A_3 $ gives,
$$ R= \left[ \begin{matrix} \cos\psi \cos\phi - \cos\theta \sin\phi \sin\psi & \cos\psi \sin\phi+ \cos\theta \cos\phi \sin\psi & \sin\psi\sin\theta\\ -\sin\psi\cos\phi - \cos\theta\sin\phi\cos\psi & -\sin\psi\sin\phi+\cos\theta\cos\phi\cos\psi & \cos\psi\sin\theta\\ \sin\theta\sin\phi & -\sin\theta\cos\phi & \cos\theta \end{matrix}\right] \,\, \ldots...eq.(17)$$

The inverse transformation from body fixed to space fixed coordinates is just given by the inverse of R i.e. $R^{-1}$ which is the transpose of R.

You may ask why I chose this specific order of rotations. It needn't be this one; you can choose many other conventions. The only condition is that two consecutive rotations must not be about the same axis. 
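If it helps, the three rotations of eq.(14)-(16) and their product eq.(13) can be checked numerically. The sketch below (using NumPy, purely as an illustration) verifies that R is a proper orthogonal matrix and matches one entry of eq.(17).

```python
import numpy as np

def A1(phi):    # rotation by phi about e3, eq.(14)
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def A2(theta):  # rotation by theta about e1^1, eq.(15)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def A3(psi):    # rotation by psi about e3^2, eq.(16)
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

phi, theta, psi = 0.3, 1.1, -0.5
R = A3(psi) @ A2(theta) @ A1(phi)   # combined transformation, eq.(13)

print(np.isclose(np.linalg.det(R), 1.0))   # proper rotation: det = +1
print(np.allclose(R.T @ R, np.eye(3)))     # orthogonal: R^{-1} = R^T

# spot-check the bottom-right element of eq.(17): it should be cos(theta)
print(np.isclose(R[2, 2], np.cos(theta)))  # True
```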

Thursday 10 September 2015

Quantum Mechanics - Postulates (Part -2) - Born Interpretation

Before stating the postulate, there is a special property we should know about eigenfunctions corresponding to distinct eigenvalues. 
It is easy to derive. Let us consider two eigenstates with distinct eigenvalues, i.e. eigenstates $ \vert\phi_1\rangle $ and $\vert\phi_2\rangle $ with corresponding eigenvalues $\lambda_1 $ and $\lambda_2$.

From eq.(7) in Part 1, we use the definition of a Hermitian operator, $$ \int \phi_1^* (\hat{Q} \phi_2) \,dx = \int (\hat{Q}\phi_1)^*\phi_2 \,dx $$ 

Note: The equation applies for any two functions, because of the definition of Hermiticity (of an operator).  
  
Using eq.(5) - eigen value equation

$$ \int \phi_1^* (\lambda_2\phi_2) \,dx = \int (\lambda_1^* \phi_1^*) \phi_2 \,dx $$

Since $\hat{Q}$ is Hermitian, its eigenvalues $\lambda_1\, , \, \lambda_2 $ are real numbers (as shown in Part 1). So, $$ \lambda_1^* = \lambda_1 $$ 
$$ \lambda_2 \int \phi_1^* \phi_2 \,dx = \lambda_1 \int \phi_1^* \phi_2 \,dx $$

which gives that,

$$ (\lambda_2 - \lambda_1) \int \phi_1^* \phi_2 \,dx = 0 $$
We assumed that eigenvalues are distinct, so $$ \lambda_2 - \lambda_1 \neq 0 $$

The only possibility is that, $$ \int \phi_1^* \phi_2 \,dx = 0 \,\,\,\,\, \ldots... eq.(8)$$ 

This is possible only when the eigenfunctions are orthogonal to each other. Thus we get that the eigenfunctions corresponding to distinct eigenvalues are orthogonal to each other, and it is always possible to make the eigenfunctions a basis for our linear vector space (even when different eigenfunctions share the same eigenvalue, which is called degeneracy; those cases have to be dealt with separately). 


Now we will see our fourth postulate, that is: "The fraction of measurements that will result in a particular eigenvalue is proportional to the square of the magnitude of the coefficient of that particular eigenfunction in the expansion of the wave function."


When it is said "the expansion", remember how the wave function (ket vector) was written in terms of the basis vectors in eq.(1). 
But now we have learnt that the eigenfunctions always form one of the possible sets of basis vectors, and so the wave function can always be expanded in terms of these eigenfunctions as the basis (similar to eq.(1)),

$$ \vert\psi(t)\rangle = A_1(t) \vert\phi_1\rangle + A_2(t) \vert\phi_2\rangle + ...$$

Where $\vert\phi_1\rangle , \vert\phi_2\rangle $ are eigen functions and let us say their eigen values are $\lambda_1, \lambda_2 $. 

Now the postulate says that the probability of obtaining a particular eigenvalue, $\lambda_1$ or $\lambda_2$, is proportional to the square of the magnitude of the coefficient of the corresponding eigenfunction in the expansion of the wave function, i.e. $A_1 $ or $ A_2 $: 
$ |A_1|^2 $ gives the probability of the eigenvalue $\lambda_1 $ with the eigenfunction $\phi_1$, and similarly for the others.  
Using the inner product,
$$ |A_1|^2 =  |\langle\phi_1\vert\psi(t)\rangle|^2 $$
In general,
$$ |A_b|^2 = |\langle\phi_b\vert\psi(t)\rangle|^2 \,\,\,\, \ldots...eq.(9)$$

This is the more general Statement of Max Born's interpretation.
There are many things you can predict with this postulate and it plays the most significant role hereafter. 
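As a toy illustration of eq.(9) (my own sketch, with an arbitrary made-up state), take a three-dimensional space with the standard basis playing the role of the eigenfunctions; the Born probabilities come out of the inner products and sum to one.

```python
import numpy as np

# a toy 3-dimensional Hilbert space; phi[b] plays the role of |phi_b>
phi = np.eye(3)

# an arbitrary state vector with complex coefficients, then normalized
psi = np.array([1.0 + 1.0j, 2.0, 1.0j])
psi = psi / np.linalg.norm(psi)

# Born rule, eq.(9): P(lambda_b) = |<phi_b|psi>|^2
probs = np.abs(phi.conj() @ psi) ** 2
print(probs)                          # [2/7, 4/7, 1/7]
print(np.isclose(probs.sum(), 1.0))   # probabilities sum to one: True
```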

When I say eigenfunctions, there are really infinitely many possible sets of eigenfunctions in Nature to choose from. Depending on the problem, we will mostly deal with energy, position and momentum eigenfunctions. 

In the beginning itself, we mentioned that dealing with abstract ket vectors directly is not possible, and so we should take the projection of these ket vectors onto a desired function space. 

In particular, when I take the projection onto position space, we get the position space wave function expressed as a linear combination of the eigenfunctions in position space; from eq.(2),

$$ \langle{x}\vert\psi(t)\rangle = \sum_b A_b(t) \langle{x}\vert\phi_b\rangle \,\,\,\, \ldots...eq.(2)$$
$$ \psi(x,t) =  \sum_b A_b(t) \phi_b(x) $$ 

where $\phi_b(x) $ is some general eigen function in position space. 

But what if the eigenfunctions themselves are position eigenfunctions?


Then the inner product $\langle{x}\vert{x'}\rangle$ between two position eigenstates becomes a Dirac delta function: it vanishes for every $x$ other than $x'$, and its integral picks out only the value at $x = x'$. 
The reason is that when you are in one eigenstate, you can only measure the corresponding eigenvalue, and not some other eigenvalue of some different eigenfunction.  

Therefore, labelling the position eigenfunctions by their eigenvalues $x_b$, eq.(2) becomes,
$$ \langle{x}\vert\psi(t)\rangle = \sum_b A_b(t) \langle{x}\vert{x_b}\rangle = \sum_b A_b(t) \delta(x-x_b)  $$
Only the term with $x_b = x$ survives, so we obtain the value of the probability amplitude,
$$ \psi(x,t) = A_b(t) $$ and $$ |\psi(x,t)|^2 = |A_b|^2 \,\,\,\, \ldots...eq.(10)$$ 
where $|A_b|^2 $ is the probability of getting the eigenvalue corresponding to the eigenfunction $\vert{x}\rangle $.

Equating with the left-hand side, we get the general property of a "position space wave function": the square of the magnitude of the position space wave function gives the probability of finding the particle between x and x+dx. 
This is popularly known as the Max Born Interpretation in the basic Quantum Mechanics.

Just because we found a meaning for $ |\psi(x,t)|^2$ doesn't mean we can only define it in terms of position eigenfunctions. 
We could still expand this position space wave function in terms of energy eigenfunctions or momentum eigenfunctions or anything else, but in position space (that is, with position as the parameter).

Instead of position space, we can also take the inner product with momentum eigenstates, get the momentum space wave function, and define similar things. In that case,
$$ \langle{p}\vert\psi(t)\rangle = \psi(p,t) $$
and $ |\psi(p,t)|^2 $ gives the probability of finding the particle with momentum between p and p+dp. 

All you need to understand is the Vector space and its basis. 
As a rough analogy, you can compare it with describing a vector in various coordinate systems, like Cartesian or spherical or cylindrical. But it is just an aid to understanding.

Wednesday 9 September 2015

Effect of magnetic field on Atomic orbits

The effect of a magnetic field at the atomic level can be quantified classically with some assumptions: the atomic orbit is circular, the electron revolves around the nucleus at radius R, and the current produced is assumed to be steady. 
The current is given by, $$ I = \frac{-e}{T} = \frac{-ev}{2\pi{R}} $$ where $$ T = \frac{2\pi}{\omega} = \frac{2\pi{R}}{v} \\~\\ v = R\omega $$ 
The orbital dipole moment of this configuration is given by, $$ \vec{m} = I \vec{a} = \frac{-ev}{2\pi{R}}\pi{R^2} \hat{z} = \frac{-evR}{2}\hat{z} $$ where $\hat{z} $ is perpendicular to the plane of the loop, in the sense given by the usual right-hand rule for the current direction. When placed in a magnetic field, this dipole moment experiences a torque which tries to align it along the field direction. 
Without magnetic field, there is only electrostatic interaction, therefore, the force equation gives, $$ \frac{e^2}{4\pi\epsilon_0R^2} = \frac{m_ev^2}{R}$$ 
If suppose we assume the magnetic field is in the direction of $\hat{z} $ , then the centripetal force can be written as, $$ \frac{e^2}{4\pi\epsilon_0R^2} + ev'B = \frac{m_e v'^2}{R} $$ 
where v' is the new velocity. If we assume $v'\simeq v $ then, $$ ev'B = \frac{m_e(v'^2 -v^2)}{R} = \frac{m_e}{R} (v'+v)(v'-v) $$
then, $$ v'-v = \frac{eRB}{2m_e} = \delta{v} $$ 
e, R, B, $m_e$ are all positive quantities, so the electron speeds up when the magnetic field is turned on. This change in orbital speed produces a change in the magnetic moment, $$\delta\vec{m} = \frac{-e\,\delta{v}\, R}{2} \hat{z} = \frac{-e^2R^2}{4m_e} \vec{B} $$ which is directed opposite to the applied magnetic field. 
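To get a feel for the size of the effect, here is a rough numerical estimate (my own addition; taking the Bohr radius as the orbit radius and a 1 T field are assumed values, not part of the derivation above):

```python
# Order-of-magnitude estimate of the diamagnetic speed change,
# delta_v = e R B / (2 m_e), in SI units
e   = 1.602e-19    # electron charge, C
m_e = 9.109e-31    # electron mass, kg
R   = 5.29e-11     # Bohr radius, m (assumed orbit radius)
B   = 1.0          # applied field, T

delta_v = e * R * B / (2 * m_e)
delta_m = -e * delta_v * R / 2       # change in orbital magnetic moment
print(f"delta_v = {delta_v:.3e} m/s")
print(f"delta_m = {delta_m:.3e} J/T")
```

For a 1 T field the speed change is only a few metres per second, which is why the assumption $v' \simeq v$ is safe.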

That is it! But this classical picture is superseded by quantum mechanics, where a better explanation is given!   

Monday 7 September 2015

Quantum Mechanics - Postulates (Part -1) - Wave function, Hermitian Operators

I am not going to give the postulates exactly as they are in the books; I just want to state each postulate and discuss its mathematical importance, in the way I understood it. 

From the classical physics of Lagrangian and Hamiltonian mechanics, we know that any system [a single particle, many particles, or anything else] can be associated with a function, the so-called Lagrangian or Hamiltonian, such that all the information about the system can be extracted from this function using the corresponding equations of motion. 

Mathematically, we assume that the Lagrangian or Hamiltonian function contains all the necessary information we need to describe the system completely. 

In the same way, here we assume that "Every quantum mechanical system is completely described by a state vector or wave function $ \vert{\psi(t)}\rangle $" (read "ket psi"), which is an element of a complex linear vector space called Hilbert space. The state vector contains all the information about the system, and it changes only with time.  


The state vector is an abstract concept and you can never measure this state vector or imagine it in a physical manner. 


From the concept of Vector space, we assume that it is always possible to define a set of vectors which are linearly independent and forms the basis for the Vector Space.  


Note: You needn't panic on hearing the term vector space. Your Euclidean space follows the rules of a vector space. Whenever you have trouble understanding vector spaces, you can always make a comparison with your 3-dimensional Euclidean space. 


The set of basis vectors needn't be unique, but it is always possible to represent any vector in the vector space as a linear combination of these basis vectors. 


As a consequence, you can imagine this arbitrary state vector as the linear combination of all the basis vectors.
We don't know what these basis vectors are, since there are many possible ways of choosing a set of basis vectors. Let us consider this as a general linear combination.  

It is represented as, $$ \vert{\psi(t)}\rangle = \sum_b A_b(t) \vert{\phi_b}\rangle = A_1(t) \vert{\phi_1}\rangle + A_2(t) \vert{\phi_2}\rangle + \ldots.....eq.(1)$$


All the basis vectors are ket vectors; after all, the left side should be of the same kind as the right side. And we can always push the time dependence of the ket vector into the coefficients.  

We already said that, these are in abstract Hilbert space, so we cannot measure anything about them. 

To measure anything, we need to make the projection of this abstract quantities in the known space where we could describe the wave function completely. 

To take the projection, we form the inner product of this abstract wave function with the eigenstates of the desired known parameter. 

So that, the wave function and all its basis vectors are now described using our desired known parameter. 

For example, if the desired known parameter is position, then all the Wave function and its basis vectors will be projected into position space (where position is the parameter). And so, the new projected wave function is called "Position Space Wave function".


$$ \vert{\psi(t)}\rangle = \sum_b A_b(t) \vert{\phi_b}\rangle $$


Dotted with x to give the projection in Position space, 


$$ \langle{x}\vert{\psi(t)}\rangle = \sum_b A_b(t) \langle{x}\vert{\phi_b}\rangle   \,\, \ldots...eq.(2)$$


Now, the new projection of Wave function in Position space, i.e. Position Space wave function is,

$$ \psi(x,t) = \sum_b A_b(t) \phi_b(x) $$

Where $\langle{x}\vert{\psi(t)}\rangle = \psi(x,t)$ and $ \langle{x}\vert{\phi}\rangle = \phi(x)$ 


If we choose momentum as the desired known parameter, then we can dot momentum with the general wave function. It will result into, $$ \langle{p}\vert{\psi(t)}\rangle = \sum_b A_b(t) \langle{p}\vert{\phi_b}\rangle \,\,\ldots...eq.(3)$$ 

And the new wave function is called Momentum Space Wave function, $$ \psi(p,t) = \sum_b A_b(t) \phi_b(p) $$

That is all we can do with the first Postulate. 


The second Postulate is stated as, "Each dynamical variable that relates to the motion of the particle can be associated with a linear operator". 


An operator is said to be linear if it satisfies the condition, $$ \hat{Q}(c_1\psi_1 + c_2\psi_2) = c_1 \hat{Q}\psi_1 + c_2 \hat{Q}\psi_2 \,\,\,\ldots...eq.(4)$$


Each operator can be associated with a linear eigenvalue equation such that $$ \hat{Q} \psi_i = \lambda_i \psi_i \,\,\,\,\ldots...eq.(5)$$

where $\psi_i $ is called the eigen state and 
$\lambda_i$ is called the eigen value. 

A linear operator is also an abstract concept, which can be represented by a matrix. A linear operator is determined by how it acts on the basis vectors, because any vector can be expanded as a linear combination of these basis vectors. 


If we know how an operator acts on the basis, then it gives us everything we need to know about the operator on that Vector Space. 

Let me represent the basis vectors as $$\vert{e_1}\rangle, \vert{e_2}\rangle, \ldots...$$


Therefore, if $ \vert{\psi}\rangle = c_1 \vert{e_1}\rangle + c_2 \vert{e_2}\rangle + \ldots $, then $$ \hat{Q}\vert{\psi}\rangle = c_1 \hat{Q} \vert{e_1}\rangle + c_2 \hat{Q} \vert{e_2}\rangle + \ldots \,\,\ldots...eq.(6)$$



If we represent the linear operators with the matrix, knowing the matrix elements is knowing the operator itself. 

Let me take a basis vector $\vert{e_i}\rangle$ in the Hilbert space. To understand how a linear operator works on this basis vector, we apply it to this basis vector, and the result is some new vector. 


For example, you can consider the rotation of the coordinates as an operation that acts on the basis vectors. 

Due to the linearity, the new vector $\hat{Q}\vert{e_i}\rangle$ can itself be written as a linear combination of the basis vectors, represented as $$ \hat{Q}\vert{e_i}\rangle = \sum_k Q_{ki}\vert{e_k}\rangle $$

You can compare it with the coordinate transformation rules.   
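To make this concrete, here is a small NumPy sketch (my own illustration, with a made-up operator) that builds the matrix $Q_{ki}$ column by column from the operator's action on each basis vector:

```python
import numpy as np

# a hypothetical linear operator on a 2-dimensional space,
# defined only by what it does to a vector's components
def Q_action(vec):
    # example: swaps the two components (the Pauli sigma_x operator)
    return np.array([vec[1], vec[0]], dtype=complex)

basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

# matrix elements Q_ki: column i is the operator acting on |e_i>
Q = np.column_stack([Q_action(e) for e in basis])
print(Q.real)   # the sigma_x matrix

# knowing the matrix is knowing the operator: act on any vector
v = 3 * basis[0] + 4 * basis[1]
print(np.allclose(Q @ v, Q_action(v)))   # True
```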

Now, the third postulate says that "Any observable in quantum mechanics is a linear Hermitian operator on the Hilbert space, whose eigenvalues are the only possible results of a precise measurement of that observable." 
Definition of a Hermitian operator:

$$ \int \psi_i^* (\hat{Q} \psi_i)\, dx = \int (\hat{Q}\psi_i)^* \psi_i \,dx \,\,\,\ldots...eq.(7)$$

Eq.(7) gives a special property when expanded with the eigenvalue equation eq.(5) as follows,

$$ \int \psi_i^* (\lambda_i \psi_i)\,dx =  \int (\lambda_i \psi_i)^* \psi_i\,dx $$
which gives, $$ \lambda_i \int \psi_i^*\psi_i \,dx = \lambda_i^* \int \psi_i^* \psi_i \,dx $$

$$(\lambda_i - \lambda_i^*) \int \psi_i^*\psi_i \,dx = 0 $$


But $\int\psi_i^*\psi_i \,dx = \int {|\psi_i|}^2 \,dx > 0 $, and it is equal to zero only when $|\psi_i| = 0 $, where the wave function itself vanishes, and that is not a desirable solution. 


So, the only solution is $$ \lambda_i - \lambda_i^* = 0 $$ or $$ \lambda_i = \lambda_i^* $$ which is possible only when $\lambda_i$ is a real number. This is a characteristic result for any Hermitian operator, which states that "The eigenvalues of a Hermitian operator are always real numbers". 
This is the reason why the eigenvalues of an operator are the only possible results of a precise measurement: a measurement should give a real number. 
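This reality of the eigenvalues is easy to check numerically. The sketch below (a NumPy illustration of mine, not part of the postulates) builds a random Hermitian matrix and confirms that its eigenvalues are real:

```python
import numpy as np

rng = np.random.default_rng(1)

# build a random Hermitian matrix: Q = (M + M^dagger) / 2
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q = (M + M.conj().T) / 2
print(np.allclose(Q, Q.conj().T))       # Hermitian by construction: True

# its eigenvalues are real, up to floating-point noise
eigvals = np.linalg.eigvals(Q)
print(np.allclose(eigvals.imag, 0.0))   # True
```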


There is much more to say about operators and about important relations such as completeness, orthogonality and Hermiticity; those should be dealt with separately. 

Saturday 5 September 2015

Curvilinear Coordinate System and General expression for Gradient, Curl, Divergence and Laplacian

The curvilinear coordinate system is in fact the most general coordinate system used to describe the motion of a particle. It includes all our usual systems, such as the Cartesian, spherical and cylindrical coordinate systems. 

     With one to one correspondence, it is always possible to define a set of transformation rules like, $$ x_1 = x_1(u_1, u_2, u_3) \\~\\ x_2 = x_2(u_1, u_2, u_3) \\~\\ x_3 = x_3(u_1, u_2, u_3)$$


to write each of the Cartesian coordinates in terms of the general coordinates. We can also define inverse transformation rules,


$$ u_1 = u_1(x_1, x_2, x_3) \\~\\ u_2 = u_2(x_1, x_2, x_3) \\~\\ u_3 = u_3(x_1, x_2, x_3) $$

to go from one system to another. These transformations are unique, since they have one to one correspondence. 
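As a concrete instance of such a transformation pair (a minimal sketch; the function names are mine), take spherical coordinates $(u_1, u_2, u_3) = (r, \theta, \phi)$ and check that the inverse really undoes the forward map:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Forward rules x_i = x_i(u_1, u_2, u_3) for spherical coordinates."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_spherical(x, y, z):
    """Inverse rules u_i = u_i(x_1, x_2, x_3)."""
    r = math.sqrt(x * x + y * y + z * z)
    return r, math.acos(z / r), math.atan2(y, x)

# Round trip: one-to-one correspondence means we recover the original point.
u = (2.0, 0.7, 1.2)
back = cartesian_to_spherical(*spherical_to_cartesian(*u))
assert all(abs(a - b) < 1e-9 for a, b in zip(u, back))
```

The one-to-one correspondence holds away from the usual degenerate points (e.g. $r = 0$ or the polar axis, where $\phi$ is undefined).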


The surfaces $ u_1 = const., u_2 = const., u_3 = const.$ are called coordinate surfaces, and the curve formed by the intersection of a pair of these surfaces is called a coordinate curve. The point where the tangent lines drawn to these coordinate curves intersect is chosen as the origin of the coordinate system. 


For the sake of simplicity, we usually deal with coordinate systems whose coordinate surfaces intersect at right angles. These are called "orthogonal coordinate systems". 


Now, we can formulate the general rules for describing a point and its motion and to describe various vector operations in this new curvilinear coordinate system.  


But the formulation is going to be general enough to apply to any such system at any point: Cartesian, cylindrical, spherical, paraboloidal, ellipsoidal and so on. 


Note: there are more than ten orthogonal coordinate systems in common use in mathematics. 


To start, we first consider describing a small differential element in 3D Euclidean space. $$\vec{dr} = dx \hat{e_x} + dy \hat{e_y} + dz \hat{e_z} \,\,\,\ldots...eq.(1)$$ 


Using the chain rule, we can write the same differential element as,

$$ \vec{dr} = \frac{\partial\vec{r}}{\partial{x}} dx + \frac{\partial\vec{r}}{\partial{y}} dy + \frac{\partial\vec{r}}{\partial{z}} dz \ldots... eq.(2)$$  

Comparing eqs.(1) and (2) we get $$\frac{\partial\vec{r}}{\partial{x}} = \hat{e_x} \, , \,\frac{\partial\vec{r}}{\partial{y}} = \hat{e_y} \, , \, \frac{\partial\vec{r}}{\partial{z}} = \hat{e_z} $$   In a similar way, if the position vector is written in terms of the curvilinear coordinates as $$ \vec{r} = \vec{r} (u_1, u_2, u_3)$$ the tangent vector to the $u_1$ curve at some point P is $ \frac{\partial\vec{r}}{\partial{u_1}}$ 


Therefore, the unit tangent vector in this direction is given by,  $$ \frac{\partial\vec{r}/\partial{u_1}}{|\partial\vec{r}/\partial{u_1}|} = \hat{e_1} $$


We call $ |\partial\vec{r}/\partial{u_1}| = h_1 $ the scale factor.


Since these coordinates need not have the dimension of length, the scale factors bring every term to the same dimension - after all, we cannot add mangoes and apples into a single count. 


To complete, we write, $$\frac{\partial\vec{r}}{\partial{u_1}} = h_1 \hat{e_1} \ldots... eq.(3) \\ \frac{\partial\vec{r}}{\partial{u_2}} = h_2 \hat{e_2} \ldots... eq.(4) \\ \frac{\partial\vec{r}}{\partial{u_3}} = h_3 \hat{e_3} \ldots... eq.(5) $$
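To make the scale factors concrete, here is a short symbolic check (a sketch assuming SymPy) for spherical coordinates: the squared norms $h_i^2 = |\partial\vec{r}/\partial u_i|^2$ from eqs.(3)-(5) should come out as the well-known $1,\; r^2,\; r^2\sin^2\theta$.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Position vector in Cartesian components, with (u_1, u_2, u_3) = (r, theta, phi)
R = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])

# Squared scale factors h_i^2 = (dR/du_i) . (dR/du_i)
h_sq = [sp.simplify(R.diff(u).dot(R.diff(u))) for u in (r, th, ph)]

expected = [1, r**2, r**2 * sp.sin(th)**2]
assert all(sp.simplify(a - b) == 0 for a, b in zip(h_sq, expected))
```

So for spherical coordinates $h_1 = 1$, $h_2 = r$, $h_3 = r\sin\theta$, which indeed converts the angular differentials $d\theta, d\phi$ into arc lengths.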


These basis vectors are tangent to the coordinate curves. Similarly, we can always form another basis whose unit vectors are normal to the coordinate surfaces; these normal vectors are given by the gradients $ \nabla{u_1} , \nabla{u_2}, \nabla{u_3}$ 


After normalizing, we get a new set of basis unit vectors, $$ \hat{E_1} = \frac{\nabla{u_1}}{|\nabla{u_1}|} , \hat{E_2} = \frac{\nabla{u_2}}{|\nabla{u_2}|}, \hat{E_3} = \frac{\nabla{u_3}}{|\nabla{u_3}|} $$  It can be shown separately that these two sets of basis vectors constitute a reciprocal system of vectors under coordinate transformations. This leads to the concept of covariant and contravariant vectors.


Thus, any vector can be expressed either in terms of the first set of basis vectors or in terms of the second set. 


The square of the magnitude of the differential element in terms of the first set of unit basis vectors is $$ ds^2 = \vec{dr}\cdot\vec{dr} = {h_1}^2 {du_1}^2 + {h_2}^2 {du_2}^2 + {h_3}^2 {du_3}^2 $$ Since we now have the basic tools we need, we can start deriving the general relations for the gradient, divergence, curl and Laplacian. 


Gradient:
Gradient from the definition, $$ df = \nabla{f} \cdot \vec{dr} $$


Using the chain rule, $$ df = \frac{\partial{f}}{\partial{u_1}} du_1 + \frac{\partial{f}}{\partial{u_2}} du_2 + \frac{\partial{f}}{\partial{u_3}} du_3 \dots... eq.(6) $$  and we can also write the differential element $ \vec{dr} $ using eq. (3), (4), (5) as, $$ \vec{dr} = h_1 du_1 \hat{e_1} + h_2 du_2 \hat{e_2} + h_3 du_3 \hat{e_3} $$


We still don't know the form of the gradient operator, but we do know from its definition that it gives $df$ when dotted with $ \vec{dr}$. So, 


$$ \nabla{f} \cdot \vec{dr} = \nabla_1{f}\, h_1 du_1 + \nabla_2{f}\, h_2 du_2 + \nabla_3{f}\, h_3 du_3 \ldots... eq.(7) $$


where $ \nabla_1{f} , \nabla_2{f} , \nabla_3{f} $ are the components of Gradient operator when it is written in terms of the general basis vectors $ \hat{e_1}, \hat{e_2}, \hat{e_3} $. 
Again comparing eqs.(6) and (7), the components of the gradient operator are found to be, $$ \nabla_1{f} = \frac{1}{h_1} \frac{\partial{f}}{\partial{u_1}},\nabla_2{f} = \frac{1}{h_2} \frac{\partial{f}}{\partial{u_2}}, \nabla_3{f} = \frac{1}{h_3} \frac{\partial{f}}{\partial{u_3}} $$  


Hence the general form of Gradient operator in any curvilinear orthogonal coordinate system is given by,

$$ \nabla{f} = \frac{1}{h_1} \frac{\partial{f}}{\partial{u_1}} \hat{e_1} + \frac{1}{h_2} \frac{\partial{f}}{\partial{u_2}} \hat{e_2} + \frac{1}{h_3} \frac{\partial{f}}{\partial{u_3}} \hat{e_3} \,\, \ldots...eq.(8)$$  

Divergence:


Let us analyze the first of the three terms we get when we apply the divergence operator to a vector function $\vec{A} = A_1\hat{e_1} + A_2\hat{e_2} + A_3\hat{e_3}$,

$$ (\nabla\cdot\vec{A})_1 = \nabla \cdot (A_1\hat{e_1})  \,\,\,\ldots...eq.(9)$$

We don't know what we will obtain when we apply the divergence operator to $ \hat{e_1} $. But if we could write $\hat{e_1}$ in terms of some gradients, then there is a real possibility of obtaining the expression for the divergence from our prior knowledge of the gradient.     


To write the unit vectors in terms of gradients, we apply the gradient formula, eq.(8), to the coordinate functions $ u_1, u_2, u_3 $, which gives $$ \nabla{u_1} = \frac{\hat{e_1}}{h_1}\,, \,\nabla{u_2} = \frac{\hat{e_2}}{h_2}\, ,\, \nabla{u_3} = \frac{\hat{e_3}}{h_3} $$ 

Inverting these relations, the unit vectors are

$$ \hat{e_1} = h_1 \nabla{u_1} \, ,\, \hat{e_2} = h_2 \nabla{u_2} \, ,\, \hat{e_3} = h_3 \nabla{u_3} $$  

But we need a form on which the divergence acts simply, so we use the right-handed orthonormality relation $$ \hat{e_1} = \hat{e_2} \times \hat{e_3} = h_2 h_3 \nabla{u_2} \times \nabla{u_3} $$

Applying this in eq.(9),
$$ \nabla \cdot (A_1\hat{e_1}) = \nabla\cdot[A_1h_2h_3 \nabla{u_2} \times \nabla{u_3}] \, \, \, \ldots...eq.(10)$$
Using the vector relation, $$ \nabla\cdot (f\vec{A}) = \nabla{f}\cdot\vec{A} + f\, \nabla\cdot\vec{A} $$

where $f$ is a scalar function and $\vec{A}$ a vector function.

Eq.(10) becomes, $$ \nabla\cdot(A_1\hat{e_1}) = (\nabla(A_1h_2h_3))\cdot(\nabla{u_2}\times\nabla{u_3}) + A_1h_2h_3 \nabla\cdot(\nabla{u_2}\times\nabla{u_3}) \, \, \, \ldots...eq.(11)$$


But using the vector identity, $$ \nabla\cdot(\vec{A}\times\vec{B}) = \vec{B}\cdot(\nabla\times \vec{A}) - \vec{A}\cdot (\nabla\times\vec{B})$$

$$ \nabla\cdot(\nabla{u_2}\times\nabla{u_3}) = \nabla{u_3}\cdot(\nabla\times\nabla{u_2}) - \nabla{u_2}\cdot(\nabla\times\nabla{u_3})$$

But, Curl of gradient is always zero for any scalar function, which implies $$ \nabla\cdot(\nabla{u_2}\times\nabla{u_3}) = 0 $$  


Eq.(11) gives, $$\nabla \cdot (A_1\hat{e_1}) = (\nabla(A_1h_2h_3))\cdot(\nabla{u_2}\times\nabla{u_3}) \,\,\,\ldots...eq.(12) $$  


Again writing $ \nabla{u_2}\times\nabla{u_3} $ in terms of basis vectors that is $$ \nabla{u_2}\times\nabla{u_3} = \frac{\hat{e_2}\times\hat{e_3}}{h_2h_3} = \frac{\hat{e_1}}{h_2h_3} $$  


Eq.(12) results into $$\nabla\cdot(A_1\hat{e_1}) = \frac{\hat{e_1}}{h_2h_3} \cdot \nabla(A_1h_2h_3) $$

Using our prior knowledge of Gradient, it can be expanded as,
 $$ \nabla\cdot(A_1\hat{e_1}) = \frac{\hat{e_1}}{h_2h_3}\cdot\left[ \frac{\hat{e_1}}{h_1} \frac{\partial(A_1h_2h_3)}{\partial{u_1}} + \frac{\hat{e_2}}{h_2} \frac{\partial(A_1h_2h_3)}{\partial{u_2}} + \frac{\hat{e_3}}{h_3} \frac{\partial(A_1h_2h_3)}{\partial{u_3}}\right] $$  Since we are dealing with an orthonormal basis, the dot product between two different basis vectors is zero and the dot product of a basis vector with itself is unity. 

Making use of the orthonormality, we finally arrive at the result,
$$ \nabla\cdot(A_1\hat{e_1}) = \frac{1}{h_1h_2h_3} \frac{\partial(A_1h_2h_3)}{\partial{u_1}} $$

A similar procedure gives the expressions for the other components. 


The final expression for the divergence operator in general curvilinear coordinates is,


$$\nabla\cdot\vec{A} = \frac{1}{h_1h_2h_3}\left[ \frac{\partial{(A_1h_2h_3)}}{\partial{u_1}} + \frac{\partial{(A_2h_3h_1)}}{\partial{u_2}} + \frac{\partial{(A_3h_1h_2)}}{\partial{u_3}}\right] \ldots...eq.(13)$$  
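As a sanity check on the general divergence formula (a sketch assuming SymPy), specialize it to spherical coordinates, where $h_1 = 1$, $h_2 = r$, $h_3 = r\sin\theta$, and verify the familiar result that the radial field $\vec{A} = r\,\hat{e_r}$ has divergence 3:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
h1, h2, h3 = 1, r, r * sp.sin(th)   # spherical scale factors

def divergence(A1, A2, A3):
    """The general curvilinear divergence, specialized to spherical coordinates."""
    return sp.simplify((sp.diff(A1 * h2 * h3, r)
                        + sp.diff(A2 * h3 * h1, th)
                        + sp.diff(A3 * h1 * h2, ph)) / (h1 * h2 * h3))

# div(r e_r) = (1/r^2) d(r^2 * r)/dr = 3, matching the Cartesian div(x,y,z) = 3.
assert divergence(r, 0, 0) == 3
```

The same function reproduces other standard results, e.g. `divergence(1/r**2, 0, 0) == 0` away from the origin.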


Curl:


In the same way, we first take a single component, write it in terms of a gradient, and apply the curl,


$$ \nabla \times (A_1\hat{e_1}) = \nabla \times (A_1h_1\nabla{u_1}) \, \, \ldots...eq.(14)$$

Since there is already a curl operator, we don't need the cross-product relation; we can simply write $ \hat{e_1} $ in terms of its own gradient relation, i.e. $ \hat{e_1} = h_1 \nabla{u_1} $

Using the vector identity, $$ \nabla \times (f\vec{A}) = f\, \nabla\times\vec{A} + \nabla{f}\times\vec{A} $$

Eq.(14) gives, $$ \nabla \times(A_1\hat{e_1}) = \nabla \times (A_1h_1\nabla{u_1}) = \nabla (A_1h_1)\times \nabla{u_1} + A_1h_1\nabla \times \nabla{u_1} $$

but curl of gradient is zero.

So, Eq.(14) becomes, $$\nabla \times(A_1\hat{e_1}) = \nabla \times (A_1h_1\nabla{u_1}) = \nabla (A_1h_1)\times \nabla{u_1} $$  

With the help of eq.(8) we can rewrite the above into,

$$ \nabla \times (A_1\hat{e_1}) = \left[\frac{1}{h_1} \frac{\partial{A_1h_1}}{\partial{u_1}} \hat{e_1} + \frac{1}{h_2} \frac{\partial{A_1h_1}}{\partial{u_2}} \hat{e_2} + \frac{1}{h_3} \frac{\partial{A_1h_1}}{\partial{u_3}} \hat{e_3}\right] \times \nabla{u_1} $$

Again using the relation, $ \nabla{u_1} = \frac{\hat{e_1}}{h_1} $

$$\nabla \times (A_1\hat{e_1}) = \left[\frac{1}{h_1} \frac{\partial{A_1h_1}}{\partial{u_1}} \hat{e_1} + \frac{1}{h_2} \frac{\partial{A_1h_1}}{\partial{u_2}} \hat{e_2} + \frac{1}{h_3} \frac{\partial{A_1h_1}}{\partial{u_3}} \hat{e_3}\right] \times \frac{\hat{e_1}}{h_1} $$


Using the cross product rules for the right-handed basis, we finally get, 

$$\nabla \times (A_1\hat{e_1}) = -\frac{1}{h_1h_2} \frac{\partial{A_1h_1}}{\partial{u_2}} \hat{e_3} + \frac{1}{h_1h_3} \frac{\partial{A_1h_1}}{\partial{u_3}} \hat{e_2} \, \ldots... eq.(15)$$ 


A similar procedure gives the following results for the other components,

$$\nabla \times (A_2\hat{e_2}) = \frac{1}{h_1h_2} \frac{\partial{A_2h_2}}{\partial{u_1}} \hat{e_3} - \frac{1}{h_2h_3} \frac{\partial{A_2h_2}}{\partial{u_3}} \hat{e_1} \, \ldots... eq.(16)$$

$$\nabla \times (A_3\hat{e_3}) = -\frac{1}{h_1h_3} \frac{\partial{A_3h_3}}{\partial{u_1}} \hat{e_2} + \frac{1}{h_2h_3} \frac{\partial{A_3h_3}}{\partial{u_2}} \hat{e_1} \, \ldots... eq.(17)$$


Combining the components from eq.(15),(16),(17) we get the General expression for the Curl operator in Curvilinear coordinates as,


$$ \nabla\times\vec{A} = \frac{1}{h_2h_3} \left[\frac{\partial(A_3h_3)}{\partial{u_2}} - \frac{\partial(A_2h_2)}{\partial{u_3}}\right] \hat{e_1} + \\~\\ \frac{1}{h_1h_3} \left[\frac{\partial(A_1h_1)}{\partial{u_3}} - \frac{\partial(A_3h_3)}{\partial{u_1}}\right] \hat{e_2} + \\~\\ \frac{1}{h_1h_2} \left[\frac{\partial(A_2h_2)}{\partial{u_1}} - \frac{\partial(A_1h_1)}{\partial{u_2}}\right] \hat{e_3} \, \, \, \ldots...eq.(18)$$ 


Or, more compactly, we can write this in determinant form as,


$$ \nabla \times \vec{A} = \frac{1}{h_1h_2h_3}\begin{vmatrix} h_1\,\hat{e_1} & h_2 \,\hat{e_2} & h_3 \, \hat{e_3} \\ \frac{\partial}{\partial{u_1}} & \frac{\partial}{\partial{u_2}} & \frac{\partial}{\partial{u_3}} \\ h_1 A_1 & h_2 A_2 & h_3 A_3 \end{vmatrix} \,\,\, \ldots...eq.(19) $$
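A quick symbolic check of the curl components (a sketch assuming SymPy), in cylindrical coordinates with scale factors $(1, \rho, 1)$: the rigid-rotation field $\vec{A} = \rho\,\hat{e_\phi}$ should have curl $2\hat{e_z}$, twice the angular velocity.

```python
import sympy as sp

rho, phi, z = sp.symbols('rho phi z', positive=True)
h = (1, rho, 1)        # cylindrical scale factors (h_rho, h_phi, h_z)
u = (rho, phi, z)

def curl(A):
    """Component-wise curl for an orthogonal curvilinear system:
    (curl A)_i = [d(A_k h_k)/du_j - d(A_j h_j)/du_k] / (h_j h_k),
    with (i, j, k) a cyclic permutation of (1, 2, 3)."""
    c = []
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        c.append(sp.simplify(
            (sp.diff(A[k] * h[k], u[j]) - sp.diff(A[j] * h[j], u[k])) / (h[j] * h[k])))
    return c

# Rigid rotation about the z-axis: A = rho e_phi  =>  curl A = 2 e_z.
assert curl([0, rho, 0]) == [0, 0, 2]
```

Any gradient field, e.g. `curl([1, 0, 0])` for $\vec{A} = \hat{e_\rho} = \nabla\rho$, comes out as zero, as it must.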


Laplacian:


Unlike the others, we don't need to find anything extra for the Laplacian, since it is just the divergence of the gradient. 


Let us take a scalar function "f" and write its gradient from eq.(8), $$ \nabla{f} = \frac{1}{h_1} \frac{\partial{f}}{\partial{u_1}} \hat{e_1} + \frac{1}{h_2} \frac{\partial{f}}{\partial{u_2}} \hat{e_2} + \frac{1}{h_3} \frac{\partial{f}}{\partial{u_3}} \hat{e_3} $$

Now, applying the divergence operation to this gradient, we get the general expression for the Laplacian in curvilinear coordinates, 

$$ \nabla^2f = \frac{1}{h_1h_2h_3} \left[ \frac{\partial\left(\frac{h_2h_3}{h_1}\frac{\partial{f}}{\partial{u_1}}\right)}{\partial{u_1}} + \frac{\partial\left(\frac{h_3h_1}{h_2}\frac{\partial{f}}{\partial{u_2}}\right)}{\partial{u_2}} + \frac{\partial\left(\frac{h_1h_2}{h_3}\frac{\partial{f}}{\partial{u_3}}\right)}{\partial{u_3}} \right] \ldots...eq.(20)$$  
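Finally, a sketch (assuming SymPy) checking the Laplacian formula in spherical coordinates against two familiar results: $\nabla^2 r^2 = 6$ (since $r^2 = x^2 + y^2 + z^2$) and $\nabla^2(1/r) = 0$ away from the origin.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
h1, h2, h3 = 1, r, r * sp.sin(th)   # spherical scale factors

def laplacian(f):
    """The general curvilinear Laplacian, with the spherical scale factors."""
    return sp.simplify(
        (sp.diff(h2 * h3 / h1 * sp.diff(f, r), r)
         + sp.diff(h3 * h1 / h2 * sp.diff(f, th), th)
         + sp.diff(h1 * h2 / h3 * sp.diff(f, ph), ph)) / (h1 * h2 * h3))

assert laplacian(r**2) == 6     # Laplacian of x^2 + y^2 + z^2
assert laplacian(1/r) == 0      # 1/r is harmonic away from the origin
```

The second result is the reason the Coulomb and Newtonian potentials satisfy Laplace's equation in empty space.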




That is all we need to derive. We should remember that these derivations were done for a general orthogonal curvilinear coordinate system. 

For the most general curvilinear coordinate systems (i.e. those which are not orthogonal), we need tensors and tensor analysis. 
 
