
Wednesday 24 February 2016

Monopoles - 8 - Dirac Monopoles in Quantum Mechanics - Part - 4


Let us start with the motion of an electron in the field of a magnetic monopole, described in the usual spherical polar coordinates with the monopole placed at the origin. In addition, we know from the previous posts (Parts 1, 2 and 3) that every wave function describing this system should have a singularity line starting from the origin and passing through any closed surface around it. We use the equations from those posts below.
We consider the same wave function of the type, $$ \psi = \psi_1 e^{i\beta}$$ with the corresponding definitions of $\beta, \vec{k}$, etc.
But now two separate things are introduced, a nodal line and a singular line. I couldn't understand the distinction completely, but I just want to proceed with the next step.
The magnetic field of the monopole is given by, $$ \vec{B} = \frac{q_m}{r^2} \hat{r} $$ and $$ \nabla\times\vec{K} = \frac{e}{\hbar{c}} \vec{B}  $$ On substituting the quantization condition $q_m = \frac{n\hbar c}{2e}$, $$ \nabla\times \vec{K} = \frac{e}{\hbar{c}} \frac{q_m}{r^2}\hat{r} \\~\\ = \frac{e}{\hbar{c}}\frac{n\hbar{c}}{2er^2}\hat{r}\\~\\ = \frac{n}{2r^2}\hat{r}$$
Thus, the curl of $\vec{K}$ is radial with magnitude $\frac{n}{2r^2}$.
So, the solution for $\vec{K}$ can be worked out by expanding the curl in spherical polar coordinates as, $$ \frac{1}{r^2\sin\theta} \left[ \frac{\partial(r\sin\theta\, k_{\phi})}{\partial{\theta}} - \frac{\partial(rk_{\theta})}{\partial{\phi}}\right] \hat{r} + \frac{1}{r\sin\theta}\left[\frac{\partial{k_r}}{\partial{\phi}} - \frac{\partial(r\sin\theta\, k_{\phi})}{\partial{r}}\right] \hat{\theta}+ \\~\\\frac{1}{r}\left[ \frac{\partial(r{k_{\theta}})}{\partial{r}} - \frac{\partial{k_r}}{\partial{\theta}}\right]\hat{\phi} = \frac{n}{2r^2} \hat{r} $$
Equating the components, we get a solution, $$ k_\theta = k_r =  k_0 = 0 \\~\\ k_\phi = \frac{n}{2r} \tan\frac{\theta}{2}$$
Then the Schrödinger equation for the non-relativistic electron is given by, $$ \frac{-\hbar^2}{2m} \nabla^2\psi = E\psi$$
Applying $$ \psi = \psi_1 e^{i\beta}$$ we get, $$ \nabla\cdot\nabla(\psi_1e^{i\beta}) = \nabla\cdot\left[e^{i\beta}\nabla(\psi_1) + \psi_1 \nabla(e^{i\beta})\right] \\~\\= e^{i\beta} \nabla^2(\psi_1) + \nabla\psi_1\cdot\nabla(e^{i\beta}) + \nabla\cdot\left[\psi_1 \nabla (e^{i\beta})\right]$$ But $$ \nabla(e^{i\beta}) = i e^{i\beta} \nabla\beta = ie^{i\beta}\vec{k}$$ (these are three-vectors; the four-vector quantities are indicated separately). Applying this, we get, $$ \nabla^2\psi = e^{i\beta}\nabla^2\psi_1 + ie^{i\beta} \nabla\psi_1\cdot \vec{k} + \nabla(\psi_1ie^{i\beta})\cdot\vec{k} + \psi_1ie^{i\beta} \nabla \cdot\vec{k} \\~\\ = e^{i\beta}\nabla^2\psi_1 + ie^{i\beta} \vec{k}\cdot\nabla\psi_1 + ie^{i\beta}\nabla\psi_1\cdot\vec{k} + \psi_1 \nabla(ie^{i\beta})\cdot\vec{k} + \psi_1 ie^{i\beta} \nabla\cdot\vec{k} \\~\\ = e^{i\beta} \nabla^2\psi_1 + ie^{i\beta} \vec{k}\cdot\nabla\psi_1 + ie^{i\beta}\nabla\psi_1\cdot\vec{k} + ie^{i\beta}\psi_1 \nabla\cdot\vec{k} - e^{i\beta}\psi_1 \vec{k}\cdot\vec{k}$$
which finally gives, $$ \nabla^2\psi = e^{i\beta}\left[ \nabla^2\psi_1 + i\vec{k}\cdot\nabla\psi_1 + i \left(\nabla\psi_1\cdot\vec{k} + \psi_1\nabla\cdot\vec{k}\right) - k^2\psi_1\right] $$ or, as an operator acting on $\psi_1$, $$ \nabla^2\psi = e^{i\beta}\left[ \nabla^2 + 2i\vec{k}\cdot\nabla + i (\nabla\cdot\vec{k}) - k^2\right]\psi_1$$ where $(\nabla\cdot\vec{k})$ denotes the scalar divergence of $\vec{k}$.
Now, our initial schrodinger equation can be rewritten as,
$$ \frac{-\hbar^2}{2m}\left[\nabla^2 + 2i\vec{k}\cdot\nabla + i (\nabla\cdot\vec{k}) - k^2\right]\psi_1 = E\psi_1$$
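Before substituting, the operator expansion can be confirmed with a quick one-dimensional SymPy check (my own sketch; in 1D, $\vec{k}$ reduces to $d\beta/dx$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi1 = sp.Function('psi1')(x)
beta = sp.Function('beta')(x)
k = sp.diff(beta, x)  # one-dimensional analogue of the vector k

# left side: Laplacian acting on psi = psi1 * exp(i beta)
lhs = sp.diff(psi1 * sp.exp(sp.I * beta), x, 2)
# right side: the expanded operator acting on psi1
rhs = sp.exp(sp.I * beta) * (sp.diff(psi1, x, 2)
                             + 2 * sp.I * k * sp.diff(psi1, x)
                             + sp.I * sp.diff(k, x) * psi1
                             - k**2 * psi1)

print(sp.simplify(lhs - rhs))  # 0
```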
Substituting the values of $\vec{k}$, we get,
$$ k^2 = k_\phi^2 = \frac{n^2}{4r^2}\tan^2\frac{\theta}{2} $$ and, since $k_\phi$ has no $\phi$-dependence, $\nabla\cdot\vec{k} = 0$, while
$$ \vec{k}\cdot\nabla = \frac{k_\phi}{r\sin\theta}\frac{\partial}{\partial\phi} = \frac{n\tan\frac{\theta}{2}}{4r^2\sin\frac{\theta}{2}\cos\frac{\theta}{2}}\frac{\partial}{\partial\phi} = \frac{n\sec^2\frac{\theta}{2}}{4r^2}\frac{\partial}{\partial{\phi}}$$
On substitution, $$ \frac{-\hbar^2}{2m}\left[ \nabla^2 + \frac{ni}{2r^2} \sec^2\frac{\theta}{2}\,\frac{\partial}{\partial{\phi}} - \frac{n^2}{4r^2}\tan^2\frac{\theta}{2}\right]\psi_1 = E \psi_1 $$
Writing the Laplace operator in spherical polar coordinates and using the regular separation of variables (the manipulation is straightforward), we get for the radial part, $$ \left[ \frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr} - \frac{\lambda}{r^2}\right]R(r) = \frac{-2mE}{\hbar^2} R(r)$$
and for the angular part, $$ \left[ \frac{1}{\sin\theta} \frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2} +\frac{ni}{2} \sec^2\frac{\theta}{2}\,\frac{\partial}{\partial\phi} - \frac{n^2}{4}\tan^2\frac{\theta}{2}\right]Y(\theta,\phi) = -\lambda{Y(\theta,\phi)}$$

From here, 
we need to solve two differential equations, for which I searched for the solution in various places. I couldn't find the complete solution, only a preview of the beginning of the solution by I. Tamm; just the first two pages are free to view and the complete paper costs money. So, I tried on my own to convert the above angular equation into the usual, simpler equation of spherical harmonics.

We will start with the angular part by assuming a solution of the type, $$ Y(\theta,\phi) = L(\theta) e^{im\phi} $$ Upon substitution, $$e^{im\phi}\left[ \frac{1}{\sin\theta} \frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) - \frac{m^2}{\sin^2\theta}-\frac{mn}{2} \sec^2\frac{\theta}{2} - \frac{n^2}{4}\tan^2\frac{\theta}{2}\right]L(\theta) = -\lambda{e^{im\phi}}{L(\theta)} $$ Using the half-angle identities $\sec^2\frac{\theta}{2} = \frac{2}{1+\cos\theta}$ and $\tan^2\frac{\theta}{2} = \frac{1-\cos\theta}{1+\cos\theta}$, we can rewrite the above as, $$\left[ \frac{1}{\sin\theta} \frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) - \frac{m^2}{\sin^2\theta}- \frac{mn}{1+ \cos\theta} - \frac{n^2}{4}\frac{1-\cos\theta}{1+\cos\theta}\right]L(\theta) = -\lambda{L(\theta)} $$ and $$\left[ \frac{1}{\sin\theta} \frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)+\lambda - \frac{m^2}{\sin^2\theta}- \frac{mn}{1+ \cos\theta} - \frac{n^2}{4}\frac{1-\cos\theta}{1+\cos\theta}\right]L(\theta) = 0 $$ Now, we will try to convert this into the usual equation by introducing a suitable new variable, $$ z = 1+\cos\theta$$
So that, $$ \frac{dL}{d\theta} = \frac{dL}{dz} \frac{dz}{d\theta}, \qquad \frac{dz}{d\theta} = -\sin\theta$$ and the first term becomes $$ \frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\left(-\sin\theta\frac{dL}{dz}\right)\right) = \frac{1}{\sin\theta}\left[\frac{d}{dz}\left(-\sin^2\theta\frac{dL}{dz}\right)\right]\frac{dz}{d\theta}\\~\\ = \frac{d}{dz}\left(\sin^2\theta\frac{dL}{dz}\right) = \frac{d}{dz}\left(\left(2z-z^2\right)\frac{dL}{dz}\right)$$ where we used the fact that, $$ \cos\theta = z - 1,\qquad \sin^2\theta = 2z-z^2$$ and the remaining terms become, $$ \lambda - \frac{m^2}{\sin^2\theta} -\frac{mn}{1+\cos\theta} - \frac{n^2}{4} \frac{1-\cos\theta}{1+\cos\theta} = \lambda - \frac{m^2}{2z-z^2}-\frac{mn}{z}-\frac{n^2}{4}\frac{2-z}{z}\\~\\ = \lambda - \frac{m^2+mn(2-z)+\frac{n^2}{4}(2-z)^2}{z(2-z)} = \lambda - \frac{\left(m+\frac{n}{2}(2-z)\right)^2}{z(2-z)}$$
Finally we get our differential equation as, $$ \frac{d}{dz}\left(\left(2z-z^2\right)\frac{dL(z)}{dz}\right) + \left[\lambda - \frac{\left(m+\frac{n}{2}(2-z)\right)^2}{2z-z^2}\right]L(z) = 0$$
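Two of the algebraic steps above are easy to get wrong, so here is a small SymPy check (my own) of the substitution $z = 1+\cos\theta$ and of the perfect-square combination:

```python
import sympy as sp

theta, z, m, n = sp.symbols('theta z m n')

# sin^2(theta) = 2z - z^2 under z = 1 + cos(theta)
check1 = sp.simplify((sp.sin(theta)**2 - (2*z - z**2)).subs(z, 1 + sp.cos(theta)))

# m^2 + mn(2-z) + (n^2/4)(2-z)^2 is a perfect square
check2 = sp.expand(m**2 + m*n*(2 - z) + sp.Rational(1, 4)*n**2*(2 - z)**2
                   - (m + n*(2 - z)/2)**2)

print(check1, check2)  # 0 0
```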
  

Monopoles - 7 - Dirac Monopoles in Quantum Mechanics - Part - 3


From our previous post we found that gauge invariance allows us to reinterpret our ordinary wave function, together with its phase factor, as the wave function of an electron in the electromagnetic potentials given by, $$ \vec{A} = \frac{\hbar{c}}{e}\vec{k}$$ and $$ V = \frac{-\hbar}{e}k_0$$
So, the magnetic and electric fields are given by, $$ \nabla\times\vec{A} = \vec{B} $$$$\rightarrow\,\,\,\,\,\,\nabla \times\vec{k} = \frac{e}{\hbar{c}} \vec{B}$$ and $$ \vec{E} = -\nabla{V} - \frac{1}{c}\frac{\partial{\vec{A}}}{\partial{t}} = \frac{\hbar}{e}\nabla{k_0} - \frac{\hbar}{e} \frac{\partial{\vec{k}}}{\partial{t}} $$ $$ \rightarrow\,\,\,\,\,\,\nabla{k_0} - \frac{\partial{\vec{k}}}{\partial{t}} = \frac{e}{\hbar}\vec{E}$$

Looking at these relations, we see that a new physical meaning is given to $\vec{k}$ in terms of the potentials of the electromagnetic field. So, whatever mathematical manipulation we perform on $\vec{k}$ acquires a physical meaning in terms of the electric and magnetic fields. That makes our problem as significant physically as it is mathematically.

So, we need to go back and take a fresh look at our initial definitions of the phase factor and its non-integrability, and check whether we could impose some more conditions so that its applicability can be generalized.

We have seen in the beginning that the change in phase (the change in $\beta$) of the wave function around a closed curve is the same for all wave functions. But even at a single point, there is some arbitrary freedom in choosing the phase of the wave function: we can add any integer multiple of $2\pi$ to the phase and still get the same result.
$$ e^{i\beta} = e^{i (\beta + 2\pi)} = e^{i(\beta+4\pi)} = e^{i(\beta+2n\pi)}$$
So, the phase, and hence the change in phase, is undetermined up to the addition of integer multiples of $2\pi$.

From this we can conclude that the change in phase of any wave function around a closed loop should equal the electromagnetic flux penetrating through the area enclosed by the loop, plus an arbitrary integer multiple of $2\pi$.

$$\oint\,d\beta = 2n\pi + \oint \vec{K} \cdot d\vec{l} \\~\\= 2n\pi + \int (\nabla\times\vec{K}) \cdot d\vec{a}$$
using Stokes' theorem for four-dimensional vectors. Finally, we get the change in phase, $$ \Delta\beta = 2n\pi + \frac{e}{\hbar{c}}  \int \vec{B}\cdot d\vec{a}$$
Only the magnetic flux comes into play, as we are considering the region where the monopole is enclosed.
As a special case, if we take a very small closed curve in a region where the functions are smooth, then the change in phase cannot contain integer multiples of $2\pi$; only the flux term plays a role, which is itself unique and cannot jump as we shrink the curve. This would not be the case if there were a singularity in the function, that is, in the potentials.

It can be concluded that, for a very small closed curve with smooth functions (potentials), the change in phase for different wave functions cannot arbitrarily vary in multiples of $2\pi$ but is determined by the flux penetrating through the surface. But the same wave function, with a singularity, will give the change as, $$ \Delta\beta = 2n\pi + \frac{e}{\hbar c}\int \vec{B} \cdot d\vec{a} $$ For a given monopole, the second term cannot change between wave functions. So the change in phase of the various wave functions depends on the value of the integer $n$ in the first term, which itself depends on the singularities enclosed within the surface.

Now, if I make my closed curve very small, approaching zero, then the change in phase should reduce to zero (from the simple logic that the wave function at a single point should have a definite phase, and so a definite probability).

Remember, we are not considering empty space, but space with a singularity line passing through our point.

From the above discussion we get, $$ \Delta\beta = 0 $$ So, $$2n\pi = \frac{-e}{\hbar{c}} \int \vec{B}\cdot\vec{da} $$
where $$\int\vec{B}\cdot\vec{da}$$ over a closed surface in three dimensions gives the net magnetic flux penetrating through it, which is equal to $ 4\pi{q_m}$. The positive or negative sign depends on the nature of the singularity.
On equating, we finally get the quantization condition for the charges as, $$ 4\pi{q_m} = \frac{2n\pi\hbar{c}}{e}$$ and $$ q_m = \frac{n\hbar{c}}{2e}$$

Thus, Dirac proved that if even one magnetic monopole exists in Nature, it would explain why all the electric charges in the Universe are quantized.
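To get a feel for the numbers (my own illustration, in Gaussian units): dividing the quantization condition by $e$ gives $q_m/e = n/(2\alpha)$, where $\alpha = e^2/\hbar c$ is the fine-structure constant, so the smallest magnetic charge is about 68.5 times the electron charge:

```python
# Dirac quantization: q_m = n * hbar * c / (2 e) in Gaussian units,
# so q_m / e = n / (2 * alpha) with alpha = e^2 / (hbar c)
alpha = 7.2973525693e-3  # fine-structure constant (CODATA 2018)

for n in range(1, 4):
    print(n, round(n / (2 * alpha), 1))  # 68.5, 137.0, 205.6
```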

Let us consider next the problem worked out by Dirac, with a little more explanation of its solution.

Friday 19 February 2016

Statistical and Thermal Physics - Summary of the contents - Part - 2

Let us start with the more abstract mathematical formalism of statistical mechanics by introducing some new concepts. We define $P_i$ as the probability of finding the system in some specific state $i$. It is not a combination of outcomes for a particular event but the probability of a single outcome, i.e. $i$ refers to one particular microstate with energy $E_i$.
So, the probability for the system to be found in this state is, $$ P_i = C\, N(E_0 - E_i) = C\, N(E)$$ where $ E_i + E = const. = E_0$. You may ask why we don't treat this as a combination of two independent events and write the probability as we did previously for two systems S, S' with energies E, E', namely P(E) = C N(E) N(E'):
it is because here the first factor is a single microstate, which counts exactly one state.
Now, we assume E acts as a reservoir for $E_i$, which gives, on Taylor expansion with suitable approximations, $$ \ln{N(E_0 - E_i)} = \ln{N(E_0)} - \beta{E_i} \\ N(E_0-E_i)= N(E_0) e^{-\beta{E_i}}$$ or simply, $$ P_i = C\, e^{-\beta{E_i}}$$ We know in addition that the sum of all probabilities should give unity, $$ \sum_i P_i = C \sum_i e^{-\beta{E_i}} = 1 $$ or $$ C = \frac{1}{\sum_i e^{-\beta{E_i}}} \\ \rightarrow\,\,\,\,\,\, P_i = \frac{e^{-\beta{E_i}}}{\sum_i e^{-\beta{E_i}}}$$ Note that the probability depends exponentially on the value of the energy $E_i$.
Defining a new term called the partition function and denoting it with a new symbol, $$ X = \sum_i e^{-\beta{E_i}} $$ we try to express all other variables using this X.
The average energy is given by, $$ \langle{E}\rangle = \frac{\sum_i e^{-\beta{E_i}}E_i}{\sum_i e^{-\beta{E_i}}} $$ but using X, $$ \sum_i e^{-\beta{E_i}}E_i = -\frac{\partial{X}}{\partial{\beta}}$$ and so $$ \langle{E}\rangle = -\frac{1}{X} \frac{\partial{X}}{\partial\beta} = -\frac{\partial{\ln{X}}}{\partial\beta}$$ The generalized force is given by (similar procedure), $$ f = \frac{1}{\beta} \frac{\partial\ln{X}}{\partial{x}}$$ with the same notation used in the previous post. In particular, f = p when x = V. When we talk about pressure, we mean the average pressure.
Since X is a function of both $\beta$ and x, we get $$d(\ln{X}) = \beta{dW} - E d\beta \\ \beta\left(dW + dE\right) = d\left(\ln{X} + \beta{E}\right) \\ d\left(k\ln{X} + k\beta{E}\right) = \frac{dQ}{T} \\\rightarrow\,\,\,\,\,\, S = k\left(\ln{X} + \beta{E}\right)$$ where S is the entropy.
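Here is a quick numerical check of $\langle E\rangle = -\partial\ln X/\partial\beta$ on a toy two-level system (my own example, with levels $E_0 = 0$ and $E_1 = 1$):

```python
import math

levels = (0.0, 1.0)  # toy two-level spectrum

def lnX(beta):
    # partition function X = sum over states of exp(-beta * E_i)
    return math.log(sum(math.exp(-beta * E) for E in levels))

def mean_E(beta):
    Z = sum(math.exp(-beta * E) for E in levels)
    return sum(E * math.exp(-beta * E) for E in levels) / Z

beta, h = 0.7, 1e-6
numerical = -(lnX(beta + h) - lnX(beta - h)) / (2 * h)  # central difference
print(abs(numerical - mean_E(beta)) < 1e-6)  # True
```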
The partition function for an ideal gas can be calculated as, $$ X = \frac{V^N}{h^{3N}} \left[\int_{-\infty}^{\infty}e^{-\beta{\frac{p^2}{2m}}} \,\,dp\right]^{3N} $$ The Gaussian integral gives, $$\int_{-\infty}^{\infty} e^{-\alpha{x^2}} \,dx = \sqrt{\frac{\pi}{\alpha}} $$
So, $$\int_{-\infty}^{\infty} e^{-\frac{\beta}{2m}p^2}\,dp = \sqrt{\frac{2m\pi}{\beta}}$$
And so, $$ X = V^N \left(\frac{2\pi{m}}{\beta{h^2}}\right)^{\frac{3N}{2}} \\ \ln{X} = N \left[\ln{V} + \frac{3}{2} \ln\left(\frac{2\pi{m}}{h^2}\right) - \frac{3}{2} \ln{\beta}\right] $$
And the mean energy is calculated to be $$ E = -\frac{\partial\ln{X}}{\partial\beta} = \frac{3N}{2\beta} = \frac{3}{2}NkT$$ i.e. $\frac{3}{2}kT$ per particle.
With this, the entropy is calculated to be, $$ S = k (\ln{X}+\beta{E}) = k N \left[\ln{V} + \frac{3}{2} \ln\left(\frac{2\pi{m}}{h^2}\right) - \frac{3}{2} \ln{\beta}+ \frac{3}{2}\right] \\~\\= kN \left[\ln{V} + \frac{3}{2} \ln\left(\frac{2\pi{mk}}{h^2}\right) + \frac{3}{2} \ln{T} + \frac{3}{2}\right]$$ (note that $-\ln\beta = \ln{kT}$). But it needs a small correction due to the Gibbs paradox, $$ X' = \frac{X}{N!} $$ so that our equation gets modified to (using Stirling's approximation), $$ \ln{X'} = \ln{X} - \ln{N!} = \ln{X} - N \ln{N} + N $$ $$ S = kN \left[\ln{V} + \frac{3}{2} \ln\left(\frac{2\pi{mk}}{h^2}\right) + \frac{3}{2} \ln{T}+\frac{3}{2} - \ln{N} + 1 \right] \\~\\= kN \left[\ln{\frac{V}{N}} + \frac{3}{2} \ln\left(\frac{2\pi{mk}}{h^2}\right) + \frac{3}{2} \ln{T} + \frac{5}{2}\right]$$
which is the final result for Entropy. 
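Since a couple of signs are easy to lose in this calculation, here is a numerical cross-check (my own, with arbitrary illustrative values, not physical units) that $S = k(\ln X' + \beta E)$ agrees with the final expression:

```python
import math

# arbitrary illustrative values (not physical units)
V, N, m, h, k, T = 2.0, 5, 1.3, 1.0, 1.0, 3.0
beta = 1.0 / (k * T)

lnX = N * (math.log(V) + 1.5 * math.log(2 * math.pi * m / h**2) - 1.5 * math.log(beta))
lnXp = lnX - N * math.log(N) + N        # Gibbs correction via Stirling's approximation
E = 1.5 * N * k * T                     # mean energy of the ideal gas

S_direct = k * (lnXp + beta * E)
S_final = k * N * (math.log(V / N) + 1.5 * math.log(2 * math.pi * m * k / h**2)
                   + 1.5 * math.log(T) + 2.5)
print(abs(S_direct - S_final) < 1e-9)  # True
```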
Now, we can calculate Maxwell's velocity distribution law using these ideas and by assuming the molecules to be non-interacting classical ideal particles.
The probability of finding a particle in the position range r to r+dr and momentum range p to p+dp is given by the canonical distribution used in the partition function; converting to position and velocity, we obtain, $$ M(r,v)\, d^3r \,d^3v = C e^{-\frac{\beta{mv^2}}{2}} d^3r \, d^3v $$ Integrating over all space and over all velocities, we get the total number of molecules, $$ N = C V \left[\int_{-\infty}^{\infty} e^{-\frac{\beta{mv_i^2}}{2}}dv_i \right]^3 = CV \left(\frac{2\pi}{\beta{m}}\right)^{\frac{3}{2}} $$
And so we get the value of C, and Maxwell's velocity distribution (which depends only on the speed $v$), $$ M(v) \,d^3r\,d^3v = \frac{N}{V} \left(\frac{m}{2\pi{kT}}\right)^{\frac{3}{2}} e^{-\frac{mv^2}{2kT}} \,d^3r\,d^3v $$ Once we have this, we can ask about the distribution of speeds in various ways.
We can calculate the old school stuff - the three kinds of velocities - but now with the correct mathematical formalism.
Average speed: to calculate the average of anything, we first need the probability density function. From our equation, the density function is obtained by taking a small shell in velocity space, $$ m(v)\, dv = 4\pi{M(v)} v^2 dv $$ and the function is normalized so that, $$ \int_0^\infty m(v)\, dv = \frac{N}{V}$$
which gives the mean speed as, $$ \langle{v}\rangle = \frac{V}{N} \int_0^\infty m(v)\, v\, dv = \frac{4\pi{V}}{N} \int_0^\infty M(v)\, v^3\, dv$$
Applying the value of M(v), we get, $$ \langle {v}\rangle = 4\pi \left(\frac{m}{2\pi{kT}}\right)^{\frac{3}{2}} \int_0^\infty e^{-\frac{mv^2}{2kT}}v^3 \,dv  = \sqrt{\frac{8kT}{\pi{m}}}$$
Similarly, the mean square speed is $$ \langle{v^2}\rangle = \frac{3kT}{m}$$ so the root-mean-square speed is $v_{rms} = \sqrt{\frac{3kT}{m}}$.
Finally the most probable speed is calculated by finding the maximum of the function m(v). $$ \frac{dm}{dv} = 0 $$
Apart from the constants, $$ m(v) = c\, v^2 e^{-\frac{mv^2}{2kT}} $$ and so the maximum condition gives, $$ 2v\, e^{-\frac{mv^2}{2kT}} - \frac{m}{kT} v^3 e^{-\frac{mv^2}{2kT}} = 0$$ $$ v_{mp} = \text{most probable speed} =\sqrt{\frac{2kT}{m}} $$
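All three characteristic speeds can be checked numerically (my own sketch, using NumPy and working in units where $kT/m = 1$):

```python
import numpy as np

# speed distribution m(v) proportional to v^2 exp(-v^2/2), in units with kT/m = 1
v = np.linspace(0.0, 20.0, 200001)
dv = v[1] - v[0]
mv = v**2 * np.exp(-v**2 / 2)
mv /= mv.sum() * dv                      # normalize numerically

mean_v = (mv * v).sum() * dv             # expect sqrt(8/pi) = 1.596
rms_v = np.sqrt((mv * v**2).sum() * dv)  # expect sqrt(3)    = 1.732
mp_v = v[np.argmax(mv)]                  # expect sqrt(2)    = 1.414
print(round(mean_v, 3), round(rms_v, 3), round(mp_v, 3))
```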
Specifically, if the particles are moving in one dimension (say x), then the factor $4\pi{v^2}$ does not appear in the distribution, which is just $$ m(v_x) = c\, e^{-\frac{mv_x^2}{2kT}} $$ and the maximum condition $\frac{dm}{dv_x} = 0$ gives, $$ c \left(-\frac{m}{kT}\right) v_x\, e^{-\frac{mv_x^2}{2kT}} = 0 \\\rightarrow \,\,\,\,\,\, v_{mp} = 0 $$
So, the most probable velocity, and also the mean velocity, in one dimension is zero, since the distribution attains its maximum at $v_x = 0$ and is symmetric about $v_x = 0$.

Thursday 18 February 2016

Statistical and Thermal Physics - Summary of the contents - Part - 1

I just want to make a brief summary of the mathematics and concepts involved in Statistical Mechanics. 
A microstate is a state of the system in the picture where the phase space is split into unit cells labelled by some indices.
Instead of analyzing the states of all the single particles, we introduce the concept of an ensemble, where we take a large number of identical systems in different states characterized by various corresponding parameters like spin, pressure, angular momentum, etc. We then ask for the probability of a particular value of such a parameter, and so probability arguments come into play.

To make a probability argument, we first need to define the system and its behaviour via a postulate. For example, in the event of throwing a die, we intrinsically assume the die is a fair one, so that no outcome is preferred over any other.
Similarly, here we postulate that when an isolated system is in equilibrium (the probabilities of the states are independent of time), the system is equally likely to be in any of its accessible states.
The accessible states are determined by the initial conditions which can be imposed arbitrarily by the observer - the part where Experimental physicist like to play very much.   
Let us consider an ensemble of systems with energy ranging from E to E+$\delta{E}$, and let N(E) denote the total number of possible states of the system in this range (similar to the number of possible combinations of outcomes for an event). Out of these states, some number $N(E,s_j)$ correspond to another event, a physical quantity taking the value $s_j$. Then the probability of getting the value $s_j$ of that physical quantity is, $$ P(s_j) = \frac{N(E,s_j)}{N(E)}$$
[Note: The reason I am using different notation for each time is to practice with the flexibility of our mind - So, don't stick with the notation. Everytime, it will be stated the meaning of the representation] 
A macroscopic system consists of many microscopic states and is characterized by external parameters like pressure, volume, etc.
Let us consider two systems S and S' in two different cases: in the first, they are allowed to exchange energy only through thermal interactions; in the second, energy is transferred through purely mechanical interactions. When we talk about a large number of particles, the mean energy is implied.
First case gives, $$\langle{E}\rangle + \langle{E'}\rangle = constant \\~\\ Heat\,\,\,\,\rightarrow\,\,\,\,\,\,Q + Q' = 0 $$ 
Second case gives, Work done by S = - Work done by S' ,$$W + W' = 0$$
In the general case, where both the energy and the external parameters are changed, the change in mean energy is given by:
differential change in mean energy (dU = d$\langle{E}\rangle$) = change in mean energy due to the external parameters (mechanical work done on the system) + small amount of heat given to the system Q.
This gives us,
dU = dQ + W(on the system)
or, using W (mechanical work done by the system) = -W(on the system), we get $$ dQ = dU + W $$ which is known as the first law of thermodynamics.
If we consider two systems S, S' with energies E, E', then the number of states with energy E is N(E) and with energy E' is N(E'). The probability of both occurring at the same time (where both are independent) follows from $$P(A\cap{B}) = P(A)\cdot P(B)$$
So, the probability for the event with total energy E + E' = const. is, $$P(E) = C N(E) N(E')$$ where C - proportionality constant. 
Maximizing the probability w.r.t. energy, we get, $$ \frac{\partial{P}}{\partial{E}} = 0 \\~\\ \frac{\partial\ln(N(E))}{\partial{E}} = \frac{\partial\ln(N(E'))}{\partial{E'}} $$
(we take the logarithm for mathematical convenience, since the function becomes smoother). Denoting $$ \beta(E) = \frac{\partial\ln(N(E))}{\partial{E}} $$ the condition above reads $\beta(E) = \beta(E')$, and we define this quantity as the inverse of temperature, so that $$ \beta = \frac{1}{kT} $$ We also define another new and useful quantity named entropy, $$ S = k \ln(N(E)) $$
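The equal-$\beta$ condition can be illustrated with a toy model (my own: two Einstein solids sharing Q energy quanta, with the number of states counted by binomial coefficients). The most probable energy split is the one where the finite-difference estimates of $\beta = \partial\ln N/\partial E$ match:

```python
from math import comb, log

# two Einstein solids with M_A and M_B oscillators sharing Q quanta;
# the number of states of a solid holding q quanta is C(q + M - 1, q)
M_A, M_B, Q = 300, 100, 200

def omega(q, M):
    return comb(q + M - 1, q)

# the probability of a split q is proportional to omega_A(q) * omega_B(Q - q)
q_star = max(range(Q + 1), key=lambda q: omega(q, M_A) * omega(Q - q, M_B))

# finite-difference beta = d(ln omega)/dE, with one quantum as the energy unit
beta_A = log(omega(q_star + 1, M_A) / omega(q_star, M_A))
beta_B = log(omega(Q - q_star + 1, M_B) / omega(Q - q_star, M_B))
print(q_star, round(beta_A, 3), round(beta_B, 3))
```

The most probable split lands near $q = 150$ (proportional to the number of oscillators), and the two $\beta$ estimates agree there to within the finite-difference error.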
And thus we see that when thermal equilibrium is attained between any two systems, they should have the same $\beta$ value, or the same temperature. Here comes the zeroth law of thermodynamics, which states that if two systems A, B are each in thermal equilibrium with a third system C, then all three systems are in thermal equilibrium with one another, i.e. A with C and B with C implies A with B. All of them have the same temperature characterized by $\beta$. This temperature has a special name: the absolute temperature.
As a special case, if we consider one system to be a heat reservoir and give it some heat Q, then for the reservoir, using the Taylor expansion, $$ f(x) = f(a) + f'(a) (x-a) + \frac{f''(a) (x-a)^2}{2!}+... $$
Then, with f(x) = ln N(E+Q), where x = E+Q and a = E, $$ \ln {N(E+Q)} = \ln{N(E)} + \frac{\partial\ln{N(E)}}{\partial{E}} Q + \frac{1}{2}\frac{\partial^2\ln{N(E)}}{\partial{E^2}}Q^2 +...\\~\\ = \ln{N(E)} + \beta{Q} + \frac{1}{2}\frac{\partial\beta}{\partial{E}}Q^2 + ...$$
From the definition of a reservoir, we can neglect the higher order terms, and we get, $$ \ln(N(E+Q)) - \ln(N(E)) = \beta{Q}= \frac{Q}{kT} \\~\\ \delta{S} = \frac{Q}{T} $$ If we take an infinitesimal amount of heat, $Q \,\,\rightarrow\,\, dQ$, then we get, $$ dS = \frac{dQ}{T}$$ which is the formal definition of entropy. The third law is mostly based on the physical properties of this entropy as the absolute temperature goes to zero.
Let us analyze the ideal gas. We know that for an ideal gas the number of states is $$ N(E,V) \propto V^N \Phi(E)$$ or $$ \ln{N} = N\ln{V} + \ln{\Phi(E)} + const. $$ The generalized force is defined as, $$ f = \frac{1}{\beta}\frac{\partial\ln{N}}{\partial{s}}$$ where s is an external parameter and f the corresponding generalized force. When s = V we get the mean pressure, $$ \langle{p}\rangle = \frac{1}{\beta}\frac{\partial\ln{N}}{\partial{V}} = \frac{N}{\beta} \frac{\partial\ln{V}}{\partial{V}} $$ since the other terms don't depend on V and give zero.
We finally get, $$ \langle{p}\rangle = \frac{N}{\beta{V}} = \frac{NkT}{V} \\ \langle{p}\rangle V = nRT $$ where
k - Boltzmann's constant,
N - number of particles,
R = (Avogadro's number) × (Boltzmann's constant),
n - number of moles = N/(Avogadro's number).
Specifically, $\Phi(E)$ depends only on the energy, so $\beta$ depends only on E; and since the temperature depends only on $\beta$, for an ideal gas the energy is a function of temperature alone.
Now, we define a new quantity named the specific heat capacity: the measure of the heat required to raise the temperature of the system by one unit while keeping some parameter constant, $$ C_s = \left(\frac{dQ}{dT}\right)_s$$
Let us look at the relation between $ C_p = \left(\frac{dQ}{dT}\right)_p $ and $ C_v = \left(\frac{dQ}{dT}\right)_v$, measured at constant pressure and at constant volume, respectively. Using the first law, $$ dQ = dU + pdV $$ At constant volume, $$dQ_v = dU = C_v\, dT $$ Similarly, from the equation of state of an ideal gas at constant pressure, $$ p\, dV = nR\, dT$$ and $$ dQ_p = C_p\, dT $$ Combining these equations, $$ C_p = C_v + nR$$ and changing to molar specific heat capacities, we get, $$ c_p - c_v = R $$
We can also make a microscopic calculation of the specific heats from our assumption of a monatomic ideal gas, where the interaction between the particles is negligible. It implies the number of states can be written as, $$ N(E,V) = C\, V^N E^{\frac{3N}{2}}$$
where
C - proportionality constant,
V - volume,
E - energy,
N - number of particles.
Then, $$ \ln{N(E,V)} = \ln{C} + N \ln{V} + \frac{3N}{2} \ln{E}$$ but we know, $$\beta = \frac{\partial\ln{N}}{\partial{E}} = \frac{3N}{2E}$$ and it gives us $$ E = \frac{3N}{2\beta} = \frac{3}{2}NkT = \frac{3}{2} n RT$$ where "n" is the number of moles.
In addition, the molar specific heat at constant volume is given by, $$ C_v = \frac{1}{n} \left(\frac{\partial{U}}{\partial{T}}\right)_v$$ since at constant volume $ dQ = dU$; this gives $$ C_v = \frac{3}{2}R$$ for an ideal gas. From this we can find $C_p$ and $\gamma = \frac{C_p}{C_v}$.
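The chain from state counting to the heat capacity can be reproduced symbolically (my own SymPy sketch):

```python
import sympy as sp

E, V, N, k, T, C = sp.symbols('E V N k T C', positive=True)

# ln of the number of states for the monatomic ideal gas, N(E,V) = C V^N E^(3N/2)
lnNum = sp.log(C) + N * sp.log(V) + sp.Rational(3, 2) * N * sp.log(E)

beta = sp.diff(lnNum, E)                         # beta = 3N / (2E)
E_of_T = sp.solve(sp.Eq(beta, 1 / (k * T)), E)[0]
Cv_per_particle = sp.diff(E_of_T, T) / N         # heat capacity per particle

print(E_of_T)            # equals (3/2) N k T
print(Cv_per_particle)   # equals (3/2) k
```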
We can explore more with these. Various situations may arise depending on which two variables are taken as independent. Let us start with the first law, $$ dQ = dU + pdV$$ Using the relation for entropy, $$ dU = T dS - pdV $$ where the independent variables are S, V.
Now, if we take the independent variables as S, P, then we can make use of the Legendre transform, $$ dU = T dS - d(pV)+ VdP \\ d\left(U + pV\right) = T dS + VdP $$ where we call H = U + pV the enthalpy.
$$ dH = T dS + VdP$$ Next, if we choose T and V as the independent variables, then $$ dU = TdS - pdV = d(ST) - pdV - SdT \\ d \left(U-TS\right) = - pdV - SdT \\ dF = -SdT - pdV$$ where F = U - TS is called the Helmholtz free energy.
Finally, if we take the independent variables T, P, then $$ dU = d(TS) - SdT - d(pV) + VdP \\ d \left(U-TS+pV\right) = -SdT + VdP \\ dG = -SdT + VdP$$ where G = U - TS + pV = F + pV is called the Gibbs free energy.

Thursday 11 February 2016

Monopoles - 6 - Dirac Monopoles in Quantum Mechanics - Part - 2

From our previous post Monopoles-5-dirac-monopoles-part-1, we have, acting with the momentum operator on the wave function, $$ -i\hbar \frac{\partial\psi}{\partial{x}} = -i\hbar\,e^{i\beta}\left[\frac{\partial\psi_1}{\partial{x}} + i\psi_1\frac{\partial\beta}{\partial{x}}\right] = e^{i\beta} \left(-i\hbar\frac{\partial}{\partial{x}} + \hbar{k_x}\right)\psi_1$$
With all three components, we learn that if the wave function $\psi$ satisfies any wave equation involving the operator $\hat{p}$, then $\psi_1$ will satisfy the same wave equation with the operator $\hat{p} + \hbar\vec{k}$. Similarly, for the energy operator, $$ H\psi = i\hbar\frac{\partial\psi}{\partial{t}} = i\hbar\left[e^{i\beta}\frac{\partial\psi_1}{\partial{t}} + \psi_1ie^{i\beta}\frac{\partial\beta}{\partial{t}}\right] \\~\\ = e^{i\beta}\left[i\hbar\frac{\partial}{\partial{t}} - \hbar{k_0}\right]\psi_1$$ so the wave function $\psi_1$ satisfies the wave equation with the operator $E - \hbar{k_0}$. The reason we take our wave function in this structure is that it lets us compare with a similar wave equation, which we will encounter in the gauge transformation of the wave function of a free charge in an electromagnetic field.
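The one-dimensional version of the momentum identity is easy to verify with SymPy (my own check):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
psi1 = sp.Function('psi1')(x)
beta = sp.Function('beta')(x)
kx = sp.diff(beta, x)  # k_x = d(beta)/dx

lhs = -sp.I * hbar * sp.diff(psi1 * sp.exp(sp.I * beta), x)
rhs = sp.exp(sp.I * beta) * (-sp.I * hbar * sp.diff(psi1, x) + hbar * kx * psi1)
print(sp.simplify(lhs - rhs))  # 0
```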

Let us look at the Schrodinger equation for a particle with mass "m" and charge "e" in an Electromagnetic field described by its potentials $\vec{A}\,,\,\phi$ as, $$ i\hbar\frac{\partial\psi}{\partial{t}} = H\psi = \frac{1}{2m}\left(\vec{p} - \frac{e\vec{A}}{c}\right)^2\psi + e\phi\psi$$
$$ H\psi = \frac{\vec{p}^2}{2m}\psi -\frac{e}{2mc}\left(\vec{p}\cdot\vec{A} + \vec{A}\cdot\vec{p}\right)\psi +\frac{e^2}{2mc^2}\vec{A}^2\psi +e\phi\psi$$

But wait.. What I have done above is wrong!!

The reason is that I treated the operators as ordinary terms in a multiplication and took the usual product.

Here is the main point when you deal with operators. 
Never work with operators blindly without any function. You can do all operations with operators only after it acts on any function. 

Let us try it again by acting on a function. 
$$ \left(\vec{p} - \frac{e}{c}\vec{A}\right)^2\psi = \left(\vec{p} - \frac{e}{c}\vec{A}\right)\left(\vec{p\psi}-\frac{e}{c}\vec{A\psi}\right) = \left(\vec{p}^2\psi - \frac{e}{c}\vec{p}\cdot\vec{A\psi}-\frac{e}{c}\vec{A}\cdot\vec{p\psi}+\frac{e^2}{c^2}\vec{A}^2\psi\right)$$
But the product rule gives $$ \vec{p}\cdot\left(\vec{A}\psi\right) = -i\hbar\nabla\cdot\left(\vec{A}\psi\right) = -i\hbar\left(\nabla\cdot\vec{A}\right)\psi - i\hbar\,\vec{A}\cdot\nabla\psi = \left(\vec{p}\cdot\vec{A}\right)\psi + \vec{A}\cdot\left(\vec{p}\,\psi\right)$$ where $\left(\vec{p}\cdot\vec{A}\right)$ denotes $\vec{p}$ acting on $\vec{A}$ alone.
So, $$\left[\vec{p}^2\psi - \frac{e}{c}\left(\vec{p}\cdot\vec{A}\right)\psi - \frac{e}{c}\vec{A}\cdot\vec{p}\psi-\frac{e}{c}\vec{A}\cdot\vec{p}\psi+\frac{e^2}{c^2}\vec{A}^2\psi\right]$$ Combining the terms and substituting in the Hamiltonian, we get, $$ H\psi = \frac{1}{2m}\left[\vec{p}^2\psi - \frac{e}{c}\left(\vec{p}\cdot\vec{A}\right)\psi - 2 \frac{e}{c}\vec{A}\cdot\vec{p}\psi+\frac{e^2}{c^2}\vec{A}^2\psi\right] + e\phi\psi$$
or simply, $$H\psi = \frac{\vec{p}^2}{2m}\psi -\frac{e}{2mc}\left(\vec{p}\cdot\vec{A}\right)\psi - \frac{e}{mc}\vec{A}\cdot\vec{p}\psi +\frac{e^2}{2mc^2}\vec{A}^2\psi +e\phi\psi$$
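The corrected expansion can be double-checked with SymPy in one dimension (my own sketch), acting with the operators on an explicit function at every step:

```python
import sympy as sp

x, hbar, e, c = sp.symbols('x hbar e c', positive=True)
A = sp.Function('A')(x)
psi = sp.Function('psi')(x)

def p(f):
    # momentum operator acting on a function
    return -sp.I * hbar * sp.diff(f, x)

# (p - eA/c)^2 psi, applied twice, always acting on a function
lhs = p(p(psi) - e * A * psi / c) - (e * A / c) * (p(psi) - e * A * psi / c)

# expanded form: p^2 psi - (e/c)(p.A) psi - 2(e/c) A (p psi) + (e/c)^2 A^2 psi
rhs = (p(p(psi)) - (e / c) * p(A) * psi - 2 * (e / c) * A * p(psi)
       + (e / c)**2 * A**2 * psi)

print(sp.simplify(lhs - rhs))  # 0
```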
From this, applying a gauge transformation changes the potentials to new values. Accordingly, to maintain the same structure and the same physical results, the wave function should be transformed into a new wave function.

It can be derived by substituting the new potentials in the Hamiltonian and comparing it with the old Hamiltonian. 

The Gauge transformation is given by, $$ \vec{A'} = \vec{A}+\nabla\chi \\~\\ V' = V - \frac{1}{c}\frac{\partial\chi}{\partial{t}}$$ The transformation of the wave function is, $$ \psi' = \psi e^{\frac{ie\chi}{\hbar{c}}}$$ 
If I make the initial Potentials zero, then $$\vec{A} = 0 \,\,\,\,\rightarrow\,\,\,\, \vec{A'} = \nabla\chi\\ V = 0 \,\,\,\,\rightarrow\,\,\,\, V' = -\frac{1}{c} \frac{\partial\chi}{\partial{t}}$$ which says that if $\psi$ satisfies the Hamiltonian where there is no Electromagnetic field i.e. in free space, then $\psi'$ will satisfy the Hamiltonian with the Electromagnetic potentials given by the above relations.  

This is exactly similar to the result we derived at the beginning. Comparing the corresponding equations, we get, $$ \beta = \frac{e\chi}{\hbar{c}}$$ and so, $$ \vec{A'} = \nabla\chi = \frac{\hbar{c}}{e}\left[\frac{\partial\beta}{\partial{x}}\hat{x} +\frac{\partial\beta}{\partial{y}}\hat{y} +\frac{\partial\beta}{\partial{z}}\hat{z}\right] = \frac{\hbar{c}}{e}\vec{k} $$ Similarly, $$ V' = -\frac{1}{c}\frac{\partial\chi}{\partial{t}} = \frac{-\hbar{c}}{e} \frac{1}{c} \frac{\partial\beta}{\partial{t}} = -\frac{\hbar}{e}k_0$$ Thus, our initial wave equation now gets a physical meaning through the corresponding wave equation of a particle in an electromagnetic field described by these potentials.

We will try to analyze completely the physical implications imposed by our potentials in the next post. 

Monopoles - 5 - Dirac Monopoles in Quantum Mechanics - Part - 1

We know that the magnetic vector potential plays a crucial role in the Hamiltonian of an Electromagnetic system, and the Hamiltonian formulation is the basis for carrying equations from classical to Quantum mechanics.   
Moreover, experimental results like the Aharonov-Bohm effect make the magnetic vector potential look inevitable in Quantum mechanics. Together, these underline the importance of the vector potential.

But if we consider the possibility of monopoles, then we may have to reject the concept of the vector potential and redefine our Hamiltonian with new potentials, which in turn may result in contradictions and unexpected consequences. 

To prevent this, one simply rejects at the outset the possibility of an isolated magnetic monopole in our Universe, so that everyone can live in their happy little world with the conventional equations.
But Dirac took a completely different approach: with a new mathematical treatment built on the vector potential, he proposed a magnetic monopole that can co-exist with the vector potential, i.e. without any change in our old potential formalism. 
Moreover, the vector potential itself allows such a particle to exist in nature without any violations. 

Let us look at that approach from his 1931 paper.
First we introduce the wave function in the usual form as, $$ \vert\psi\rangle = Ae^{i\gamma} $$ where $\gamma$ is a function of x, y, z, t, and the amplitude A is also taken as a function of position and time, since we consider the general case. 
Now, the indeterminacy in this wave function can be regarded as the possible addition of an arbitrary constant to the phase. That is equivalent to $$\psi = A e^{i(\gamma + \chi)} = Ae^{i\gamma} e^{i\chi}$$ where $\chi$ is some constant. It doesn't change anything physical about the wave function, because the physical state is unchanged when the wave function is multiplied by a constant phase factor. 
So, a wave function is never determined beyond an arbitrary constant, which can be chosen freely, e.g. to normalize the wave function. 

From this fact, the wave function cannot have a definite phase at each point, only a definite phase difference between points. All that matters is the difference in the value of $\gamma$. But we are not sure whether this difference is unique for any two arbitrary points, as there are many paths from one point to another. We are not even sure whether the integral of this phase change round a closed path will vanish or not.  

In any case, our definition of the phase change should not give rise to ambiguity in the applications of the theory.       
First, it is seen that the phase doesn't change anything in the density function, since $$ \langle\psi\vert\psi\rangle = A^2 e^{-i\gamma} e^{i\gamma} = A^2 $$ is a purely real number that doesn't depend on the value of the phase (the phase cancels against its complex conjugate). 

But, it is no longer the case if we take two different wavefunctions.
$$ \langle\psi_m\vert\psi_n\rangle = \langle\psi_m\vert[c_1\psi_1 + c_2\psi_2 + ...+ c_m\psi_m + ...]\rangle = c_m $$
where I expanded $\psi_n$ in terms of the eigen functions $\psi_m$. 

We know from the postulates of Quantum Mechanics that $\vert{c_m}\vert^2$ gives the probability of finding the state $\psi_m$ in the state $\psi_n$, which Dirac termed the probability of agreement of the two states. 
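As a small numerical illustration of these expansion coefficients, here is a sketch using particle-in-a-box sine eigenfunctions as a concrete orthonormal basis (the basis choice, the coefficients, and the helper names are my own illustration, not from Dirac's paper):

```python
import numpy as np

def trap(y, x):
    # plain trapezoidal rule (kept explicit to stay numpy-version independent)
    return float(np.sum((y[1:] + y[:-1])*np.diff(x))/2)

# particle-in-a-box eigenfunctions as a concrete orthonormal basis
L = 1.0
x = np.linspace(0.0, L, 4001)

def phi(m):
    return np.sqrt(2.0/L)*np.sin(m*np.pi*x/L)

# build a state from known coefficients, with |c1|^2 + |c2|^2 = 1
c1, c2 = 0.6, 0.8
psi_n = c1*phi(1) + c2*phi(2)

# <phi_m|psi_n> recovers c_m; |c_m|^2 is the "probability of agreement"
c1_rec = trap(phi(1)*psi_n, x)
c2_rec = trap(phi(2)*psi_n, x)
print(round(c1_rec, 6), round(c2_rec, 6))  # 0.6 0.8
```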

The limits of the integral in the bra-ket notation needn't be from $(-\infty,\infty)$.
I think this may be the only disadvantage of Dirac notation: the limits are not represented explicitly.

Since the integral depends on the end points, even though the wave functions don't have a definite phase, they should have a definite phase difference between two points (because the integral is a number).

The same physical argument tells us that the change in this phase difference round a closed path should be zero. 

Let us look at it like this: 

say the phase of the wave function $\psi_m$ is $\gamma_m$ and that of $\psi_n$ is $\gamma_n$. Then the phase factor of $ \langle\psi_m\vert\psi_n\rangle $ is $e^{i(\gamma_n - \gamma_m)}$

When we go along the path from one point to another, there is a corresponding change in phase for each wave function respectively $\chi_m\,,\,\chi_n$. 

And so, the new phase difference at this point is $$ e^{i(\gamma'_n -\gamma'_m)} = e^{i[(\gamma_n +\chi_n)-(\gamma_m+\chi_m)]}$$ For different pairs of points, the phase difference can have different definite values. But we know from the physical fact that if we come back to the same initial point, we should have the same probability, and so the same value of the closed-path integral, i.e. $$ e^{i(\gamma'_n-\gamma'_m)} = e^{i(\gamma_n -\gamma_m)} $$ or $$ \gamma_n + \chi_n -\gamma_m-\chi_m = \gamma_n - \gamma_m \\ \rightarrow \chi_n = \chi_m $$ 
which says that the changes in phase of $\psi_m$ and $\psi_n$ round a closed path should be the same. 

Since it is a general result, it can be stated as in Dirac's paper, 
The change in phase of a wave function round any closed curve must be the same for all the wave functions. 

The change in phase says nothing about the nature of the wave function and is not tied to any specific system. So, the change in phase must be a property of the dynamical system itself, or of the force field in which the particle moves. 

For the mathematical treatment, the wave function is expressed as, $$ \psi = \psi_1 e^{i\beta}$$ $\psi_1$ being the usual wave function with definite phase, while the uncertainty in phase is put in the factor $ e^{i\beta}$, where $\beta$ is the same as the $\chi$ we used previously. This $\beta$ does not have a definite value at each point, i.e. it is not a single-valued function of x, y, z, t (because different paths to the same point can give different values of the phase change). But it has definite derivatives at each point (x, y, z, t). 

We represent its derivatives as, $$ k_x =\frac{\partial\beta}{\partial{x}} \\ k_y =\frac{\partial\beta}{\partial{y}}\\k_z =\frac{\partial\beta}{\partial{z}}\\k_0 =\frac{\partial\beta}{\partial{t}}$$ In general these derivatives needn't be integrable, i.e. they need not satisfy the condition $$ \frac{\partial^2\beta}{\partial{y}\partial{x}} = \frac{\partial^2\beta}{\partial{x}\partial{y}}$$ 

Now, using Stokes' theorem, we calculate the change in phase round a closed path as, $$ \oint \vec{K}\cdot\vec{dl} = \int (\nabla\times\vec{K})\cdot\vec{dS} $$ 
where the length and area element is considered in four dimensions, since K has four components. 
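Restricting to the three spatial components, Stokes' theorem itself is easy to verify numerically for a toy field. A sketch with a hypothetical $\vec{K} = (-y/2,\, x/2,\, 0)$, whose curl is $(0, 0, 1)$, integrated round a circle of radius R:

```python
import numpy as np

def trap(y, x):
    # plain trapezoidal rule (kept explicit to stay numpy-version independent)
    return float(np.sum((y[1:] + y[:-1])*np.diff(x))/2)

# toy field K = (-y/2, x/2, 0), whose curl is (0, 0, 1)
R = 1.5
t = np.linspace(0.0, 2.0*np.pi, 4001)
x, y = R*np.cos(t), R*np.sin(t)

# line integral round the circle: K_x x'(t) + K_y y'(t), integrated over t
integrand = (-y/2.0)*(-R*np.sin(t)) + (x/2.0)*(R*np.cos(t))
line = trap(integrand, t)

# surface integral of (curl K).n over the disc = its area times 1
surface = np.pi*R**2
print(abs(line - surface) < 1e-8)  # True
```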

One essential point: we should not assume that all the wave functions have the same phase factor $e^{i\beta}$ merely because they all have the same phase change round a closed path. 

The reason is that only the change in phase round a closed path depends on the curl of the K vector. We can still change the components of K by the gradient of any scalar function and obtain the same phase difference.
   
We can start from here in the next post.    
      

Quantum Harmonic Oscillator - Series Method

With the postulates of Quantum mechanics, we can start analyzing simple problems such as the Harmonic oscillator. The reason for choosing the Harmonic oscillator is that it is one of the simplest classical problems for which we have analysed the complete solution.  
Note: One dimensional harmonic oscillator is considered.

We start, as always, with the
time-independent Schrodinger equation, where the potential is $ \frac{1}{2}kx^2 = \frac{1}{2} m\omega^2 x^2 $, with $ \omega = \sqrt{\frac{k}{m}} $,

$$ \frac{-\hbar^2}{2m} \frac{\partial^2 \psi(x)}{\partial{x^2}} + \frac{1}{2} m \omega^2 x^2 \psi(x) = E \psi(x) \,\,\,...eq.(1)$$

There are two methods to solve this differential equation and first we will look at the usual "Frobenius series method"

Before that, we should make some alterations so that the given equation will be more simplified.
Eq.(1) can be written as, $$ \frac{\partial^2\psi(x)}{\partial{x^2}} + \frac{-2m^2\omega^2x^2}{2\hbar^2}\psi(x) = \frac{-2mE}{\hbar^2}\psi(x)$$ $\rightarrow$ $$ \frac{\partial^2\psi(x)}{\partial{x^2}} + \frac{2mE}{\hbar^2}\psi(x) - \frac{m^2\omega^2x^2}{\hbar^2}\psi(x) = 0 \,\,\,...eq.(2) $$

To allow an infinite series solution (one that includes all powers of x), we would like to non-dimensionalize the above equation. Let us introduce a new dimensionless quantity $\rho = \alpha x$, where $\alpha$ should have the dimension of inverse length. Using this relation, the wave function can be written in terms of $\rho$, and using the chain rule, $$\frac{\partial\psi}{\partial{x}} = \frac{\partial\psi}{\partial\rho} \frac{\partial \rho}{\partial{x}} $$

that gives, $$  \frac{\partial\psi}{\partial{x}} = \frac{\partial\psi}{\partial\rho}\, \alpha $$ Again using chain rule for second derivative, 
$\rightarrow$ $$ \frac{\partial^2\psi}{\partial{x^2}} = \frac{\partial}{\partial\rho} \left(\frac{\partial\psi}{\partial{x}}\right) \frac{\partial\rho}{\partial{x}} = \frac{\partial}{\partial\rho} \left(\alpha \frac{\partial\psi}{\partial\rho}\right) \alpha $$  $\rightarrow$ $$ \frac{\partial^2\psi}{\partial{x^2}} = \alpha^2 \frac{\partial^2\psi}{\partial\rho^2} \,\,\,...eq.(3)$$ 

Applying it in eq.(2), we get, $$ \alpha^2 \frac{\partial^2\psi}{\partial\rho^2} + \frac{2mE}{\hbar^2}\psi - \frac{m^2 \omega^2 \rho^2}{\hbar^2 \alpha^2}\psi = 0 $$

$\rightarrow$ $$ \frac{\partial^2\psi}{\partial\rho^2} + \frac{2mE} {\hbar^2 \alpha^2} \psi - \frac{m^2\omega^2\rho^2}{\hbar^2 \alpha^4}\psi = 0 \,\,\,...eq.(4)$$ 

From dimensional analysis we can see that the only combination of the constants $m,\,\omega,\,\hbar$ giving the dimension of inverse length is $ \alpha = \sqrt{\frac{m\omega}{\hbar}} $. Put another way, we would like to make the coefficient of $\rho^2\psi$ equal to unity so that it is dimensionless. Substituting this value of $\alpha$, we get, $$\frac{\partial^2\psi}{\partial\rho^2} + \frac{2mE\hbar}{\hbar^2 m\omega } \psi - \rho^2\psi = 0 $$ $\rightarrow$ $$ \frac{\partial^2\psi} {\partial\rho^2} + \frac{2E}{\hbar\omega}\psi - \rho^2\psi = 0 $$

Calling $ \frac{2E}{\hbar\omega} = \lambda $, we arrive at our non-dimensionalized equation, $$ \frac{\partial^2\psi}{\partial\rho^2} + (\lambda - \rho^2)\psi = 0 \,\,\,...eq.(5)$$  
where $\lambda$ is also a dimensionless quantity. 

Once we have eq.(5), we would like to find the solution. A closer look at the equation shows that for large values of $\rho$ it reduces to, $$ \frac{\partial^2\psi} {\partial\rho^2} = (\rho^2 - \lambda) \psi \approx \rho^2 \psi \,\,\,...eq.(6) $$

Eq.(6) has the approximate solution, $$ \psi(\rho) = A e^{\frac{-\rho^2}{2}} + B e^{\frac{\rho^2}{2}} $$ Since the wave function should be normalizable, we keep only the decaying piece and promote the constant to a function of $\rho$, $$ \psi(\rho) = u(\rho)\, e^{\frac{-\rho^2}{2}}$$ 
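That the kept piece really behaves this way can be confirmed with sympy: $e^{-\rho^2/2}$ satisfies $\psi'' = (\rho^2 - 1)\psi$ exactly, which for large $\rho$ is just $\psi'' \approx \rho^2\psi$:

```python
import sympy as sp

rho = sp.symbols('rho')
psi = sp.exp(-rho**2/2)

# psi'' = (rho^2 - 1) psi exactly, so psi'' ~ rho^2 psi for large rho
residual = sp.diff(psi, rho, 2) - (rho**2 - 1)*psi
print(sp.simplify(residual))  # 0
```

The growing piece $e^{+\rho^2/2}$ satisfies the analogous identity with $(\rho^2 + 1)$, so both behave like $\rho^2\psi$ asymptotically; only the decaying one is normalizable.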

Applying this as a trial solution in eq.(5) with the corresponding derivatives gives us, $$ \left(\frac{\partial^2u}{\partial\rho^2} - 2\rho \frac{\partial{u}}{\partial\rho} + (\rho^2 - 1)u \right) e^{\frac{-\rho^2}{2}} + (\lambda - \rho^2)\, u\, e^{\frac{-\rho^2}{2}} = 0 $$
$\rightarrow$ $$ \left[\frac{\partial^2u}{\partial\rho^2} - 2\rho\frac{\partial{u}}{\partial\rho} + (\lambda - 1) u\right] e^{\frac{-\rho^2}{2}} = 0 \,\,\,...eq.(7)$$  
Thus we need to find $ u(\rho) $ such that, $$\frac{\partial^2u(\rho)}{\partial\rho^2} - 2\rho\frac{\partial{u(\rho)}}{\partial\rho} + (\lambda - 1) u(\rho) = 0 \,\,\,...eq.(8)$$ 

Eq.(8) is the known Hermite equation for which we know the solutions are Hermite polynomials.

For the solution of Hermite polynomials, see  Hermite Polynomials derivation
Thus our general solution is, $$ \psi_n(\rho) = A_n H_n(\rho) e^{\frac{-\rho^2}{2}} \,\,\,...eq.(9)$$where the condition is that $ \lambda = 2n + 1 $ which implies , $$ E = (n + \frac{1}{2}) \hbar \omega \,\,\,...eq.(10)\\~\\ n = 0,1,2,...$$
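The spectrum $\lambda = 2n+1$ can also be checked numerically, by discretizing eq.(5) with finite differences and diagonalizing the resulting matrix (a sketch; the grid size and box length are arbitrary choices of mine):

```python
import numpy as np

# discretize -psi'' + rho^2 psi = lambda psi on a large box
N, box = 1500, 10.0
rho = np.linspace(-box, box, N)
h = rho[1] - rho[0]

# tridiagonal Hamiltonian: second-difference Laplacian plus the rho^2 potential
H = (np.diag(2.0/h**2 + rho**2)
     + np.diag(-np.ones(N - 1)/h**2, 1)
     + np.diag(-np.ones(N - 1)/h**2, -1))

lam = np.linalg.eigvalsh(H)
print(lam[:4])  # close to [1, 3, 5, 7], i.e. lambda = 2n + 1
```

The lowest eigenvalues come out at the odd integers, i.e. $E_n = (n + \frac{1}{2})\hbar\omega$, as eq.(10) states.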

To determine the constant $ A_n $ we normalize the wave function as, 

$$ \int_{-\infty}^{\infty} |\psi(x)|^2 \, dx = \frac{1}{\alpha}\int_{-\infty}^{\infty} |\psi(\rho)|^2 \, d\rho = 1 $$ since $ \rho = \alpha x $ implies $ dx = d\rho/\alpha $, with the limits still running over $ (-\infty,+\infty) $

We can choose $A_n$ such that it satisfies the following condition, 
$\rightarrow$ $$ \frac{A_n^2}{\alpha} \,\int_{-\infty}^{\infty} H_n^2(\rho) e^{-\rho^2} \, d\rho = 1  \,\,\,...eq.(11)$$
But using the generating function of Hermite polynomials, $$ e^{-x^2 + 2x\rho} = \sum_{n=0}^{\infty}\frac{ H_n(\rho)}{n!} x^n \,\,\,...eq.(12)$$
$\rightarrow$ $$ \int _{-\infty}^{\infty} e^{-x^2 + 2x\rho} e^{-y^2+2y\rho} e^{-\rho^2} \, d\rho = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \frac{x^n y^m}{n!m!} \int_{-\infty}^{\infty} H_n(\rho) H_m(\rho) e^{-\rho^2} \,d\rho \,\,..eq.(13)$$
But the left hand side can be integrated to give, 
$\rightarrow$ $$ \int_{-\infty}^{\infty} e^{-\rho^2+2(x+y)\rho-(x^2+y^2)} \,d\rho = \sqrt{\pi}\, e^{(x+y)^2-(x^2+y^2)} = \sqrt{\pi}\, e^{2xy} \,\,\,...eq.(14)$$
where we used the formula, $$ \int_{-\infty}^{\infty} e^{-ax^2 +bx+ c } \, dx = \sqrt{\frac{\pi}{a}} e^{\left(\frac{b^2}{4a}+c\right)} $$
where the comparison gives that, a = 1 , b = 2(x+y) , $ c = -(x^2+y^2)$ 
Again rewriting eq.(14) in series form,
$$ \sqrt {\pi} \, e^{2xy} = \sqrt{\pi}\, \sum_{n=0}^{\infty} \frac{(2xy)^n}{n!} $$ Using this, eq.(13) becomes, $$\sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \frac{x^n y^m}{n!m!} \int_{-\infty}^{\infty} H_n(\rho) H_m(\rho) e^{-\rho^2} \,d\rho = \sqrt{\pi}\, \sum_{n=0}^{\infty} \frac{(2xy)^n}{n!} $$  
Since the right-hand side contains only equal powers of x and y, comparing coefficients forces the $m\neq n$ integrals to vanish, while for m = n,
$$\sum_{n=0}^{\infty} \frac{x^n y^n}{(n!)^2} \int_{-\infty}^{\infty} H_n^2(\rho)\, e^{-\rho^2} \,d\rho =\sqrt{\pi}\, \sum_{n=0}^{\infty} \frac{(2xy)^n}{n!} $$
Equating equal powers of the series gives, 
$$\int_{-\infty}^{\infty} H_n^2(\rho)\, e^{-\rho^2} \,d\rho = \sqrt{\pi}\,2^n\, n! \,\,\,...eq.(15)$$ and $$\int_{-\infty}^{\infty} H_n(\rho)\,H_m(\rho) \,e^{-\rho^2} \,d\rho = 0 \quad for\,\, m\neq n$$ Substituting eq.(15) in eq.(11) gives, $$ \frac{A_n^2}{\alpha}\int_{-\infty}^{\infty} H_n^2(\rho) \,e^{-\rho^2} \,d\rho = \frac{A_n^2}{\alpha} \,\sqrt{\pi}\, 2^n\, n! = 1$$ which finally gives the value of the constant $$ A_n = \sqrt{\frac{\alpha}{\sqrt{\pi} \, 2^n \, n!}} $$
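Eq.(15) and the orthogonality relation are easy to verify numerically with numpy's physicists' Hermite routines:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval  # physicists' Hermite H_n

def trap(y, x):
    # plain trapezoidal rule (kept explicit to stay numpy-version independent)
    return float(np.sum((y[1:] + y[:-1])*np.diff(x))/2)

rho = np.linspace(-12.0, 12.0, 6001)
w = np.exp(-rho**2)

def H(n):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return hermval(rho, coeffs)

for n in range(6):
    I = trap(H(n)**2*w, rho)                       # eq.(15)
    assert abs(I/(sqrt(pi)*2**n*factorial(n)) - 1) < 1e-9
print(trap(H(2)*H(4)*w, rho))                      # ~ 0 (orthogonality)
```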

That's it! We have arrived at our final solution: the normalized stationary states of the quantum harmonic oscillator are, 

$$ \psi_n(\rho) = \sqrt{\frac{\alpha}{\sqrt{\pi} \, 2^n \, n! }}  \,H_n(\rho) \,e^{\frac{-\rho^2}{2}} \,\,\,...eq.(16)$$
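As a final sanity check of eq.(16): with the $1/\alpha$ factor included, the states come out orthonormal when integrated over x (the value $\alpha = 2$ here is an arbitrary choice for the test):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def trap(y, x):
    # plain trapezoidal rule
    return float(np.sum((y[1:] + y[:-1])*np.diff(x))/2)

alpha = 2.0                        # an arbitrary inverse-length scale
x = np.linspace(-8.0, 8.0, 8001)   # grid in x; rho = alpha*x
rho = alpha*x

def psi(n):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    A_n = sqrt(alpha/(sqrt(pi)*2**n*factorial(n)))
    return A_n*hermval(rho, coeffs)*np.exp(-rho**2/2)

# overlap matrix <psi_m|psi_n>, integrated over x, should be the identity
G = np.array([[trap(psi(m)*psi(n), x) for n in range(4)] for m in range(4)])
print(np.allclose(G, np.eye(4), atol=1e-8))  # True
```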


Hermite Polynomials - Derivation

Let us look into the formal solution of Hermite polynomial equation. 
Hermite differential equation is given by, $$ \frac{d^2y}{dx^2} - 2x \,\frac{dy}{dx} + 2ny = 0 $$
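As a quick consistency check, sympy's built-in Hermite polynomials can be substituted into this equation (shown here for n = 4; any n works the same way):

```python
import sympy as sp

x = sp.symbols('x')
n = 4
y = sp.hermite(n, x)  # 16*x**4 - 48*x**2 + 12

# residual of the Hermite equation y'' - 2x y' + 2n y = 0
residual = sp.diff(y, x, 2) - 2*x*sp.diff(y, x) + 2*n*y
print(sp.expand(residual))  # 0
```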
Using the Frobenius method, we assume an infinite series solution,
 $$ y(x) = \sum_{m=0}^{\infty} C_m x^{m+r}, \quad C_0 \neq 0 $$ Substituting this into our differential equation, it becomes $$ \sum_{m=0}^\infty \left[ (m+r)(m+r-1) C_m x^{m+r-2} + 2 [ n - (m+r)] C_m x^{m+r} \right] = 0 $$ This can equal zero if and only if all the coefficients vanish. 

  Equating to zero the coefficients of:
$$ (1)\; x^{r-2}: \quad r(r-1)\,C_0 = 0 \;\rightarrow\; r=0 \;\, or \;\, r=1, \;\, since \;\, C_0 \neq 0 $$ $$ (2)\; x^{r-1}: \quad (r+1)\,r\,C_1 = 0; \;\, if \; r=0,\; C_1\; is\; arbitrary; \;\, if \; r=1,\; C_1=0 $$ $$ (3)\; x^{m+r}: \quad for \; r=0, \;\; \frac{C_{m+2}}{C_m} = \frac{-2(n-m)} {(m+1)(m+2)}; \quad for \; r=1, \;\; \frac {C_{m+2}}{C_m} = \frac{-2(n-m-1)}{(m+2)(m+3)}$$

  Considering r=0, the general expression for the even coefficients is, 
$$ C_{2s} = \frac{(-1)^s\, 2^s\, n (n-2)\cdots(n-2s+2)}{(2s)!}\, C_0 $$
and the odd coefficients are expressed as,
$$ C_{2s+1} = \frac{ (-1)^s 2^s (n-1) (n-3).....(n-2s+1)}{(2s+1)!} C_1$$
The general solution is therefore,

$$ y(x) = C_0 \left[ 1 + \sum_{s=1}^\infty \frac{ (-1)^s\, 2^s\, n (n-2)\cdots(n-2s+2)}{(2s)!}\, x^{2s}\right] + \\~\\ C_1 \left[ x + \sum_{s=1}^\infty \frac{(-1)^s\, 2^s\, (n-1)(n-3)\cdots(n-2s+1)}{(2s+1)!}\, x^{2s+1} \right] $$   

For r=1, proceeding the same way gives a solution exactly like the second (odd) series of the general solution, only with a different coefficient. Since the coefficients are arbitrary, that series is already contained in the general solution, so we don't need to give it separate attention. 

   That's it, we have arrived at our general solution. Depending on the    
nature of the problem, we can make the series converge or terminate by appropriately choosing the values of $ \; C_0 \;and \;C_1$ 

You can choose the constants in various ways, and all those solutions will obey our Hermite differential equation - it is true, because I have tried!

But not all solutions satisfying our equation are physically meaningful.  

   There is a conventional way of choosing the constants, such that the terms in the series have properties known as orthogonality, completeness, etc. Those specific functions in our series are known as the Hermite polynomials. 

In the general solution, we make the convergence test by taking the ratio of consecutive elements either in the odd series or in the even series, i.e. ratio of the terms $x^{2s}, x^{2s+2}$,
eg.In Even series $$\left|\frac{t_{s+1}}{t_s}\right| = \left| \frac{ C_{2s+2}}{C_{2s}} \right| = \left| \frac{-2 (n-2s)}{ (2s+2) (2s+1)} x^2 \right| $$
As "s" becomes very large,
$$ \lim_{s\rightarrow\infty} \left| \frac{C_{2s+2}}{C_{2s}}\right| = \left|\frac{4s}{4s^2} x^2\right| = \frac{x^2}{s} $$
[this is the same growth as the series for $ e^{x^2} $]. So the series is infinite for all values of x, and the solution is physically meaningful only if the series is terminated to finitely many terms (otherwise it grows like $e^{x^2}$ and cannot give a normalizable wave function). Termination can be achieved by choosing suitable values for "n".
Choosing the other constant as zero, we can always work with either the odd or the even series. Once we pick one of them, we choose the value of "n" such that,

for the even series, choose n = 2s so that the coefficient of $x^{2s+2}$ (and of every higher even power) vanishes. With $ C_1 = 0 $ the general series reduces to, 
     $$y(x) = C_0 \left[ 1 + \sum_{k=1}^\infty \frac{ (-1)^k\, 2^k\, n (n-2)\cdots(n-2k+2)}{(2k)!}\, x^{2k}\right]$$ 
Substituting n = 2s, the sum stops at k = s, and using $ (2s)(2s-2)\cdots(2s-2k+2) = \frac{2^k\, s!}{(s-k)!} $, 
$\rightarrow$ $$ y_{even}(x) = C_0 \left[ 1 + \sum_{k=1}^{s} \frac{ (-1)^k\, 2^{2k}\, s!} {(2k)!\,(s-k)!}\, x^{2k} \right]$$

for the odd series, choose n = 2s+1 so that there are no terms with powers of x greater than 2s+1, and set $C_0 = 0 $; the series then becomes,
$$ y_{odd}(x) = C_1\left[ x + \sum_{k=1}^{s} \frac{ (-1)^k\, 2^{2k}\, s!}{(2k+1)!\,(s-k)!}\, x^{2k+1} \right]$$

Now, the conventional choice is to fix the constants $C_0\,,C_1$ so that the highest power $x^n$ has the coefficient $2^n$. Then,
for n = 2s: $$ C_0\, \frac{(-1)^s\, 2^{2s}\, s!}{(2s)!}\, x^{2s} = 2^{2s}\, x^{2s} $$ which implies, $$ C_0 = \frac{(-1)^s\, (2s)!}{s!} $$
for n = 2s+1: $$ C_1\, \frac{(-1)^s\, 2^{2s}\, s!}{(2s+1)!}\, x^{2s+1} = 2^{2s+1}\, x^{2s+1} $$ $\rightarrow$ $$ C_1 = \frac{(-1)^s\, 2\,(2s+1)!}{s!} $$ 
That is all we need to know!
Now, we will derive the first few Hermite polynomials as an example, 
n=0 (s=0): $$ H_0(x) = C_0 \,\,where\,\, C_0 = \frac{(-1)^0\, (0)!}{0!} = 1 $$ Therefore, $$ H_0(x) = 1 $$
n=1 (s=0): $$ H_1(x) = C_1 x \,\,where\,\, C_1 = \frac{(-1)^0\, 2\,(1)!}{0!} = 2 $$ $\rightarrow$ $$ H_1(x) = 2x $$
n=2 (s=1): $$ H_2(x) = C_0\left[1 + \frac{(-1)\, 2^2\, 1!}{2!\,0!}\, x^2\right] = C_0\,(1 - 2x^2) \,\,where\,\, C_0 = \frac{(-1)\,(2)!}{1!} = -2 $$ $\rightarrow$ $$ H_2(x) = 4x^2 - 2 $$
n=3 (s=1): $$ H_3(x) = C_1\left[x + \frac{(-1)\, 2^2\, 1!}{3!\,0!}\, x^3\right] = C_1\left(x - \frac{2}{3}x^3\right) \,\,where\,\, C_1 = \frac{(-1)\, 2\,(3)!}{1!} = -12 $$ $\rightarrow$ $$ H_3(x) = 8x^3 - 12x $$ 
n=4 (s=2): $$ H_4(x) = C_0\left[1 + \frac{(-1)\, 2^2\, 2!}{2!\,1!}\, x^2 + \frac{2^4\, 2!}{4!\,0!}\, x^4\right] = C_0\left(1 - 4x^2 + \frac{4}{3}x^4\right) \,\,where\,\, C_0 = \frac{(4)!}{2!} = 12 $$ $\rightarrow$ $$ H_4(x) = 16x^4 - 48x^2 + 12 $$
n=5 (s=2): $$ H_5(x) = C_1\left[x + \frac{(-1)\, 2^2\, 2!}{3!\,1!}\, x^3 + \frac{2^4\, 2!}{5!\,0!}\, x^5\right] = C_1\left(x - \frac{4}{3}x^3 + \frac{4}{15}x^5\right) \,\,where\,\, C_1 = \frac{2\,(5)!}{2!} = 120 $$ $\rightarrow$ $$ H_5(x) = 32x^5 - 160x^3 + 120x $$ 
Thus we can find all the Hermite polynomials. 
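As a safeguard against bookkeeping slips in these expansions, the standard low-order Hermite polynomials can be checked against sympy's built-in `hermite`:

```python
import sympy as sp

x = sp.symbols('x')

# the standard physicists' Hermite polynomials H_0 ... H_5
derived = [1, 2*x, 4*x**2 - 2, 8*x**3 - 12*x,
           16*x**4 - 48*x**2 + 12, 32*x**5 - 160*x**3 + 120*x]

for n, poly in enumerate(derived):
    assert sp.expand(sp.hermite(n, x) - poly) == 0
print("all match")
```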


You may ask, "Why this choice of Hermite polynomials?" 
It is because of the special properties these polynomials obey. For example, they can be written in one line using the Rodrigues formula; there is a specific generating function for them, and an orthogonality property under suitable conditions, etc. All of this makes the Hermite polynomials special in practice. 
