Recollection

On A New Technique to Solve Ordinary Differential Equations

Over the past two years I have gradually developed a very effective technique for solving ordinary differential equations. It may not be mathematically rigorous; in fact, when I showed it to a mathematics professor he was absolutely freaked out. But it works, and that's all that matters to a physics student. I call it the theory of the differential generator, and here is a rather long summary of it.

Recollection

Conceptual Quantum Mechanics (Part 1)

As part of this recollection, I have gathered here some of my conceptual analysis of Quantum Mechanics.

1. Preliminary

It started when people observed that conventional particles, electrons for example, behave like waves. This hints at a description of particles in wave mechanics, so here we go: we will describe a particle by \psi(x, t). However, it is not so easy to interpret. We could say that, since it is a wave, maybe it has properties similar to a light wave. Since the amplitude of a light wave can be interpreted as a probability amplitude, \psi(x, t) could possibly also be a probability amplitude of some sort.

Being a probability amplitude implies that

|\psi(x, t)|^2 \rightarrow \text{probability density function}

\int|\psi(x, t)|^2dx = 1

In other words, the second equation means that the particle must be somewhere; we call this the normalization condition.
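Just as a sanity check of the normalization condition, here is a short SymPy snippet using a Gaussian wave packet of my own choosing (the specific form of \psi is purely an illustrative assumption, not anything derived above):

```python
import sympy as sp

# Illustrative (assumed) Gaussian amplitude; any normalized psi would do.
x = sp.symbols('x', real=True)
sigma = sp.symbols('sigma', positive=True)

psi = (sp.pi * sigma**2)**sp.Rational(-1, 4) * sp.exp(-x**2 / (2 * sigma**2))

# Normalization condition: the particle must be somewhere.
# psi is real here, so |psi|^2 = psi**2.
total_probability = sp.integrate(psi**2, (x, -sp.oo, sp.oo))
print(sp.simplify(total_probability))  # -> 1
```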

2. Two particles

Now suppose our system has two particles that are far enough apart that there is no interaction between them. The two particles are described by \psi_1(x_1, t) and \psi_2(x_2, t) respectively. One may ask: what is the probability amplitude of finding particle 1 at x_1 and particle 2 at x_2?

It’s not hard to guess that

\psi_{12}(x_1, x_2, t) = \psi_1(x_1, t)\psi_2(x_2, t)

This really follows from the common-sense rule that the probability of two independent events occurring together is the product of the probabilities of the individual events. We can check that it indeed behaves like a probability amplitude.

\int|\psi_{12}(x_1, x_2, t)|^2 dx_1 dx_2=\int|\psi_1(x_1, t)|^2|\psi_2(x_2,t)|^2dx_1dx_2=1
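The same kind of check works for the product amplitude. Below is a small SymPy sketch, again with Gaussians of my own choosing standing in for \psi_1 and \psi_2, showing that the double integral factorizes into two unit integrals:

```python
import sympy as sp

# Two independently normalized (assumed, real) Gaussian amplitudes.
x1, x2 = sp.symbols('x1 x2', real=True)
s1, s2 = sp.symbols('s1 s2', positive=True)

psi1 = (sp.pi * s1**2)**sp.Rational(-1, 4) * sp.exp(-x1**2 / (2 * s1**2))
psi2 = (sp.pi * s2**2)**sp.Rational(-1, 4) * sp.exp(-x2**2 / (2 * s2**2))

# Joint amplitude as a product, as in the text.
psi12 = psi1 * psi2

# Integrating |psi12|^2 over both coordinates gives 1.
norm = sp.integrate(psi12**2, (x1, -sp.oo, sp.oo), (x2, -sp.oo, sp.oo))
print(sp.simplify(norm))  # -> 1
```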

3. Energy conservation

The fact that we can describe a particle by a probability amplitude implies that the information about its energy must somehow be encapsulated in the function. This means

E = E[\psi(x, t)]

Now let's consider a two-particle system as described above. We know the energy of each particle:

E_1=E[\psi_1(x_1, t)]

E_2=E[\psi_2(x_2, t)]

The total energy of the system E is

E=E_1+E_2

On the other hand, if we consider the total probability amplitude of the system, we will have

E=E[\psi_{12}(x_1, x_2, t)]

By comparison we conclude that

E[\psi_1(x_1, t)\psi_2(x_2, t)]=E[\psi_1(x_1,t)]+E[\psi_2(x_2, t)]

This shows that the form of E must follow a peculiar structure. One familiar function that has the above property is the log function.

log(AB)=log(A)+log(B)
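As a tiny symbolic check of this step, the snippet below (my own toy functional, with a an arbitrary real constant) confirms that an energy functional of the form E[\psi] = \log(\psi)/a is indeed additive over products of amplitudes:

```python
import sympy as sp

# Toy check: with psi = exp(a*E), the functional E[psi] = log(psi)/a
# satisfies E[psi1*psi2] = E[psi1] + E[psi2].
a, E1, E2 = sp.symbols('a E1 E2', real=True)

psi1 = sp.exp(a * E1)  # amplitude carrying energy E1
psi2 = sp.exp(a * E2)  # amplitude carrying energy E2

def energy(psi):
    """Toy energy functional suggested by the log property."""
    return sp.log(psi) / a

# expand_log splits the log of a product (valid here: both factors are positive).
lhs = sp.expand_log(energy(psi1 * psi2))   # E[psi1 * psi2]
rhs = energy(psi1) + energy(psi2)          # E[psi1] + E[psi2]
print(sp.simplify(lhs - rhs))              # -> 0
```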

If this is the case, we would expect

E\rightarrow log(\psi(x, t))

\psi(x, t)\rightarrow Ae^{aE}

Since there is no justification for saying that A and a are constants, we expect

\psi(x, t) = A(x, t)e^{a(x, t)E}

It can't be right that the probability distribution depends on the energy exponentially. In other words, the exponential dependence would have to be compensated by an extra term to ensure that the particle exists somewhere, but this obviously contradicts our assumption that all energy dependence comes in the exponential term. It must be that a is an imaginary number. We rewrite it as

\psi(x, t) = A(x, t)e^{ib(x, t)E}

where b is a real function. The energy dependence nicely cancels out when we check our normalization condition

\int|\psi(x, t)|^2dx=\int|A(x, t)|^2dx=1
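Just to make this cancellation explicit, here is a one-line SymPy check (treating b and E as real numbers at a fixed point, and A as a possibly complex amplitude; these stand-ins are my own) that the purely imaginary exponent drops out of |\psi|^2:

```python
import sympy as sp

# With b and E real, the phase exp(i*b*E) has unit modulus,
# so |A * exp(i*b*E)|^2 reduces to |A|^2 and carries no E dependence.
b, E = sp.symbols('b E', real=True)  # stand-ins for b(x, t) and E at a fixed point
A = sp.symbols('A')                  # A(x, t) may be complex

psi = A * sp.exp(sp.I * b * E)
density = sp.simplify(psi * sp.conjugate(psi))
print(density)  # -> A*conjugate(A), i.e. |A|^2
```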

We see that the dependence on t disappears after integration over x. This could only happen either if A does not depend on time, which is physically wrong, or if the dependence on t cancels out in the same way the E term does. In the latter case, one can factor out the dependence on t and absorb it into b, so we have a new form

\psi(x, t)=\phi(x)e^{ic(x, t)E}

Now we want to see how \psi(x, t) changes in time by taking the partial derivative with respect to t.

\frac{\partial\psi(x,t)}{\partial t}=i\phi(c\frac{\partial E}{\partial t}+E \frac{\partial c}{\partial t})e^{icE}=i(c\frac{\partial E}{\partial t}+E \frac{\partial c}{\partial t})\psi

In a closed system the energy does not change with time, so we get

\frac{\partial\psi(x,t)}{\partial t}=iE\frac{\partial c}{\partial t}\psi

We can infer that \frac{\partial c}{\partial t} cannot depend on t because, in an arbitrary system, how \psi changes with time should not depend on when we choose to start timing. We have

\frac{\partial c}{\partial t}=C(x)

The same argument applies in space: in any arbitrary system, the change of \psi should not depend on where we set up our coordinate origin. Therefore we conclude that

\frac{\partial c}{\partial t} = k

where k is a constant. Substituting this result back, we get the expression for the time evolution of the wave function

\frac{\partial}{\partial t}\psi = iEk\psi
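Here is a short SymPy verification of this step, using an arbitrary \phi(x) and writing c(x, t) = kt + g(x) so that \partial c/\partial t = k (the function g is just an illustrative placeholder for the x-dependence):

```python
import sympy as sp

# Verify: for psi = phi(x) * exp(i*c(x,t)*E) with constant E and dc/dt = k,
# the time derivative reduces to i*E*k*psi.
x, t, E, k = sp.symbols('x t E k', real=True)
phi = sp.Function('phi')(x)   # arbitrary spatial part
g = sp.Function('g')(x)       # arbitrary x-dependence inside c

c = k * t + g                 # any c(x, t) with dc/dt = k
psi = phi * sp.exp(sp.I * c * E)

residual = sp.diff(psi, t) - sp.I * E * k * psi
print(sp.simplify(residual))  # -> 0
```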

We can then easily determine the constant k by fitting to experiment. Consider the example of a photon. Classically, a photon is described as the propagation of alternating electric and magnetic fields. However, as proposed by Einstein, the amplitude of the classical electromagnetic field can also be interpreted as the probability amplitude of the photon as a particle. This means that for a free photon, its wave function would be of the following form.

\psi = Ae^{i(k_0x - wt)}

where A is a normalization constant, k_0 is its wave number (not to be confused with the constant k above), and w is the angular frequency of the wave. Substituting this wave function into our expression for the time evolution, we get

\frac{\partial}{\partial t}\psi =-iw\psi= iEk\psi

We know the energy of a photon with angular frequency w is

E=\hbar w

Substituting this expression for the energy into the equation above gives

-iw\psi= iEk\psi

-w\psi=\hbar wk\psi

k=-\frac{1}{\hbar}

Having found the value of k, we have thus derived

\frac{\partial}{\partial t}\psi =-iE\frac{1}{\hbar}\psi

i\hbar\frac{\partial}{\partial t}\psi =E\psi
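As a final check, the plane-wave photon from above can be pushed through this equation directly in SymPy (A, k_0 and w are the same symbols as in the text; hbar is treated as a positive symbol):

```python
import sympy as sp

# Plane-wave check: for psi = A*exp(i*(k0*x - w*t)) and E = hbar*w,
# the result i*hbar*dpsi/dt = E*psi holds identically.
x, t = sp.symbols('x t', real=True)
A, k0, w, hbar = sp.symbols('A k0 w hbar', positive=True)

psi = A * sp.exp(sp.I * (k0 * x - w * t))
E = hbar * w

residual = sp.I * hbar * sp.diff(psi, t) - E * psi
print(sp.simplify(residual))  # -> 0
```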

This is in fact one of the most important results in Quantum Mechanics. Through these arguments, one has to conclude that the emergence of complex numbers is a natural consequence of energy conservation, and in this sense Quantum Mechanics doesn't seem so mysterious at all!

Recollection

Commutation Relations

The commutation relations in quantum mechanics have triggered my interest ever since I discovered the existence and equivalence of two of them. The story is that one day, while I was reading a book on relativity, I started to wonder whether quantum mechanics would be better formulated in a four-vector (space-time) representation. Here is my thought process. The two commutation relations are

[\hat{x}, \hat{p}] = i\hbar

[\hat{H}, \hat{t}] = i \hbar

In the theory of relativity we have

x^a = (ct, x^i)

p^a = m\frac{dx^a}{d\tau} = (\gamma mc, \gamma mv^i) = (E/c, p^i)

The question is: we know that [\hat{x^i}, \hat{p^i}] = i\hbar, but does this relation hold in all four dimensions, i.e. for [\hat{x^a}, \hat{p^a}]? Before that, let's mention a simple fact about commutation relations,

[a\hat{A}, b\hat{B}] = ab[\hat{A}, \hat{B}]
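Both of these can be checked concretely in the position representation, where \hat{p} = -i\hbar\,\partial/\partial x acts on a test function (this representation is a standard assumption I am bringing in here, not something used in the argument above):

```python
import sympy as sp

# Check [x, p] = i*hbar and the scaling rule [a*A, b*B] = a*b*[A, B]
# in the position representation p = -i*hbar * d/dx, acting on a test function f.
x, hbar, a, b = sp.symbols('x hbar a b')
f = sp.Function('f')(x)

def p_op(g):
    return -sp.I * hbar * sp.diff(g, x)

def x_op(g):
    return x * g

# [x, p] f = x(p f) - p(x f)
comm_xp = sp.expand(x_op(p_op(f)) - p_op(x_op(f)))
print(comm_xp)  # -> I*hbar*f(x), i.e. [x, p] = i*hbar

# [a*x, b*p] f versus a*b*[x, p] f
lhs = a * x_op(b * p_op(f)) - b * p_op(a * x_op(f))
rhs = a * b * comm_xp
print(sp.simplify(lhs - rhs))  # -> 0
```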

Imagine we are in the rest frame, i.e. \gamma = 1. Replacing each variable with its corresponding operator and replacing the energy term E with H, I got the following

[\hat{x^0}, \hat{p^0}] = [c\hat{t}, \hat{H}/c] = [\hat{t}, \hat{H}] = -i\hbar

However,

[\hat{x^i}, \hat{p^i}] = i\hbar

They have different signs! This troubled me for some time because it doesn't seem as elegant, until I realized something was wrong with my calculation! Time and space were being treated on completely equal footing; this shouldn't happen because time is uni-directional, unlike space! This inspired me to add an extra factor to put the time coordinate on an equal footing with the three spatial coordinates

x^a = (ict, x^i)

It follows then,

 [\hat{x^0}, \hat{p^0}] = [ic\hat{t}, i\hat{H}/c] = -[\hat{t}, \hat{H}] = i\hbar

[\hat{x^a}, \hat{p^a}] = i\hbar
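The sign bookkeeping above is just the scaling rule applied twice; the little SymPy sketch below redoes that arithmetic, taking [\hat{t}, \hat{H}] = -i\hbar from the earlier calculation as given:

```python
import sympy as sp

# Pull the prefactors out of [x^0, p^0] using [a*T, b*H] = a*b*[T, H],
# with the base commutator [t, H] = -i*hbar taken as given.
hbar, c = sp.symbols('hbar c', positive=True)
base = -sp.I * hbar                  # [t, H]

# Original choice x^0 = c*t, p^0 = H/c: prefactors c and 1/c.
print(sp.simplify(c * (1 / c) * base))              # -> -I*hbar (the mismatched sign)

# Modified choice x^0 = i*c*t, p^0 = i*H/c: prefactors i*c and i/c.
print(sp.simplify((sp.I * c) * (sp.I / c) * base))  # -> I*hbar (matches [x^i, p^i])
```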

The commutation relation indeed holds in all four dimensions, which is not only remarkable but also mysterious: the addition of an imaginary number makes the picture complete.

After I discovered this, I was so excited that I called my friend to tell him about it. He, having taken a graduate course on quantum field theory, told me that this is indeed what people have found, but that instead of solving the problem by introducing an imaginary number, people use the metric tensor to resolve the inconsistency.

We had a big argument which I think is worth mentioning here. My friend insisted that the metric tensor approach is more fundamental because apparently some famous guy said so and all the textbooks are consistent with that, while my solution is just a mathematical trick which bears no fundamental truth beneath it. I did not agree with this statement.

There are no authorities in physics. There are countless cases where orthodox belief has been proven completely wrong. If there is only one thing to be called fundamental, it is experimental truth: unchangeable, unprejudiced, the foundation upon which all theoretical architecture is built.

The metric tensor is really an invention of notation that encapsulates the distinction between time and space in a geometric interpretation. I see no justification for the claim that this point of view is more fundamental than introducing an imaginary number.

I am personally fond of my own notation for the following reasons. It gives rise to a nice equivalence of space and time, which is elegant in its own right: whatever law applies to one dimension applies to every other dimension.

I believe the fact that time travels along an imaginary axis has profound implications. Anything oscillating in space and time can only behave in one of two ways: it either oscillates in space while decaying in time, or oscillates in time while decaying in space. That's how nature works; nothing can happen at all times over all places. Some conservation law seems to be at work, in a mysterious but elegant way.
