Energy Levels for the Hydrogen Atom (from Ph234)



Lecture 14 (October 29, 2008): (version 1.0; 24-Oct-2008::00:00)

The Zeeman Effect (A) [Strong-field case]

Back in Ph234 last spring we derived the Zeeman effect by considering a spin-½ particle in a magnetic field. Let's revisit that, now in terms of a perturbative correction to the energy levels of a hydrogen atom in a strong magnetic field. We take the Coulomb Hamiltonian to be $H_0 = \frac{p^2}{2m} - \frac{e^2}{r}$, and treat the interaction with the B-field as a perturbation. Since each n-shell has an $n^2$-fold orbital degeneracy, we want to choose a basis in which the perturbation is diagonal. The orbital motion gives the electron a magnetic moment, and the electron also has its own intrinsic magnetic moment; each moment has an orientation energy $-\boldsymbol{\mu}\cdot\mathbf{B}$ (see Shankar, pages 287-289), so

$$H_Z = -(\boldsymbol{\mu}_\ell + \boldsymbol{\mu}_s)\cdot\mathbf{B} \qquad (14.1)$$

where

$$\boldsymbol{\mu}_\ell = -\frac{e}{2mc}\,\mathbf{L}, \qquad \boldsymbol{\mu}_s = -\frac{ge}{2mc}\,\mathbf{S} \qquad (14.2)$$

(Recall that g=2 for the electron.) Then

$$H_Z = \frac{eB}{2mc}\,(L_z + 2S_z) \qquad (14.3)$$

where $\mathbf{B} = B\hat{z}$, $g = 2$, and $\mu_B \equiv e\hbar/2mc$ is the Bohr magneton. Then $H_Z$ is diagonal in the $|n\,l\,m_l\,m_s\rangle$ eigenbasis, and

$$E_Z^{(1)} = \frac{eB\hbar}{2mc}\,(m_l + 2m_s) = \mu_B B\,(m_l + 2m_s) \qquad (14.4)$$

The n=1 shell (which has only s-waves) is split into two components, one for each of the 2 values of $m_s = \pm\tfrac12$. One of these will increase with increasing B, and the other will decrease.

But let us focus on the n=2 shell, which contains 8 states (2 s-wave and 6 p-wave). For B large, these will be split into 5 components, 3 of which are doubly degenerate. If we define $m \equiv m_l + 2m_s$ in equation (14.4), these 5 levels correspond to $m = -2, -1, 0, +1, +2$. The doubly degenerate levels are those with $m = -1, 0, +1$. If we include the fine-structure splitting, then each of these 3 levels will be split into 2 components, since the two states in each degenerate pair receive different fine-structure corrections.
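This level counting is easy to verify mechanically. Here is a minimal Python sketch (the tuples (l, m_l, m_s) and the variable names are ours, not from the lecture):

```python
from collections import Counter

# Enumerate the 8 uncoupled n=2 states |l, m_l, m_s> and bin them by
# m = m_l + 2*m_s, which sets the strong-field Zeeman shift in Eq. (14.4).
states = [(l, ml, ms)
          for l in (0, 1)
          for ml in range(-l, l + 1)
          for ms in (+0.5, -0.5)]

levels = Counter(ml + 2 * ms for (l, ml, ms) in states)

print(sorted(levels.items()))
# 5 distinct values of m; m = -1, 0, +1 each occur twice (the doubly
# degenerate levels), while m = -2 and m = +2 occur once each.
```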

The Zeeman Effect (B) [Weak-field case]

But we have already seen, in deriving the fine structure of hydrogen, that in the absence of external fields the relativistic and spin-orbit terms split the n=2 shell into 2 levels, corresponding to $j=\tfrac12$ and $j=\tfrac32$. Moreover, these are not basis vectors in the $|l\,m_l\,m_s\rangle$ basis. Instead, they are basis vectors of the coupled $|j\,m_j\,l\rangle$ basis.

When we add a small magnetic field, we would expect j to remain a good quantum number, with the vector $\mathbf{J}$ precessing slowly about $\mathbf{B}$. More formally, we can use the $|j\,m_j\,l\rangle$ basis as the eigenbasis of

$$H = H_0 + H_{\rm fs} \qquad (14.5)$$

and treat

$$H_Z = \frac{eB}{2mc}\,(L_z + 2S_z) = \frac{eB}{2mc}\,(J_z + S_z) \qquad (14.6)$$

as a perturbation. Then the new energy eigenvalues will be the eigenvalues of (14.5) plus the correction terms

$$E_Z^{(1)} = \frac{eB}{2mc}\,\langle J_z + S_z \rangle \qquad (14.7)$$

Now $\langle J_z \rangle = m_j\hbar$, and $m_j$ is a good quantum number. We can regard J as a vector precessing slowly about B, while L and S precess rapidly about J (see the sketch in Griffiths, page 278). The expectation value of S is then just its projection along J, times the projection of J along B. That is,

$$\langle \mathbf{S} \rangle = \frac{\langle \mathbf{S}\cdot\mathbf{J} \rangle}{\langle J^2 \rangle}\,\langle \mathbf{J} \rangle
\quad\Longrightarrow\quad
\langle S_z \rangle = \frac{\langle \mathbf{S}\cdot\mathbf{J} \rangle}{\hbar^2\, j(j+1)}\, m_j \hbar \qquad (14.8)$$

But

using $\mathbf{S}\cdot\mathbf{J} = \tfrac12(J^2 + S^2 - L^2)$:

$$\langle J_z + S_z \rangle = \left[1 + \frac{j(j+1) - l(l+1) + s(s+1)}{2\,j(j+1)}\right] m_j \hbar \qquad (14.9)$$

where the expression in square brackets is called the Landé g-factor, $g_J$. Then the Zeeman splitting of the fine-structure levels is given by

$$E_Z^{(1)} = \frac{eB}{2mc}\,g_J\,m_j\hbar = g_J\,\mu_B B\,m_j \qquad (14.10)$$

For the n=2 states this gives $g_J = 2$ for $2s_{1/2}$, $g_J = \tfrac23$ for $2p_{1/2}$, and $g_J = \tfrac43$ for $2p_{3/2}$.

This gives a total of 6 different levels: 2 for the $j=\tfrac12$ states ($m_j = +\tfrac12$ and $m_j = -\tfrac12$) and 4 for the $j=\tfrac32$ states ($m_j = \pm\tfrac12, \pm\tfrac32$). These are all eigenvectors in the $|j\,m_j\,l\rangle$ basis (with small correction terms, of higher order in B, that we have not bothered to calculate).
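The bracketed expression in (14.9) is quick to tabulate. A small Python sketch (exact arithmetic via fractions; the function name is ours):

```python
from fractions import Fraction

def lande_g(j, l, s=Fraction(1, 2)):
    """Lande g-factor: g = 1 + [j(j+1) - l(l+1) + s(s+1)] / [2 j(j+1)]."""
    return 1 + (j*(j + 1) - l*(l + 1) + s*(s + 1)) / (2 * j*(j + 1))

# The three fine-structure multiplets of the n=2 shell:
print(lande_g(Fraction(1, 2), 0))   # 2s_1/2 -> 2
print(lande_g(Fraction(1, 2), 1))   # 2p_1/2 -> 2/3
print(lande_g(Fraction(3, 2), 1))   # 2p_3/2 -> 4/3
```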

The Zeeman Effect (C) [Intermediate-field case]

But as B increases, these 6 levels of the weak-field solution must somehow transform into the 5 levels of the strong-field solution. How does this happen? How can we quantitatively describe the transition? Better still, how can we give a unified treatment of all regimes together?

The important thing to realize is that there are 8 different basis vectors (for the n=2 states), and consequently there are 8 energy eigenvalues and 8 energy eigenstates at any B. For low fields, these coincide with the 6 levels of the weak-field solution (2 of which are doubly degenerate). At high fields, they coincide with the 5 levels of the strong-field solution (3 of which are doubly degenerate). But we can calculate these 8 energy levels and eigenvectors at any B, using techniques we have already developed.

We will use the $|j\,m_j\,l\rangle$ basis. There are of course 8 basis vectors: 2 from the $l=0$ states and 6 from the $l=1$ states. We will label and order these as $|1\rangle, \ldots, |8\rangle$ (following the notation in Griffiths, p. 281), writing them first in the $|j\,m_j\rangle$ basis and then in the $|m_l\rangle|m_s\rangle$ basis:

For $l = 0$:
$$|1\rangle = |\tfrac12, +\tfrac12\rangle = |0\rangle|{\uparrow}\rangle, \qquad |2\rangle = |\tfrac12, -\tfrac12\rangle = |0\rangle|{\downarrow}\rangle$$
For $l = 1$:
$$|3\rangle = |\tfrac32, +\tfrac32\rangle = |1\rangle|{\uparrow}\rangle, \qquad |4\rangle = |\tfrac32, -\tfrac32\rangle = |{-1}\rangle|{\downarrow}\rangle$$
$$|5\rangle = |\tfrac32, +\tfrac12\rangle = \sqrt{\tfrac23}\,|0\rangle|{\uparrow}\rangle + \sqrt{\tfrac13}\,|1\rangle|{\downarrow}\rangle, \qquad
|6\rangle = |\tfrac12, +\tfrac12\rangle = \sqrt{\tfrac23}\,|1\rangle|{\downarrow}\rangle - \sqrt{\tfrac13}\,|0\rangle|{\uparrow}\rangle$$
$$|7\rangle = |\tfrac32, -\tfrac12\rangle = \sqrt{\tfrac13}\,|{-1}\rangle|{\uparrow}\rangle + \sqrt{\tfrac23}\,|0\rangle|{\downarrow}\rangle, \qquad
|8\rangle = |\tfrac12, -\tfrac12\rangle = \sqrt{\tfrac13}\,|0\rangle|{\downarrow}\rangle - \sqrt{\tfrac23}\,|{-1}\rangle|{\uparrow}\rangle$$
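The Clebsch-Gordan coefficients in this list can be checked by brute force: build $L$ and $S$ for $l=1$, $s=\tfrac12$ as matrices and verify that a claimed combination, e.g. $|\tfrac32,\tfrac12\rangle = \sqrt{2/3}\,|0\rangle|{\uparrow}\rangle + \sqrt{1/3}\,|1\rangle|{\downarrow}\rangle$, is an eigenvector of $J^2$. A Python sketch (the basis ordering and variable names are ours; $\hbar = 1$):

```python
import numpy as np

s2 = np.sqrt(2)

# Uncoupled l=1 basis, ordered |m_l>|m_s> with m_l = 1, 0, -1 and m_s = up, dn:
# (1,up), (1,dn), (0,up), (0,dn), (-1,up), (-1,dn).  hbar = 1.
Lz = np.kron(np.diag([1.0, 0.0, -1.0]), np.eye(2))
Lp = np.kron(np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]]), np.eye(2))
Sz = np.kron(np.eye(3), np.diag([0.5, -0.5]))
Sp = np.kron(np.eye(3), np.array([[0.0, 1.0], [0.0, 0.0]]))

Jz, Jp = Lz + Sz, Lp + Sp
J2 = Jp.T @ Jp + Jz @ Jz + Jz        # J^2 = J_- J_+ + J_z^2 + J_z

# |3/2, 1/2> = sqrt(2/3)|0, up> + sqrt(1/3)|1, dn>
v = np.zeros(6)
v[2], v[1] = np.sqrt(2 / 3), np.sqrt(1 / 3)

print(np.allclose(J2 @ v, (3 / 2) * (5 / 2) * v))   # j = 3/2: eigenvalue j(j+1)
print(np.allclose(Jz @ v, 0.5 * v))                 # m_j = +1/2
```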

If we neglect both $H_{\rm fs}$ and $H_Z$, then these 8 states are all degenerate, with $E_2 = -13.6\ {\rm eV}/4 = -3.4$ eV. If we now include the effects of $H_{\rm fs}$, then the matrix of $H_0 + H_{\rm fs}$ is diagonal in this basis, with eigenvalues given by

$$E = E_2 + E_{\rm fs}^{(1)}, \qquad E_{\rm fs}^{(1)} = \begin{cases} -5\gamma & (j = \tfrac12) \\ -\gamma & (j = \tfrac32) \end{cases}, \qquad \gamma \equiv \left(\frac{\alpha}{8}\right)^2 (13.6\ {\rm eV}) \qquad (14.11)$$

If we now add $H_Z$ with field B, we find that it is not diagonal in this basis, but it is easy to calculate because we know how it acts in the $|m_l\rangle|m_s\rangle$ basis (where it is diagonal):

$$H_Z\,|m_l\rangle|m_s\rangle = \mu_B B\,(m_l + 2m_s)\,|m_l\rangle|m_s\rangle \qquad (14.12)$$

So for example

$$\langle \tfrac32\,\tfrac12|\,H_Z\,|\tfrac32\,\tfrac12\rangle = \mu_B B\left[\tfrac23\,(0+1) + \tfrac13\,(1-1)\right] = \tfrac23\,\mu_B B \qquad (14.13)$$

Calculating all of the elements of

$$W \equiv H_{\rm fs} + H_Z \qquad (14.14)$$

(see the figure from Griffiths), we find that most elements are zero. To diagonalize $W$, we don't have to diagonalize an $8\times8$ matrix, but only two $2\times2$ matrices:

In the $l=1$, $m_j = \pm\tfrac12$ sectors (bases $\{|\tfrac32, \pm\tfrac12\rangle, |\tfrac12, \pm\tfrac12\rangle\}$), measuring energies relative to $E_2$ and writing $\beta \equiv \mu_B B$:

$$W_{+1/2} = \begin{pmatrix} -\gamma + \tfrac23\beta & -\tfrac{\sqrt2}{3}\beta \\ -\tfrac{\sqrt2}{3}\beta & -5\gamma + \tfrac13\beta \end{pmatrix}, \qquad
W_{-1/2} = \begin{pmatrix} -\gamma - \tfrac23\beta & -\tfrac{\sqrt2}{3}\beta \\ -\tfrac{\sqrt2}{3}\beta & -5\gamma - \tfrac13\beta \end{pmatrix}$$

The remaining 4 states (the two $l=0$ states and the $m_j = \pm\tfrac32$ states) are already eigenstates.

The resulting eigenvalues are given by:

$$\varepsilon_{1,2} = E_2 - 5\gamma \pm \beta \qquad (l=0)$$
$$\varepsilon_{3,4} = E_2 - \gamma \pm 2\beta \qquad (m_j = \pm\tfrac32)$$
$$\varepsilon_{5,6} = E_2 - 3\gamma + \tfrac12\beta \pm \sqrt{4\gamma^2 + \tfrac23\gamma\beta + \tfrac14\beta^2} \qquad (l=1,\ m_j = +\tfrac12)$$
$$\varepsilon_{7,8} = E_2 - 3\gamma - \tfrac12\beta \pm \sqrt{4\gamma^2 - \tfrac23\gamma\beta + \tfrac14\beta^2} \qquad (l=1,\ m_j = -\tfrac12)$$

with $\beta \equiv \mu_B B$.
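As a numerical sanity check, one can diagonalize the two $2\times2$ blocks directly and compare with the closed forms. A Python sketch (working in units of $\gamma$, relative to $E_2$, at one arbitrary field value; the variable names are ours):

```python
import numpy as np

# Energies in units of gamma = (alpha/8)^2 (13.6 eV), measured from E_2;
# beta = mu_B * B in the same units.  beta = 0.7 is an arbitrary test point.
beta = 0.7

# m_j = +1/2 block, basis {|3/2, +1/2>, |1/2, +1/2>} (l = 1):
W_plus = np.array([[-1 + 2 * beta / 3, -np.sqrt(2) * beta / 3],
                   [-np.sqrt(2) * beta / 3, -5 + beta / 3]])
# m_j = -1/2 block, basis {|3/2, -1/2>, |1/2, -1/2>}:
W_minus = np.array([[-1 - 2 * beta / 3, -np.sqrt(2) * beta / 3],
                    [-np.sqrt(2) * beta / 3, -5 - beta / 3]])

# Closed-form eigenvalues:
root_p = np.sqrt(4 + 2 * beta / 3 + beta ** 2 / 4)
root_m = np.sqrt(4 - 2 * beta / 3 + beta ** 2 / 4)
closed_plus = sorted([-3 + beta / 2 - root_p, -3 + beta / 2 + root_p])
closed_minus = sorted([-3 - beta / 2 - root_m, -3 - beta / 2 + root_m])

print(np.allclose(np.linalg.eigvalsh(W_plus), closed_plus))    # True
print(np.allclose(np.linalg.eigvalsh(W_minus), closed_minus))  # True
```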

Plotting these as a function of B gives the curves shown below.

[Figure: the 8 energy levels of the n=2 states plotted as a function of B (cf. the corresponding figure in Griffiths).]

Note how the levels start out as the two fine-structure levels at B=0, then immediately split into 6 levels, and eventually coalesce into the 5 levels (3 of which are doubly degenerate) at the far right.

Shankar Section 17.2: Selection Rules (p.458)

In calculating higher order corrections, and also in handling degenerate perturbation theory even at lowest order, we need to calculate matrix elements between different states, including both diagonal and non-diagonal matrix elements. There are a number of techniques and tricks which Shankar discusses.

We can often show that some of these matrix elements are zero without explicitly calculating them, and this can save a lot of time. Sometimes these “selection rules” can be deduced from symmetry considerations. For example, suppose we are dealing with hydrogen (Coulomb) wavefunctions and need to calculate

$$\langle n'\,l'\,m'|\,Z\,|n\,l\,m\rangle \qquad (14.15)$$

where Z is the position operator along the z-axis. Then the diagonal elements ($n'=n$, $l'=l$, $m'=m$) must vanish by symmetry, because we would be integrating an even function of z ($|\psi_{nlm}|^2$) times an odd function of z (Z itself) over all z, and such an integral vanishes by symmetry. In fact,

$$\langle n'\,l'\,m'|\,Z\,|n\,l\,m\rangle = 0 \quad \text{unless } l'+l \text{ is odd} \qquad (14.16)$$

because the parity of a Coulomb wavefunction (spherical harmonic) is $(-1)^l$, and the parity of Z is $-1$. The integral of any function over all space must vanish if the function is an eigenstate of parity with odd parity. The same is true, of course, for the operators X and Y.

But Shankar starts out with a non-trivial selection rule: if there is any operator $\Omega$ with eigenfunctions $|\omega_i\rangle$ (and eigenvalues $\omega_i$), and if the operator $\Lambda$ satisfies

$$[\Omega, \Lambda] = 0 \qquad (14.17)$$

then

$$\langle \omega_2|\,\Lambda\,|\omega_1\rangle = 0 \quad \text{unless } \omega_2 = \omega_1 \qquad (14.18)$$

It’s natural to test this with spherical harmonics, and a natural choice (which Shankar suggests) is $\Omega = L_z$. Then, since Z is invariant under rotations about the z-axis, we have $[L_z, Z] = 0$, so the rule says

$$\langle l'\,m'|\,Z\,|l\,m\rangle = 0 \quad \text{unless } m' = m \qquad (14.19)$$

The simplest example is $\langle 0\,0|Z|0\,0\rangle$; but this gives zero even though $m' = m$. The same thing happens when we choose $l' = l = 1$, or any other integer! But the rule does not say the integral has to be non-zero when $m' = m$. These integrals must give zero because of the parity selection rule (equation (14.16)), which has nothing to do with the rule we are trying to test. So we need some other examples. The first few spherical harmonics are given below:

$$Y_0^0 = \frac{1}{\sqrt{4\pi}}, \qquad Y_1^0 = \sqrt{\frac{3}{4\pi}}\,\cos\theta, \qquad Y_1^{\pm1} = \mp\sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{\pm i\phi}$$
$$Y_2^0 = \sqrt{\frac{5}{16\pi}}\,(3\cos^2\theta - 1), \qquad Y_2^{\pm1} = \mp\sqrt{\frac{15}{8\pi}}\,\sin\theta\cos\theta\,e^{\pm i\phi}, \qquad Y_2^{\pm2} = \sqrt{\frac{15}{32\pi}}\,\sin^2\theta\,e^{\pm 2i\phi}$$

Now we can try $l'=1,\ m'=0$ and $l=0,\ m=0$, using for $\Lambda$ the angular part of Z, namely $\cos\theta$. This indeed gives a non-zero result:

$$\langle Y_1^0|\cos\theta|Y_0^0\rangle = \int (Y_1^0)^*\,\cos\theta\;Y_0^0\;d\Omega = \frac{1}{\sqrt{3}} \qquad (14.20)$$

Moreover, if we try $l'=1,\ m'=\pm1$ and $l=0,\ m=0$, then we get zero, as the rule predicts (because of the integral over $\phi$).

If we look at more examples, we begin to see that the reason for the vanishing of the matrix elements having $m' \neq m$ is almost always the integral over $\phi$. Each spherical harmonic has a $\phi$-dependence given by $e^{im\phi}$ with m an integer. Since these functions have a domain of $0 \le \phi \le 2\pi$, any two of them are orthogonal unless they have the same value of m. So the rule seems to work for the case where $m' = m$ and $l' \neq l$. But we are unable to test the rule for $l' = l$, because all such matrix elements of Z vanish due to parity alone (independent of the m-values).

We can get around this limitation by choosing a different form for $\Lambda$. In particular, try $\Lambda = Z^2$ (angular part $\cos^2\theta$). This has even parity, so the parity selection rule now allows $l' = l$ (and in fact all states with $l'+l$ even). In particular, we can test the rule on the diagonal elements. Indeed, it works for $\langle Y_0^0|\cos^2\theta|Y_0^0\rangle$; also for $\langle Y_1^m|\cos^2\theta|Y_1^m\rangle$ (for all 3 possible values of m) and for $\langle Y_2^m|\cos^2\theta|Y_2^m\rangle$; all of these give non-zero integrals, because when $m' = m$ the two $e^{im\phi}$ exponents cancel. But the matrix elements are zero when $m' \neq m$, due to the $\phi$ integrals.
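These angular integrals are easy to check numerically. A Python sketch using a simple midpoint rule on the sphere and the explicit low-l harmonics listed above (the grid size and names are ours):

```python
import numpy as np

# Midpoint grid on the sphere: theta in (0, pi), phi in (0, 2*pi).
N = 400
theta = (np.arange(N) + 0.5) * np.pi / N
phi = (np.arange(2 * N) + 0.5) * np.pi / N
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * (np.pi / N) ** 2          # sin(theta) dtheta dphi

Y00 = np.full(TH.shape, 1 / np.sqrt(4 * np.pi), dtype=complex)
Y10 = np.sqrt(3 / (4 * np.pi)) * np.cos(TH) + 0j
Y11 = -np.sqrt(3 / (8 * np.pi)) * np.sin(TH) * np.exp(1j * PH)

def matrix_element(Y_bra, op, Y_ket):
    """<bra| op |ket> integrated over the sphere."""
    return np.sum(np.conj(Y_bra) * op * Y_ket * dOmega)

ct = np.cos(TH)
print(abs(matrix_element(Y10, ct, Y00)))      # ~ 1/sqrt(3): m'=m, l'+l odd
print(abs(matrix_element(Y11, ct, Y00)))      # ~ 0: m' != m
print(abs(matrix_element(Y10, ct, Y10)))      # ~ 0: m'=m, but parity kills it
print(abs(matrix_element(Y11, ct**2, Y11)))   # ~ 1/5: m'=m, even-parity operator
print(abs(matrix_element(Y11, ct**2, Y10)))   # ~ 0: m' != m
```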

So the rule seems to work in all cases, and the mechanism seems to have something to do with the $\phi$-dependence.

Now, let’s look at the formal proof (Shankar, page 458). Sure enough, the rule is quite easy to prove.
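As a preview, the core of the argument fits in one line (a sketch, in the notation of (14.17) and (14.18), taking $\Omega$ Hermitian so that it also acts to the left with its eigenvalue):

```latex
0 = \langle \omega_2 |\, [\Omega, \Lambda] \,| \omega_1 \rangle
  = \langle \omega_2 |\, \Omega\Lambda - \Lambda\Omega \,| \omega_1 \rangle
  = (\omega_2 - \omega_1)\, \langle \omega_2 |\, \Lambda \,| \omega_1 \rangle
```

so the matrix element $\langle \omega_2 |\Lambda| \omega_1 \rangle$ must vanish whenever $\omega_2 \neq \omega_1$.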

(to be continued)
