


Chapter 5 Big Bang Puzzles

This chapter describes the advent of observational cosmology and the search for the main parameters of the big bang model. We then consider the emergence of several big bang puzzles, notably the problem of fine tuning.

By the 1970s, cosmology had finally become a major research area in physics. There were now four main planks of evidence for the big bang model: the recession of the galaxies (Hubble's law), the abundance of the lightest elements, the distribution of the radio galaxies and the cosmic microwave background. However, all of this evidence was rather qualitative, and a new generation of physicists took on the task of establishing quantitative parameters for the model.

The search for two numbers

One problem with the big bang model is that the relativistic models of Friedmann and Lemaitre describe evolving universes in general; they do not specify which type of universe we live in. Key parameters are not predicted by the model, but are determined by observation and put in by hand. In particular, the two most important variables, the rate of expansion of the universe and the density of matter, are not specified. We recall from chapter 2 that both the fate and the geometry of the universe are decided by a competition between these two parameters1. If the initial density of matter in the universe is below a certain critical value, the expansion overcomes the pull of gravity and the universe expands forever (with the open geometry shown in figure 5); if the matter density is above this value, gravity eventually overcomes the expansion (and the universe exhibits closed geometry). In between these cases lies the special case of a universe with an exact balance between expansion and gravity (flat geometry). This is one reason one talks about models of the expanding universe, rather than a single theory. It was realised early on that these two parameters could only be determined by observation2.

Fig 5 Three possibilities for the universe, depending on the initial values of the rate of expansion and the density of matter

The search for the Hubble constant

Considering the rate of expansion first, it can in principle be estimated directly from the Hubble constant H0, i.e. from the slope of the redshift-distance graph of the galaxies. However, this parameter is very sensitive to errors in our estimates of the absolute distances to the stars, as we saw in chapter 4. From the 1970s onwards, astronomers and cosmologists devoted a great deal of time and energy to the determination of a reliable scale for stellar distance3. A key player in this program was Allan Sandage, Baade's successor at the Mount Wilson observatory. Working with the 200-inch Palomar telescope, Sandage approached the problem using a variety of methods, putting particular emphasis on obtaining extremely accurate measurements of the distances to the nearest galaxy clusters4. However, a serious dispute arose between his estimate of H0 and that obtained by the astronomer Gerard de Vaucouleurs and colleagues at the McDonald observatory of the University of Texas (50 versus 90 km/s/Mpc respectively, almost a factor of two in the difference). This debate lasted right up until the 1990s, when the Hubble Space Telescope (HST) provided a final answer somewhere in the middle (70 km/s/Mpc). Recent methods using the novel technique of gravitational lensing of extremely distant objects have given a value in agreement with the HST estimate5.
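The logic of a Hubble-constant measurement can be illustrated with a toy calculation. The sketch below uses made-up distances and recession velocities rather than real survey data; it simply fits a straight line through a velocity-distance plot and reads off H0 as the slope, in the spirit of the redshift-distance graphs described above.

```python
import numpy as np

# Hypothetical galaxy data: distances in megaparsecs and recession
# velocities in km/s (illustrative numbers, not real measurements).
distances_mpc = np.array([10, 25, 50, 100, 200, 400])
velocities_km_s = np.array([720, 1750, 3400, 7100, 13900, 28100])

# Hubble's law: v = H0 * d, so H0 is the slope of a least-squares
# straight-line fit through the origin.
H0 = np.sum(distances_mpc * velocities_km_s) / np.sum(distances_mpc**2)

print(f"Estimated Hubble constant: {H0:.1f} km/s/Mpc")
```

With these illustrative numbers the fit returns a value close to 70 km/s/Mpc; with real data, the hard part is of course the distance column, which is exactly where the Sandage-de Vaucouleurs dispute arose.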
The search for the density of matter

As regards the density of matter in the universe, we recall first that it is specified in the Friedmann models in terms of the parameter Ω, the ratio of the actual density to the critical density required to close the universe6. The critical density can be calculated from theory (although it depends on the Hubble constant, as you might expect), but how does one determine the actual density of matter at any epoch?

A first guess came from calculations of primordial nucleosynthesis (chapter 3). In order to predict the observed abundances of the lightest elements, the analysis must assume a very low value for the matter density in the infant universe, corresponding to a value for Ω of only a few per cent. However, this constraint applies only to baryonic matter, i.e. particles that take part in nuclear reactions (protons and neutrons). Nucleosynthesis puts no limit on the amount of non-baryonic matter in the universe, and such matter could contribute significantly to the mass of the universe.

A second method was to estimate the mass of all the observable galaxies and galaxy clusters; this is done by considering their gravitational effects on other bodies7. These measurements offered support for a surprising idea that had been around for some time: there appears to be a great deal of matter in the universe that is observable only by its gravitational effect. For example, measurements of the speed of rotation of certain galaxies strongly suggest the presence of nearby matter that is not detectable through telescopes at any wavelength (see figure 6). Such matter is called dark matter, and its existence was first mooted by the Swiss astronomer Fritz Zwicky in the 1930s. Although dark matter was seen as something of a fudge for some years, the phenomenon is now accepted by the vast majority of cosmologists8. However, the nature of the particles that make up dark matter remains unknown9. We shall return to this topic many times; for the moment, we note that, even including the contribution of dark matter, the 'gravitational method' suggested a density value of about Ω = 0.2 for today's universe, again indicating a universe whose expansion overcomes the pull of gravity.

Fritz Zwicky was the first to suggest the existence of dark matter

A third method was to use very distant objects to measure the curvature of space (recall that this is directly related to mass, according to relativity). Such methods also suggested a value of Ω less than 1, but they turned out to be rather error-prone. Another method involved measuring the luminosity of galaxies and using an estimate of the mass-to-light ratio of matter to convert this to a measurement of mass density. This is a rather complicated method10 and we simply note that such measurements suggested a value of Ω = 0.25. All in all, the various observational methods seemed to indicate a value of Ω between 0.1 and 0.3, indicative of an open universe. But where did this value come from? What property of the early universe resulted in a value of Ω ~ 0.3?
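Note 6 gives the relation between the critical density and the Hubble constant, ρc = 3H0²/8πG. The short sketch below works through the arithmetic, assuming the HST value H0 = 70 km/s/Mpc and the observationally favoured Ω ~ 0.3 quoted above; the numbers are for illustration only.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22     # one megaparsec in metres

# Convert H0 from km/s/Mpc to SI units (s^-1).
H0_km_s_mpc = 70.0
H0_si = H0_km_s_mpc * 1000.0 / MPC_IN_M

# Critical density from the Friedmann equation: rho_c = 3 H0^2 / (8 pi G).
rho_crit = 3.0 * H0_si**2 / (8.0 * math.pi * G)
print(f"Critical density: {rho_crit:.2e} kg/m^3")   # roughly 9e-27 kg/m^3

# Actual matter density corresponding to the observationally favoured Omega ~ 0.3.
rho_matter = 0.3 * rho_crit
print(f"Matter density for Omega = 0.3: {rho_matter:.2e} kg/m^3")
```

For H0 = 70 km/s/Mpc the critical density comes out at roughly 9 x 10⁻²⁷ kg/m³, equivalent to only a handful of hydrogen atoms per cubic metre of space.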
The problem of galaxy formation

Another problem soon emerged: how did large-scale structures such as galaxies and galaxy clusters form? The Friedmann-Lemaitre models presume a universe that is both isotropic and homogeneous on the largest scales. As astronomy progressed in the 1960s and 70s, this principle seemed to be supported more and more by observation. On the other hand, the density of matter in a particular galaxy is approximately one million times the density in the cosmos at large. How did this happen? The obvious approach to the study of the formation of the galaxies is to assume that natural infinitesimal fluctuations in the density of the early universe were amplified by the force of gravity, becoming the structures we see today. However, early calculations by Lemaitre, Richard Tolman and Evgenii Lifshitz showed that, in an expanding universe, such fluctuations grow too slowly to give rise to the large-scale structures observed today11. Hence, one must assume that the universe was 'born' with irregularities of just the right size to give rise to the galaxies we see today.

In the 1960s, Yakov Zeldovich and Igor Novikov of Moscow University, and Jim Peebles of Princeton, tried a different approach: working backwards from the present galaxies, they calculated the size of the perturbations necessary to create these structures. This method was greatly helped by measurements of the cosmic microwave background. As we saw earlier, the background radiation is a glimpse of the universe at a particular epoch, the time at which free particles coalesced into atoms (the epoch of recombination, when the universe was a few hundred thousand years old). Essentially, the density fluctuations present at that time were imprinted on the cosmic background radiation, and are measurable as tiny fluctuations in the temperature of the radiation. From these considerations, two competing models of galaxy formation arose: a process of isothermal growth, in which the initial perturbations in the primordial plasma simply grew larger and larger to form galaxies (Peebles et al.), and a process of adiabatic growth, in which the initial perturbations clumped to form large concentrations of matter that later fragmented into galaxies (Zeldovich et al.). However, neither model was entirely successful, not least because they did not take account of matter that does not partake in nuclear reactions12.

Another approach concerned dark matter: if the matter in the universe includes a dark matter component, could the latter have played a role in the formation of the galaxies? One proposed model concerned neutrinos (the lightest known particles; neutrinos travel at almost the speed of light and are included in the category of dark matter because they are extremely weakly interacting). The number of neutrinos in the universe is not constrained by nucleosynthesis calculations, as they are not baryonic (see above). Hence a universe filled with enough neutrinos could give rise to the fluctuations necessary for the formation of galaxies. This theory became known as the hot dark matter model of galaxy formation. However, the model failed when experiments in particle accelerators showed that neutrinos are too light to play a major role in galaxy formation13.

A final possibility was cold dark matter. In this scenario, dark matter particles that are heavier and slower than neutrinos might provide a mechanism for the formation of galaxies. In 1982, Jim Peebles showed that this hypothesis could indeed produce the measured fluctuations in the cosmic microwave background14. To this day, the hypothesis of cold dark matter predicts a spectrum for the microwave background that is in good accord with detailed measurements from satellite telescopes, as we shall see in chapter 7. Unfortunately, the model does not specify the nature of the particles making up the cold dark matter!
The problem of baryon number

Another puzzle concerned the number of baryons in the universe. Recall that a baryon is the name given to particles that partake in nuclear reactions, i.e. protons and neutrons. In the big bang model, one expects the total number of baryons in the universe to remain constant as the universe expands15. One also expects the number of photons (the particles that make up radiation) to remain constant. Hence the ratio of photons to baryons is a characteristic parameter that was fixed early in our universe, and in fact it is measured as about a billion photons for every baryon. But what determined this ratio? Was the universe born with this characteristic?

We note in passing that the problem is related to another puzzle. Considerations from particle physics suggest that the photon-to-baryon ratio is indicative of a deeper asymmetry: an asymmetry in the ratio of baryons to anti-baryons in the early universe (anti-baryons are the antimatter counterparts of baryons; see the Appendix). It can be shown that if the early universe was exactly symmetric with respect to matter and anti-matter, the photon-to-baryon ratio would be of the order of 10¹⁸, in gross conflict with the empirical value above. Hence, the measured ratio points to a mechanism in the early universe that caused a slight asymmetry between matter and antimatter. That such a mechanism exists is no surprise, as we live in a universe composed almost entirely of matter, not antimatter. But what caused this asymmetry between matter and anti-matter in the early universe? Again, the big bang model tells us nothing about such initial conditions.
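The rough size of this ratio can be checked with a back-of-the-envelope calculation. The sketch below assumes the standard present-day CMB temperature of 2.725 K and a baryon density of about five per cent of the critical density (the kind of low value suggested by the nucleosynthesis argument earlier); it uses the standard blackbody formula for the photon number density, which is not derived in the text.

```python
import math

# Physical constants (SI units)
K_B = 1.381e-23       # Boltzmann constant, J/K
HBAR = 1.055e-34      # reduced Planck constant, J s
C = 2.998e8           # speed of light, m/s
M_PROTON = 1.673e-27  # proton mass, kg
ZETA_3 = 1.202        # Riemann zeta(3), appears in the blackbody photon count

# Photon number density of the cosmic microwave background at T = 2.725 K.
T_CMB = 2.725
n_photons = (2 * ZETA_3 / math.pi**2) * (K_B * T_CMB / (HBAR * C))**3
print(f"CMB photons per cubic metre: {n_photons:.2e}")   # ~4e8

# Baryon number density, assuming baryons make up about five per cent
# of the critical density (illustrative value from nucleosynthesis).
rho_crit = 9.2e-27                   # kg/m^3, for H0 ~ 70 km/s/Mpc
n_baryons = 0.05 * rho_crit / M_PROTON
print(f"Baryons per cubic metre: {n_baryons:.2f}")       # ~0.3

# The photon-to-baryon ratio comes out at roughly a billion to one.
print(f"Photons per baryon: {n_photons / n_baryons:.1e}")
```

The output is of order 10⁹, in line with the figure of about a billion photons for every baryon quoted above.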
The problem of fine tuning

All of the parameters above share an important feature: they are not predicted by the Friedmann-Lemaitre models, but have to be assumed. This is known as the problem of initial conditions, or the fine tuning problem. Why should the Hubble constant have the value it does? What determined the initial value of the density of matter? What determined the ratio of photons to baryons? What caused the fluctuations in matter density that gave rise to today's galaxies? While the four planks of evidence for the big bang model stood on solid ground, it was unsatisfactory that so many parameters of the model had to be assumed, rather than predicted by the theory. This led some to wonder whether the model was incomplete, and an old puzzle lent weight to this idea.

The problem of the singularity

A final big bang puzzle was the old conundrum of the singularity. As we saw in chapter 3, physicists abhor singularities as they do not give good descriptions of the real world. Yet in the case of the Friedmann-Lemaitre models, backtracking along the graph brings one to a universe that is infinitely small and infinitely dense, which seems rather unreasonable. The big bang universe does not make sense at 'time zero'! This problem was sidelined for some years; after all, it is not unusual for perfectly good theories to break down at some point (note that Newtonian gravity also contains a singularity16). It is interesting that Einstein himself warned of the dangers of extrapolating relativistic models back to a universe of atomic dimensions. However, in the 1970s, Stephen Hawking and Roger Penrose published a number of theorems suggesting that an expanding universe must begin in a singularity, assuming only very general conditions17. This development brought the problem of the singularity once more to the fore.

How does one resolve the puzzle of the singularity? The key undoubtedly lies in the realm of quantum physics. The Friedmann-Lemaitre model is rooted in general relativity, a classical theory that takes no account of quantum effects, yet one can certainly expect quantum effects to become important in a universe of atomic dimensions18. Hence we cannot expect to have a reliable description of the origin of the universe until we have a version of general relativity that incorporates quantum physics. Much effort has been devoted to achieving this synthesis (notably by Hawking), but it has proved elusive. It is a remarkable fact that the two great pillars of modern physics, general relativity and quantum physics, have so far proved irreconcilable. One consequence of this failed marriage is that the big bang model is an effective model, not a complete one. We note once more that the moniker 'big bang' is a terrible misnomer, as the model fails precisely at the bang itself. That said, we shall see in the next chapter that a radical new version of the big bang model was to cast some light on this great puzzle…

Notes

1. This is something of a simplification, as it assumes the cosmological constant is zero.
2. The astronomer Allan Sandage wrote a famous paper titled 'Cosmology: the search for just two numbers', indicative of this approach to observational cosmology.
3. There is a great description of this work in the book 'The Cosmological Distance Ladder' (Rowan-Robinson, 1985).
4. Professor Sandage passed away a few months before this book was written; a very nice overview of his life and work can be found in his obituary in Physics Today, May 2011.
5. The gravitational lensing of quasars provides a fascinating method of measuring stellar distance (Kundic et al., 1997).
6. In other words, Ω = ρ/ρc, where ρ is the actual density of matter (a variable) and ρc is the critical density (a constant). The critical density is related to the Hubble constant by ρc = 3H0²/8πG, where G is the gravitational constant.
7. For example, the mass of our sun can be estimated from measurements of the earth's orbit.
8. There is no law that dictates that all matter be visible, i.e. interact with the electromagnetic force. It should be noted that one alternative is that our laws of gravity are wrong, a theory known as Modified Newtonian Dynamics or MOND. However, this theory has lost support in recent years as modern observations support the existence of dark matter at every scale (see Rowan-Robinson, 1985).
9. The particles are known as WIMPs, or weakly interacting massive particles. There are a number of candidates. A good review can be found in .........
10. There is a good description of this method in 'The Cosmic Century' (Longair, 2006).
11. Ibid.
12. There is a good overview of early work on attempts to explain large-scale structure in Longair (2006), chapter 14.
13. Ibid.
14. See Longair (2002), chapter 14.
15. These considerations arise from the principle of the conservation of energy (the alternative is Hoyle's creation field).
16. Newton's law of gravity explodes as the separation between two masses approaches zero.
17. The original paper was Hawking and Penrose (1969). See Hawking and Ellis (1973) for a less technical account, or indeed Hawking (1988).
18. Quantum effects become significant at atomic dimensions, and are expected to dominate in a universe below this size.