A guide to InSAR for Geophysics



Contents

1. Introduction to Synthetic Aperture Radar

2. The idea of 'phase'

3. Basic concepts for InSAR over a spherical earth

4. Basic concepts for measuring topography

5. Basic concepts for measuring surface deformation

6. How to make an interferogram - initial steps

7. Baselines and Orbits

8. Phase unwrapping

9. Phase gradients

10. Quantifying displacement

11. Atmospheric Effects

12. Limitations, advantages and resolution

13. Applications to neotectonics

14. Obtaining SAR data

1. Introduction to Synthetic Aperture Radar

Conventional radar ('RAdio Detection And Ranging') remote sensing works by illuminating the ground with electromagnetic waves of microwave frequency, and using the amplitude of the returned signal, together with the time between transmitting a pulse and receiving its echo, to deduce the distance from the sensor to each image pixel on the ground. These distances are used, along with orbital information, to produce a 2D image of the ground that looks similar to an aerial photograph.

[pic]

Figure 1 The electromagnetic spectrum (from the Atlantis Scientific Webpage).

Various frequencies of electromagnetic energy in the microwave range can be used (see Figure 1), corresponding to the three commonly used bands, L, C and X (see Table 1). There are tradeoffs between these frequencies. C band, for example, loses coherence more easily than L band, but its shorter wavelength also makes it roughly four times more sensitive to range change than L band (which would not be accurate enough to detect small fault displacements) (Massonnet, 1995).

|Band |Frequency (GHz) |Wavelength (cm) |
|L    |1-2             |30-15           |
|C    |4-8             |7.5-3.75        |
|X    |8-12            |3.75-2.5        |

Table 1 Radar band characteristics.
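
As a quick sanity check on Table 1, wavelength is just the speed of light divided by frequency. A minimal Python sketch (the band edges are those in the table):

```python
# Numerical check of Table 1: wavelength = c / frequency.
C_LIGHT = 299_792_458.0  # speed of light (m/s)

bands = {"L": (1e9, 2e9), "C": (4e9, 8e9), "X": (8e9, 12e9)}  # Hz

for name, (f_lo, f_hi) in bands.items():
    # the lowest frequency gives the longest wavelength, and vice versa
    print(f"{name} band: {C_LIGHT / f_lo * 100:.2f}-{C_LIGHT / f_hi * 100:.2f} cm")
```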

Radar is an 'all-weather' system, since its electromagnetic waves can penetrate cloud. Since the energy comes from the system itself (it is an 'active' remote sensing system), rather than from the sun (which is where 'passive' systems get their illumination), it can also be used at night.

The larger the radar antenna, the better the resolution will be. The 'synthetic' part of Synthetic Aperture Radar (SAR) therefore comes from the use of the movement of the satellite (and more complicated processing techniques involving the Doppler history of the radar echoes) to create a 'synthetically' larger antenna and hence improved resolution. Typical pixel spacing in space-based SAR is 20-100 m within a swath around 100 km wide (see Figure 2 for notation).
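
To see why the synthetic aperture matters, compare the azimuth resolution of a real aperture (the beam footprint, ρλ/L) with that of focused SAR (roughly L/2, independent of range). A minimal sketch, assuming ERS-like numbers:

```python
# Real-aperture azimuth resolution is the beam footprint rho*lambda/L;
# a focused SAR achieves roughly L/2 regardless of range.
wavelength = 0.0566   # m (C band, ERS-like)
slant_range = 850e3   # m
antenna_len = 10.0    # m (assumed antenna length)

real_aperture_res = slant_range * wavelength / antenna_len
sar_res = antenna_len / 2.0

print(f"real aperture: {real_aperture_res / 1e3:.1f} km")   # ~4.8 km
print(f"synthetic aperture: {sar_res:.1f} m")               # 5 m
```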

[pic]

Figure 2 Radar geometry and notation.

References

Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)

Synthetic Aperture Radar Interferometry to Measure Earth's Surface Topography and its Deformation

Annu. Rev. Earth Planet. Sci., Vol. 28, pp. 169-209

Madsen and Zebker (1998)

Imaging Radar Interferometry; in Principles and Applications of Imaging Radar, Manual of Remote Sensing; American Society of Photogrammetry and Remote Sensing, Chapter 5, pp.270-358

Massonnet, D. (1995)

Application of Remote Sensing Data in Earthquake Monitoring.

Adv. Space Research, Vol.15, No.11, pp.1137-1144

Atlantis Scientific's Webpage:



2. The idea of 'phase'

The Theory

A processed SAR signal is made up of two things - an amplitude and a phase - which are represented together by a complex number. The phase is not used in traditional SAR studies, as it is influenced by so many things that it appears random over a SAR image, and the things that influence it are very difficult to quantify. In fact, the phase information was usually completely destroyed by speckle removal techniques (used to make the radar image look more like an aerial photo), which average the amplitudes of neighboring pixels.

When the radar sensor sends out a pulse of electromagnetic energy, an integer number of complete wavelengths travels to and from the target, plus a final, incomplete fraction of a wavelength; this final fraction, expressed as an angle, is what is termed the 'phase'. Phase is measured in radians or degrees, and 2π radians of phase make up one phase cycle. The phase is therefore a quantity that is directly related to the range from the satellite to a pixel on the ground.
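
The relationship between range and wrapped phase is easy to see numerically. A minimal sketch, assuming a C-band wavelength and a two-way path:

```python
import numpy as np

# Two-way path length in wavelengths, split into whole cycles plus the
# left-over fraction: only the fraction (the wrapped phase) is observable.
wavelength = 0.0566             # m (C band)
rho = 850_000.123               # slant range (m), assumed example value

cycles = 2 * rho / wavelength              # round-trip distance in wavelengths
wrapped_phase = 2 * np.pi * (cycles % 1)   # the final, incomplete cycle (rad)

print(f"{cycles:.3f} cycles -> wrapped phase {wrapped_phase:.3f} rad")
```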

The most important influences on phase are:

Useful:

• Topography

• Geometric displacement of the targets within each pixel (i.e. surface deformation).

Not useful (to us, anyway?!):

• Earth curvature

• The reflection of the signal by scatterers in the pixel (rocks, plants, etc).

• Change in phase delay caused by differences in the position of these scatterers within each pixel.

• Orbit error

• Atmospheric (ionospheric and tropospheric) delay

• Phase noise (from the radar system).

This is quite an intimidating list, and we shall see that the focus of anyone trying to use InSAR for research in their field will be to isolate just one of these contributions, reducing or eliminating the other influences on phase through mathematical manipulation and calibration. Some of the effects, such as earth curvature, are much easier to deal with than effects such as tropospheric delay, but the closer we get to measuring extremely small effects on phase (for example interseismic strain), at levels smaller than a single fringe, the harder it becomes to accurately remove the effects we don't want.

If two images were taken of exactly the same target from exactly the same satellite position, and the ground had not changed, then the phase for each would be the same. This implies that if two images are taken of exactly the same target from slightly different positions, and the difference between the phases for each pixel is calculated, the unwanted effects on phase can be removed, leaving only the useful quantities. This combining of images gives us 'Interferometric Synthetic Aperture Radar', known by the acronym InSAR.

The Math

Representing Phase as a Complex Number

The radar wave can be viewed as shown in Figure 3. From this we can see that the wave can be defined as $y = A\sin\theta$, where A is the amplitude and θ is the phase angle in radians. We get this just from simple trig - remember that you get a similar curve if you plot a graph of $y = \sin x$.

[pic]

Figure 3 The electromagnetic wave as a function of amplitude and phase

If we translate these parameters onto real and imaginary axes, we get Figure 4. In this view, θ goes round and round the circle with increasing numbers of waves, increasing by 2π each time it completes the loop. If the wave is incomplete, the circle will not be completed, and all the preceding full loops of 2π plus the partial value of θ will be our phase. We can derive the following equations for the real and imaginary components, and hence the amplitude, by looking at the geometry in the circle:

$\mathrm{Re} = A\cos\theta, \quad \mathrm{Im} = A\sin\theta$

$A = \sqrt{\mathrm{Re}^2 + \mathrm{Im}^2}$

We can also, most importantly, get an equation for phase, θ:

$\theta = \tan^{-1}\!\left(\frac{\mathrm{Im}}{\mathrm{Re}}\right)$

[pic]

Figure 4 The electromagnetic wave viewed in complex format.
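
In code, this complex representation is exactly how SLC pixels are stored and manipulated. A small sketch (the amplitude and phase values are assumed examples):

```python
import numpy as np

# One SAR pixel as a complex number: z = A(cos(theta) + i*sin(theta)).
A, theta = 3.0, 0.75                 # assumed amplitude and phase
z = A * np.exp(1j * theta)

print(np.abs(z))                     # amplitude: sqrt(Re^2 + Im^2) -> 3.0
print(np.angle(z))                   # phase: arctan2(Im, Re) -> 0.75
```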

Relating Phase to Range

The one-way distance, in wavelengths, between a radar antenna and a point on the surface is $\rho/\lambda$, where ρ is the range and λ is the wavelength of the system. Multiplying this by 2π (the phase per wavelength) gives the total phase over this distance: $\phi = \frac{2\pi}{\lambda}\rho$. For two-way travel this is $\phi = \frac{4\pi}{\lambda}\rho$.

We can also translate these equations to find phase difference. The difference in range between two radar antennas and a point on the surface, in wavelengths, is $\delta\rho/\lambda$, where δρ is the difference in distance and λ is the wavelength. Multiplying by 2π gives the phase difference as $\phi = \frac{2\pi}{\lambda}\delta\rho$ for one-way travel, or $\phi = \frac{4\pi}{\lambda}\delta\rho$ for two-way travel (the repeat-pass case used in the chapters that follow).
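
Inverting the two-way relation shows how little range change one fringe represents. A minimal sketch:

```python
import numpy as np

# One fringe (2*pi of phase difference) corresponds to lambda/2 of two-way
# range change: about 2.8 cm at C band.
wavelength = 0.0566                      # m

def phase_to_drho(phi_unwrapped):
    """Range difference (m) from unwrapped two-way phase difference (rad)."""
    return phi_unwrapped * wavelength / (4 * np.pi)

print(phase_to_drho(2 * np.pi))          # -> 0.0283 m per fringe
```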

References

Gabriel, Andrew K., Richard M.Goldstein and Howard A.Zebker (1989)

Mapping Small Elevation Changes Over Large Areas: Differential Radar Interferometry

Journal of Geophysical Research, Vol.94, No.B7, pp.9183-9191

Rosen, Paul A., Scott Hensley, Ian R. Joughin, Fuk Li, Soren N. Madsen, Ernesto Rodriguez and R.M.Goldstein (1999)

Synthetic Aperture Radar Interferometry

Proceedings of the IEEE, Unpublished manuscript

3. Basic concepts for InSAR over a spherical earth

The Theory

The natural variation of the line-of-sight (LOS) vector across a scene, caused by the shape of the earth, introduces a gradient of phase across the scene (see Figure 5). The effects of a spherical earth must therefore be removed by 'flattening' the interferogram. We do this by calculating what the gradient of phase would be assuming that the earth is a sphere with no topography and no surface deformation, and subtracting this from the interferogram. The math behind the removal of phase caused by the shape of the earth is a good introduction to the concepts of interferometry.

[pic]

Figure 5 Fringes over a spherical earth (exaggerated for effect).
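
Flattening amounts to predicting this phase ramp from the imaging geometry and subtracting it. A minimal sketch, using the flat-earth expression $\phi = -\frac{4\pi}{\lambda}B\sin(\theta-\alpha)$ derived in 'The Math' below; the baseline values are invented for illustration:

```python
import numpy as np

# Predict the phase a featureless spherical earth would produce across the
# swath (the look angle varies with range), then subtract and rewrap.
wavelength = 0.0566                     # m (C band)
B, alpha = 150.0, np.deg2rad(10.0)      # assumed baseline length (m) and tilt

theta = np.deg2rad(np.linspace(19.0, 26.0, 1024))  # look angles across swath
phi_flat = -(4 * np.pi / wavelength) * B * np.sin(theta - alpha)

phi_ifg = phi_flat                      # stand-in interferogram: ramp only
phi_flattened = np.angle(np.exp(1j * (phi_ifg - phi_flat)))  # ~0 everywhere
```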

The Math

What follows are the calculations for obtaining δρe. They follow the steps of, for example, Zebker et al. (1994), Price and Sandwell (1998) and Price (2001, Chapter 1), but I have expanded them in tedious detail.

[pic]

Figure 6 Diagram of geometry for InSAR over a spherical earth. A1 and A2 represent the two sensor positions. ρ is the range from A1, ρ+δρ is the range from A2. B is the baseline length, with components perpendicular and parallel to the look direction denoted $B_\perp$ and $B_\parallel$. θ is the look angle (the angle between the radar ray and the vertical) and α is the angle between the baseline and the horizontal.

For a case with two satellite positions and no topography, the phase difference φ will only have a contribution from the difference in range over the scene, δρe, caused by a spherical earth:

$\phi = \frac{4\pi}{\lambda}\,\delta\rho_e$

Remember the Cosine Rule:

$c^2 = a^2 + b^2 - 2ab\cos C$

... and set the Cosine Rule up for the geometry in Figure 6:

$(\rho + \delta\rho_e)^2 = \rho^2 + B^2 - 2\rho B\cos(90^\circ - \theta + \alpha)$.

Through a collection of trigonometric manipulations (which you could skip reading), we can simplify the right hand side of this equation:

The trig law

$\cos(a + b) = \cos a\cos b - \sin a\sin b$

means that we can write

$\cos(90^\circ - \theta + \alpha) = \cos(90^\circ - \theta)\cos\alpha - \sin(90^\circ - \theta)\sin\alpha$ (**)

This can be simplified further by using the trig laws

$\cos(90^\circ - \theta) = \sin\theta$

$\sin(90^\circ - \theta) = \cos\theta$

to write

$\cos(90^\circ - \theta)\cos\alpha = \sin\theta\cos\alpha$

$\sin(90^\circ - \theta)\sin\alpha = \cos\theta\sin\alpha$.

This means that we can write Equation ** as

$\cos(90^\circ - \theta + \alpha) = \sin\theta\cos\alpha - \cos\theta\sin\alpha = \sin(\theta - \alpha)$

Substituted into the original equation, this gives

$(\rho + \delta\rho_e)^2 = \rho^2 + B^2 - 2\rho B\sin(\theta - \alpha)$

Expanding the left hand side gives:

$\rho^2 + 2\rho\,\delta\rho_e + \delta\rho_e^2 = \rho^2 + B^2 - 2\rho B\sin(\theta - \alpha)$

Now we start making approximations to simplify things further. δρe is very small, so δρe² is going to be so tiny that we can ignore it. We can also subtract ρ² from both sides:

$2\rho\,\delta\rho_e = B^2 - 2\rho B\sin(\theta - \alpha)$

Then, by dividing each side by 2ρ, we get:

$\delta\rho_e = \frac{B^2}{2\rho} - B\sin(\theta - \alpha)$

Now we make another assumption - for spaceborne geometries, ρ is going to be very large compared to B (approximately 800 km compared to less than several hundred meters), so we can pretty much pretend that the paths from the two antennas are parallel. This idea was introduced by Zebker and Goldstein (1986), and is now termed the 'parallel ray approximation'. It means that the $B^2/2\rho$ term, in the scale of things, can be ignored, which gives:

$\delta\rho_e \approx -B\sin(\theta - \alpha)$

Now look closely at the diagram again. We can see that $B_\parallel$, the component of the baseline vector that is parallel to the look direction of the reference satellite, can be described as $B_\parallel = B\cos(90^\circ - \theta + \alpha)$, which we know from before can be simplified to $B_\parallel = B\sin(\theta - \alpha)$. For completeness, we can also see that $B_\perp = B\cos(\theta - \alpha)$. The range difference is therefore simply $\delta\rho_e \approx -B_\parallel$, and combining this range-baseline relationship with the equation for phase difference ($\phi = \frac{4\pi}{\lambda}\delta\rho_e$) gives the interferometric phase in terms of our observables: $\phi = -\frac{4\pi}{\lambda}B\sin(\theta - \alpha)$.
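
This decomposition is easy to sanity-check numerically. A minimal sketch, with invented baseline numbers and the sign convention used above:

```python
import numpy as np

# Decompose the baseline into components parallel and perpendicular to the
# look direction; the flat-earth range difference is then -B_parallel.
B, alpha = 150.0, np.deg2rad(10.0)   # assumed baseline length (m) and tilt
theta = np.deg2rad(23.0)             # look angle

B_par = B * np.sin(theta - alpha)    # parallel to the look direction
B_perp = B * np.cos(theta - alpha)   # perpendicular to the look direction
drho_e = -B_par                      # parallel-ray approximation (m)
print(B_par, B_perp, drho_e)
```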

Since ρ ≫ B, we can draw the geometry as shown in Figure 7, as many authors do (for example Zebker and Goldstein, 1986). Note that they change the position of θ to where 90° − θ was before - I guess this is just for convenience. You can see straight away from this that the range difference is the projection of the baseline onto the look direction, $\delta\rho = B\sin(\theta - \alpha)$ (the sign depends on which antenna is taken as the reference). You can also see that the height of the satellite, H, is $H = \rho\cos\theta$. Combining both these equations with the equation for phase difference gives an expression for H in terms of φ and θ:

$H = \rho\cos\theta, \quad \text{where } \theta = \alpha + \sin^{-1}\!\left(\frac{\lambda\phi}{4\pi B}\right)$

[pic]

Figure 7 Alternative notation for geometry over a spherical earth.

References

Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)

Synthetic Aperture Radar Interferometry to Measure Earth's Surface Topography and its Deformation

Annu. Rev. Earth Planet. Sci., Vol. 28, pp. 169-209

Madsen and Zebker (1998)

Imaging Radar Interferometry; in Principles and Applications of Imaging Radar, Manual of Remote Sensing; American Society of Photogrammetry and Remote Sensing, Chapter 5, pp.270-358

Price, Evelyn J. and David T.Sandwell (1998)

Small-scale deformations associated with the 1992 Landers, California, earthquake mapped by synthetic aperture radar interferometry phase gradients.

Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016

Price, Chapter 1 (unpublished, 2001)



Rosen, Paul A., Scott Hensley, Ian R. Joughin, Fuk Li, Soren N. Madsen, Ernesto Rodriguez and R.M.Goldstein (1999)

Synthetic Aperture Radar Interferometry

Proceedings of the IEEE, Unpublished manuscript

Zebker and Goldstein (1986)

Topographic Mapping From Interferometric Synthetic Aperture Radar Observations; JGR, Vol.91, No.B5, pp.4993-4999

Zebker, Howard A., Paul A. Rosen, Richard M. Goldstein, Andrew Gabriel and Charles L. Werner (1994)

On the derivation of coseismic displacement fields using differential radar interferometry: The Landers earthquake

Journal of Geophysical Research, Vol.99, No.B10, pp.19617-19634

4. Basic concepts for measuring topography

The Theory

If the range from two different sensor positions to a single point on the surface of the earth is known, along with the distance between these sensor positions, then it is a relatively simple geometric problem to calculate the position of the point on the surface. If two SAR images are taken from slightly different positions and the backscatter phase does not change between them, then the measured phase difference will be proportional to the difference in range from the sensor positions to each pixel. Simultaneous measurement of range, azimuth angle and elevation angle can therefore provide 3D coordinates for each pixel. Slant range measurements of topography can be converted to ground range to make DEMs in map projection.

The Math

[pic]

Figure 8 Diagram of geometry for InSAR over a spherical earth with topography. Notation is the same as for Figure 6 except that θo is the look angle from the reference satellite to a spherical earth with no topography and δθt is the distortion to this angle caused by the presence of topography.

Having been through the derivation of the equations for InSAR geometry on the spherical earth with no topography in Chapter 3, it is now relatively easy to convert these equations into a calculation for topography on a spherical earth.

The phase is now dependent on contributions from both the spherical earth (δρe) and topography (δρt):

$\phi = \frac{4\pi}{\lambda}(\delta\rho_e + \delta\rho_t)$

We must now, therefore, write the equation $\delta\rho_e \approx -B\sin(\theta - \alpha)$ from Chapter 3 as $\delta\rho_e + \delta\rho_t \approx -B\sin(\theta_o + \delta\theta_t - \alpha)$.

Expanding the equation (using the trig laws from Chapter 3) gives

$\delta\rho_e + \delta\rho_t \approx -B\sin\big((\theta_o - \alpha) + \delta\theta_t\big)$,

$\sin\big((\theta_o - \alpha) + \delta\theta_t\big) = \sin(\theta_o - \alpha)\cos\delta\theta_t + \cos(\theta_o - \alpha)\sin\delta\theta_t$ and

$\delta\rho_e + \delta\rho_t \approx -B\big[\sin(\theta_o - \alpha)\cos\delta\theta_t + \cos(\theta_o - \alpha)\sin\delta\theta_t\big]$.

We can again make approximations here due to the scale of spaceborne geometries, as δθt is going to be very small. For very small angles (if they are measured in radians) $\sin x \approx x$ and $\cos x \approx 1$, so

$\cos\delta\theta_t \approx 1$ and

$\sin\delta\theta_t \approx \delta\theta_t$.

Putting these into the original equation gives

$\delta\rho_e + \delta\rho_t \approx -B\big[\sin(\theta_o - \alpha) + \delta\theta_t\cos(\theta_o - \alpha)\big]$

Remembering that $B_\parallel = B\sin(\theta_o - \alpha)$ and $B_\perp = B\cos(\theta_o - \alpha)$, we can therefore write:

$\delta\rho_e + \delta\rho_t \approx -(B_\parallel + B_\perp\,\delta\theta_t)$

We can now remove δρe from these equations using the equation for $B_\parallel$ and the fact that $\delta\rho_e \approx -B_\parallel$, leaving only a relationship between range and topography: $\delta\rho_t \approx -B_\perp\,\delta\theta_t$, or $\delta\theta_t \approx -\delta\rho_t/B_\perp$.

We will actually measure φ, the phase difference between the two images. We know that φ is a function of this range difference, δρ. Combining the equations for range and phase difference ($\phi = \frac{4\pi}{\lambda}\delta\rho$) we get:

$\phi = -\frac{4\pi}{\lambda}(B_\parallel + B_\perp\,\delta\theta_t)$
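
A useful rule of thumb follows from this. An elevation change h shifts the look angle by roughly $\delta\theta_t \approx h/(\rho\sin\theta_o)$, so the elevation needed to produce one full fringe (the 'height ambiguity') is $h_a = \lambda\rho\sin\theta_o/(2B_\perp)$. A minimal sketch with assumed ERS-like numbers:

```python
import numpy as np

# Height ambiguity: the elevation change producing one fringe (2*pi) of
# topographic phase, h_a = lambda * rho * sin(theta) / (2 * B_perp).
wavelength = 0.0566          # m (C band)
rho = 850e3                  # slant range (m)
theta = np.deg2rad(23.0)     # look angle
B_perp = 100.0               # assumed perpendicular baseline (m)

h_amb = wavelength * rho * np.sin(theta) / (2 * B_perp)
print(f"one fringe per {h_amb:.0f} m of elevation")   # ~94 m here
```

Note how longer perpendicular baselines make the interferogram more sensitive to topography, which is why baseline choice matters so much for DEM generation.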

We can also use trigonometry to calculate the ground elevation. The height of a pixel above the reference surface, h, is

$h = H - \rho\cos(\theta_o + \delta\theta_t)$

where H is the vertical height of the antenna. To get this in terms of our observables, rearrange the phase equation above for δθt (θo, B∥ and B⊥ are known from the orbits), then substitute the result into the equation for h.

Combining this equation for h with along-track distance, x, and slant range measurements will give a topographic map in the coordinate system (x, ρ, h). If we want to transform the coordinate system from slant range to ground range (i.e. x, y and h coordinates, where y is the horizontal cross-track distance from the satellite ground track and h is ground elevation), we can calculate true ground range from the antennas to each pixel using $y = \sqrt{\rho^2 - (H - h)^2}$ (Zebker and Goldstein, 1986). The image must also be rectified to fit a square grid.
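
A minimal sketch of that slant-to-ground conversion (a flat-earth approximation; the numbers are invented examples):

```python
import numpy as np

# Ground range from slant range: y = sqrt(rho^2 - (H - h)^2).
H = 790e3          # assumed platform altitude (m)
rho = 850e3        # slant range (m)
h = 1200.0         # pixel elevation (m)

y = np.sqrt(rho**2 - (H - h)**2)
print(f"ground range: {y / 1e3:.1f} km")    # ~317 km for these values
```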

SAR Systems for measuring topography

There are two different system configurations for measuring topography:

Dual Antenna Systems

In dual antenna systems, two antennas are mounted on the same platform, so there is a fixed baseline between them and the geometry of the system is well constrained. The Shuttle Radar Topography Mission (SRTM) is the best example of a 'dual antenna system' - see Chapter 14 for more details.

It is worth noting that if the same transmitter is used for both antennas, the phase equation for one-way propagation differences should be used. If each antenna transmits and receives, use the two-way propagation phase equation.

Single Antenna Systems

Topographic mapping using a single antenna is usually called 'repeat-track' or 'dual-pass' interferometry. This is how most space-based systems operate. As the system only has one antenna, the 'slave' image must be obtained in a second orbit following that of the 'master' image. This means that the orbit parameters for the two image takes must be well constrained, so that the second satellite pass is in a similar orbital position to the first and so that the baseline between the two satellite positions can be calculated. For repeat-track systems you should use the two-way propagation formulae for phase.

The ERS Tandem mission is a good example of a repeat-track system. It was designed to measure topography, so ERS-2 followed ERS-1 in the same orbit, with a one-day separation.

References

Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)

Synthetic Aperture Radar Interferometry to Measure Earth's Surface Topography and its Deformation

Annu. Rev. Earth Planet. Sci., Vol. 28, pp. 169-209

Madsen and Zebker (1998)

Imaging Radar Interferometry; in Principles and Applications of Imaging Radar, Manual of Remote Sensing; American Society of Photogrammetry and Remote Sensing, Chapter 5, pp.270-358

Price, Evelyn J. and David T.Sandwell (1998)

Small-scale deformations associated with the 1992 Landers, California, earthquake mapped by synthetic aperture radar interferometry phase gradients.

Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016

Rosen, Paul A., Scott Hensley, Ian R. Joughin, Fuk Li, Soren N. Madsen, Ernesto Rodriguez and R.M.Goldstein (1999)

Synthetic Aperture Radar Interferometry

Proceedings of the IEEE, Unpublished manuscript

Toutin, Thierry and Laurence Gray (2000)

State-of-the-art of elevation extraction from satellite SAR data.

ISPRS Journal of Photogrammetry and Remote Sensing, Vol.55, pp.13-33

Zebker and Goldstein (1986)

Topographic Mapping From Interferometric Synthetic Aperture Radar Observations; JGR, Vol.91, No.B5, pp.4993-4999

Zebker, Howard A., Paul A. Rosen, Richard M. Goldstein, Andrew Gabriel and Charles L. Werner (1994)

On the derivation of coseismic displacement fields using differential radar interferometry: The Landers earthquake

Journal of Geophysical Research, Vol.99, No.B10, pp.19617-19634

SRTM Website:



5. Basic concepts for measuring surface deformation

The Theory

If two images are taken from exactly the same sensor position of exactly the same target, at different times, and the returned phases differ, then that phase difference corresponds to a change in the range from the satellite to the target, indicating a change in position of the target. This means that by differencing the phase of images taken before and after the ground has moved, changes towards or away from the satellite can be measured as millimeter-level line-of-sight (LOS) displacements (i.e. along the path of the radar signal).

InSAR can ONLY measure LOS displacements. It can only measure a change in the surface along the look direction of the satellite, not a full, 3D displacement vector, as the sketch below illustrates. Additional information can be obtained by using images from both ascending and descending orbits, but this comes at the cost of increased complication in processing and is at the mercy of data availability.
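
To make the LOS restriction concrete, here is a minimal sketch projecting a 3D displacement onto a radar line of sight. The geometry (an ERS-like descending, right-looking pass with a 23° look angle and ~193° heading) and the sign conventions are assumptions for illustration only:

```python
import numpy as np

# Project a 3-D displacement onto the radar line of sight (LOS).
# Here the LOS unit vector points from the ground up toward the satellite;
# sign conventions differ between authors and software packages.
theta = np.deg2rad(23.0)               # look angle
heading = np.deg2rad(193.0)            # flight direction, clockwise from north
look_az = heading + np.pi / 2          # right-looking: 90 deg right of heading

los = np.array([-np.sin(theta) * np.sin(look_az),   # east
                -np.sin(theta) * np.cos(look_az),   # north
                 np.cos(theta)])                    # up

d_enu = np.array([0.10, 0.02, -0.05])  # true displacement (m): east, north, up
d_los = d_enu @ los                    # the only component InSAR can see
print(f"LOS displacement: {d_los * 100:.1f} cm")
```

Note that a purely north-south displacement is nearly invisible to this geometry, since the LOS vector has only a small north component - one reason combining ascending and descending passes helps.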

If the orbits of the two satellite passes were repeated perfectly, the phase difference would contain only a measure of the deformation; but since the two images are unlikely to have been taken from exactly the same position, the phase difference will include both a measure of the deformation and a measure of topography. In order to quantify only surface displacement, therefore, the effects of topography must be removed.

The Math

[pic]

Figure 9 Geometry for InSAR over a spherical earth, with topography, that has undergone deformation (d). Notation is the same as for Figure 8, except that δθd is the change in look angle caused by the surface deformation.

If deformation has taken place, the phase can be written as $\phi = \frac{4\pi}{\lambda}(\delta\rho_e + \delta\rho_t + \delta\rho_d)$, where δρe represents the range change due to a spherical earth, δρt represents range change due to topography and δρd represents range change due to surface deformation.

We already know how to remove δρe (see Chapter 3), which leaves δρt to be removed in order to isolate δρd.

Removing Topographic Fringes

Independent DEMs

To use this method, a DEM must be available from external sources. The DEM is used to create a synthetic topographic fringe pattern, which is then subtracted from the interferogram to leave only the fringes caused by surface deformation (Massonnet and Feigl, 1995).
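
A minimal sketch of this DEM-based correction, using the far-field approximation $\phi_t \approx -\frac{4\pi}{\lambda}\frac{B_\perp h}{\rho\sin\theta}$ (consistent with the height-ambiguity expression in Chapter 4); all geometry values here are invented:

```python
import numpy as np

# Convert DEM heights to a synthetic topographic fringe pattern, subtract
# it from the interferogram, and rewrap to recover the deformation phase.
wavelength, B_perp = 0.0566, 120.0        # m (assumed)
rho, theta = 850e3, np.deg2rad(23.0)      # slant range (m), look angle

rng = np.random.default_rng(0)
dem = rng.uniform(0.0, 500.0, (256, 256))            # stand-in DEM (m)
phi_defo_true = np.full((256, 256), 0.5)             # pretend deformation (rad)

phi_topo = -(4 * np.pi / wavelength) * B_perp * dem / (rho * np.sin(theta))
phi_ifg = np.angle(np.exp(1j * (phi_topo + phi_defo_true)))  # wrapped ifg

phi_defo = np.angle(np.exp(1j * (phi_ifg - phi_topo)))       # rewrapped
print(np.allclose(phi_defo, 0.5))                            # True
```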

Three-pass, differential InSAR

This method is otherwise termed 'double differencing' or, particularly when more than 3 images are used, the 'N-pass' method (see Figure 10). A topography-only interferogram is formed from two SAR images spanning an interval with no deformation (in the three-pass case, one of these images is shared with the pair of interest). After unwrapping, and scaling for the difference in baselines, it is subtracted from the interferogram thought to show surface displacement (Gabriel et al., 1989; Zebker et al., 1994). A sketch of the scaling step follows Figure 10.

[pic]

Figure 10 Geometry for the 'N-Pass' method of measuring surface deformation.
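
Because topographic phase is proportional to the perpendicular baseline, the topography-only pair must be unwrapped and scaled by the baseline ratio before subtraction (Zebker et al., 1994). A minimal sketch with invented values:

```python
import numpy as np

# Three-pass differencing: scale the unwrapped topo-pair phase by the
# ratio of perpendicular baselines, then subtract and rewrap.
B_perp_topo, B_perp_defo = 200.0, 120.0        # m (assumed)

rng = np.random.default_rng(1)
phi_topo_unw = rng.normal(0.0, 3.0, (128, 128))  # unwrapped topo-pair (rad)
defo = 0.3                                       # deformation in second pair

scale = B_perp_defo / B_perp_topo
phi_pair2 = np.angle(np.exp(1j * (scale * phi_topo_unw + defo)))  # wrapped

phi_defo = np.angle(np.exp(1j * (phi_pair2 - scale * phi_topo_unw)))
print(np.allclose(phi_defo, defo))               # True
```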

References

Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)

Synthetic Aperture Radar Interferometry to Measure Earth's Surface Topography and its Deformation

Annu. Rev. Earth Planet. Sci., Vol. 28, pp. 169-209

Gabriel, Andrew K., Richard M.Goldstein and Howard A.Zebker (1989)

Mapping Small Elevation Changes Over Large Areas: Differential Radar Interferometry

Journal of Geophysical Research, Vol.94, No.B7, pp.9183-9191

Madsen and Zebker (1998)

Imaging Radar Interferometry; in Principles and Applications of Imaging Radar, Manual of Remote Sensing; American Society of Photogrammetry and Remote Sensing, Chapter 5, pp.270-358

Massonnet, D. and K.L. Feigl (1995)

Discrimination of geophysical phenomena in satellite radar interferograms.

Geophysical Research Letters, Vol.22, pp.1537-1540

Massonnet, D. and K.L.Feigl (1998)

Radar interferometry and its application to changes in the earth's surface

Reviews of Geophysics , Vol.36, pp.441-500

Zebker, Howard A., Paul A. Rosen, Richard M. Goldstein, Andrew Gabriel and Charles L. Werner (1994)

On the derivation of coseismic displacement fields using differential radar interferometry: The Landers earthquake

Journal of Geophysical Research, Vol.99, No.B10, pp.19617-19634

6. How to make an interferogram - initial steps

Data Processing

Raw data consists of radar echoes collected from the surface. These must first be processed so that each pixel contains amplitude and phase information. Processing algorithms are based on the signal characteristics of the sensor and the satellite orbit. Data that contains amplitude and phase information as an array of complex numbers is called Single Look Complex (SLC). As you can see in Figure 11, complex data looks pretty noisy, and not at all like the amplitude-based radar images we're used to seeing, which have had speckle removed.

[pic]

Figure 11 An ERS-1 scene over Prudhoe Bay, Alaska, in complex format (from the ASF website).

Coregistration

Common pixels between the two images must be mapped, in order that the two images may be overlain exactly. Differences in geometry, and differences caused by inconsistencies in satellite design, must be accounted for; i.e. it may be necessary to warp or stretch one image to fit well over the other. Differences in satellite velocity can cause differences between the two images, giving a systematic along-track distortion. Although the baseline is ideally aligned parallel to the flight track, satellite tracks will usually be slightly divergent, which also introduces a 'linear shear'.

If the pixels are not properly aligned at a sub-pixel level, the random part of the phase caused by scatterers in the image will not cancel out between images. The precision of this alignment must be better than 100 μs.
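
In practice the offset between the images is estimated by cross-correlating patches of the two amplitude images. A coarse, integer-pixel sketch using FFT phase correlation (a real processor refines this to a small fraction of a pixel by oversampling around the correlation peak):

```python
import numpy as np

# Estimate the integer-pixel offset between master and slave patches by
# FFT phase correlation.
def coarse_offset(master, slave):
    """(row, col) shift to apply to `slave` (e.g. via np.roll) to align it."""
    R = np.fft.fft2(master) * np.conj(np.fft.fft2(slave))
    corr = np.abs(np.fft.ifft2(R / (np.abs(R) + 1e-12)))   # phase correlation
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    rows, cols = corr.shape
    # wrap peak indices into signed shifts
    return (r - rows if r > rows // 2 else r,
            c - cols if c > cols // 2 else c)

rng = np.random.default_rng(2)
master = rng.random((128, 128))
slave = np.roll(master, (3, -5), axis=(0, 1))   # simulate a misregistration
print(coarse_offset(master, slave))             # -> (-3, 5), undoing the roll
```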