


3. Spatiotemporal field correlations

3.1. Spatiotemporal correlation function. Coherence volume.

• All optical fields in practice fluctuate randomly in both time and space and are subject to a statistical description [1]. These fluctuations depend on both the emission process (primary sources) and propagation media (secondary sources).

• Optical coherence is a manifestation of the field's statistical similarities in space and time, and coherence theory is the discipline that mathematically describes these similarities [2]. The only field distribution that is deterministic in both time and space is the monochromatic plane wave, which is a mathematical construct, impossible to obtain in practice due to the uncertainty principle.

• The formalism presented below for describing the field correlations is mathematically similar to that used for mechanical fluctuations, for example, in the case of vibrating membranes.

• The analogy between the two different types of fluctuations and their mathematical description in terms of spatiotemporal correlations has been recently emphasized [3].

• A starting point in understanding the physical meaning of a statistical optical field is the question: what is the effective (average) temporal sinusoid, ⟨e^{iωt}⟩, for a broadband field? What is the average spatial sinusoid, ⟨e^{ik·r}⟩?

• A monochromatic plane wave is described by U(r, t) = A e^{i(k·r − ωt)}. The two averages can be performed using the probability densities associated with the temporal and spatial frequencies, S(ω) and P(k), which are normalized to satisfy ∫ S(ω) dω = 1 and ∫ P(k) d³k = 1.

• Thus, S(ω)dω is the probability of having frequency component ω in our field, or the fraction of the total power contained in the vicinity of frequency ω.

• Similarly, P(k)d³k is the probability of having component k in the field, or the fraction of the total power contained around spatial frequency k. Up to a normalization factor, S and P are the temporal and spatial power spectra associated with the fields. The two "effective sinusoids" can be expressed as ensemble averages, using S(ω) and P(k) as weighting functions,

⟨e^{iωt}⟩_ω = ∫ S(ω) e^{iωt} dω ≡ Γ(t) (1a)

⟨e^{ik·r}⟩_k = ∫ P(k) e^{ik·r} d³k ≡ W(r) (1b)

• Equations 1a-b establish that the average temporal sinusoid for a broadband field equals its temporal autocorrelation, Γ. The average spatial sinusoid for an inhomogeneous field equals its spatial autocorrelation, denoted by W.
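
• As a numerical illustration of Eq. 1a (a sketch, not from the text; the Gaussian spectrum below is an arbitrary choice), the average sinusoid can be computed by discretizing the integral:

```python
import cmath, math

# Hypothetical Gaussian power spectrum S(w), normalized so that sum(S) * dw = 1
w0, sigma = 10.0, 1.0          # arbitrary center frequency and r.m.s. width
dw = 0.01
ws = [w0 + (i - 500) * dw for i in range(1001)]
S = [math.exp(-((w - w0) ** 2) / (2 * sigma ** 2)) for w in ws]
norm = sum(S) * dw
S = [s / norm for s in S]

def gamma(tau):
    """Average temporal sinusoid <exp(i w tau)> = integral of S(w) exp(i w tau) dw (Eq. 1a)."""
    return sum(s * cmath.exp(1j * w * tau) for s, w in zip(S, ws)) * dw

print(abs(gamma(0.0)))   # 1.0: perfect correlation at zero delay
print(abs(gamma(5.0)))   # ~0: correlations are lost for tau >> 1/sigma
```

The magnitude of Γ decays on a timescale of order 1/σ, anticipating the coherence-time result of Eqs. 29-31.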

• Besides describing the statistical properties of optical fields, coherence theory can make predictions of experimental relevance. The general problem can be formulated as follows (Fig. 1):

[pic]

Figure 1. Spatio-temporal distribution of a real optical field

• Given the optical field distribution U(r, t) that varies randomly in space and time, over what spatiotemporal domain does the field preserve significant correlations? This translates into: combining the field U(r, t) with a replica of itself shifted in both time and space, U(r + ρ, t + τ), on average, how large can ρ and τ be and still observe "significant" interference?

• We expect monochromatic fields to exhibit infinitely broad temporal correlations, and plane waves to exhibit infinitely broad spatial correlations. Regardless of how much we shift a monochromatic field in time or a plane wave in space, they remain perfectly correlated with their unshifted replicas. Conversely, it is difficult to picture temporal correlations decaying over timescales shorter than an optical period, or spatial correlations decaying over scales smaller than the optical wavelength. In the following we provide a quantitative description of the spatiotemporal correlations.

• The statistical behavior of optical fields can be mathematically captured generally via a spatiotemporal correlation function

Λ(r_1, r_2; t_1, t_2) = ⟨U*(r_1, t_1) U(r_2, t_2)⟩_{r,t} (2)

• The average is performed temporally and spatially, as indicated by the subscripts r and t. Because common detector arrays capture spatial intensity distributions in 2D only, we restrict the discussion to r = (x, y), without losing generality. These averages are defined in the usual sense as

⟨U(r, t)⟩_t = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} U(r, t) dt,  ⟨U(r, t)⟩_r = lim_{A→∞} (1/A) ∫_A U(r, t) d²r (3)

• Often we deal with fields that are both stationary (in time) and statistically homogeneous (in space).

• If stationary, the statistical properties of the field (e.g. the average and higher-order moments) do not depend on the origin of time. Similarly, for statistically homogeneous fields, the properties do not depend on the origin of space. Wide-sense stationarity is less restrictive and defines a random process with only its first and second moments independent of the choice of origin. For the discussion here, the fields are assumed to be stationary at least in the wide sense.

• Under these circumstances, the dimensionality of the spatiotemporal correlation function Λ decreases by half,

Λ(r_1, r_2; t_1, t_2) = Λ(r_2 − r_1, t_2 − t_1) ≡ Λ(ρ, τ) (4)

• The spatiotemporal correlation function becomes

Λ(ρ, τ) = ⟨U*(r, t) U(r + ρ, t + τ)⟩_{r,t} (5)

• Λ(0, 0) represents the spatially averaged irradiance of the field, which is, of course, a real quantity. In general, Λ(ρ, τ) is complex. We define a normalized version of Λ, referred to as the spatiotemporal complex degree of correlation,

γ(ρ, τ) = Λ(ρ, τ) / Λ(0, 0) (6)

• For stationary fields Λ attains its maximum at ρ = 0, τ = 0, thus

0 ≤ |γ(ρ, τ)| ≤ 1 (7)

• Define an area A_c and a length l_c over which |γ(ρ, τ)| maintains a significant value, say |γ| ≥ 1/2, which together define a coherence volume

V_c = A_c · l_c (8)

• This coherence volume determines the maximum domain size over which the fields can be considered correlated. In general an extended source, such as an incandescent filament, may have spectral properties that vary from point to point. It is convenient to discuss spatial correlations at each frequency ω, as described below.

3.2. Spatial correlations of monochromatic light

[pic]

Figure 3. Measuring the spatial power spectrum of the field from source S via a lens (a) and Fraunhofer propagation in free space (b).

• The spatial correlation function W can also be experimentally determined from measurements of the spatial power spectrum, as shown in Fig. 3.

• Both the far field propagation in free space and propagation through a lens can generate the Fourier transform of the source field, as illustrated in Fig. 3,

Ũ(k_⊥) = ∫ U(r_⊥) e^{−i k_⊥·r_⊥} d²r_⊥ (16)

• The CCD is sensitive to power and thus detects the spatial power spectrum, |Ũ(k_⊥)|².

• In Eq. 16, the frequency component k_⊥ depends either on the focal distance f, for the lens transformation (Fig. 3a), or on the propagation distance z, for the Fraunhofer propagation (Fig. 3b),

k_x = (2π/λ) x/f (lens),  k_x = (2π/λ) x/z (free space). (17)

• In both geometries, the ratios x/f and x/z describe the diffraction angle; therefore |Ũ|² is sometimes called the angular power spectrum.

• For extended sources that are far away from the detection plane, as in Fig. 3b, the size of the source may have a significant effect on the Fourier transform in Eq. 16. This effect becomes obvious if we replace the source field U with its spatially truncated version, U′, to indicate the finite size of the source,

U′(x, y) = U(x, y) Π(x/a, y/a) (18)

• Π(x/a, y/a) is the 2D rectangular function, a square of side a. The far field becomes

Ũ′(k_x, k_y) = Ũ(k_x, k_y) ∗ [a² sinc(k_x a/2) sinc(k_y a/2)] (19)

• ∗ denotes convolution and sinc(x) = sin(x)/x.

• The field across the detection plane (x′, y′) is smooth over scales given by the width of the sinc function.

• This smoothness indicates that the field is spatially correlated over this spatial scale. Along x′, the correlation distance Δx′ is obtained by writing explicitly the spatial frequency argument of the sinc function: its first zero occurs at Δk_x = 2π/a, and, using k_x = (2π/λ) x′/z from Eq. 17,

Δx′ = (λz/2π) Δk_x = λz/a (20)

• We can conclude that the correlation area of the field generated by the source in the far zone is of the order of

A_c ≈ (λz/a)² = λ²/Ω (21)

• Ω = a²/z² is the solid angle subtended by the source.

• This relationship allowed Michelson to measure interferometrically the angle subtended by stars.

• For example, the Sun subtends an angle of about 0.5°, i.e. Ω ≈ 7 × 10⁻⁵ sr. Thus, for the green radiation at the middle of the visible spectrum, λ ≈ 550 nm, the coherence area at the surface of the Earth is of the order of 10⁻³ mm². Measuring this area over which the sunlight shows correlations (or generates fringes) provides information about the angular size of the source.
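
• A back-of-the-envelope evaluation of Eq. 21 for sunlight (a sketch; the 0.5° angular diameter and λ = 550 nm are approximate textbook values):

```python
import math

# Coherence area of sunlight via Eq. 21: A_c ~ lambda^2 / Omega
half_angle = math.radians(0.25)        # Sun subtends ~0.5 deg full angle
omega = math.pi * half_angle ** 2      # solid angle of a small disk, in sr
lam = 550e-9                           # green light, in m

A_c = lam ** 2 / omega                 # coherence area, in m^2
print(f"Omega = {omega:.1e} sr, A_c = {A_c * 1e6:.1e} mm^2, "
      f"linear size ~ {math.sqrt(A_c) * 1e6:.0f} um")
```

The linear coherence size of sunlight comes out to a few tens of microns, which is why sunlight can produce fringes only through closely spaced pinholes.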

• For angularly smaller sources, far field spatial coherence is correspondingly higher. This is the essence of the Van Cittert-Zernike theorem, which states that the field generated by spatially incoherent sources gains coherence upon propagation. This is the result of free-space propagation acting as a spatial low-pass filter [4].

• Zernike employed the spatial filtering concept to develop phase contrast microscopy [5, 6]. It had been known since Abbe that an image can be described as an interference phenomenon [7]. Image formation is the result of simultaneous interference processes that take place at each point in the image.

3.3. Temporal correlations of plane waves

• We now investigate the temporal correlations of fields at a particular spatial frequency k (or a certain direction of propagation). Taking the spatial Fourier transform of Λ in Eq. 2, we obtain the temporal correlation function

Γ(k, τ) = ⟨U*(k, t) U(k, t + τ)⟩_t (22)

[pic]

Figure 5. Michelson interferometry.

• The autocorrelation function Γ(k, τ) is relevant in interferometric experiments of the type illustrated in Fig. 5. In a Michelson interferometer, a plane wave from the source is split in two by the beam splitter and subsequently recombined via reflections on mirrors M1 and M2. The intensity at the detector has the form (we assume a 50/50 beam splitter)

I(τ) = ⟨|U(k, t) + U(k, t + τ)|²⟩_t = 2 I(k) + 2 Re[Γ(k, τ)] (23)

• The real part of Γ(k, τ) is obtained by varying the time delay between the two fields. This delay can be controlled by translating one of the mirrors. The complex degree of temporal correlation at spatial frequency k is defined as

γ(k, τ) = Γ(k, τ) / Γ(k, 0) (24)

• Γ(k, 0) represents the intensity of the field, i.e.

Γ(k, 0) = ⟨|U(k, t)|²⟩_t = I(k) (25)

• The complex degree of temporal correlation has a property similar to that of its spatial counterpart γ(ρ, τ), i.e.

0 ≤ |γ(k, τ)| ≤ 1 (26)

• The coherence time τ_c is defined as the maximum time delay between the fields for which |γ(k, τ)| maintains a significant value, say 1/2.

• If we cross-correlate temporally two plane waves of different wave vectors (directions of propagation), the result vanishes unless k_1 = k_2,

Γ_12(τ) = ⟨U*(k_1, t) U(k_2, t + τ)⟩_t ∝ δ(k_1 − k_2) (27)

• At each moment t, the two plane waves generate fringes perpendicular to the difference wavevector k_2 − k_1. If the detector (e.g. a CCD) averages the signal over scales larger than the fringe period, the temporal correlation information is lost.

• As τ changes, the fringes "run" across the plane such that the contrast averages to 0. For this reason, for example, the two beams in a typical Michelson interferometer are carefully aligned to be parallel.

• The temporal correlation Γ is the Fourier transform of the power spectrum,

Γ(τ) = ∫ S(ω) e^{iωτ} dω (28)

• S(ω) can be determined via spectroscopic measurements, as exemplified in Fig. 6.

[pic]

Figure 6. Spectroscopic measurement using a grating: G, grating; D, detector; θ, diffraction angle. The dashed line indicates the undiffracted (zeroth) order.

• By using a grating (a prism, or any other dispersive element), we can "disperse" different colors at different angles, such that a rotating detector can measure S(ω) directly.

• To estimate the coherence time for a broadband field, let us assume a Gaussian spectrum centered at frequency ω_0 and having the r.m.s. width σ_ω,

S(ω) = S_0 e^{−(ω − ω_0)²/(2σ_ω²)} (29)

• S_0 is a constant.

• The autocorrelation function is also a Gaussian, modulated by a sinusoidal function, as a result of the Fourier shift theorem,

Γ(τ) ∝ e^{−σ_ω²τ²/2} e^{iω_0 τ} (30)

• If we define the width of the Gaussian envelope of Γ(τ) as the coherence time, we obtain

τ_c = 1/σ_ω (31)

• and the coherence length

l_c = c·τ_c = c/σ_ω (32)

• The coherence length depends on the spectral bandwidth in a fashion analogous to the dependence of the coherence area on the solid angle (Eq. 21). This is not surprising, as both types of correlations depend on their respective frequency bandwidths.
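
• Equations 31-32 can be evaluated for a concrete source. A sketch, assuming a hypothetical LED-like source (the 550 nm center wavelength and 30 nm FWHM bandwidth are arbitrary choices):

```python
import math

c = 3e8                        # speed of light, m/s (approximate)

# Hypothetical source: center wavelength 550 nm, FWHM bandwidth 30 nm
lam0, dlam = 550e-9, 30e-9

# Convert the wavelength bandwidth to an angular-frequency bandwidth:
# omega = 2 pi c / lambda  =>  d_omega ~ 2 pi c dlam / lam0^2
d_omega = 2 * math.pi * c * dlam / lam0 ** 2

# FWHM -> r.m.s. width for a Gaussian, then apply Eqs. 31-32
sigma_w = d_omega / (2 * math.sqrt(2 * math.log(2)))
tau_c = 1 / sigma_w            # coherence time (Eq. 31)
l_c = c * tau_c                # coherence length (Eq. 32)
print(f"tau_c = {tau_c * 1e15:.1f} fs, l_c = {l_c * 1e6:.1f} um")
```

The few-micron coherence length obtained here is typical of broadband sources and is what enables the depth sectioning of low-coherence interferometry.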

1. J. W. Goodman, Statistical Optics (Wiley, New York, 2000).

2. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, 1995).

3. G. Popescu, Y. K. Park, R. R. Dasari, K. Badizadegan and M. S. Feld, Coherence properties of red blood cell membrane motions, Phys. Rev. E 76, 031902 (2007).

4. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1996).

5. F. Zernike, Phase contrast, a new method for the microscopic observation of transparent objects, Part 1, Physica 9, 686-698 (1942).

6. F. Zernike, Phase contrast, a new method for the microscopic observation of transparent objects, Part 2, Physica 9, 974-986 (1942).

7. E. Abbe, Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung, Arch. Mikrosk. Anat. 9, 431 (1873).

8. A. R. Webb, Introduction to Biomedical Imaging (Wiley, Hoboken, New Jersey, 2003).

9. M. Pluta, Advanced Light Microscopy (Polish Scientific Publishers, Warszawa, 1988).

4. IMAGE CHARACTERISTICS

• Imaging is the process of mapping a certain physical property of an object and displaying it in a visual form [8].

• Examples of such physical properties include absorption (e.g. light imaging, x-rays), reflectivity (e.g. ultrasound, photography), proton density (e.g. MRI), concentration of radionuclides (e.g. nuclear imaging).

• In microscopy, intrinsic specimen absorption, emission, and scattering are the quantities of interest. Using exogenous contrast agents, e.g. stains or fluorescent tags, one can image concentration distributions of certain species.

• We discuss the basic properties of microscope images, irrespective of the method involved in acquiring the images and present simple image enhancement operations allowed by numerical image processing.

4.1 Imaging as linear operation

• Often an imaging system, e.g. a microscope, can be approximated by a linear system.

• Consider that the physical property of the sample under investigation is described by a function S(r), with r = (x, y, z). The imaging system outputs an image I(r) which is related to S(r) through a convolution operation,

I(r) = S(r) ∗ h(r) = ∫ S(r′) h(r − r′) d³r′ (4.1)

• h defines the impulse response, or Green's function, of the system; it is commonly called the point spread function (PSF). We are not specific as to what quantity I represents. For example, I can be an intensity distribution, as in incoherent imaging, or a complex field, as in coherent imaging and QPI. h can be experimentally retrieved by imaging a very small object, i.e. one approaching a 3D delta function.

• Thus, replacing the sample distribution S(r) with δ(r), we obtain

I(r) = δ(r) ∗ h(r) = h(r) (4.2)

• The PSF of the system, h(r), defines how an infinitely small object is blurred through the imaging process. The PSF is a measure of the resolving power of the system.

• I(r) can be an intensity distribution or a complex field distribution, depending upon whether the imaging uses spatially incoherent or coherent light, respectively. This distinction is very important because in coherent imaging, the system is linear in complex fields, while in incoherent imaging, the linearity holds in intensities.
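
• The convolution in Eq. 4.1 can be sketched in one dimension (the two-point object and the Gaussian PSF width below are arbitrary choices):

```python
import math

def convolve(signal, kernel):
    """Discrete 1-D convolution, output the same length as the input signal."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + half - j
            if 0 <= k < n:
                acc += signal[k] * kernel[j]
        out.append(acc)
    return out

# Object: two point sources ("delta functions") separated by 20 samples
S = [0.0] * 101
S[40] = S[60] = 1.0

# Hypothetical Gaussian PSF, r.m.s. width 3 samples, normalized to unit area
psf = [math.exp(-((x - 10) ** 2) / (2 * 3.0 ** 2)) for x in range(21)]
total = sum(psf)
psf = [p / total for p in psf]

# Image = object convolved with the PSF (Eq. 4.1): each point becomes a blur
I = convolve(S, psf)
print(round(sum(I), 6))   # 2.0: energy is conserved by the normalized PSF
```

Replacing S with a single delta function would, per Eq. 4.2, simply return the PSF itself.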

4.2 Resolution

• The resolution of an imaging system is defined by the minimum distance between two points that are considered “resolved.” What is considered resolved is subject to convention.

• The half-width at half-maximum of h along one direction is one possible measure of resolution along that direction. When discussing image formation, we may encounter other conventions for resolution.

• The image can be represented in the spatial frequency domain via a 3D Fourier transform (see Appendix B)

Ĩ(k) = ∫ I(r) e^{−i k·r} d³r (4.3)

• k = (k_x, k_y, k_z) is the (angular) spatial frequency, in units of rad/m.

• Writing I in separable form, I(r) = I_x(x) I_y(y) I_z(z), the 3D Fourier transform in Eq. 4.3 breaks into a product of three 1D integrals,

Ĩ(k) = ∫ I_x(x) e^{−i k_x x} dx · ∫ I_y(y) e^{−i k_y y} dy · ∫ I_z(z) e^{−i k_z z} dz (4.4)

• To find a relationship between the object and the image in the frequency domain, we Fourier transform Eq. 4.1 and use the convolution theorem to obtain

Ĩ(k) = S̃(k) · h̃(k) (4.5)

• S̃ and h̃ are the Fourier transforms of S and h, respectively. h̃(k) is referred to as the transfer function (TF).

[pic]

Figure 1. a) Relationship between the moduli of the PSF and TF. b) An infinitely narrow PSF requires an infinitely broad TF.

• From the Fourier relationship between PSF and TF, it is clear that high resolution (i.e. narrow PSF) demands broad frequency support of the system (broad TF), as illustrated in Fig. 1.
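
• The convolution theorem behind Eq. 4.5 can be verified numerically. A sketch with an arbitrary 1-D "object" and "PSF", using a discrete (circular) convolution:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a short sequence."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def circular_convolve(s, h):
    """Circular convolution, the discrete analog of Eq. 4.1."""
    n = len(s)
    return [sum(s[m] * h[(i - m) % n] for m in range(n)) for i in range(n)]

# Arbitrary 1-D "object" S and "PSF" h
S = [0, 0, 1, 3, 1, 0, 0, 0]
h = [0.5, 0.25, 0, 0, 0, 0, 0, 0.25]

I = circular_convolve(S, h)

# Convolution theorem (Eq. 4.5): DFT(I) = DFT(S) * DFT(h), term by term
lhs = dft(I)
rhs = [a * b for a, b in zip(dft(S), dft(h))]
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # ~1e-15, numerical noise only
```

The spectrum of the image is the object spectrum multiplied by the transfer function, which is exactly why a band-limited TF truncates the image's frequency content.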

• The ideal system, for which h(r) = δ(r), requires infinite frequency support, which is clearly unachievable in practice,

h(r) = δ(r) ⟺ h̃(k) = 1 for all k (4.6)

• The physics of the image formation defines the instrument’s PSF and TF. We will derive these expressions for coherent and diffraction-limited microscopy later. For now, we continue the general discussion regarding image characteristics, irrespective of the type of microscope that generated the image, or of whether the image is an intensity or phase distribution.

4.3 Signal to noise ratio (SNR)

• Any measured image is affected by experimental noise. There are various causes of noise, particular to the light source, propagation medium, and detector, which contribute to the measured signal,

y(r) = x(r) + n(r) (4.7)

• y is the measured signal, x is the true (noise-free) signal, and n the noise.

• The variance, σ², of the signal describes how much variation around the average there is over N measurements,

σ² = (1/N) ∑_{i=1}^{N} (x_i − ⟨x⟩)² (4.8)

• The N measurements can be taken over N pixels within the same image, which characterizes the noise spatially, or over N values of the same pixel in a time-series, which defines the noise temporally.

• The standard deviation σ is also commonly used and has the benefit of carrying the same units as the signal itself.

• The SNR is defined in terms of the standard deviation as

SNR = |⟨x⟩| / σ (4.9)

• The signal is expressed as a modulus, such that SNR ≥ 0.

• Let us discuss the effects of averaging on SNR. It can be easily shown that for N uncorrelated measurements, the variance of the sum is the sum of the variances,

σ_Σ² = ∑_i σ_i² = N σ² (4.10)

• Thus, the standard deviation of the averaged signal is

σ_N = σ/√N (4.11)

• and the SNR becomes

SNR_N = √N · SNR (4.12)

• Equation 4.12 establishes that taking N measurements and averaging the results increases the signal-to-noise ratio by a factor of √N. This benefit only exists when there is no correlation between the noise present in different measurements.
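
• The √N improvement can be checked with synthetic data (a sketch; the Gaussian noise model, seed, and sample sizes are arbitrary choices):

```python
import random, statistics

random.seed(0)
true_x, sigma = 5.0, 1.0
N = 100

# Many repetitions of "average N uncorrelated noisy measurements"
means = [statistics.fmean(random.gauss(true_x, sigma) for _ in range(N))
         for _ in range(2000)]

# Spread of the averaged results: should shrink to sigma / sqrt(N) (Eq. 4.11)
sigma_avg = statistics.stdev(means)
print(sigma_avg)   # close to 0.1 = sigma / sqrt(100)
```

With correlated noise the measured spread would exceed this prediction, as Eq. 4.13 shows next.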

• For correlated (coherent) noise, Eq. 4.10 is not valid. To prove this, consider the variance associated with the sum of two correlated noise signals a_1 and a_2 (assume ⟨a_1⟩ = ⟨a_2⟩ = 0 for simplicity),

σ_{1+2}² = ⟨(a_1 + a_2)²⟩ = σ_1² + σ_2² + 2⟨a_1 a_2⟩ (4.13)

• The total variance is the sum of the individual variances, σ_{1+2}² = σ_1² + σ_2², only when the cross term vanishes, i.e. ⟨a_1 a_2⟩ = 0.

• If, for example, a_1 and a_2 fluctuate randomly in time, and signal a_2 is measured after a delay time τ from the measurement of a_1, the cross term has the familiar form of a cross-correlation,

⟨a_1 a_2⟩ = ⟨a_1(t) a_2(t + τ)⟩ = Γ_12(τ) (4.14)

• Equation 4.14 indicates that if the noise is characterized by some correlation time, τ_c, then averaging different measurements increases the signal-to-noise ratio only if the interval τ between measurements is larger than τ_c. The noise becomes uncorrelated at temporal scales larger than τ_c.

4.4. Contrast and contrast to noise ratio

[pic]

Figure 2. a) Illustration of a low- and high-contrast image. b) Regions of interest A and B, and noise region N.

• The contrast of an image quantifies the ability to differentiate between different regions of interest within the image (Fig. 2),

C_AB = |S_A − S_B| / (S_A + S_B) (4.15)

• S_A and S_B stand for the signals associated with regions A and B. The modulus in Eq. 4.15 ensures that the contrast always has positive values.

• Unlike resolution, which is established solely by the microscope itself, contrast is a property of the instrument-sample combination. The same microscope, characterized by the same resolution, renders superior contrast for stained tissue slices compared to unstained ones.

• The simple definition of contrast in Eq. 4.15 is insufficient because it ignores the effects of noise. It is easy to imagine circumstances where the noise itself has very high contrast across the field of view, which by Eq. 4.15 would generate high values of C_AB.

• A better quantity to use for real, noisy images is the contrast-to-noise ratio (CNR),

CNR = |S_A − S_B| / σ_N (4.16)

• σ_N is the standard deviation associated with the noise in the image.

[pic]

Figure 3. Contrast-to-noise ratio vs. contrast C_AB and noise standard deviation σ_N.

• Of course, the best case scenario from a measurement point of view happens at low noise and high contrast (Fig. 3).
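
• Both metrics can be computed directly from pixel values. A minimal sketch with made-up pixel values for regions A, B, and N, assuming the Michelson-type contrast of Eq. 4.15 and the CNR of Eq. 4.16:

```python
import statistics

# Hypothetical pixel values sampled from two regions of interest and a
# background (noise) region, as in Fig. 2b; the numbers are illustrative only
region_A = [100, 102, 98, 101, 99]
region_B = [60, 62, 58, 61, 59]
region_N = [10, 12, 8, 11, 9]

S_A = statistics.fmean(region_A)
S_B = statistics.fmean(region_B)
sigma_N = statistics.stdev(region_N)

C_AB = abs(S_A - S_B) / (S_A + S_B)   # contrast between A and B
CNR = abs(S_A - S_B) / sigma_N        # contrast relative to the noise level
print(f"C_AB = {C_AB:.2f}, CNR = {CNR:.1f}")
```

A high CNR indicates that the A-B difference stands well above the noise floor, which is the practically relevant criterion.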

4.5 Image filtering

• Filtering is a generic term that refers to any operation taking place in the frequency domain of the image,

Ĩ_f(k) = Ĩ(k) H(k) (4.17)

• Ĩ(k) is the Fourier transform of the image and H is the filter function.

• The filtered image I_f is obtained by Fourier transforming Ĩ_f back into the space domain,

I_f(r) = F⁻¹{Ĩ(k) H(k)} = I(r) ∗ h_H(r), h_H = F⁻¹{H} (4.18)

• The filtered image I_f is the convolution between the unfiltered image I and the Fourier transform h_H of the filter function H. In fact, any measured image is already a filtered version of the object, with the transfer function, TF, of the instrument acting as the filter, as stated in Eq. 4.5.

• Filtering is used to enhance or diminish certain features in the image. Depending on the frequency range that they allow to pass, we distinguish three types of filters: low-pass, band-pass, and high-pass.

Low-pass filtering

[pic]

Figure 4. Low-pass filtering: a) noisy signal, b) filtering high frequencies; c) low-passed signal.

• Consider the situation where the noise in the measured image exhibits high-frequency fluctuations (Fig. 4a).

• This noise contribution is effectively diminished if a filter that blocks the high frequencies (passes the low frequencies) is used (Fig. 4b).

• The resulting filtered image has better SNR (Fig. 4c). However, in the process, resolution is diminished.
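
• A minimal sketch of frequency-domain low-pass filtering (pure-Python DFT on a synthetic 1-D signal; the noise level and cutoff are arbitrary choices):

```python
import cmath, math, random

def dft(x, sign):
    """Naive DFT; sign=-1 for the forward transform, +1 for the inverse kernel."""
    n = len(x)
    return [sum(x[m] * cmath.exp(sign * 2j * cmath.pi * k * m / n)
                for m in range(n)) for k in range(n)]

random.seed(1)
n = 64
clean = [math.sin(2 * math.pi * 2 * m / n) for m in range(n)]  # slow oscillation
noisy = [c + random.gauss(0, 0.4) for c in clean]              # add broadband noise

# Filter in the frequency domain (Eq. 4.17): keep only low frequencies
spec = dft(noisy, -1)
cutoff = 5
H = [1.0 if (k <= cutoff or k >= n - cutoff) else 0.0 for k in range(n)]
filtered_spec = [s * h for s, h in zip(spec, H)]
filtered = [v.real / n for v in dft(filtered_spec, +1)]        # back to space domain

err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_filt = sum((a - b) ** 2 for a, b in zip(filtered, clean))
print(err_filt < err_noisy)   # True: low-pass filtering improved the SNR
```

Because the clean signal lies entirely inside the passband, only noise is rejected here; a real image would also lose its high-frequency detail, i.e. resolution.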

Band-pass filter

• In this case the filter function H passes a certain range (band) of frequencies from the image. This type of filtering is useful when structures within a particular range of sizes are to be enhanced in the image.

High-pass filter

• Using a filter that selectively passes the high frequencies reveals finer details within the image.

• One particular application of high-pass filtering is edge detection. This application is particularly useful in selecting (segmenting) an object of interest (e.g. a cell) from the image. The gradient of the image along one dimension can significantly enhance the edges in an image (Fig. 5).

[pic]

Figure 5. Edge enhancement: a) original image; b) gradient along one direction; c) profiles along the lines shown in a and b; d) frequency-domain profile associated with a; e) frequency-domain profile associated with b, i.e. k_x Ĩ(k_x).

• Thus, the new, edge-enhanced image is

I′(x, y) = ∂I(x, y)/∂x (4.19)

• Taking the Fourier transform of Eq. 4.19, we find

Ĩ′(k) = i k_x Ĩ(k) (4.20)

• The factor i indicates a π/2 phase shift that occurs upon differentiation (i.e. sines become cosines). The frequency content of the gradient image, Ĩ′(k), is enhanced at high frequencies due to the multiplication by k_x (Fig. 5).

• The gradient image suffers from "shadow" artifacts due to the change in sign of the first-order derivative across an edge (Fig. 5b). This artifact is well known in DIC microscopy, where the gradient is computed "optically" [9].

• Taking the Laplacian of the image removes these anisotropic artifacts related to the change in sign of the first-order derivative,

∇²I(x, y) ↔ −(k_x² + k_y²) Ĩ(k) (4.21)

[pic]

Figure 6. Laplacian (a) and frequency domain profile (b) associated with the image in Fig 5a.

• Figure 6a shows the result of taking the Laplacian of the same image. Even finer details are now visible in the image, as the high-pass filtering is stronger due to the k² multiplication of Ĩ(k). The shadow artifacts are less disturbing because, unlike the gradient, the Laplacian response does not flip sign with the polarity of the edge.
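
• The gradient (Eq. 4.19) and the Laplacian can be sketched with finite differences on a synthetic 1-D profile (values arbitrary); the gradient response flips sign with edge polarity, which is the source of the shadow artifact, while the Laplacian responds with equal strength at both edges:

```python
# Synthetic 1-D profile: a bright band with a rising edge (index 10)
# and a falling edge (index 30)
profile = [0.0] * 10 + [1.0] * 20 + [0.0] * 10

# First difference (discrete version of Eq. 4.19): the sign of the
# response depends on edge polarity -> "shadow"/relief appearance
grad = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]

# Second difference (1-D Laplacian): equal-magnitude response at both edges
lap = [profile[i - 1] - 2 * profile[i] + profile[i + 1]
       for i in range(1, len(profile) - 1)]

print(grad[9], grad[29])   # 1.0 -1.0: opposite signs at the two edges
print([abs(v) for v in lap[8:10]] == [abs(v) for v in lap[28:30]])  # True
```

In 2D the same property makes the Laplacian isotropic, so the enhancement does not depend on edge orientation.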

• There are many more sophisticated filters and algorithms for image enhancement and restoration, which are beyond the scope of this discussion.
