RADAR BASICS - UAH



DETECTION THEORY

INTRODUCTION

In some of our radar range equation problems we looked at finding the detection range based on SNRs of 13 and 20 dB. We now want to develop some of the theory that explains the use of these particular SNR values. More specifically, we want to examine the concept of detection probability, [pic]. Our need to study detection from a probabilistic perspective stems from the fact that the signals we deal with are noise-like. From our studies of RCS we found that, in practice, the signal return looks random. In fact, Swerling has convinced us that we should use statistical models to represent target signals. In addition to the target signal, the signals in the radar contain a noise component, which must also be dealt with using the concepts of random variables, random processes and probability.

To develop the requisite equations for detection probability we need to develop a mathematical characterization of the target signal, the noise signal and the target-plus-noise signal at various points in the radar. From the above, we will use the concepts of random variables and random processes to characterize these quantities. We start with a characterization of noise and then progress to the target and target-plus-noise signals.

NOISE IN RECEIVERS

We will characterize noise for the two most common types of receiver implementations. The first receiver configuration is illustrated in Figure 1 and is termed the IF representation. In this representation, both the matched filter and the signal processor are implemented at some intermediate frequency, or IF. The second receiver configuration is illustrated in Figure 2 and is termed the baseband representation. In this configuration, the signal processing is implemented at baseband. The IF configuration is common in older radars, and the baseband representation is common in more modern radars that use digital signal processing.

[pic]

In the IF representation, the noise is represented by

[pic] (1)

where [pic], [pic] and [pic] are random processes. If we expand (1) using trig identities we get

[pic] (2)

where [pic] and [pic] are also random processes. In (2), [pic] and [pic] are joint, wide-sense stationary, zero-mean, equal variance, Gaussian random processes. They are also such that the random variables [pic] and [pic] are independent. The variances of [pic] and [pic] are both equal to [pic]. The above statements mean that the density functions of [pic] and [pic] are equal and given by

[pic]. (3)

We will now show that [pic] is Rayleigh and [pic] is uniform on [pic]. We will further argue that the random variables [pic] and [pic] are independent.

From probability and random variables[1] if [pic] and [pic] are real random variables,

[pic] (4)

and

[pic], (5)

where [pic] denotes the four-quadrant arctangent, then the joint density of [pic] and [pic] can be written in terms of the joint density of [pic] and [pic] as

[pic]. (6)

In our case [pic], [pic], [pic] and [pic]. Thus, we have

[pic] (7)

[pic] (8)

and

[pic]. (9)

Now, since [pic] and [pic] are independent, Gaussian and zero-mean with equal variance

[pic]. (10)

If we use this in (9) with [pic] and [pic] we get

[pic]. (11)

From random variable theory, we can find the marginal density from the joint density by integrating with respect to the variable we want to eliminate. Thus,

[pic] (12)

and

[pic]. (13)

This proves the assertion that [pic] is Rayleigh and [pic] is uniform on [pic]. To prove that the random variables [pic] and [pic] are independent we note from (11), (12) and (13) that

[pic], (14)

which means that [pic] and [pic] are independent.

Since we will need it later, we want to find the noise power out of the signal processor. Since [pic] is wide-sense stationary we can use (2) and write

[pic] (15)

In (15), the term on the third line is zero because [pic] and [pic] are independent and zero-mean.
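As a quick numerical check of the Rayleigh, uniform and noise-power results in (12), (13) and (15), the following Python sketch (the variable names and the value of the variance are mine, not the author's) draws independent, zero-mean, equal-variance Gaussian quadrature components, forms the envelope and phase, and compares their sample statistics with the predicted values.

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 2.0                      # assumed variance of each quadrature noise component
N = 1_000_000

# Independent, zero-mean, equal-variance Gaussian quadrature components, as assumed in the text
nI = rng.normal(0.0, np.sqrt(sigma2), N)
nQ = rng.normal(0.0, np.sqrt(sigma2), N)

Vn = np.hypot(nI, nQ)             # noise envelope; should be Rayleigh, cf. (12)
theta = np.arctan2(nQ, nI)        # noise phase; should be uniform on (-pi, pi], cf. (13)

print(Vn.mean(), np.sqrt(sigma2 * np.pi / 2))   # Rayleigh mean: sigma*sqrt(pi/2)
print((Vn**2).mean(), 2 * sigma2)               # Rayleigh mean square: 2*sigma^2
print(theta.var(), np.pi**2 / 3)                # variance of a uniform density of width 2*pi

# IF noise power at an arbitrary time instant, cf. (2) and (15); should be ~ sigma2
w0t = rng.uniform(0.0, 2.0 * np.pi)             # arbitrary fixed value of the carrier phase
n = nI * np.cos(w0t) - nQ * np.sin(w0t)
print((n**2).mean(), sigma2)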

[pic]

In the baseband configuration of Figure 2 we represent the noise at the signal processor as a complex random process of the form

[pic] (16)

where [pic] and [pic] are joint, wide-sense stationary, zero-mean, equal variance, Gaussian random processes. They are also such that the random variables [pic] and [pic] are independent. The variances of [pic] and [pic] are both equal to [pic]. The constant of [pic] is included to provide consistency between the noises in the baseband and IF receiver representations. The power in [pic] is given by (making use of the properties of [pic] and [pic])

[pic]. (17)

We note that we can write [pic] in polar form as

[pic] (18)

where

[pic] (19)

and

[pic]. (20)

It will be noted that the definitions of [pic], [pic], [pic] and [pic] are consistent between the IF and baseband representations. This means that the two representations are equivalent in terms of the statistical properties of the noise. We will reach the same conclusion for the signal. The ramification of this is that the detection and false alarm performance of both types of receiver/signal processor configurations will be the same. Thus, the detection and false alarm probability equations that we derive later will be applicable to either receiver configuration.
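A minimal numerical sketch of this consistency claim, under the assumption that the constant in (16) is 1/sqrt(2) (the constant does not survive in this copy): with that factor the baseband noise power of (17) equals the IF noise power of (15), and the phase remains uniform.

import numpy as np

rng = np.random.default_rng(1)
sigma2 = 2.0                                    # assumed variance of each quadrature component
N = 1_000_000

nI = rng.normal(0.0, np.sqrt(sigma2), N)
nQ = rng.normal(0.0, np.sqrt(sigma2), N)

n_bb = (nI + 1j * nQ) / np.sqrt(2)              # assumed form of the baseband noise, cf. (16)

print(np.mean(np.abs(n_bb)**2), sigma2)         # baseband noise power matches the IF noise power
print(np.var(np.angle(n_bb)), np.pi**2 / 3)     # phase is still uniform over an interval of width 2*pi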

It should be noted that, if the receiver you are analyzing is not of one of the two forms indicated above, the ensuing detection and false alarm probability equations may not be applicable to it. The most notable exception to the two representations above is the case where the receiver uses only the I or Q channel in baseband processing. While this is not a common receiver configuration, it is sometimes used. In this case, one would need to derive a different set of detection and false alarm probability equations that are specifically applicable to that configuration.

SIGNAL IN RECEIVERS

We now want to turn our attention to developing a representation of the signals at the output of the signal processor of the receiver. Consistent with the noise case, we want to consider both IF and baseband receiver configurations. Thus, for our analyses we will use Figures 1 and 2 but replace [pic] with [pic], [pic] with [pic], [pic] with [pic], [pic] with [pic] and [pic] with [pic].

We will need to develop three signal representations: one for SW0/SW5 targets, one for SW1/SW2 targets and one for SW3/SW4 targets. We have already acknowledged that the SW1 through SW4 target RCS models are random process models. To be consistent, and consistent with what happens in an actual radar, we will also use a random process model for the SW0/SW5 target RCS.

Since the target RCS models are random processes we must also represent the target voltage signals in the radar (henceforth termed the target signal) as random processes. To that end, the IF representation of the target signal is

[pic] (21)

where

[pic] (22)

and

[pic]. (23)

The baseband signal model is

[pic]. (25)

It will be noted that both of the signal models are consistent with the noise voltage model of the previous sections.

Consistent with the noise model, we assume that [pic] and [pic] are independent.[2]

At this point we need to develop separate signal models for the different types of targets because the signal amplitude fluctuations, [pic], of each are governed by different models.

SW0/SW5

For the SW0/SW5 target case we assume that the target RCS is constant. This means that the target power, and thus the target signal amplitude, will be constant. Thus, we let

[pic]. (26)

With this the IF signal model becomes

[pic]. (27)

Consistent with the constant RCS assumption we have chosen the phase, [pic], as a random variable (still uniform on [pic]). This means that [pic] and [pic] are also random variables (rather than random processes). [pic] is a random process because of the presence of the [pic] term.

The density functions of [pic] and [pic] are the same and are given by

[pic]. (28)

We note that the random variables [pic] and [pic] are not independent since they are “tied” together through the random variable [pic].

The signal power is given by

[pic]. (29)

In the above we can write

[pic]. (30)

Similarly, we get

[pic] (31)

and

[pic]. (32)

Substituting (30), (31) and (32) into (29) results in

[pic]. (33)

From (25) the baseband signal model is

[pic]. (34)

The signal power is

[pic]. (34)

SW1/SW2

For the SW1/SW2 target case we have already stated that the target RCS is governed by the density function

[pic]. (35)

Since the power is a direct function of the RCS (from the radar range equation), the signal power at the signal processor output has a density function of the same form as (35). That is

[pic] (36)

where [pic] (37)

From random variable theory it can be shown that the signal amplitude, [pic], is governed by the density function

[pic]. (38)

This is recognized as a Rayleigh density function. This result, combined with the fact that [pic] in (21) is uniform, and the assumption that [pic] and [pic] are independent, leads to the interesting observation that the signal model for a SW1/SW2 target has the same form as the noise model. That is, the IF signal model for a SW1/SW2 target is of the form

[pic] (39)

where [pic] is Rayleigh and [pic] is uniform on [pic]. If we adapt the results from our noise study we arrive at the conclusion that [pic] and [pic] are Gaussian with the density functions

[pic]. (40)

Furthermore, [pic] and [pic] are independent.
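The chain of claims above, that an exponentially distributed power (35) gives a Rayleigh amplitude (38) and therefore Gaussian in-phase and quadrature signal components (40), is easy to verify numerically. In the sketch below the amplitude is taken proportional to the square root of the power; the factor of 2 in that relation and the variable names are my assumptions and do not affect the distributional claims.

import numpy as np

rng = np.random.default_rng(2)
P_bar = 4.0                          # assumed mean signal power for the SW1/SW2 target
N = 1_000_000

P = rng.exponential(P_bar, N)        # exponential power density, cf. (35)
A = np.sqrt(2.0 * P)                 # amplitude proportional to sqrt(power); should be Rayleigh, cf. (38)
psi = rng.uniform(-np.pi, np.pi, N)  # independent signal phase, uniform

sI = A * np.cos(psi)                 # in-phase signal component
sQ = A * np.sin(psi)                 # quadrature signal component

print((A**2).mean(), 2 * P_bar)            # Rayleigh mean square
print(sI.mean(), sQ.mean())                # both ~ 0 (zero mean)
print(sI.var(), sQ.var())                  # equal variances, cf. (40)
print((sI**4).mean(), 3 * sI.var()**2)     # fourth moment consistent with a Gaussian density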

The signal power is given by

[pic]. (41)

Invoking the independence of [pic] and [pic] and the fact that [pic] and [pic] are zero mean and have equal variances of [pic] leads to the conclusion that

[pic]. (42)

The baseband representation of the signal is

[pic] (43)

where the various terms are as defined above. The power in the baseband signal representation can be written as

[pic] (44)

as expected.

SW3/SW4

For the SW3/SW4 target case we have already stated that the target RCS is governed by the density function

[pic]. (45)

Since the power is a direct function of the RCS (from the radar range equation), the signal power at the signal processor output has a density function of the same form as (45). That is

[pic] (46)

where [pic] (47)

From random variable theory it can be shown that the signal amplitude, [pic], is governed by the density function

[pic]. (48)

Unfortunately, this is about as far as we can carry the signal model development for the SW3/SW4 case. We can invoke the previous statements and write

[pic] (49)

and

[pic]. (50)

However, we don't know the forms of [pic] and [pic]. Furthermore, deriving those forms has proven very laborious and elusive.

We can find the power in the signal from

[pic]. (51)

We will need to deal with the inability to characterize [pic] and [pic] when we consider the characterization of signal-plus-noise.

SIGNAL-PLUS-NOISE IN RECEIVERS

Now that we have characterizations for signals and for noise we want to develop characterizations for the sum of signal and noise. That is, we want to develop the appropriate density functions for

[pic]. (52)

If we are using the IF representation we would write

[pic], (53)

and if we are using the baseband representation we would write

[pic]. (54)

In either representation, the primary variable of interest is the magnitude of the signal-plus-noise voltage, [pic], since this is the quantity used in computing detection probability. We will compute the other quantities as needed.

We will begin the development with the easiest case, which is the SW1/SW2 case, and progress through the SW0/SW5 case to the most difficult, which is the SW3/SW4 case.

SW1/SW2

For the SW1/SW2 case we found that the real and imaginary parts of both the signal and noise were zero-mean, Gaussian random processes. Since Gaussian random processes are relatively easy to work with we will use the baseband representation to derive the density function of [pic]. Since [pic] and [pic] are Gaussian, [pic] will also be Gaussian. Since [pic] and [pic] are zero-mean, [pic] will also be zero-mean. Finally, since [pic] and [pic] are independent, the variance of [pic] will be equal to the sum of the variances of [pic] and [pic]. That is

[pic]. (55)

With this we get

[pic]. (56)

By similar reasoning we get

[pic]. (57)

Since [pic], [pic], [pic] and [pic] are mutually independent, [pic] and [pic] are independent. This, with the above and our previous discussions of noise and the SW1/SW2 signal model, leads to the observation that [pic] is Rayleigh. Thus the density of [pic] is

[pic]. (58)
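A sketch of this SW1/SW2 signal-plus-noise result: because the signal and noise are independent, zero-mean complex Gaussian processes, their sum is complex Gaussian with the variances adding, so the envelope is Rayleigh as asserted in (58). The 1/sqrt(2) baseband scaling, the variance values and the variable names below are my assumptions.

import numpy as np

rng = np.random.default_rng(3)
sigma2_n = 1.0        # assumed noise variance per quadrature component
sigma2_s = 3.0        # assumed SW1/SW2 signal variance per quadrature component
N = 1_000_000

noise  = (rng.normal(0, np.sqrt(sigma2_n), N) + 1j * rng.normal(0, np.sqrt(sigma2_n), N)) / np.sqrt(2)
signal = (rng.normal(0, np.sqrt(sigma2_s), N) + 1j * rng.normal(0, np.sqrt(sigma2_s), N)) / np.sqrt(2)

V = np.abs(signal + noise)                                         # signal-plus-noise envelope

# Rayleigh with the variances added, cf. (55)-(58)
print((V**2).mean(), sigma2_s + sigma2_n)                          # mean square
print(V.mean(), np.sqrt((sigma2_s + sigma2_n) * np.pi / 4))        # mean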

SW0/SW5

Since [pic] and [pic] are not Gaussian for the SW0/SW5 case, when we add them to [pic] and [pic] the resulting [pic] and [pic] will not be Gaussian. This means that directly manipulating [pic] and [pic] to obtain the density function of [pic] will be difficult. Therefore, we take a different tack and invoke some properties of joint and marginal density functions. Specifically, we use

[pic]. (59)

We then use

[pic] (60)

to get the density function of [pic]. This procedure involves some tedious math but it is math that can be found in many books on random variable theory.

To execute the derivation we start with the IF representation and write

[pic] (61)

where we have made use of (27). If we expand (61) and group terms we get

[pic]. (62)

According to the conditional density of (59) we want to consider (62) for the specific value of [pic]. If we do this we get

[pic]. (63)

With this we note that [pic] and [pic] are Gaussian random variables with means of [pic] and [pic]. They also have the same variance of [pic]. Furthermore, since [pic] and [pic] are independent, [pic] and [pic] are also independent. With this we can write

[pic]. (64)

If we invoke the earlier discussion (Equations (4), (5) and (6)) we can write

[pic]. (65)

If we substitute from (64) we get

[pic]. (66)

We can manipulate the exponent to yield

[pic] (67)

Finally we can use

[pic] (68)

along with (59) to write

[pic]. (69)

For the next step we need to integrate [pic] with respect to [pic] and [pic] to derive the desired marginal density, [pic]. That is (after a little manipulation)

[pic]. (70)

We want to first consider the integral with respect to [pic]. That is,

[pic] (71)

We recognize that the integrand is periodic with a period of [pic] and that the integral is performed over a period. This means that we can evaluate the integral over any period. Specifically, we will choose the period from [pic] to [pic]. With this we get

[pic]. (72)

If we make the change of variables [pic] the integral becomes

[pic] (73)

where [pic] is a modified Bessel function of the first kind.

If we substitute (73) into (70) the latter becomes

[pic] (74)

where the last step derives from the fact that the integral with respect to [pic] is equal to one. Equation (74) is the desired result, which is the density function of [pic].
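The expression in (74) does not survive in this copy, but the derivation above (a constant-amplitude signal in Gaussian quadrature noise, followed by the Bessel-function integral of (73)) is the classical route to the Rician density. The sketch below compares a Monte Carlo histogram of the envelope with the standard Rician form f(v) = (v/sigma^2) exp(-(v^2 + A^2)/(2 sigma^2)) I0(A v / sigma^2); treat that expression, and the parameter values, as my assumptions about what (74) states.

import numpy as np
from scipy.special import i0

rng = np.random.default_rng(4)
A = 2.0                 # assumed constant SW0/SW5 signal amplitude
sigma2 = 1.0            # assumed noise variance per quadrature component
N = 1_000_000

psi = rng.uniform(-np.pi, np.pi, N)                       # uniform signal phase
xI = A * np.cos(psi) + rng.normal(0.0, np.sqrt(sigma2), N)
xQ = A * np.sin(psi) + rng.normal(0.0, np.sqrt(sigma2), N)
V = np.hypot(xI, xQ)                                      # signal-plus-noise envelope

hist, edges = np.histogram(V, bins=100, density=True)
v = 0.5 * (edges[:-1] + edges[1:])
rician = (v / sigma2) * np.exp(-(v**2 + A**2) / (2 * sigma2)) * i0(A * v / sigma2)

print(np.max(np.abs(hist - rician)))                      # small for large N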

SW3/SW4

As with the SW0/SW5 case, [pic] and [pic] are not Gaussian for the SW3/SW4 case. Thus, when we add them to [pic] and [pic] the resulting [pic] and [pic] will not be Gaussian. This means that directly manipulating [pic] and [pic] to obtain the density function [pic] will be difficult. Based on our experience with the SW0/SW5 signal, we will again use the joint/conditional density approach. In this case we note that the IF signal-plus-noise voltage is given by

[pic]. (75)

In this case we will need to find the joint density of [pic], [pic], [pic] and [pic] and perform the appropriate integration to get the marginal density of [pic]. More specifically, we will find

[pic] (76)

and

[pic]. (77)

We can draw on our work from the SW0/SW5 case to write

[pic]. (78)

Further, since [pic] and [pic] are, by definition, independent, we can write

[pic]. (79)

If we substitute (78) and (79) into (76) we get

[pic]. (80)

From (77) we can write

[pic] (81)

where

[pic] (82)

and

[pic]. (83)

We recognize (83) as the same double integral that appears in (70). Thus, using the work that led from (70) to (74) we get

[pic] (84)

and

[pic]. (85)

To complete the calculation of [pic] we must compute the integral

[pic] (86)

where

[pic]. (87)

It turns out that Maple was able to compute the integral as

[pic]. (88)

With this [pic] becomes

[pic] (89)

which, after some manipulation, can be written as

[pic]. (90)

Now that we have completed the characterization of noise, signal and signal-plus-noise we are ready to attack the detection problem.

DETECTION PROBABILITY

A functional block diagram of the detection process is illustrated in Figure 3. It consists of a magnitude detector and a threshold device. The magnitude detector determines the magnitude of the signal coming from the signal processor, and the threshold device makes a binary decision: it outputs a detection if the signal magnitude is above some threshold and a no-detection if the signal magnitude is below the threshold.

[pic]

The magnitude detector can be a square-law detector or a linear detector. Both variants are illustrated functionally in Figure 4 for the IF implementation and the baseband implementation. In the IF implementation, the detector consists, functionally, of a diode followed by a low-pass filter. If the circuit is designed such that it uses small voltage levels, the diode will operate in its small-signal region and the result will be a square-law detector. If the circuit is designed such that it uses large voltage levels, the diode will operate in its large-signal region and the result will be a linear detector.

For the baseband case, the digital hardware (which we assume in the baseband signal processing case) will actually form the square of the magnitude of the complex signal out of the signal processor by squaring the real and imaginary components of the signal processor output and then adding them. The result of this operation will be a square-law detector. In some instances the detector also performs a square root to form the magnitude.

[pic]

In either the IF or baseband representation the output of the square-law detector will be [pic] when only noise is present at the signal processor output and [pic] when signal-plus-noise is present at the signal processor output. For the linear detector the output will be [pic] when only noise is present at the signal processor output and [pic] when signal-plus-noise is present at the signal processor output.
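A minimal sketch of the detector-plus-threshold logic of Figures 3 and 4 for the digital baseband case: the square-law detector forms the sum of the squared real and imaginary components, and the threshold device declares a detection when that value exceeds the threshold. The threshold values, sample sizes and names below are placeholders of my choosing.

import numpy as np

def square_law_detect(z: np.ndarray, threshold_power: float) -> np.ndarray:
    """Square-law detector followed by a threshold test on complex baseband samples."""
    power = z.real**2 + z.imag**2            # square-law detector output
    return power > threshold_power           # True = detection, False = no detection

def linear_detect(z: np.ndarray, threshold_voltage: float) -> np.ndarray:
    """Linear (envelope) detector followed by a threshold test."""
    return np.abs(z) > threshold_voltage     # same decisions if threshold_power = threshold_voltage**2

# Noise-only samples: the fraction of detections estimates the false alarm probability
rng = np.random.default_rng(5)
z = (rng.normal(size=100_000) + 1j * rng.normal(size=100_000)) / np.sqrt(2)   # unit-power noise
print(square_law_detect(z, threshold_power=4.0).mean())
print(linear_detect(z, threshold_voltage=2.0).mean())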

Since both [pic] and [pic] are random processes we must use concepts from random process theory to characterize the performance of the detection logic. In particular, we will characterize its performance using probabilities.

Since we have two signal conditions (noise only or signal-plus-noise) and two outcomes from the threshold check we have four possible events to consider:

1. signal-plus-noise [pic] threshold – detection

2. signal-plus-noise < threshold – missed detection

3. noise [pic] threshold – false alarm

4. noise < threshold – no false alarm

Of the above, the two desired events are 1 and 4. That is, we want to detect targets when they are present and we don't want to detect noise when targets are not present. Since events 1 and 2 are related and events 3 and 4 are related we only find probabilities associated with events 1 and 3. We term the probability of the first event occurring the detection probability and the probability of the third event occurring the false alarm probability. In equation form

[pic] (91)

and

[pic]. (92)

where [pic] is the signal-plus-noise voltage evaluated at a specific time and [pic] is the noise voltage evaluated at a specific time. This carries some subtle implications. First, when one finds detection probability it is tacitly assumed that the target return is present at the time the output of the threshold device is checked. Likewise, when one finds false alarm probability it is tacitly assumed that the target return is not present at the time the output of the threshold device is checked.

In practical applications it is more appropriate to say: at the time the output of the threshold device is checked, the probability that there will be a threshold crossing is equal to [pic] if the signal contains a target signal and [pic] if the signal does not contain a target signal. In typical applications the output of the threshold device will be checked at times separated by a pulse width, resulting in many checks per PRI.

It will be noted that the above probabilities are conditional probabilities. In normal practice we don’t explicitly use the conditioning and write

[pic] (93)

and

[pic] (94)

and recognize that we should use signal-plus-noise when we assume the target is present and noise only when we assume that the target is not present.

The above assumes that the detector preceding the threshold device is a linear detector. If the detector is a square law detector the appropriate equations would be

[pic] (95)

and

[pic]. (96)

From probability theory we can write

[pic] or [pic] (97)

and

[pic] or [pic] (98)

In the above [pic] is the threshold voltage level and [pic] is the threshold expressed as normalized power.

To avoid having to use two sets of [pic] and [pic] equations we will digress to show that we can compute them using either of the integrals of (97) and (98). In homework 7 you showed that if [pic] then

[pic]. (99)

If we write

[pic] (100)

we can use (99) to write

[pic]. (101)

If we make the change of variables [pic] we can write

[pic]. (102)

Similar results apply to [pic] and indicate that one can use either form to compute detection and false alarm probability.

If we examine the equations for [pic] and [pic] we note that both are integrals over the same limits. This integration is illustrated graphically in Figure 5. It will be noted that [pic] and [pic] are areas under their respective density functions to the right of the threshold value. Thus, increasing the threshold decreases both probabilities and decreasing the threshold increases both probabilities. This is not exactly what we want. Ideally, we want to select the threshold so that we have [pic] and [pic]. However, this is not possible, so we usually choose the threshold as some sort of tradeoff between [pic] and [pic]. In fact, what we actually do is choose the threshold to achieve a certain [pic] and find other means of increasing [pic]. If we refer to (12), the only parameter that affects [pic] is the noise power, [pic]. While we have some control over this via noise figure and effective noise bandwidth, executing this control can be very expensive. On the other hand, [pic] is dependent upon both [pic] and [pic], which gives us some degree of control. In fact, what we usually try to do is affect both [pic] and [pic] by increasing [pic] and decreasing [pic]. The net result is that we try to maximize SNR.

[pic]
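The tradeoff just described is easy to see numerically: sweeping the threshold moves the false alarm and detection probabilities together, since both are right-tail areas, while a higher SNR raises the detection probability for a given threshold. The sketch below uses a Gaussian (SW1/SW2-like) signal purely for illustration; the SNR, thresholds and names are my choices.

import numpy as np

rng = np.random.default_rng(7)
N = 500_000
snr = 10 ** (13 / 10)                              # 13 dB, for illustration

noise  = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)          # unit-power noise
signal = np.sqrt(snr / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))    # Gaussian signal, power = snr

v_noise = np.abs(noise)                            # noise-only envelope
v_spn   = np.abs(signal + noise)                   # signal-plus-noise envelope

for thresh in (1.0, 2.0, 3.0):                     # linear-detector threshold voltages
    pfa = np.mean(v_noise > thresh)
    pd  = np.mean(v_spn > thresh)
    print(thresh, pfa, pd)                         # both probabilities fall as the threshold rises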

If we use (12) in (98) we get

[pic]. (103)

In this equation we define

[pic] (104)

as the threshold-to-noise ratio. As indicated earlier, we usually select a desired [pic] and, from this, derive the required TNR as

[pic]. (105)
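Integrating the Rayleigh noise density of (12) above the threshold, as in (103), presumably gives Pfa = exp(-TNR), so (105) is presumably TNR = -ln(Pfa). A small numeric sketch under that assumption, with my own variable names:

import numpy as np

def tnr_from_pfa(pfa: float) -> float:
    """Threshold-to-noise ratio that yields the requested false alarm probability,
    assuming a Rayleigh noise envelope so that Pfa = exp(-TNR)."""
    return -np.log(pfa)

pfa = 1e-6
tnr = tnr_from_pfa(pfa)
print(tnr, 10 * np.log10(tnr))        # about 13.8, or roughly 11.4 dB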

We compute detection probability for the three target classes by substituting (58), (74) and (90) into (102). The result for the SW0/SW5 case is

[pic]. (106)

where

[pic] (107)

is the signal-to-noise ratio that one would compute from the radar range equation and

[pic] (108)

is one form of the error function.

The detection probability equations for the SW1/SW2 case and for the SW3/SW4 case are, respectively

[pic] (109)

and

[pic]. (110)

In (109) and (110) SNR is the signal-to-noise ratio computed from the radar range equation.
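Since the expressions in (106), (109) and (110) are lost in this copy, the sketch below computes single-pulse detection probabilities under assumptions consistent with the text: the SW0/SW5 case uses the Marcum Q function written via the noncentral chi-squared survival function (a standard exact form for a constant-amplitude target, presumably equivalent to or approximated by the error-function expression of (106)-(108)); the SW1/SW2 case uses the widely quoted closed form exp(-TNR/(1+SNR)), presumably what (109) states; and the SW3/SW4 case is estimated by Monte Carlo from the chi-squared, four-degree-of-freedom fluctuation model of (45) rather than by asserting a closed form for (110).

import numpy as np
from scipy.stats import ncx2

def pd_sw0(snr: float, tnr: float) -> float:
    """Constant-RCS (SW0/SW5) single-pulse Pd: Marcum Q1(sqrt(2*SNR), sqrt(2*TNR)),
    written via the noncentral chi-squared survival function."""
    return ncx2.sf(2 * tnr, df=2, nc=2 * snr)

def pd_sw12(snr: float, tnr: float) -> float:
    """SW1/SW2 single-pulse Pd (standard closed form, assumed to match (109))."""
    return np.exp(-tnr / (1 + snr))

def pd_sw34_mc(snr: float, tnr: float, n: int = 1_000_000, seed: int = 6) -> float:
    """SW3/SW4 single-pulse Pd by Monte Carlo: signal power drawn from the
    chi-squared, 4 DOF fluctuation model of (45), added to unit-power complex noise."""
    rng = np.random.default_rng(seed)
    p = rng.gamma(shape=2.0, scale=snr / 2.0, size=n)          # power density (4*P/Pbar^2)*exp(-2*P/Pbar)
    phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    v2 = np.abs(np.sqrt(p) * phase + noise) ** 2               # square-law detector output
    return float(np.mean(v2 > tnr))                            # unit noise power, so the threshold is TNR

tnr = -np.log(1e-6)                                            # threshold for Pfa = 1e-6
for snr_db in (10, 13, 16, 20):
    snr = 10 ** (snr_db / 10)
    print(snr_db, pd_sw0(snr, tnr), pd_sw12(snr, tnr), pd_sw34_mc(snr, tnr))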

Figure 6 contains plots of [pic] versus SNR for the three target types with [pic], a typical value. It is interesting to note the [pic] behavior for the three target types. In general, the SW0/SW5 target provides the largest [pic] for a given SNR, the SW1/SW2 target provides the lowest [pic] and the SW3/SW4 target is somewhere between the other two. With some thought this makes sense. For the SW0/SW5 target model the only thing affecting a threshold crossing is the noise (since the RCS of the target is constant). For the SW1/SW2 target the RCS can fluctuate considerably; thus both noise and RCS fluctuation affect the threshold crossing. The standard assumption for the SW3/SW4 model is that it consists of a predominant (presumably constant RCS) scatterer and several smaller scatterers. Thus, the threshold crossing for the SW3/SW4 target is affected somewhat by RCS fluctuation, but not to the extent of the SW1/SW2 target.

It is interesting to note that the SNR required for [pic], with [pic], on a SW1/SW2 target is about 13 dB. This same SNR gives a [pic] on a SW0/SW5 target. To obtain a [pic] on a SW1/SW2 target requires an SNR of about 21 dB. These numbers are the origin of the 13 dB and 20 dB SNR numbers we used in our radar range equation studies.

[pic]
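The 13 dB and 21 dB figures quoted above can be reproduced from the assumed SW1/SW2 closed form: writing it as Pd = Pfa^(1/(1+SNR)) and solving for the SNR that gives, for example, Pd = 0.5 and Pd = 0.9 (the specific Pd values do not survive in this copy) with Pfa = 10^-6, the value used in Figure 6:

import numpy as np

pfa = 1e-6
for pd in (0.5, 0.9):
    snr = np.log(pfa) / np.log(pd) - 1          # solve Pd = Pfa**(1/(1+SNR)) for SNR
    print(pd, 10 * np.log10(snr))               # about 12.8 dB and 21.1 dB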

COMPUTATION OF Pfa

One of the parameters in the detection probability equations is threshold-to-noise ratio, [pic]. As indicated in (105), [pic], where [pic] is the false alarm probability. False alarm probability is set by system requirements.

In a radar, false alarms result in wasted radar resources (energy, timeline and hardware) in that every time a false alarm occurs the radar must expend resources determining that it did, in fact, occur. Said another way, every time the output of the amplitude detector exceeds the threshold, [pic], a detection is recorded. The radar data processor does not know, a priori, whether the detection is a target detection or the result of noise (i.e. a false alarm). Therefore, the radar must verify each detection. This usually requires transmission of another pulse and another threshold check (an expenditure of time and energy). Further, until the detection is verified, it must be carried in the computer as a valid target detection (an expenditure of hardware).

In order to minimize wasted radar resources we wish to minimize the possibility of a false alarm. Said another way, we want to minimize [pic]. However, we can't set [pic] to an arbitrarily small value because this will increase [pic] and reduce the detection probability, [pic] (see (106), (109) and (110)). As a result we set [pic] to provide an acceptable number of false alarms within a given time period. This last statement provides the criterion normally used to compute [pic]. Specifically, one states that [pic] is chosen to provide an average of one false alarm within a time period that is termed the false alarm time, [pic]. [pic] is usually set by some criterion that is driven by radar resource limitations.

The classical method of determining [pic] is based strictly on timing. This can be explained with the help of Figure 3-7 which contains a plot of noise at the output of the amplitude detector. The horizontal line labeled Threshold represents the detection threshold voltage level. It will be noted that the noise voltage is above the threshold for three time intervals of length [pic], [pic] and [pic]. Further, the spacings between threshold crossings are [pic] and [pic]. Since a threshold crossing constitutes a false alarm one can say that over the interval [pic] false alarms occur for a period of [pic]. Likewise, over the interval [pic] false alarms occur for a period of [pic], and so forth. If we were to average all of the [pic] we would have the average time that the noise is above the threshold, [pic]. Likewise, if we were to average all of the [pic] we would have the average time between false alarms; i.e. the false alarm time, [pic]. To get the false alarm probability we would take the ratio of [pic] to [pic], i.e.

[pic]. (111)

[pic]

While [pic] is reasonably easy to specify, the specification of [pic] is not obvious. The standard assumption is to set [pic] to the range resolution expressed as time, [pic]. For an unmodulated pulse, [pic] is the pulse width. For a modulated pulse, [pic] is the reciprocal of the modulation bandwidth.

It has been the author's experience that the above method of determining [pic] is not very accurate. While it would be possible to place the requisite number of caveats on (111) to make it accurate, with modern radars this is not necessary.

The previously described method of determining [pic] was based on the assumption that detections were recorded via hardware operating on a continuous-time signal. In modern radars detection is based on examining signals that have been converted to the discrete-time domain by sampling or by an analog-to-digital converter. This makes determination of [pic] easier, and more intuitively appealing, in that one can deal with discrete events. With modern radars one computes the number of false alarm chances, [pic], within the desired false alarm time, [pic], and computes the probability of false alarm from

[pic]. (112)

To compute [pic] one needs to know certain things about the operation of the radar. We will outline some thoughts along this line. In a typical radar, the return signal from each pulse is sampled with a period equal to the range resolution, [pic], of the pulse. As indicated above, this would be equal to the pulse width for an unmodulated pulse and the reciprocal of the modulation bandwidth for a modulated pulse. These range samples are usually taken over a duration, [pic], that is less than the PRI, [pic]. In a search radar, [pic] might be only slightly less than [pic]. However, for a track radar, [pic] may be significantly less than [pic]. With the above, we can compute the number of range samples per PRI as

[pic]. (113)

Each of the range samples represents a chance that a false alarm will occur.

In a time period of [pic] the radar will transmit

[pic] (114)

pulses. Thus, the number of range samples (and thus chances for false alarm) that one has over the time period of [pic] is

[pic]. (115)

In some radars, the signal processor consists of several ([pic]) parallel Doppler channels. This means that it will also contain [pic] amplitude detectors. Each amplitude detector will generate [pic] range samples per PRI. Thus, in this case, the total number of range samples in the time period [pic] would be

[pic]. (116)

In either case, the false alarm probability would be given by (112).

To illustrate the above, we consider a simple example. We have a search radar that has a PRI of [pic]. It uses a [pic] pulse with linear frequency modulation (LFM) where the LFM bandwidth is 1 MHz. With this we get [pic]. We assume that the radar starts its range samples one pulse-width after the transmit pulse and stops taking range samples one pulse-width before the succeeding transmit pulse. From this we get [pic]. The signal processor is not a multi-channel Doppler processor. The radar has a search scan time of [pic] and we desire no more than one false alarm every two scans.

From the last sentence above we get [pic]. If we combine this with the PRI we get

[pic]. (117)

From [pic] and [pic] we get

[pic]. (118)

This results in

[pic] (119)

and

[pic]. (120)
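The numerical values in this example are lost in this copy, so the sketch below reruns the same chain of calculations, (117) through (120), using the general relations (112) through (115) with stand-in values; the PRI, pulse width and scan time are hypothetical placeholders, not the author's numbers, while the 1 MHz LFM bandwidth and the one-false-alarm-every-two-scans criterion come from the text.

# Hypothetical rerun of the false alarm probability example, cf. (112)-(115) and (117)-(120)
PRI       = 1.0e-3      # pulse repetition interval, s (placeholder value)
B_lfm     = 1.0e6       # LFM bandwidth, Hz (1 MHz, from the text)
tau_pulse = 10.0e-6     # uncompressed pulse width, s (placeholder value)
T_scan    = 2.0         # search scan time, s (placeholder value)

tau_res = 1.0 / B_lfm                 # range-sample period = reciprocal of the LFM bandwidth
T_fa    = 2.0 * T_scan                # one false alarm every two scans, as stated in the text

# Sampling starts one pulse width after transmit and stops one pulse width before the next pulse
T_r  = PRI - 2.0 * tau_pulse          # sampled portion of each PRI
N_r  = T_r / tau_res                  # range samples per PRI, cf. (113)
N_p  = T_fa / PRI                     # pulses transmitted in the false alarm time, cf. (114)
N_fa = N_r * N_p                      # false alarm chances in the false alarm time, cf. (115)

Pfa = 1.0 / N_fa                      # one false alarm, on average, in T_fa, cf. (112)
print(N_r, N_p, N_fa, Pfa)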

-----------------------

[1] E.g., A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill.

[2] We have made a large number of assumptions concerning the statistical properties of the signal and noise. A natural question is: are the assumptions reasonable? The best answer is that we design radars so that the assumptions are satisfied. In particular, we endeavor to make the receiver and signal processor linear. Because of this and the central limit theorem, we can reasonably assume that [pic] and [pic] are Gaussian. Further, if we enforce reasonable constraints on the bandwidth of receiver components, we can reasonably assume that the independence requirements are satisfied. The stationarity requirements are easily satisfied if we assume that the receiver gains don't change with time. We enforce the zero-mean assumption by using bandpass processes to eliminate DC components. For signals, we won't need the Gaussian requirement. However, we will need the stationarity, zero-mean and other requirements. These are usually satisfied for signals based on the same assumptions as for noise, and by the requirements that the target RCS is a stationary process and that [pic] is uniform on [pic] and is wide-sense stationary. Both of the latter assumptions are valid for practical radars and targets.

-----------------------

[pic]

Figure 4 – IF and Baseband Detectors – Linear and Square Law

[pic]

Figure 3 – Block Diagram of Detector and Threshold

[pic]

Figure 5 – Probability Density Functions for Noise and Signal-plus-noise

[pic]

Figure 2 – Baseband Receiver/Signal Processor Representation

[pic]

Figure 1 – IF Receiver/Signal Processor Representation

[pic]

Figure 6 – Pd versus SNR for Three Target Types and Pfa = 10^-6

[pic]

Figure 3-7 – Illustration of False Alarm Time
