


Transformations

Dear students,

Since we have covered the mgf technique extensively already, here we only review the cdf and the pdf techniques, first for univariate (one-to-one and more-to-one) and then for bivariate (one-to-one and more-to-one) transformations.

1. The cumulative distribution function (cdf) technique

Suppose $Y$ is a continuous random variable with cumulative distribution function (cdf) $F_Y(y) \equiv P(Y \le y)$. Let $U = g(Y)$ be a function of $Y$; our goal is to find the distribution of $U$. The cdf technique is especially convenient when the cdf $F_Y(y)$ has a closed-form analytical expression. This method can be used for both univariate and bivariate transformations.

Steps of the cdf technique:
1. Identify the domains of $Y$ and $U$.
2. Write $F_U(u) = P(U \le u)$, the cdf of $U$, in terms of $F_Y(y)$, the cdf of $Y$.
3. Differentiate $F_U(u)$ to obtain the pdf of $U$, $f_U(u)$.

Example 1. Suppose that $Y \sim U(0,1)$. Find the distribution of $U = g(Y) = -\ln Y$.

Solution. The cdf of $Y \sim U(0,1)$ is given by
$$F_Y(y) = \begin{cases} 0, & y \le 0 \\ y, & 0 < y \le 1 \\ 1, & y > 1. \end{cases}$$
The domain (the domain is the region where the pdf is non-zero) of $Y \sim U(0,1)$ is $R_Y = \{y : 0 < y < 1\}$; thus, because $u = -\ln y > 0$, it follows that the domain of $U$ is $R_U = \{u : u > 0\}$. The cdf of $U$ is
$$F_U(u) = P(U \le u) = P(-\ln Y \le u) = P(\ln Y \ge -u) = P(Y \ge e^{-u}) = 1 - P(Y \le e^{-u}) = 1 - F_Y(e^{-u}).$$
Because $F_Y(y) = y$ for $0 < y < 1$, we have, for $u > 0$,
$$F_U(u) = 1 - F_Y(e^{-u}) = 1 - e^{-u}.$$
Taking derivatives, we get, for $u > 0$,
$$f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du}\left(1 - e^{-u}\right) = e^{-u}.$$
Summarizing,
$$f_U(u) = \begin{cases} e^{-u}, & u > 0 \\ 0, & \text{otherwise}. \end{cases}$$
This is an exponential pdf with mean $1/\lambda = 1$; that is, $U \sim$ exponential($\lambda = 1$). □

Example 2. Suppose that $Y \sim U(-\pi/2, \pi/2)$. Find the distribution of the random variable defined by $U = g(Y) = \tan(Y)$.

Solution. The cdf of $Y \sim U(-\pi/2, \pi/2)$ is given by
$$F_Y(y) = \begin{cases} 0, & y \le -\pi/2 \\ \dfrac{y + \pi/2}{\pi}, & -\pi/2 < y \le \pi/2 \\ 1, & y > \pi/2. \end{cases}$$
The domain of $Y$ is $R_Y = \{y : -\pi/2 < y < \pi/2\}$. Sketching a graph of the tangent function from $-\pi/2$ to $\pi/2$, we see that $-\infty < u < \infty$. Thus, $R_U = \{u : -\infty < u < \infty\} \equiv \mathbb{R}$, the set of all reals. The cdf of $U$ is
$$F_U(u) = P(U \le u) = P(\tan Y \le u) = P(Y \le \tan^{-1} u) = F_Y(\tan^{-1} u).$$
Because $F_Y(y) = \dfrac{y + \pi/2}{\pi}$ for $-\pi/2 < y < \pi/2$, we have, for $u \in \mathbb{R}$,
$$F_U(u) = F_Y(\tan^{-1} u) = \frac{\tan^{-1}(u) + \pi/2}{\pi}.$$
The pdf of $U$, for $u \in \mathbb{R}$, is given by
$$f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du}\left[\frac{\tan^{-1}(u) + \pi/2}{\pi}\right] = \frac{1}{\pi(1 + u^2)}.$$
Summarizing,
$$f_U(u) = \begin{cases} \dfrac{1}{\pi(1+u^2)}, & -\infty < u < \infty \\ 0, & \text{otherwise}. \end{cases}$$
A random variable with this pdf is said to have a (standard) Cauchy distribution. One interesting fact about a Cauchy random variable is that none of its moments are finite. Thus, if $U$ has a Cauchy distribution, $E(U)$, and all higher-order moments, do not exist.

Exercise: If $U$ is standard Cauchy, show that $E|U| = +\infty$, so that $E(U)$ does not exist. □
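Before moving to the pdf technique, here is a small Monte Carlo sketch that checks the conclusions of Examples 1 and 2 numerically. This is my own illustration, not part of the handout; the sample size, seed, and use of numpy/scipy are arbitrary choices.

```python
# Hypothetical simulation check of Examples 1 and 2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Example 1: Y ~ U(0,1), so U = -ln(Y) should be exponential with mean 1.
y = rng.uniform(0.0, 1.0, size=n)
u = -np.log(y)
print("Example 1: sample mean of U =", u.mean())             # should be close to 1
print("           KS test vs Exp(1):", stats.kstest(u, "expon"))

# Example 2: Y ~ U(-pi/2, pi/2), so U = tan(Y) should be standard Cauchy.
y = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
u = np.tan(y)
print("Example 2: sample median of U =", np.median(u))       # should be close to 0
print("           KS test vs Cauchy:", stats.kstest(u, "cauchy"))
```

Large Kolmogorov-Smirnov p-values (and a sample mean near 1 in Example 1) are consistent with the derived distributions.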
2. The probability density function (pdf) technique, univariate

Suppose that $Y$ is a continuous random variable with cdf $F_Y(y)$ and domain $R_Y$, and let $U = g(Y)$, where $g : R_Y \to \mathbb{R}$ is a continuous, one-to-one function defined over $R_Y$. Examples of such functions include continuous (strictly) increasing or decreasing functions. Recall from calculus that if $g$ is one-to-one, it has a unique inverse $g^{-1}$. Also recall that if $g$ is increasing (decreasing), then so is $g^{-1}$.

Derivation of the pdf technique formula using the cdf method: Suppose that $g(y)$ is a strictly increasing function of $y$ defined over $R_Y$. Then $u = g(y) \iff y = g^{-1}(u)$, and
$$F_U(u) = P(U \le u) = P(g(Y) \le u) = P(Y \le g^{-1}(u)) = F_Y(g^{-1}(u)).$$
Differentiating $F_U(u)$ with respect to $u$, we get, by the chain rule,
$$f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du} F_Y(g^{-1}(u)) = f_Y(g^{-1}(u)) \, \frac{d}{du} g^{-1}(u).$$
Now, since $g$ is increasing, so is $g^{-1}$; thus $\frac{d}{du} g^{-1}(u) > 0$. If $g(y)$ is strictly decreasing, then $F_U(u) = 1 - F_Y(g^{-1}(u))$ and $\frac{d}{du} g^{-1}(u) < 0$, which gives
$$f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du}\left\{1 - F_Y(g^{-1}(u))\right\} = -f_Y(g^{-1}(u)) \, \frac{d}{du} g^{-1}(u).$$
Combining both cases, we have shown that the pdf of $U$, where nonzero, is given by
$$f_U(u) = f_Y(g^{-1}(u)) \left| \frac{d}{du} g^{-1}(u) \right|.$$
It is again important to keep track of the domain of $U$. If $R_Y$ denotes the domain of $Y$, then $R_U$, the domain of $U$, is given by $R_U = \{u : u = g(y),\ y \in R_Y\}$.

Steps of the pdf technique:
1. Verify that the transformation $u = g(y)$ is continuous and one-to-one over $R_Y$.
2. Find the domains of $Y$ and $U$.
3. Find the inverse transformation $y = g^{-1}(u)$ and its derivative with respect to $u$.
4. Use the formula above for $f_U(u)$.

Example 3. Suppose that $Y \sim$ exponential($\beta$); i.e., the pdf of $Y$ is
$$f_Y(y) = \begin{cases} \frac{1}{\beta} e^{-y/\beta}, & y > 0 \\ 0, & \text{otherwise}. \end{cases}$$
Let $U = g(Y) = \sqrt{Y}$. Use the method of transformations to find the pdf of $U$.

Solution. First, we note that the transformation $g(y) = \sqrt{y}$ is a continuous, strictly increasing function of $y$ over $R_Y = \{y : y > 0\}$, and thus $g$ is one-to-one. Next, we need to find the domain of $U$. This is easy, since $y > 0$ implies $u = \sqrt{y} > 0$ as well. Thus, $R_U = \{u : u > 0\}$. Now we find the inverse transformation,
$$g(y) = u = \sqrt{y} \iff y = g^{-1}(u) = u^2,$$
and its derivative,
$$\frac{d}{du} g^{-1}(u) = \frac{d}{du} u^2 = 2u.$$
Thus, for $u > 0$,
$$f_U(u) = f_Y(g^{-1}(u)) \left| \frac{d}{du} g^{-1}(u) \right| = \frac{1}{\beta} e^{-u^2/\beta} \times 2u = \frac{2u}{\beta} e^{-u^2/\beta}.$$
Summarizing,
$$f_U(u) = \begin{cases} \dfrac{2u}{\beta} e^{-u^2/\beta}, & u > 0 \\ 0, & \text{otherwise}. \end{cases}$$
This is a Weibull distribution. The Weibull family of distributions is common in life science (survival analysis), engineering, and actuarial science applications. □

Example 4. Suppose that $Y \sim$ beta($\alpha = 6$, $\beta = 2$); i.e., the pdf of $Y$ is given by
$$f_Y(y) = \begin{cases} 42\, y^5 (1-y), & 0 < y < 1 \\ 0, & \text{otherwise}. \end{cases}$$
What is the distribution of $U = g(Y) = 1 - Y$?

Solution. First, we note that the transformation $g(y) = 1 - y$ is a continuous, decreasing function of $y$ over $R_Y = \{y : 0 < y < 1\}$, and thus $g$ is one-to-one. Next, we need to find the domain of $U$. This is easy, since $0 < y < 1$ clearly implies $0 < u < 1$. Thus, $R_U = \{u : 0 < u < 1\}$. Now we find the inverse transformation,
$$g(y) = u = 1 - y \iff y = g^{-1}(u) = 1 - u,$$
and its derivative,
$$\frac{d}{du} g^{-1}(u) = \frac{d}{du}(1 - u) = -1.$$
Thus, for $0 < u < 1$,
$$f_U(u) = f_Y(g^{-1}(u)) \left| \frac{d}{du} g^{-1}(u) \right| = 42\,(1-u)^5 \left[1 - (1-u)\right] \times |-1| = 42\, u (1-u)^5.$$
Summarizing,
$$f_U(u) = \begin{cases} 42\, u (1-u)^5, & 0 < u < 1 \\ 0, & \text{otherwise}. \end{cases}$$
We recognize this as a beta distribution with parameters $\alpha = 2$ and $\beta = 6$. □
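As a sanity check on Examples 3 and 4, here is a hypothetical simulation sketch; it is not part of the handout, and the choice $\beta = 2$, the sample size, and the seed are mine. The derived density in Example 3 corresponds to scipy's weibull_min with shape 2 and scale $\sqrt{\beta}$.

```python
# Hypothetical simulation check of Examples 3 and 4.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000

# Example 3: Y ~ exponential(beta), U = sqrt(Y) should be Weibull(shape=2, scale=sqrt(beta)).
beta = 2.0
u3 = np.sqrt(rng.exponential(scale=beta, size=n))
print("Example 3 KS vs Weibull(2, scale=sqrt(beta)):",
      stats.kstest(u3, "weibull_min", args=(2, 0, np.sqrt(beta))))

# Example 4: Y ~ beta(6, 2), U = 1 - Y should be beta(2, 6).
u4 = 1.0 - rng.beta(6, 2, size=n)
print("Example 4 KS vs beta(2, 6):", stats.kstest(u4, "beta", args=(2, 6)))
```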
More-to-one transformations: What happens if $u = g(y)$ is not a one-to-one transformation? In this case, we can still use the method of transformations, but we have to "break up" the transformation $g : R_Y \to R_U$ into disjoint regions where $g$ is one-to-one.

RESULT: Suppose that $Y$ is a continuous random variable with pdf $f_Y(y)$ and that $U = g(Y)$, where $g$ is not necessarily a one-to-one (but continuous) function of $y$ over $R_Y$. Suppose that we can partition $R_Y$ into a finite collection of sets, say $A_0, A_1, A_2, \ldots, A_k$, where $P(Y \in A_0) = 0$ and $P(Y \in A_i) > 0$ for all $i \ne 0$, and $f_Y(y)$ is continuous on each $A_i$, $i \ne 0$. Furthermore, suppose that the transformation is one-to-one from each $A_i$ ($i = 1, 2, \ldots, k$) to $B$, where $B$ is the domain of $U = g(Y)$, so that $g_i^{-1}(\cdot)$ is the one-to-one inverse mapping from $B$ to $A_i$. Then the pdf of $U$ is given by
$$f_U(u) = \begin{cases} \displaystyle\sum_{i=1}^{k} f_Y(g_i^{-1}(u)) \left| \frac{d}{du} g_i^{-1}(u) \right|, & u \in R_U \\ 0, & \text{otherwise}. \end{cases}$$
That is, the pdf of $U$ is obtained by adding up the terms $f_Y(g_i^{-1}(u)) \left| \frac{d}{du} g_i^{-1}(u) \right|$ corresponding to each disjoint set $A_i$, $i = 1, 2, \ldots, k$.

Example 5. Suppose that $Y \sim N(0,1)$; that is, $Y$ has a standard normal distribution with pdf
$$f_Y(y) = \frac{1}{\sqrt{2\pi}} e^{-y^2/2}, \quad -\infty < y < \infty.$$
Consider the transformation $U = g(Y) = Y^2$.

Solution 1 (the pdf technique): This transformation is not one-to-one on $R_Y = \mathbb{R} = \{y : -\infty < y < \infty\}$, but it is one-to-one on $A_1 = (-\infty, 0)$ and on $A_2 = (0, \infty)$ separately, since $g(y) = y^2$ is decreasing on $A_1$ and increasing on $A_2$; we take $A_0 = \{0\}$, where $P(Y \in A_0) = P(Y = 0) = 0$. Note that $A_0$, $A_1$, and $A_2$ partition $R_Y$. Summarizing:

On $A_1 = (-\infty, 0)$: $g(y) = y^2 = u$, with inverse $y = g_1^{-1}(u) = -\sqrt{u}$.
On $A_2 = (0, \infty)$: $g(y) = y^2 = u$, with inverse $y = g_2^{-1}(u) = \sqrt{u}$.

On both sets $A_1$ and $A_2$,
$$\left| \frac{d}{du} g_i^{-1}(u) \right| = \frac{1}{2\sqrt{u}}.$$
Clearly $u = y^2 > 0$; thus $R_U = \{u : u > 0\}$, and the pdf of $U$ is given, for $u > 0$, by
$$f_U(u) = \frac{1}{\sqrt{2\pi}} e^{-(-\sqrt{u})^2/2} \frac{1}{2\sqrt{u}} + \frac{1}{\sqrt{2\pi}} e^{-(\sqrt{u})^2/2} \frac{1}{2\sqrt{u}},$$
and $0$ otherwise. Thus, for $u > 0$, and recalling that $\Gamma(1/2) = \sqrt{\pi}$, $f_U(u)$ collapses to
$$f_U(u) = \frac{2}{\sqrt{2\pi}} e^{-u/2} \frac{1}{2\sqrt{u}} = \frac{1}{\sqrt{2\pi}}\, u^{1/2 - 1} e^{-u/2} = \frac{1}{\Gamma(1/2)\, 2^{1/2}}\, u^{1/2 - 1} e^{-u/2}.$$
Summarizing, the pdf of $U$ is
$$f_U(u) = \begin{cases} \dfrac{1}{\Gamma(1/2)\, 2^{1/2}}\, u^{1/2 - 1} e^{-u/2}, & u > 0 \\ 0, & \text{otherwise}. \end{cases}$$
That is, $U \sim$ gamma($1/2$, $2$). Recall that the gamma($1/2$, $2$) distribution is the same as a $\chi^2$ distribution with 1 degree of freedom; that is, $U \sim \chi^2(1)$. □

Solution 2 (the cdf technique): For $u > 0$,
$$F_U(u) = P(U \le u) = P(Y^2 \le u) = 1 - P(Y^2 > u) = 1 - P(Y > \sqrt{u} \text{ or } Y < -\sqrt{u}) = 1 - P(Y > \sqrt{u}) - P(Y < -\sqrt{u}) = P(Y \le \sqrt{u}) - P(Y \le -\sqrt{u}) = F_Y(\sqrt{u}) - F_Y(-\sqrt{u}).$$
Taking the derivative with respect to $u$ on both sides, we have, for $u > 0$,
$$f_U(u) = f_Y(\sqrt{u}) \frac{d\sqrt{u}}{du} - f_Y(-\sqrt{u}) \frac{d(-\sqrt{u})}{du} = \frac{1}{\sqrt{2\pi}} e^{-u/2} \frac{1}{2\sqrt{u}} + \frac{1}{\sqrt{2\pi}} e^{-u/2} \frac{1}{2\sqrt{u}} = \frac{1}{\sqrt{2\pi}}\, u^{1/2 - 1} e^{-u/2} = \frac{1}{\Gamma(1/2)\, 2^{1/2}}\, u^{1/2 - 1} e^{-u/2}.$$
That is, $U \sim$ gamma($1/2$, $2$), which is again the $\chi^2$ distribution with 1 degree of freedom; $U \sim \chi^2(1)$. □
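A quick simulation check of Example 5 follows; this is a hypothetical sketch of my own (sample size and seed chosen arbitrarily), not part of the handout.

```python
# Hypothetical simulation check of Example 5: if Y ~ N(0,1), then U = Y^2 ~ chi-square(1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
u = rng.standard_normal(100_000) ** 2      # U = Y^2

print("sample mean of U:", u.mean())        # E[chi^2(1)] = 1
print("KS vs chi2(1):           ", stats.kstest(u, "chi2", args=(1,)))
print("KS vs gamma(1/2, scale=2):", stats.kstest(u, "gamma", args=(0.5, 0, 2)))
```

Both Kolmogorov-Smirnov tests compare the same sample against the same distribution written two ways, echoing the equivalence gamma(1/2, 2) = chi-square(1).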
3. The probability density function (pdf) technique, bivariate

Here we discuss transformations involving two random variables $Y_1$ and $Y_2$. The bivariate transformation is
$$U_1 = g_1(Y_1, Y_2), \qquad U_2 = g_2(Y_1, Y_2).$$
Assuming that $Y_1$ and $Y_2$ are jointly continuous random variables, we discuss the one-to-one transformation first. Starting with the joint distribution of $Y = (Y_1, Y_2)$, our goal is to derive the joint distribution of $U = (U_1, U_2)$.

Suppose that $Y = (Y_1, Y_2)$ is a continuous random vector with joint pdf $f_{Y_1, Y_2}(y_1, y_2)$. Let $g : \mathbb{R}^2 \to \mathbb{R}^2$ be a continuous, one-to-one, vector-valued mapping from $R_{Y_1, Y_2}$ to $R_{U_1, U_2}$, where $U_1 = g_1(Y_1, Y_2)$ and $U_2 = g_2(Y_1, Y_2)$, and where $R_{Y_1, Y_2}$ and $R_{U_1, U_2}$ denote the two-dimensional domains of $Y = (Y_1, Y_2)$ and $U = (U_1, U_2)$, respectively. If $g_1^{-1}(u_1, u_2)$ and $g_2^{-1}(u_1, u_2)$ have continuous partial derivatives with respect to both $u_1$ and $u_2$, and the Jacobian
$$J = \det \begin{pmatrix} \dfrac{\partial g_1^{-1}(u_1,u_2)}{\partial u_1} & \dfrac{\partial g_1^{-1}(u_1,u_2)}{\partial u_2} \\[1.5ex] \dfrac{\partial g_2^{-1}(u_1,u_2)}{\partial u_1} & \dfrac{\partial g_2^{-1}(u_1,u_2)}{\partial u_2} \end{pmatrix} \ne 0,$$
with "det" denoting "determinant", then
$$f_{U_1, U_2}(u_1, u_2) = \begin{cases} f_{Y_1, Y_2}\!\left(g_1^{-1}(u_1,u_2),\, g_2^{-1}(u_1,u_2)\right) |J|, & (u_1, u_2) \in R_{U_1, U_2} \\ 0, & \text{otherwise}, \end{cases}$$
where $|J|$ denotes the absolute value of $J$.

RECALL: The determinant of a $2 \times 2$ matrix is
$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$

Steps of the pdf technique:
1. Find $f_{Y_1, Y_2}(y_1, y_2)$, the joint distribution of $Y_1$ and $Y_2$. This may be given in the problem. If $Y_1$ and $Y_2$ are independent, then $f_{Y_1, Y_2}(y_1, y_2) = f_{Y_1}(y_1) f_{Y_2}(y_2)$.
2. Find $R_{U_1, U_2}$, the domain of $U = (U_1, U_2)$.
3. Find the inverse transformations $y_1 = g_1^{-1}(u_1, u_2)$ and $y_2 = g_2^{-1}(u_1, u_2)$.
4. Find the Jacobian, $J$, of the inverse transformation.
5. Use the formula above to find $f_{U_1, U_2}(u_1, u_2)$, the joint distribution of $U_1$ and $U_2$.

NOTE: If desired, the marginal distributions $f_{U_1}(u_1)$ and $f_{U_2}(u_2)$ can be found by integrating the joint distribution $f_{U_1, U_2}(u_1, u_2)$.

Example 6. Suppose that $Y_1 \sim$ gamma($\alpha$, 1), $Y_2 \sim$ gamma($\beta$, 1), and that $Y_1$ and $Y_2$ are independent. Define the transformation
$$U_1 = g_1(Y_1, Y_2) = Y_1 + Y_2, \qquad U_2 = g_2(Y_1, Y_2) = \frac{Y_1}{Y_1 + Y_2}.$$
Find each of the following distributions:
(a) $f_{U_1, U_2}(u_1, u_2)$, the joint distribution of $U_1$ and $U_2$,
(b) $f_{U_1}(u_1)$, the marginal distribution of $U_1$, and
(c) $f_{U_2}(u_2)$, the marginal distribution of $U_2$.

Solutions. (a) Since $Y_1$ and $Y_2$ are independent, the joint distribution of $Y_1$ and $Y_2$ is
$$f_{Y_1, Y_2}(y_1, y_2) = f_{Y_1}(y_1) f_{Y_2}(y_2) = \frac{1}{\Gamma(\alpha)} y_1^{\alpha-1} e^{-y_1} \times \frac{1}{\Gamma(\beta)} y_2^{\beta-1} e^{-y_2} = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, y_1^{\alpha-1} y_2^{\beta-1} e^{-(y_1+y_2)},$$
for $y_1 > 0$, $y_2 > 0$, and $0$ otherwise. Here, $R_{Y_1, Y_2} = \{(y_1, y_2) : y_1 > 0,\ y_2 > 0\}$. By inspection, we see that $u_1 = y_1 + y_2 > 0$, and $u_2 = y_1/(y_1 + y_2)$ must fall between 0 and 1. Thus, the domain of $U = (U_1, U_2)$ is given by
$$R_{U_1, U_2} = \{(u_1, u_2) : u_1 > 0,\ 0 < u_2 < 1\}.$$
The next step is to derive the inverse transformation. It follows that
$$u_1 = y_1 + y_2,\quad u_2 = \frac{y_1}{y_1 + y_2} \iff y_1 = g_1^{-1}(u_1, u_2) = u_1 u_2,\quad y_2 = g_2^{-1}(u_1, u_2) = u_1 - u_1 u_2.$$
The Jacobian is given by
$$J = \det \begin{pmatrix} \partial y_1/\partial u_1 & \partial y_1/\partial u_2 \\ \partial y_2/\partial u_1 & \partial y_2/\partial u_2 \end{pmatrix} = \det \begin{pmatrix} u_2 & u_1 \\ 1 - u_2 & -u_1 \end{pmatrix} = -u_1 u_2 - u_1(1 - u_2) = -u_1.$$
We now write the joint distribution of $U = (U_1, U_2)$. For $u_1 > 0$ and $0 < u_2 < 1$, we have
$$f_{U_1, U_2}(u_1, u_2) = f_{Y_1, Y_2}\!\left(g_1^{-1}(u_1,u_2),\, g_2^{-1}(u_1,u_2)\right) |J| = \frac{1}{\Gamma(\alpha)\Gamma(\beta)} (u_1 u_2)^{\alpha-1} (u_1 - u_1 u_2)^{\beta-1} e^{-[u_1 u_2 + (u_1 - u_1 u_2)]} \times |-u_1| = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1}\, u_2^{\alpha-1} (1-u_2)^{\beta-1}.$$
Note: We see that $U_1$ and $U_2$ are independent, since the domain $R_{U_1, U_2} = \{(u_1, u_2) : u_1 > 0,\ 0 < u_2 < 1\}$ does not constrain $u_1$ by $u_2$ (or vice versa), and since the nonzero part of $f_{U_1, U_2}(u_1, u_2)$ can be factored into the two expressions $h_1(u_1)$ and $h_2(u_2)$, where
$$h_1(u_1) = u_1^{\alpha+\beta-1} e^{-u_1} \quad\text{and}\quad h_2(u_2) = \frac{u_2^{\alpha-1} (1-u_2)^{\beta-1}}{\Gamma(\alpha)\Gamma(\beta)}.$$
(b) To obtain the marginal distribution of $U_1$, we integrate the joint pdf $f_{U_1, U_2}(u_1, u_2)$ over $u_2$. That is, for $u_1 > 0$,
$$f_{U_1}(u_1) = \int_{0}^{1} f_{U_1, U_2}(u_1, u_2)\, du_2 = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1} \int_{0}^{1} u_2^{\alpha-1} (1-u_2)^{\beta-1}\, du_2.$$
The integrand $u_2^{\alpha-1}(1-u_2)^{\beta-1}$ is a beta($\alpha$, $\beta$) kernel, so the integral equals $\Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$, and hence
$$f_{U_1}(u_1) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1} \times \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} = \frac{1}{\Gamma(\alpha+\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1}.$$
Summarizing,
$$f_{U_1}(u_1) = \begin{cases} \dfrac{1}{\Gamma(\alpha+\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1}, & u_1 > 0 \\ 0, & \text{otherwise}. \end{cases}$$
We recognize this as a gamma($\alpha+\beta$, 1) pdf; thus, marginally, $U_1 \sim$ gamma($\alpha+\beta$, 1).

(c) To obtain the marginal distribution of $U_2$, we integrate the joint pdf $f_{U_1, U_2}(u_1, u_2)$ over $u_1$. That is, for $0 < u_2 < 1$,
$$f_{U_2}(u_2) = \int_{0}^{\infty} f_{U_1, U_2}(u_1, u_2)\, du_1 = \frac{u_2^{\alpha-1}(1-u_2)^{\beta-1}}{\Gamma(\alpha)\Gamma(\beta)} \int_{0}^{\infty} u_1^{\alpha+\beta-1} e^{-u_1}\, du_1 = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, u_2^{\alpha-1} (1-u_2)^{\beta-1}.$$
Summarizing,
$$f_{U_2}(u_2) = \begin{cases} \dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, u_2^{\alpha-1} (1-u_2)^{\beta-1}, & 0 < u_2 < 1 \\ 0, & \text{otherwise}. \end{cases}$$
Thus, marginally, $U_2 \sim$ beta($\alpha$, $\beta$). □
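Here is a hypothetical simulation sketch checking the three conclusions of Example 6. It is not part of the handout; the values $\alpha = 3$ and $\beta = 5$, the sample size, and the seed are my own illustrative choices.

```python
# Hypothetical simulation check of Example 6.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, beta, n = 3.0, 5.0, 200_000
y1 = rng.gamma(shape=alpha, scale=1.0, size=n)   # Y1 ~ gamma(alpha, 1)
y2 = rng.gamma(shape=beta, scale=1.0, size=n)    # Y2 ~ gamma(beta, 1)

u1 = y1 + y2             # claimed: gamma(alpha + beta, 1)
u2 = y1 / (y1 + y2)      # claimed: beta(alpha, beta)

print("U1 KS vs gamma(alpha+beta, 1):", stats.kstest(u1, "gamma", args=(alpha + beta,)))
print("U2 KS vs beta(alpha, beta):   ", stats.kstest(u2, "beta", args=(alpha, beta)))
print("corr(U1, U2) (near 0 under independence):", np.corrcoef(u1, u2)[0, 1])
```

The near-zero sample correlation is only a rough check of independence, but it is consistent with the factorization argument above.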
REMARK: Suppose that $Y = (Y_1, Y_2)$ is a continuous random vector with joint pdf $f_{Y_1, Y_2}(y_1, y_2)$, and suppose that we would like to find the distribution of a single random variable $U_1 = g_1(Y_1, Y_2)$. Even though there is no $U_2$ present here, the bivariate transformation technique can still be useful. In this case, we can devise an "extra variable" $U_2 = g_2(Y_1, Y_2)$, perform the bivariate transformation to obtain $f_{U_1, U_2}(u_1, u_2)$, and then find the marginal distribution of $U_1$ by integrating $f_{U_1, U_2}(u_1, u_2)$ over the dummy variable $u_2$. While the choice of $U_2$ is arbitrary, there are certainly bad choices. Stick with something easy; usually $U_2 = g_2(Y_1, Y_2) = Y_2$ does the trick.

Exercise (Homework 3, Question 1): Suppose that $Y_1$ and $Y_2$ are random variables with joint pdf
$$f_{Y_1, Y_2}(y_1, y_2) = \begin{cases} 8 y_1 y_2, & 0 < y_1 < y_2 < 1 \\ 0, & \text{otherwise}. \end{cases}$$
Find the pdf of $U_1 = Y_1 / Y_2$.

More-to-one transformations: What happens if the transformation of $Y$ to $U$ is not one-to-one? In this case, similar to the univariate setting, we can still use the pdf technique, but we have to "break up" the transformation $g : R_Y \to R_U$ into disjoint regions where $g$ is one-to-one.

RESULT: Suppose that $Y = (Y_1, Y_2)$ is a continuous bivariate random vector with pdf $f_{Y_1, Y_2}(y_1, y_2)$ and that $U_1 = g_1(Y_1, Y_2)$, $U_2 = g_2(Y_1, Y_2)$, where the mapping from $Y$ to $U = (U_1, U_2)$ is not necessarily one-to-one (but is continuous) over $R_Y = R_{Y_1, Y_2}$. Suppose that we can partition $R_Y$ into a finite collection of sets, say $A_0, A_1, A_2, \ldots, A_k$, where $P(Y \in A_0) = 0$ and $P(Y \in A_i) > 0$ for all $i \ne 0$, and $f_{Y_1, Y_2}(y_1, y_2)$ is continuous on each $A_i$, $i \ne 0$. Furthermore, suppose that the transformation is one-to-one from each $A_i$ ($i = 1, 2, \ldots, k$) to $B$, where $B$ is the domain of $U = (U_1, U_2)$, so that $(g_{1i}^{-1}(\cdot), g_{2i}^{-1}(\cdot))$ is the one-to-one inverse mapping from $B$ to $A_i$. Let $J_i$ denote the Jacobian computed from the $i$th inverse, $i = 1, 2, \ldots, k$. Then the pdf of $U$ is given by
$$f_{U_1, U_2}(u_1, u_2) = \begin{cases} \displaystyle\sum_{i=1}^{k} f_{Y_1, Y_2}\!\left(g_{1i}^{-1}(u_1,u_2),\, g_{2i}^{-1}(u_1,u_2)\right) |J_i|, & (u_1, u_2) \in B = R_U \\ 0, & \text{otherwise}. \end{cases}$$

Example 7. Suppose that $Y_1 \sim N(0,1)$, $Y_2 \sim N(0,1)$, and that $Y_1$ and $Y_2$ are independent. Define the transformation
$$U_1 = g_1(Y_1, Y_2) = \frac{Y_1}{Y_2}, \qquad U_2 = g_2(Y_1, Y_2) = |Y_2|.$$
Find each of the following distributions:
(a) $f_{U_1, U_2}(u_1, u_2)$, the joint distribution of $U_1$ and $U_2$,
(b) $f_{U_1}(u_1)$, the marginal distribution of $U_1$.

Solutions. (a) Since $Y_1$ and $Y_2$ are independent, the joint distribution of $Y_1$ and $Y_2$ is
$$f_{Y_1, Y_2}(y_1, y_2) = f_{Y_1}(y_1) f_{Y_2}(y_2) = \frac{1}{2\pi}\, e^{-y_1^2/2}\, e^{-y_2^2/2}.$$
Here, $R_{Y_1, Y_2} = \{(y_1, y_2) : -\infty < y_1 < \infty,\ -\infty < y_2 < \infty\}$. The transformation of $Y$ to $U$ is not one-to-one because the points $(y_1, y_2)$ and $(-y_1, -y_2)$ are both mapped to the same $(u_1, u_2)$ point. But if we restrict consideration to either positive or negative values of $y_2$, the transformation is one-to-one. The three sets below form a partition of $R_{Y_1, Y_2}$ as described above:
$$A_1 = \{(y_1, y_2) : y_2 > 0\}, \quad A_2 = \{(y_1, y_2) : y_2 < 0\}, \quad A_0 = \{(y_1, y_2) : y_2 = 0\}.$$
The domain of $U$, $B = \{(u_1, u_2) : -\infty < u_1 < \infty,\ u_2 > 0\}$, is the image of both $A_1$ and $A_2$ under the transformation. The inverse transformations from $B$ to $A_1$ and from $B$ to $A_2$ are given by
$$y_1 = g_{11}^{-1}(u_1, u_2) = u_1 u_2, \quad y_2 = g_{21}^{-1}(u_1, u_2) = u_2,$$
and
$$y_1 = g_{12}^{-1}(u_1, u_2) = -u_1 u_2, \quad y_2 = g_{22}^{-1}(u_1, u_2) = -u_2.$$
The Jacobians from the two inverses satisfy $|J_1| = |J_2| = u_2$. The pdf of $U$ on its domain $B$ is thus
$$f_{U_1, U_2}(u_1, u_2) = \sum_{i=1}^{2} f_{Y_1, Y_2}\!\left(g_{1i}^{-1}(u_1,u_2),\, g_{2i}^{-1}(u_1,u_2)\right) |J_i|.$$
Plugging in, we have
$$f_{U_1, U_2}(u_1, u_2) = \frac{1}{2\pi}\, e^{-(u_1 u_2)^2/2}\, e^{-u_2^2/2}\, u_2 + \frac{1}{2\pi}\, e^{-(-u_1 u_2)^2/2}\, e^{-(-u_2)^2/2}\, u_2.$$
Simplifying,
$$f_{U_1, U_2}(u_1, u_2) = \frac{u_2}{\pi}\, e^{-(u_1^2 + 1) u_2^2 / 2}, \quad -\infty < u_1 < \infty,\ u_2 > 0.$$
(b) To obtain the marginal distribution of $U_1$, we integrate the joint pdf $f_{U_1, U_2}(u_1, u_2)$ over $u_2$. That is,
$$f_{U_1}(u_1) = \int_{0}^{\infty} f_{U_1, U_2}(u_1, u_2)\, du_2 = \frac{1}{\pi (u_1^2 + 1)}, \quad -\infty < u_1 < \infty.$$
Thus, marginally, $U_1$ follows the standard Cauchy distribution. □

REMARK: The transformation method can also be extended to handle $n$-variate transformations. Suppose that $Y_1, Y_2, \ldots, Y_n$ are continuous random variables with joint pdf $f_{\mathbf{Y}}(\mathbf{y})$ and define
$$U_1 = g_1(Y_1, Y_2, \ldots, Y_n),\quad U_2 = g_2(Y_1, Y_2, \ldots, Y_n),\quad \ldots,\quad U_n = g_n(Y_1, Y_2, \ldots, Y_n).$$

Example 8. Given independent random variables $X$ and $Y$, each with a uniform distribution on $(0, 1)$, find the joint pdf of $U$ and $V$ defined by $U = X + Y$, $V = X - Y$, and the marginal pdf of $U$.

The joint pdf of $X$ and $Y$ is
$$f_{X,Y}(x,y) = 1, \quad 0 \le x \le 1,\ 0 \le y \le 1.$$
The inverse transformation, written in terms of observed values, is
$$x = \frac{u+v}{2}, \qquad y = \frac{u-v}{2}.$$
It is clearly one-to-one. The Jacobian is
$$J = \frac{\partial(x,y)}{\partial(u,v)} = \det \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} = -\frac{1}{2}, \quad\text{so } |J| = \frac{1}{2}.$$
We will use $A$ to denote the range space of $(X, Y)$ and $B$ to denote that of $(U, V)$. Note that there are 4 inequalities specifying the ranges of $x$ and $y$, and these give 4 inequalities concerning $u$ and $v$, from which $B$ can be determined. That is,
$$x \ge 0 \Rightarrow u + v \ge 0, \text{ that is, } v \ge -u; \qquad x \le 1 \Rightarrow u + v \le 2, \text{ that is, } v \le 2 - u;$$
$$y \ge 0 \Rightarrow u - v \ge 0, \text{ that is, } v \le u; \qquad y \le 1 \Rightarrow u - v \le 2, \text{ that is, } v \ge u - 2.$$
Drawing the four lines $v = -u$, $v = 2 - u$, $v = u$, and $v = u - 2$ on a graph enables us to see the region specified by the 4 inequalities. Now we have
$$f_{U,V}(u,v) = 1 \times \frac{1}{2} = \frac{1}{2}, \quad \begin{cases} -u \le v \le u, & 0 \le u \le 1 \\ u - 2 \le v \le 2 - u, & 1 \le u \le 2. \end{cases}$$
The importance of having the range space correct is seen when we find the marginal pdf of $U$:
$$f_U(u) = \int_{-\infty}^{\infty} f_{U,V}(u,v)\, dv = \begin{cases} \displaystyle\int_{-u}^{u} \tfrac{1}{2}\, dv = u, & 0 \le u \le 1 \\[1.5ex] \displaystyle\int_{u-2}^{2-u} \tfrac{1}{2}\, dv = 2 - u, & 1 \le u \le 2 \\[1.5ex] 0, & \text{otherwise}; \end{cases}$$
that is, $f_U(u) = u\, I_{(0,1)}(u) + (2-u)\, I_{(1,2)}(u)$, using indicator functions.
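The triangular density just derived for $U = X + Y$ in Example 8 is easy to verify by simulation. The sketch below is hypothetical (sample size, seed, grid, and bin count are my own choices) and simply compares a histogram of simulated sums against the derived pdf.

```python
# Hypothetical simulation check of Example 8: U = X + Y with X, Y ~ U(0,1) independent.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
u = rng.uniform(size=n) + rng.uniform(size=n)

# Derived marginal pdf: f_U(u) = u on (0,1) and 2 - u on (1,2).
grid = np.linspace(0.05, 1.95, 39)
derived = np.where(grid <= 1.0, grid, 2.0 - grid)

hist, edges = np.histogram(u, bins=100, range=(0.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = np.interp(grid, centers, hist)

print("max |empirical - derived| on grid:", np.max(np.abs(empirical - derived)))  # should be small
```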
Example 9. Given that $X$ and $Y$ are independent random variables, each with pdf
$$f_X(x) = \frac{1}{2} e^{-x/2}, \quad x \in (0, \infty),$$
find the distribution of $(X - Y)/2$.

We note that the joint pdf of $X$ and $Y$ is
$$f_{X,Y}(x,y) = \frac{1}{4} e^{-(x+y)/2}, \quad 0 \le x < \infty,\ 0 \le y < \infty.$$
Define $U = (X - Y)/2$. Now we need to introduce a second random variable $V$ which is a function of $X$ and $Y$. We wish to do this in such a way that the resulting bivariate transformation is one-to-one and our actual task of finding the pdf of $U$ is as easy as possible. Our choice for $V$ is, of course, not unique. Let us define $V = Y$. Then the transformation is (using $u, v, x, y$, since we are really dealing with the range spaces here)
$$x = 2u + v, \qquad y = v.$$
From this, we find the Jacobian
$$J = \det \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix} = 2.$$
To determine $B$, the range space of $(U, V)$, we note that
$$x \ge 0 \Rightarrow 2u + v \ge 0, \text{ that is, } v \ge -2u; \qquad x < \infty \Rightarrow 2u + v < \infty;$$
$$y \ge 0 \Rightarrow v \ge 0; \qquad y < \infty \Rightarrow v < \infty.$$
So $B = \{(u, v) : v \ge 0 \text{ and } v \ge -2u\}$; that is, $v \ge 0$ when $u \ge 0$, and $v \ge -2u$ when $u < 0$. Now we have
$$f_{U,V}(u,v) = \frac{1}{4} e^{-(2u + v + v)/2} \times 2 = \frac{1}{2} e^{-(u + v)}, \quad (u,v) \in B.$$
The marginal pdf of $U$ is obtained by integrating $f_{U,V}(u,v)$ with respect to $v$, giving
$$f_U(u) = \begin{cases} \displaystyle\int_{-2u}^{\infty} \tfrac{1}{2} e^{-(u+v)}\, dv = \tfrac{1}{2} e^{u}, & u < 0 \\[1.5ex] \displaystyle\int_{0}^{\infty} \tfrac{1}{2} e^{-(u+v)}\, dv = \tfrac{1}{2} e^{-u}, & u > 0 \end{cases} \;=\; \frac{1}{2} e^{-|u|}, \quad -\infty < u < \infty.$$
[This is sometimes called the folded or double exponential distribution.]
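A closing simulation check of Example 9 follows; this is a hypothetical sketch of my own (sample size and seed are arbitrary), not part of the handout. Note that scipy's "laplace" distribution with location 0 and scale 1 has exactly the pdf $\tfrac{1}{2} e^{-|u|}$ derived above.

```python
# Hypothetical simulation check of Example 9: U = (X - Y)/2 with X, Y ~ exponential(mean 2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 200_000
x = rng.exponential(scale=2.0, size=n)   # pdf (1/2) e^{-x/2}, x > 0
y = rng.exponential(scale=2.0, size=n)
u = (x - y) / 2.0

print("KS test vs Laplace(0, 1):", stats.kstest(u, "laplace"))
```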
Homework #3. Question 1 is the exercise stated earlier in this handout (the joint pdf $f_{Y_1,Y_2}(y_1,y_2) = 8y_1y_2$; find the pdf of $U_1 = Y_1/Y_2$). Questions 2-9 (from our textbook): 2.9, 2.15, 2.24, 2.30, 2.31, 2.32, 2.33, 2.34.

