Introduction to random processes

A process is said to be nth order strictly stationary if, for every choice of times t1, . . . , tn and every n-dimensional set B, the probability

P((X_{t1+Δt}, . . . , X_{tn+Δt}) ∈ B)

does not depend on the time shift Δt. The corresponding condition for discrete-time processes is that

P((X_{1+m}, . . . , X_{n+m}) ∈ B)

not depend on the integer time shift m.

If a process is nth order strictly stationary for every positive, finite integer n, then the process is said to be strictly stationary. Example 10.13.

Let Z be a random variable, and put Xt := Z for all t. Show that Xt is strictly stationary. Solution.

Given any n-dimensional set B,

P((X_{t1+Δt}, . . . , X_{tn+Δt}) ∈ B) = P((Z, . . . , Z) ∈ B),

which does not depend on Δt.

Example 10.14.

Show that an i.i.d. sequence of continuous random variables Xn with common density f is strictly stationary.

Solution. Fix any positive integer n and any n-dimensional set B.

Let m be any integer, positive or negative. Then P((X_{1+m}, . . . , X_{n+m}) ∈ B) is given by

∫_B f(x_{1+m}) · · · f(x_{n+m}) dx_{1+m} · · · dx_{n+m}.   (10.12)

Since x_{1+m}, . . . , x_{n+m} are just dummy variables of integration, we may replace them by x1, . . . , xn. Hence, the above integral is equal to

∫_B f(x1) · · · f(xn) dx1 · · · dxn,

which does not depend on m.

It is instructive to see how the preceding example breaks down if the Xi are independent but not identically distributed. In this case, (10.12) becomes

∫_B f_{X_{1+m}}(x_{1+m}) · · · f_{X_{n+m}}(x_{n+m}) dx_{1+m} · · · dx_{n+m}.

Changing the dummy variables of integration as before, we obtain

∫_B f_{X_{1+m}}(x1) · · · f_{X_{n+m}}(xn) dx1 · · · dxn,

which still depends on m.

Strict stationarity is a strong property with many implications. If a process is first-order strictly stationary, then for any t1 and t1 + Δt, X_{t1} and X_{t1+Δt} have the same pmf or density.

It then follows that for any function g(x),

E[g(X_{t1})] = E[g(X_{t1+Δt})].

Taking Δt = −t1 shows that E[g(X_{t1})] = E[g(X0)], which does not depend on t1.

If a process is second-order strictly stationary, then for any function g(x1, x2), we have

E[g(X_{t1}, X_{t2})] = E[g(X_{t1+Δt}, X_{t2+Δt})]

for every time shift Δt. Since Δt is arbitrary, take Δt = −t2. Then

E[g(X_{t1}, X_{t2})] = E[g(X_{t1−t2}, X0)].

It follows that E[g(X_{t1}, X_{t2})] depends on t1 and t2 only through the time difference t1 − t2.
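These shift-invariance properties can be spot-checked by Monte Carlo. The following sketch (assuming NumPy; the i.i.d. N(0, 1) model, the set B = (−∞, 0] × (−∞, 0], and the particular shifts are illustrative choices, not from the text) estimates P((X_{1+m}, X_{2+m}) ∈ B) at two different shifts m; for an i.i.d. sequence both estimates should be near P(X1 ≤ 0)² = 0.25:

```python
import numpy as np

rng = np.random.default_rng(1)

# Many independent sample paths of an i.i.d. N(0, 1) sequence X_1, X_2, ...
paths = rng.standard_normal((200_000, 60))

# Estimate P((X_{1+m}, X_{2+m}) in B) for B = (-inf, 0] x (-inf, 0]
# at two shifts m; stationarity says the answer cannot depend on m.
estimates = {}
for m in (0, 37):
    block = paths[:, m:m + 2]      # columns hold X_{1+m} and X_{2+m}
    estimates[m] = np.mean((block[:, 0] <= 0) & (block[:, 1] <= 0))
    print(f"m = {m:2d}: estimate {estimates[m]:.3f}")   # both near 0.25
```

With 200,000 paths the standard error of each estimate is about 0.001, so the two printed values agree to within sampling noise.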

Requiring second-order strict stationarity is a strong assumption. In practice, e.g., when analyzing receiver noise in a communication system, it is often enough to require that E[Xt] not depend on t and that the correlation RX(t1, t2) = E[X_{t1} X_{t2}] depend on t1 and t2 only through the time difference t1 − t2. This is a much weaker requirement than second-order strict-sense stationarity for two reasons. First, we are not concerned with probabilities, only expectations.

Second, we are only concerned with E[Xt] and E[X_{t1} X_{t2}] rather than E[g(Xt)] and E[g(X_{t1}, X_{t2})] for arbitrary functions g. Even if you can justify the assumption of first-order strict-sense stationarity, to fully exploit it, say in the discrete-time case, you would have to estimate the density or pmf of Xi. We saw in Chapter 6 how much work it was in the i.i.d. case to estimate fX1(x). For a second-order strictly stationary process, you would have to estimate fX1X2(x1, x2) as well. For a strictly stationary process, imagine trying to estimate n-dimensional densities for all n = 1, 2, 3, . . . , 100, . . . .
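The difficulty scales brutally with n. As a rough back-of-the-envelope sketch (the 20-bins-per-coordinate figure is an illustrative assumption, not from the text), a histogram estimate of an n-dimensional density needs a number of cells that grows exponentially in n, and hence at least that many samples to populate them:

```python
# With b bins per coordinate, an n-dimensional histogram has b**n cells,
# and a density estimate needs enough samples to populate those cells.
b = 20
for n in (1, 2, 5, 10):
    print(f"n = {n:2d}: {b**n:,} cells")
```

Already at n = 5 there are 3,200,000 cells, which is why the text calls estimating n-dimensional densities for all n impractical.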

Wide-sense stationarity

We say that a process is wide-sense stationary (WSS) if the following two properties both hold:

(i) The mean function E[Xt] does not depend on t.

(ii) The correlation function E[X_{t1} X_{t2}] depends on t1 and t2 only through the time difference t1 − t2.

Notation. For a WSS process, E[X_{t+τ} X_t] depends only on the time difference, which is (t + τ) − t = τ.

Hence, for a WSS process, it is convenient to re-use the term correlation function to refer to the univariate function

RX(τ) := E[X_{t+τ} X_t].   (10.13)

Observe that since t in (10.13) is arbitrary, taking t = t2 and τ = t1 − t2 gives the formula

E[X_{t1} X_{t2}] = RX(t1 − t2).   (10.14)

Example 10.15. In Figure 10.8, three correlation functions RX(τ) are shown at the left. At the right is a sample path Xt of a zero-mean process with that correlation function.
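Definition (10.13) and formula (10.14) can be checked numerically on a standard WSS example. The following sketch (assuming NumPy; the random-phase sinusoid X_t = cos(ωt + Θ) with Θ uniform on [0, 2π), for which RX(τ) = cos(ωτ)/2, is a textbook-style illustration and not the process shown in Figure 10.8) estimates E[X_{t+τ} X_t] at two different absolute times t and compares the result with RX(τ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-phase sinusoid: X_t = cos(w*t + Theta), Theta ~ Uniform[0, 2*pi).
# This process is WSS with E[X_t] = 0 and R_X(tau) = cos(w*tau) / 2.
w = 2.0
theta = rng.uniform(0.0, 2.0 * np.pi, size=500_000)

def corr_estimate(t, tau):
    """Monte Carlo estimate of E[X_{t+tau} X_t] over the random phase."""
    return np.mean(np.cos(w * (t + tau) + theta) * np.cos(w * t + theta))

tau = 0.7
for t in (0.0, 3.0):               # two absolute times, same lag tau
    print(f"t = {t}: estimate {corr_estimate(t, tau):+.4f}, "
          f"R_X(tau) = {np.cos(w * tau) / 2:+.4f}")
```

The two estimates agree with each other and with cos(ωτ)/2 to within sampling noise, illustrating that E[X_{t+τ} X_t] depends only on the lag τ, exactly as (10.13)-(10.14) require.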
