Let X have a beta distribution with parameters α and β. (See Example 5.2-3.)
(a) Show that the mean and variance of X are, respectively,
μ = α/(α + β) and σ² = αβ/[(α + β)²(α + β + 1)].
(b) Show that when α > 1 and β > 1, the distribution of X has a unique mode at x = (α − 1)/(α + β − 2).
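The standard beta mean and variance, μ = α/(α + β) and σ² = αβ/[(α + β)²(α + β + 1)], can be sanity-checked numerically. A minimal sketch using midpoint quadrature; the parameter choice α = 2, β = 3 is an illustrative assumption, not part of the exercise:

```python
import math

# Beta(alpha, beta) density on (0, 1); B is the beta function via gammas.
alpha, beta = 2.0, 3.0
B = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

def pdf(x):
    return x**(alpha - 1) * (1 - x)**(beta - 1) / B

# Midpoint Riemann sums for E[X] and E[X^2] on (0, 1).
n = 100_000
xs = [(i + 0.5) / n for i in range(n)]
mean = sum(x * pdf(x) for x in xs) / n
second = sum(x * x * pdf(x) for x in xs) / n
var = second - mean**2

# Closed-form values the exercise asks you to derive.
mu = alpha / (alpha + beta)
sigma2 = alpha * beta / ((alpha + beta)**2 * (alpha + beta + 1))
print(abs(mean - mu) < 1e-6, abs(var - sigma2) < 1e-6)
```

For Beta(2, 3) the closed forms give μ = 0.4 and σ² = 0.04, and the quadrature agrees to well within the tolerance.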
Chapter 5 Solutions
Probability And Statistical Inference (10th Edition)
- Let X and Y be random variables, and a and b be constants.
  a) Show that Cov[aX, bY] = ab Cov[X, Y].
  b) Show that if a > 0 and b > 0, then the correlation coefficient between aX and bY is the same as the correlation coefficient between X and Y.
  c) Is the correlation coefficient between X and Y unaffected by changes in the units of X and Y?
- Let X1, X2, ..., Xn be a sequence of independent and identically distributed random variables having the Exponential(λ) distribution, λ > 0, with density f_Xi(x) = λe^(−λx) for x > 0 and 0 otherwise. Define the random variable Y = X1 + X2 + ··· + Xn. Find E(Y), Var(Y), and the moment generating function of Y.
- Stock Y has a beta of 1.2 and an expected return of 11.5 percent. Stock Z has a beta of 0.80 and an expected return of 8.5 percent.
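The identity Cov[aX, bY] = ab Cov[X, Y] in part (a) can be checked empirically, since the sample covariance is bilinear in the same way. A sketch in which the simulated distributions and the constants a = 2, b = 3 are illustrative assumptions:

```python
import random

random.seed(0)
n = 200_000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [x + random.gauss(0, 1) for x in X]   # correlated with X by construction

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((p - mu) * (q - mv) for p, q in zip(u, v)) / len(u)

a, b = 2.0, 3.0
lhs = cov([a * x for x in X], [b * y for y in Y])
rhs = a * b * cov(X, Y)
print(abs(lhs - rhs) < 1e-6)   # identity holds up to float rounding
```

The agreement is exact apart from floating-point rounding, because scaling each input by a constant scales every deviation from the mean by the same constant.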
- Consider a real random variable X with zero mean and variance σ²_X. Suppose that we cannot directly observe X, but instead we can observe Y_t := X + W_t, t ∈ [0, T], where T > 0 and {W_t : t ∈ R} is a WSS process with zero mean and correlation function R_W, uncorrelated with X. Further suppose that we use the following linear estimator to estimate X based on {Y_t : t ∈ [0, T]}:
  X̂_T = ∫₀^T h(T − θ) Y_θ dθ,
  i.e., we pass the process {Y_t} through a causal LTI filter with impulse response h and sample the output at time T. We wish to design h to minimize the mean-squared error of the estimate.
  a. Use the orthogonality principle to write down a necessary and sufficient condition for the optimal h. (The condition involves h, T, X, {Y_t : t ∈ [0, T]}, X̂_T, etc.)
  b. Use part a to derive a condition involving the optimal h that has the following form: for all τ ∈ [0, T],
  a = ∫₀^T h(θ)(b + c(τ − θ)) dθ,
  where a and b are constants and c is some function. (You must find a, b, and c in terms of the information…)
- X1 and X2 are two discrete random variables; X1 takes the values x1 = 1, x1 = 2, and x1 = 3, while X2 takes the values x2 = 10, x2 = 20, and x2 = 30. The joint probability mass function pX1,X2(x1, x2) of X1 and X2 is given in the table below.
  a) Find the marginal probability mass function pX1(x1) of X1.
  b) Find the marginal probability mass function pX2(x2) of X2.
  c) Find the expected value of X1.
  d) Find the expected value of X2.
  e) Find the variance of X1.
  f) Find the variance of X2.
  g) Find the conditional probability mass function pX1|X2(x1 | x2 = 10).
  h) Find the conditional probability mass function pX2|X1(x2 | x1 = 2).
  i) Are the random variables X1 and X2 independent? Show it.
The joint probability mass function of the random variables X1 and X2 is below.
- Consider the Gaussian distribution N(m, σ²).
  (a) Show that the pdf integrates to 1.
  (b) Show that the mean is m and the variance is σ².
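What the Gaussian item asserts can be checked numerically: the N(m, σ²) density should have total mass 1, mean m, and variance σ². A midpoint-quadrature sketch, where m = 1.5 and σ = 2.0 are illustrative choices rather than values from the problem:

```python
import math

m, sigma = 1.5, 2.0

def pdf(x):
    return math.exp(-(x - m)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Integrate over [m - 10*sigma, m + 10*sigma]; essentially all mass lies there.
n = 200_000
lo, hi = m - 10 * sigma, m + 10 * sigma
dx = (hi - lo) / n
xs = [lo + (i + 0.5) * dx for i in range(n)]
total = sum(pdf(x) for x in xs) * dx
mean = sum(x * pdf(x) for x in xs) * dx
var = sum((x - mean)**2 * pdf(x) for x in xs) * dx
print(round(total, 6), round(mean, 6), round(var, 6))
```

The truncation error from ignoring the tails beyond ±10σ is on the order of 10⁻²³, so the quadrature recovers 1, m, and σ² to the displayed precision.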
- 3.7. Consider the performance function Y = 3X1 − 2X2, where X1 and X2 are both normally distributed random variables with μX1 = 16.6, σX1 = 2.45, μX2 = 18.8, σX2 = 2.83. The two variables are correlated, and the covariance is equal to 2.0. Determine the probability of failure if failure is defined as the state when Y < 0.
- Consider the following two formulations of the bivariate PRF, where u_i and ε_i are both mean-0 stochastic disturbances (i.e., random errors):
  y_i = β0 + β1 x_i + u_i
  y_i = α0 + α1(x_i − x̄) + ε_i
  a) Write the OLS estimators of β1 and α1. Are the two estimators the same?
  b) What is the advantage, if any, of the second model over the first?
- Let x be a Gaussian random variable with zero mean and variance 1. Find:
  a) The conditional pdf of x given x > 0;
  b) E[x | x > 0];
  c) Var[x | x > 0].
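Assuming the performance-function parameters read as μX1 = 16.6, σX1 = 2.45, μX2 = 18.8, σX2 = 2.83 with Cov(X1, X2) = 2.0 and failure meaning Y < 0 (one reading of the statement, treated here as an assumption), the failure probability follows from the fact that a linear combination of jointly normal variables is itself normal. A sketch:

```python
import math

mu1, s1 = 16.6, 2.45
mu2, s2 = 18.8, 2.83
cov12 = 2.0

# Y = 3*X1 - 2*X2 is normal with:
mu_y = 3 * mu1 - 2 * mu2
# Var(a*X1 + b*X2) = a^2*V1 + b^2*V2 + 2ab*Cov, with a = 3, b = -2.
var_y = 9 * s1**2 + 4 * s2**2 + 2 * 3 * (-2) * cov12

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_fail = norm_cdf((0.0 - mu_y) / math.sqrt(var_y))  # P(Y < 0)
print(round(mu_y, 3), round(var_y, 4), round(p_fail, 4))
```

With these inputs μ_Y = 12.2 and Var(Y) = 62.0581, giving a reliability index of about 1.55 and a failure probability of roughly 6%.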
- An investor has found that company 1 has an expected return of E(X) = 4% and a return variance of V(X) = 0.49. Company 2 has E(Y) = 6% and variance V(Y) = 0.64. The correlation between the companies' returns is ρ(X, Y) = 0.3. The investor wants to invest a fraction p (0 < p < 1) in company 1 and (1 − p) in company 2, so the combined investment has return R = pX + (1 − p)Y. Let p = 0.4, so that R = 0.4X + 0.6Y. Find the expectation and variance of R. How do these results compare with X and Y separately?
- X is an exponential random variable with λ = 1 and Y is a uniform random variable defined on (0, 2). If X and Y are independent, find the PDF of Z = X − Y².
- Let X and Y have the following joint distribution:
  X \ Y |  0     1
  ------+------------
    0   | 0.40  0.10
    1   | 0.10  0.10
    2   | 0.10  0.20
  (a) Find Cov(4 + 2X, 3 − 2Y).
  (b) Let Z = 3X − 2Y + 2. Find E[Z] and σ²_Z.
  (c) Calculate the correlation coefficient between X and Y. What does this suggest about the relationship between X and Y?
  (d) Show that for two nonzero constants a and b, Cov(X + a, Y + b) = Cov(X, Y).
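The portfolio item reduces to the standard rules E(aX + bY) = aE(X) + bE(Y) and Var(aX + bY) = a²V(X) + b²V(Y) + 2ab Cov(X, Y), with Cov(X, Y) recovered from the correlation as ρ·sd(X)·sd(Y). A short sketch with the numbers stated in the problem:

```python
import math

e_x, v_x = 4.0, 0.49   # company 1: E(X) in percent, V(X)
e_y, v_y = 6.0, 0.64   # company 2: E(Y) in percent, V(Y)
rho, p = 0.3, 0.4

cov_xy = rho * math.sqrt(v_x) * math.sqrt(v_y)  # 0.3 * 0.7 * 0.8 = 0.168
e_r = p * e_x + (1 - p) * e_y                   # linearity of expectation
v_r = p**2 * v_x + (1 - p)**2 * v_y + 2 * p * (1 - p) * cov_xy
print(round(e_r, 4), round(v_r, 5))
```

This gives E(R) = 5.2% and Var(R) = 0.38944, which is below both V(X) = 0.49 and V(Y) = 0.64: the diversification effect the comparison in the question is driving at.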