Let
(a) If X and Y have the same distribution, what is n?
(b) For the value of n determined in part (a), find
Chapter 5 Solutions
Probability And Statistical Inference (10th Edition)
- Let X₁, X₂, ..., Xₙ denote a random sample from a distribution that is N(0, θ), where the variance θ is an unknown positive number. Show that there exists a uniformly most powerful test of size α for testing the simple hypothesis H₀ : θ = θ′, where θ′ is a fixed positive number.
- Let X₁ and X₂ be independent chi-square random variables with r₁ and r₂ degrees of freedom, respectively. Let Y₁ = (X₁/r₁)/(X₂/r₂) and Y₂ = X₂. (a) Find the joint pdf of Y₁ and Y₂.
- Consider a random variable Y with PMF Pr(Y = k) = pq^(k−1), k = 1, 2, 3, ..., and compute E(2Y).
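A worked sketch for the last question above, under two assumptions the problem leaves implicit: that q = 1 − p (so the PMF is geometric) and that E(2Y) is read literally as 2E(Y):

$E(Y) = \sum_{k=1}^{\infty} k\,p\,q^{k-1} = \frac{p}{(1-q)^2} = \frac{1}{p}, \qquad E(2Y) = 2\,E(Y) = \frac{2}{p}.$

The series is evaluated by differentiating the geometric series $\sum_{k \ge 0} q^k = (1-q)^{-1}$ with respect to q.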
- Let X₁, X₂, X₃, ... be a sequence of independent Poisson distributed random variables with parameter 1. For n ≥ 1 let Sₙ = X₁ + ··· + Xₙ. (a) Show that $G_{X_i}(s) = e^{s-1}$. (b) Deduce from part (a) that $G_{S_n}(s) = e^{ns-n}$.
- Consider a real random variable X with zero mean and variance $\sigma_X^2$. Suppose that we cannot directly observe X, but instead we can observe $Y_t := X + W_t$, $t \in [0, T]$, where T > 0 and $\{W_t : t \in \mathbb{R}\}$ is a WSS process with zero mean and correlation function $R_W$, uncorrelated with X. Further suppose that we use the following linear estimator to estimate X based on $\{Y_t : t \in [0, T]\}$:
  $\hat{X}_T = \int_0^T h(T - \theta)\, Y_\theta \, d\theta,$
  i.e., we pass the process $\{Y_t\}$ through a causal LTI filter with impulse response h and sample the output at time T. We wish to design h to minimize the mean-squared error of the estimate.
  a. Use the orthogonality principle to write down a necessary and sufficient condition for the optimal h. (The condition involves h, T, X, $\{Y_t : t \in [0, T]\}$, $\hat{X}_T$, etc.)
  b. Use part a to derive a condition involving the optimal h that has the following form: for all $\tau \in [0, T]$,
  $a = \int_0^T h(\theta)\bigl(b + c(\tau - \theta)\bigr)\, d\theta,$
  where a and b are constants and c is some function. (You must find a, b, and c in terms of the information…
- Let X₁, X₂, ..., Xₙ be an i.i.d. random sample from a Beta distribution with density
  $f(x; \theta) = \frac{\Gamma(2\theta)}{\Gamma(\theta)^2}\, x^{\theta-1} (1 - x)^{\theta-1}, \quad 0 < x < 1,\ \theta > 0.$
  Find a sufficient statistic.
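A worked sketch for parts (a) and (b) of the first (Poisson) question above, using the probability generating function $G_X(s) = E[s^X]$:

$G_{X_i}(s) = \sum_{k=0}^{\infty} s^k\, \frac{e^{-1}}{k!} = e^{-1} \sum_{k=0}^{\infty} \frac{s^k}{k!} = e^{-1} e^{s} = e^{s-1},$

and since the Xᵢ are independent, the PGF of the sum factors:

$G_{S_n}(s) = \prod_{i=1}^{n} G_{X_i}(s) = \left(e^{s-1}\right)^n = e^{ns-n}.$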
- Let X₁ and X₂ be observations of a random sample of size n = 2 from a Cauchy distribution. Find P(X₁ < −1 and 1 < X₂).
- Let X₁, ..., Xₙ be a random sample of size n from an infinite population, and assume $X_1 \stackrel{d}{=} a + bU^2$ with the constants a > 0 and b > 0 unknown and U a standard uniform random variable, i.e.,
  $F_U(x) := P(U \le x) = \begin{cases} 0 & \text{if } x \le 0, \\ x & \text{if } 0 < x < 1, \\ 1 & \text{if } x \ge 1. \end{cases}$
  1. Compute the cdf of the random variable X₁.
  2. Compute E(X₁) and Var(X₁).
  3. Give the method of moments estimators of the unknown parameters a and b. Explain how you construct these estimators!
- Let X₁, ..., Xₙ be independent random variables such that Xᵢ ∼ Poiss(λᵢ) for i = 1, ..., n. Find the distribution of Y = X₁ + ··· + Xₙ.
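A sketch of the moment computations behind parts 2 and 3 of the middle question above (part 3 then matches these to the sample mean $\bar{X}$ and, as one common convention, the sample variance $\hat{\sigma}^2 = \frac{1}{n}\sum_i (X_i - \bar{X})^2$):

Since $E(U^2) = 1/3$ and $E(U^4) = 1/5$,

$E(X_1) = a + \frac{b}{3}, \qquad \mathrm{Var}(X_1) = b^2\left(E(U^4) - E(U^2)^2\right) = b^2\left(\frac{1}{5} - \frac{1}{9}\right) = \frac{4b^2}{45}.$

Equating these to $\bar{X}$ and $\hat{\sigma}^2$ gives $\hat{b} = \frac{3\sqrt{5}}{2}\,\hat{\sigma}$ and $\hat{a} = \bar{X} - \hat{b}/3$.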
- Suppose we have jointly distributed random variables X and Y with E(X) = 3, E(Y) = 5, Var(X) = 1, Var(Y) = 3, and Cov(X, Y) = 2. Determine: a) E(2X + Y); b) Var(6X).
- Let X and Y be random variables such that the mean and variance of X are 2 and 4, respectively, while the mean and variance of Y are 6 and k, respectively. A sample of size 4 is taken from the X-distribution and a sample of size 9 is taken from the Y-distribution. If $P(\bar{X} - \bar{Y} > 8) = 0.0228$, what is the value of the constant k?
- Let X and Y be two Bernoulli random variables and denote p = P(X = 1), q = P(Y = 1), and r = P(X = 1, Y = 1). Let (X₁, Y₁), ..., (Xₙ, Yₙ) be a sample of n i.i.d. copies of (X, Y). Based on this sample, we want to test whether X and Y are independent, i.e., whether r = pq.
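Quick worked sketches for the first two questions above. For the first, by linearity of expectation and the scaling rule for variance:

$E(2X + Y) = 2E(X) + E(Y) = 2(3) + 5 = 11, \qquad \mathrm{Var}(6X) = 6^2\,\mathrm{Var}(X) = 36(1) = 36.$

For the second, assuming the two sample means are independent and $\bar{X} - \bar{Y}$ is (at least approximately) normal:

$E(\bar{X} - \bar{Y}) = 2 - 6 = -4, \qquad \mathrm{Var}(\bar{X} - \bar{Y}) = \frac{4}{4} + \frac{k}{9} = 1 + \frac{k}{9}.$

Since $P(Z > 2) \approx 0.0228$ for a standard normal Z, we need $\frac{8 - (-4)}{\sqrt{1 + k/9}} = 2$, so $\sqrt{1 + k/9} = 6$, giving $1 + k/9 = 36$ and $k = 315$.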
- Algebra & Trigonometry with Analytic GeometryAlgebraISBN:9781133382119Author:SwokowskiPublisher:Cengage