The strength X of a certain material is such that its distribution is found by
Chapter 3 Solutions
Probability And Statistical Inference (10th Edition)
- Let X denote the reaction time, in seconds, to a certain stimulus and Y denote the temperature (°F) at which a certain reaction starts to take place. Suppose that the two random variables X and Y have the joint density
- If X has a standard normal distribution (X ~ N(0, 1)), then the characteristic function of X is φ_X(t) = e^(−t²/2). Using this fact, show that X has skewness 0 and kurtosis 3.
- Let X and Y be a pair of continuous random variables with joint density f_{X,Y}(x, y) = cxy for x ≥ 0, y ≥ 0, and x + y ≤ 1, where c is a constant, and f_{X,Y}(x, y) = 0 elsewhere. What is the constant c equal to? With that value of c, what is E[XY]?
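For the question with joint density cxy on the triangle, the normalization and moment integrals work out analytically to c = 24 and E[XY] = 2/15. A quick numeric sanity check (not part of the original exercise) via a midpoint-rule double integral:

```python
# Numeric check for f(x, y) = c*x*y on the triangle x >= 0, y >= 0, x + y <= 1.
# Analytically c = 24 and E[XY] = 2/15; the midpoint sums should land close.
n = 2000
h = 1.0 / n
mass = 0.0    # approximates the integral of x*y over the triangle
moment = 0.0  # approximates the integral of (x*y)**2 over the triangle
for i in range(n):
    x = (i + 0.5) * h
    rows = int((1.0 - x) / h)  # y-cells below the line y = 1 - x
    for j in range(rows):
        y = (j + 0.5) * h
        mass += x * y * h * h
        moment += (x * y) ** 2 * h * h
c = 1.0 / mass       # should be close to 24
e_xy = c * moment    # should be close to 2/15 ≈ 0.1333
print(c, e_xy)
```

The grid slightly clips the hypotenuse, so the sums undershoot by a fraction of a percent; refining n shrinks that error.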
- For any continuous random variables X, Y, Z and any constants a, b, show the following from the definition of the covariance:
- Let X₁, ..., Xₙ be a random sample from a population with θ unknown and density f(x; θ) = (1/(2θ)) √(2/x) e^(−√(2x)/θ) if x > 0, and 0 if x ≤ 0.
  1. Show that E(X) = θ² and E(√(2X)) = θ. (Hint: you may use that ∫₀^∞ e^(−z) z^(α−1) dz = (α − 1)! for every α ∈ ℕ.)
  2. Show that the statistic θ̂ₙ := (1/n) Σᵢ₌₁ⁿ √(2Xᵢ)   (1) is an unbiased estimator of θ.
  3. Give the definition of a consistent estimator.
  4. Show that the estimator θ̂ₙ given in relation (1) is a consistent estimator of θ.
  5. Show that the estimator θ̂ₙ is a minimum-variance estimator of θ. (Hint: use the Cramér–Rao inequality var(θ̂) ≥ 1 / (n E[(∂ ln f(X; θ)/∂θ)²]).)
- Suppose that the random variables X, Y, and Z have the joint probability density function f(x, y, z) = 8xyz for 0 < x < 1, 0 < y < 1, and 0 < z < 1. Determine P(X < 0.7).
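For the random-sample question, a Monte Carlo sketch can check the estimator numerically. It assumes the reconstructed density above, under which Z = √(2X) is exponential with mean θ, so X can be simulated as Z²/2; the estimator θ̂ₙ = (1/n) Σ √(2Xᵢ) should then be close to θ for large n:

```python
import random
import statistics

# Monte Carlo sketch under the reconstructed density: simulate
# Z ~ Exp(mean theta), set X = Z**2 / 2, and check that
# theta_hat = (1/n) * sum(sqrt(2*X_i)) is near theta (unbiasedness)
# and that the sample mean of X is near theta**2.
random.seed(0)
theta = 2.0
n = 200_000
xs = [random.expovariate(1.0 / theta) ** 2 / 2.0 for _ in range(n)]
theta_hat = statistics.fmean((2.0 * x) ** 0.5 for x in xs)
mean_x = statistics.fmean(xs)
print(theta_hat)  # close to theta = 2
print(mean_x)     # close to theta**2 = 4
```

This only demonstrates the claims empirically; the exercise itself asks for the analytic proofs via the gamma-integral hint and the Cramér–Rao bound.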
- If the probability density of X is given by f(x) = kx³(1 + 2x)⁻⁶ for x > 0 and 0 elsewhere, where k is an appropriate constant, find the probability density of the random variable Y = 2X/(1 + 2X). Identify the distribution of Y, and thus determine the value of k.
- Let X be a random variable with pdf f(x) = 4x³ if 0 < x < 1 and zero otherwise. Use the cumulative-distribution-function (CDF) technique to determine the pdf of each of the following random variables: 1) Y = X⁴, 2) W = e^(−X), 3) Z = 1 − e^(−X), 4) U = X(1 − X).
- Consider a real random variable X with zero mean and variance σ²_X. Suppose that we cannot directly observe X, but instead can observe Y_t := X + W_t, t ∈ [0, T], where T > 0 and {W_t : t ∈ ℝ} is a WSS process with zero mean and correlation function R_W, uncorrelated with X. Further suppose that we use the following linear estimator of X based on {Y_t : t ∈ [0, T]}:
  X̂_T = ∫₀^T h(T − θ) Y_θ dθ,
  i.e., we pass the process {Y_t} through a causal LTI filter with impulse response h and sample the output at time T. We wish to design h to minimize the mean-squared error of the estimate.
  a. Use the orthogonality principle to write down a necessary and sufficient condition for the optimal h. (The condition involves h, T, X, {Y_t : t ∈ [0, T]}, X̂_T, etc.)
  b. Use part a to derive a condition involving the optimal h that has the following form: for all τ ∈ [0, T],
  a = ∫₀^T h(θ) (b + c(τ − θ)) dθ,
  where a and b are constants and c is some function. (You must find a, b, and c in terms of the information…
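For part 1 of the CDF-technique question, the argument is short: with f(x) = 4x³ on (0, 1), F_X(x) = x⁴, so P(Y ≤ y) = P(X ≤ y^(1/4)) = y, i.e. Y = X⁴ is Uniform(0, 1). A small simulation (an illustration, not part of the exercise) confirms this, using inverse-CDF sampling X = U^(1/4):

```python
import random

# Check that Y = X**4 is Uniform(0, 1) when X has pdf 4*x**3 on (0, 1).
# Inverse-CDF sampling: F_X(x) = x**4, so X = U**(1/4) for U ~ Uniform(0, 1).
random.seed(1)
n = 100_000
xs = [random.random() ** 0.25 for _ in range(n)]
ys = [x ** 4 for x in xs]
frac = sum(y <= 0.3 for y in ys) / n  # empirical P(Y <= 0.3), ~0.3
mean_y = sum(ys) / n                  # ~0.5 if Y is Uniform(0, 1)
print(frac, mean_y)
```

The same pattern (compute F_X, substitute, differentiate) handles W, Z, and U, with U = X(1 − X) requiring care because the map is not monotone on (0, 1).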