
Question
*9.93 Let Y₁, Y₂, ..., Yₙ be a random sample from a population with density function

f(y | θ) = 2θ²/y³,  θ < y < ∞,
f(y | θ) = 0,  elsewhere.

In Exercise 9.53, you showed that Y(1) = min(Y₁, Y₂, ..., Yₙ) is sufficient for θ.

a Find the MLE for θ. [Hint: See Example 9.16.]
b Find a function of the MLE in part (a) that is a pivotal quantity.
c Use the pivotal quantity from part (b) to find a 100(1 − α)% confidence interval for θ.
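Before the textbook excerpt below, here is a minimal simulation sketch (our addition, not the blurred expert solution) that can be used to sanity-check part (b). It assumes NumPy and uses inverse-CDF sampling: for this density, F(y) = 1 − θ²/y² on θ < y < ∞, so Y = θ/√U with U uniform on (0, 1). If Y(1)/θ is pivotal, its empirical quantiles should not depend on which θ generated the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_samples(theta, n, reps):
    # Inverse-CDF sampling: F(y) = 1 - theta**2 / y**2 for y > theta,
    # so Y = theta / sqrt(U) with U ~ Uniform(0, 1).
    u = rng.uniform(size=(reps, n))
    return theta / np.sqrt(u)

n, reps = 5, 200_000
for theta in (1.0, 7.0):
    y_min = draw_samples(theta, n, reps).min(axis=1)  # Y_(1) = min(Y_1, ..., Y_n)
    pivot = y_min / theta                             # candidate pivotal quantity
    print(theta, np.quantile(pivot, [0.05, 0.5, 0.95]).round(3))
# The two printed quantile rows should agree up to simulation noise,
# consistent with P(Y_(1)/theta > t) = t**(-2n) for t >= 1, free of theta.
```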
EXAMPLE 9.16 Let Y₁, ..., Yₙ be a random sample of observations from a uniform distribution with probability density function f(yᵢ | θ) = 1/θ, for 0 ≤ yᵢ ≤ θ and i = 1, 2, ..., n. Find the MLE of θ.

Solution In this case, the likelihood is given by

L(θ) = f(y₁, y₂, ..., yₙ | θ) = f(y₁ | θ) × f(y₂ | θ) × ⋯ × f(yₙ | θ)
     = 1/θⁿ,  if 0 ≤ yᵢ ≤ θ, i = 1, 2, ..., n,
     = 0,  otherwise.
Obviously, L(θ) is not maximized when L(θ) = 0. You will notice that 1/θⁿ is a monotonically decreasing function of θ. Hence, nowhere in the interval 0 < θ < ∞ is d[1/θⁿ]/dθ equal to zero. However, 1/θⁿ increases as θ decreases, and 1/θⁿ is maximized by selecting θ to be as small as possible, subject to the constraint that all of the y values are between zero and θ. The smallest value of θ that satisfies this constraint is the maximum observation in the set y₁, y₂, ..., yₙ. That is, θ̂ = Y(n) = max(Y₁, Y₂, ..., Yₙ) is the MLE for θ. This MLE for θ is not an unbiased estimator of θ, but it can be adjusted to be unbiased, as shown in Example 9.1.
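A quick numerical sketch of this example (our addition, assuming NumPy): the MLE max(Y₁, ..., Yₙ) underestimates θ on average, and since E[Y(n)] = nθ/(n + 1) for the uniform case, multiplying by (n + 1)/n gives the unbiased adjustment mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 4.0, 10, 100_000

y = rng.uniform(0.0, theta, size=(reps, n))
mle = y.max(axis=1)               # theta_hat = Y_(n) = max(Y_1, ..., Y_n)
adjusted = (n + 1) / n * mle      # unbiased: E[Y_(n)] = n * theta / (n + 1)

print(mle.mean().round(3))        # about n/(n+1) * theta = 3.636 (biased low)
print(adjusted.mean().round(3))   # about theta = 4.0
```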
We have seen that sufficient statistics that best summarize the data have desirable
properties and often can be used to find an MVUE for parameters of interest. If U
is any sufficient statistic for the estimation of a parameter θ, including the sufficient
statistic obtained from the optimal use of the factorization criterion, the MLE is
always some function of U. That is, the MLE depends on the sample observations
only through the value of a sufficient statistic. To show this, we need only observe
that if U is a sufficient statistic for θ, the factorization criterion (Theorem 9.4) implies that the likelihood can be factored as

L(θ) = L(y₁, y₂, ..., yₙ | θ) = g(u, θ) h(y₁, y₂, ..., yₙ),

where g(u, θ) is a function of only u and θ and h(y₁, y₂, ..., yₙ) does not depend on θ. Therefore, it follows that

ln[L(θ)] = ln[g(u, θ)] + ln[h(y₁, y₂, ..., yₙ)].

Notice that ln[h(y₁, y₂, ..., yₙ)] does not depend on θ, and therefore maximizing ln[L(θ)] relative to θ is equivalent to maximizing ln[g(u, θ)] relative to θ. Because ln[g(u, θ)] depends on the data only through the value of the sufficient statistic U, the MLE for θ is always some function of U. Consequently, if an MLE for a parameter can be found and then adjusted to be unbiased, the resulting estimator often is an MVUE of the parameter in question.
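As a concrete illustration (our addition, not part of the original excerpt): in Example 9.16, with u = max(y₁, ..., yₙ) and I(·) denoting an indicator function, the uniform likelihood factors as

L(θ) = θ⁻ⁿ I(u ≤ θ) × I(min(y₁, ..., yₙ) ≥ 0) = g(u, θ) h(y₁, ..., yₙ),

so the likelihood depends on the data only through u, and the MLE Y(n) is a function of the sufficient statistic alone, exactly as the argument above requires.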
MLEs have some additional properties that make this method of estimation particularly attractive. In Example 9.9, we considered estimation of θ², a function of the parameter θ. Functions of other parameters may also be of interest. For example, the variance of a binomial random variable is np(1 − p), a function of the parameter p. If Y has a Poisson distribution with mean λ, it follows that P(Y = 0) = e^(−λ); we may wish to estimate this function of λ. Generally, if θ is the parameter associated with a distribution, we are sometimes interested in estimating some function of θ, say t(θ), rather than θ itself. In Exercise 9.94, you will prove that if t(θ) is a one-to-one function of θ and θ̂ is the MLE for θ, then the MLE of t(θ) is given by t(θ̂). This result, sometimes referred to as the invariance property of MLEs, also holds for any function of a parameter of interest (not just one-to-one functions). See Casella and Berger (2002) for details.
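To make the invariance property concrete, here is a small NumPy sketch (our addition; it relies on the standard fact that the MLE of a Poisson mean λ is the sample mean ȳ). By invariance, the MLE of t(λ) = P(Y = 0) = e^(−λ) is simply e^(−ȳ).

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 2.0
y = rng.poisson(lam, size=10_000)

lam_mle = y.mean()           # MLE of lambda is the sample mean
p0_mle = np.exp(-lam_mle)    # by invariance, MLE of P(Y = 0) = e^(-lambda)

print(round(p0_mle, 4))            # MLE of the zero probability
print(round((y == 0).mean(), 4))   # empirical share of zeros, for comparison
print(round(np.exp(-lam), 4))      # true value, e^(-2) ≈ 0.1353
```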