Answer:
a) [tex] \hat a = max(X_i)[/tex]
For this case the value of [tex]\hat a = max(X_i)[/tex] is strictly smaller than a with probability 1, since [tex] X_i \sim Unif[0,a][/tex], so [tex] E(\hat a) < a[/tex]. Therefore the estimator cannot be unbiased, because an unbiased estimator satisfies:
[tex] E(\hat a) - a= 0 [/tex] and that's not our case.
b) [tex] E(\hat a) - a= \frac{na}{n+1} - a = \frac{na -an -a}{n+1}= \frac{-a}{n+1}[/tex]
Since the bias is negative, we can conclude that the estimator underestimates the real value a. However, the bias vanishes as n grows:
[tex] \lim_{ n \to\infty} -\frac{a}{n+1}= 0[/tex]
c) [tex] P(Y \leq y) = P(max(X_i) \leq y) = P(X_1 \leq y, X_2 \leq y, ..., X_n\leq y)[/tex]
And assuming independence we have this:
[tex]P(Y \leq y) = P(X_1 \leq y) P(X_2 \leq y) .... P(X_n \leq y) = [P(X_1 \leq y)]^n = (\frac{y}{a})^n[/tex]
[tex] f_Y (y) = n (\frac{y}{a})^{n-1} \cdot \frac{1}{a}= \frac{n}{a^n} y^{n-1} , \quad y \in [0,a][/tex]
e) In this case the estimator [tex] \hat a_2[/tex] is better than [tex] \hat a_1[/tex], because it has the smaller variance:
[tex] V(\hat a_1) > V(\hat a_2) [/tex]
[tex] \frac{a^2}{3n}> \frac{a^2}{n(n+2)}[/tex]
since [tex] n(n+2) = n^2 + 2n > n +2n = 3n [/tex], which holds for every n>1.
Step-by-step explanation:
Part a
For this case we assume [tex] X_1, X_2 , ..., X_n \sim U(0,a)[/tex]
and consider the following estimator:
[tex] \hat a = max(X_i)[/tex]
For this case the value of [tex]\hat a = max(X_i)[/tex] is strictly smaller than a with probability 1, since [tex] X_i \sim Unif[0,a][/tex], so [tex] E(\hat a) < a[/tex]. Therefore the estimator cannot be unbiased, because an unbiased estimator satisfies:
[tex] E(\hat a) - a= 0 [/tex] and that's not our case.
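A quick simulation makes this bias visible. The values a = 10, n = 20 and the number of trials below are assumed for illustration only:

```python
import random

random.seed(1)
a = 10.0      # assumed true parameter, for illustration only
n = 20        # assumed sample size
trials = 20000

# Average of max(X_i) over many simulated samples of size n
est = sum(max(random.uniform(0, a) for _ in range(n))
          for _ in range(trials)) / trials
print(est)    # noticeably below a, close to na/(n+1) = 200/21
```

The simulated mean of the maximum sits clearly below a, consistent with the bias computed in part b.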
Part b
For this case the expected value of the estimator is given by:
[tex] E(\hat a) = \frac{na}{n+1}[/tex]
And using the definition of bias we have this:
[tex] E(\hat a) - a= \frac{na}{n+1} - a = \frac{na -an -a}{n+1}= \frac{-a}{n+1}[/tex]
Since the bias is negative, we can conclude that the estimator underestimates the real value a.
And when we take the limit as n tends to infinity, the bias tends to 0:
[tex] \lim_{ n \to\infty} -\frac{a}{n+1}= 0[/tex]
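This limit can be checked numerically; a = 3 below is just an assumed example value:

```python
# The bias -a/(n+1) shrinks toward 0 as the sample size n grows.
a = 3.0  # assumed example value of the parameter
for n in [1, 10, 100, 1000]:
    print(n, -a / (n + 1))  # bias gets closer and closer to 0
```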
Part c
For this case we define the random variable [tex] Y = max (X_i)[/tex], and we can find its cumulative distribution function like this:
[tex] P(Y \leq y) = P(max(X_i) \leq y) = P(X_1 \leq y, X_2 \leq y, ..., X_n\leq y)[/tex]
And assuming independence we have this:
[tex]P(Y \leq y) = P(X_1 \leq y) P(X_2 \leq y) .... P(X_n \leq y) = [P(X_1 \leq y)]^n = (\frac{y}{a})^n[/tex]
Since all the random variables have the same distribution.
Now we can find the density function by differentiating the distribution function:
[tex] f_Y (y) = n (\frac{y}{a})^{n-1} \cdot \frac{1}{a}= \frac{n}{a^n} y^{n-1} , \quad y \in [0,a][/tex]
Now we can find the expected value of the random variable Y:
[tex] E(Y) = \int_{0}^a \frac{n}{a^n} y^n dy = \frac{n}{a^n} \frac{a^{n+1}}{n+1}= \frac{an}{n+1}[/tex]
And the bias is given by:
[tex]E(Y)-a=\frac{an}{n+1} -a=\frac{an-an-a}{n+1}= -\frac{a}{n+1}[/tex]
And again, since the bias is not 0, we have a biased estimator.
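As a sanity check on the integral above, a simple midpoint rule reproduces E(Y) = an/(n+1); the values a = 2, n = 5 are assumed example values:

```python
# Midpoint-rule approximation of E(Y) = integral of y * f_Y(y) over [0, a],
# with f_Y(y) = (n / a**n) * y**(n-1), using assumed example values.
a, n = 2.0, 5
N = 100_000          # number of subintervals for the midpoint rule
h = a / N
mean = sum(((i + 0.5) * h) * (n / a**n) * ((i + 0.5) * h)**(n - 1)
           for i in range(N)) * h
print(mean, a * n / (n + 1))  # both close to 5/3
```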
Part e
For this case we have two estimators with the following variances:
[tex] V(\hat a_1) = \frac{a^2}{3n}[/tex]
[tex] V(\hat a_2) = \frac{a^2}{n(n+2)}[/tex]
In this case the estimator [tex] \hat a_2[/tex] is better than [tex] \hat a_1[/tex], because it has the smaller variance:
[tex] V(\hat a_1) > V(\hat a_2) [/tex]
[tex] \frac{a^2}{3n}> \frac{a^2}{n(n+2)}[/tex]
since [tex] n(n+2) = n^2 + 2n > n +2n = 3n [/tex], which holds for every n>1.
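The inequality can be confirmed numerically for a few sample sizes; a = 1 is used without loss of generality, since both variances scale with a²:

```python
# Compare V(a_hat_1) = a^2 / (3n) with V(a_hat_2) = a^2 / (n(n+2)).
a = 1.0  # both variances scale by a^2, so a = 1 suffices
for n in [2, 5, 10, 50]:
    v1 = a**2 / (3 * n)
    v2 = a**2 / (n * (n + 2))
    print(n, v1, v2, v1 > v2)  # v1 > v2 for every n > 1
```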