1. An estimator is said to be an unbiased estimator of a parameter if:
unbiased estimator
Easy
A. Its expected value is equal to the true parameter value.
B. The sample size is large.
C.
D.
Correct Answer: Its expected value is equal to the true parameter value.
Explanation:
An estimator is unbiased if its expected value (the average of its values over all possible samples) is equal to the true population parameter it is trying to estimate.
2. If the expected value of an estimator is not equal to the true parameter value, the difference is called the:
unbiased estimator
Easy
A. Efficiency
B. Variance
C. Bias
D. Standard Error
Correct Answer: Bias
Explanation:
The bias of an estimator is defined as the difference between its expected value and the true value of the parameter being estimated. An unbiased estimator has a bias of zero.
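For reference, the definition used in these two questions, written out for an estimator $\hat{\theta}$ of a parameter $\theta$:

$$\operatorname{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta, \qquad \hat{\theta}\ \text{is unbiased} \iff E(\hat{\theta}) = \theta \iff \operatorname{Bias}(\hat{\theta}) = 0.$$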
3. For a random sample from a population with mean $\mu$, the sample mean $\bar{X}$ is:
unbiased estimator
Easy
A. A biased estimator of $\mu$
B. An unbiased estimator of $\mu$
C. A consistent estimator of the sample size
D. Always equal to $\mu$
Correct Answer: An unbiased estimator of $\mu$
Explanation:
The expected value of the sample mean, $E(\bar{X})$, is equal to the population mean $\mu$. Therefore, the sample mean is an unbiased estimator for the population mean.
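The one-line derivation behind this fact uses only linearity of expectation:

$$E(\bar{X}) = E\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{1}{n}\cdot n\mu = \mu.$$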
4. The concept of unbiasedness focuses on an estimator's:
unbiased estimator
Easy
A. Average behavior over many repeated samples
B. Behavior as the sample size grows infinitely large
C. Accuracy in a single sample
D. Variance compared to other estimators
Correct Answer: Average behavior over many repeated samples
Explanation:
Unbiasedness is a property of the sampling distribution of an estimator. It means that, on average, the estimator will hit the true parameter value if we were to take many, many random samples.
5. If an estimator $\hat{\theta}$ for a parameter $\theta$ has an expected value $E(\hat{\theta}) > \theta$, what can be said about this estimator?
unbiased estimator
Easy
A. It is unbiased.
B. It is positively biased.
C. It is efficient.
D. It is negatively biased.
Correct Answer: It is positively biased.
Explanation:
The bias is $E(\hat{\theta}) - \theta > 0$. Since the bias is positive, the estimator is positively biased, meaning it tends to overestimate the true parameter on average.
6. What is the defining characteristic of a consistent estimator?
consistent estimator
Easy
A. It converges to the true parameter value as the sample size increases.
B. Its variance is the smallest possible.
C. Its expected value equals the true parameter.
D. It is easy to calculate.
Correct Answer: It converges to the true parameter value as the sample size increases.
Explanation:
A consistent estimator is one that gets closer and closer to the true value of the parameter as the sample size ($n$) grows larger.
7. Consistency is a property that describes an estimator's behavior:
consistent estimator
Easy
A. in the limit as the sample size approaches infinity.
B. for small sample sizes.
C. for a single, specific sample.
D. only when it is also unbiased.
Correct Answer: in the limit as the sample size approaches infinity.
Explanation:
Consistency is an asymptotic property, meaning it describes how the estimator behaves as the sample size becomes very large (approaches infinity).
8. If an estimator is consistent, what generally happens to its variance as the sample size increases?
consistent estimator
Easy
A. It approaches zero.
B. It stays the same.
C. It increases.
D. It becomes equal to the parameter.
Correct Answer: It approaches zero.
Explanation:
For an estimator to converge to a single value (the true parameter), its sampling distribution must become more concentrated around that value. This means its variance must shrink towards zero as $n \to \infty$.
9. The Law of Large Numbers provides the theoretical basis for why the sample mean is a(n):
consistent estimator
Easy
A. efficient estimator.
B. biased estimator.
C. consistent estimator.
D. maximum likelihood estimator.
Correct Answer: consistent estimator.
Explanation:
The Law of Large Numbers states that the average of the results obtained from a large number of trials should be close to the expected value, which is the definition of consistency for the sample mean.
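Stated formally (in its weak form), the law says that for every $\epsilon > 0$,

$$\lim_{n \to \infty} P\!\left(\left|\bar{X}_n - \mu\right| > \epsilon\right) = 0,$$

which is exactly the statement that $\bar{X}_n$ converges in probability to $\mu$, i.e., that the sample mean is a consistent estimator of the population mean.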
10. Which of the following is the most important factor for an estimator to be consistent?
consistent estimator
Easy
A. The estimator's formula.
B. The sample size.
C. The value of the true parameter.
D. The population distribution.
Correct Answer: The sample size.
Explanation:
Consistency is fundamentally about what happens to an estimator as the sample size increases. A larger sample size is what drives the estimator closer to the true parameter value.
11. When comparing two unbiased estimators for the same parameter, the more efficient estimator is the one with the:
efficient estimator
Easy
A. larger variance.
B. larger bias.
C. simpler formula.
D. smaller variance.
Correct Answer: smaller variance.
Explanation:
Efficiency relates to the precision of an estimator. For unbiased estimators, lower variance means the estimates are, on average, closer to the true parameter value, making the estimator more precise or efficient.
12. The concept of efficiency is primarily concerned with an estimator's:
efficient estimator
Easy
A. computational complexity.
B. bias.
C. consistency.
D. variance.
Correct Answer: variance.
Explanation:
Efficiency is a measure of an estimator's quality based on its variance. A more efficient estimator has less variability in its estimates from sample to sample.
13. What does MVUE stand for?
efficient estimator
Easy
Correct Answer: Minimum Variance Unbiased Estimator
Explanation:
MVUE stands for Minimum Variance Unbiased Estimator. It is an estimator that has the lowest possible variance among the group of all unbiased estimators for a given parameter.
14. Estimator A and Estimator B are both unbiased estimators of the same parameter, and Estimator A has the smaller variance for every sample size $n$. Which is more efficient?
efficient estimator
Easy
A. Cannot be determined.
B. They are equally efficient.
C. Estimator A
D. Estimator B
Correct Answer: Estimator A
Explanation:
Since both estimators are unbiased, the one with the smaller variance is more efficient. Because Estimator A's variance is smaller for any positive sample size $n$, Estimator A is more efficient.
15. A 'good' point estimator is often considered to be one that is:
efficient estimator
Easy
A. biased and has low variance.
B. biased and has high variance.
C. unbiased and has low variance.
D. unbiased and has high variance.
Correct Answer: unbiased and has low variance.
Explanation:
Ideally, we want an estimator that is accurate on average (unbiased) and precise (has low variance). This combination makes it a 'good' estimator in many contexts.
16. The principle of maximum likelihood estimation is to choose the parameter value that:
maximum likelihood estimation
Easy
A. makes the parameter equal to the sample mean.
B. maximizes the probability (or likelihood) of the observed data.
C. minimizes the probability of the observed data.
D. has the smallest possible variance.
Correct Answer: maximizes the probability (or likelihood) of the observed data.
Explanation:
Maximum Likelihood Estimation (MLE) works by finding the value of the parameter(s) that makes the observed sample data most likely to have occurred.
17. In MLE, the likelihood function is treated as a function of:
maximum likelihood estimation
Easy
A. the sample size $n$.
B. the sample data $x_1, \dots, x_n$, for a fixed parameter $\theta$.
C. a random variable.
D. the parameter $\theta$, for the fixed observed data $x_1, \dots, x_n$.
Correct Answer: the parameter $\theta$, for the fixed observed data $x_1, \dots, x_n$.
Explanation:
The likelihood function flips the perspective of the joint probability density function. It treats the observed data as fixed and asks how probable those data are for each candidate value of the parameter.
18. Why is it often easier to work with the log-likelihood function instead of the likelihood function itself?
maximum likelihood estimation
Easy
A. The log-likelihood is always positive.
B. The likelihood function cannot be maximized.
C. The log-likelihood function does not require differentiation.
D. The logarithm is a monotonic transformation, so the maximum occurs at the same parameter value.
Correct Answer: The logarithm is a monotonic transformation, so the maximum occurs at the same parameter value.
Explanation:
The natural logarithm is a monotonically increasing function, so the parameter value that maximizes the likelihood also maximizes the log-likelihood. The log-likelihood is usually easier to differentiate because it turns products into sums.
19. The first step in finding the Maximum Likelihood Estimate (MLE) is typically to:
maximum likelihood estimation
Easy
A. write down the likelihood function for the sample.
B. calculate the sample variance.
C. collect a second sample for validation.
D. assume the parameter is zero.
Correct Answer: write down the likelihood function for the sample.
Explanation:
To find the value of the parameter that maximizes the likelihood, you must first define the likelihood function, which is the joint probability of observing your specific sample data, viewed as a function of the parameter.
20. A common method to find the maximum of the likelihood function is to:
maximum likelihood estimation
Easy
A. find the average of the observed data points.
B. take the integral of the function and set it to one.
C. take the derivative with respect to the parameter and set it to zero.
D. use a value from a pre-existing table.
Correct Answer: take the derivative with respect to the parameter and set it to zero.
Explanation:
This is a standard calculus technique for finding the maximum of a function. By finding where the slope (the first derivative) is zero, we can identify critical points, one of which is usually the maximum.
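To make the recipe concrete, here is a small numerical sketch (illustrative only; numpy is assumed, and the coin-toss data are made up) that evaluates a Bernoulli log-likelihood over a grid of candidate values of $p$ and recovers the same maximizer that calculus gives, $\hat{p} = k/n$:

    import numpy as np

    # Hypothetical data: 10 Bernoulli trials (1 = success), 7 successes in total.
    data = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
    k, n = data.sum(), data.size

    # Log-likelihood of a Bernoulli(p) sample: k*log(p) + (n - k)*log(1 - p).
    p_grid = np.linspace(0.001, 0.999, 999)
    log_lik = k * np.log(p_grid) + (n - k) * np.log(1 - p_grid)

    print(p_grid[np.argmax(log_lik)])  # grid maximizer, approximately 0.7
    print(k / n)                       # closed-form MLE k/n = 0.7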
21. Let $X_1, \dots, X_n$ be a random sample from a population with mean $\mu$ and variance $\sigma^2$. Let $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$ and $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$. Which of the following statements is true regarding these estimators for the population variance $\sigma^2$?
unbiased estimator
Medium
A. $\hat{\sigma}^2$ is an unbiased estimator of $\sigma^2$.
B. Both $\hat{\sigma}^2$ and $S^2$ are biased estimators of $\sigma^2$.
C. $S^2$ is an unbiased estimator of $\sigma^2$.
D. Both $\hat{\sigma}^2$ and $S^2$ are unbiased estimators of $\sigma^2$.
Correct Answer: $S^2$ is an unbiased estimator of $\sigma^2$.
Explanation:
The sample variance with denominator $n-1$, $S^2$, is defined specifically to be an unbiased estimator of the population variance $\sigma^2$, meaning $E(S^2) = \sigma^2$. The estimator with denominator $n$, $\hat{\sigma}^2$, is biased, as its expected value is $\frac{n-1}{n}\sigma^2$.
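A quick Monte Carlo sketch (illustrative only; numpy assumed, with arbitrarily chosen $n$, $\sigma^2$, and replication count) showing the $1/n$ estimator under-shooting $\sigma^2$ on average while the $1/(n-1)$ version centers on it:

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps, sigma2 = 10, 100_000, 4.0          # small n makes the bias easy to see

    samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    var_biased = samples.var(axis=1, ddof=0)     # denominator n
    var_unbiased = samples.var(axis=1, ddof=1)   # denominator n - 1

    print(var_biased.mean())    # close to (n-1)/n * sigma2 = 3.6
    print(var_unbiased.mean())  # close to sigma2 = 4.0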
22. Let $X_1, \dots, X_n$ be a random sample from a Uniform distribution on the interval $[0, \theta]$. The estimator $\hat{\theta} = 2\bar{X}$ is proposed for $\theta$. Is this estimator unbiased?
unbiased estimator
Medium
A. No, because the maximum likelihood estimator is $X_{(n)} = \max_i X_i$.
B. No, because its variance is too large.
C. Yes, but only if $n$ is large.
D. Yes, because $E(2\bar{X}) = \theta$.
Correct Answer: Yes, because $E(2\bar{X}) = \theta$.
Explanation:
For a $U[0, \theta]$ distribution, the expected value is $E(X_i) = \frac{\theta}{2}$. The expected value of the sample mean is $E(\bar{X}) = \frac{\theta}{2}$. Therefore, the expected value of the estimator is $E(2\bar{X}) = 2\cdot\frac{\theta}{2} = \theta$. Since $E(\hat{\theta}) = \theta$, the estimator is unbiased.
23. An estimator $\hat{\theta}$ for a parameter $\theta$ has an expected value $E(\hat{\theta}) = \theta + c$ for some constant $c \neq 0$. What is the bias of this estimator?
unbiased estimator
Medium
A. $c$
B.
C. The estimator is unbiased.
D.
Correct Answer: $c$
Explanation:
The bias of an estimator is defined as $\operatorname{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta$. Given $E(\hat{\theta}) = \theta + c$, the bias is $(\theta + c) - \theta = c$.
24. Let $X_1, \dots, X_n$ (with $n \ge 2$) be a random sample from a population. Two estimators are proposed for the population mean $\mu$: $\hat{\mu}_1 = \frac{X_1 + X_2}{2}$ and $\hat{\mu}_2 = \bar{X}$. Which statement is correct?
unbiased estimator
Medium
A. Both $\hat{\mu}_1$ and $\hat{\mu}_2$ are unbiased estimators of $\mu$.
B. Only $\hat{\mu}_1$ is an unbiased estimator of $\mu$.
C. Only $\hat{\mu}_2$ is an unbiased estimator of $\mu$.
D. Neither is an unbiased estimator of $\mu$.
Correct Answer: Both $\hat{\mu}_1$ and $\hat{\mu}_2$ are unbiased estimators of $\mu$.
Explanation:
The expectation of the sample mean is $E(\bar{X}) = \mu$, so it is unbiased. The expectation of the first estimator is $E\!\left(\frac{X_1 + X_2}{2}\right) = \frac{\mu + \mu}{2} = \mu$. Thus, $\hat{\mu}_1$ is also an unbiased estimator of $\mu$.
25. Let $X$ be a single observation from a Bernoulli distribution with parameter $p$. An estimator for $p^2$ is proposed as $\hat{\theta} = X^2$. What is the bias of this estimator?
unbiased estimator
Medium
A. $p(1-p)$
B. $0$
C.
D.
Correct Answer: $p(1-p)$
Explanation:
The goal is to estimate $p^2$. Since $X$ takes only the values 0 and 1, $X^2 = X$, so the expected value of the estimator is $E(X^2) = E(X) = p$. The bias is defined as $E(\hat{\theta}) - p^2 = p - p^2 = p(1-p)$. The estimator is biased unless $p = 0$ or $p = 1$.
26. An estimator $\hat{\theta}_n$ for a parameter $\theta$ is consistent if which of the following conditions holds as the sample size $n \to \infty$?
consistent estimator
Medium
A. The estimator is unbiased for any sample size $n$.
B. The bias approaches 0, but the variance can be non-zero.
C. The bias and the variance both approach 0.
D. The variance approaches 0, but the estimator can remain biased.
Correct Answer: The bias and the variance both approach 0.
Explanation:
A sufficient condition for an estimator to be consistent is that it is asymptotically unbiased (i.e., its bias approaches 0 as $n \to \infty$) and its variance also approaches 0 as $n \to \infty$.
27. Consider the estimator $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$ for the population variance $\sigma^2$. Which statement best describes this estimator?
consistent estimator
Medium
A. It is biased and not consistent.
B. It is unbiased and consistent.
C. It is biased but consistent.
D. It is unbiased but not consistent.
Correct Answer: It is biased but consistent.
Explanation:
The estimator is biased because $E(\hat{\sigma}^2) = \frac{n-1}{n}\sigma^2 \neq \sigma^2$. However, as $n \to \infty$, the bias $-\frac{\sigma^2}{n} \to 0$. Also, its variance approaches 0 as $n \to \infty$. Therefore, the estimator is biased but consistent.
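Spelling out the limits quoted in the explanation:

$$\operatorname{Bias}(\hat{\sigma}^2) = \frac{n-1}{n}\sigma^2 - \sigma^2 = -\frac{\sigma^2}{n} \xrightarrow[n \to \infty]{} 0,$$

and, for normally distributed data for instance, $Var(\hat{\sigma}^2) = \frac{2(n-1)}{n^2}\sigma^4 \to 0$ as well, so both requirements for consistency are met.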
28. The Weak Law of Large Numbers states that the sample mean $\bar{X}$ converges in probability to the population mean $\mu$. This directly implies that $\bar{X}$ is a(n) ____ estimator for $\mu$.
consistent estimator
Medium
A. sufficient
B. consistent
C. efficient
D. unbiased
Correct Answer: consistent
Explanation:
The definition of a consistent estimator is one that converges in probability to the true parameter value as the sample size increases. The Weak Law of Large Numbers is the formal statement of this convergence for the sample mean, making it a consistent estimator of the population mean.
29. Let $\hat{\mu} = \bar{X} + \frac{1}{n}$ be an estimator for the population mean $\mu$, where $\bar{X}$ is the sample mean from a population with finite variance $\sigma^2$. Is $\hat{\mu}$ a consistent estimator for $\mu$?
consistent estimator
Medium
A. No, because its variance does not tend to 0.
B. Yes, but only if the population is normally distributed.
C. No, because it is biased for any finite $n$.
D. Yes, because its bias and variance both tend to 0.
Correct Answer: Yes, because its bias and variance both tend to 0.
Explanation:
The bias is $E(\hat{\mu}) - \mu = \frac{1}{n}$, which tends to 0 as $n \to \infty$. The variance is $Var(\hat{\mu}) = Var(\bar{X}) = \frac{\sigma^2}{n}$, which also tends to 0. Since both conditions for consistency are met, the estimator is consistent.
30. If an estimator is unbiased, is it necessarily consistent?
consistent estimator
Medium
A. No, an unbiased estimator can never be consistent.
B. Yes, all unbiased estimators are consistent.
C. No, an unbiased estimator also needs its variance to approach 0 as $n \to \infty$ to be consistent.
D. Yes, provided the sample size is greater than 30.
Correct Answer: No, an unbiased estimator also needs its variance to approach 0 as $n \to \infty$ to be consistent.
Explanation:
Unbiasedness alone is not sufficient for consistency. For example, the estimator $\hat{\mu} = X_1$ (the first observation) is an unbiased estimator of the population mean $\mu$, but its variance is $\sigma^2$, which does not decrease as the sample size increases. Therefore, it is not a consistent estimator.
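A small simulation sketch (illustrative only; numpy assumed, with arbitrary values of $\mu$, $\sigma$, and the replication count) that makes the counterexample concrete by tracking the spread of $X_1$ and $\bar{X}$ as $n$ grows:

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma = 10.0, 2.0

    for n in (5, 50, 500):
        samples = rng.normal(mu, sigma, size=(20_000, n))
        x1 = samples[:, 0]           # estimator X_1: unbiased, but its variance stays sigma^2
        xbar = samples.mean(axis=1)  # estimator X-bar: unbiased, variance sigma^2 / n
        print(n, round(x1.var(), 2), round(xbar.var(), 4))

    # X_1's variance stays near 4 for every n, while X-bar's shrinks toward 0,
    # which is why X-bar is consistent and X_1 is not.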
31. For a random sample from a Normal distribution $N(\mu, \sigma^2)$, both the sample mean and the sample median are unbiased estimators of $\mu$. Why is the sample mean generally preferred?
efficient estimator
Medium
A. The sample mean has a smaller variance.
B. The sample median is only unbiased for large samples.
C. The sample mean is easier to calculate.
D. The sample median is not a consistent estimator.
Correct Answer: The sample mean has a smaller variance.
Explanation:
For a normal distribution, the variance of the sample mean is $\frac{\sigma^2}{n}$, while the variance of the sample median is approximately $\frac{\pi}{2}\cdot\frac{\sigma^2}{n}$. Since $\frac{\pi}{2} > 1$, the sample mean has a smaller variance, making it a more efficient estimator for $\mu$.
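A short simulation sketch (illustrative only; numpy assumed, sample size and replication count chosen arbitrarily) comparing the sampling variability of the mean and the median on standard normal data; the variance ratio should land near $\pi/2 \approx 1.57$:

    import numpy as np

    rng = np.random.default_rng(2)
    n, reps = 100, 50_000
    samples = rng.normal(0.0, 1.0, size=(reps, n))

    var_mean = samples.mean(axis=1).var()
    var_median = np.median(samples, axis=1).var()
    print(var_mean, var_median, var_median / var_mean)  # ratio comes out close to pi/2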
32. Let $\hat{\theta}_1$ and $\hat{\theta}_2$ be two unbiased estimators for a parameter $\theta$. If $Var(\hat{\theta}_1) = 5$ and $Var(\hat{\theta}_2) = 6$, what is the relative efficiency of $\hat{\theta}_1$ with respect to $\hat{\theta}_2$?
efficient estimator
Medium
A. 2
B. 0.833
C.
D. 1.2
Correct Answer: 1.2
Explanation:
The relative efficiency of $\hat{\theta}_1$ with respect to $\hat{\theta}_2$ is defined as $\frac{Var(\hat{\theta}_2)}{Var(\hat{\theta}_1)}$. Plugging in the values, we get $\frac{6}{5} = 1.2$. Since the efficiency is greater than 1, $\hat{\theta}_1$ is more efficient than $\hat{\theta}_2$.
33. What does it mean if an unbiased estimator's variance is equal to the Cramér-Rao Lower Bound (CRLB)?
efficient estimator
Medium
A. The estimator is the maximum likelihood estimator.
B. The estimator is biased.
C. The estimator is the most efficient unbiased estimator possible.
D. The estimator is consistent.
Correct Answer: The estimator is the most efficient unbiased estimator possible.
Explanation:
The Cramér-Rao Lower Bound gives a theoretical minimum for the variance of any unbiased estimator. An estimator that achieves this lower bound is called a Minimum Variance Unbiased Estimator (MVUE), meaning it is the most efficient among all unbiased estimators.
34. For a sample $X_1, \dots, X_n$ from a Uniform distribution on $(0, \theta)$, two unbiased estimators for $\theta$ are $\hat{\theta}_1 = 2\bar{X}$ and $\hat{\theta}_2 = \frac{n+1}{n}X_{(n)}$, where $X_{(n)}$ is the maximum value in the sample. It is known that $Var(\hat{\theta}_1) = \frac{\theta^2}{3n}$ and $Var(\hat{\theta}_2) = \frac{\theta^2}{n(n+2)}$. Which estimator is more efficient for $\theta$?
efficient estimator
Medium
A. $\hat{\theta}_1 = 2\bar{X}$
B. They are equally efficient.
C. $\hat{\theta}_2 = \frac{n+1}{n}X_{(n)}$
D. It depends on the value of $\theta$.
Correct Answer: $\hat{\theta}_2 = \frac{n+1}{n}X_{(n)}$
Explanation:
To compare efficiency, we compare their variances. We need to see if $\frac{\theta^2}{n(n+2)}$ is smaller than $\frac{\theta^2}{3n}$. This is equivalent to comparing $n(n+2)$ with $3n$. For any $n > 1$, we have $n + 2 > 3$, which implies $n(n+2) > 3n$. Therefore, $Var(\hat{\theta}_2) < Var(\hat{\theta}_1)$, making $\hat{\theta}_2$ the more efficient estimator.
35. Why is efficiency (minimum variance) a desirable property for an estimator, in addition to being unbiased?
efficient estimator
Medium
A. A lower variance is only important for small sample sizes.
B. A lower variance implies that the estimator's values are more concentrated around the true parameter.
C. A lower variance guarantees the estimator is consistent.
D. A lower variance makes the estimator easier to compute.
Correct Answer: A lower variance implies that the estimator's values are more concentrated around the true parameter.
Explanation:
If an estimator is unbiased, its distribution is centered on the true parameter value. A lower variance means this distribution is narrower, or more tightly clustered around the center. This implies that any single estimate from this estimator is more likely to be close to the true parameter value, making it more reliable.
36. A coin is tossed 10 times, resulting in 7 heads. Let $p$ be the probability of getting a head. What is the maximum likelihood estimate (MLE) of $p$?
maximum likelihood estimation
Medium
A. 0.5
B. 0.3
C. 7
D. 0.7
Correct Answer: 0.7
Explanation:
For a sequence of $n$ Bernoulli trials with $k$ successes, the likelihood function is $L(p) = p^{k}(1-p)^{n-k}$. The MLE for $p$ is $\hat{p} = \frac{k}{n}$. In this case, $k = 7$ and $n = 10$, so the MLE of $p$ is $\frac{7}{10} = 0.7$.
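The calculus behind this answer, written out for $k$ successes in $n$ trials:

$$\ell(p) = k\ln p + (n-k)\ln(1-p), \qquad \ell'(p) = \frac{k}{p} - \frac{n-k}{1-p} = 0 \;\Longrightarrow\; \hat{p} = \frac{k}{n} = \frac{7}{10} = 0.7.$$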
37. Let $X_1, \dots, X_n$ be a random sample from an Exponential distribution with PDF $f(x; \lambda) = \lambda e^{-\lambda x}$ for $x > 0$. What is the maximum likelihood estimator (MLE) for $\lambda$?
maximum likelihood estimation
Medium
A. $\hat{\lambda} = \frac{1}{\bar{X}}$
B.
C.
D.
Correct Answer: $\hat{\lambda} = \frac{1}{\bar{X}}$
Explanation:
The likelihood function is $L(\lambda) = \lambda^{n} e^{-\lambda\sum_{i=1}^{n} x_i}$. The log-likelihood is $\ell(\lambda) = n\ln\lambda - \lambda\sum_{i=1}^{n} x_i$. Taking the derivative with respect to $\lambda$ and setting it to zero gives $\frac{n}{\lambda} - \sum_{i=1}^{n} x_i = 0$. Solving for $\lambda$ yields $\hat{\lambda} = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar{X}}$.
38. Suppose the MLE for the variance $\sigma^2$ of a normal distribution is found to be $\hat{\sigma}^2 = 5$. According to the invariance property of MLEs, what is the MLE for the standard deviation $\sigma$?
maximum likelihood estimation
Medium
A. $\sqrt{5}$
B.
C. 5
D. 25
Correct Answer: $\sqrt{5}$
Explanation:
The invariance property of maximum likelihood estimators states that if $\hat{\theta}$ is the MLE of $\theta$, then for any function $g$, the MLE of $g(\theta)$ is $g(\hat{\theta})$. Here, the parameter is $\sigma^2$ and its MLE is 5. We want the MLE of $\sigma = \sqrt{\sigma^2}$. Therefore, the MLE of $\sigma$ is $\sqrt{5}$.
39. A sample of size $n$ is drawn from a Poisson distribution with mean $\lambda$. The observed values are $x_1, \dots, x_n$. What is the maximum likelihood estimator (MLE) for $\lambda$?
maximum likelihood estimation
Medium
A. The sample median
B.
C. The sample mean, $\bar{X}$
D. The sample variance, $S^2$
Correct Answer: The sample mean, $\bar{X}$
Explanation:
The likelihood function for a Poisson sample is $L(\lambda) = \prod_{i=1}^{n}\frac{e^{-\lambda}\lambda^{x_i}}{x_i!}$. Taking the log, differentiating with respect to $\lambda$, and setting to zero gives $-n + \frac{\sum_{i=1}^{n} x_i}{\lambda} = 0$, which simplifies to $\lambda = \frac{\sum_{i=1}^{n} x_i}{n}$. Solving for $\lambda$ gives the MLE $\hat{\lambda} = \bar{X}$.
40. Which of the following best describes the principle of maximum likelihood estimation?
maximum likelihood estimation
Medium
A. It chooses the parameter value that results in an unbiased estimator.
B. It chooses the parameter value based on a prior belief about the parameter.
C. It chooses the parameter value that minimizes the variance of the estimator.
D. It chooses the parameter value that makes the observed data most probable.
Correct Answer: It chooses the parameter value that makes the observed data most probable.
Explanation:
The principle of maximum likelihood estimation is to find the value of the population parameter(s) that maximizes the likelihood function. The likelihood function measures how probable the observed sample is for a given parameter value. Therefore, MLE finds the parameter value under which the observed data has the highest probability of occurring.
41. Let $X_1, \dots, X_n$ be a random sample from a Uniform distribution on the interval $[\theta, \theta + 1]$. What is the Maximum Likelihood Estimator (MLE) for $\theta$?
maximum likelihood estimation
Hard
A.
B. The sample median
C. Any value in the interval $[X_{(n)} - 1, X_{(1)}]$
D.
Correct Answer: Any value in the interval $[X_{(n)} - 1, X_{(1)}]$
Explanation:
The likelihood function is $L(\theta) = 1$ for $\theta \le X_{(1)}$ and $X_{(n)} \le \theta + 1$, and $0$ otherwise. This simplifies to the condition $X_{(n)} - 1 \le \theta \le X_{(1)}$. Any value of $\theta$ in this interval maximizes the likelihood function (which is constant at 1 in this range). Therefore, any $\hat{\theta}$ such that $X_{(n)} - 1 \le \hat{\theta} \le X_{(1)}$ is an MLE.
42. Let $X_1, \dots, X_n$ be i.i.d. from a distribution with PDF $f(x; \theta) = e^{-(x - \theta)}$ for $x \ge \theta$. The Cramér-Rao Lower Bound (CRLB) for the variance of an unbiased estimator of $\theta$ is $\frac{1}{n}$. Consider the estimator $\hat{\theta} = X_{(1)} - \frac{1}{n}$, where $X_{(1)}$ is the minimum order statistic. Which statement is true?
efficient estimator
Hard
A. $\hat{\theta}$ cannot be declared the MVUE on the basis of the CRLB, because the regularity conditions for the CRLB do not hold.
B. $\hat{\theta}$ is an unbiased estimator whose variance meets the CRLB.
C. $\hat{\theta}$ is a biased estimator, so the CRLB does not apply.
D. $\hat{\theta}$ is the MVUE because its variance is less than the CRLB.
Correct Answer: $\hat{\theta}$ cannot be declared the MVUE on the basis of the CRLB, because the regularity conditions for the CRLB do not hold.
Explanation:
The distribution is a shifted Exponential(1). The support of the distribution, $x \ge \theta$, depends on the parameter $\theta$. This violates one of the regularity conditions for the Cramér-Rao Lower Bound (the ability to differentiate the integral of the likelihood function with respect to the parameter). Therefore, the CRLB is not a valid lower bound for the variance of unbiased estimators in this case. The actual MVUE is $X_{(1)} - \frac{1}{n}$, but the reasoning based on the CRLB is flawed.
43. Let $X_1, \dots, X_n$ be a random sample from a Poisson($\lambda$) distribution. We want to find an unbiased estimator for $\lambda + \lambda^2$. Which of the following estimators is unbiased for $\lambda + \lambda^2$?
unbiased estimator
Hard
A. $\frac{1}{n}\sum_{i=1}^{n} X_i^2$
B.
C.
D.
Correct Answer: $\frac{1}{n}\sum_{i=1}^{n} X_i^2$
Explanation:
For a Poisson($\lambda$) random variable $X_i$, we know $E(X_i) = \lambda$ and $Var(X_i) = \lambda$. Since $Var(X_i) = E(X_i^2) - [E(X_i)]^2$, we have $\lambda = E(X_i^2) - \lambda^2$, which implies $E(X_i^2) = \lambda + \lambda^2$. By linearity of expectation, $E\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i^2\right) = \lambda + \lambda^2$. Thus, the sample second moment is an unbiased estimator for $\lambda + \lambda^2$.
44. Let $\hat{\theta}_n$ be an estimator for a parameter $\theta$. Which of the following conditions is sufficient for $\hat{\theta}_n$ to be a consistent estimator, but is not a necessary condition?
consistent estimator
Hard
A. $\hat{\theta}_n$ is the Maximum Likelihood Estimator.
B. $\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| > \epsilon) = 0$ for all $\epsilon > 0$
C. $\lim_{n \to \infty} E(\hat{\theta}_n) = \theta$ and $\lim_{n \to \infty} Var(\hat{\theta}_n) = 0$
D. $\hat{\theta}_n$ is an unbiased estimator.
Correct Answer: $\lim_{n \to \infty} E(\hat{\theta}_n) = \theta$ and $\lim_{n \to \infty} Var(\hat{\theta}_n) = 0$
Explanation:
The condition that the estimator is asymptotically unbiased ($\lim_{n \to \infty} E(\hat{\theta}_n) = \theta$) and its variance goes to zero ($\lim_{n \to \infty} Var(\hat{\theta}_n) = 0$) is a sufficient condition for consistency, often proven using Chebyshev's inequality. However, it is not a necessary condition. An estimator can be consistent even if its variance is undefined. The definition of consistency is $\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| > \epsilon) = 0$ for all $\epsilon > 0$, making that option the definition itself, not just a sufficient condition.
45. Let $X_1, \dots, X_n$ be an i.i.d. sample from a Laplace distribution with PDF $f(x; \theta) = \frac{1}{2}e^{-|x - \theta|}$. What is the maximum likelihood estimator (MLE) for $\theta$?
maximum likelihood estimation
Hard
A. The sample mean, $\bar{X}$
B. The smallest order statistic, $X_{(1)}$
C. The sample median
D. The solution to $\sum_{i=1}^{n}(x_i - \theta) = 0$
Correct Answer: The sample median
Explanation:
The log-likelihood function is $\ell(\theta) = -n\ln 2 - \sum_{i=1}^{n}|x_i - \theta|$. To maximize the log-likelihood, we must minimize the sum of absolute deviations, $\sum_{i=1}^{n}|x_i - \theta|$. This sum is minimized when $\theta$ is the sample median of the $x_i$'s. The sample mean minimizes the sum of squared deviations, not absolute deviations.
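A tiny numerical sketch (illustrative only; numpy assumed, data values made up) showing that the sum of absolute deviations is minimized at the sample median rather than at the sample mean, which is the heart of the Laplace MLE argument:

    import numpy as np

    x = np.array([0.2, 1.1, 1.5, 4.0, 9.3])    # hypothetical observations
    theta_grid = np.linspace(-2.0, 12.0, 2801)  # candidate values of theta

    # Sum of absolute deviations sum_i |x_i - theta| for every candidate theta.
    sad = np.abs(x[:, None] - theta_grid[None, :]).sum(axis=0)

    print(theta_grid[np.argmin(sad)])  # ~1.5, the sample median
    print(np.median(x), x.mean())      # median = 1.5, mean = 3.22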
46. For a random sample $X_1, \dots, X_n$ from $N(\mu, \sigma^2)$ with $\sigma^2$ known, the Cramér-Rao Lower Bound for the variance of any unbiased estimator of $\mu$ is $\frac{\sigma^2}{n}$. The sample mean $\bar{X}$ is an unbiased estimator for $\mu$. What is the efficiency of $\bar{X}$ relative to the CRLB?
efficient estimator
Hard
A. $1$
B. It depends on the value of $\sigma^2$.
C.
D. $3$
Correct Answer: $1$
Explanation:
The efficiency of an unbiased estimator is the ratio of the CRLB to the estimator's variance. Since $Var(\bar{X}) = \frac{\sigma^2}{n}$ equals the CRLB, the efficiency is $\frac{\sigma^2/n}{\sigma^2/n} = 1$; the sample mean attains the bound and is fully efficient.
47. Let $X_1, \dots, X_n$ be a random sample from a $U(0, \theta)$ distribution, with $\theta > 0$. Let $X_{(n)} = \max(X_1, \dots, X_n)$ be the maximum order statistic. We know that $X_{(n)}$ is a biased estimator for $\theta$. Which of the following estimators for $\theta$ is unbiased?
unbiased estimator
Hard
A. $\frac{n+1}{n} X_{(n)}$
B.
C.
D.
Correct Answer: $\frac{n+1}{n} X_{(n)}$
Explanation:
The PDF of $X_{(n)}$ is $f_{X_{(n)}}(y) = \frac{n y^{n-1}}{\theta^{n}}$ for $0 \le y \le \theta$. To find $E(X_{(n)})$, we compute $E(X_{(n)}) = \int_0^{\theta} y\cdot\frac{n y^{n-1}}{\theta^{n}}\,dy = \frac{n}{n+1}\theta$. To create an unbiased estimator for $\theta$, we must multiply $X_{(n)}$ by the reciprocal of its coefficient, which is $\frac{n+1}{n}$. Therefore, $E\!\left(\frac{n+1}{n} X_{(n)}\right) = \theta$.
48. Let $X$ be a single observation from a binomial distribution, $X \sim \text{Binomial}(n, p)$, where $n$ is known. Using the invariance property of MLEs, what is the MLE for the odds, $\frac{p}{1-p}$?
maximum likelihood estimation
Hard
A. $\frac{X}{n - X}$
B.
C.
D.
Correct Answer: $\frac{X}{n - X}$
Explanation:
The likelihood function is $L(p) = \binom{n}{X} p^{X}(1-p)^{n-X}$. The MLE for $p$ is found by maximizing $L(p)$, which yields $\hat{p} = \frac{X}{n}$. The invariance property of MLEs states that if $\hat{\theta}$ is the MLE of $\theta$, then for any function $g$, the MLE of $g(\theta)$ is $g(\hat{\theta})$. Here, $g(p) = \frac{p}{1-p}$. Therefore, the MLE for the odds is $\frac{\hat{p}}{1-\hat{p}} = \frac{X/n}{1 - X/n} = \frac{X}{n - X}$.
49. Let $X_1, \dots, X_n$ be i.i.d. from a Cauchy distribution with location parameter $\theta$ and scale 1. The PDF is $f(x; \theta) = \frac{1}{\pi\left[1 + (x - \theta)^2\right]}$. Which statement about the sample mean $\bar{X}$ as an estimator for $\theta$ is correct?
consistent estimator
Hard
A. $\bar{X}$ is asymptotically normal, which implies consistency.
B. $\bar{X}$ is inconsistent because the distribution of $\bar{X}$ is the same as the distribution of a single $X_i$.
C. $\bar{X}$ is consistent due to the Law of Large Numbers.
D. $\bar{X}$ is consistent because it is an unbiased estimator.
Correct Answer: $\bar{X}$ is inconsistent because the distribution of $\bar{X}$ is the same as the distribution of a single $X_i$.
Explanation:
The Cauchy distribution is a special case where the mean and variance are undefined. The Law of Large Numbers does not apply because it requires a finite mean. A property of the Cauchy distribution is that the average of $n$ i.i.d. Cauchy random variables has the same Cauchy distribution as the individual variables. This means the distribution of $\bar{X}$ does not 'narrow' or converge to a single point as $n$ increases. Therefore, $\bar{X}$ does not converge in probability to $\theta$ and is an inconsistent estimator.
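A simulation sketch of the Cauchy pathology (illustrative only; numpy assumed, sample sizes and replication count chosen arbitrarily): the spread of the sample mean's sampling distribution does not shrink as $n$ grows, in contrast to what happens for distributions with a finite mean:

    import numpy as np

    rng = np.random.default_rng(3)

    for n in (10, 100, 10_000):
        means = rng.standard_cauchy(size=(1_000, n)).mean(axis=1)
        q25, q75 = np.percentile(means, [25, 75])
        print(n, round(q75 - q25, 2))  # interquartile range of the sampling distribution

    # The IQR stays near 2 (the IQR of a single standard Cauchy draw) no matter how
    # large n gets, so the sample mean never concentrates around theta.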
50. Let $X_1, \dots, X_n$ be a random sample from a Bernoulli($p$) distribution. The variance of the sample mean is $\frac{p(1-p)}{n}$. The Cramér-Rao Lower Bound (CRLB) for an unbiased estimator of $p$ is also $\frac{p(1-p)}{n}$. Consider estimating a smooth function $g(p)$ of the parameter. What is the CRLB for an unbiased estimator of $g(p)$?
efficient estimator
Hard
A. $\frac{[g'(p)]^2\, p(1-p)}{n}$
B.
C.
D.
Correct Answer: $\frac{[g'(p)]^2\, p(1-p)}{n}$
Explanation:
The CRLB for an unbiased estimator of a function $g(p)$ is given by $\frac{[g'(p)]^2}{n I_1(p)}$, where $I_1(p)$ is the Fisher information for a single observation. For a Bernoulli trial, $I_1(p) = \frac{1}{p(1-p)}$. Plugging this into the formula, we get CRLB$(g(p)) = \frac{[g'(p)]^2\, p(1-p)}{n}$.
51. Let $\hat{\theta}_1$ and $\hat{\theta}_2$ be two independent, unbiased estimators for a parameter $\theta$, with $Var(\hat{\theta}_1) = \sigma_1^2$ and $Var(\hat{\theta}_2) = \sigma_2^2$. Consider a combined estimator $\hat{\theta}_c = a\hat{\theta}_1 + (1-a)\hat{\theta}_2$. What value of $a$ produces the Minimum Variance Unbiased Estimator (MVUE) in this class of linear estimators?
unbiased estimator
Hard
A. $a = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}$
B.
C.
D.
Correct Answer: $a = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}$
Explanation:
For any $a$, $E(\hat{\theta}_c) = a\theta + (1-a)\theta = \theta$, so the estimator is unbiased. The variance is $Var(\hat{\theta}_c) = a^2\sigma_1^2 + (1-a)^2\sigma_2^2$ due to independence. To minimize this quadratic in $a$, we take the derivative and set it to zero: $2a\sigma_1^2 - 2(1-a)\sigma_2^2 = 0$, which gives $a = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}$.
52. Let $X_1, \dots, X_n$ be a random sample from a distribution with PDF $f(x; \theta) = \theta x^{\theta - 1}$ for $0 < x < 1$ and $\theta > 0$. What is the Maximum Likelihood Estimator (MLE) for $\theta$?
maximum likelihood estimation
Hard
A. $\hat{\theta} = -\frac{n}{\sum_{i=1}^{n}\ln X_i}$
B.
C.
D.
Correct Answer: $\hat{\theta} = -\frac{n}{\sum_{i=1}^{n}\ln X_i}$
Explanation:
The likelihood function is $L(\theta) = \theta^{n}\left(\prod_{i=1}^{n} x_i\right)^{\theta - 1}$. The log-likelihood is $\ell(\theta) = n\ln\theta + (\theta - 1)\sum_{i=1}^{n}\ln x_i$. Taking the derivative with respect to $\theta$ and setting it to zero gives: $\frac{n}{\theta} + \sum_{i=1}^{n}\ln x_i = 0$. Solving for $\theta$ yields $\frac{n}{\theta} = -\sum_{i=1}^{n}\ln x_i$, so $\theta = \frac{n}{-\sum_{i=1}^{n}\ln x_i}$, which gives $\hat{\theta} = -\frac{n}{\sum_{i=1}^{n}\ln X_i}$.
53. Suppose $X$ follows a Geometric distribution with probability of success $p$, for $x = 1, 2, 3, \dots$. We want an unbiased estimator for $\frac{1}{p}$. Which of the following estimators based on a single observation is unbiased for $\frac{1}{p}$?
unbiased estimator
Hard
A. $T(X) = X$
B.
C.
D. No simple polynomial in X can be an unbiased estimator for $\frac{1}{p}$.
Correct Answer: $T(X) = X$
Explanation:
The expected value of a Geometric random variable with this parameterization is $E(X) = \frac{1}{p}$. Therefore, the estimator $T(X) = X$ is itself an unbiased estimator for $\frac{1}{p}$. The other options are incorrect because their expected values do not simplify to $\frac{1}{p}$.
54. Let $X_1, \dots, X_n$ be a sample from $N(\mu, \sigma^2)$ where both parameters are unknown. The Fisher Information is a $2 \times 2$ matrix. The Cramér-Rao Lower Bound for the variance of an unbiased estimator of $\mu$ is the $(1,1)$ element of the inverse information matrix. What is this value?
efficient estimator
Hard
A. $\frac{\sigma^2}{n}$
B.
C.
D.
Correct Answer: $\frac{\sigma^2}{n}$
Explanation:
For $N(\mu, \sigma^2)$, the Fisher Information matrix for a single observation is $I_1(\mu, \sigma^2) = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}$. For a sample of size $n$, the information matrix is $n I_1$. The inverse of $n I_1$ is $\begin{pmatrix} \frac{\sigma^2}{n} & 0 \\ 0 & \frac{2\sigma^4}{n} \end{pmatrix}$. The CRLB for $\mu$ is the top-left element of this inverse matrix, which is $\frac{\sigma^2}{n}$. This shows that knowing or not knowing $\sigma^2$ does not change the CRLB for $\mu$, because the off-diagonal information elements are zero.
55. Let $X_1, \dots, X_n$ be i.i.d. from $U\!\left(\theta - \frac{1}{2},\, \theta + \frac{1}{2}\right)$. Consider two estimators for $\theta$: $\hat{\theta}_1 = \bar{X}$ and $\hat{\theta}_2 = \frac{X_{(1)} + X_{(n)}}{2}$ (the sample midrange). Which of the following statements is true regarding their consistency?
consistent estimator
Hard
A. Neither estimator is consistent.
B. Both are consistent, but $\hat{\theta}_2$ converges faster.
C. Only $\hat{\theta}_1$ is consistent.
D. Only $\hat{\theta}_2$ is consistent.
Correct Answer: Both are consistent, but $\hat{\theta}_2$ converges faster.
Explanation:
Both estimators are unbiased for $\theta$. For $\hat{\theta}_1 = \bar{X}$, $Var(\bar{X}) = \frac{1}{12n}$, which goes to 0 as $n \to \infty$, so it is consistent. For $\hat{\theta}_2$, the variance is $\frac{1}{2(n+1)(n+2)}$, which goes to 0 at a much faster rate of $O(1/n^2)$ compared to $\bar{X}$'s $O(1/n)$. Since both are unbiased and their variances tend to zero, both are consistent, but the midrange converges much faster for a uniform distribution.
56. A device has an exponential lifetime with parameter $\lambda$. The test is censored at time $c$. For $n$ devices, we observe $r$ failure times $t_1, \dots, t_r$ (all $\le c$) and $n - r$ devices that survived past time $c$. What is the MLE for $\lambda$?
maximum likelihood estimation
Hard
A. $\hat{\lambda} = \frac{r}{\sum_{i=1}^{r} t_i + (n-r)c}$
B.
C.
D.
Correct Answer: $\hat{\lambda} = \frac{r}{\sum_{i=1}^{r} t_i + (n-r)c}$
Explanation:
The likelihood function for this censored data is $L(\lambda) = \left[\prod_{i=1}^{r}\lambda e^{-\lambda t_i}\right]\left[e^{-\lambda c}\right]^{n-r}$. This simplifies to $\lambda^{r} e^{-\lambda\left(\sum_{i=1}^{r} t_i + (n-r)c\right)}$. The log-likelihood is $r\ln\lambda - \lambda\left(\sum_{i=1}^{r} t_i + (n-r)c\right)$. Taking the derivative w.r.t. $\lambda$ and setting to zero gives $\frac{r}{\lambda} - \left(\sum_{i=1}^{r} t_i + (n-r)c\right) = 0$. Solving for $\lambda$ gives the MLE $\hat{\lambda} = \frac{r}{\sum_{i=1}^{r} t_i + (n-r)c}$.
57. Let $X_1, \dots, X_n$ be i.i.d. $N(\mu, 1)$. We want to estimate $\tau(\mu) = P(X_1 \le c) = \Phi(c - \mu)$, where $c$ is a known constant and $\Phi$ is the standard normal CDF. Using the Rao-Blackwell theorem with the sufficient statistic $\bar{X}$, find the MVUE of $\tau(\mu)$. Let $T = \mathbf{1}\{X_1 \le c\}$ be an initial unbiased estimator.
unbiased estimator
Hard
A. $\Phi\!\left(\frac{c - \bar{X}}{\sqrt{(n-1)/n}}\right)$
B.
C.
D.
Correct Answer: $\Phi\!\left(\frac{c - \bar{X}}{\sqrt{(n-1)/n}}\right)$
Explanation:
The Rao-Blackwell theorem states the MVUE is $E[T \mid \bar{X}]$. We need to compute $P(X_1 \le c \mid \bar{X})$. The conditional distribution of $X_1$ given $\bar{X} = \bar{x}$ is Normal with mean $\bar{x}$ and variance $1 - \frac{1}{n} = \frac{n-1}{n}$. So, $E[T \mid \bar{X}] = P(X_1 \le c \mid \bar{X}) = \Phi\!\left(\frac{c - \bar{X}}{\sqrt{(n-1)/n}}\right)$. This is the MVUE.
58. Let $X_1, \dots, X_n$ be i.i.d. random variables with $E(X_i) = \mu$, $Var(X_i) = \sigma^2$, and finite fourth central moment $\mu_4 = E\left[(X_i - \mu)^4\right]$. Let $S^2$ be the sample variance. What is the asymptotic variance of $\sqrt{n}\,(S^2 - \sigma^2)$?
consistent estimator
Hard
A. $\mu_4 - \sigma^4$
B.
C.
D.
Correct Answer: $\mu_4 - \sigma^4$
Explanation:
This is a classical result from asymptotic theory. The estimator $S^2$ is consistent for $\sigma^2$. By the Central Limit Theorem applied to sample moments, $\sqrt{n}\,(S^2 - \sigma^2)$ converges in distribution to a normal distribution with mean 0 and variance equal to $\mu_4 - \sigma^4$. For the special case of a Normal distribution, $\mu_4 = 3\sigma^4$, so the asymptotic variance is $2\sigma^4$.
59. Suppose $X_1, \dots, X_n$ are i.i.d. from a Gamma distribution with shape $\alpha$ and rate $\beta$, where both are unknown. The log-likelihood function is $\ell(\alpha, \beta) = n\alpha\ln\beta - n\ln\Gamma(\alpha) + (\alpha - 1)\sum_{i=1}^{n}\ln x_i - \beta\sum_{i=1}^{n} x_i$. Let $\bar{X}$ be the sample mean and $\overline{\ln X} = \frac{1}{n}\sum_{i=1}^{n}\ln X_i$ be the mean of the log-transformed data. What system of equations must the MLEs $(\hat{\alpha}, \hat{\beta})$ satisfy?
maximum likelihood estimation
Hard
A. The MLEs cannot be found as there is no closed-form solution.
B.
C. $\hat{\beta} = \frac{\hat{\alpha}}{\bar{X}}$ and $\ln\hat{\alpha} - \psi(\hat{\alpha}) = \ln\bar{X} - \overline{\ln X}$
D.
Correct Answer: $\hat{\beta} = \frac{\hat{\alpha}}{\bar{X}}$ and $\ln\hat{\alpha} - \psi(\hat{\alpha}) = \ln\bar{X} - \overline{\ln X}$
Explanation:
Taking the partial derivative of $\ell$ with respect to $\beta$ and setting to 0 gives: $\frac{n\alpha}{\beta} - \sum_{i=1}^{n} x_i = 0$, i.e., $\hat{\beta} = \frac{\hat{\alpha}}{\bar{X}}$. Taking the partial derivative with respect to $\alpha$ and setting to 0 gives: $n\ln\beta - n\psi(\alpha) + \sum_{i=1}^{n}\ln x_i = 0$, where $\psi$ is the digamma function. Substituting $\beta = \frac{\alpha}{\bar{X}}$ gives $n\ln\alpha - n\ln\bar{X} - n\psi(\alpha) + \sum_{i=1}^{n}\ln x_i = 0$. Dividing by $n$ and rearranging gives $\ln\alpha - \psi(\alpha) = \ln\bar{X} - \overline{\ln X}$, which simplifies to the second equation of the system.
60. Let $X_1, \dots, X_n$ be a sample from a distribution where an unbiased estimator $\hat{\theta}(X)$ exists and attains the Cramér-Rao Lower Bound. This implies that the score function can be written in what form, for some function $a(\theta)$?
efficient estimator
Hard
A. $\frac{\partial}{\partial\theta}\ln L(\theta; X) = a(\theta)\left[\hat{\theta}(X) - \theta\right]$
B.
C.
D.
Correct Answer: $\frac{\partial}{\partial\theta}\ln L(\theta; X) = a(\theta)\left[\hat{\theta}(X) - \theta\right]$
Explanation:
An unbiased estimator attains the CRLB if and only if the score function (the derivative of the log-likelihood) is a linear function of the estimator. Specifically, $\frac{\partial}{\partial\theta}\ln L(\theta; X) = a(\theta)\left[\hat{\theta}(X) - \theta\right]$ for some function $a(\theta)$ that does not depend on the data $X$. This condition is met by distributions in the one-parameter exponential family, where the sufficient statistic is often the MVUE.