Unit 5 - Practice Quiz

MTH302 60 Questions

1 An estimator is said to be an unbiased estimator of a parameter if:

unbiased estimator Easy
A. Its expected value equals the true value of the parameter.
B. The sample size is large.
C.
D.

2 If the expected value of an estimator is not equal to the true parameter value, the difference is called the:

unbiased estimator Easy
A. Efficiency
B. Variance
C. Bias
D. Standard Error

3 For a random sample from a population with mean μ, the sample mean X̄ is:

unbiased estimator Easy
A. A biased estimator of μ
B. An unbiased estimator of μ
C. A consistent estimator of the sample size
D. Always equal to μ
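
The unbiasedness of the sample mean can be illustrated with a quick simulation. This is an illustrative sketch, not part of the quiz; the population mean, spread, and sample size are assumed values:

```python
import random

random.seed(0)

mu = 5.0       # assumed true population mean (illustrative)
n = 30         # sample size
reps = 20000   # number of repeated samples

# Draw many samples and accumulate the sample means.
total = 0.0
for _ in range(reps):
    sample = [random.gauss(mu, 2.0) for _ in range(n)]
    total += sum(sample) / n

# Averaging the sample means over many repetitions approximates E[X̄],
# which matches mu: the sample mean is unbiased for mu.
avg_of_means = total / reps
print(round(avg_of_means, 1))
```

Unbiasedness is exactly this "average behavior over many repeated samples": any single sample mean misses μ, but the misses cancel out on average.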

4 The concept of unbiasedness focuses on an estimator's:

unbiased estimator Easy
A. Average behavior over many repeated samples
B. Behavior as the sample size grows infinitely large
C. Accuracy in a single sample
D. Variance compared to other estimators

5 If an estimator θ̂ for a parameter θ has expected value E(θ̂) = θ, what can be said about this estimator?

unbiased estimator Easy
A. It is unbiased.
B. It is positively biased.
C. It is efficient.
D. It is negatively biased.

6 What is the defining characteristic of a consistent estimator?

consistent estimator Easy
A. It converges to the true parameter value as the sample size increases.
B. Its variance is the smallest possible.
C. Its expected value equals the true parameter.
D. It is easy to calculate.

7 Consistency is a property that describes an estimator's behavior:

consistent estimator Easy
A. in the limit as the sample size approaches infinity.
B. for small sample sizes.
C. for a single, specific sample.
D. only when it is also unbiased.

8 If an estimator is consistent, what generally happens to its variance as the sample size increases?

consistent estimator Easy
A. It approaches zero.
B. It stays the same.
C. It increases.
D. It becomes equal to the parameter.

9 The Law of Large Numbers provides the theoretical basis for why the sample mean is a:

consistent estimator Easy
A. efficient estimator.
B. biased estimator.
C. consistent estimator.
D. maximum likelihood estimator.
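
The Law of Large Numbers can be seen directly by simulation: the estimation error of the sample mean shrinks as the sample size grows. A sketch with an assumed exponential population of mean 10:

```python
import random

random.seed(1)

mu = 10.0  # assumed true mean of an exponential population (illustrative)

# Absolute error |X̄ - mu| for increasingly large samples: it shrinks
# toward 0, which is consistency (Law of Large Numbers) in action.
errors = {}
for n in [100, 10_000, 1_000_000]:
    xs = [random.expovariate(1 / mu) for _ in range(n)]
    errors[n] = abs(sum(xs) / n - mu)

print(errors)
```

The error at n = 1,000,000 is typically orders of magnitude smaller than at n = 100, matching the √n rate at which Var(X̄) = σ²/n shrinks.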

10 Which of the following is the most important factor for an estimator to be consistent?

consistent estimator Easy
A. The estimator's formula.
B. The sample size.
C. The value of the true parameter.
D. The population distribution.

11 When comparing two unbiased estimators for the same parameter, the more efficient estimator is the one with the:

efficient estimator Easy
A. larger variance.
B. larger bias.
C. simpler formula.
D. smaller variance.

12 The concept of efficiency is primarily concerned with an estimator's:

efficient estimator Easy
A. computational complexity.
B. bias.
C. consistency.
D. variance.

13 What does MVUE stand for in point estimation?

efficient estimator Easy
A. Most Valid Unbiased Estimator
B. Mean Value Unbiased Estimator
C. Minimum Variance Unbiased Estimator
D. Maximum Value Unbiased Estimator

14 If Estimator A has a variance of and Estimator B has a variance of , and both are unbiased, which is more efficient?

efficient estimator Easy
A. Cannot be determined.
B. They are equally efficient.
C. Estimator A
D. Estimator B

15 A 'good' point estimator is often considered to be one that is:

efficient estimator Easy
A. biased and has low variance.
B. biased and has high variance.
C. unbiased and has low variance.
D. unbiased and has high variance.

16 The principle of maximum likelihood estimation is to choose the parameter value that:

maximum likelihood estimation Easy
A. makes the parameter equal to the sample mean.
B. maximizes the probability (or likelihood) of the observed data.
C. minimizes the probability of the observed data.
D. has the smallest possible variance.

17 In MLE, the likelihood function is treated as a function of:

maximum likelihood estimation Easy
A. the sample size n.
B. the sample data x, for a fixed parameter θ.
C. a random variable.
D. the parameter θ, for the fixed observed data x.

18 Why is it often easier to work with the log-likelihood function instead of the likelihood function itself?

maximum likelihood estimation Easy
A. The log-likelihood is always positive.
B. The likelihood function cannot be maximized.
C. The log-likelihood function does not require differentiation.
D. The logarithm is a monotonic transformation, so the maximum occurs at the same parameter value.

19 The first step in finding the Maximum Likelihood Estimate (MLE) is typically to:

maximum likelihood estimation Easy
A. write down the likelihood function for the sample.
B. calculate the sample variance.
C. collect a second sample for validation.
D. assume the parameter is zero.

20 A common method to find the maximum of the likelihood function is to:

maximum likelihood estimation Easy
A. find the average of the observed data points.
B. take the integral of the function and set it to one.
C. take the derivative with respect to the parameter and set it to zero.
D. use a value from a pre-existing table.

21 Let X₁, …, Xₙ be a random sample from a population with mean μ and variance σ². Let S₁² = (1/n) Σᵢ (Xᵢ − X̄)² and S₂² = (1/(n − 1)) Σᵢ (Xᵢ − X̄)². Which of the following statements is true regarding these estimators for the population variance σ²?

unbiased estimator Medium
A. S₁² is an unbiased estimator of σ².
B. Both S₁² and S₂² are biased estimators of σ².
C. S₂² is an unbiased estimator of σ².
D. Both S₁² and S₂² are unbiased estimators of σ².
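
A classic example of bias: the variance estimator that divides by n underestimates σ² on average, while dividing by n − 1 corrects it. A simulation sketch with assumed population values (σ² = 4, small samples of n = 5):

```python
import random

random.seed(2)

sigma2 = 4.0     # assumed true population variance (illustrative)
n, reps = 5, 40000

avg_div_n = 0.0    # running average of the divide-by-n estimator
avg_div_n1 = 0.0   # running average of the divide-by-(n-1) estimator
for _ in range(reps):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    avg_div_n += ss / n
    avg_div_n1 += ss / (n - 1)
avg_div_n /= reps
avg_div_n1 /= reps

# Theory: E[ss/n] = (n-1)/n * sigma^2 = 3.2, while E[ss/(n-1)] = sigma^2 = 4.0.
print(round(avg_div_n, 1), round(avg_div_n1, 1))
```

The divide-by-n version lands near 3.2 rather than 4.0, visibly demonstrating its downward bias of factor (n − 1)/n.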

22 Let X₁, …, Xₙ be a random sample from a Uniform distribution on the interval (0, θ). The estimator θ̂ = 2X̄ is proposed for θ. Is this estimator unbiased?

unbiased estimator Medium
A. No, because the maximum likelihood estimator is X(n) = max(X₁, …, Xₙ).
B. No, because its variance is too large.
C. Yes, but only if n is large.
D. Yes, because E(θ̂) = 2E(X̄) = 2(θ/2) = θ.

23 An estimator for a parameter has an expected value . What is the bias of this estimator?

unbiased estimator Medium
A.
B.
C. The estimator is unbiased.
D.

24 Let be a random sample from a population. Two estimators are proposed for the population mean : and . Which statement is correct?

unbiased estimator Medium
A. Both and are unbiased estimators of .
B. Only is an unbiased estimator of .
C. Only is an unbiased estimator of .
D. Neither is an unbiased estimator of .

25 Let X be a single observation from a Bernoulli distribution with parameter p. An estimator for p is proposed as . What is the bias of this estimator?

unbiased estimator Medium
A.
B. 0
C.
D.

26 An estimator θ̂ₙ for a parameter θ is consistent if which of the following conditions holds as the sample size n → ∞?

consistent estimator Medium
A. The estimator is unbiased for any sample size n.
B. The bias approaches 0, but the variance can be non-zero.
C. The bias and the variance both approach 0.
D. The variance approaches 0, but the estimator can remain biased.

27 Consider the estimator σ̂² = (1/n) Σᵢ (Xᵢ − X̄)² for the population variance σ². Which statement best describes this estimator?

consistent estimator Medium
A. It is biased and not consistent.
B. It is unbiased and consistent.
C. It is biased but consistent.
D. It is unbiased but not consistent.

28 The Weak Law of Large Numbers states that the sample mean X̄ converges in probability to the population mean μ. This directly implies that X̄ is a(n) ____ estimator for μ.

consistent estimator Medium
A. sufficient
B. consistent
C. efficient
D. unbiased

29 Let θ̂ₙ = X̄ + 1/n be an estimator for the population mean μ, where X̄ is the sample mean from a population with finite variance. Is θ̂ₙ a consistent estimator for μ?

consistent estimator Medium
A. No, because its variance does not tend to 0.
B. Yes, but only if the population is normally distributed.
C. No, because it is biased for any finite .
D. Yes, because its bias and variance both tend to 0.

30 If an estimator is unbiased, is it necessarily consistent?

consistent estimator Medium
A. No, an unbiased estimator can never be consistent.
B. Yes, all unbiased estimators are consistent.
C. No, an unbiased estimator also needs its variance to approach 0 as n → ∞ to be consistent.
D. Yes, provided the sample size is greater than 30.

31 For a random sample from a Normal distribution N(μ, σ²), both the sample mean X̄ and the sample median are unbiased estimators of μ. Why is the sample mean generally preferred?

efficient estimator Medium
A. The sample mean has a smaller variance.
B. The sample median is only unbiased for large samples.
C. The sample mean is easier to calculate.
D. The sample median is not a consistent estimator.
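
The efficiency comparison between the sample mean and the sample median for normal data can be checked by simulation. A sketch with a standard normal population (assumed values): theory says Var(mean) ≈ σ²/n while Var(median) ≈ πσ²/(2n).

```python
import random
import statistics

random.seed(4)

n, reps = 25, 20000
means, medians = [], []
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    means.append(sum(xs) / n)
    medians.append(statistics.median(xs))

# Both estimators are unbiased for mu = 0 here, but the mean's
# sampling variance is smaller, so it is the more efficient one.
var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
print(var_mean < var_median)  # True
```

With n = 25 the simulated variances land near 0.04 for the mean versus roughly π/50 ≈ 0.063 for the median, matching the asymptotic ratio 2/π ≈ 0.64.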

32 Let and be two unbiased estimators for a parameter . If and , what is the relative efficiency of with respect to ?

efficient estimator Medium
A. 2
B. 0.833
C.
D. 1.2

33 What does it mean if an unbiased estimator's variance is equal to the Cramér-Rao Lower Bound (CRLB)?

efficient estimator Medium
A. The estimator is the maximum likelihood estimator.
B. The estimator is biased.
C. The estimator is the most efficient unbiased estimator possible.
D. The estimator is consistent.

34 For a sample from a Uniform distribution on [0, θ], two unbiased estimators for θ are θ̂₁ = 2X̄ and θ̂₂ = ((n + 1)/n) · X(n), where X(n) is the maximum value in the sample. It is known that Var(θ̂₁) = θ²/(3n) and Var(θ̂₂) = θ²/(n(n + 2)). Which estimator is more efficient for θ?

efficient estimator Medium
A. θ̂₁
B. They are equally efficient.
C. θ̂₂
D. It depends on the value of θ.

35 Why is efficiency (minimum variance) a desirable property for an estimator, in addition to being unbiased?

efficient estimator Medium
A. A lower variance is only important for small sample sizes.
B. A lower variance implies that the estimator's values are more concentrated around the true parameter.
C. A lower variance guarantees the estimator is consistent.
D. A lower variance makes the estimator easier to compute.

36 A coin is tossed 10 times, resulting in 7 heads. Let p be the probability of getting a head. What is the maximum likelihood estimate (MLE) of p?

maximum likelihood estimation Medium
A. 0.5
B. 0.3
C. 7
D. 0.7
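
The answer x/n = 7/10 can be verified numerically: the binomial log-likelihood ℓ(p) = 7 log p + 3 log(1 − p) is evaluated over a fine grid of candidate values and peaks at p = 0.7 (a grid-search sketch rather than the calculus route):

```python
import math

n, x = 10, 7  # 10 tosses, 7 heads (from the question)

def log_likelihood(p):
    # Binomial log-likelihood up to an additive constant
    return x * math.log(p) + (n - x) * math.log(1 - p)

# Evaluate on a fine grid of candidate values for p in (0, 1).
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=log_likelihood)
print(p_hat)  # 0.7
```

Because the log-likelihood is strictly concave in p, the grid maximum sits exactly on the analytic solution x/n.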

37 Let X₁, …, Xₙ be a random sample from an Exponential distribution with PDF f(x; λ) = λe^(−λx) for x > 0. What is the maximum likelihood estimator (MLE) for λ?

maximum likelihood estimation Medium
A.
B.
C.
D.

38 Suppose the MLE for the variance σ² of a normal distribution is found to be σ̂² = 25. According to the invariance property of MLEs, what is the MLE for the standard deviation σ?

maximum likelihood estimation Medium
A.
B.
C. 5
D. 25
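
The invariance property at work, taking σ̂² = 25 as in the answer options: if σ̂² is the MLE of σ², the MLE of any function g(σ²), here the square root, is simply g applied to σ̂².

```python
import math

sigma2_hat = 25.0                  # MLE of the variance, as in the options
sigma_hat = math.sqrt(sigma2_hat)  # invariance: MLE of sigma = sqrt of MLE of sigma^2
print(sigma_hat)  # 5.0
```

No re-maximization is needed; the transformed estimate inherits the maximizing property through the monotone map.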

39 A sample of size n is drawn from a Poisson distribution with mean λ. The observed values are x₁, x₂, …, xₙ. What is the maximum likelihood estimator (MLE) for λ?

maximum likelihood estimation Medium
A. The sample median
B.
C. The sample mean, X̄
D. The sample variance, S²

40 Which of the following best describes the principle of maximum likelihood estimation?

maximum likelihood estimation Medium
A. It chooses the parameter value that results in an unbiased estimator.
B. It chooses the parameter value based on a prior belief about the parameter.
C. It chooses the parameter value that minimizes the variance of the estimator.
D. It chooses the parameter value that makes the observed data most probable.

41 Let X₁, …, Xₙ be a random sample from a Uniform distribution on the interval [0, θ]. What is the Maximum Likelihood Estimator (MLE) for θ?

maximum likelihood estimation Hard
A. 2X̄
B. The sample median
C. Any value in the interval [X(n), ∞)
D. X(n) = max(X₁, …, Xₙ)

42 Let X₁, …, Xₙ be i.i.d. from a distribution with PDF f(x; θ) = e^(−(x − θ)) for x ≥ θ. The Cramér-Rao Lower Bound (CRLB) for the variance of an unbiased estimator of θ is 1/n. Consider the estimator θ̂ = X(1) − 1/n, where X(1) is the minimum order statistic. Which statement is true?

efficient estimator Hard
A. θ̂ cannot be the MVUE because the regularity conditions for the CRLB do not hold.
B. θ̂ is an unbiased estimator whose variance meets the CRLB.
C. θ̂ is a biased estimator, so the CRLB does not apply.
D. θ̂ is the MVUE because its variance is less than the CRLB.

43 Let X₁, …, Xₙ be a random sample from a Poisson(λ) distribution. We want to find an unbiased estimator for . Which of the following estimators is unbiased for ?

unbiased estimator Hard
A.
B.
C.
D.

44 Let θ̂ₙ be an estimator for a parameter θ. Which of the following conditions is sufficient for θ̂ₙ to be a consistent estimator, but is not a necessary condition?

consistent estimator Hard
A. is the Maximum Likelihood Estimator.
B. for all
C. E(θ̂ₙ) → θ and Var(θ̂ₙ) → 0 as n → ∞
D. is an unbiased estimator.

45 Let X₁, …, Xₙ be an i.i.d. sample from a Laplace distribution with PDF f(x; θ) = (1/2) e^(−|x − θ|). What is the maximum likelihood estimator (MLE) for θ?

maximum likelihood estimation Hard
A. The sample mean, X̄
B. The smallest order statistic,
C. The sample median
D. The solution to

46 For a random sample from N(μ, σ²) with σ² known, the Cramér-Rao Lower Bound for the variance of any unbiased estimator of μ is σ²/n. The sample mean X̄ is an unbiased estimator for μ. What is the efficiency of X̄ relative to the CRLB?

efficient estimator Hard
A. 1
B. It depends on the value of σ².
C.
D. 3

47 Let X₁, …, Xₙ be a random sample from a Uniform(0, θ) distribution, with θ > 0. Let X(n) = max(X₁, …, Xₙ) be the maximum order statistic. We know that X(n) is a biased estimator for θ. Which of the following estimators for θ is unbiased?

unbiased estimator Hard
A.
B.
C.
D.

48 Let X be a single observation from a binomial distribution, X ~ Bin(n, p), where n is known. Using the invariance property of MLEs, what is the MLE for the odds, p/(1 − p)?

maximum likelihood estimation Hard
A.
B.
C.
D.

49 Let X₁, …, Xₙ be i.i.d. from a Cauchy distribution with location parameter θ and scale 1. The PDF is f(x; θ) = 1/(π[1 + (x − θ)²]). Which statement about the sample mean X̄ as an estimator for θ is correct?

consistent estimator Hard
A. X̄ is asymptotically normal, which implies consistency.
B. X̄ is inconsistent because the distribution of X̄ is the same as the distribution of X₁.
C. X̄ is consistent due to the Law of Large Numbers.
D. X̄ is consistent because it is an unbiased estimator.
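
The failure of consistency for the Cauchy sample mean can be seen in a simulation sketch (location parameter assumed to be 0, Cauchy draws generated by the inverse-CDF method). For contrast, the same experiment with normal draws settles down:

```python
import math
import random

random.seed(3)

theta = 0.0  # assumed location parameter (illustrative)

# Running sample means of standard Cauchy draws. Unlike the normal case,
# they do not settle down as n grows: the sample mean of Cauchy data has
# the same distribution as a single observation.
total, n = 0.0, 0
running_means = []
for target in [100, 1_000, 10_000, 100_000]:
    while n < target:
        # inverse-CDF method: tan(pi*(U - 1/2)) is standard Cauchy
        total += theta + math.tan(math.pi * (random.random() - 0.5))
        n += 1
    running_means.append(total / n)
print([round(m, 2) for m in running_means])

# Contrast: with normal draws the running mean converges to theta.
total_n = sum(random.gauss(theta, 1.0) for _ in range(100_000))
print(round(abs(total_n / 100_000 - theta), 3))
```

The Cauchy running means keep jumping (a single extreme draw can drag the whole average), while the normal running mean sits within a few thousandths of θ by n = 100,000.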

50 Let X₁, …, Xₙ be a random sample from a Bernoulli(p) distribution. The variance of the sample mean X̄ is p(1 − p)/n. The Cramér-Rao Lower Bound (CRLB) for an unbiased estimator of p is also p(1 − p)/n. Consider estimating . What is the CRLB for an unbiased estimator of ?

efficient estimator Hard
A.
B.
C.
D.

51 Let θ̂₁ and θ̂₂ be two independent, unbiased estimators for a parameter θ, with Var(θ̂₁) = σ₁² and Var(θ̂₂) = σ₂². Consider a combined estimator θ̂ = aθ̂₁ + (1 − a)θ̂₂. What value of a produces the Minimum Variance Unbiased Estimator (MVUE) in this class of linear estimators?

unbiased estimator Hard
A.
B.
C.
D.

52 Let X₁, …, Xₙ be a random sample from a distribution with PDF f(x; θ) = θx^(θ−1) for 0 < x < 1 and θ > 0. What is the Maximum Likelihood Estimator (MLE) for θ?

maximum likelihood estimation Hard
A.
B.
C.
D.

53 Suppose X follows a Geometric distribution with probability of success p, so P(X = k) = (1 − p)^(k−1) p for k = 1, 2, 3, …. We want an unbiased estimator for . Which of the following estimators based on a single observation is unbiased for ?

unbiased estimator Hard
A.
B.
C.
D. No simple polynomial in X can be an unbiased estimator for .

54 Let X₁, …, Xₙ be a sample from N(μ, σ²) where both parameters are unknown. The Fisher Information is a 2×2 matrix. The Cramér-Rao Lower Bound for the variance of an unbiased estimator of σ² is the (2, 2) entry of the inverse Fisher Information matrix. What is this value?

efficient estimator Hard
A.
B.
C.
D.

55 Let X₁, …, Xₙ be i.i.d. from Uniform(θ − 1, θ + 1). Consider two estimators for θ: θ̂₁ = X̄ and θ̂₂ = (X(1) + X(n))/2 (the sample midrange). Which of the following statements is true regarding their consistency?

consistent estimator Hard
A. Neither estimator is consistent.
B. Both are consistent, but θ̂₂ converges faster.
C. Only θ̂₁ is consistent.
D. Only θ̂₂ is consistent.

56 A device has an exponential lifetime with parameter λ. The test is censored at time c. For n devices, we observe r failure times t₁, …, t_r (all ≤ c) and n − r devices that survived past time c. What is the MLE for λ?

maximum likelihood estimation Hard
A.
B.
C.
D.

57 Let X₁, …, Xₙ be i.i.d. N(θ, 1). We want to estimate p = P(X₁ ≤ c) = Φ(c − θ), where c is a known constant and Φ is the standard normal CDF. Using the Rao-Blackwell theorem with the sufficient statistic X̄, find the MVUE of p. Let δ = 1{X₁ ≤ c} be an initial unbiased estimator.

unbiased estimator Hard
A.
B.
C.
D.

58 Let X₁, …, Xₙ be i.i.d. random variables with E(Xᵢ) = μ, Var(Xᵢ) = σ², and finite fourth central moment μ₄ = E[(Xᵢ − μ)⁴]. Let S² be the sample variance. What is the asymptotic variance of √n (S² − σ²)?

consistent estimator Hard
A.
B.
C.
D.

59 Suppose X₁, …, Xₙ are i.i.d. from a Gamma distribution with shape α and rate β, where both are unknown. The log-likelihood function is ℓ(α, β) = nα ln β − n ln Γ(α) + (α − 1) Σᵢ ln xᵢ − β Σᵢ xᵢ. Let x̄ be the sample mean and l̄ = (1/n) Σᵢ ln xᵢ be the mean of the log-transformed data. What system of equations must the MLEs satisfy?

maximum likelihood estimation Hard
A. The MLEs cannot be found as there is no closed-form solution.
B. and
C. and
D. and

60 Let X₁, …, Xₙ be a sample from a distribution where an unbiased estimator θ̂ exists and attains the Cramér-Rao Lower Bound. This implies that the score function ∂ ln L/∂θ can be written in what form, for some function a(θ)?

efficient estimator Hard
A.
B.
C.
D.