10.1: Introduction
Recall that statistical estimation is the process by which we infer parameters of the density characterizing the behavior of a random experiment from a sample: a typically large, but in any event finite, number of observable outcomes of the random experiment. Specifically, a sample is a set of \(n\) independent and identically distributed (i.i.d.) random variables; we recall that a set of random variables \[X_{1}, X_{2}, \ldots, X_{n}\] is i.i.d. if \[f_{X_{1}, \ldots, X_{n}}\left(x_{1}, \ldots, x_{n}\right)=f_{X}\left(x_{1}\right) \cdots f_{X}\left(x_{n}\right),\] where \(f_{X}\) is the common probability density of \(X_{1}, \ldots, X_{n}\). We also define a statistic as a function of a sample that returns a random variable representing some attribute of the sample; the term can also refer to the realized value of that variable. Often a statistic serves to estimate a parameter. In this chapter, we focus on the statistical estimation of parameters associated with arguably the simplest distribution: Bernoulli random variables.
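To make these definitions concrete, the following sketch simulates a sample of i.i.d. Bernoulli random variables and computes a statistic (the sample mean) that serves to estimate the distribution's parameter. The function names `bernoulli_sample` and `sample_mean`, and the particular values of \(p\) and \(n\), are illustrative choices, not part of the text.

```python
import random

def bernoulli_sample(p, n, seed=0):
    """Draw n i.i.d. Bernoulli(p) realizations: each draw is 1 with
    probability p and 0 with probability 1 - p, independently."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sample_mean(xs):
    """A statistic: a function of the sample. Here, the sample mean,
    which estimates the Bernoulli parameter p."""
    return sum(xs) / len(xs)

# An illustrative sample of n = 10000 observations with p = 0.3.
sample = bernoulli_sample(p=0.3, n=10_000)
p_hat = sample_mean(sample)
```

Because the observations are independent and share the common density \(f_X\), the statistic `p_hat` concentrates near the true parameter as \(n\) grows, which is the sense in which a statistic "estimates" a parameter.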