Just as we estimate population parameters, we often want to estimate the parameters in our regression models. There are two basic procedures commonly used to do this. These sections are meant to be illustrative rather than a comprehensive treatment of the topic.
I based these illustrations heavily on material from The Ecological Detective: Confronting Models with Data. I highly encourage you to read this book and follow along with the pseudocode in it.
Sum of squares (Ordinary Least Squares [OLS])
The sum of squares is the simplest method of estimating parameters: we choose the parameter value that minimizes the squared deviations between the observations and the model predictions (a sketch of the approach follows the list below).
Several selling points:
- Simple, no assumptions about the uncertainty
- Long history of use in science
- Computational methods allow us to do remarkable calculations
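Below is a minimal sketch of the approach in Python (not the book's pseudocode); the observations and the range of candidate values are made up purely for illustration. We compute the sum of squared deviations between the data and each candidate value of a single parameter (here, a mean) and keep the value that minimizes it.

```python
import numpy as np

# Hypothetical observations -- made up purely for illustration.
y = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9])

# Candidate values for the single parameter (here, the mean of the data).
candidates = np.arange(3.0, 7.0, 0.01)

# Sum of squared deviations between the data and each candidate value.
ssq = np.array([np.sum((y - m) ** 2) for m in candidates])

# The best-fit estimate is the candidate with the smallest sum of squares.
best = candidates[np.argmin(ssq)]
print(f"Sum-of-squares estimate of the mean: {best:.2f}")
```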
What if we have two parameters?
We can use the same approach, evaluating the sum of squares over a grid of candidate values for both parameters and choosing the combination that minimizes it; a sketch follows below.
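Here is a similar sketch for two parameters, using a simple linear model \( y = a + bx \); again, the data and the parameter grids are made up for illustration. We evaluate the sum of squares for every combination of the two candidate values and keep the pair that minimizes it.

```python
import numpy as np

# Hypothetical x-y data -- made up purely for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

# Grids of candidate values for the intercept (a) and slope (b).
a_grid = np.arange(-2.0, 2.0, 0.05)
b_grid = np.arange(0.0, 4.0, 0.05)

# Evaluate the sum of squares for every combination of the two parameters
# and keep the pair with the smallest value.
best_a, best_b, best_ssq = None, None, np.inf
for a in a_grid:
    for b in b_grid:
        ssq = np.sum((y - (a + b * x)) ** 2)
        if ssq < best_ssq:
            best_a, best_b, best_ssq = a, b, ssq

print(f"Best intercept: {best_a:.2f}, best slope: {best_b:.2f}")
```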
Maximum likelihood
Given a set of observations from the population, we can find estimates of one (or many) parameters that maximize the likelihood of observing those data. The likelihood function gives the likelihood of the observed data for every possible value of the parameter we are trying to estimate.
This approach allows us to incorporate uncertainty through probability distributions. For example, if the deviations of the data from the mean follow a normal distribution, we can model that uncertainty with a normal likelihood.
This approach also allows us to develop confidence bounds around our parameters, which we could not do with the sum of squares approach.
Likelihood and Maximum Likelihood
The probability of observing the data \( Y_i \) given a particular parameter value \( p \) is:
\[ \Pr\{Y_i \mid p\} \]
The subscript on \( Y \) indicates that there are many possible outcomes but only one parameter \( p \). Thus we can ask, “Given these data, how likely are the possible hypotheses?” The likelihood is written as:
\[ \mathcal{L}\{p \mid Y\} \]
or, more generally,
\[ \mathcal{L}\{\text{hypothesis} \mid \text{data}\} \]
Notice the difference from the probability statement above. In the likelihood function we have one observation but numerous potential hypotheses or parameter values. The key difference between likelihood and probability is that with probability the hypothesis is known and the data are unknown, whereas with likelihood the data are known and the hypotheses are unknown.
Thus there are parameter values that are more likely than others.
The parameter value that makes the likelihood as large as possible is the maximum likelihood estimate (MLE). Likelihoods are generally very small values, so the log-likelihood is used instead, and to make the problem analogous to the sum of squares we work with its negative. The best-fit parameter value is therefore the one that minimizes the negative log-likelihood.
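To make the analogy with the sum of squares concrete: for normally distributed deviations with a known standard deviation \( \sigma \), and model predictions \( \hat{Y}_i \) for observations \( Y_i \), the negative log-likelihood is
\[ -\ln \mathcal{L} = \frac{n}{2}\ln(2\pi) + n\ln\sigma + \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2 \]
The first two terms do not depend on the parameters, so minimizing the negative log-likelihood is equivalent to minimizing the sum of squares \( \sum_i (Y_i - \hat{Y}_i)^2 \).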
As an example, consider the heights (in cm) of ten people. We can assume that height is normally distributed with a known standard deviation of \( \sigma = 10 \) cm.
The likelihood for the true mean \( m \) of the population, given a single observed height \( Y_i \), can be written as:
\[ \mathcal{L}\{m \mid Y_i\} = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(Y_i - m)^2}{2\sigma^2}\right) \]
and the negative log-likelihood summed over all ten heights is:
\[ -\ln \mathcal{L}\{m \mid Y_1, \ldots, Y_{10}\} = \sum_{i=1}^{10} \left[ \ln\!\left(\sigma\sqrt{2\pi}\right) + \frac{(Y_i - m)^2}{2\sigma^2} \right] \]
To find that value, we search for the \( m \) that minimizes the negative log-likelihood.
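Below is a minimal sketch of that search in Python, using a simple grid of candidate means. The ten heights are hypothetical values made up for illustration (the original data are not shown in this section), and \( \sigma = 10 \) cm is the standard deviation assumed above.

```python
import numpy as np

# Hypothetical heights (cm) of ten people -- made up for illustration;
# the original data are not reproduced here.
heights = np.array([162.0, 171.0, 158.0, 175.0, 180.0,
                    166.0, 173.0, 169.0, 155.0, 177.0])
sigma = 10.0  # assumed known standard deviation (cm)

def neg_log_likelihood(m, y, sigma):
    """Negative log-likelihood of a candidate mean m for normal data y."""
    n = len(y)
    return ((n / 2) * np.log(2 * np.pi) + n * np.log(sigma)
            + np.sum((y - m) ** 2) / (2 * sigma ** 2))

# Evaluate the negative log-likelihood over a grid of candidate means
# and take the value that minimizes it (the maximum likelihood estimate).
candidates = np.arange(140.0, 200.0, 0.1)
nll = np.array([neg_log_likelihood(m, heights, sigma) for m in candidates])
mle = candidates[np.argmin(nll)]
print(f"Maximum likelihood estimate of the mean height: {mle:.1f} cm")
```

Because the likelihood is normal with a known \( \sigma \), the MLE of the mean is simply the sample mean, so the grid search should land on (approximately) the sample mean of the made-up data, 168.6 cm.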
This is an overly simplified description of using likelihood to estimate parameters. In reality, it often requires calculus or iterative numerical algorithms to determine the value that maximizes the likelihood function.
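As one example of such an iterative approach (a sketch, not the book's method), the same negative log-likelihood can be handed to a general-purpose numerical optimizer instead of being evaluated on a grid. The data are the same made-up heights as above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical heights (cm) -- the same made-up values as in the sketch above.
heights = np.array([162.0, 171.0, 158.0, 175.0, 180.0,
                    166.0, 173.0, 169.0, 155.0, 177.0])
sigma = 10.0  # assumed known standard deviation (cm)
n = len(heights)

# Negative log-likelihood of a candidate mean m (same formula as above).
def nll(m):
    return ((n / 2) * np.log(2 * np.pi) + n * np.log(sigma)
            + np.sum((heights - m) ** 2) / (2 * sigma ** 2))

# A bounded scalar optimizer finds the minimum without evaluating a full grid.
result = minimize_scalar(nll, bounds=(140.0, 200.0), method="bounded")
print(f"MLE of the mean height: {result.x:.1f} cm")
```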