At this point in the course, you understand how to build, interpret, and select regression models. You can also verify that linear regression is suited to the dataset by checking the five assumptions from the last chapter.
However, you still don't know how the coefficients of a linear regression model are calculated, or why you need to satisfy these five assumptions. To understand the mechanisms behind regression modeling, you need a good grasp of the mathematical aspects of the method.
At the end of this chapter, you should have a good understanding of the following:
How the linear regression coefficients are calculated with both OLS and MLE.
The fundamental differences between the OLS and MLE methods.
Where the log-likelihood statistic comes from.
The concept of a loss function.
This chapter is more formal and mathematical than the previous ones. To make it more palatable, we will sacrifice some mathematical rigor. We will also mostly restrict ourselves to the univariate case.
There are several ways to calculate the optimal coefficients for a linear regression model. Here we focus on the ordinary least squares (OLS) method and the maximum likelihood estimation (MLE) method.
The ordinary least squares (OLS) method is tailored to the linear regression model. If the data is not too weird, it should always give a decent result. The OLS method does not make any assumptions about the probabilistic nature of the variables and is considered deterministic.
The maximum likelihood estimation (MLE) method is a more general approach, probabilistic by nature, that is not limited to linear regression models.
The cool thing is that under certain conditions, the MLE and OLS methods lead to the same solutions.
Let's briefly reiterate the context of univariate linear regression. We have an outcome variable y, a predictor x, and N samples; x and y are N-sized vectors. We assume that there's a linear relation between y and x. We can write:

y_i = a x_i + b + ε_i

Where ε is a random variable that introduces some noise into the dataset. ε is assumed to follow a normal distribution with mean 0.
The goal is to find the best coefficients â and b̂ such that the estimate ŷ_i = â x_i + b̂ is as close as possible to the real values y_i. In other words, to minimize the distance between y_i and ŷ_i for all samples i.
We can rewrite that last equation as the product of a vector of coefficients ω and a design matrix of predictors X:

ŷ = Xω

Where ω = [a, b]ᵀ and X is the N by 2 design matrix with one row [x_i, 1] per sample:

X =
| x_1  1 |
| x_2  1 |
| ...    |
| x_N  1 |
What's important here is that:
We want y and ŷ to be as close as possible.
ŷ can be written as ŷ = Xω, the product of a vector of coefficients ω and a matrix X of samples.
The OLS Method
When we say we want the estimated values ŷ to be as close as possible to the real values y, this implies a notion of distance between the samples. In math, a good, reliable distance is the quadratic (Euclidean) distance.
In two dimensions, if the points have the coordinates p = (p_1, p_2) and q = (q_1, q_2), then the distance is given by:

d(p, q) = √( (p_1 − q_1)² + (p_2 − q_2)² )
Our goal is to minimize the quadratic distance between all the real points y_i and the inferred points ŷ_i. And the distance between y and ŷ is:

d(y, ŷ) = Σᵢ (y_i − ŷ_i)²
To find the value of ω that minimizes that distance, take the derivative of Σᵢ (x_i ω − y_i)² with respect to ω, where x_i denotes the i-th row of the design matrix X, and solve the equation:

d/dω Σᵢ (x_i ω − y_i)² = 0
We'll skip the gory details of that derivation (it's online somewhere), and instead fast-forward to the solution:

ŵ = (XᵀX)⁻¹ Xᵀ y
And that's how you calculate the coefficients of the linear regression with OLS.
In the univariate case with N samples, the problem comes down to finding â and b̂ that best solve a set of N equations with two unknowns:

y_i = â x_i + b̂,   for i = 1, …, N

The solution to this set of N equations is given by:

â = Σᵢ (x_i − x̄)(y_i − ȳ) / Σᵢ (x_i − x̄)²
b̂ = ȳ − â x̄

Where x̄ and ȳ are respectively the means of x and y.
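The univariate solution is easy to verify numerically. Here is a minimal sketch in Python with NumPy, on a made-up synthetic dataset (the true coefficients a = 2 and b = 1 are arbitrary choices for the example):

```python
import numpy as np

# Hypothetical synthetic dataset: y = 2x + 1 plus Gaussian noise
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=200)

# Univariate OLS closed-form solution
x_bar, y_bar = x.mean(), y.mean()
a_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b_hat = y_bar - a_hat * x_bar

print(a_hat, b_hat)  # close to the true values 2 and 1
```

Because of the noise term, the estimates are close to, but not exactly, the true coefficients.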
Too Good to Be True?
Although it may seem that the OLS method will always output meaningful results, this is unfortunately not the case when dealing with real-world data. In the multivariate case, the operations involved in calculating this exact OLS solution can sometimes be problematic.
Calculating the matrix (XᵀX)⁻¹Xᵀ involves many operations, including inverting the matrix XᵀX. For a dataset with N samples and p predictors, simply computing XᵀX requires on the order of N·p² operations, and inverting the resulting p by p matrix another p³ or so. For big datasets with many samples and many predictors, these calculations take time and large amounts of memory. Even though these matrix operations are greatly optimized today, using the closed-form solution given by the OLS method is not always the most efficient way to obtain the coefficients.
The feasibility of calculating the closed-form solution ŵ = (XᵀX)⁻¹ Xᵀ y is what drives the five assumptions of linear regression.
For example, it can be shown that if there is perfect multicollinearity between the predictors (one predictor is a linear combination of the others), then the normal matrix XᵀX has no inverse, and the coefficients cannot be calculated by the OLS method.
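To make this concrete, here is a small NumPy sketch, on a made-up dataset, that computes the closed-form solution and then shows what happens to XᵀX under perfect multicollinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)

# N by 2 design matrix: one column for x, one column of ones for the intercept
X = np.column_stack([x, np.ones_like(x)])

# Closed-form OLS solution: w_hat = (X^T X)^-1 X^T y
w_hat = np.linalg.inv(X.T @ X) @ X.T @ y
print(w_hat)  # approximately [2, 1]

# Perfect multicollinearity: a predictor that is exactly twice the first one
X_bad = np.column_stack([x, 2.0 * x, np.ones_like(x)])
# X^T X is now rank-deficient, so it has no inverse
print(np.linalg.matrix_rank(X_bad.T @ X_bad))  # 2, not 3
```

A rank below the matrix dimension means the determinant is zero and the inversion in the closed-form solution fails.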
What's important to understand and remember about the OLS method:
The OLS method comes down to minimizing the squared residuals.
There is a closed-form solution.
Being able to calculate this exact solution is what drives the five assumptions of linear regression.
The exact solution is difficult to calculate for a large number of samples.
Let's turn our attention now to the second method for calculating the regression coefficients, the maximum likelihood estimation, or MLE.
Maximum Likelihood Estimation (MLE)
Let's look at the problem of calculating the best coefficients differently. This point of view works for linear regression or any other kind of model: the model has a set of parameters that best approximates a given dataset, and you want to find this set of optimal parameters.
You want to maximize the probability of observing your dataset, given the parameters of that model.
In other words, find the parameters of the model that make the current observations most likely. This is called maximizing the likelihood of the dataset.
How do we define the likelihood mathematically?
Consider p(x_i | ω), the probability of observing a sample x_i given the model and its parameters ω. If the model were the true model that generated the dataset, what would be the chance of seeing that particular sample?
Now consider this probability for each sample in the dataset, and multiply them all together:

L(ω) = Πᵢ p(x_i | ω) = p(x_1 | ω) · p(x_2 | ω) · … · p(x_N | ω)

L(ω) is called the likelihood, and this is what you want to maximize.
We suppose here that the samples are independent of each other: p(x_i, x_k) = p(x_i) p(x_k) for all (i, k) with i ≠ k. This independence is what allows us to write the probability of the whole dataset as a product.
Since the log function is monotonically increasing, maximizing L(ω) and maximizing log(L(ω)) are the same thing. And if you take the log of L(ω), the product transforms into a sum, and you have the mathematical definition of the log-likelihood:

log L(ω) = Σᵢ log p(x_i | ω)
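Beyond the math, the log also has a big practical benefit: a product of many small probabilities underflows to zero in floating point, while the sum of their logs stays perfectly manageable. A quick illustration (the probability values are made up):

```python
import numpy as np

# 1000 hypothetical per-sample probabilities, each equal to 0.01
p = np.full(1000, 0.01)

likelihood = np.prod(p)             # 0.01**1000 = 1e-2000: underflows to 0.0
log_likelihood = np.sum(np.log(p))  # 1000 * log(0.01), no underflow

print(likelihood)       # 0.0
print(log_likelihood)   # about -4605.17
```

This is why statistical software works with the log-likelihood rather than the likelihood itself.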
MLE for Linear Regression
In the case of linear regression, and when the residuals are normally distributed, the two methods OLS and MLE lead to the same optimal coefficients.
Math Demo (From Afar)
The demonstration goes like this:
If you assume that the residuals ŷ − y are normally distributed and centered, with variance σ², that is, N(0, σ²), then the likelihood function can be expressed as:

L(ω) = Πᵢ (1/√(2πσ²)) exp( −(y_i − ŷ_i)² / (2σ²) )
The log-likelihood function is then:

log L(ω) = −(N/2) log(2πσ²) − (1/(2σ²)) Σᵢ (y_i − ŷ_i)²

In other words:

log L(ω) = constant − (1/(2σ²)) d(y, ŷ)

And you find the same distance d(y, ŷ) between the real values y and the inferred values ŷ as in the OLS method!

Notice the negative sign in front of d(y, ŷ). This is why minimizing the quadratic distance in OLS is the same as maximizing the log-likelihood.
If the assumptions of the linear regression are met, notably the normally distributed property of the residuals, the OLS and the MLE methods both lead to the same optimal coefficients.
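Here's a quick numerical check of that equivalence, on a made-up dataset: compute the OLS coefficients in closed form, then verify that nudging them in any direction can only lower the Gaussian log-likelihood (the residual variance sigma2 is an assumed value for the illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=200)

# OLS closed-form coefficients
X = np.column_stack([x, np.ones_like(x)])
a_ols, b_ols = np.linalg.inv(X.T @ X) @ X.T @ y

sigma2 = 0.25  # assumed residual variance

def log_likelihood(a, b):
    """Gaussian log-likelihood of the data under y ~ N(a*x + b, sigma2)."""
    residuals = y - (a * x + b)
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum(residuals**2) / (2 * sigma2)

# Any nudge away from the OLS solution lowers the log-likelihood
ll_best = log_likelihood(a_ols, b_ols)
for da, db in [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]:
    assert log_likelihood(a_ols + da, b_ols + db) < ll_best

print("The OLS coefficients maximize the Gaussian log-likelihood")
```

Because the log-likelihood is a constant minus the sum of squared residuals (up to the 1/(2σ²) factor), the minimizer of one is exactly the maximizer of the other.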
To conclude this chapter, I'd like to talk about loss functions. The OLS and MLE methods each choose a particular function of the model parameters and the model's estimation error, and then minimize or maximize that function with respect to the parameters. We can generalize that idea by considering other functions that we could potentially minimize to find optimal model parameters.
We could consider the absolute value of the residuals |ŷ − y| instead of the quadratic distance.
We could also add a term α‖ω‖², which takes into account the magnitude of the coefficients, to the term being minimized: ‖ŷ − y‖² + α‖ω‖². This loss function is used in a method called ridge regression.
By changing the loss function, we can build different models that lead to different optimal solutions.
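Ridge regression, for instance, still has a closed-form solution: ŵ = (XᵀX + αI)⁻¹ Xᵀ y. Here is a minimal NumPy sketch on a made-up dataset (for simplicity this version also penalizes the intercept, which real implementations usually avoid):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)
X = np.column_stack([x, np.ones_like(x)])

alpha = 1.0  # regularization strength (hypothetical value)

# Ridge closed-form solution: the alpha * I term shrinks the coefficients
# and keeps X^T X + alpha * I invertible even under perfect multicollinearity
w_ridge = np.linalg.inv(X.T @ X + alpha * np.eye(X.shape[1])) @ X.T @ y
w_ols = np.linalg.inv(X.T @ X) @ X.T @ y

print(w_ols, w_ridge)  # the ridge solution has a smaller norm than the OLS one
```

Notice that the only change from OLS is the α·I term added to XᵀX before inverting, which is exactly the extra α‖ω‖² term in the loss.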
In fact, as you will see in the chapter on logistic regression, the loss function used in binary classification (when y can only take the two values 0 or 1) is the log loss, also called the binary cross-entropy:

−Σᵢ [ y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i) ]

Where ŷ_i is the predicted probability that y_i = 1.
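As a preview, here is the binary cross-entropy (log loss) computed in Python; the helper function name and the toy predictions are made up for the example:

```python
import numpy as np

def log_loss(y_true, y_prob):
    """Binary cross-entropy: -sum(y*log(p) + (1 - y)*log(1 - p))."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    return -np.sum(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

# Confident, correct predictions yield a small loss...
print(log_loss([1, 0, 1], [0.9, 0.1, 0.8]))  # about 0.43
# ...while confident, wrong predictions are heavily penalized
print(log_loss([1, 0, 1], [0.1, 0.9, 0.2]))  # about 6.21
```

The asymmetry between the two calls is the whole point of the loss: being confidently wrong costs far more than being cautiously right.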
In this chapter, we focused on the mathematical theory that drives the linear regression method.
The key takeaways are:
The math is what dictates the five assumptions of linear regression.
The ordinary least squares method minimizes the square of the residuals.
The OLS method is computationally costly in the presence of large datasets.
The maximum likelihood estimation method maximizes the probability of observing the dataset given a model and its parameters.
In linear regression, OLS and MLE lead to the same optimal set of coefficients.
Changing the loss functions leads to other optimal solutions.
This concludes Part 2 of the course! Amazing work! You should now have a good grasp on how to apply linear regression to data, how to evaluate the quality of a linear regression model, the conditions that need to be met, and what calculation drives the linear regression method.
In Part 3, we're going to extend linear regression, first to handle categorical variables, both as an outcome and as predictors, and then to effectively deal with nonlinear datasets.
Go One Step Further: Math Stuff That Did Not Make the Cut
I won't lie, math has a bad rap for being obtuse. But I like the elegance, and the feeling of magic, behind equations and logical reasoning. That said, here is some math stuff that I find very cool, but that is not necessary to understand OLS or MLE.
A distance is simply the length between two points, cities, places, etc. But in math, we can play with a whole slew of distances.
The absolute value distance (L1 norm) between two points x and y is defined as:

d₁(x, y) = Σᵢ |x_i − y_i|

It is also known as the Manhattan distance, aka the taxi distance.

The quadratic (L2) norm is defined by:

d₂(x, y) = √( Σᵢ (x_i − y_i)² )

You can even define an ∞ distance with:

d_∞(x, y) = maxᵢ |x_i − y_i|
So much fun!
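If you want to play with them, the three distances are one-liners in NumPy (the two points are arbitrary examples):

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 0.0, 3.0])

d1 = np.sum(np.abs(p - q))          # L1 (Manhattan) distance: 3 + 2 + 0 = 5
d2 = np.sqrt(np.sum((p - q) ** 2))  # L2 (quadratic) distance: sqrt(9 + 4)
d_inf = np.max(np.abs(p - q))       # infinity distance: max(3, 2, 0) = 3

print(d1, d2, d_inf)
```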
There are several equivalent conditions that are necessary and sufficient for a matrix A to be invertible. For instance:
Its determinant det(A)≠0
Its columns are not linearly dependent.
The only solution to Ax=0 is x=0
In a way, this is similar to the scalar case, where the inverse of 0 cannot be calculated.