Linear Regression

From Deep Learning Course Wiki



Linear Regression is a supervised machine learning algorithm where the predicted output is continuous and has a constant slope. It is used to predict values within a continuous range (e.g. sales, price, height) rather than trying to classify them into categories (e.g. cat, dog, chipmunk).

Simple Linear Regression
Simple linear regression uses traditional slope-intercept form, where m and b are the variables our algorithm will try to "learn" to produce the most accurate predictions. x represents our input data and y represents our prediction.

y = mx + b

Multiple Linear Regression
A more complex, multi-variable linear equation might look like this, where W's represent the coefficients, or weights, our model will try to learn.

f(x,y,z) = W_1 x + W_2 y + W_3 z

The variables x, y, z represent the attributes, or distinct pieces of information, we have about each observation. For sales predictions, these attributes might include a company's advertising spend on radio, TV, and newspapers.

Sales = W_1 Radio + W_2 TV + W_3 News


  • Pros: fast runtimes, easy to understand results
  • Cons: can be less accurate than other models because data relationships in the real world are often non-linear

Simple Linear Regression

Let’s say we are given a dataset with the following columns (features): how much a company spends on Radio advertising each year, and its annual Sales in terms of units sold. We are trying to develop an equation that will let us predict units sold based on how much a company spends on radio advertising. The rows (observations) represent companies.

Company    Radio ($)   Sales (units)
Amazon     37.8        22.1
Google     39.3        10.4
Facebook   45.9        18.3
Apple      41.3        18.5

Predict Function

Our prediction function outputs an estimate of sales given a company's radio advertising spend and our current values for Weight and Bias.


Sales = Weight*Radio + Bias
  • Weight is the coefficient for the Radio independent variable. In machine learning we call coefficients weights.
  • Bias is the intercept, where our line crosses the y-axis. In machine learning we call intercepts bias. Bias offsets all predictions that we make.
  • Radio is our independent variable. In machine learning we call these variables features.

Our algorithm will try to "learn" the correct values for Weight and Bias. By the end of our training, our equation will approximate the line of best fit.

[Image: Regress line 4.png]

Here is our equation rewritten in python:

def predict_sales(radio, weight, bias):
    return weight*radio + bias
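For example, plugging in Amazon's radio spend along with the weight (.46) and bias (.025) the training run later in this article arrives at (the function is repeated here so the snippet runs on its own):

```python
# predict_sales from above, repeated so this snippet is standalone
def predict_sales(radio, weight, bias):
    return weight*radio + bias

# Amazon spends 37.8 on radio; weight/bias values are taken from
# the training results shown later in this article
estimate = predict_sales(37.8, 0.46, 0.025)
print(estimate)  # ~17.4 units
```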

Cost Function

The predict function is nice, but on its own it won't help us train the model. For that we need a cost function, so we can start optimizing our weights. A cost function is a wrapper around our model function that tells us "how good" our model is at making predictions for a given set of parameters. The cost function has its own curve and its own derivatives. The slope of this curve tells us the direction in which we should update our weights to make the model more accurate!

Let's use Mean Squared Error as our cost function. MSE measures the average squared difference between an observation's actual and predicted values. The output is a single number representing the cost, or score, associated with our current set of weights. Our goal is to minimize MSE to improve the accuracy of our model.

For our simple linear equation:

y = mx + b

MSE can be calculated with the formula:

MSE =  \frac{1}{N} \sum_{i=1}^{N} (y_i - (mx_i + b))^2
  • N is the total number of observations (data points)
  • \frac{1}{N} \sum_{i=1}^{N} is the mean
  • y_i is the actual value of an observation and (mx_i + b) is our prediction

Calculating the mean squared error in Python:

# MSE for sales = weight*radio + bias
def cost_function(radio, sales, weight, bias):
    companies = len(radio)
    total_error = 0.0
    for i in range(companies):
        total_error += (sales[i] - (weight*radio[i] + bias))**2
    return total_error / companies
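As a quick sanity check, we can evaluate the cost at weight=0 and bias=0 on the four companies from the table above (the function is repeated so the snippet runs on its own):

```python
# cost_function from above, repeated so this snippet is standalone
def cost_function(radio, sales, weight, bias):
    companies = len(radio)
    total_error = 0.0
    for i in range(companies):
        total_error += (sales[i] - (weight*radio[i] + bias))**2
    return total_error / companies

radio = [37.8, 39.3, 45.9, 41.3]
sales = [22.1, 10.4, 18.3, 18.5]

# With weight=0 and bias=0 every prediction is 0, so the cost is
# simply the mean of the squared sales values
cost = cost_function(radio, sales, 0.0, 0.0)
print(cost)  # ~318.43
```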

Gradient Descent

To minimize MSE we need to calculate the gradient of our cost function. A good introduction to gradient descent can be found on our wiki.

Now let's run gradient descent for our Radio vs Sales example.

There are two "parameters" (i.e. coefficients) in our cost function we can control: weight and bias. Since we need to consider the impact each one has on the final prediction, we use partial derivatives. To find the partial derivatives, we use the chain rule. We need the chain rule because (y - (mx + b))^2 is really 2 nested functions: the inner function y - (mx + b) and the outer function u^2. An explanation of the math behind this can be found here. An overview of the chain rule more generally can be found on our wiki.
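Concretely, the chain-rule step just described looks like this for the weight m (a sketch of the math the text refers to):

```latex
\begin{aligned}
f &= u^2, \qquad u = y - (mx + b) \\
\frac{\partial f}{\partial m}
  &= \frac{\partial f}{\partial u}\cdot\frac{\partial u}{\partial m}
   = 2u \cdot (-x)
   = -2x\,\bigl(y - (mx + b)\bigr)
\end{aligned}
```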

Given the cost function:

f(m,b) =  \frac{1}{N} \sum_{i=1}^{N} (y_i - (mx_i + b))^2

The gradient of this cost function would be:

f'(m,b) = \begin{bmatrix}
       \frac{1}{N} \sum_{i=1}^{N} -2x_i(y_i - (mx_i + b)) \\
       \frac{1}{N} \sum_{i=1}^{N} -2(y_i - (mx_i + b))
\end{bmatrix}

To solve for the gradient, we iterate through our data points using our current weight and bias values and take the average of the partial derivatives. The resulting gradient tells us the slope of our cost function at our current position (i.e. weight and bias) and the direction we should update to reduce our cost function (we move in the direction opposite the gradient). The size of our update is controlled by the learning rate.

def update_weights(radio, sales, weight, bias, learning_rate):
    weight_deriv = 0
    bias_deriv = 0
    companies = len(radio)
    for i in range(companies):
        # Calculate partial derivatives
        # -2x(y - (mx + b))
        weight_deriv += -2*radio[i] * (sales[i] - (weight*radio[i] + bias))

        # -2(y - (mx + b))
        bias_deriv += -2*(sales[i] - (weight*radio[i] + bias))

    # We subtract because the derivatives point in direction of steepest ascent
    weight -= (weight_deriv / companies) * learning_rate
    bias -= (bias_deriv / companies) * learning_rate

    return weight, bias


Training a model is the process of iteratively improving your prediction equation by looping through the dataset multiple times, each time updating the weight and bias values in the direction indicated by the slope of the cost function (gradient). Training is complete when we reach some predetermined “acceptable error” threshold, or when subsequent training iterations fail to reduce our cost.

Before training we need to initialize our weights (set default values), set our hyper-parameters (learning rate and number of iterations), and prepare to log our progress over each iteration.


def train(radio, sales, weight, bias, learning_rate, iters):
    cost_history = []
    for i in range(iters):
        weight, bias = update_weights(radio, sales, weight, bias, learning_rate)

        # Calculate cost for auditing purposes
        cost = cost_function(radio, sales, weight, bias)
        cost_history.append(cost)

        # Log Progress
        if i % 10 == 0:
            print("iter: " + str(i) + " cost: " + str(cost))

    return weight, bias, cost_history
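Putting the pieces together, here is an end-to-end sketch of a training run on the four-company sample. The helper functions are repeated so the snippet runs standalone, and the learning rate and iteration count are illustrative assumptions, not the values used for the results below:

```python
# Helpers repeated from above so this snippet is standalone
def cost_function(radio, sales, weight, bias):
    companies = len(radio)
    total_error = 0.0
    for i in range(companies):
        total_error += (sales[i] - (weight*radio[i] + bias))**2
    return total_error / companies

def update_weights(radio, sales, weight, bias, learning_rate):
    weight_deriv = 0
    bias_deriv = 0
    companies = len(radio)
    for i in range(companies):
        weight_deriv += -2*radio[i] * (sales[i] - (weight*radio[i] + bias))
        bias_deriv += -2*(sales[i] - (weight*radio[i] + bias))
    weight -= (weight_deriv / companies) * learning_rate
    bias -= (bias_deriv / companies) * learning_rate
    return weight, bias

def train(radio, sales, weight, bias, learning_rate, iters):
    cost_history = []
    for i in range(iters):
        weight, bias = update_weights(radio, sales, weight, bias, learning_rate)
        cost_history.append(cost_function(radio, sales, weight, bias))
    return weight, bias, cost_history

radio = [37.8, 39.3, 45.9, 41.3]
sales = [22.1, 10.4, 18.3, 18.5]

# Hypothetical hyper-parameters: learning_rate=0.0001, 200 iterations
weight, bias, history = train(radio, sales, 0.0, 0.0, 0.0001, 200)

# If gradient descent is working, cost falls from its starting value
print(history[0], history[-1])
```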


If our model is working, we should see our cost decrease after every iteration.


 iter=1     weight=.03    bias=.0014    cost=197.25
 iter=10    weight=.28    bias=.0116    cost=74.65
 iter=20    weight=.39    bias=.0177    cost=49.48
 iter=30    weight=.44    bias=.0219    cost=44.31
 iter=40    weight=.46    bias=.0249    cost=43.28


[Images: Regress line 1.png – Regress line 4.png — the regression line fitting the data better at successive stages of training]

Cost Function

[Image: Training cost function.png]


By learning the best values for weight (.46) and bias (.025), we now have an equation that predicts future sales based on radio advertising investment.

Sales = .46 Radio + .025

How would our model perform in the real world? I’ll let you think about it :)

Multiple Linear Regression

Let’s say we are given data on TV, radio, and newspaper advertising spend for a list of companies, and our goal is to predict sales (units sold).

Company    TV      Radio   Newspaper   Sales
Amazon     230.1   37.8    69.2        22.1
Google     44.5    10.4    23.1        10.4
Facebook   17.2    18.3    34.7        9.3
Apple      151.5   18.5    13.2        28.2

Growing Complexity
As the number of features grows, the complexity of our model increases and it becomes increasingly difficult to visualize, or even comprehend, our data.

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def plot_3d_plane(x, y, z, xlabel, ylabel, zlabel):
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.plot_trisurf(x, y, z, color='g')
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.set_zlabel(zlabel)
    plt.show()

One solution is to break the data apart and compare 1-2 features at a time. In this example we explore how Radio and TV investment impacts Sales.

[Image: 3d plane mlr.png]


As the number of features grows, calculating the gradient takes longer to compute. We can speed up convergence by "normalizing" our input data to ensure all values are within the same range. This is especially important for datasets with high standard deviations or features with very different ranges. Our goal now will be to normalize our features so they are all in the range -1 to 1.


 For each feature column:
     #1 Subtract the mean of the column (mean normalization)
     #2 Divide by the range of the column (feature scaling)

Our input is a 200 x 3 matrix containing TV, Radio, and Newspaper data. Our output is a normalized matrix of the same shape with all values between -1 and 1.

def normalize(features):
    """
    features     -   (200, 3)
    features.T   -   (3, 200)
    """
    # We transpose our matrix, swapping columns
    # and rows to make vector math easier
    for feature in features.T:
        fmean = np.mean(feature)
        frange = np.amax(feature) - np.amin(feature)

        # Vector Subtraction
        feature -= fmean

        # Vector Division
        feature /= frange

    return features
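As a quick sanity check, we can apply the two steps by hand to a single column (the values are the TV column from the table above):

```python
import numpy as np

# TV spend column from the table above
x = np.array([230.1, 44.5, 17.2, 151.5])

# 1) mean normalization, then 2) feature scaling
x_norm = (x - x.mean()) / (x.max() - x.min())

# Every value now lies strictly between -1 and 1, centered on 0
print(x_norm)
```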

Further Reading

Matrix Math

Before we continue, it's helpful to understand how basic matrix operations work in NumPy. For example, NumPy's dot product (``) can be used on both 1D and 2D arrays.

Predict Function

Our predict function outputs an estimate of sales given our current weights (coefficients) and a company's TV, radio, and newspaper spend. Our model will try to identify weight values that most reduce our cost function.

Sales = W_1 TV + W_2 Radio + W_3 Newspaper

Initialize Weights

W1 = 0.0
W2 = 0.0
W3 = 0.0
weights = np.array([
    [W1],
    [W2],
    [W3]
])

Predict Function

def predict(features, weights):
    """
    features - (200, 3)
    weights - (3, 1)
    predictions - (200, 1)
    """
    return, weights)

Cost Function

As we discussed above, we need a cost function to audit how our model is performing.

The math is the same, except we swap our mx + b expression for W_1 x_1 + W_2 x_2 + W_3 x_3. We also divide our expression by 2 to make the derivative calculation simpler, since the 2 in the exponent cancels when we differentiate.

MSE =  \frac{1}{2N} \sum_{i=1}^{N} (y_i - (W_1 x_1 + W_2 x_2 + W_3 x_3))^2
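To see why the extra \frac{1}{2} helps, differentiate with respect to W_1 (writing \hat{y}_i for our prediction): the 2 from the power rule cancels it, leaving a clean average.

```latex
\frac{\partial}{\partial W_1}\,\frac{1}{2N}\sum_{i=1}^{N}\bigl(y_i - \hat{y}_i\bigr)^2
= \frac{1}{2N}\sum_{i=1}^{N} 2\bigl(y_i - \hat{y}_i\bigr)\,(-x_{1,i})
= -\frac{1}{N}\sum_{i=1}^{N} x_{1,i}\bigl(y_i - \hat{y}_i\bigr)
```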

Calculating the mean squared error in Python:

def cost_function(features, targets, weights):
    """
    features - (200, 3)
    targets - (200, 1)
    weights - (3, 1)
    Returns the average squared error as a single number
    """
    N = len(targets)
    predictions = predict(features, weights)

    # Matrix math lets us do this without looping
    sq_error = (predictions - targets)**2

    # Return average squared error among predictions
    return 1.0/(2*N) * sq_error.sum()

Gradient Descent

Again using the chain rule we can compute the gradient--a vector of partial derivatives describing the slope of the cost function for each weight.

\frac{\partial f}{\partial W_1} = -x_1(y - (W_1 x_1 + W_2 x_2 + W_3 x_3))
\frac{\partial f}{\partial W_2} = -x_2(y - (W_1 x_1 + W_2 x_2 + W_3 x_3))
\frac{\partial f}{\partial W_3} = -x_3(y - (W_1 x_1 + W_2 x_2 + W_3 x_3))


def update_weights(features, targets, weights, lr):
    """
    Features: (200, 3)
    Targets: (200, 1)
    Weights: (3, 1)
    """
    predictions = predict(features, weights)

    # Extract our features
    x1 = features[:,0]
    x2 = features[:,1]
    x3 = features[:,2]

    # Flatten the (200, 1) error to (200,) so element-wise
    # multiplication with each (200,) feature column works
    error = (targets - predictions).flatten()

    # Use element-wise multiplication (*) to simultaneously
    # calculate the derivative for every observation
    d_w1 = -x1*error
    d_w2 = -x2*error
    d_w3 = -x3*error

    # Multiply the mean derivative by the learning rate
    # and subtract from our weights (remember the gradient points
    # in the direction of steepest ASCENT)
    weights[0][0] -= (lr * np.mean(d_w1))
    weights[1][0] -= (lr * np.mean(d_w2))
    weights[2][0] -= (lr * np.mean(d_w3))

    return weights

Vectorized Gradient Descent

The above gradient descent code has a lot of duplication. Can we improve it somehow? One way to refactor would be to loop through our features and weights, allowing our function to handle any number of features. However, there is an even better technique: vectorized gradient descent.

In VGD we use the same formula as above, but instead of operating on a single feature at a time, we use matrix multiplication to operate on all features and weights simultaneously! Math explained here.

Old way

d/dW_1 = -x_1(targets - predictions)

New way
Replace x_1, x_2, x_3 with our entire features matrix.

gradient = -features.T * (targets - predictions)


features = [
    [x1, x2, x3],
    [x1, x2, x3],
    [x1, x2, x3]
]

targets - predictions = [
    [e1],
    [e2],
    [e3]
]

Vectorized Gradient Descent

def update_weights_vectorized(features, targets, weights, lr):
    """
    gradient = features.T * (predictions - targets) / N
    Features: (200, 3)
    Targets: (200, 1)
    Weights: (3, 1)
    """
    companies = len(features)

    #1 - Get Predictions
    predictions = predict(features, weights)

    #2 - Calculate error/loss. Note the order: predictions - targets,
    # as in the docstring formula, gives the gradient its correct sign
    error = predictions - targets

    #3 Transpose features from (200, 3) to (3, 200)
    # so we can matrix multiply it with the (200, 1) error matrix.
    # Returns a (3, 1) matrix holding 3 partial derivatives --
    # one for each feature -- representing the aggregate
    # slope of the cost function across all observations
    gradient =, error)

    #4 Take the average error derivative for each feature
    gradient /= companies

    #5 - Multiply the gradient by our learning rate
    gradient *= lr

    #6 - Subtract from our weights to minimize cost
    weights -= gradient

    return weights

Add Bias

Our train function is the same as for simple linear regression, however we're going to make one final tweak before running: add a bias term to our feature matrix. There is a good explanation for why on Quora:

In linear regression, without the bias term your solution has to go through the origin. That is, when all of your features are zero, your predicted value would also have to be zero. However, that may not be the answer the training data suggests. Adding a bias weight that does not depend on any of the features allows the hyperplane described by your learned weights to more easily fit data that doesn't pass through the origin.

Essentially, when you're deciding whether to add a bias term you have to ask yourself: if all my feature values were 0, would my output also be zero? Is it possible there is some base value upon which my feature values have an effect? In our example, it's very unlikely that sales would be zero if companies stopped advertising. Possible reasons for this might include past advertising, existing customer relationships, retail locations, and salespeople.

# Here we add a constant bias term of 1
# By setting this value to 1, it turns our bias
# weight into a constant.
bias = np.ones(shape=(len(features),1))
features = np.append(bias, features, axis=1)
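After the append, the first column of our feature matrix is all 1s and the matrix gains one column (a small standalone check; the sample values are reused from the table above):

```python
import numpy as np

features = np.array([[230.1, 37.8, 69.2],
                     [44.5, 10.4, 23.1]])

bias = np.ones(shape=(len(features), 1))
features = np.append(bias, features, axis=1)

print(features.shape)   # (2, 4) -- one extra column
print(features[:, 0])   # [1. 1.] -- the constant bias feature
```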


After training our model through 1000 iterations with a learning rate of .0005, we finally arrive at a set of weights we can use to make predictions:

Sales = 4.7TV + 3.5Radio + .81Newspaper + 13.9

Our MSE cost dropped from 110.86 to 6.25.

[Image: Error rate multiple regression.png]

Code Examples