Tuesday, April 26, 2022

Linear Regression

Introduction:

This blog covers the basic fundamentals of linear regression. It explains the mathematics behind linear regression using the simple linear equation Y = mX + c, and is split into two parts: (1) simple linear regression and (2) multivariate linear regression.

First, it helps to understand where linear regression is useful. Some examples:
For simple (single-variable) linear regression:
  1. Employees' salary prediction based on experience.
  2. Vehicles' top speed based on their market values.
  3. Power consumption based on the number of family members, and many more.
For multivariate linear regression:
  1. House price prediction using carpet area, area code, etc.
  2. Future population estimation using parameters of the current lifestyle.
  3. Company growth based on expenses, market conditions, and many more.
Let's start sequentially: first simple linear regression, then multivariate linear regression.

1. Simple Linear Regression.

Simple linear regression predicts the output from a single feature using two coefficients. The basic equation for linear regression is:

Y = mX + c

Where m and c are the coefficients, Y is the estimated output, and X is the input value.

Here, m and c are the learnable parameters that are adjusted during training.

The core functionality of simple linear regression is divided into 2 parts:
  • calculate the mean, variance, and covariance of the input feature.
  • estimate the coefficients.

Calculate mean, variance, and covariance of the input feature:

We need to compute a total of three quantities:
  1. mean:
    `mean = \frac{1}{n}\sum_{i=1}^{n} x_{i}`
  2. variance:
    `variance = \sum_{i=1}^{n} (x_{i} - \overline{x})^{2}`
  3. covariance:
    `covariance = \sum_{i=1}^{n} (x_{i} - \overline{x})(y_{i} - \overline{y})`

Note that the variance and covariance above are left unnormalized (no 1/n factor); that factor would cancel anyway when the two are divided to estimate the slope. These equations can be written as Python functions as follows:

  1. mean:
    mu = lambda x : sum(x) / float(len(x))
  2. variance:
    var = lambda x, mean : sum([(val - mean) ** 2 for val in x])
  3. covariance:
    def calculate_covariance(x, y, mean_x, mean_y):
        # Sum of products of the deviations of x and y from their means.
        covariance = 0
        for xi, yi in zip(x, y):
            covariance += (xi - mean_x) * (yi - mean_y)
        return covariance
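
As a quick sanity check, these helpers can be run on a tiny hand-made dataset (all values below are invented purely for illustration):

    X = [1, 2, 3, 4, 5]
    Y = [1, 3, 2, 3, 5]
    mean_x, mean_y = mu(X), mu(Y)                         # 3.0 and 2.8
    var_x = var(X, mean_x)                                # 10.0
    cov_xy = calculate_covariance(X, Y, mean_x, mean_y)   # 8.0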

Estimate coefficients:

We can find the coefficients m and c using the above functions. In linear-equation terminology, m is the slope of the line and c is the bias, i.e. the intercept where the line crosses the y-axis.

The slope can be calculated as follows:

`slope = \frac{covariance}{variance_{x}}`

Bias can be found easily from the line equation Y = slope*X + bias: solving for bias gives bias = Y - slope*X, which we evaluate at the means, i.e. bias = mean_y - slope*mean_x.

The Python implementation for estimating the slope and bias values can be written as follows:

def estimate_coefficient(covariance, variance_x, mean_x, mean_y):
    # Slope from the covariance/variance ratio, bias from the means.
    slope = covariance / variance_x
    bias = mean_y - (slope * mean_x)
    return slope, bias

After obtaining the slope and bias values, the estimation is performed with the same line equation (Y = mX + c), substituting the slope, the bias, and the feature value.
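
Putting the pieces together, a minimal end-to-end sketch might look like the following (the wrapper simple_linear_regression is hypothetical, and the toy X and Y from the sanity check above are reused purely for illustration):

    def simple_linear_regression(X, Y, new_x):
        # Fit slope and bias on the training data, then predict for new_x.
        mean_x, mean_y = mu(X), mu(Y)
        var_x = var(X, mean_x)
        cov_xy = calculate_covariance(X, Y, mean_x, mean_y)
        slope, bias = estimate_coefficient(cov_xy, var_x, mean_x, mean_y)
        return slope * new_x + bias

    X = [1, 2, 3, 4, 5]   # e.g. years of experience (made-up values)
    Y = [1, 3, 2, 3, 5]   # e.g. salary in some arbitrary unit
    print(simple_linear_regression(X, Y, 6))   # slope = 0.8, bias = 0.4 -> 5.2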

2. Multivariate Linear Regression.

The multivariate linear regression model uses multiple features to estimate a continuous value. For example, we may need to predict the opening value of a specific stock using multiple features from the previous day, such as its closing price, opening price, average price, and more.

There are 2 parts to implement in multivariate linear regression:
  1. Estimator.
  2. Stochastic gradient descent.

1. Estimator

The estimator is a simple linear function: each feature is multiplied by its respective weight, the products are summed, and a bias value is added.

So, the estimator function can be written in mathematical form as follows:

    `y = b + \sum_{i=1}^{n} w_{i} x_{i}`

The above linear function can be written in Python as follows:

    def estimation(x, w):
        # w[0] is the bias; the remaining weights pair with the features in x.
        return w[0] + sum([wi * xi for wi, xi in zip(w[1:], x)])

Where x and w are 1D arrays (lists) containing the feature values and the weights (with the bias stored at w[0]) respectively.
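
For example, with three hypothetical features and made-up weights (bias first, then one weight per feature), the estimator behaves as follows:

    w = [0.5, 0.2, -0.1, 0.3]   # hypothetical weights: bias 0.5 plus three feature weights
    x = [10.0, 4.0, 2.0]        # hypothetical feature values for one sample
    print(estimation(x, w))     # 0.5 + 0.2*10 + (-0.1)*4 + 0.3*2 = 2.7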

2. Stochastic gradient descent

The stochastic gradient descent method can be broken down into 3 steps: (1) run the estimation, (2) calculate the error, and (3) update the weights according to the error.

We reuse the estimation function from the estimator. Its output is compared with the actual value to calculate the error, and the error, scaled by the learning rate, is then used to update the weights.

The squared error for one sample can be calculated as follows:

    `Err = (y_{pred} - y)^{2}`

To update the weights, the raw prediction error (y_pred - y), scaled by the corresponding feature value and a predefined learning rate, is subtracted from the current weight. The squared error is only tracked to monitor training; the update itself uses the raw error:

    `w_{i} := w_{i} - lr * (y_{pred} - y) * x_{i}`

    `b := b - lr * (y_{pred} - y)`

These update rules can be written as a Python function as follows:

def stochastic_gradient_descent(train_X, train_y, learning_rate, epochs):
    # weights[0] is the bias, followed by one weight per feature.
    weights = [0.0 for i in range(len(train_X[0]) + 1)]
    for epoch in range(epochs):
        sum_error = 0
        for X, Y in zip(train_X, train_y):
            pred = estimation(X, weights)
            error = pred - Y                 # raw prediction error drives the update
            sum_error += error ** 2          # squared error is tracked for monitoring
            weights[0] = weights[0] - learning_rate * error
            for i in range(len(X)):
                weights[i + 1] = weights[i + 1] - learning_rate * error * X[i]
        print(' >epoch=%d, lrate=%.3f, error=%.3f' % (epoch, learning_rate, sum_error))
    return weights
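
A short run on a hypothetical toy dataset (two features per sample; all numbers invented purely for illustration) might look like this:

    train_X = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0]]  # hypothetical feature rows
    train_y = [4.0, 4.0, 7.0, 7.0]                              # hypothetical targets
    weights = stochastic_gradient_descent(train_X, train_y, learning_rate=0.01, epochs=50)
    print(weights)                          # learned [bias, w1, w2]
    print(estimation([2.5, 2.0], weights))  # prediction for an unseen sample

In practice, features are usually normalized to a similar scale first so that a single learning rate works well for all weights.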
