Logistic regression

Contents

Slide 2

Logistic Regression is a statistical method for classifying objects. In this tutorial, we will focus on solving a binary classification problem using the logistic regression technique. The tutorial also presents a case study that will let you learn how to code and apply Logistic Regression in Python.
Slide 3

A doctor classifies a tumor as malignant or benign. A bank transaction may be fraudulent or genuine.
For many years, humans have been performing such tasks, albeit in an error-prone way. The question is: can we train machines to do these tasks for us with better accuracy?
One such example of a machine doing classification is the email client on your machine that classifies every incoming mail as "spam" or "not spam", and it does so with fairly high accuracy.
The statistical technique of logistic regression has been successfully applied in such email clients. In this case, we have trained our machine to solve a classification problem.
Slide 4

Logistic Regression is just one part of machine learning used for solving this kind of binary classification problem. Several other machine learning techniques have been developed and are in practice for solving other kinds of problems.
Here the outcome of the prediction has only two values: Yes or No.
We call these classes; that is to say, our classifier classifies the objects into two classes. In technical terms, we can say that the outcome or target variable is dichotomous in nature.
Slide 5

There are other classification problems in which the output may be classified into more than two classes. For example, given a basket full of fruits, you are asked to separate fruits of different kinds. The basket may contain Oranges, Apples, Mangoes, and so on, so when you separate out the fruits, you separate them into more than two classes. This is a multi-class classification problem.
Slide 6

Slide 7

Slide 8

Slide 9

The sigmoid function, also known as the logistic function, is going to be the key to using logistic regression to perform classification.
The sigmoid function takes in any value and outputs a value between 0 and 1.
The key thing to notice here is that no matter what value of z you put into the sigmoid function, you will always get a value between 0 and 1.
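A minimal sketch of the sigmoid in Python (the function name and the test values are illustrative, not from the slides):

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: maps any real z into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# However extreme z is, the output stays strictly between 0 and 1.
for z in (-100.0, -1.0, 0.0, 1.0, 100.0):
    print(z, sigmoid(z))  # -100 -> ~0.0, 0 -> 0.5, 100 -> ~1.0
```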
Slide 10

We can take our linear regression solution and place it into the sigmoid function, and it looks something like this:
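In symbols, a standard way to write the resulting hypothesis (a reconstruction of what the slide refers to; θ denotes the weight vector of the linear part):

h_θ(x) = σ(θᵀx) = 1 / (1 + e^(−θᵀx))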
Slide 11

We can set a cutoff point at 0.5 and say that anything below 0.5 results in class 0 and anything above 0.5 belongs to class 1.
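A minimal sketch of that decision rule (theta and X are illustrative names for the learned weights and the feature matrix):

```python
import numpy as np

def predict(theta, X, cutoff=0.5):
    """Return class 1 where sigmoid(X @ theta) is at least the cutoff, else class 0."""
    probs = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return (probs >= cutoff).astype(int)

# Toy usage: three observations with two features each.
theta = np.array([0.5, -0.25])
X = np.array([[1.0, 2.0], [3.0, 0.5], [-2.0, 1.0]])
print(predict(theta, X))
```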
Slide 12

Slide 13

Convex cost function for logistic regression
•If h goes to zero and Cost also goes to zero, class 0 is selected
•If h goes to 1 and Cost goes to zero, class 1 is selected
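The convex cost being described is the standard log-loss; written out (a standard formulation, not copied from the slide), the two limits become explicit:

Cost(h, y) = −log(h)        if y = 1
Cost(h, y) = −log(1 − h)    if y = 0

As h → 0 the term −log(1 − h) → 0, so predicting class 0 costs nothing; as h → 1 the term −log(h) → 0, so predicting class 1 costs nothing, which is exactly what the two bullet points say.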
Slide 14

Model evaluation

After we have trained a logistic regression model on some training dataset, we can evaluate the model's performance on a test dataset. We can use a confusion matrix to evaluate classification models.
Confusion matrix:
The confusion matrix is a table that is often used to describe the performance of a classification model on test data for which the true values are already known.
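A minimal sketch with scikit-learn (the labels below are made up purely for illustration):

```python
from sklearn.metrics import confusion_matrix

# Illustrative true and predicted labels for a small test set.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# Rows are the true classes, columns the predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```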
Slide 15

# Example: testing for the presence of a disease
NO = negative test = False = 0
YES = positive test = True = 1
Basic Terms:
True Positives (TP) = cases in which we predicted yes, they have the disease, and in reality they do have the disease.
True Negatives (TN) = cases in which we predicted no, they don't have the disease, and in reality they don't have the disease.
False Positives (FP) = cases in which we predicted yes, they have the disease, but in reality they don't have the disease. This is also known as a Type 1 error.
False Negatives (FN) = cases in which we predicted no, they don't have the disease, but in reality they do have the disease. This is also known as a Type 2 error.
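Continuing the illustrative scikit-learn sketch above, the four basic terms can be read straight off a binary confusion matrix:

```python
from sklearn.metrics import confusion_matrix

# Same made-up labels as in the previous sketch.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# For the binary case, ravel() yields the counts in this fixed order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")
```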
Slide 16

Misclassification Rate: how often is it wrong?
MR = (FP + FN) / total
MR = (10 + 5) / 165 ≈ 0.09
This is also called the Error Rate.
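Checking the slide's arithmetic (the counts 10, 5, and 165 come from its worked example):

```python
# Misclassification (error) rate from the example's confusion-matrix counts.
FP, FN, total = 10, 5, 165
mr = (FP + FN) / total
print(round(mr, 2))  # 0.09
```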
Slide 17

Slide 18

Multi-class classification

One-vs-all strategy: working with multiple binary classifications.
We train one logistic regression classifier for each class i to predict the probability that y = i.
For each x, we pick the class with the highest predicted probability.
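A minimal one-vs-all sketch with scikit-learn (the iris dataset is used purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# One binary logistic-regression classifier is trained per class; for each x,
# the class whose classifier reports the highest probability is chosen.
X, y = load_iris(return_X_y=True)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict(X[:5]))  # predicted class labels for the first five samples
```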

Slide 19

Slide 20

How to deal with overfitting

A higher order of polynomials seems to give a good fit, but how do we deal with overfitting?
–Reduce the number of features manually
–Keep all the features, but apply regularization
The most common variants in machine learning are L₁ and L₂ regularization:
–Minimize E(X, Y) + α‖w‖, where w is the model's weight vector, ‖·‖ is either the L₁ norm or the squared L₂ norm, and α is a free parameter that needs to be tuned empirically
–Regularization using the L₂ norm is called Tikhonov regularization (Ridge regression); regularization using the L₁ norm is called Lasso regularization
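Both variants are available directly in scikit-learn (a minimal sketch; note that scikit-learn parameterizes the strength as C = 1/α, so a smaller C means stronger regularization):

```python
from sklearn.linear_model import LogisticRegression

# L2 (Tikhonov / Ridge-style) regularization is the default penalty.
ridge_like = LogisticRegression(penalty="l2", C=1.0)

# L1 (Lasso-style) regularization requires a solver that supports it.
lasso_like = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
```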
Slide 21

Advantages:

it doesn't require high computational power
it is easily interpretable
it is widely used by data analysts and data scientists
it is very easy to implement
it doesn't require scaling of features
it provides a probability score for observations
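The last point in action (a minimal sketch on synthetic data, purely for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

# Each row holds [P(class 0), P(class 1)] for one observation.
print(clf.predict_proba(X[:3]))
```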
Slide 22

Disadvantages:

while working with logistic regression you are not able to handle a large number of categorical features/variables
it is vulnerable to overfitting
it can't solve non-linear problems, which is why it requires a transformation of non-linear features
it will not perform well with independent (X) variables that are not correlated with the target (Y) variable
Slide 23

https://www.youtube.com/watch?v=yIYKR4sgzI8