Machine learning

Everything until deep learning


Probability

Study of uncertainty and randomness, used to model and analyze uncertainty in data.


A form of regularization

Ridge regression


Rows on a confusion matrix

Correspond to what is predicted


Columns on a confusion matrix

Correspond to the known truth
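
Rows-as-predictions is the convention these cards use; other references transpose it. A minimal sketch of the layout, with made-up counts:

```python
# Confusion matrix laid out as in the two cards above:
# rows = what the model predicted, columns = the known truth.
#
#                  truth: yes   truth: no
# predicted: yes       TP           FP
# predicted: no        FN           TN
confusion = [
    [142, 22],   # predicted yes: 142 true positives, 22 false positives
    [29, 110],   # predicted no:  29 false negatives, 110 true negatives
]
tp, fp = confusion[0]
fn, tn = confusion[1]
```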


The sensitivity metric equation

True positives divided by the sum of true positives and false negatives


The specificity metric equation

True negatives divided by true negatives plus false positives
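
A minimal sketch of both equations in code, assuming the four counts have already been read off a confusion matrix (the example counts are made up):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positives / (true positives + false negatives)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negatives / (true negatives + false positives)."""
    return tn / (tn + fp)

print(sensitivity(tp=142, fn=29))  # ~0.83
print(specificity(tn=110, fp=22))  # ~0.83
```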


If sensitivity = 0.81, what does it mean?

Example: it tells us that 81% of the people with heart disease were correctly identified by the logistic regression model


If specificity = 0.85, what does it mean?

It means that 85% of the people without heart disease were correctly identified


When a confusion matrix has more than 2 rows, how do we calculate the sensitivity?

For each class, we sum the false negatives across all the other classes


What is the function of specificity and sensitivity?

They help us decide which machine learning method would be best for our data


If correctly identifying positives is the most important thing, which one should I choose: sensitivity or specificity?

Sensitivity


If correctly identifying negatives is the most important thing, which one should I choose? Sensitivity or specificity?

Specificity


ROC

Receiver Operating Characteristic


ROC function

To provide a simple way to summarize all the information, instead of making several confusion matrices


The y-axis, in ROC, is the same thing as

Sensitivity


The x-axis, in ROC, is the same thing as

1 − specificity (the false positive rate)


True positive rate =

Sensitivity


False positive rate =

1 − specificity


In other words, ROC allows us to

Set the right threshold


The diagonal line on an ROC plot shows

Where the true positive rate = the false positive rate


The ROC summarizes…

All of the confusion matrices that each threshold produced
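
A sketch of that idea: sweep the classification threshold, build the confusion matrix at each threshold, and keep only the (false positive rate, true positive rate) pair. The probabilities and labels are made up:

```python
probs  = [0.1, 0.3, 0.35, 0.6, 0.7, 0.9]   # model's predicted probabilities
labels = [0,   0,   1,    0,   1,   1]     # known truth

def roc_points(probs, labels):
    points = []
    for threshold in sorted(set(probs)) + [1.1]:   # one threshold past the max
        pred = [1 if p >= threshold else 0 for p in probs]
        tp = sum(1 for y, yh in zip(labels, pred) if y == 1 and yh == 1)
        fn = sum(1 for y, yh in zip(labels, pred) if y == 1 and yh == 0)
        fp = sum(1 for y, yh in zip(labels, pred) if y == 0 and yh == 1)
        tn = sum(1 for y, yh in zip(labels, pred) if y == 0 and yh == 0)
        tpr = tp / (tp + fn)   # sensitivity
        fpr = fp / (fp + tn)   # 1 - specificity
        points.append((fpr, tpr))   # one point per confusion matrix
    return points

print(roc_points(probs, labels))
```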


AUC

Area under the curve


AUC function

To compare one ROC curve to another
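
A sketch of turning ROC points into a single comparable number with the trapezoid rule (the points are made up; 0.5 is the diagonal, 1.0 is perfect):

```python
def auc(points):
    """Area under an ROC curve given (fpr, tpr) points."""
    pts = sorted(points)   # order by false positive rate
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2   # trapezoid between neighbors
    return area

print(auc([(0.0, 0.0), (0.1, 0.6), (0.3, 0.9), (1.0, 1.0)]))  # ~0.85
```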


Precision equation

True positives / (true positives + false positives)


Precision

the proportion of positive results that were correctly classified


Precision is not affected by imbalance because

It does not include the number of true negatives
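
A sketch making that point concrete: true negatives never enter the formula, so inflating them changes nothing (counts made up):

```python
def precision(tp: int, fp: int) -> float:
    """True positives / (true positives + false positives)."""
    return tp / (tp + fp)   # note: no true negatives anywhere

# Same precision whether the study has 100 or 1,000,000 healthy people:
print(precision(tp=142, fp=22))  # ~0.87
```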


Example when imbalance occurs

When studying a rare disease. In this case, the study will contain many more people without the disease than with the disease


ROC Curves make it easy to

Identify the best threshold for making a decision


AUC makes it easy to

Decide which categorization method is better


Entropy can also be used to

Build classification trees


Entropy is also the basis of

Mutual Information


Mutual Information

Quantifies the relationship between 2 things


Entropy is also the basis of

Relative entropy (the Kullback–Leibler distance) and cross entropy


Entropy is used to

quantify similarities and differences


If the probability is low, the surprise is

high


If the probability is high, the surprise is

low


The entropy of the result of X is

The expected surprise every time we sample the data


Entropy IS

The expected value of the surprise


We can rewrite entropy using

The sigma notation


Equation for surprise

Surprise = log( 1 / p(x) )

Equation for entropy

Entropy = Σ p(x) · log( 1 / p(x) ), i.e. the sum, over every outcome x, of its probability times its surprise
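
A sketch of both equations for a discrete distribution (using base-2 logs is an assumption; any base works):

```python
import math

def surprise(p: float) -> float:
    """log(1 / p): low-probability outcomes are highly surprising."""
    return math.log2(1 / p)

def entropy(probs: list[float]) -> float:
    """Expected surprise: sum of p(x) * log(1 / p(x)) over all outcomes."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(surprise(0.5))         # 1.0 bit
print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin surprises us less
```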

Entropy

Is the expected value of the log of the inverse of the probability


R² (R squared) does not work for

Binary data, yes or no


R squared works for

Continuous data


Mutual information is

A numeric value that gives us a sense of how closely related two variables are


Equation for mutual information

I(X; Y) = Σ over x and y of p(x, y) · log( p(x, y) / ( p(x) · p(y) ) )
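
A sketch of the equation for two binary variables; the joint probabilities are made up, and the marginals are derived from them:

```python
import math

# Joint probabilities p(x, y): the probability of x and y occurring together.
joint = {(0, 0): 0.4, (0, 1): 0.1,
         (1, 0): 0.1, (1, 1): 0.4}

# Marginal probabilities: sum the joint over the other variable.
px = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
py = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}

mi = sum(p * math.log2(p / (px[x] * py[y]))   # p(x,y) * log(p(x,y) / (p(x)p(y)))
         for (x, y), p in joint.items() if p > 0)
print(mi)   # ~0.28 bits: the two variables are somewhat related
```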

Joint probabilities

The probability of two things occurring at the same time


Marginal probabilities

In contrast to a joint probability, the probability of one thing occurring on its own


Least squares =

Linear regression


Squaring ensures

That each term is positive


Sum of Squared Residuals

A measure of how well the line fits the data


Sum of Squared Residuals function

The residuals are the differences between the real data and the line, and we are summing the square of these values


The sum of squared residuals must be

as low as possible
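
A sketch of the quantity being minimized, for one candidate line y = intercept + slope × x (data made up):

```python
def sum_of_squared_residuals(xs, ys, intercept, slope):
    """Sum of (observed - predicted)^2 over every data point."""
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

xs = [1, 2, 3, 4]
ys = [1.1, 1.9, 3.2, 3.8]
print(sum_of_squared_residuals(xs, ys, intercept=0.0, slope=1.0))  # lower = better fit
```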


First step when working with bias and variance

Split the data into 2 sets, one for training and one for testing


How do we find the optimal rotation for the line?

We take the derivative of the function. The derivative tells us the slope of the function at every point
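
A sketch of that idea as gradient descent on the sum of squared residuals (least squares also has a closed-form solution; the step size and iteration count here are arbitrary, and the data is made up):

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Follow the derivative of the sum of squared residuals downhill."""
    intercept, slope = 0.0, 0.0
    for _ in range(steps):
        # Partial derivatives of sum((y - (intercept + slope*x))^2):
        d_i = sum(-2 * (y - (intercept + slope * x)) for x, y in zip(xs, ys))
        d_s = sum(-2 * x * (y - (intercept + slope * x)) for x, y in zip(xs, ys))
        intercept -= lr * d_i
        slope -= lr * d_s
    return intercept, slope   # the derivative is ~0 at the best line

print(fit_line([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # roughly (0.15, 0.94)
```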


Least squares final line

The resulting line, which minimizes the distance between it and the real data


The first thing you do in linear regression

Use least squares to fit a line to the data


The second thing you do in linear regression

Calculate R squared


The third thing you do in linear regression

Calculate a p value for R squared


Residual

The distance from the line to a data point


SS(Mean)

Sum of squares around the mean


SS(Fit)

Sum of squares around the least squares fit


Linear regression is also called:

Least squares


What is bias?

The inability of a machine learning method, like linear regression, to capture the true relationship


How do we calculate how well the lines fit the training set?

By calculating the sums of squares: we measure how far the dots are from the line


How do we calculate how well the lines fit the testing set?

The same way: by calculating the sums of squares on the testing data

Overfit

When the line fits the training data well but does not fit the testing data well


Ideal algorithm

Low bias (it accurately captures the true relationship) and low variability


Low variability

Producing consistent predictions across different datasets


How does least squares determine values for the equation's parameters?

It picks the values that minimize the sum of the squared residuals


y = y-intercept + slope × x

Linear regression


y = y-intercept + slope1 × x + slope2 × z

Multiple regression
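
A sketch of fitting both kinds of equation at once with numpy's least squares solver (data made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([0.0, 1.0, 0.0, 1.0])
y = np.array([1.2, 2.4, 2.9, 4.1])

# Design matrix: a column of ones for the y-intercept, plus one column per variable.
X = np.column_stack([np.ones_like(x), x, z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope1, slope2 = coef
print(intercept, slope1, slope2)   # y ~ intercept + slope1*x + slope2*z
```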


Equation for R² (R squared)

R² = ( SS(mean) − SS(fit) ) / SS(mean)

Goal of a t-test

Compare means and see if they are significantly different from each other


Odds are NOT

Probabilities


Odds are

The ratio of something happening (e.g. the team winning) to something not happening (e.g. the team NOT winning)


Logit function

The log of the ratio of the probabilities; it forms the basis for logistic regression


log(odds)

Log of the odds


Log odds use?

The log odds are useful for working with probabilities of win/lose, yes/no, or true/false outcomes
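
A sketch of odds and log(odds) as functions of a probability p:

```python
import math

def odds(p: float) -> float:
    """Ratio of happening to not happening: p / (1 - p)."""
    return p / (1 - p)

def logit(p: float) -> float:
    """log(p / (1 - p)): the log(odds), the basis of logistic regression."""
    return math.log(p / (1 - p))

print(odds(0.75))    # 3.0, i.e. 3 to 1; odds are NOT probabilities
print(logit(0.75))   # ~1.10
print(logit(0.5))    # 0.0: a 50/50 outcome has even odds
```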


Odds ratio

The ratio of two odds

Relationship between the odds ratio and the log(odds ratio)

They indicate a relationship between 2 things, e.g. a relationship between a mutated gene and cancer: whether or not having the mutated gene increases the odds of having cancer

Tests used to determine p values for log (odds ratio)

Fisher's exact test, the chi-square test, and the Wald test
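
A sketch of the first two tests using scipy on a made-up 2 by 2 table (rows: mutated gene yes/no; columns: cancer yes/no); the Wald test is omitted here:

```python
from scipy.stats import chi2_contingency, fisher_exact

table = [[23, 117],    # mutated gene:    23 with cancer, 117 without
         [6, 210]]     # no mutated gene:  6 with cancer, 210 without

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(odds_ratio, p_fisher, p_chi2)
```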


A large R squared implies…

A large effect
