
biostats week 3bb part 1

A p value will always let us either reject or fail to reject the null



Transcript

Today we continue with these t procedures. So recall on Friday we went over confidence intervals and the t distributions.

So what was that kind of new thing that we introduced about the t distributions that we now have to account for when we're constructing our confidence intervals?

So we introduced the new statistical table, the T table, right.

And there was this extra thing on the T table that was not on the z table.

What was that extra thing? And hint, it depends on the sample size. Second hint, it tells us how well our sample standard deviation is doing in estimating our population standard deviation.

Third hint, it's written on the board here. It's the degrees of freedom. So remember how we're finally now at a place in this class?

We are using real methods that are actually used on real data.

The Z distributions that we introduced last week, those are more used for teaching purposes.

Not to say they don't have any applications, but when we're comparing means with real data, we're going to be using the T distribution.

The z distribution assumes we somehow miraculously know the population standard deviation, but that's rarely the case in practice.

So what do we use? What's our best guess at the population standard deviation?

Well, it's just the sample standard deviation, so we can just sub in s for that.

But we have to account for the degrees of freedom, account for how well our sample standard deviation is estimating our population standard deviation.

As you might expect, as our sample size goes up, our sample standard deviation will do a better job estimating our standard deviation for the population.

Does anybody remember how we calculate degrees of freedom in this one sample t test world?

And when I say one sample t test, I mean when we are calculating a mean and testing where the null lies relative to that mean. How do we calculate degrees of freedom?

It's a very simple calculation in this world. N minus 1. N minus 1. Exactly. So easiest calculation we'll do in this unit. So degrees of freedom are just n minus one here. And looking at the table, if you're going to do it with the table, this is what the T table looks like.

It might look more confusing at first, but I think it's actually easier to use. So what's different between the z table and the t table? There's one main difference between the z table and the t table.

So in the z table, recall, the critical values are in the margins and the probabilities are in the main cells of the table.

But with the t table, the critical values are actually in the main body of the table, whereas the probabilities and the confidence levels are in the margins.

Also in the margins are the degrees of freedom. There's a different t distribution for each degree of freedom.

So you have to make sure to take that into account when you're finding the critical value to calculate your confidence interval.

So if we're going to do it with a calculator, which hopefully is the route everyone will go, you can just find all these functions for your different hypothesis tests and different confidence intervals right there in the TI-84.

So you can go to STAT, over to TESTS, and scroll down to the 8th option, TInterval.

And that will give you a couple different options of how to enter data.

You can enter it in as a data list. So remember earlier in the semester we entered data into different lists to calculate summary statistics.

You can do it like that, entering the data into L1 or L2, and then calculate the confidence interval based on that.

Or if you already have the stats in the problem, let's say you already have a sample mean and standard deviation, you can just input those, put in the confidence level, and that will give you the confidence interval.

So when our null hypothesis is located inside the confidence interval, does that give us evidence to reject that null?

No, that would give us that we would fail to reject.

Right? Because what is that confidence interval telling us?

It's telling us this range of values where we're pretty sure the true value is.

So if the hypothesized mean is in that interval, that's not going to give us very good evidence to reject that null. If it's outside, that would give us good evidence.

All right, so hopefully everyone feels okay about that.

If not, go back and review those notes. Now we're going to talk about a one sample t test. So recall in the z world we did that test where we took some kind of benchmark value, some kind of mean outside the study, and we compared our sample data to it.

Right? Use that rare event rule. We use those probabilities under the curve to say, hey, if this null were true, what's the probability of getting the result we got?

Right? If that probability is super small, that's going to give us evidence to say, hey, maybe this expected value, maybe this null value is not true.

So same type of thing, right? Same kind of overall guiding principles. But the main difference now is we have to account for those degrees of freedom, because we're now in the t distribution.

So just like before, we're going to have a test statistic.

What does the test statistic tell us? Test statistics tells us the same thing in every distribution, every application.

What does the test statistic tell us? Test statistic tells us something about our data versus the null.

Two different words that start with a D. What does the test statistic tell us exactly? Yes: it's the distance and direction in which our data diverges from the null.

Right? So that's our first bit of evidence we mount up against the null. What's the second bit of evidence we mount up against the null? And I would say the second one, the one I'm hinting at, is sort of the main one.

We use the first one to get to the second one, which is a probability.

Yes, the p value. The p value, exactly. So we have our test statistic, right? That distance and direction from the null, which is in the center.

You can just calculate that as our sample mean minus the null over the standard error.

And then we have our P value, a two sided P value we calculate on both sides of the distribution.

So here are our steps. First we want to verify that our requirements are satisfied.

We're using the t procedures, so we either want the observations to be normally distributed or the sample size over 40.

As long as one of those holds, we're good to go. As always with hypothesis testing, we want to specify the null and alternative, and then we specify the significance level.

What's our significance level? It's kind of the main one we use: 0.05. And then we calculate the t statistic.

And that's going to be on a certain degree of freedom in this one sample t test world.

Degrees of freedom we just calculate as N minus 1.

Pretty straightforward there. Then we get the p value. We'll get this p value from the table. It won't be quite as laborious a calculation as it was in the z world.

Then we state our conclusions. So very important here; hopefully everyone has this already. If our p value is less than our alpha, what do we conclude? We reject the null. Right, because it tells us there's a very small chance of seeing the result we saw under the assumption that the null is true.

And if the p value is greater than alpha, we fail to reject.
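As a rough sketch of those steps in code (the numbers below are hypothetical, just to show the mechanics of the test statistic and degrees of freedom):

```python
from math import sqrt

def one_sample_t(xbar, mu0, s, n):
    """Test statistic and degrees of freedom for a one-sample t test.

    xbar: sample mean, mu0: null-hypothesis mean,
    s: sample standard deviation, n: sample size.
    """
    se = s / sqrt(n)         # standard error of the mean
    t = (xbar - mu0) / se    # distance and direction from the null
    df = n - 1               # degrees of freedom for one sample
    return t, df

# Hypothetical numbers: xbar = 10, null mu0 = 8, s = 4, n = 16
t, df = one_sample_t(10, 8, 4, 16)
print(t, df)  # t = (10 - 8) / (4 / 4) = 2.0 on 15 degrees of freedom
```

You would then compare that t, on those degrees of freedom, to the table or let the calculator get the p value.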

So once again, the idea of this hypothesis testing is we are operating in this world where we assume this null, this mean is true.

Right? And then we calculate maybe a sample mean way over here and find that it's way up higher than our null.

And under the normal curve we can calculate the probability.

The area of seeing this result, right. And if this probability is super small, it's unlikely that the null is true.

So it gives us the evidence to reject the null. So how do we do this with a t table? Well, it'll be similar to how we did it with the confidence interval.

So first we just need to calculate degrees of freedom, go to that row and kind of block everything out but that row.

Then that t test statistic that you calculate, you just take the absolute value of that and then you kind of find where that's positioned in between the different t critical values on those rows.

And then you can scroll down to the bottom of the table to find your two sided p value.

So if this t critical value or your t test statistic is in the table, you can go to that column containing it.

Quite often if you're going to do it with a table, it won't be.

So it'll be sort of in between two values. So you'll specify your p value as some sort of inequality.

We're going to go over how to do it with the table for a few minutes and then we'll switch over to the calculator.

This slide looks a little crazy, but let's say we calculated a t test statistic of negative 2.52 and we had n of 8.

So we, we could do n minus 1. We get over here at Degrees of Freedom 7, take the absolute value of this t test statistic and then we find where that lies here.

Right. Notice we have all these critical values going in this direction.

We find that our 2.52 is right here. So we can scroll down and see what our two sided P value is.

And we see our two sided p value lies somewhere in between 0.02 and 0.05.

If our significance level was 0.05, would we reject or fail to reject the null in this situation?

Is it less than 0.05? Yep. So we reject. All right, let's try some other ones. I don't really know why this is the same one, so I'll just click through it. So we take the absolute value, right. Since we're at n of 8, we can just do n minus 1, so we only look at this 7 row. We want to be looking at the two sided p value. So we scroll over and find that 2.52, the absolute value of that, in between these two.

So we get a p value between 0.02 and 0.05. Same thing with the positive version of this, right? We're taking the absolute value, so again we get between 0.02 and 0.05. What's the p value if t equals 5.1 on 8 degrees of freedom? It's going to be somewhere over here, right? So how would you specify that? What's our last value? And the values are getting smaller, right? So you'd say p is less than. You got it: p is less than 0.002. Notice the relationship happening here, right? We've already kind of outlined this relationship up on the curve.

As our T values get higher, what's happening to our P values?

Yep, they get lower. And notice that on the curve, right, we have a T test statistic that's higher, that's showing our data is going to be further away from the null.

We're going to have a smaller area in that tail. So smaller areas give us evidence to reject; higher t values give us the evidence to reject.

All right, what about this: what's the p value if t equals 0.7? It'd be greater than 0.5, right? Because it's going off in this direction. So with this one, would we reject or fail to reject?

We fail to reject, right? Because with a really high p value, our sample data is likely not very far from our null hypothesized mean.

All right, let's do an example. Are there any questions about this table, how it works?
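That table lookup can be mimicked in code. A small sketch, assuming just two columns of the standard t table for 7 degrees of freedom (2.365 for two-sided p = 0.05, and 2.998 for two-sided p = 0.02):

```python
# (two-sided p, critical t) columns for df = 7, left to right,
# taken from a standard t table (assumed subset of the full row)
ROW_DF7 = [(0.05, 2.365), (0.02, 2.998)]

def bracket_p(t_stat, row):
    """Bracket the two-sided p value between neighboring table columns."""
    t_abs = abs(t_stat)
    upper = 1.0                 # before the first column, all we know is p <= 1
    for p, crit in row:
        if t_abs < crit:        # |t| sits before this column,
            return (p, upper)   # so p is between this column's p and the previous one
        upper = p
    return (0.0, upper)         # |t| is past the last column: p < upper

print(bracket_p(-2.52, ROW_DF7))  # (0.02, 0.05): reject at alpha = 0.05
print(bracket_p(0.7, ROW_DF7))    # (0.05, 1.0): only p > 0.05 is knowable from this coarse row
```

With the full table row you would get tighter brackets, like the p > 0.5 we read off in class for t = 0.7.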

After the attacks of September 11, 2001, there was concern about the air quality near the World Trade Center.

Measurements were recorded at random locations in Building 5 on 10 different days immediately after the collapse.

The mean air lead level across the sampled days was 3 and the standard deviation was 1.91.

The EPA has established an air quality standard for lead of 1.5. Use a 0.05 significance level to test the claim that the sample is from a population with a mean greater than the EPA standard.

So in this one sample T world, right, we're collecting sample data, taking the mean of that sample data, and then we're testing that mean against some expected claim outside the study.

So that claim outside the study is the null hypothesis.

The alternative is that it's not, that the true value doesn't equal it. So in this situation, there are a lot of numbers up there; which one would be our null hypothesis?

Exactly. It would be that the true mean is 1.5 in this group.

That's what we would expect. And then what are our sample mean and sample standard deviation? 3 and 1.91.

And what's our n, and what are our degrees of freedom? It's kind of funny on this one. They recorded it on 10 different days, so n would be 10, and then degrees of freedom would be what?

How do we calculate degrees of freedom? N minus one. Yeah. So what would it be here? It'd just be nine. All right, so what do we do next? What are our two bits of evidence that we mount up against the null hypothesis?

The test statistic and the p value. So we've got to do the test statistic first. So how do we do that here? Our null was 1.5, our mean was 3, our standard deviation is 1.91. How do we calculate our test statistic? It's just our sample mean minus our null over the standard error.

Do that, and we get 2.48. Now what's our next step here? Look on the table. You got it. Look on the table. We've got our distance and direction from the null.

We are at 9 degrees of freedom. So at 9 degrees of freedom, looking for 2.48, we find 2.48 to be right here, somewhere in between 0.02 and 0.05.

So we would have this p value that is between 0.02 and 0.05.

So will we reject or fail to reject at the 0.05 level? We reject.

Right. So since our p value is less than alpha, we reject the null in favor of the alternative.

We conclude that there is evidence at the 0.05 level that the lead levels in the air at Building 5 on the days immediately following the attack were higher than the recommended standard.

It's estimated that the mean lead level was 3. Any questions about how we did this, about the conclusion or anything?
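Checking that test statistic by hand, as a sketch with the numbers from the example:

```python
from math import sqrt

# Lead-level example from class: xbar = 3, s = 1.91, n = 10, null mu0 = 1.5
mu0, xbar, s, n = 1.5, 3.0, 1.91, 10
se = s / sqrt(n)          # standard error of the mean
t = (xbar - mu0) / se     # test statistic
df = n - 1                # degrees of freedom
print(round(t, 2), df)    # 2.48 on 9 degrees of freedom, matching the table work above
```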

So let's do another. And I think this is the one you're going to get a calculator with.

So go ahead and get your calculator handy. In general, the average weight of rats is mu equals 175 grams.

The mean body weight of rats for the particular type of genetic disorder is unknown, as is the standard deviation.

It would be useful to know whether the mean body weight of rats with the genetic disorder is different from the mean body weight of rats in the general population.

The mean and standard deviation of a sample of n equals 9 rats are given below.

Test the hypothesis that rats with the disorder have a mean body weight different from 175.

We have a sample mean, we have a sample standard deviation.

What's our null and alternative here? The null is 175; the alternative is it doesn't equal 175. So we're going to test for differences in our data versus the null.

We have an n equals 9. What's degrees of freedom? Eight. You got it. So now go to that STAT button sort of in the top center of your TI calculator, go over to the TESTS tab, and scroll down to the second option that says T-Test. Within T-Test, just like with the confidence interval, you have two options.

You have the Data option and you have the Stats option.

Once again, the Data option refers to when you're entering the data in as a list, like we did earlier in the semester when we entered data sets into lists to calculate summary statistics.

That's when you would use that: if you're given the data set as a list.

But what are we given here? We're just given statistics. So from this example we're just going to hit stats.

And once you hit stats, it asks for a bunch of information.

It asks for the null, it asks for the sample standard deviation.

I'm sorry: the sample mean, sample standard deviation, and n.

And then it asks for what type of test you want. It's got the little equal with the line through it.

That's the two sided test. That's something we're always going to do. You can also do a one sided test in either direction as well.

So in this situation what are you going to put in for the first value?

Uh, 175. You got it. And then what about for your x bar? Just that 182.67. And then for Sx, what are you going to put in? Just the standard deviation, 10.94. And then what's your n? It would actually be 10. Yeah, degrees of freedom is nine. I'm glad you said that; degrees of freedom is calculated behind the scenes.

I think it'll be given to you once you actually hit calculate.

But yeah, so if you do the calculator, degrees of freedom will happen behind the scenes.

And then we're going to do a two sided test. We love two sided tests. You can either hit calculate or draw. They're both going to give you the P value. Draw will just give you a nice little curve to look at too.

So that's kind of fun. But both should give you the test statistic and the P value if you hit calculate and enter.

So you should come up with something that looks like this.

Wait, did you say nine or ten? No, I was wrong. I'm sorry, you were right: n is 9. Sorry, professor malfunction. In one of the earlier examples n was 10; here n is 9, so degrees of freedom would be eight. Forget everything I said in the past 45 seconds. Yeah. Thank you. Sorry. So we put in all this stuff over here, and we should get a p value that's greater than our alpha, so we would fail to reject. Any questions about that? Any calculator issues?
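Checking the rats example by hand, as a sketch. The 2.306 critical value is an assumption pulled from a standard t table (two-sided 0.05 column, df = 8), not stated in class:

```python
from math import sqrt

# Rat-weight example: xbar = 182.67, s = 10.94, n = 9, null mu0 = 175
mu0, xbar, s, n = 175.0, 182.67, 10.94, 9
t = (xbar - mu0) / (s / sqrt(n))       # test statistic
df = n - 1                             # 8 degrees of freedom
t_crit = 2.306                         # two-sided 0.05 critical value for df = 8 (from a t table)
print(round(t, 2), abs(t) < t_crit)    # 2.1 True -> fail to reject at alpha = 0.05
```

That matches the calculator's p value coming out greater than alpha.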

All right, let's do some more practice. A distribution of diastolic blood pressures for the population of female diabetics between ages 30 and 34 has an unknown mean mu and standard deviation sigma.

Researchers want to know whether the mean of this population is different from the mean diastolic blood pressure of the general population of females in this age group, which is 74.4.

A sample of 10 diabetic women in this age group is selected.

Their mean diastolic blood pressure is 84 and standard deviation is 9.1.

Using this information, conduct a two sided test with alpha equals 0.05.

Okay, so what is our null? What's our alternative? What's our sample mean? Sample standard deviation? n? A lot of information here, right? But half the battle with these is picking apart what we're going to type in.

Is it 74.4? You got it: 74.4 would be the null. We have a sample of 10, and then our sample mean is 84.

Sample standard deviation is 9.1. We're testing at the alpha equals 0.05 level. So we can go to STAT, TESTS, and go to that second option, the one sample t test.

And now let's just enter all this in and see what we get.

So for the null we'll put in 74.4. For the x bar we'll put in 84. For Sx we'll put in 9.1. For n we'll put in 10, and then we can just go ahead and hit calculate.

What do we get for our test statistic and p value? For our t value, 3.33 and... you got it.

So you should get a t of 3.336 and a p value of 0.008. So what do we conclude here? Do we reject or fail to reject at the alpha equals 0.05 level?

Reject. We reject. Right. So we reject the null because the p value is so low.
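The numbers we just entered can be checked by hand, as a sketch:

```python
from math import sqrt

# Blood-pressure example: xbar = 84, s = 9.1, n = 10, null mu0 = 74.4
mu0, xbar, s, n = 74.4, 84.0, 9.1, 10
t = (xbar - mu0) / (s / sqrt(n))   # test statistic
print(round(t, 3), n - 1)          # 3.336 on 9 degrees of freedom; the calculator reports p = 0.008
```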

So in summary here the T distributions help us deal with the fact that we are estimating the population standard deviation with the sample standard deviation.

We're entering that new level of uncertainty in the system.

So because of that, we have to introduce those degrees of freedom.

Degrees of freedom are a direct function of our sample size.

So as we would expect, just like with the sample mean or other sample statistics, as N gets larger, our sample standard deviation will do a better job of estimating our population standard deviation.

The basic approach to hypothesis testing and confidence intervals is similar to the stuff we did before, but we just now have actually a little bit easier way to do it.

We can just do it with a calculator. Hopefully we kind of all understand a little bit of what's going on behind the scenes.

The basic steps of hypothesis tests and confidence intervals are going to be the same regardless of the methods used.

Right. So it's kind of wild, but we don't have that much more material to cover in this class, which is just kind of crazy.

But in the next few weeks, we will use these overall ideas to sort of apply them to different types of methods.

Like, we're still going to be in the normal distribution, but within that distribution we're going to look at comparing two different groups in a couple different ways.

So, but before we do all that, let's just do a little bit more practice today.

So we're going to pop back over to confidence intervals.

So I'm going to let you all read through this one and solve it, hopefully with the calculator and then we'll come back together and feel free to work with somebody next to you if you'd like it.

Okay, so what do we get for the 99% confidence interval and what is the interpretation of that confidence interval?

So we should have gone to STAT, TESTS, to the eighth option, TInterval.

Do you enter this in as Data or Stats? Stats, right. What do you put in for x bar? The sample mean, yep. And then what about for s? 215. We had 75 renters, so n is 75. The only catch here is we're asked for a 99% confidence interval.

So on the C level, just make sure to put in 0.99. It's going to default to 0.95, but you want to put in 0.99 there.

This is what's shown if you did it with a table; you could just look this up in the table and construct it like that.

If you did it with the calculator, you should get something very similar.
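What the calculator computes here can be sketched by hand. Two assumptions in this sketch: the sample mean is not printed in the transcript, so 1225 is inferred from the interval we discuss in class, and 2.64 is the approximate two-sided 99% critical value for df = 74 from a t table:

```python
from math import sqrt

# Rent example: s = 215, n = 75, 99% confidence
xbar = 1225.0    # assumed sample mean (inferred, not stated explicitly in the transcript)
s, n = 215.0, 75
t_crit = 2.64    # approximate 99% critical value for df = 74 (from a t table)
margin = t_crit * s / sqrt(n)          # margin of error: t* times the standard error
lo, hi = xbar - margin, xbar + margin  # confidence interval endpoints
print(round(lo), round(hi))            # roughly 1159 and 1291
```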

So is everybody getting something somewhere around this?

What would be the interpretation of this confidence interval?

We are 99% confident that. What are we trying to do with the confidence interval?

What are we trying to capture? We have sample data. We calculated the mean of that data. Great. The mean of that data is the best guess at the parameter.

But because of sample data, we know it's not perfect.

Right. So the confidence interval gives us a plausible range of values in which we think the real parameter likely lies.

Right. So we're 99% confident that what? We're 99% confident the true mean is between 1160 and 1290. All right, let's do one more. In general, the average weight of newborns is mu equals 3500 grams.

The mean body weight of newborns with a particular disorder is unknown, as is the standard deviation.

It may be useful to know whether the mean body weight of newborns with a disorder is different from the mean body weight of newborns in the general population.

A study was conducted with 12 newborns with the disorder and found the sample mean body weight to be 3169.5 with a sample standard deviation of 327.88.

Test the hypothesis that newborns with the disorder have mean body weight different from 3500.

Use a 0.05 significance level. So we already have the null and the alternative. What's our n? N is 12. Degrees of freedom? If our n is 12, our degrees of freedom would just be n minus 1, which would be 11.

What are the sample mean and sample standard deviation?

Go ahead. Yeah, it's this. This will be the sample mean, this will be the sample standard deviation.

So we can once again use STAT, TESTS. And which calculator function are you going to use?

What was that? So this one is asking us to do a t test. For a t test, we want to get that t test statistic and that p value.

So we're actually going to use the second option, T-Test.

The eighth one is the TInterval. So yeah, let's go to that second option, T-Test, and let's enter all this stuff in.

So for the null we would enter in what? We would just enter in that 3500. And then for our x bar we enter in 3169.5. And what do you get for your t test statistic and p value?

Negative 3 something. Hopefully negative 3.5. And what's the p value? We got a very small p value. So that gives us the evidence to conclude what? Do we reject or fail to reject with that small p value?

Reject, yeah. So we got the small p value, so we reject the null and conclude that we found a significant difference in body weight for the newborns with this disorder.
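Checking that test statistic by hand, as a sketch:

```python
from math import sqrt

# Newborn-weight example: xbar = 3169.5, s = 327.88, n = 12, null mu0 = 3500
mu0, xbar, s, n = 3500.0, 3169.5, 327.88, 12
t = (xbar - mu0) / (s / sqrt(n))   # test statistic
print(round(t, 2), n - 1)          # -3.49 on 11 degrees of freedom
```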

Real quick, I just want to go over a few things from homework 8. A lot of people missed some of these. The Framingham Heart Study recorded the systolic blood pressure of 3534 adults, and the 95% confidence interval is (126.7, 127.9).

The correct interpretation of the confidence interval is that we are 95% confident that the interval ranging from 126.7 to 127.9 contains the population mean.

So when we construct a confidence interval, we are 95% confident in the methods used to capture the population parameter's true value.

It's based on the idea of theoretically resampling the population a ton of times.

With the same sample size, 95% of the time we would expect to see the true population parameter fall between our lower and upper bound.

When we talk about 95% CIs, we're not talking about chance, but rather confidence in our methods. I realize this is sort of confusing, and they sound very similar.

I also realized that in statistics we talk about chance a lot.

So this thing tends to trip people up a lot.

So I just want to highlight that. Along the same lines, same homework: a study reports the mean change in high density lipoproteins in adults eating raw garlic six days a week for six months.

Margin of error for 95% confidence interval is given as + or -6 milligrams per deciliter.

This means that the study used a method that gives a result within six of the true mean HDL change in the population for 95% of all samples of 3534 adults.

So once again construct a CI. It's based on the idea of theoretically resampling the population many, many, many times.

95% of the time we would expect to see the true parameter fall between the low and upper bound.

So this one kind of gets at that theoretical definition of it.

On resampling: you plan to construct a confidence interval for the mean mu of a normal population with a known sigma.

Which of the following will reduce the size of the margin of error?

Well, all of these actually will reduce that margin of error.

Remember that margin of error is a thing you add and subtract to the sample mean to get the confidence interval.

So if we use a lower level of confidence, that will make our margin of error smaller.

If we increase the sample size, that will decrease standard error.

And then if we reduced sigma, that would do the same.

So this kind of refers to this slide. We talked through how decreasing the sample size makes this denominator smaller and the whole term larger, leading to a wider interval.

If we increase the confidence level, it'll lead to a wider interval.

And if we increase the standard deviation, it will lead to a wider interval.

So what you want to do for this one is just kind of take this slide and sort of flip it around, you know.

So increasing the sample size leads to a narrower interval.

Decreasing the confidence level leads to a narrower interval, and decreasing the standard deviation leads to a narrower interval.
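Those three effects can be sketched in code with the known-sigma margin of error, m = z* sigma / sqrt(n). The baseline numbers here are hypothetical, chosen just to show the direction of each change:

```python
from math import sqrt

def margin_of_error(z_crit, sigma, n):
    """Margin of error for a mean with known sigma: m = z* sigma / sqrt(n)."""
    return z_crit * sigma / sqrt(n)

# Hypothetical baseline: sigma = 100, n = 25, 95% confidence (z* = 1.96)
base = margin_of_error(1.96, 100, 25)
print(margin_of_error(1.96, 100, 100) < base)   # larger n -> smaller margin (True)
print(margin_of_error(1.645, 100, 25) < base)   # lower confidence (90%) -> smaller margin (True)
print(margin_of_error(1.96, 50, 25) < base)     # smaller sigma -> smaller margin (True)
```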

In a test of statistical hypotheses, the P value tells us probability of seeing the results we saw or something more extreme under that assumption that the null is the truth.

So remember, it's that innocent until proven guilty.

We're operating in a world where the null is true and we're mounting up that evidence.

We're calculating the probability of seeing the result we saw.

And if that probability is super small, we can use that as evidence to reject the null.

That is it for me today. Homework nine is due tonight, so make sure to get that done.

Now, we have all our homework due on Mondays, along with the labs.

So don't forget that. If you have any questions about that, I will be here for a few minutes.

If not, have a great rest of your day and I will see you on Wednesday.

Can we actually go over that? Sure. Take care. Thank you. Bye. Take care. Is it fair to, like, go over what I got, and see if it's right, and then you tell me if it's wrong or not?

Yeah, you can just change it. Okay, so for the first one, I got that the t procedures say the sample should be large, at least 40.

Yeah. You may use them provided the sample size is large, say at least 40.

Okay, question two. I got all of the above. You got it. Okay, question three. I got the inference from a voluntary response sample can't be trusted.

You got it. Test of significance determines the observed effect due to random chance.

A statistical test is more likely to find a significant increase in mean life if it is based on a very large sample. You got that? Yeah. Okay, now for this one: why is it based on a very large sample instead of a small one?

Because in my opinion, you would see a bigger contrast in a small sample, while a larger sample gets closer to the mean.

So we have our curve here, right. That t test statistic, right, we have our null in the middle, and our t test statistic takes n into account.

Right. So we have s over square root of n. So if we have a really small n, what's it going to do to this whole term?

Wouldn't it make it miniature? If we have a really small n, it's actually going to make the whole thing larger.

Right. So let's see. Actually, sorry, this is the wrong one. So s over square root of n. Right. We have that in the denominator of our t test statistic.

So we have our x bar minus mu over here. So if this is smaller, it's going to make this whole thing larger.

Right. So mathematically, if n is larger, we'll be more likely to have a small area over here and reject the null.

So all else equal, with a larger sample you'll be more likely to reject the null.

Okay, that makes more sense. Let me go over here for six. I got the data can be thought of as a random sample from the population of interest.

You got it. Seven was just math. I got 268, 99, negative 289. And the last one should be larger. You got it. Okay, awesome. There's just that one question. I was like, did y'all have a question about something?

We were just confused about one, but we got it. You got it. Right. Okay, awesome. Thank you. Thank you. Have a good rest of the day. No problem at all.



Made With Glean | Open Event

Made With Glean | Open Event

week 3bb part 2

biostats week 3bb part 1

p value will always have a way to reject or fail to reject the null


biostats week 3bb part 1

Transcript

Today we continue with these T procedures. So recall on Friday we went over to confidence intervals in the t distributions.

So what was that kind of new thing that we introduced about the t distributions that we now have to account for when we're constructing our confidence intervals?

So we introduced the new statistical table, the T table, right.

And there was this extra thing to the T table that was not on the C table.

What was that extra thing? And hint, it depends on the sample size. Second hint, it tells us how well our sample standard deviation is doing in estimating our population standard deviation.

Third hint, it's written on the board here. It's the degrees of freedom. So remember how we're finally now at a place in this class?

We are using real methods that are actually used on real data.

The Z distributions that we introduced last week, those are more used for teaching purposes.

Not to say they don't have any applications, but when we're comparing means with real data, we're going to be using the T distribution.

The z distribution assumes we somehow miraculously know the population standard deviation, but that is not applicable.

So what do we use? What's our best guess at the population standard deviation?

Well, it's just the sample standard deviation, so we can just sub in s for that.

But we have to account for the degrees of freedom, account for how well our sample standard devia deviation is estimating our population standard deviation.

As you might expect, as our sample size goes up, our sample standard deviation will do a better job estimating our standard deviation for the population.

Does anybody remember how we calculate degrees of freedom in this one sample t test world?

And when I say one sample t test, I mean when we are calculating a sample mean and testing where the null lies relative to it. How do we calculate degrees of freedom?

It's a very simple calculation in this world. N minus 1. N minus 1. Exactly. So easiest calculation we'll do in this unit. So degrees of freedom are just n minus one here. And looking at the table, if you're going to do it with the table, this is what the T table looks like.

It might look more confusing at first, but I think it's actually easier to use. So what's different between the z table and the t table? There's one main difference.

So in the Z table, recall, the critical values are in the margins and the probabilities are in the main cells of the table.

But with the t table, the critical values are actually in the main body of the table, whereas the probabilities and the confidence levels are in the margins.

Also in the margins are the degrees of freedom. There's a different t distribution for each degree of freedom.

So you have to make sure to take that into account when you're finding the critical value to calculate your confidence interval.

So if we're going to do it with a calculator, which hopefully is the route everyone will go, you can find all these functions for your different hypothesis tests and confidence intervals right there in the TI-84.

So you can go to STAT, over to TESTS, and scroll down to the eighth option, TInterval.

And that will give you a couple different options of how to enter data.

You can enter it in as a data list. So remember earlier in the semester, we entered data into different lists to calculate summary statistics. You can do it like that, enter it into L1 or L2, and then calculate the confidence interval based on that.

Or if you already have the stats in the problem, let's say you already have a sample mean and standard deviation, you can just input those, put in the confidence level, and that will give you the confidence interval.
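Behind the scenes, the TInterval function is just computing x̄ ± t*·s/√n. Here's a minimal sketch in Python, not the class's required method; the numbers in the usage lines are hypothetical, and t* is whatever the t table gives at df = n − 1:

```python
import math

def t_confidence_interval(xbar, s, n, t_star):
    # CI = sample mean +/- (t critical value) * (standard error)
    margin = t_star * s / math.sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical numbers: xbar = 10, s = 2, n = 10; t* = 2.262 is the
# 95% critical value at df = 9 from a t table.
lo, hi = t_confidence_interval(10, 2, 10, 2.262)
```

The interval is always centered on the sample mean, with the margin of error added and subtracted on each side.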

So when our null hypothesis is located inside the confidence interval, does that give us evidence to reject that null?

No, that would mean we fail to reject.

Right? Because what is that confidence interval telling us?

It's telling us this range of values where we're pretty sure the true value is.

So if the hypothesized mean is in that interval, that's not going to give us very good evidence to reject that null. If it's outside, that would give us good evidence.

All right, so hopefully everyone feels okay about that.

If not, go back and review those notes. Now we're going to talk about a 1 sample T test. So recall in the z world we did that test where we took some kind of benchmark value, some kind of mean outside the study and we compared our sample data to it.

Right? We used that rare event rule, those probabilities under the curve, to say: hey, if this null were true, what's the probability of getting the result we got?

Right? If that probability is super small, that's going to give us evidence to say, hey, maybe this expected value, maybe this null value is not true.

So same type of thing, right? Same kind of overall guiding principles. But the main difference now is we have to account for those degrees of freedom, because we're now in the t distribution.

So just like before, we're going to have a test statistic.

What does the test statistic tell us? Test statistics tells us the same thing in every distribution, every application.

What does the test statistic tell us? Test statistic tells us something about our data versus the null.

Two different words that start with a D. What does the test statistic tell us exactly? Yes. It's the distance and direction in which our data diverges from the null.

Right? So that's our first bit of evidence we mount up against the null. What's the second bit of evidence we mount up against the null? And I would say the second one, the one I'm hinting at, is sort of the main one.

We use the first one to get to the second, a probability.

Then, the p value. The p value, exactly. So we have our test statistic, right? That distance and direction from the null, which is in the center.

You can just calculate that as our sample mean minus the null over the standard error.

And then we have our P value, a two sided P value we calculate on both sides of the distribution.
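Written out, that test statistic is t = (x̄ − μ₀)/(s/√n) on n − 1 degrees of freedom. A small sketch of that calculation (the usage numbers are made up for illustration):

```python
import math

def one_sample_t(xbar, mu0, s, n):
    # t = (sample mean - null value) / standard error, with df = n - 1
    se = s / math.sqrt(n)
    return (xbar - mu0) / se, n - 1

# Made-up numbers: sample mean 52 against a null of 50, s = 4, n = 16
t, df = one_sample_t(52, 50, 4, 16)   # t = 2.0, df = 15
```

A positive t means the sample mean sits above the null; a negative t means it sits below.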

So here are our steps. First we want to verify that our requirements are satisfied.

We're using the T procedures, so we either want the observations to be normally distributed or the sample size to be over 40. As long as one of those holds, we're good to go. As always with hypothesis testing, we want to specify the null and alternative, and then we specify the significance level.

What's our significance level? The main one we use, 0.05. And then we calculate the T statistic.

And that's going to be on a certain degree of freedom in this one sample t test world.

Degrees of freedom we just calculate as N minus 1.

Pretty straightforward there. Then we get the p value. We'll get this P value from the table. It won't be quite as laborious a calculation as it was in the z world.

Then we state our conclusions. So very important here, and hopefully everyone has this already: if our P value is less than our alpha, what do we conclude? We reject the null. Right, because it tells us there's a very small chance of seeing the result we saw under the assumption that the null is true.

And if the p value is greater than alpha, we fail to reject.

So once again, the idea of this hypothesis testing is we are operating in this world where we assume this null, this mean is true.

Right? And then we calculate maybe a sample mean way over here and find that it's way up higher than our null.

And under the normal curve, we can calculate the probability, the area, of seeing this, right? And if this probability is super small, it's unlikely that the null is true.

So it gives us the evidence to reject the null. So how do we do this with a t table? Well, it'll be similar to how we did it with the confidence interval.

So first we just need to calculate degrees of freedom, go to that row and kind of block everything out but that row.

Then that t test statistic that you calculate, you just take the absolute value of that and then you kind of find where that's positioned in between the different t critical values on those rows.

And then you can scroll down to the bottom of the table to find your two sided p value.

So if this t critical value or your t test statistic is in the table, you can go to that column containing it.

Quite often if you're going to do it with a table, it won't be.

So it'll be sort of in between two values. So you'll specify your p value as some sort of inequality.

We're going to go over how to do it with the table for a few minutes and then we'll switch over to the calculator.

This slide looks a little crazy, but let's say we calculated a t test statistic of negative 2.52 and we had n of 8.

So we could do n minus 1, and we get over here to degrees of freedom 7. Take the absolute value of this t test statistic, and then we find where that lies here.

Right. Notice we have all these critical values going in this direction.

We find that our 2.52 is right here. So we can scroll down and see what our two sided P value is.

And we see our two sided P value lies somewhere in between 0.02 and 0.05.

If our significance level was 0.05, would we reject or fail to reject the null in this situation?

Is it less than 0.05? Yep. So we reject. All right, let's try some other ones. This is the same one, so I'll just click through it. We take the absolute value, right? Since we're at n of 8, we can just do n minus 1, so we only look at this 7 row. We want to be looking at the two sided p value. So we scroll over and find that 2.52, the absolute value of that, in between these two.

So we get a p value between 0.02 and 0.05. Same thing with the positive version of this, right? We're taking the absolute value, so we get between 0.02 and 0.05. What's the P value if T equals 5.1 on 8 degrees of freedom? It's going to be somewhere over here, right? So how would you specify that? What's our last value? The values are getting smaller, right? So you'd say P is less than. You got it. So P is less than 0.002. Notice the relationship happening here, right? We've already kind of outlined this relationship up on the curve.
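If you want a sanity check on those table readings without a table or a TI-84: the two-sided p-value is just twice the area under the t curve beyond |t|. A rough numerical sketch (trapezoid-rule integration of the t density, which is not how the calculator actually does it):

```python
import math

def t_pdf(x, df):
    # Density of Student's t distribution with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def two_sided_p(t, df, steps=100_000, upper=60.0):
    # Approximate 2 * P(T > |t|) by integrating the density from |t| to a big cutoff
    t = abs(t)
    h = (upper - t) / steps
    area = 0.5 * (t_pdf(t, df) + t_pdf(upper, df))
    for i in range(1, steps):
        area += t_pdf(t + i * h, df)
    return 2 * area * h
```

For t = 2.52 on 7 degrees of freedom this lands between 0.02 and 0.05, and for t = 5.1 on 8 degrees of freedom it comes out below 0.002, matching the table readings above.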

As our T values get higher, what's happening to our P values?

Yep, they get lower. And notice that on the curve, right, we have a T test statistic that's higher, that's showing our data is going to be further away from the null.

We're going to have a smaller area in that tail. So smaller areas give us evidence to reject; higher T values give us that evidence. All right, what about the P value if t equals 0.7? It would be greater than 0.5.

Right. Because it's going off in this direction. So with this one, would we reject or fail to reject?

We fail to reject, right? Because with a really high P value, our sample data is likely not very far from our hypothesized mean. All right, let's do an example. Are there any questions about this table, how it works?

After the attacks of September 11, 2001, there was concern about the air quality near the World Trade Center.

Measurements were recorded at random locations in Building 5 on 10 different days. Immediately after the collapse, the air lead level was sampled. The mean across the sampled days was 3 and the standard deviation 1.91.

The EPA has established an air quality standard for lead of 1.5. Use a 0.05 significance level to test the claim that the sample is from a population with a mean greater than the EPA standard.

So in this one sample T world, right, we're collecting sample data, taking the mean of that sample data, and then we're testing that mean against some expected claim outside the study.

So that claim outside the study is the null hypothesis.

The alternative is that it's not, that the true value doesn't equal it. So in this situation, which number up there would be our null hypothesis? There's a lot of numbers. Exactly, it would be that the true mean is 1.5 in this group.

That's what we would expect. And then what's our sample mean and sample standard deviation? 3 and 1.91.

And what's our N, and what's our degrees of freedom? It's kind of funny on this one, but they recorded it on 10 different days. So N would be 10, and then degrees of freedom would be what?

How do we calculate degrees of freedom? N minus one. Yeah. So what would it be here? It would just be nine. All right, so what do we do next? What are the two bits of evidence we mount up against the null? The test statistic and P value. So we've got to do the test statistic first. How do we do that here? Our null was 1.5, our mean was 3, our standard deviation is 1.91. How do we calculate our test statistic? It's just our sample mean minus our null over the standard error.

Do that, we get 2.48. Now what's our next step here? Look on the table. You got it. Look on the table. We've got our distance and direction from the null.

We are at 9 degrees of freedom. So at 9 degrees of freedom, looking for 2.48, we find 2.48 to be right here, somewhere in between 0.02 and 0.05.

So we would have this P value that is between 0.02 and 0.05.

So will we reject or fail to reject at the 0.05 level? We reject.

Right. So since our P value is less than alpha, we reject the null in favor of the alternative,

To conclude that there is evidence at the 0.05 level that the lead levels in the air at Building 5 on the days immediately following the attack were higher than the recommended standard.

It's estimated that the mean lead level was 3. Any questions about how we did this, about the conclusion or anything?
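As a quick check outside the table, the arithmetic for this example can be reproduced in a couple of lines (an editor's sketch, not the class's required method):

```python
import math

# Air lead example: null 1.5, sample mean 3, s = 1.91, n = 10
se = 1.91 / math.sqrt(10)   # standard error of the mean
t = (3 - 1.5) / se          # roughly 2.48, on 9 degrees of freedom
```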

So let's do another. And I think this is the one you're going to get a calculator with.

So go ahead and get your calculator handy. In general, the average weight of rats is mu equals 175 grams.

The mean body weight of rats for the particular type of genetic disorder is unknown, as is the standard deviation.

It would be useful to know whether the mean body weight of rats with the genetic disorder is different from the mean body weight of rats in the general population.

The mean and standard deviation of a sample of n equals 9 rats are given below. Test the hypothesis that rats with the disorder have a mean body weight different from 175.

We have a sample mean, we have a sample standard deviation.

What's our null and alternative here? Null is 175; the alternative is it doesn't equal 175. So we're going to test for differences in our data versus the null.

We have an n equal to nine. What's degrees of freedom? Eight. You got it. So now go to that STAT button sort of in the top center of your TI calculator, then go right to the TESTS tab and scroll down to the second option that says T-Test. And within T-Test, just like with the confidence interval.

You have two options: the Data option and the Stats option.

Once again, the Data option refers to when you're entering the data in as a list, like we did earlier in the semester when we entered data into lists and calculated summary statistics. That's what you would use if you're given a raw list of data.

But what are we given here? We're just given statistics. So from this example we're just going to hit stats.

And once you hit stats, it asks for a bunch of information.

It asks for the null; it asks for the sample mean, the sample standard deviation, and N.

And then it asks for what type of test you want. It's got the little equals sign with the line through it.

That's two sided test. That's something we're always going to do. You can also get a one sided test in either direction as well.

So in this situation what are you going to put in for the first value?

Uh, 175. You got it. And then what about for your X bar? Just that 182.67. And then for Sx, what are you going to put in? Just the standard deviation, 10.94. And then what's your N? Would it actually be 10? Yeah. And degrees of freedom is nine. I'm glad you said that; degrees of freedom is calculated behind the scenes.

So if you're. I think it'll be given to you once you actually hit calculate.

But yeah, so if you do the calculator, degrees of freedom will happen behind the scenes.

And then we're going to do a two sided test. We love two sided tests. You can either hit calculate or draw. They're both going to give you the P value. Draw will just give you a nice little curve to look at too.

So that's kind of fun. But both should give you the test statistic and the P value if you hit calculate and enter.

So you should come up with something that looks like this.

Wait, did you say nine or ten? No, I was wrong. I'm sorry, you were right. N is 9. Sorry, professor malfunction. N is 9; in one of the earlier examples N was 10. So degrees of freedom is 8. Forget everything I said in the past 45 seconds. Yeah, thank you. Sorry. So we put in all this stuff over here, and we should get a P value that's greater than our alpha, and then we would fail to reject. Any questions about that? Any calculator issues?

All right, let's do some more practice. A distribution of diastolic blood pressures for the population of female diabetics between ages 30 and 34 has an unknown mean mu and standard deviation sigma.

Researchers want to know whether the mean of this population is different from the mean diastolic blood pressure of the general population of females in this age group, which is 74.4.

A sample of 10 diabetic women in this age group is selected.

Their mean diastolic blood pressure is 84 and standard deviation is 9.1.

Using this information, conduct a two sided test with alpha equals 0.05.

Okay, so what is our null? What's our alternative? What's our sample mean, sample standard deviation, N? A lot of information here, right? But half the battle with these is picking apart what we're going to type in. Is it 74.4? You got it, 74.4 would be the null. We have a sample of 10, and then our sample mean is 84.

Sample standard deviation is 9.1. We're testing at the alpha equals 0.05 level. So we can go to stat test, go to that second option, the 1 sample T test.

And now let's just enter all this in and see what we get.

So for the null we'll put in 74.4. For the x bar we'll put in 84. For Sx we'll put in 9.1. For n we'll put in 10, and then we can just go ahead and hit calculate.

What do we get for our test statistic and p value? 3.33? You got it. So you should get a t of 3.336 and a P value of 0.008. So what do we conclude here? Do we reject or fail to reject at the alpha equals 0.05 level?

Reject. We reject. Right. So we reject the null because the p value is so low.
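The calculator's test statistic here is easy to double-check by hand; this quick sketch reproduces it:

```python
import math

# Blood pressure example: null 74.4, sample mean 84, s = 9.1, n = 10
se = 9.1 / math.sqrt(10)
t = (84 - 74.4) / se        # roughly 3.336, on 9 degrees of freedom
```

The calculator then reports p = 0.008 for this t, and since 0.008 is below 0.05, we reject.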

So in summary here the T distributions help us deal with the fact that we are estimating the population standard deviation with the sample standard deviation.

We're entering that new level of uncertainty in the system.

So because of that, we have to introduce those degrees of freedom.

Degrees of freedom are a direct function of our sample size.

So as we would expect, just like with the sample mean or other sample statistics, as N gets larger, our sample standard deviation will do a better job of estimating our population standard deviation.

The basic approach to hypothesis testing and confidence intervals is similar to the stuff we did before, but we just now have actually a little bit easier way to do it.

We can just do it with a calculator. Hopefully we kind of all understand a little bit of what's going on behind the scenes.

The basic steps of hypothesis tests and confidence intervals are going to be the same regardless of the methods used.

Right. So in the next couple weeks, it's kind of wild, but we don't have that much more material to cover in this class, which is just kind of crazy.

But in the next few weeks, we will use these overall ideas to sort of apply them to different types of methods.

Like, we're still going to be in the normal distribution world, but within that distribution we're going to look at comparing two different groups in a couple different ways.

So, but before we do all that, let's just do a little bit more practice today.

So we're going to pop back over to confidence intervals.

So I'm going to let you all read through this one and solve it, hopefully with the calculator, and then we'll come back together. Feel free to work with somebody next to you if you'd like.

Okay, so what do we get for the 99% confidence interval and what is the interpretation of that confidence interval?

So we should have gone to STAT, TESTS, to the eighth option, TInterval.

Do you enter this in as Data or Stats? Stats, right. What do you put in for X bar? The sample mean, yep. And then what about for s? 215. We had 75 renters, so n is 75. The only catch here is we're asking for a 99% confidence interval.

So for the C-Level, just make sure to put in 0.99. It's going to default to 0.95, but you want 0.99 there.

This is what's shown. If you did it with a table, you could just look this up in the table and construct it like that.

You did it with the calculator, you should get something very similar.

So is everybody getting something somewhere around this?

What would be the interpretation of this interval?

We are 99% confident that. What are we trying to do with the confidence interval?

What are we trying to capture? We have sample data. We calculated the mean of that data. Great. The mean of that data is the best guess at the parameter.

But because it's sample data, we know it's not perfect, right? So the confidence interval gives us a plausible range of values in which we think the real parameter likely lies.

Right. So we're 99% confident that what? We're 99% confident the true mean is between 1160 and 1290. All right, let's do one more. In general, the average weight of newborns is mu equals 3500 grams.

The mean body weight of newborns with a particular disorder is unknown, as is the standard deviation.

It may be useful to know whether the mean body weight of newborns with a disorder is different from the mean body weight of newborns in the general population.

A study was conducted with 12 newborns with the disorder and found the sample mean body weight to be 3169.5 with a sample standard deviation of 327.88.

Test the hypothesis that newborns with the disorder have mean body weight different from 3500.

Use a 0.05 significance level. So we already have the null and the alternative. What's our n? N is 12. If our n is 12, our degrees of freedom would just be n minus 1, which would be 11.

What is the sample mean and sample standard deviation?

Go ahead. Yeah, it's this. This will be the sample mean, this will be the sample standard deviation.

So we can just once again use that stat test. And which calculator function are you going to use?

What was that? So this one, it's asking us to do a T test. So for a T test we want to get that T test statistic and that P value.

So we're actually going to use the second option T test.

The eighth one is that T interval. So yeah, let's go to that second option T test and let's enter all this stuff in.

So for the null we would enter in what? We would just enter in that 3500. And then for our X bar, we enter in 3169.5. And what do you get for your t test statistic and P value?

Negative 3 something? Hopefully negative 3.5. And what's the P value? With that, we got a very small P value. So that gives us the evidence to conclude what? Do we reject or fail to reject with that small P value?

Reject. Yeah, so we got the small P value, so we reject the null and conclude that we found a significant difference between newborns with this disorder and the general population.
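Same sanity check for this one — note the statistic comes out negative because the sample mean sits below the null; only its absolute value matters for the two-sided test:

```python
import math

# Newborn example: null 3500, sample mean 3169.5, s = 327.88, n = 12
se = 327.88 / math.sqrt(12)
t = (3169.5 - 3500) / se    # roughly -3.49, on 11 degrees of freedom
```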

Real quick, I just want to go over a few things from homework 8. A lot of people missed some of these. The Framingham Heart Study recorded the systolic blood pressure of 3534 adults, and the 95% confidence interval is 126.7 to 127.9.

The correct interpretation of the confidence interval is that we are 95% confident that the interval ranging from 126.7 to 127.9 contains the population mean.

So when we construct a confidence interval, we are 95% confident in the methods used to capture the population parameter's true value.

It's based on the idea of theoretically resampling the population a ton of times.

With the same sample size, 95% of the time we would expect to see the true population parameter fall between our lower and upper bounds.

When we talk about 95% in CIs, we're not talking about chance, but rather confidence in our methods. I realize this is sort of confusing, and they sound very similar.

I also realized that in statistics we talk about chance a lot.

So this thing tends to trip people up a lot.

So I just want to highlight that. Along the same lines, same homework: a study reports the mean change in high density lipoproteins for adults eating raw garlic six days a week for six months.

Margin of error for 95% confidence interval is given as + or -6 milligrams per deciliter.

This means that the study used a method that gives a result within six of the true mean HDL change in the population for 95% of all samples of 3534 adults.

So once again, when we construct a CI, it's based on the idea of theoretically resampling the population many, many times.

95% of the time we would expect to see the true parameter fall between the low and upper bound.

So this one kind of gets at that theoretical definition based on resampling. Next one: you plan to construct a confidence interval for the mean mu of a normal population with a known sigma.

Which of the following will reduce the size of the margin of error?

Well, all of these actually will reduce that margin of error.

Remember that margin of error is a thing you add and subtract to the sample mean to get the confidence interval.

So if we use a lower level of confidence, that will make our margin of error smaller.

If we increase the sample size, that will decrease standard error.

And then if we reduced sigma, that would do the same.

So this kind of refers to this slide. We talked through how decreasing the sample size makes that denominator smaller, which makes the standard error larger and leads to a wider interval.

If we increase the confidence level, it'll lead to a wider interval.

And if we increase the standard deviation, it will lead to a wider interval.

So what you want to do for this one is just kind of take this slide and sort of flip it around, you know.

So increasing the sample size would be a narrower interval.

Decreasing the confidence level will lead to a narrower interval, and decreasing the standard deviation will lead to a narrower interval.

In a test of statistical hypotheses, the P value tells us the probability of seeing the result we saw, or something more extreme, under the assumption that the null is true.

So remember, it's that innocent until proven guilty.

We're operating in a world where the null is true and we're mounting up that evidence.

We're calculating the probability of seeing the result we saw, and if that probability is super small, we can use that as evidence to reject the null.

That is it for me today. Homework nine is due tonight, so make sure to get that done.

Remember, all our homeworks are due on Mondays, same with labs.

So don't forget that. If you have any questions about that, I will be here for a few minutes.

If not, have a great rest of your day and I will see you on Wednesday.

Can we actually go over that? Sure. Take care. Thank you. Bye. Take care. Is it fair to, like, go over what I got? Sure. And see if it's right and then you tell me if it's wrong or not or.

Yeah, you can just change it. Okay, so for the first one, I got that the T procedures may be used when the sample is large, say at least 40.

Yeah. You may use, provided sample size as large, say at least 40.

Okay, question two. I got all of the above. You got it. Okay, question three. I got the inference from a voluntary response sample can't be trusted.

You got it. A test of significance determines whether the observed effect is due to random chance.

A statistical test is more likely to find a significant increase in mean life if it is based on a very large sample. You got that? Yeah. Okay, now for this one: why is it based on a very large sample instead of a small one?

Because in my opinion, you would see a bigger contrast in a small sample, while a larger sample gets closer to the true mean.

So we have our curve here, right? We have our null in the middle, and our t test statistic takes n into account.

Right. So if we have, we have like s over square root n. So if we have a really small N, what's it going to do to this whole term?

Wouldn't it make it miniature? If we have a really small N, it's going to make the whole thing larger.

Right. So let's see. Actually, sorry, this is the wrong one. So s over square root of n. Right. We have that in the denominator of our t test statistic.

So we have our x bar minus mu over here. So if this is smaller, it's going to make this whole thing larger.

Right. So mathematically, if n is larger, we'll be more likely to have a small area over here and reject the null. So all else equal, if you have a larger sample, you'll be more likely to reject the null.

Okay, that makes more sense. Let me go over here for six. I got the data can be thought of as a random sample from the population of interest.

You got it. Seven was just math. I got 268, 99, negative 289. And the last one should be larger. You got it. Okay, awesome. There's just that one question. I was like, did y'all have a question about something?

We were just confused about one, but we got it. You got it, right. Okay, awesome. Thank you. Have a good rest of the day. No problem at all.
