Psychological Statistics Quiz #3


23 Terms

1
New cards

one-sample t-test

1. Is the sample mean significantly different than the known population mean?

• Examples:

• Was the class average (sample mean) on Exam 1 significantly higher or lower than Dr. Adams’s historical exam-1 average of 73%?

• Is the sample-mean height of females in PSYC2300 significantly different than the national average of 64.5 inches?

2. Is the sample mean significantly higher or lower than
some other meaningful value (e.g., scale midpoint)?
• Examples:
• On a scale from 1=unhappy to 5=happy, was the mean response
significantly higher than the scale midpoint ("average happiness")?
• On a scale from 1=Democrat to 5=Republican, was the mean
response significantly different from the scale midpoint? In other
words, was the sample significantly biased toward Democrat or
Republican?
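Both kinds of questions above run through the same test. A minimal sketch using Python's scipy (the exam scores below are invented for illustration, not real class data):

```python
# Hypothetical data: did a class of 10 students score differently from
# Dr. Adams's historical exam-1 average of 73%? (scores are made up)
from scipy import stats

scores = [68, 75, 80, 71, 66, 90, 77, 73, 85, 79]
t_stat, p_two_tailed = stats.ttest_1samp(scores, popmean=73)

# If p < .05, the sample mean differs significantly from 73
print(f"t = {t_stat:.2f}, p = {p_two_tailed:.3f}")
```

Here `popmean` is the pre-determined comparison value (the known population mean or scale midpoint).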

2
New cards

over-simplified t value

For the (over-simplified) purposes of this intro course...
t is conceptually identical to the z-score of a sample mean
In a one-sample t-test, t represents the number of standard
errors between the sample mean and the pre-determined
value.

Like z: as t gets big, the evidence against H0 gets REALLY big

3
New cards

p-value definition

p = the probability that we would have gotten the results we
got, even if the null hypothesis (H0) were true.


Applying this definition in the context of one-sample t-tests...
IF H0 is true; i.e., IF the population mean of the variable is
equal to our pre-determined set point, THEN we would only
obtain the observed difference between the sample mean
and our pre-determined value in [p*100]% of random
samples.

4
New cards

SPSS one-sample t-test diagram

The first table of output will provide descriptive statistics:

The lower table provides inferential statistics (with the test value).

With t-tests, t is conceptually similar to z. It’s the number of standard errors between the test value and the sample mean 

The sample mean of 73.3 is 1.8 standard errors (1.8 × 3.7 ≈ 6.7 points) away from the test value of 80.

Mean difference is the difference between the sample mean and the test value
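The arithmetic behind that t can be checked by hand. A tiny sketch reusing the numbers quoted above (sample mean 73.3, test value 80, SE 3.7):

```python
# t = number of standard errors between the sample mean and the test value
sample_mean = 73.3
test_value = 80
standard_error = 3.7  # SD / sqrt(n), taken from the SPSS output

t = (sample_mean - test_value) / standard_error
print(round(t, 1))  # -1.8: the sample mean sits 1.8 SEs below the test value
```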

5
New cards

evaluating p 

Researchers virtually always use the two-sided p.

• the one-tailed p will always be half the size of the two-tailed p.

• Consequently, using the one-tailed p tends to come across as p-hacking.
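The halving relationship can be verified with scipy's t distribution (the observed t and df below are hypothetical):

```python
# One-tailed p is the area in one tail; two-tailed p counts both tails,
# so it is exactly double for any observed t.
from scipy import stats

t_obs, df = 1.80, 29  # hypothetical observed t and degrees of freedom
p_one = stats.t.sf(t_obs, df)            # one-tailed p (upper tail)
p_two = 2 * stats.t.sf(abs(t_obs), df)   # two-tailed p

print(f"one-tailed p = {p_one:.3f}, two-tailed p = {p_two:.3f}")
```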

6
New cards

Defining p in context...

If the true population mean grade is 80%, then we should only obtain a sample mean of M = 73.3 (SD = 18.1) – or a sample mean further from 80 – in 8.4 out of every 100 random samples.

If p had been lower than .05, then we would reject the null hypothesis (H0) that the true population mean grade for these assignments = 80%.

If the null is true, there is an 8.4% chance of getting a sample mean as far from 80 as 73.3 is (in either direction). The null says the whole population's average grade is 80, so most sample means should land near 80. You can still get extreme values, but it's very unlikely.

7
New cards

df + APA format

df = degrees of freedom = ... The full explanation is confusing. Just know this: for a one-sample t-test, df = n - 1

Reporting the Results in APA format... A one-sample t-test showed that the mean altruism score in Dr. Adams's class (M = 4.14, SD = 1.39) was significantly higher than 3.5, t(151) = 5.68, p < .001, d = 0.46, 95% CI [0.23, 0.69].

(Note: M, SD, t, p, and d are italicized in APA format.)

NON-SIGNIFICANT RESULTS IN APA FORMAT:

If the results of your analysis were not significant,

YOU DO NOT HAVE TO REPORT MEAN & SD

A one-sample t-test showed that the mean happiness rating was not significantly different from the scale midpoint of 3, t(59) = 1.29, p = .34, d = 0.14.

DECIMALS IN APA FORMAT: Report all statistics to two decimal places (the hundredths place)...

EXCEPTION: if p < .01, report p to three decimal places (the thousandths place)

8
New cards

Cohen’s d 

What is a “small,” “medium,” and a “large” effect? By Cohen’s conventional benchmarks, d ≈ 0.2 is small, d ≈ 0.5 is medium, and d ≈ 0.8 is large.
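For a one-sample t-test, Cohen's d is the mean difference expressed in standard deviations. A quick check using the altruism numbers from the APA-format card:

```python
# Cohen's d (one-sample): (sample mean - test value) / SD
mean, sd, test_value = 4.14, 1.39, 3.5  # values from the altruism example
d = (mean - test_value) / sd
print(round(d, 2))  # 0.46, matching the d reported in the APA write-up
```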

9
New cards

Independent-samples t-tests

Is the mean of some variable significantly different between groups?

• Significantly different:

• The difference between means in our sample is SO BIG that we conclude there is probably a difference between group means in the population

• i.e., p < .05... reject H0

Independent-samples t-test... Example

In 2016, I collected some data from students at the University of Alabama.

One question I asked was “Do you support Trump?” (no; yes)

Another was “What is your parents’ approximate annual household income?”

What prediction would you make about the relation between these two variables?

10
New cards

Trump independent t-test 

Quick refresher from the one-sample t-test slides:
t represents the number of standard errors separating the observed effect from an effect of zero...

... and, in an independent-samples t-test, the observed effect is the size of the difference between means...
... SO: there are t SEs in the mean difference between groups. Here, there are 1.935 “20.39s” (standard errors) in the 39.46 mean difference.
(You can ignore the “Equal variances not assumed” row of the SPSS output.)

results in APA format

An independent-samples t-test showed that the average household income among pro-Trump students (M = 225.46, SD = 134.48) was significantly greater than the average household income among anti-Trump students (M = 186.00, SD = 136.39), t(178) = 1.94, p = .03 (one-tailed), d = 0.29, 95% CI [-0.01, 0.59].

(You must explicitly label the p as one-tailed; otherwise it comes across as p-hacking.)

IF pro- and anti-Trump students’ parents’ salaries are equal in the population, THEN we would only obtain a difference between these groups of $39K/year (or larger) in 2.7% of random samples.
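An independent-samples t-test like the one above can be sketched in scipy. The incomes below are invented stand-ins, NOT the actual Alabama data:

```python
# Hypothetical independent-samples t-test: do two groups differ in
# parents' household income ($K/year)? All values are made up.
from scipy import stats

pro_trump  = [220, 310, 150, 275, 180, 240]
anti_trump = [140, 200, 95, 210, 160, 130]

t_stat, p_two_tailed = stats.ttest_ind(pro_trump, anti_trump)
print(f"t = {t_stat:.2f}, two-tailed p = {p_two_tailed:.3f}")
```

`ttest_ind` assumes equal variances by default, matching the "Equal variances assumed" row of SPSS output.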

11
New cards

Paired-sample t-test

Repeated-Measures, a.k.a. Dependent-Measures,

a.k.a. Within-Subjects, a.k.a. Paired-Samples t-test

Uses:

1. Compare a pre-treatment mean to a post-treatment mean

• Example #1: score on a stats test prior to taking PSYC-2300... vs. score on a stats test after taking PSYC-2300.

• i.e., “treatment effect” of PSYC 2300 on stat knowledge: Were test scores before Psyc2300 (Time1) significantly lower than after Psyc2300 (Time2)?

• Example #2: Take a driving test at Time1... drink 10 beers... Take another driving test (Time2)  i.e., treatment effect of getting hammered on driving ability:

• Were scores @Time1 significantly higher than scores @Time2?

Uses:

2. Were scores on Variable #1 significantly different from scores on a similarly scored Variable #2?

• For this one, the two different variables must be scored on similar scales.

• Example: compare UTA students’ SAT math to SAT verbal --- note: both scored on a scale from 200-800

i.e., did the avg UTA student do significantly better on math or verbal?

• Example: compare students’ happiness on the weekend vs. happiness on Monday --- note: both scored on a scale of 1=unhappy to 10=happy

• i.e., Is happiness significantly different between weekdays vs. weekends?

12
New cards

SPSS Output – assessing hypothesized decrease in depressive symptoms, pre- vs post- trial of new anti-depressant.

APA format: A paired-samples t-test showed there was not a significant decrease in depressive symptoms from before the drug trial (M = 16.80, SD = 2.77) compared to after the drug trial (M = 12.00, SD = 3.81), t(4) = 2.30, p = .08, d = 1.03, 95% CI [-0.12, 2.11].
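A paired-samples test like this runs on the pre/post difference scores. The five pairs below are hypothetical, invented only so the means match the card (pre = 16.8, post = 12.0):

```python
# Hypothetical pre/post depressive-symptom scores for n = 5 participants
from scipy import stats

pre  = [15, 20, 14, 18, 17]  # mean 16.8
post = [15, 19, 4, 17, 5]    # mean 12.0

t_stat, p_two_tailed = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_two_tailed:.3f}")
```

Note df = n - 1 here (number of pairs minus one), because the test operates on the difference scores.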

13
New cards

DEFINING p IN ANY SITUATION..

If the null hypothesis were true, then we would find the effect we found (or a larger effect) in [p*100]% of random samples

If there were no difference between depressive symptoms at time-1 and time-2 in the population, we would find a mean difference of MD = 4.80 (SED = 2.08) – or a larger difference – in 8.3% of random samples from the population

14
New cards

Correlation

In t-tests, we were looking at the effect of a categorical IV on a continuous DV...
Correlation measures the relationship between two continuous variables


Positive correlations: when one value increases, the other tends to increase
• IQ and GPA
• Height and Weight


Negative correlations: when one value increases, the other tends to decrease
• Narcissism and modesty
• Frequency of church attendance and profanity use

15
New cards

Example of Correlation in SPSS

What is the correlation between self-reported self-esteem and self-perceived narcissism?

“Pearson Correlation” = r

“Sig” = p

number of participants = N

Reporting correlations in APA format... Follow this format: There was (not) a significant positive/negative correlation between v1 and v2, r(n - 2) = __, p = __. [Quick sentence explaining the correlation in simple language.]

APA format:

There was a significant positive correlation between self-esteem and narcissism, r(166) = .23, p = .003. In other words, people who rated themselves higher in self-esteem also tended to rate themselves higher in narcissism.

If narcissism and self-esteem had a correlation of zero in the population, we would find a correlation of r = .23 (or a larger correlation) in 0.3% of random samples from the population
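The same analysis can be sketched with scipy (the self-esteem and narcissism ratings below are hypothetical, not the real dataset):

```python
# Pearson correlation between two continuous variables (hypothetical ratings)
from scipy import stats

self_esteem = [3, 5, 4, 2, 5, 1, 4, 3]
narcissism  = [2, 4, 4, 1, 5, 2, 3, 3]

r, p = stats.pearsonr(self_esteem, narcissism)
df = len(self_esteem) - 2  # correlation df = n - 2
print(f"r({df}) = {r:.2f}, p = {p:.3f}")
```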

16
New cards

Correlation vs. Regression...

Correlation provides the strength and direction of a relationship...

As does Regression, but Regression ALSO enables PREDICTION...

• Given a value of Variable-1, Regression provides a model for predicting the value of Variable-2

• Remember y = mx + b from algebra class, back in the day?

• That’s regression. It’s the same exact thing, but with slightly different standard symbols

17
New cards

Regression SPSS Output

The R (capitalized) of the model is NOT the bivariate correlation coefficient, which for these two variables would be r(102) = -.76, p < .001

R: “Multiple Correlation Coefficient”... the correlation between the observed values of the dependent variable and the values predicted by the model.

R2 = % shared variance (also the effect size)

1. Use the output above to create the regression equation for this analysis.
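Without the SPSS output shown here, the same kind of regression equation can be sketched with scipy's `linregress` (the impulsiveness/conscientiousness scores below are hypothetical stand-ins):

```python
# Simple linear regression: fit y = b0 + b1*x, then write out the equation
from scipy import stats

impulsiveness     = [10, 25, 40, 55, 70, 85]        # hypothetical predictor
conscientiousness = [4.5, 4.0, 3.4, 2.8, 2.1, 1.6]  # hypothetical outcome

fit = stats.linregress(impulsiveness, conscientiousness)
# The regression equation, in y = b + m*x form:
print(f"Conscientiousness' = {fit.intercept:.2f} + ({fit.slope:.3f} x Impulsiveness)")
```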

18
New cards

Things to know about β

It’s the “standardized” slope...

• when all the variables are converted to z-scores, B1 = β → see next slide

• What β tells us: For every 1 SD increase in the predictor variable, the dependent variable will change by β SDs.

• “Fun” Fact: In a regression model with only one predictor variable (like this one), β will equal the bivariate correlation between the two variables

APA format:

A linear regression with Impulsiveness predicting Conscientiousness was significant, as changes in Impulsiveness accounted for 57.6% of the variance in Conscientiousness, R2 = .58, F(1, 102) = 138.56, p < .001, and higher levels of Impulsiveness predicted significantly lower levels of Conscientiousness, b = -0.05, β = -.76, p < .001
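The "fun fact" above can be checked directly: regress the z-scored outcome on the z-scored predictor, and the slope equals the bivariate r. Data below are hypothetical:

```python
# With one predictor, the standardized slope (beta) equals Pearson's r
import statistics
from scipy import stats

x = [2.0, 4.0, 5.0, 7.0, 9.0, 12.0]
y = [30.0, 28.0, 25.0, 21.0, 18.0, 12.0]

def z(values):
    """Convert raw scores to z-scores (mean 0, SD 1)."""
    m, s = statistics.mean(values), statistics.stdev(values)
    return [(v - m) / s for v in values]

beta = stats.linregress(z(x), z(y)).slope  # slope on z-scored variables
r, _ = stats.pearsonr(x, y)
print(round(beta, 6), round(r, 6))  # identical (up to rounding)
```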

19
New cards

Bivariate Correlation output → vs ↓ Linear Regression output ↓

In a Bivariate Correlation analysis, degrees of freedom = n – 2. Above, df = n – 2 = 104 – 2 = 102

20
New cards

APA Format Report of NON-SIGNIFICANT results...

A linear regression with Agreeableness predicting GPA was not significant, R2 = .00, F(1, 64) = 0.13, p = .73

21
New cards

chi-square (χ²) test for independence

When to use it:

Given categorical variable A-vs-B and categorical variable X-vs-Y... When people are members of category A (vs B), is it more likely that they are also members of category Y (vs X)?


Examples:
1. If you prefer dogs (vs cats), are you more likely to be an extravert (vs. introvert)?
2. If you are a STEM major (vs. not) are you more likely to be a frequent video gamer (vs not)?
3. If you enjoy cooking (vs. don’t), are you more likely to have a BMI < 30 (vs. BMI > 30)?
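Example #1 above can be sketched as a 2x2 contingency table in scipy (the counts are hypothetical):

```python
# Chi-square test for independence: dog vs. cat people crossed with
# extravert vs. introvert (all counts invented for illustration)
from scipy.stats import chi2_contingency

#            extravert  introvert
table = [[30, 10],   # dog people
         [15, 25]]   # cat people

chi2, p, df, expected = chi2_contingency(table)
print(f"chi2({df}) = {chi2:.2f}, p = {p:.3f}")
```

For a 2x2 table, df = (rows - 1)(columns - 1) = 1.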

22
New cards

One-Way Analysis of Variance (ANOVA)

When to use it:

• For a numeric variable, is the mean among Group 1 significantly different from the mean among Group 2 (vs. Group 3, vs. Group 4, etc.)?

• ANY situation where you can use independent-samples t-test, you can also use one-way ANOVA.

• If testing between 3 or more groups, you can use one-way ANOVA but NOT independent-samples t-test.

Examples:

1. Is the average IQ different between Democrats, Republicans, or Independents?

2. Is GPA different between psych majors, BNS majors, or engineering majors?

3. Is alcohol consumption different between 1st-years, 2nd-years, 3rd-years, and 4th-years?
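Example #2 above can be sketched with scipy's one-way ANOVA (the GPAs below are hypothetical):

```python
# One-way ANOVA across three groups (hypothetical GPAs by major)
from scipy import stats

psych = [3.2, 3.5, 3.1, 3.6, 3.4]
bns   = [3.4, 3.7, 3.3, 3.8, 3.5]
engr  = [3.0, 3.3, 2.9, 3.4, 3.2]

f_stat, p = stats.f_oneway(psych, bns, engr)
print(f"F = {f_stat:.2f}, p = {p:.3f}")
```

With only two groups, `f_oneway` and `ttest_ind` reach the same conclusion (F = t²).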

23
New cards

2x2 Factorial ANOVA

When to use it:

• Assessing how a numeric variable differs between two different 2-level categorical variables

Examples:

Factor 1: ADHD (yes vs. no).
Factor 2: Adderall (20mg vs. placebo)
Outcome variable: Attention Test Score

2x2 Factorial ANOVA can tell you whether there was an INTERACTION EFFECT
Aka MODERATION
• Main effects: Did ADHD (present vs. not) influence attention scores? Did Adderall (20mg vs. placebo) influence attention scores?


• INTERACTION EFFECT: The influence of Adderall on improving Attention depends on whether a person has ADHD.


• INTERACTION EFFECT: The influence of Adderall on improving Attention was moderated by whether participants had ADHD

More Examples:

Factor 1: pre-bed screentime (yes vs. no).
Factor 2: pre-6pm cardio exercise (20min vs. none)
Outcome variable: time elapsed between sleep attempt and sleep onset

Factor 1: alcohol drinker vs. not.
Factor 2: male vs. female
Outcome variable: Autobiographical History of Violence Score
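The interaction idea above can be made concrete with arithmetic on hypothetical cell means for the ADHD x Adderall example:

```python
# The interaction effect in a 2x2 design is a "difference of differences"
# in cell means. Attention scores below are hypothetical.
cell_means = {
    ("ADHD", "adderall"): 78, ("ADHD", "placebo"): 55,
    ("no ADHD", "adderall"): 81, ("no ADHD", "placebo"): 80,
}

# Simple effect of Adderall within each ADHD group
effect_adhd    = cell_means[("ADHD", "adderall")] - cell_means[("ADHD", "placebo")]
effect_no_adhd = cell_means[("no ADHD", "adderall")] - cell_means[("no ADHD", "placebo")]

interaction = effect_adhd - effect_no_adhd
print(interaction)  # 22: Adderall helps far more when ADHD is present
```

A nonzero difference of differences is what the factorial ANOVA's interaction term tests for significance.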