Population
All the individuals to whom a research project is meant to generalize.
Sample
A small percentage of the population that is tested.
Measurement
Systematically assigning numbers to objects, events, or characteristics according to a set of rules.
4 Scales of measurement
Nominal
Ordinal
Interval
Ratio
(N.O.I.R)
Nominal scale
Classifies objects or individuals as belonging to different categories. Offers the least information in comparison to other methods. Order doesn’t matter.
(ex: Male vs Female, Ethnicity, Favorite color, etc.)
Ordinal scale
Order of categories matters, but the difference between each category is not necessarily the same. Rank-order data is measured.
(ex: A-F Grading scale. The difference between one person’s B grade from an A may not be the same amount as another person’s D grade from B.)
Interval scale
Characterized by equal units of measurement throughout the scale. The order matters, but there is no true zero value (a score of 0 would mean the measured characteristic isn't present at all).
(ex: Temperature)
Ratio scale
Order matters, all units are of equal size throughout the scale, and there is a true zero value.
(ex: Height, Age in years, Weight)
Statistics ___ and _____ data
Organize; Summarize.
Allows generalizations about a population to be made from a sample.
Descriptive statistics
Statistical techniques used to organize data to determine typical characteristics of different variables. Includes 2 types:
Description of the average score
Description of how spread out or close together the data lie.
3 Different types of averages that can be calculated (Measures of central tendency)
Mode
Median
Mean
Things we can do with statistics:
Describe data
Measure relationships
Compare groups
Mode
The score that occurs most frequently.
Bimodal
If the distribution has two scores that tie for occurring most frequently.
Multimodal
If three or more scores are tied in occurring most frequently.
Median
Defined as the middle point in a set of ordered scores; the point below which 50% of the scores fall. Provides information about the distribution of other scores in the set. Impossible to find on a nominal scale.
Mean
The arithmetic average of the scores in a distribution; is calculated by adding up the scores in the distribution and dividing by the number of scores. It's the most commonly used type of average partly because it’s mathematically very manipulable.
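The three averages above can be computed with Python's standard statistics module (the scores here are hypothetical):

```python
import statistics

scores = [70, 85, 85, 90, 100]  # hypothetical quiz scores

mode = statistics.mode(scores)      # most frequent score: 85
median = statistics.median(scores)  # middle of the ordered scores: 85
mean = statistics.mean(scores)      # arithmetic average: 86
```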
Outliers
Scores that are inordinately large or small. Because the mean gives every score equal weight, outliers can distort the mean score.
Measures of dispersion
Statistics that describe the spread of data
The 3 measures of dispersion
Range
Variance
Standard deviation
Range
The most straightforward measure of dispersion; it describes the spread between the highest and lowest scores in a set.
Exclusive range
Finding the range by subtracting the lowest score from the highest score.
Inclusive range
Subtracting the lowest score from the highest score and then adding 1 to it: (HS - LS) + 1
By doing this, it will include both the high and low scores in the result.
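Both versions of the range take one line each (scores hypothetical):

```python
scores = [2, 5, 7, 3, 9]  # hypothetical scores

exclusive_range = max(scores) - min(scores)      # 9 - 2 = 7
inclusive_range = max(scores) - min(scores) + 1  # (9 - 2) + 1 = 8, counts both endpoints
```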
Standard deviation
Often used for interval and ratio scale data. It’s an approximation of the mean distance that the scores in a set of data fall from the sample’s mean.
Variance
The standard deviation squared. Together with the standard deviation, it is among the most commonly used measures of dispersion. Both require a mean to calculate, so they are not appropriate for ordinal or nominal data.
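A minimal sketch of both measures with the standard statistics module; the p-prefixed functions treat the data as a whole population (the sample versions, statistics.variance and statistics.stdev, divide by n - 1 instead):

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical interval-scale scores

variance = statistics.pvariance(scores)  # mean squared distance from the mean: 4
std_dev = statistics.pstdev(scores)      # square root of the variance: 2
```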
Correlation
A measure of the degree and direction of the relationship between two variables.
Ranges between -1.00 to 1.00 (The closer the absolute value is to 1.00 the stronger the correlation)
Positive correlation
When one variable increases, the other variable increases.
Negative correlation
An increase in one variable is accompanied by a decrease in the other variable.
Scattergram
Type of graph used to demonstrate the relationship between two variables.
Pearson’s product-moment correlation (Pearson’s r)
Correlation test: When two variables being correlated are measured on interval or ratio scales.
Spearman’s rho (ρ)
Correlation test: When one or both variables are measured on an ordinal scale, especially if the variables are rank-ordered.
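A hand-rolled sketch of both coefficients; Spearman's rho is just Pearson's r computed on ranks. (The toy ranking step below does not average tied ranks, which a real implementation would.)

```python
def pearson_r(x, y):
    # degree and direction of a linear relationship, between -1.00 and 1.00
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    # replace each score with its rank (1 = smallest), then correlate the ranks
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson_r(ranks(x), ranks(y))
```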
Bivariate correlation coefficient
When the relationship between two variables is being assessed
Multiple correlation
Reflects the degree of relationship between a set of predictors and the predicted variable. Used if you want to know how well a group of variables predicts one other variable.
While the bivariate correlation coefficient can be positive or negative, a multiple correlation can only be positive.
Multiple regression
Provides information about the individual predictors, includes relative contributions to the multiple correlation and the direction of the relationship of each predictor with the predicted value.
Error variance
Differences within a group, one measure is standard deviation. They are the natural, random fluctuation in scores caused by factors other than the independent variable. The more there is, the more difficult it is to identify consistent differences in performance between groups.
Ratio of the differences between the groups and the differences within the groups may be written as:
between-group differences / within-group differences
If the independent variable (IV) has little or no effect, the ratio will approximately be 1. But, if the independent variable does have an effect on the scores, the ratio will be greater than 1.
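This between/within ratio is the logic behind the one-way ANOVA F statistic; a minimal sketch with hypothetical group scores:

```python
import statistics

def f_ratio(groups):
    all_scores = [s for g in groups for s in g]
    grand_mean = statistics.mean(all_scores)
    k = len(groups)
    # between-group variance: how far each group's mean falls from the grand mean
    ms_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups) / (k - 1)
    # within-group (error) variance: spread of scores around their own group's mean
    ms_within = sum(sum((s - statistics.mean(g)) ** 2 for s in g)
                    for g in groups) / (len(all_scores) - k)
    return ms_between / ms_within

# clearly separated groups yield a ratio well above 1
print(f_ratio([[1, 2, 3], [7, 8, 9]]))  # 54.0
```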
When comparing two groups, in order to find a significant difference you want
Between-group differences to be high and within-group differences to be low
A ratio of between-group differences to within-group differences is used for all statistical techniques that compare groups when data is measured on
interval or ratio scales
What are the common statistical tests used to determine if there are significant differences between groups?
T-tests, ANOVA, and other parametric and nonparametric tests.
Independent-samples t-test
Used if a researcher wishes to compare two groups.
Dependent-samples (or correlated-samples, paired-samples, or repeated-measures) t-test
Used if comparing two sets of scores from one group of participants tested twice. Used especially if matching is used.
Analysis of variance (ANOVA)
Used if comparing 3 or more groups
Nonparametric
Used when data are not measured on an interval or ratio scale of measurement. Makes no assumptions about population parameters; less powerful than parametric tests.
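The test-selection rules above can be condensed into a small (hypothetical) helper:

```python
def choose_test(scale, n_groups, same_participants):
    # hypothetical helper encoding the rules above; scale is the scale of
    # measurement of the dependent variable
    if scale not in ("interval", "ratio"):
        return "nonparametric test"
    if n_groups >= 3:
        return "ANOVA"
    if same_participants:
        return "dependent-samples t-test"
    return "independent-samples t-test"
```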
Parameter
Characteristic of a population
Parametric test
Statistical tests comparing groups using interval or ratio data. Makes assumptions about the parameters of the population.
(ex: A researcher assumes that the sample standard deviation is a fairly accurate estimate of the population standard deviation.)
Between-groups variance is theoretically
The effect of the independent variable
Within-groups variance is
Error variance, ideally you want this to be low.
The ________ of a correlation coefficient is represented by how far it is from 0, while the _______ is represented by its sign (positive or negative).
strength; direction
You will have a greater chance of finding a significant difference between two groups if you run a
Parametric test
Experiments
Investigations in which the researcher manipulates an independent variable to determine if there are any differences in the dependent variable among equivalent groups.
If performed correctly, yields causal info.
Quasi-experiment
Independent variables are manipulated but the groups are not equivalent.
May yield causal info if confounds are eliminated.
Correlational study
An investigation that explores the effect of a subject variable on a dependent measure.
Does not yield causal info, but does identify relationships between the subject variable and the dependent variable.
Placebo
An inert treatment that has no effect on the dependent variable. Helps counteract demand characteristics.
Between-groups research design
(AKA Between-subjects, independent-groups research design)
Where the performance of participants in one or more groups is compared with the performance of participants in another group.
Disadvantages include difficulty finding enough participants, subject attrition, extraneous variables, and instrumentation error.
Control group
Considered the more natural condition. Participants in it either don't experience the manipulation or receive the placebo.
Experimental group
The treatment group in an experiment
If the control group and the experimental group differ on the dependent measure, it is assumed that
The difference is caused by the difference in the independent variable.
Three requirements that must be met for an experiment to yield causal results
Groups being compared must be equivalent
(temporal priority) Independent variable must be introduced before the dependent variable is measured
The design must be free of other potential confounds
Temporal priority
The independent variable must be introduced before the dependent variable is measured.
Random assignment
The preferred way of obtaining equivalent groups. All participants have an equal chance of being assigned to any group within the experiment.
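Random assignment itself is easy to sketch (participant IDs hypothetical):

```python
import random

def random_assignment(participants, n_groups, seed=None):
    # shuffle so every participant has an equal chance of landing in any group,
    # then deal round-robin so group sizes stay as equal as possible
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

groups = random_assignment(range(20), 2, seed=42)
```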
Selection bias
Likely to occur if researchers choose which groups participants will be in or if groups are not equivalent.
Random assignment yields the best results with ______samples. Equivalence among groups increases as they become more _______of the population.
larger; representative
Matching
Involves identifying pairs of participants who measure similarly on a characteristic that is related to the dependent variable and then randomly assigning each of these participants to separate experimental conditions.
Used when there are few participants and random assignment alone cannot be depended on to yield equivalent experimental conditions.
Pretesting
A test given before the independent variable is manipulated to establish a measure related to the experiment to find pairs for matching.
Flaws with matching
It can be difficult or impossible to find adequate matches for each participant, so some participants may need to be dropped from the study. This reduces how representative the sample is of the general population and may increase the risk of Type II error.
Within-subjects design
(AKA dependent-samples, paired-samples, repeated-measures designs)
One sample of participants is tested one or more times and compared with themselves. Typically, there’s less error variance as the same people are in every condition.
Disadvantages include susceptibility to demand characteristics, regression towards the mean, and carryover effects.
Subject variable
A characteristic of participants that cannot be manipulated by the researcher. (Age, Gender, etc.)
Two different ways of measuring age differences.
Cross sectional & Longitudinal designs
Cross-sectional design
Typically used by researchers to look for differences between age groups. Cannot provide causal results.
Extraneous variables
Variables that can affect the dependent variable.
If this is present for one group in an experiment, but not for the other, we cannot conclude that the change in the independent variable caused the change in the dependent variable.
Confounded results
When extraneous variables change along with the independent variable, they provide alternative explanations for the results of the study.
Confound
Extraneous variable, or any other flaws in the research design that limits internal validity.
Hold constant
One way of controlling an extraneous variable by applying the same level of extraneous variable to all groups in a study.
Counterbalance
One way of controlling an extraneous variable by having a variable be part of every condition. In essence, it’s applying the reverse to cancel out the imbalance.
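A sketch of complete counterbalancing, where every possible condition order is assigned to an equal share of participants so order effects cancel across the sample (condition names hypothetical):

```python
from itertools import permutations

def counterbalanced_orders(conditions):
    # one ordering per subgroup of participants; across the whole sample,
    # each condition appears in each serial position equally often
    return list(permutations(conditions))

orders = counterbalanced_orders(["A", "B"])  # [('A', 'B'), ('B', 'A')]
```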
Advantages of holding constant
Typically results in less error variance among the scores than counterbalancing. Makes it easier to reject the null hypothesis using statistical tests.
Advantages of counterbalancing
Yields greater external validity than holding constant
Internal validity
The extent to which the design of an experiment ensures that the independent variable, and not some other variable, caused a measured difference in the dependent variable.
Experimenter bias
Any confound caused by researcher expectations
Demand characteristics
Any confound caused by participant expectations
Single-blind procedure
Either the participants or the experimenter does not know which experimental condition the participants are in.
Double-blind procedure
Both the experimenter and the participants are unaware of the experimental conditions to which particular participants have been assigned.
Instrumentation effect
Occurs when the manner in which the dependent variable is measured changes in accuracy over time. This can be because:
Machines/tools wear out
Participants taking the test multiple times throughout a study may get better at it
Researchers may get stricter or more relaxed as they measure throughout an experiment
Subject attrition
(AKA subject mortality)
Participants may leave the study partway through.
Nonsystematic subject attrition
When participants leave a study (or their data cannot be used) for reasons unrelated to the subject of the experiment itself.
Systematic subject attrition
The participants who quit are distributed unevenly among the groups. Threat to internal validity because it may cause groups in the experiment to become inequivalent.
Comparable treatment of groups
To guarantee maximum internal validity, ensure that different experimental groups are treated as similarly as possible except for the manipulation of the independent variable.
Sensitivity of the dependent variable
In effort to minimize error variance, it is critical to choose a dependent variable that is sensitive enough to detect differences between experimental conditions.
(ex: Measuring the weights of premature and full-term infants to the nearest ounce may not be sensitive enough to detect differences between groups vs. measuring in grams.)
Ceiling effect
When the dependent variable yields scores near the top limit of the measurement tool for one or all groups.
Floor effect
When the dependent variable yields scores near the lower limit of the measurement tool for one or all groups.
Which of the following is NOT a common limitation of using matching to assign participants to groups?
It often leads to two very different groups.
If your variable of interest in a study is a subject variable you will have to run
A correlational study
Two types of within-subjects designs
Pretest-posttest design
Repeated-measures design
Pretest-posttest design
One group of participants is tested two or more times using the same measurement tool, once before and once after the independent variable is manipulated in some way.
Repeated-measures design
Involves multiple measurements per participant. This design would use an ANOVA test.
Longitudinal design
Related to repeated-measures design, involves testing participants multiple times, but looks for changes that occur over time. Duration can range from months to even decades.
Often combined with cross-sectional studies
Carryover effect
When participants perform a task numerous times, or even only twice, their performance on earlier trials might affect their performance on later trials.
(Includes: Practice effect, Fatigue effect, and History effect)
Practice effect
Repetition of a task throughout an experiment may cause the participant's performance to improve.
Fatigue effect
Performance declines with repetition.
History effect
Something happens outside of the experiment at the same time that the independent variable is being changed that affects all or some of the participants’ performances on the dependent measure.
Maturation effect
When participants are tested over a considerable period, their scores may change simply because of the passage of time rather than any effect of the independent variable.