What is an operational definition?
it is a statement that describes a procedure for measuring an observable event that represents an abstract construct that cannot be measured directly
it specifies exactly how a variable will be measured or manipulated in a study
What are the limitations of operational definitions?
not identical to the construct
two general problems:
omission of components
inclusion of extra components
What are the 3 main methods of measurement, and what are their advantages and disadvantages?
self-report:
participants describe their own experiences by answering a questionnaire, an interview, or open-ended descriptions
advantage: most direct way to measure a construct
disadvantage: responses can be easily distorted
Physiological:
measures heart rate, blood pressure, brain scans etc.
advantages: objective
disadvantages: expensive equipment, creates unnatural experiences, and may be invalid for the construct
Behavioural:
direct observation of behaviour as it happens
advantages: many different ways to measure
disadvantages: behaviour is measured at a specific time in a specific environment that may not be representative (people's behaviour may change)
What is archival research?
involves measuring behaviour or events that occurred in the past using historical records
What is frequency (f)?
the number of times a score (or range of scores) appears in your data
it is just a count
used to see patterns in the data easily (which scores are the most common)
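As a quick sketch, this kind of counting can be done with Python's standard library (the scores below are made-up example data):

```python
from collections import Counter

# Hypothetical set of quiz scores
scores = [3, 5, 5, 4, 5, 3, 2, 4, 5]

# Frequency (f) is just a count of how often each score appears
freq = Counter(scores)

print(freq[5])  # the score 5 appears 4 times
print(freq.most_common(1))  # the most common score and its frequency
```

`most_common` makes the "which scores are the most common" question a one-liner.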
What is the mean?
the average of a set of scores
calculated by adding up all the scores and then dividing the sum by the number of scores
What is a correlation?
measures the relationship between two variables
on a scatterplot:
Positive correlation: dots slope upward → as X increases, Y increases
Negative correlation: dots slope downward → as X increases, Y decreases
No correlation: dots scattered randomly → no clear pattern
the correlation coefficient (r) ranges from -1 to +1
what are the 4 scales of measurement?
nominal
ordinal
interval
ratio
What is the ratio scale of measurement?
when categories are ordered by amount or size (values increase sequentially)
absolute zero: complete absence of the characteristic
two types:
discrete: only whole numbers (e.g., number of children)
continuous: can include decimals (e.g., distance)
What is the interval scale of measurement?
similar to ratio but it has no true zero point, so if there is a 0 it doesn’t mean that the characteristic is absent
examples: temperature, altitude, golf scores
What is the ordinal scale of measurement?
scores are ranked from low to high
it does not provide the exact amount of difference between the scores/ranks
What is the nominal scale of measurement?
categories or labels with no order
what are 4 ways there can be a bias in measurement?
artifact: a factor that distorts a measurement
experimenter bias: the researcher’s expectations influence how data are collected or interpreted
demand characteristic: clues in the study that reveal the purpose or hypothesis, changing the participants' behaviour
participant reactivity: participants change their natural behaviour because they know they are being studied
what are the 4 roles a participant can have?
good subject: tries to help the researcher by behaving in ways that support the perceived hypothesis
negativistic subject: intentionally behaves in ways that contradict the perceived hypothesis
apprehensive subject: worries about being judged or evaluated and tries to look good
faithful subject: follows instructions exactly and ignores suspicions about the study's purpose
what are two ways researchers reduce bias?
single blind: a study in which the researcher who interacts with the participants does not know the predicted outcome or which comparison group the participants are assigned to
double blind: a study in which both the researcher and the participants are unaware of the predicted outcome and of which group the participants are assigned to
What is validity?
it is the first and most important standard in evaluating a measurement procedure
to establish validity, the researcher must show that the procedure truly measures what it claims to measure
validity is especially important when measuring hypothetical constructs
What is face validity?
simplest and least scientific type of validity
it focuses on surface appearance, not evidence
“does this measurement look like it measures what it claims to measure?”
disadvantage: subjective
What is concurrent validity?
concurrent validity is demonstrated when scores from a new measurement are directly related to scores from an established measurement of the same variable
it is about comparison
shows that two measures are related (not identical)
concurrent validity supports a measure only when both measures assess the same variable; a correlation alone is not enough to prove validity
measurements are taken at the same time
ex: A researcher wants to know if a new, short anxiety questionnaire is valid. They give participants the new questionnaire at the same time as an established, well-validated anxiety scale. If people who score high on the established scale also score high on the new one, the new questionnaire shows good concurrent validity.
what is predictive validity?
is demonstrated when scores obtained from a measurement accurately predict behaviour according to a theory
risk factor tests/screenings
what is construct validity?
refers to the degree to which a measurement procedure accurately measures a theoretical construct
convergent vs. divergent validity
convergent: is demonstrated by a strong relationship between scores obtained from two or more different methods of measuring the same construct
different measurement procedures (domains/components) should converge (come together) on the same construct
divergent: is demonstrated when measurements of different constructs show little to no relationship
divergent validity ensures that a measurement assesses one specific construct
you want low correlations with measures of other constructs, because that lack of relationship is what demonstrates divergent validity
What is reliability?
the stability or consistency of a measurement. A measurement procedure is reliable if it produces identical or nearly identical results when used repeatedly on the same individuals under the same conditions
what are three sources of measurement error?
observer error: mistakes by the person recording or scoring measurements
environmental changes: small changes in the environment can affect measurements
participant changes: changes in attention, mood, or health can affect scores
What are the 4 types of reliability?
test-retest reliability
reliability estimated by comparing scores from two sequential measurements of the same individuals using the same measurement procedure
alternative forms/parallel-forms reliability
reliability estimated by comparing scores from two different versions of the same measurement instrument given to the same individuals
interrater reliability
degree of agreement between two or more observers who simultaneously record measurements of the same behaviour
internal consistency
degree to which the items within a test or questionnaire all measure the same construct
4 types of internal consistency reliability?
split-half reliability: split the test into two halves and compute the correlation between them. if the test is internally consistent, scores on the two halves should match.
KR-20: used for tests with only 2 response options; represents the average of all possible split-half correlations, giving a single number for internal consistency.
Cronbach's alpha (α): the equivalent measure for tests with more than 2 response options
item-total correlations: correlate each item individually with the total score on the test, to see whether each question matches the overall test
reliable vs valid
a measure cannot be valid unless it is reliable, but a measure can be reliable without it being valid
if a measurement isn't consistent, it cannot be trusted to measure the right thing
a measurement can still be reliable but still measure the wrong thing
both used to evaluate the quality of a measurement procedure
What do N, M, SD, p, r, and CI mean?
N = sample size (number of participants)
M = mean (average score on one variable for a group of participants)
SD = standard deviation (average distance of the scores from the mean; a measure of how spread out the scores on one variable are across all participants)
p = p-value, a measure of statistical significance
r = correlation coefficient (the strength and direction of an association)
CI = confidence interval (a range within which the true population value is very likely to fall)
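As a quick sketch, N, M, and SD can be computed with Python's standard library (the scores are made-up example data; `stdev` gives the sample standard deviation):

```python
from statistics import mean, stdev

# Hypothetical scores on one variable for a small group of participants
scores = [10, 12, 14, 16, 18]

N = len(scores)    # sample size
M = mean(scores)   # mean (average score)
SD = stdev(scores) # sample standard deviation (spread around the mean)

print(N, M, round(SD, 2))  # → 5 14 3.16
```

p-values, r, and confidence intervals come from inferential tests rather than simple summaries, so they are not shown here.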