Variables, Validity, and Reliability – Research Methods

Last updated 5:45 AM on 2/6/26

27 Terms

1

What is an operational definition

It is a statement that describes a procedure for measuring an observable event that represents an abstract construct that cannot be measured directly.

  • specifies exactly how a variable will be measured or manipulated in a study.

2

What are the limitations of operational definitions?

an operational definition is never identical to the construct itself

two general problems:

  • omission of components of the construct

  • inclusion of extra components

3

What are the 3 main methods of measurement, and what are their advantages and disadvantages?

  1. Self-report:

  • participants describe their own experiences through questionnaires, interviews, or open-ended descriptions

    • advantage: most direct way to measure a construct

    • disadvantage: responses can be easily distorted

  2. Physiological:

  • measures heart rate, blood pressure, brain scans, etc.

    • advantage: objective

    • disadvantages: expensive equipment, creates unnatural situations, and may be invalid for the construct

  3. Behavioural:

  • direct observation of behaviour as it happens

    • advantage: many different ways to measure

    • disadvantages: behaviour is measured at a specific time in a specific environment that may not be representative (people's behaviour may change when observed)

4

What is archival research?

involves measuring behaviour or events that occurred in the past using historical records

5

What is frequency (f)?

The number of times a score (or range of scores) appears in your data

  • it is just a count

  • used to see patterns in the data easily (which scores are the most common)
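As a quick sketch in Python (the scores here are invented for illustration), a frequency count is just a tally of occurrences:

```python
from collections import Counter

# hypothetical quiz scores for ten participants
scores = [3, 5, 4, 5, 2, 5, 4, 3, 5, 4]

freq = Counter(scores)      # maps each score to its count
print(freq[5])              # the score 5 appears 4 times
print(freq.most_common(1))  # the most common score and its count: [(5, 4)]
```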

6

What is the mean?

the average of a set of scores

  • calculated by adding up all the scores and then dividing by the number of scores
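A minimal Python illustration of that calculation (scores are made up):

```python
scores = [4, 8, 6, 2]             # hypothetical scores
mean = sum(scores) / len(scores)  # add them all up, divide by how many there are
print(mean)  # → 5.0
```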

7

What is a correlation?

measures the relationship between two variables

  • on a scatterplot:

    • Positive correlation: dots slope upward → as X increases, Y increases

    • Negative correlation: dots slope downward → as X increases, Y decreases

    • No correlation: dots scattered randomly → no clear pattern

  • it is on a scale from -1 to 1
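A sketch of computing the correlation coefficient (Pearson's r) by hand in Python; the data sets are invented to show a perfect positive and a perfect negative relationship:

```python
import math

def pearson_r(x, y):
    """Correlation coefficient on a scale from -1 to 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# as X increases, Y increases -> positive correlation
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0
# as X increases, Y decreases -> negative correlation
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))   # -1.0
```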

8

what are the 4 scales of measurement?

  1. nominal

  2. ordinal

  3. interval

  4. ratio

9

What is the ratio scale of measurement?

categories are ordered by amount or size, with equal intervals between values

  • absolute zero: a score of 0 means a complete absence of the characteristic

two types:

  • discrete: only whole numbers (e.g. number of children)

  • continuous: can include decimals (e.g. distance)

10

What is interval scale of measurement?

similar to ratio, but it has no true zero point, so a score of 0 does not mean that the characteristic is absent

  • examples: temperature (°C), altitude, golf scores

11

What is ordinal scale of measurement?

scores are ranked from low to high, but the scale does not provide the exact amount of difference between the scores/ranks

12

What is nominal scale of measurement?

categories or labels with no order

13

what are 4 ways there can be a bias in measurement?

  1. artifact: a factor that distorts a measurement

  2. experimenter bias: the researcher’s expectations influence how data are collected or interpreted

  3. demand characteristic: clues in the study that reveal the purpose or hypothesis, changing the participants' behaviour

  4. participant reactivity: participants change their natural behaviour because they know they are being studied

14

what are the 4 roles a participant can have?

  1. good subject: tries to help the researcher by behaving in ways that support the perceived hypothesis

  2. negativistic subject: intentionally behaves in ways that contradict the perceived hypothesis

  3. apprehensive subject: worries about being judged or evaluated and tries to look good

  4. faithful subject: follows instructions exactly and ignores suspicions about the study's purpose

15

what are two ways researchers reduce bias?

  • single blind: a study in which the researcher who interacts with the participants does not know the predicted outcome or the comparison group to which each participant is assigned

  • double blind: a study in which both the researcher and the participants are unaware of the predicted outcome and of the group to which participants are assigned

16

What is validity?

it is the first and most important standard in evaluating a measurement procedure

  • to establish validity the researcher must show that the procedure truly measures what it claims to measure

    • validity is especially important when measuring hypothetical constructs

17

What is face validity:

simplest and least scientific type of validity:

  • it focuses on surface appearance not evidence

    • “does this measurement look like it measures what it claims to measure?”

  • disadvantages: subjective

18

What is concurrent validity?

concurrent validity is demonstrated when scores from a new measurement are directly related to scores from an established measurement of the same variable

  • it is about comparison

  • shows that two measures are related (not identical)

  • concurrent validity supports a measure only when both measures assess the same variable; a correlation alone is not enough to prove validity

  • the two measures are taken at the same time

ex: A researcher wants to know if a new, short anxiety questionnaire is valid. They give participants the new questionnaire at the same time as an established, well-validated anxiety scale. If people who score high on the established scale also score high on the new one, the new questionnaire shows good concurrent validity.

19

what is predictive validity?

is demonstrated when scores obtained from a measurement accurately predict behaviour according to a theory

  • risk factor tests/screenings

20

what’s construct validity?

refers to the degree to which a measurement procedure accurately measures a theoretical construct

21

convergent vs. divergent validity

convergent: is demonstrated by a strong relationship between scores obtained from two or more different methods of measuring the same construct

  • different measurement procedures (domains/components) should converge (come together) on the same construct

divergent: is demonstrated when measurements of different constructs show little to no relationship

  • divergent validity ensures that a measurement assesses one specific construct

  • you want low correlations with measures of other constructs, because that is what demonstrates divergent validity

22

What is reliability?

the stability or consistency of the measurement. A measurement procedure is reliable if it produces identical or nearly identical results when used repeatedly on the same individuals under the same conditions

23

what are three sources of measurement error?

observer error: mistakes by the person recording or scoring measurements

environmental changes: small changes in the environment can affect measurements

participant changes: changes in attention, mood, or health can affect scores

24

What are the 4 types of reliability

  1. test-retest reliability

  • reliability estimated by comparing scores from two sequential measurements of the same individuals using the same measurement procedure

  2. alternative forms/parallel-forms reliability

  • reliability estimated by comparing scores from two different versions of the same measurement instrument given to the same individuals

  3. interrater reliability

  • degree of agreement between two or more observers who simultaneously record measurements of the same behaviour

  4. internal consistency

  • degree to which the items within a test or questionnaire all measure the same construct

25

4 types of internal consistency reliability?

split-half reliability: split the test into two halves and compute the correlation between scores on the two halves; if the test is internally consistent, the two half-scores should be strongly correlated.

KR-20: used for tests with only 2 response options (e.g. true/false); reflects the average correlation of all possible split-half combinations and gives a single number representing internal consistency.

Cronbach’s alpha (α): the equivalent measure for tests with more than 2 response options.

Item-total correlations: correlate each item individually with the total score on the test; this checks whether each question matches the overall test.
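A rough sketch of Cronbach's alpha in Python, using the standard formula α = (k/(k−1))·(1 − Σ item variances / variance of total scores); the item data are invented:

```python
def cronbach_alpha(items):
    """items: one list per test item, each holding one score per participant."""
    k = len(items)     # number of items
    n = len(items[0])  # number of participants

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # each participant's total score across all items
    totals = [sum(item[p] for item in items) for p in range(n)]
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# three items that score participants identically -> perfectly consistent
print(round(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]), 6))  # 1.0
```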

26

reliable vs valid

a measure cannot be valid unless it is reliable, but a measure can be reliable without it being valid

  • if measurements aren’t consistent, it cannot be trusted to measure the right thing

  • a measurement can be reliable yet still measure the wrong thing

  • both are used to evaluate the quality of a measurement procedure

27

What do N, M, SD, p, r, and CI mean?

N = sample size (number of participants)

M = mean (average score on one variable for a group of participants)

SD = standard deviation (average distance of the scores from the mean; a measure of how spread out the scores are on one variable across the whole range of scores for all participants)

p = p-value (a measure of statistical significance)

r = correlation coefficient (the strength and direction of an association)

CI = confidence interval (a range within which the true population value is very likely to be)
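For the descriptive symbols, Python's standard statistics module gives a quick check (the scores are invented):

```python
import statistics

scores = [2, 4, 4, 4, 6]       # hypothetical scores for 5 participants

N = len(scores)                # sample size
M = statistics.mean(scores)    # mean
SD = statistics.stdev(scores)  # standard deviation (sample version)

print(N, M, round(SD, 3))
```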