Exam 1 Notes Research Methods


Last updated 12:37 PM on 5/2/26

99 Terms

1
New cards

What happens to a theory when the data do not support the theory’s hypotheses? What might a scientist say and do if the data fail to support the theory?

When researchers collect data that do not support their hypothesis, they evaluate the research design to see whether the data were inaccurate or the theory is flawed. If further research using new designs produces the same results, the theory is often revised to better match them. This does not necessarily mean the theory is proven false; proving theories false is not the point of science.

2
New cards

Why can’t theories be proven in science?

Theories can’t be proven in science because no study can predict or account for all possible future results. Theories are intended to be falsifiable, meaning it is always possible that a new study could come along and disprove existing results.

3
New cards

Merton proposed four norms that people in the scientific community strive to follow. How many can you name without looking back?

Universalism: Scientific claims are evaluated according to their merit, independent of researcher’s credentials or reputation

Communality: Scientific knowledge is created by a community, and the findings belong to the community

Disinterestedness: Scientists strive to discover the truth, whatever it is, unswayed by personal conviction, idealism, politics, or profit

Organized skepticism: Scientists question everything, including their own theories, widely accepted ideas, and ancient wisdom

4
New cards

Describe at least two ways journalists might distort (intentionally or not) the science they attempt to publicize.

Journalists often try to reduce complex scientific results to something more palatable for a general audience without background knowledge; in simplifying, they may leave out important nuance and misrepresent what the study actually showed.

Journalists also often want to convey the information in a way that is interesting and exciting to read, so they may exaggerate the results, misleading readers into thinking the results were much stronger than they actually were. They might also convey only half of the truth, stretching the results to make them sound like they suggest something different than they actually do.

5
New cards

When scientists publish their data, what are the benefits?

Allows for falsification, discussion and debate among researchers, replication, and peer-review

6
New cards

What are two general problems with basing beliefs on experience? How does empirical research work to correct these problems?

No comparison group

Susceptible to confounds

With empirical research, you can control for confounds and employ a comparison group

7
New cards

What does it mean to say that research is probabilistic?

To say that research is probabilistic means that it does not explain all possible cases all of the time; it explains only a proportion of possible cases. It is always possible that one person’s experience will differ from the research results.

8
New cards

An Instagram ad claims that amethyst crystals improve immune function. You start carrying one with you and notice that your stuffy nose has abated for the first time in weeks! Does this prove that amethysts really have healing powers? Why not?

It does not prove that amethysts really have healing powers. The issue with using experience as a basis for science is that there is no comparison group, and there is always an opportunity for confounds to be an issue. In this case, there is no way to tell what would have happened if you hadn’t carried the amethyst. Perhaps your stuffy nose would have gone away regardless. Also, there could have been something else that varied systematically with your use of the amethyst that actually led to your stuffy nose clearing. Maybe you had gotten more rest or taken medicine at the same time as you carried the amethyst, which is actually what contributed to clearing your nose.

9
New cards

This section described several ways in which intuition is biased. Can you name all four?

Availability heuristic: judge by what comes to mind easily

Confirmation bias: seek evidence of what we want to believe

Bias blind spot: Belief you are not biased

Hindsight bias: I knew it all along

10
New cards

Why might the bias blind spot be the sneakiest of all the intuitive reasoning biases?

People are naturally biased and can easily fall victim to availability and representativeness heuristics, and if you are able to recognize the inherently biased nature of your reasoning, you can take steps to avoid it. However, the bias blind spot makes us think that other people are biased but we aren’t, which leads us to put too much trust in our own faulty reasoning. It might also stop us from trying to find errors in our thinking. An important part of science is falsification, but if we already believe we are right, we might accept our own conclusions without searching for evidence against them.

11
New cards

When would it be sensible to accept the conclusions of authority figures? When might it not?

It is important to first ask yourself about the source of the authority figure’s ideas before blindly believing them. If they refer to research evidence in their area of expertise, they may be worth listening to. However, if they are basing their ideas only on their own experience or intuition, they could be susceptible to self-serving biases that make their reasoning faulty. Also, even if they cite evidence, they may only look at evidence that supports their own beliefs, so it is important to evaluate the reliability of their sources.

Instead of only listening to people you view as authority figures, you should base your beliefs on psychological phenomena on research, rather than experience, intuition, or authority. When you evaluate the reliability of the research cited by authorities, it could be reasonable to accept them, but you should always do your own independent research as well.

12
New cards

How are empirical journal articles different from review journal articles? How is each type of article different from a chapter in an edited book?

Empirical journal articles report on the findings within a new research study for the first time. On the other hand, review journal articles summarize the findings of multiple existing research articles on a specific topic. 

An edited book is a collection of chapters on a common topic, each written by a different team of contributors. Empirical journal articles differ from this because they report on the new findings of a single research study, while edited books report on findings across multiple different studies on the same topic. Review journal articles, similarly to edited books, report on the findings of multiple studies by multiple authors. However, review journal articles synthesize these findings into a single article, while edited books report on them across multiple chapters.

13
New cards

What are the components of an empirical journal article?

Abstract, introduction, method, results, discussion, references

14
New cards

How do scholarly articles (research journals) differ from popular press articles (journalism)?

Scholarly articles are articles on a specific academic discipline or subdiscipline, written by experts for a scholarly audience and peer-reviewed anonymously by 3-4 experts before publication

Popular press articles are secondhand reports about research that summarize it for a general audience; findings are often overstated or falsely reported

15
New cards

What is the difference between a conceptual variable and the operational definition of a variable? How might the conceptual variables “level of eye contact,” “intelligence,” and “stress” be operationalized by a researcher?

The conceptual variable is the variable being studied in basic conversational terms. The operational definition is the specific way the conceptual variable is being measured or manipulated in the study.

Level of eye contact: Eye-tracking technology to measure eye contact

Intelligence: IQ tests and abstract problem-solving tasks to measure intelligence

Stress: Self-report questionnaires that measure how stressed participants have felt managing their daily work and activities

16
New cards

Practice noticing the difference between a variable and its levels. What might be appropriate levels of the variable “favorite color”? What might be some possible levels of the variable “first language”?

Favorite color: Red, orange, yellow, green, blue, purple, pink

First language: English, Spanish, French, German, Arabic, Russian, Mandarin

17
New cards

Practice naming a variable when you see its levels. What might you call a variable whose levels are “fiction” and “nonfiction”? What might you call a variable whose levels are 10 mg, 20 mg, and 30 mg? (Here’s a hint: try using phrases such as “type of ____” or “amount of____.”)

Fiction and nonfiction: Genre of book

10 mg, 20 mg, and 30 mg: Amount of medicine taken

18
New cards

Explain why some variables can only be measured, not manipulated. Could “favorite color” be a manipulated variable? Could “level of eye contact” be a manipulated variable?

Some variables are part of who a participant already is and cannot be changed, like age, personality, physical characteristics, or other traits participants are born with. These must be measured, because it is not possible to manipulate such traits.

Favorite color: No, because you cannot manipulate a participant’s preference for a color, only measure it

Level of eye contact: Possibly. You could manipulate it by instructing participants to maintain eye contact, but you cannot guarantee they will comply at every moment, so the manipulation may be difficult to control with full certainty

19
New cards

How many variables are there in a frequency claim?

1

20
New cards

How are causal claims similar to association claims? How are they different?

Both types of claims involve a relationship between two variables, and both often begin with theory testing. However, causal claims argue that one variable directly causes change in the other and that the relationship is not a coincidence, while association claims argue only that the variables are related, without saying that one directly influences the other.

21
New cards

What kind of study usually supports an association claim? What kind of study is needed to support a causal claim?

Association claims use correlational studies, which include two or more variables, in which all variables are measured and their relationship is tested. Variables are not manipulated in these types of studies, so that researchers can see if they are naturally related without any interference.

Causal claims use experimental studies, in which at least one variable is manipulated and another measured. This helps research establish covariance, temporal precedence, and internal validity to make sure that one variable truly causes another.

22
New cards

Which part of speech in a claim can help you differentiate between association and causal claims?

Association claims typically use words like “correlation,” “relation,” “predict,” or “linked” to suggest a loose relationship between two variables.

Causal claims typically use words like “affects,” “leads to,” “increases,” “decreases,” or “prevents” to suggest a stronger, more direct relationship between two variables.

23
New cards

What are the three criteria causal claims must satisfy?

Covariance: Do the cause and the outcome vary together? For example, is there a clear difference between groups on the outcome?

Temporal precedence: Does the cause come before the outcome?

Internal validity: Does the proposed cause actually lead to the outcome, or are there potential confounds?

24
New cards

What question(s) would you use to interrogate a study’s construct validity?

Does the dependent variable actually measure the intended construct?

Is the operational definition a good approximation of the conceptual variable?

25
New cards

Define external validity, using the term generalize in your definition.

Extent to which the results of a study generalize to a larger population as well as to other times and situations

26
New cards

In your own words, describe at least three questions that statistical validity addresses.

How strong is the effect?

How precise is the estimate?

Has the study been replicated?

27
New cards

How can you tell if a study is an experiment versus a correlational study?

The major difference between the two is that in experimental studies at least one variable is manipulated, while in correlational studies the variables are only measured, never manipulated.

28
New cards

Why is a correlational study not able to support a causal claim?

Correlational studies cannot establish temporal precedence (which variable came first in time) or rule out confounds (they cannot control for systematic variability from alternative variables), so there is no way to be sure that one variable actually causes another. The best they can do is say that two variables are associated, without knowing whether one causes the other or whether external variables produced the correlation.

29
New cards

How does a manipulated variable help a study achieve temporal precedence and internal validity?

When you manipulate the variables in a study, you can arrange the order of the variables to ensure that the independent variable comes before the dependent variable in time. You can also establish internal validity and avoid confounds by using random assignment, matched groups, or control variables.

30
New cards

Why don’t researchers usually aim to achieve all four of the big validities at once?

It is basically impossible to maximize all four validities at once. This is especially true for internal and external validity. When external validity is at its strongest, studies are done in naturalistic, real-world settings where generalizability is highest; however, it is most difficult to control for confounds in these settings, which harms internal validity. When internal validity is at its highest, the study is done in a laboratory setting where researchers can control all aspects of the study; however, this harms external validity, because it is difficult to generalize a laboratory study to broader populations. Researchers prioritize specific validities depending on the goals of their study and hope that follow-up studies will make up for what the original study had to sacrifice.

31
New cards

What makes an experiment different from a correlational study? (Use the terms manipulated and measured in your answer.)

Correlational studies include two or more variables, in which all variables are measured and their relationship is tested. Variables are not manipulated in these types of studies, so that researchers can see if they are naturally related without any interference.

Experimental studies include at least one variable being manipulated and another measured. This helps research establish covariance, temporal precedence, and internal validity to make sure that one variable truly causes another.

32
New cards

Define independent variable, dependent variable, and control variable, using your own words.

Independent variable: A variable that is manipulated


Dependent variable: A variable that is measured

Control variable: A variable held constant for all participants

33
New cards

Why do experiments usually satisfy the three causal criteria?

Experiments satisfy covariance because there needs to be a correlation/difference between two groups to claim that one thing actually causes another.

Experiments satisfy temporal precedence because there needs to be certainty that the cause comes before the effect in time.

Experiments satisfy internal validity because they need to control the situation and make sure that no other variables systematically vary with the independent variable and are actually causing the dependent variable.

34
New cards

Define design confound and control variable; then explain how a control variable can help a researcher prevent a design confound. Use the baby persistence study as an example.

Design confound: Threat to internal validity in an experiment when a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results

Control variable: A variable held constant for all participants

A control variable can prevent design confounds by holding potentially alternative explanations constant across all groups in an experiment to ensure that only the independent variable differs for all participants, allowing the researchers to isolate its effect on the dependent variable.

35
New cards

How does random assignment prevent selection effects?

Selection effects, a threat to internal validity that occurs in an experiment when the kinds of participants in one group systematically differ from those in the other, are prevented by random assignment because random assignment ensures that each participant is equally as likely to end up in any of the groups.
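The shuffle-and-deal logic behind random assignment can be sketched in Python (an illustration only, not part of the deck; the function name is hypothetical):

```python
import random

def random_assignment(participants, conditions):
    """Shuffle the participant pool, then deal participants round-robin
    into conditions, so each person is equally likely to land in any group."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, person in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

groups = random_assignment(range(20), ["treatment", "control"])
# 20 participants dealt into 2 conditions -> 10 per group, membership random,
# so pre-existing participant differences average out across groups.
```

Because assignment depends only on the shuffle, no participant characteristic can systematically determine which group they end up in.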

36
New cards

How does using matched groups prevent selection effects?

When using matched groups, participants who are similar on some measured variable are grouped into sets, and the sets are then randomly assigned to different conditions. This prevents selection effects because it ensures that participants with similar characteristics are placed into different groups, so they do not accidentally end up in the same group systematically and skew the results.
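The matched-groups procedure (rank on the matching variable, slice into sets, randomize within each set) can be sketched in Python; this helper is illustrative, not from the deck:

```python
import random

def matched_groups(scores, n_conditions):
    """Rank participants on the matched variable, slice the ranking into
    matched sets of size n_conditions, then randomly assign each set's
    members across the conditions."""
    ranked = sorted(scores, key=scores.get)       # participants, low to high
    groups = [[] for _ in range(n_conditions)]
    for i in range(0, len(ranked), n_conditions):
        matched_set = ranked[i:i + n_conditions]  # participants with similar scores
        random.shuffle(matched_set)               # random assignment within the set
        for group, person in zip(groups, matched_set):
            group.append(person)
    return groups

# 12 participants with an IQ-like matching score, split into 3 conditions:
groups = matched_groups({f"p{i}": 90 + i for i in range(12)}, 3)
# Each condition receives 4 participants, one from every matched set of 3.
```

Each condition ends up with one member of every matched set, so the groups are balanced on the matching variable while assignment within a set is still random.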

37
New cards

What is the difference between independent-groups and within-groups designs? Use the term levels in your answer.

Independent-groups (between-subjects) design: Separate groups of participants are placed into different levels of the independent variable, so each participant is exposed to only one level

Within-groups (within-subjects) design: Each participant is presented with all levels of the independent variable

38
New cards

Describe how posttest-only and pretest/posttest designs are both independent-groups designs. Explain how they differ.

Posttest-only design: Participants, randomly assigned to independent variable groups, are tested on the dependent variable only once

Pretest/posttest design: Participants are randomly assigned to at least two groups and are tested on the dependent variable twice: once before exposure to the independent variable and once after

Both are independent-groups (between-subjects) designs because each participant is exposed to only one level of the independent variable.

39
New cards

Repeated-measures designs and concurrent-measures designs are both within-groups designs. What is the main way they differ?

Repeated-measures design: Participants respond to a dependent variable more than once, after exposure to each independent variable level

Concurrent-measures design: Participants are exposed to all levels of an independent variable at the same time, and report their preference as the dependent variable

They differ in how the dependent variable is collected: in repeated-measures designs, participants respond to the dependent variable multiple times (once after each level), while in concurrent-measures designs they respond only once, after experiencing all levels at the same time.

40
New cards

Describe how counterbalancing improves the internal validity of a within-groups design.

Counterbalancing: In a repeated-measures experiment, presenting the levels of the independent variable to participants in different sequences to control for order effects

Counterbalancing prevents order effects from becoming confounds because it ensures that each possible sequence of independent variable levels is used an equal number of times across participants, so order is not systematically tied to any one condition.
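Full counterbalancing can be sketched in Python: generate every possible order of the levels and rotate participants through them (illustrative only; partial counterbalancing would use a subset of the orders instead):

```python
from itertools import cycle, permutations

def counterbalance(levels, participants):
    """Assign each participant one order of the IV levels, rotating
    through all possible orders so each order is used equally often."""
    orders = cycle(permutations(levels))
    return {p: next(orders) for p in participants}

schedule = counterbalance(["happy", "sad"], ["p1", "p2", "p3", "p4"])
# p1 and p3 get ("happy", "sad"); p2 and p4 get ("sad", "happy")
```

With the orders balanced across participants, any practice, fatigue, or carryover effect falls equally on every condition instead of confounding one of them.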

41
New cards

What are three things you should evaluate about the construct validity of a dependent variable? Describe each one.

Convergent validity: The extent to which a self-report measure correlates with other measures of a similar construct

Discriminant validity: The extent to which a self-report measure does not correlate strongly with measures of dissimilar constructs

Criterion validity: Evaluates whether the measure under consideration is associated with a concrete behavioral outcome that it should be associated with, according to the conceptual definition; how accurately a measure correlates with an established external standard

42
New cards

How do manipulation checks provide evidence for the construct validity of an experiment’s independent variable?

Manipulation check: In an experiment, an extra dependent variable researchers can include to determine how well a manipulation worked

A manipulation check ensures that the intended independent variable was actually manipulated and not something else. For example, if you want participants to be happy or sad, you want to make sure they are not accidentally manipulated to become nostalgic.

43
New cards

Besides generalization to other people, what other aspect of generalization does external validity address?

Generalization to real world, other populations, or other contexts, situations, or settings

44
New cards

Summarize the three threats to internal validity discussed in this chapter.

Design confound: Threat to internal validity in an experiment when a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results

Selection effect: A threat to internal validity that occurs in an experiment when the kinds of participants in one group systematically differ from those in the other

Order effects: In a within-groups design, a threat to internal validity in which exposure to one condition changes participant responses to a later condition

45
New cards

Explain the difference between a conceptual definition and an operational definition.

Construct: Variable being studied in basic conversational terms

Conceptual definition: Careful, theoretical definition of a construct in the context of a study

Operational definition: The specific way the construct is manipulated or measured in the study

46
New cards

Name and define the three common ways in which researchers operationalize their variables.

Self-report measure: A method of measuring a variable in which people answer questions about themselves in a questionnaire or interview

Observational measure: A method of measuring a variable by recording observable behaviors or physical traces of behaviors

Physiological measure: A method of measuring a variable by recording biological data

47
New cards

In your own words, describe the difference between categorical and quantitative variables. Come up with new examples of variables that would fit the definition of categorical, ordinal, interval, and ratio scales.

Categorical variables: A variable whose levels are qualitative categories (e.g., eye color: brown, blue, green)

Quantitative variables: A variable whose levels can be recorded as meaningful numbers

Ordinal scale: A quantitative measurement scale whose levels represent a ranked order, and in which distances between levels are not equal (e.g., finishing place in a race: 1st, 2nd, 3rd)

Interval scale: A quantitative measurement scale that has no “true zero” and in which the numerals represent equal intervals between levels (e.g., temperature in degrees Celsius)

Ratio scale: A quantitative measurement scale that has a “true zero” and in which the numerals represent equal intervals between levels (e.g., height in centimeters)

48
New cards

What is the difference between systematic and unsystematic variability in an experiment?

Systematic variability occurs when an aspect of a study varies along with the conditions of the independent variable. Unsystematic variability occurs when there is variation in the study, but it does not vary along with the conditions of the independent variable. Systematic variability is a problem because it creates an alternative explanation for the results of a study.

49
New cards

Name and define each of the 6 threats to internal validity that we’ve covered so far.

Selection effects: A threat to internal validity that occurs in an experiment when the kinds of participants in one group systematically differ from those in the other

Order effects: In a within-groups design, a threat to internal validity in which exposure to one condition changes participant responses to a later condition

Design confound: Threat to internal validity in an experiment when a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results

Observer bias: A bias that occurs when observer expectations influence the interpretation of participant behaviors or study’s outcome

Demand characteristics: Participants change behavior to match what they think is the hypothesis

Placebo effect: People receiving an experimental treatment change only because they believe they are receiving a valid treatment

50
New cards

Identify which threats are possible in all designs and which are possible only in within or between subjects designs.

All designs: Demand characteristics, observer bias, design confound, placebo effect

Within-subjects: Order effects

Between-subjects: Selection effects

51
New cards

For each of the 6 threats to internal validity, how do researchers avoid or minimize it?

Selection effects

  • Random assignment (default solution)

  • Matched groups (solution in smaller samples)

Order effects

  • Counterbalancing of stimuli-condition combinations and order of conditions

    • When more than two levels, or multi-trial designs, counterbalancing gets complicated and you can do partial counterbalancing

    • Presenting the levels of the IV in different sequences to control for order effects

    • Matching the levels of the IV with different stimuli to control for design confounds

Design confound

  • Control variables

  • Make variability unsystematic between conditions

Observer bias

  • Masked/blind design

  • Codebooks and strict observation protocols

Demand characteristics

  • Double-blind design

  • Obscure purpose

Placebo effect

  • Placebo control group (ideally blind)

52
New cards

When it comes to generalizing from a sample to a population, what aspect of the sample is most important?

How the participants were sampled is more important than how many participants there are. Simply put, was there random sampling?

53
New cards

Which of the three types of claims is almost always focused on generalization?

Frequency claim: Claim that describes a particular rate or degree of a single variable; how common/frequent something is within a population

54
New cards

Explain why researchers who are doing theory testing might not use a random sample. What aspects of their research are they emphasizing (for now)?

Researchers in theory-testing mode typically prioritize internal validity, so they focus on controlling/isolating variables rather than using random sampling to make their study more generalizable. Later, in future studies, random sampling and external validity can be prioritized to see how the theory applies to more diverse populations.

55
New cards

Why do cultural psychologists critique the universality assumption?

Universality assumption: Explicit or implicit belief by researchers that all participants would act pretty much the same no matter what their background is

Cultural psychology: A subdiscipline of psychology concerned with how cultural settings shape a person’s thoughts, feelings, and behaviors, and how these in turn shape cultural settings

The universality assumption ignores the significant impact culture can have on results and assumes that findings from White Western participants apply to everyone; cultural psychology argues the opposite, emphasizing that cultural settings shape thoughts, feelings, and behaviors.

56
New cards

In what ways do lab studies and field studies differ?

Lab study: Study that takes place in a standardized location so researchers can control details

  • Helps internal validity

Field research: A real-world setting for a research study

  • Helps external validity

57
New cards

When an experiment tests hypotheses in an artificial laboratory setting, it does not necessarily mean the results would not apply to the real world. Explain why.

Experimental realism: Extent to which a lab experiment is designed so participants experience authentic emotions, motives, and behaviors

Ecological validity: Extent to which the tasks and manipulations of a study are similar to real-world contexts

It is possible for laboratory studies to have high experimental realism and ecological validity. In addition, because theory-testing mode prioritizes internal validity, researchers work to ensure that no confounds impact the results. Isolating specific variables makes the internal validity of the test very strong, so with further research the findings can often be extended to external settings.

58
New cards

What are the three claims psychologists can make?

Frequency, association, and causal

59
New cards

What does empiricism look like in science?

Trying to collect observations/data to help draw conclusions. Basing conclusions on evidence from systematic observation using the senses or instruments that assist the senses.

60
New cards

What is the order of the theory-data cycle?

Theory → Research questions → Research design → Hypotheses → Data

61
New cards

In the theory-data cycle, what happens if the data does not support the hypothesis?

Researcher can evaluate design and revise theory

62
New cards

In the theory-data cycle, what happens if the data supports the hypothesis?

The theory is strengthened

63
New cards

How does science differ from pseudoscience?

Science is falsifiable, systematic, published, and replicable. Pseudoscience claims to be a science but lacks falsifiability, systematic observation, publication, and replication. Pseudoscience makes extraordinary claims in the absence of extraordinary evidence

64
New cards

Why does pseudoscience lack falsification?

Pseudoscience is too vague for a specific, measurable prediction or claim

65
New cards

Why does pseudoscience lack systematic observation?

Pseudoscience relies on informal observation, experience, intuition, and authority, and ignores the results of systematic observations

66
New cards

How does science differ from experience?

Experience does not have a comparison group and is vulnerable to confounds. Scientific research is also probabilistic, so one person’s experience does not account for all possible cases

67
New cards

How does science differ from intuition?

Intuition is biased and vulnerable to heuristics

68
New cards

How many variables are there in association claims?

At least 2

69
New cards

How many variables are there in causal claims?

At least 2

70
New cards

What are the four validities?

Construct validity, external validity, statistical validity, and internal validity

71
New cards

Are variables in correlational studies measured or manipulated?

All measured

72
New cards

Are variables in experiments measured or manipulated?

Independent variable is manipulated, dependent variable is measured

73
New cards

Do correlational studies have covariance?

Yes

74
New cards

Do correlational studies have temporal precedence?

Depends on design type

75
New cards

Do correlational studies have internal validity?

They can try (e.g., with control variables), but internal validity is weaker than in experiments because confounds cannot be fully ruled out

76
New cards

Do experiments have covariance?

Yes

77
New cards

Do experiments have temporal precedence?

Yes

78
New cards

Do experiments have internal validity?

Yes, if good design

79
New cards

What are the three types of order effects?

Practice effect: Participants get better over time

Fatigue effect: Participants get tired over time

Carryover effect: Effect of one independent variable carries over to the next

80
New cards

How do you prevent the ceiling or floor effect?

For a ceiling effect, use a more sensitive measure so scores can spread out near the top

For a floor effect, use a higher scale of measurement with more trials/items, or multiple measures

81
New cards

What are the pros and cons of each type of measure?

  • Self-report

    • Pros:

      • Shows internal states

      • Easy and cheap

    • Cons:

      • Participants may answer inaccurately (faulty memory, social desirability)

  • Observational

    • Pros:

      • More accurate than self-report for behaviors

      • No language needed

    • Cons:

      • Vulnerable to observer bias and participant reactivity

  • Physiological

    • Pros:

      • Captures unconscious processes

      • Some give precise timing

    • Cons:

      • Can be hard to map directly to conceptual variables

  • All have some opportunity for bias or influencing responses

82
New cards

What is the goal of external validity?

Generalizability: asking nuanced questions about whether the results extend to other people, settings, and times

83
New cards

What is the first section of an empirical journal article?

Introduction

84
New cards

If you want to know which surveys were administered to participants in a study, which section of an article would be the best place to look?

Method

85
New cards

What does the term peer review mean in science?

A journal editor asks anonymous experts to evaluate the quality of a study before it is published 

86
New cards

How are empirical journal articles different from review journal articles?

An empirical article presents results of a new study, but a review article does not

87
New cards

What is the idea of lateral reading techniques for reading about science in the popular media?

Checking claims on alternative, legitimate sources

88
New cards

Which type of validity involves deciding how well the variables in a study were measured or manipulated?

Construct

89
New cards

Which type of variable does a researcher have control over by assigning participants to certain levels?

Manipulated

90
New cards

In an experiment, conditions is another word for…

Levels of the independent variable

91
New cards

When a variable is kept the same for all participants, it is a…

Control variable

92
New cards

What validity would you be concerned about if you asked, “Did the experimenter act the same way for both groups, or did she act more cheerful with one group?”

Internal validity

93
New cards

What type of scale of measurement would your score on a memory test from 0-100% correct be?

Ratio

94
New cards

What type of scale of measurement would the state someone currently lives in be?

Categorical

95
New cards

Shyness vs. 15-question survey measuring shyness: which is the conceptual definition, and which is the operational definition?

  • Shyness: conceptual definition

  • 15-question survey: operational definition

96
New cards

Which subfield of psychology pushes back most strongly against the universality assumption? 

Cultural psychology

97
New cards

What question is most important to ask about a sample when you want to generalize to a population?

HOW were people selected?

98
New cards

Research for which type of claim always requires good external validity?

Frequency

99
New cards

When focused on theory-testing, which type of validity are researchers often more willing to sacrifice?

External