Research Methods Exam 3

1
New cards

Questionable research practices

Decisions researchers make when planning a study or during and after data collection that can lead to erroneous or misleading results; Not considered to be unethical because there is no intent to deceive

2
New cards

Data dredging

Researchers collect data on a large number of variables, examine all possible relationships among them, and then focus on the relationships that are statistically significant; Conducting many statistical tests increases the probability of a Type I error (concluding a relationship is real when it actually occurred by chance)
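A minimal simulation sketch (not from the card; the numbers and variable names are made up) of why this inflates Type I error: with 20 noise-only variables and alpha = .05, the chance of at least one false positive is roughly 1 - .95^20 ≈ .64 rather than .05.

```python
# Hypothetical simulation: run one t test per noise-only variable and count how
# often at least one of the 20 tests comes out "significant" at alpha = .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_vars, n_per_group, alpha = 2000, 20, 30, 0.05

false_positives = 0
for _ in range(n_sims):
    group_a = rng.normal(size=(n_per_group, n_vars))   # no true differences anywhere
    group_b = rng.normal(size=(n_per_group, n_vars))
    pvals = stats.ttest_ind(group_a, group_b).pvalue   # one p-value per variable
    false_positives += (pvals < alpha).any()

print(false_positives / n_sims)   # empirically close to 1 - 0.95**20, about .64
```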

3
New cards

Data snooping

periodically checking results during data collection to see if they are statistically significant and stopping data collection when statistically significant results are found

4
New cards

Data trimming

Selectively discarding data that do not support a study’s hypotheses; Researchers can legitimately discard data when they suspect deception or noncompliance (e.g., not following instructions) and when removing outliers

5
New cards

Data torturing

Improper exploitation of statistical tests; repeatedly analyzing the same data in different ways until something statistically significant emerges

6
New cards

Methodological tuning

Tweaking a study’s methodology until it produces statistically significant results

7
New cards

Accentuating the positive

Giving more weight to results that support one’s hypothesis than to results that do not support it; focusing on one’s statistically significant findings while ignoring nonsignificant findings

8
New cards

HARKing

Hypothesizing after results are known, presenting post hoc hypotheses in a research report as if they were, in fact, a priori hypotheses

9
New cards

Data forgery

Reporting the results of experiments that were never conducted

10
New cards

Duplicate publication

Publishing the same work in different journals; makes it appear there is more information available on the topic than there really is. Situations that are acceptable:

  • Okay to publish and present at a conference

  • Technical article and then rewriting for a nontechnical outlet

    • Communicate to teachers and psychologists

  • Editors ask for it to be written for a journal or edited book

11
New cards

Piecemeal publication

Taking the data from a single study and breaking it into pieces to increase the number of resulting publications when all the analyses are testing the same hypothesis; It is okay to publish two sets of data that address separate hypotheses, to reanalyze an old data set to test a new hypothesis, or to publish periodic updates on the findings of a long-term study

12
New cards

Plagiarism

the act of taking someone else’s work or ideas and passing them off as one’s own

13
New cards

Text recycling

When sections of the same text appear in more than one of an author’s own publications

14
New cards

Field research

Conducted in natural settings

15
New cards

Field experiments

Attempt to achieve a balance between control and naturalism in research by studying people’s natural behavioral responses to manipulated independent variables in natural settings

Example - willingness to help someone who collapsed on a New York subway

16
New cards

Natural experiment

Allows the researcher to study an IV that could not be manipulated for ethical reasons; These are correlational studies because the researcher doesn’t manipulate the IV, cannot randomly assign research participants to conditions, and has little control over extraneous variables

Example - compared children who had been born at 26 weeks gestation to children carried to full term on cognitive tests

17
New cards

Quasi-experiment

Researcher attempts to achieve naturalism by manipulating an IV in a natural setting using existing groups of people as the experimental and control groups; Have a limited form of random assignment and can manipulate the IV

18
New cards

Nonequivalent control group design

Researcher studies two (or more) groups of people; members of one group receive the experimental condition and the others serve as the control group; The groups are not equivalent because participants weren’t randomly assigned; Pretesting is done to ensure the experimental and control groups are similar on the DV before the IV is introduced

Example - a reading program for sixth graders; without a pretest we don’t know where the groups started (what if one group was already much better at reading?); with a pretest we can check that the group getting help ends up with higher scores while the group that didn’t has scores similar to before

19
New cards

Nested analysis of variance design

Statistical method that separates the variance in the DV that is due to the effect of the IV from that due to the effect of attending a particular school

20
New cards

Biased selection

The groups are not equal on individual characteristics, so those characteristics are confounded with the IV

21
New cards

Focal local controls

Ensures that the control and treatment groups are as similar as possible

Example - get another group of children from a related school

22
New cards

Problems with field research

  • Manipulating the IV

    • Naturally occurring manipulation (uncontrolled intensity, duration, location, time)

      • Example - COVID-19 where the before and after is a naturally occurring manipulation

  • Operational definitions

    • Usually behavioral and assessed during observation

  • Extraneous variables

    • Can’t control these variables

      • Have to consider the other potential factors and how they could impact a research study

  • Less control over research participants

    • Unrepresentative samples

      • Convenience samples

    • A lack of random assignment

      • Individual differences not balanced across groups

23
New cards

Accosting

Selecting a person who is the target for the intervention. Example - a field experiment at an actual pizza place: customers were told the delivery would take a set amount of time, the pizza arrived early or late, and the delivery person gave a reason, saying either that they were a fast (or slow) driver or that there was little (or a lot of) traffic.

Experiment - IVs: whether the pizza was early or late and the reason given (driver’s ability or traffic); DV: tip amount. Tips were largest when the pizza was early and this was attributed to the driver’s ability, and lowest when it was late and attributed to the driver’s ability

24
New cards

Single-case research design

Intensive study of a single person, group, organization, or culture. Most common are in behavior therapy, behavior modification, and applied behavior analysis

25
New cards

Case study research

An in-depth, usually long-term, usually unconstrained, examination of a single case for either descriptive or hypothesis testing purposes. Select cases based on validity (heterogeneity among test cases and access or opportunity to collect), and units of analysis

26
New cards

Units of analysis

Level(s) of aggregation at which to collect data, specific to case studies but this can be applied more generally too.

Example

  • Medical student stress

    • Levels include class, student, or both

    • Follow a class throughout their time in school on their stress

    • Could look at an individual student’s stress levels or a couple of them

27
New cards

For data collection in a single-case research, research must…

Plan carefully (formulate plan before data collection for DVs, operational definitions, and sources of information), search for disconfirming evidence, and maintain a chain of evidence

28
New cards

Single-case experiment

Like a case study, except the experimenter exerts more control over the research situation. Obtains a baseline, manipulates the IV, controls for extraneous variables, and assesses the DV continuously

29
New cards

A-B Design

Also called baseline design. Includes assessment of a behavior at baseline and then introduces the independent variable. Used because the removal of the treatment could cause harm.

30
New cards

A-B-C-B design

Assesses the behavior over a baseline period, introduces the treatment, introduces a comparison condition, and then reinstates the original treatment.

Example - baseline of blood alcohol tests twice a week for three weeks, then contingent reinforcement (a reward for a zero BAC), then noncontingent reinforcement (a reward regardless of BAC), then back to contingent reinforcement

31
New cards

A-B-A design

Also called the reversal design. Assesses behavior over a baseline period, introduces the treatment, and removes the treatment.

Example

  • Degree of distress 15 min after chemotherapy is the baseline, then introduce a treatment of a video game

    • Measured distress levels from the chemotherapy (e.g., side effects) across the chemotherapy sessions

    • Had child play video games after receiving chemotherapy

    • Looked at over 13 chemotherapy sessions

32
New cards

Evidence of an effect in single-case experiments

Magnitude, immediacy, continuation during long-term follow up. Look at graphs.

33
New cards

Why is a stable baseline important for single-case experimental research?

Helps us to draw valid conclusions. If the baseline is unstable, it is less obvious if the treatment had an effect. Sometimes researchers will wait for baselines to stabilize before having the intervention.

34
New cards

Trend (in baseline data)

Baseline data that increase or decrease over time, which can make it difficult to evaluate the effectiveness of the treatment. A baseline slope that continues into the treatment period is a problem because it is unclear whether the treatment had an effect or whether the baseline trend is simply continuing and the treatment actually has no impact.

35
New cards

Variability (of baseline data)

Baseline data that has high variability, can make it difficult to evaluate the effectiveness of the treatment

36
New cards

Visual data analysis

The researcher plots a graph of the results and examines it to determine whether the IV had an effect on the DV. Look at magnitude, immediacy, continuation at follow-up, and whether behavior returns to near pre-treatment levels when the treatment is withdrawn

37
New cards

Qualitative research

Research method that uses interviews, participant observation, and/or document analysis to find meaning in words and texts. Takes a social constructivist perspective and takes researchers’ biases into account when interpreting and disseminating findings from their studies. The purpose and goals focus less on prediction and more on explanation and description, and also add exploration.

38
New cards

Thick description (qualitative research characteristic)

Make note of and analyze rich details about scenes. Piece together these details to create a holistic understanding

  • Example - anthropologist field notes; try to capture as much as they possibly can

    • Observations for individuals, multiple participants, how they interact, other things going on in the setting

39
New cards

Bricolage (qualitative research characteristic)

Get multiple perspectives and use multiple forms of data to create a meaningful story. All the data, whatever form they take, will be qualitative rather than numerical

40
New cards

Naturalistic (qualitative research characteristic)

Examine naturally occurring events in everyday settings

41
New cards

Narrative approach

Examine a single or a few individuals whose stories are used to illuminate larger social issues. Having a person or multiple people telling you their stories.

42
New cards

Phenomenological approach

Highlight several individuals’ lived experiences and what they have and don’t have in common

Example

  • Two college students inhabit the same role, but their day-to-day realities vary

    • Student A - navigate interactions with a roommate

    • Student B - balancing school with raising a family

43
New cards

Grounded theory

Strives to generate a new theory of a social process that is grounded in, or stems from, the data

44
New cards

Theoretical saturation

Locate and interview participants until new data cease to spark original theoretical ideas; In qualitative research this matters more than sample size

45
New cards

Ethnography

Immersive study of a group, community, and/or social world. Field notes may be a more common data collection tool. Think anthropology. May also have gatekeepers.

46
New cards

Gatekeeper

Grants or denies permission to enter or conduct research in a specific setting

Example

  • Get permission from tribal leaders to work with individual tribe members

47
New cards

Autoethnography

Connect the analysis of one’s own identity, culture, feelings, and values to larger social issues

48
New cards

Non-probability sample

The probability of any person being chosen for participation is unknown

49
New cards

Maximum variation sampling

Selecting individuals and/or sites that are purposely different from each other; have a more diverse sample

50
New cards

Snowball sampling

People who agree to participate nominate others who they think might also be willing to participate

51
New cards

Highly-structured interview

Every participant is asked the same questions in the same order

52
New cards

Semi-structured interview

The interviewer has a list of questions to try to cover but might ask them in a different order based on the conversation flow and ask follow-up questions

53
New cards

Low-structured interview

The interviewer tries to cover general themes but might not have a concrete list of questions, letting the interview go wherever the participant leads it

54
New cards

Types of interview questions

  • Introductory - describe an occasion when…

  • Probing - give an example of…

  • Specifying - walk through step-by-step

  • Direct - was the experience positive or negative to you?

  • Interpretation - am I understanding you correctly that…

55
New cards

Researcher characteristics (qualitative)

  • Consider whether and how researcher characteristics may influence the participants

  • Advisable for researcher’s basic characteristics to match the respondent’s

    • race/age/sexual orientation

56
New cards

Transcription

Create a verbatim record of the conversation; expect about 3 hours of transcription for every 1 hour of interview.

57
New cards

Memoing

Writing down ideas as you go through the transcription process and noticing patterns and themes across people; Part of the transcription process that can be lost when AI does the transcription work

58
New cards

Data analysis (qualitative research)

Qualitative data are examined early in the collection process, and new data are gathered based on those examinations to help flesh out potential themes and patterns. Data snooping is questionably ethical in quantitative research but is part of the process in qualitative research, where it is simply called data analysis

59
New cards

Coding

Reducing the data into meaningful segments and assigning names for the segments

60
New cards

Open coding

Coding is unrestricted and the codes are phrases explicitly mentioned by participants. As you learn more, you might go back to earlier participants

61
New cards

Axial coding

Done after you have a good idea of what codes you are using; Open codes are combined into broader themes or subthemes and comparisons are made between them

62
New cards

Themes

Broad units of information that consist of several codes aggregated to form a common idea

63
New cards

Representation (qualitative research)

Researcher is conscious of the biases, values, and experiences they bring to qualitative research. Requires self-disclosure on the part of the researcher, such as a positionality statement

64
New cards

Encoding

How a piece is written, word choices, whether specific jargon is used, extent to which methods are addressed. Use wording like “Procedures” rather than “methods” or “Findings” rather than “results”

65
New cards

Program monitoring

Also called process evaluation. Continuing assessment of how well the program is being implemented while it is being carried out.

66
New cards

Formative evaluation

Used to monitor the process or development of a program. Example - evaluation during the program, could revise materials and alter procedures for recruitment after evaluation

67
New cards

Summative evaluation

Used to assess the overall effectiveness of the program. Example - completed at end of the program to answer the question of whether the program had the expected impact on health-related behaviors

68
New cards

Target population

 The particular group of people, such as adolescents or substance abusers, the intervention is intended to reach

69
New cards

Program implementation failure sources

  • Lack of specific criteria and procedures for program implementation

  • Insufficiently trained staff members

  • Inadequate supervision of staff provides opportunity for treatments to drift away from intended course

    • Programs must be tailored to client’s needs to some degree, but it cannot be so tailored that it changes from the intended treatment form

  • Novel treatment program is implemented with a staff who do not believe in its effectiveness, who are used to doing things differently, or who feel threatened by the new procedure, which can lead to resistance or even sabotage

70
New cards

Client resistance in program implementation

  • May be suspicious of the goals of the program or are uncertain about the effects of the program

  • Inaccessibility - lack of transportation, restricted operating hours, difficulty locating the service site, or a site seen as dangerous

  • Threats to client dignity - demeaning procedures, intrusive questions, rude treatment

  • Failure to consider a client’s culture - values, lifestyle, treatment needs, and language difficulties

  • Unusable services - printed materials with a reading level too high for clients, written in a language in which clients are not fluent, or printed too small for visually impaired clients

  • Reduce resistance by ensuring all viewpoints are considered, such as by using focus groups, and by explaining any unchangeable aspects of the program that caused concern for members of the focus group

71
New cards

Unintended effects (program implementation)

  • Side effects of the program

  • Unintended effects can exacerbate the problem a program was intending to alleviate

    • Example - program to reduce problem behaviors like substance use for adolescents, program effective in short term but in long run those in the group had more substance use than those in the control groups

    • Some unintended effects can also be positive but are less likely to be reported because they are not considered problems

72
New cards

Criteria for evaluating impact of the program

  • Degree of change

    • Change for each of the goals, means and effect sizes

  • Importance of change

    • Goal attainment can be defined either in terms of meeting some preset criterion of improvement or relative to the level at which an outcome is found in a criterion population

    • Example - 0 panic attacks over a 2 week period

    • Number of goals achieved

    • Durability of the outcomes

  • Cost of the program

    • cost-efficiency analysis

  • Acceptability of the program

    • For clients, staff, etc

73
New cards

The ideal strategy for evaluation research is _____ but ______ are the most commonly used evaluation research strategy

true experiment, quasi-experiment

74
New cards

Threats to internal validity in evaluation research

  • Composition of control or comparison group

  • Treatment diffusion

  • Staff compensates control group with some benefits of the treatment group

  • People who are aware they are in the control group might feel a rivalry with the treatment group

  • Resentful demoralization

  • Local history events

75
New cards

Treatment diffusion

Members of the control group learn about the treatment from members of the treatment group and try to apply the treatment to themselves

  • Example - students in the control group made their own Students Against Drunk Driving (SADD) group

76
New cards

Resentful demoralization

Members of the control group learn they are being deprived of a program that could benefit them and so reduce any efforts they might have been making to solve their problem themselves

77
New cards

Pre-experimental design

When a pretest-posttest design doesn’t include a control group, it is called a pre-experimental design. Sometimes you can’t have a no-treatment control group. Should be avoided when possible - studies have found that these designs overestimate treatment effects compared to true experiments and quasi-experiments

78
New cards

Meta-analysis

  • Results of a set of studies that test the same hypotheses are statistically combined

  • Can be used to find the average effect for a treatment

  • Can also estimate effects of possible moderators and what conditions in the program are more or less effective

79
New cards

Interpretation of null results (evaluation programs)

May result from program failure (a true null), such as implementation failure, or from poor research, such as sampling error or unreliable measures. May also find a null result between two treatment groups; if they have the same outcome, we can consider using the less expensive or less time-consuming treatment.

80
New cards

Cost-benefit analysis

Compare the dollar cost of operating a program to the benefits that occur when objectives are achieved. Assumption that all outcomes can be expressed in monetary terms

81
New cards

Target population

The group of people we want our research to apply to

82
New cards

Study population

People who meet our operational definition of the target population

83
New cards

Research sample

The people from the study population from whom we collect our data

84
New cards

Probability sampling

Every member of the study population has a known probability of being selected for the research sample

85
New cards

Sampling frame

List of all the people in the study population, such as a roster of all the students attending a particular college or university

86
New cards

Simple random sampling

Researcher uses a table of random numbers to select the participants, process continues until the desired number of participants is acquired
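A minimal sketch of the idea (hypothetical roster; Python's random module stands in for a table of random numbers):

```python
import random

# Sampling frame: a roster of everyone in the study population
sampling_frame = [f"student_{i}" for i in range(1, 5001)]

# Select participants at random, without repeats, until the desired
# sample size is reached; every name has the same known chance of selection
sample = random.sample(sampling_frame, k=100)
```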

87
New cards

Stratified random sampling

Sampling frame is arranged in terms of the variables used to create the sample, such as with a quota matrix

88
New cards

Quota matrix

Each person in the sampling frame is categorized by gender, ethnicity, and class and is assigned to the appropriate cell; researchers then sample randomly from each cell in proportion to its representation in the population
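A rough sketch with made-up categories and numbers, assuming proportional sampling from each cell of the quota matrix:

```python
import random

random.seed(1)
# Toy sampling frame: each person tagged by gender and class standing
frame = [(f"person_{i}",
          random.choice(["woman", "man"]),
          random.choice(["first-year", "senior"]))
         for i in range(2000)]

# Group the frame into quota-matrix cells
cells = {}
for person, gender, year in frame:
    cells.setdefault((gender, year), []).append(person)

# Sample randomly from each cell in proportion to its share of the frame
target_n = 200
sample = []
for members in cells.values():
    quota = round(target_n * len(members) / len(frame))
    sample.extend(random.sample(members, quota))
```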

89
New cards

Systematic sampling

Start with a sampling frame and select every nth name, where n sets the proportion of the frame that you sample (e.g., every 10th name gives a 1-in-10 sample)
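A minimal sketch (hypothetical frame): pick a random starting point within the first interval, then take every nth name.

```python
import random

frame = [f"student_{i}" for i in range(1, 1201)]   # sampling frame of 1,200 names
desired_n = 60
interval = len(frame) // desired_n                 # every 20th name -> a 1-in-20 sample

start = random.randrange(interval)                 # random start within the first interval
sample = frame[start::interval]
```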

90
New cards

Cluster sampling

Identify groups or clusters of people who meet the definition of the study population; take a random sample of the clusters and use all members of the sampled clusters as research participants
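A minimal sketch with made-up clusters (classrooms standing in for whatever groups meet the study-population definition):

```python
import random

random.seed(2)
# 40 classrooms (clusters) of 25 students each
clusters = {f"classroom_{c}": [f"student_{c}_{i}" for i in range(25)]
            for c in range(40)}

sampled_clusters = random.sample(list(clusters), k=8)              # random sample of clusters
participants = [p for c in sampled_clusters for p in clusters[c]]  # every member of each sampled cluster
```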

91
New cards

Nonprobability sampling

the probability of the person’s being chosen is unknown

92
New cards

Convenience sample

Consists of people from whom the researcher finds it easy to collect data

93
New cards

Quota samples

Convenience samples are stratified using a quota matrix

94
New cards

Purposive sampling

Researchers use their judgment to select the membership of the sample based on the research goals, frequently used in case study research

95
New cards

Snowball sampling

People who are initially recruited for a study by convenience or purposive sampling nominate acquaintances they think might be willing to participate in the research

96
New cards

Statistical power

1 - beta, the probability of not making a Type II error; adequate statistical power is needed to avoid false-negative results. Depends on factors such as the alpha level, the size of the effect of the IV on the DV, and the size of the research sample
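A sketch of how these pieces interact, assuming the statsmodels power module is available (the effect size and sample size are illustrative, not from the card):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of an independent-samples t test with d = 0.5, alpha = .05 (two-tailed),
# and 64 participants per group comes out to roughly .80
print(analysis.solve_power(effect_size=0.5, nobs1=64, alpha=0.05))

# Or solve for the per-group sample size needed to reach 80% power
print(analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80))
```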

97
New cards

Type I error

alpha level, usually set at .05, represents the probability of saying there is a relationship when one actually doesn’t exist

98
New cards

Type II error

beta, represents the probability of saying there isn’t a relationship when there actually is one

99
New cards

To determine your sample size, what should you be thinking about?

  • What effect size are you trying to detect

  • What alpha level will you use

  • Will you use a one-tailed or a two-tailed statistical test

  • What level of power do you want

100
New cards

Critical effect size

Target effect size for your study.

Example - consider the smallest effect you consider important to your theory, such as a critical effect size of r = .25, so anything smaller than that is treated as a correlation of 0
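As an illustration (a standard Fisher-z approximation, not taken from the card), detecting a critical effect size of r = .25 with 80% power at a two-tailed alpha of .05 requires roughly 120-125 participants:

```python
import math
from scipy.stats import norm

r, alpha, power = 0.25, 0.05, 0.80

z_r = 0.5 * math.log((1 + r) / (1 - r))                         # Fisher z transform of r
n = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / z_r) ** 2 + 3

print(math.ceil(n))   # about 124 participants
```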