Research Design - Semantic 3

Description and Tags

Definitions, facts, concepts


31 Terms

1. Meta-analysis

A statistical technique that combines the results of multiple studies to identify patterns, inconsistencies, or overall effects in a specific area of research. The independent variable is typically the topic (or topics) being studied, and the dependent variable is effect size.
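As a sketch of how results are combined, a minimal fixed-effect (inverse-variance) pooling of effect sizes in Python; the effect sizes and variances are made-up illustrative values, not from any real meta-analysis:

```python
# Hypothetical per-study effect sizes (e.g., Cohen's d) and sampling variances.
effects = [0.30, 0.55, 0.42]
variances = [0.04, 0.09, 0.06]

# Fixed-effect pooling: weight each study by the inverse of its variance.
weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)

print(round(pooled, 3))
```

More precise studies (smaller variances) pull the pooled estimate toward their own effect sizes.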

2. Purposive sampling

A non-probability sampling technique where the researcher selects participants based on specific characteristics or qualities to achieve a particular purpose in a study. Done to increase control over the representativeness of the sample on a particular domain.

3. Systematic review

A comprehensive survey of existing literature that systematically evaluates and synthesizes research studies on a particular topic, often including criteria for study selection and assessment of study quality. It does not include a statistical analysis of study results.

4. Statistical regression

A natural tendency for the performance of a given participant on a measure to “regress” towards the norm (population mean) over time, threatening internal validity.

5. Marlowe-Crowne assessment

A psychological measurement tool used to assess social desirability bias in personality testing, helping researchers gauge whether responses reflect an individual's true feelings and attitudes rather than a desire to appear favorable.

6. Variable-centered design

An observational design where the independent variable is an ongoing, naturally-occurring process. It focuses on the relationship between variables rather than individual cases, often used to identify mechanisms of change.

7. Contingency table

A statistical tool used to analyze the relationship between two categorical variables by displaying their frequencies in a matrix format. Often used to depict the interactions of all independent variable levels.
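A minimal sketch of building such a frequency matrix in Python; the group labels and (group, outcome) observations are hypothetical:

```python
from collections import Counter

# Made-up observations: each pair is (group, outcome) for one participant.
observations = [
    ("treatment", "improved"), ("treatment", "improved"),
    ("treatment", "not improved"),
    ("control", "improved"),
    ("control", "not improved"), ("control", "not improved"),
]

# Count each (group, outcome) combination, then lay it out as a 2x2 table.
counts = Counter(observations)
rows = ["treatment", "control"]
cols = ["improved", "not improved"]

print("group".ljust(10), *(c.ljust(13) for c in cols))
for r in rows:
    print(r.ljust(10), *(str(counts[(r, c)]).ljust(13) for c in cols))
```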

8. History treatment effects

Refers to the effectiveness of an intervention being contingent on events occurring outside of the study, therefore rendering it un-reproducible unless the same events happen again. It threatens external validity.

9. Person-centered design

An observational study design where the independent variable changes across groups of people in different clinical settings, as opposed to continuously changing variables, typically relying on cluster analysis to compare between conditions.

10. Random assignment

A method used in experimental research where participants are assigned to different treatment groups by chance, minimizing biases and ensuring error is distributed non-systematically. Considered a prerequisite for adequate internal validity.
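A minimal sketch of shuffle-based random assignment to two equal groups in Python; the participant IDs are placeholders, and the seed is fixed only so the example is reproducible:

```python
import random

# Hypothetical participant IDs.
participants = ["P01", "P02", "P03", "P04", "P05", "P06"]

rng = random.Random(42)   # seeded for a reproducible example
rng.shuffle(participants) # assignment order is now determined by chance

# Split the shuffled list into equal-sized treatment and control groups.
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print("treatment:", treatment)
print("control:  ", control)
```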

11. Post-test only experimental design

A type of experimental design that involves measuring the dependent variable after the treatment has been administered, without any pre-test measurements, consequently lowering internal validity.

12. Waitlist control design

A research design where participants are placed on a waiting list to receive treatment after another treatment group has been studied, allowing for comparison between groups using time as the primary manipulation.

13. ABA design

A type of single-subject experimental design consisting of three phases: a measurement phase (A), a treatment phase (B), and a withdrawal phase (A), which helps in evaluating the effect of an intervention on behavior.

14. Relative risk

A ratio of the probability of an event occurring in the exposed group to the probability of it occurring in a non-exposed group, often used in epidemiological studies to assess the impact of a risk factor. Unlike the odds ratio, it compares probabilities rather than odds.
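A worked example in Python, using made-up 2x2 counts (20 events among 100 exposed, 10 among 100 non-exposed):

```python
# Hypothetical counts: events and group sizes.
exposed_event, exposed_total = 20, 100
unexposed_event, unexposed_total = 10, 100

# Relative risk compares probabilities (risks), not odds.
risk_exposed = exposed_event / exposed_total        # 0.20
risk_unexposed = unexposed_event / unexposed_total  # 0.10
relative_risk = risk_exposed / risk_unexposed

print(relative_risk)  # 2.0: the event is twice as likely in the exposed group
```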

15. Trim and fill method

A statistical procedure used to assess and adjust for publication bias in meta-analyses by removing (“trimming”) studies assessed to have biased (positive) findings, then imputing (“filling”) mirror-image counterparts with negative findings to restore symmetry in the funnel plot. If the meta-analyst has access to a study’s dataset, they may also identify values driving the bias and remove them before entering the study back into the distribution.

16. Interrupted time series design

A research design that analyzes time series data (multiple measurements at different points in time) before and after an intervention to assess its impact, allowing for the exploration of trends and changes over time.

17. Quasi-experimental design

A research design that resembles an experimental design but lacks random assignment to treatment or control groups, often used when randomization is not feasible. Control groups are typically referred to as comparison groups in these studies.

18. Combined control group design

An experimental design in which the control group receives the treatment given to the experimental group in addition to interventions thought to be inactive or counteractive to the success of the treatment in question. Often used to identify an “active ingredient” in the treatment itself.

19. Pseudo-experimental design

An experimental design that lacks both manipulated independent variables and control groups.

20. Dismantled control group design

An experimental design that involves breaking down treatment components to assess their individual effects in comparison to a control group. Most typically, the control group receives select components of the full treatment given to the treatment group.

21. Odds ratio

A statistic that quantifies the odds of an event occurring in a target group relative to the odds of it occurring in a reference (null) group. It is expressed as a fraction, where the numerator represents the odds in the target group and the denominator represents the odds in the reference group. An odds ratio greater than 1 indicates higher odds in the target group.
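A worked example in Python with hypothetical counts (30 events vs. 70 non-events in the target group, 10 vs. 90 in the reference group):

```python
# Hypothetical counts for each group.
target_event, target_no_event = 30, 70
reference_event, reference_no_event = 10, 90

# Odds = events / non-events within each group.
odds_target = target_event / target_no_event           # 30/70
odds_reference = reference_event / reference_no_event  # 10/90

# Odds ratio: target odds over reference odds; > 1 means higher odds
# in the target group.
odds_ratio = odds_target / odds_reference

print(round(odds_ratio, 2))
```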

22. Placebo control design

An experimental design that involves using a placebo group to compare against a treatment group, allowing researchers to assess the effects of the treatment while controlling for the psychological impact of receiving treatment.

23. Standard treatment control design

An experimental design that compares a control group receiving a standard or established treatment against a treatment group that receives a new treatment being tested. This design helps evaluate the effectiveness of the new treatment against the standard.

24. Intent-to-treat analysis

A strategy in clinical trials where participants are analyzed based on their initial treatment assignment, regardless of whether they completed the treatment or adhered to the protocol. This approach helps maintain the benefits of randomization while controlling for attrition. It mitigates threats to internal validity but typically yields a more conservative (smaller) effect size estimate.

25. Multiple baseline design

A single-case research design that involves staggered implementation of interventions across multiple subjects, settings, or behaviors, allowing for comparison of the effects of the intervention while controlling for potential confounding variables. Each baseline typically follows a measurement-treatment (AB) pattern, so no withdrawal phase is required.

26. ABAB design

A research design that alternates between measurement and treatment, often used in single-subject research. Subsequent measurements and treatments are referred to as “withdrawals” and “reversals”, respectively.

27. Experimental single-subject design

A single-subject design that includes an a priori hypothesis to be tested, operational definitions pertaining to that hypothesis, systematic measurement to assess the validity of operational definitions, and statistical methods to analyze patterns in the measurements obtained.

28. Rosenthal’s Minimum Threshold

The minimum magnitude of mean effect size a meta-analysis must show to be considered meaningful, gauged by the fail-safe N: the number of unpublished null results that would be needed to render the mean effect size nonsignificant.
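As a sketch of the underlying fail-safe N arithmetic, assuming Stouffer-combined z scores and a one-tailed z = 1.645 significance threshold; the per-study z values below are made up:

```python
# Hypothetical z scores from k significant studies.
z_values = [2.1, 1.9, 2.5, 2.3]
k = len(z_values)
z_sum = sum(z_values)

# Stouffer's combined z is z_sum / sqrt(k + N) once N null (z = 0) studies
# are added. Setting it equal to 1.645 and solving for N gives Rosenthal's
# fail-safe N: the number of null studies needed to erase significance.
fail_safe_n = (z_sum / 1.645) ** 2 - k

print(round(fail_safe_n, 1))
```

A large fail-safe N relative to k suggests the pooled result is robust to unpublished null findings.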

29. SMART research design

A multistage randomized design with multiple treatment groups and no untreated control. Participants who do not show improvement under their initial treatment are re-randomized to an alternative treatment at predefined decision points. SMART stands for Sequential Multiple Assignment Randomized Trial.

30. Experimenter drift

The changes in a researcher's behavior or adherence to protocols during the course of a study, most often because of personal disagreement or impatience (boredom) with the protocol.

31. I² statistic

A measure used in meta-analysis to quantify the degree of heterogeneity among study results, indicating the percentage of variation that is due to heterogeneity rather than chance.
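A common formulation is I² = (Q − df)/Q × 100, where Q is Cochran's heterogeneity statistic and df = k − 1 for k studies; negative values are truncated to zero. A minimal Python sketch with illustrative inputs:

```python
def i_squared(q: float, k: int) -> float:
    """Percentage of variation attributable to heterogeneity rather than
    chance, computed from Cochran's Q and the number of studies k."""
    df = k - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

# Example: Q = 18 across 10 studies (df = 9) -> half the variation
# exceeds what chance alone would produce.
print(round(i_squared(q=18.0, k=10), 1))
```

Values near 0% suggest the studies estimate a common effect; higher values flag substantial between-study differences.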