Flashcards covering key concepts from lecture notes on research, evidence-based practice, validity, bias, types of inquiry, hierarchy of evidence, and experimental design fundamentals.
What is the distinction between research and evidence?
Research refers to results from individual studies, while evidence refers to cumulative results across studies.
How is evidence defined in research?
Evidence is the synthesis of all valid research studies that answer a specific question.
What does evidence-based practice involve?
Evidence-based practice involves tracking down available evidence, assessing its validity, and using the best evidence to inform treatment decisions.
Is a single study considered sufficient evidence?
No, clinicians must distinguish good from bad research and consider cumulative results.
Besides the research literature, what else must clinicians consider for evidence-based practice?
Clinicians must also consider clinical circumstances, experience and professional judgment, and the patient's values and preferences.
What does 'closeness to the truth' refer to in research?
It refers to the degree to which the design and methods of a study provide for accurate investigation of the event in question, often relating to validity.
What two types of validity can research reviewers assess?
Internal and external validity.
What are examples of threats to internal validity?
Selection bias, maturation, and instrumentation.
How can many threats to internal validity be addressed?
Random assignment to groups addresses many threats to internal validity.
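The mechanics of random assignment can be sketched in a few lines of Python. This is a hypothetical illustration (the `randomize` helper and the participant IDs are invented for the example, not from the notes): shuffling the pool before splitting it means both known and unknown characteristics tend to balance out across groups.

```python
import random

def randomize(participants, n_groups=2, seed=None):
    """Shuffle the pool, then deal participants into groups.

    Because assignment ignores every participant characteristic,
    confounders tend to distribute evenly across groups.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    # Deal round-robin: every n_groups-th participant to each group.
    return [shuffled[i::n_groups] for i in range(n_groups)]

ids = list(range(1, 21))  # 20 hypothetical participant IDs
treatment, control = randomize(ids, seed=42)
print(len(treatment), len(control))  # prints: 10 10
```

Seeding the generator is only for reproducibility of the example; a real trial would not fix the seed.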
What is external validity?
External validity is the ability to generalize findings beyond the subjects, environmental constraints, and temporal periods of the current study.
How does increasing internal validity often affect external validity?
As internal validity increases (more controls), the generalizability of findings (external validity) may suffer.
How is bias defined in research?
Bias is a systematic error that causes a preference for one outcome over another, resulting from problematic or incomplete controls that lead to skewed observations.
What is selection bias?
Selection bias occurs when procedures for selecting participants are different across groups.
What is channeling bias?
Channeling bias occurs when participants are placed in study conditions according to their prognosis, age, fragility, or other characteristics.
What is interviewer bias?
Interviewer bias is an error introduced by the researcher collecting data.
What is chronology bias?
Chronology bias occurs with historic controls subject to changes in practice over time.
What is recall bias?
Recall bias is characterized by skewed or faulty recollection of events or associations by participants.
What is transfer bias?
Transfer bias refers to differential attrition (loss of participants) across different study conditions.
What is misclassification bias?
Misclassification bias occurs due to problems with the operational definition of grouping variables.
What is performance bias?
Performance bias refers to differences in the clinical quality of intervention across providers.
What is citation bias?
Citation bias occurs when the evidence a reviewer locates is limited to what is cited and accessible; because studies with significant results tend to be cited more often, they are easier to find.
What is publication bias?
Publication bias occurs when previous evidence is not available due to publication preferences, often favoring significant results.
What is confounding (as a type of bias)?
Confounding occurs when an observed association is actually produced by an extraneous variable, often unrecognized, that correlates significantly with both the independent and dependent variables.
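How a confounder manufactures a spurious association can be shown with a small simulation (a sketch with invented variable names and effect sizes): here the "exposure" `x` has no direct effect on the "outcome" `y`, yet the two correlate strongly because both are driven by the unmeasured variable `z`.

```python
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
z = [rng.gauss(0, 1) for _ in range(2000)]    # unmeasured confounder
x = [zi + rng.gauss(0, 0.5) for zi in z]      # "exposure": driven by z
y = [zi + rng.gauss(0, 0.5) for zi in z]      # "outcome": also driven by z

# x never enters the formula for y, yet they correlate via z.
print(round(pearson(x, y), 2))
```

Randomly assigning `x` instead of letting `z` determine it would break the `z`-to-`x` link, and the spurious correlation would vanish.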
What is the philosophical root of quantitative inquiry?
Quantitative inquiry is rooted in empiricism, meaning only phenomena that can be measured are considered 'real', often using numeric scales.
What is the philosophical basis of qualitative inquiry?
Qualitative inquiry is based in hermeneutics, the interpretation of contextual meaning; it often relies on subjective measures and is therefore shaped by perceptual biases.
What is considered the strongest level of evidence in the hierarchy of evidence?
Systematic reviews and meta-analyses.
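A meta-analysis pools the effect estimates of individual studies into one summary estimate. One common approach (a fixed-effect, inverse-variance weighted average; the effect sizes below are made up for illustration) weights each study by the inverse of its variance, so more precise studies count for more:

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted mean of study effects.

    Studies with smaller variance (greater precision) receive
    proportionally more weight in the pooled estimate.
    """
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1 / sum(weights)
    return pooled, pooled_var

# Hypothetical mean-difference effects from three studies.
effects = [0.30, 0.10, 0.25]
variances = [0.04, 0.01, 0.02]
est, var = fixed_effect_pool(effects, variances)
print(round(est, 3), round(var, 3))  # prints: 0.171 0.006
```

Note the pooled estimate sits closest to the second study's effect, because its small variance gives it the largest weight.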
What is considered the weakest level of evidence in the hierarchy of evidence?
Case reports.
What defines experimental studies?
Experimental studies are a type of quantitative inquiry that investigates 'cause' by having the researcher control or manipulate the variables under investigation.
What defines non-experimental studies?
Non-experimental studies may be quantitative or qualitative; they lack full experimental controls and investigate relationships rather than cause. Designs that manipulate a variable but lack random assignment are sometimes called 'quasi-experimental'.
How is a variable defined in research?
A variable is any factor relevant to a particular study, which may be known or unknown.
What is an independent variable?
An independent variable is a factor or condition that changes naturally or is intentionally manipulated by the investigator to observe its effect, also known as the 'causative factor'.
What is a dependent variable?
A dependent variable is an observed variable in an experiment or study whose changes are determined by the 'level' of one or more independent variables, also known as the 'response' or 'outcome'.
What is a confounding variable?
A confounding variable is an extraneous variable that correlates significantly with both the dependent and independent variables, or a factor not recognized by the researcher that significantly impacts the outcome.
What is the difference between prospective and retrospective research?
Prospective research looks at events or constructs that have not yet happened or been measured, while retrospective research looks at data that already exist.
What type of research provides the strongest evidence for demonstrating cause and effect?
Randomized Controlled Trials (RCTs).
How does random assignment reduce bias in RCTs?
Random assignment reduces the effect of bias due to intervening variables by making it likely that confounding conditions will be distributed equally across groups.
What are the key components of a true experimental design?
A true experimental design includes at least one 'varied condition', concurrent enrollment, random assignment to groups, and follow-up.
What is the purpose of blinding in research?
Blinding attempts to reduce bias due to the expectations or preconceptions of patients or investigators (e.g., in a double-blind study).