intro. to research midterm


92 Terms

1

definition of nursing research

systematic inquiries about nursing practice that include research, testing, and evaluation

  • designed to develop or contribute to generalizable knowledge

2

definition of evidence based practice (EBP)

characterized by best research evidence, clinical expertise, and patient preferences and values

3

EBP goals

  • lifelong approach to clinical practice

  • translates knowledge with the goal of improving practice

4

impact of nursing research VS. impact of EBP

nursing research: growing knowledge base of nursing

EBP: translate the knowledge into practice

5

nursing research impact

generates the knowledge base for our nursing practice

6

EBP impact

translates the knowledge base that we generated through research into our practice

7

deductive reasoning

  • top-down approach

  • uses general premises to reach specific conclusions

  • used by most nurse researchers

  • ex: start from a general theory and derive a specific, testable hypothesis from it

8

inductive reasoning

  • bottom-up approach

  • uses specific observations to form general conclusions

  • ex: use specific observations from individual patients to build a general conclusion or theory

9

what is a PICO question?

a framework for writing a good, researchable question for EBP or nursing research

10

what does each letter in PICO stand for

P - population

I - intervention

C - comparison

O - outcome

11

how to identify POPULATION (P) in PICO

how would I describe this group of patients?

ex: age, gender, geographic location

12

how to identify INTERVENTION (I) in PICO

which main intervention, management strategy, diagnostic test, etc. am I interested in?

13

how to identify COMPARISON (C) in PICO

is there a control or alternative you would like to compare to the intervention?

14

how to identify OUTCOME (O) in PICO

what can I hope to measure, accomplish, improve, or affect?

15

what is sampling?

selecting a subset of the population to participate in the research study

16

why is sampling important?

for generalizability and external validity

  • we want our sample to be representative of the population
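To make the idea of a representative sample concrete, here is a rough Python sketch (not from the lecture; the accessible population of 1,200 student IDs and the sample size of 150 are hypothetical) of simple random sampling, where every member of the accessible population has an equal chance of being selected:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# hypothetical accessible population: 1,200 student ID numbers
accessible_population = list(range(1, 1201))

# simple random sample: every student has an equal chance of selection
sample = random.sample(accessible_population, k=150)

print(len(sample))   # 150 participants
print(sample[:5])    # a few randomly chosen IDs
```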

17

definition of target population

who we ultimately want our results to apply to

ex: All elementary school students in the nation

18

definition of accessible population

the subset of people from the target population who we could reasonably enroll in our study

ex: Elementary school students in Pittsburgh, PA

19

definition of sample

the individuals who meet the criteria and enroll and participate in our study

ex: A randomly selected group of 150 students from the accessible population

20

what affects sampling error?

  • sample size

  • heterogeneity/variability

21

what happens to the sampling error when the sample size increases (increases/decreases)

the sampling error decreases

22

what happens to the sampling error when variability increases (increases/decreases)

increases

23

definition of sampling error

the difference between our sample statistic and our population parameter (the population parameter is what we really want to know)
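A rough simulation sketch (hypothetical blood-pressure values, Python standard library only) can show what sampling error looks like and why it shrinks as the sample size grows:

```python
import random
import statistics

random.seed(0)

# hypothetical population: 10,000 systolic blood pressure values
population = [random.gauss(120, 15) for _ in range(10_000)]
population_mean = statistics.mean(population)  # the parameter we really want to know

for n in (10, 100, 1000):
    errors = []
    for _ in range(500):                        # draw 500 samples of size n
        sample = random.sample(population, n)
        sample_mean = statistics.mean(sample)   # the sample statistic
        errors.append(abs(sample_mean - population_mean))  # sampling error
    avg_error = statistics.mean(errors)
    print(f"n = {n:>4}: average sampling error ≈ {avg_error:.2f}")  # shrinks as n grows
```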

24

do we want the sampling error to be small or large

SMALL

25

point estimate

our single best guess of the unknown population parameter

26

confidence interval

the amount of uncertainty around the estimate

  • can be narrow or wide
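For illustration, a minimal Python sketch (hypothetical pain scores, large-sample normal approximation with z = 1.96) computing a point estimate and its 95% confidence interval might look like this:

```python
import math
import statistics

# hypothetical pain scores (0-10) from a sample of 20 patients
scores = [3, 5, 4, 6, 2, 5, 4, 3, 5, 6, 4, 4, 3, 5, 2, 6, 4, 5, 3, 4]

point_estimate = statistics.mean(scores)                 # single best guess of the parameter
se = statistics.stdev(scores) / math.sqrt(len(scores))   # standard error of the mean

lower = point_estimate - 1.96 * se   # 1.96 ≈ z-value for 95% confidence
upper = point_estimate + 1.96 * se

print(f"point estimate: {point_estimate:.2f}")
print(f"95% CI: ({lower:.2f}, {upper:.2f})")  # narrower interval = more precise
```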

27

do we want the confidence interval to be large or small

SMALL, so that the estimate is more precise

28

what does it mean when confidence intervals overlap

means there is no statistically significant difference between the two groups

  • the treatment and control are equivalent

  • one is not better than the other

29

what does it mean when confidence intervals do NOT overlap

means that one is higher/better than the other

  • one is superior

  • there is a statistically significant difference

  • p < 0.05
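A rough sketch (hypothetical outcome scores, normal-approximation CIs) of comparing two groups' 95% confidence intervals and checking whether they overlap, as the two cards above describe:

```python
import math
import random
import statistics

def ci95(values):
    """Normal-approximation 95% confidence interval for the mean."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    return m - 1.96 * se, m + 1.96 * se

random.seed(1)
treatment = [random.gauss(70, 10) for _ in range(50)]  # hypothetical outcome scores
control = [random.gauss(60, 10) for _ in range(50)]

t_lo, t_hi = ci95(treatment)
c_lo, c_hi = ci95(control)

overlap = t_lo <= c_hi and c_lo <= t_hi   # do the two intervals share any values?
print(f"treatment 95% CI: ({t_lo:.1f}, {t_hi:.1f})")
print(f"control   95% CI: ({c_lo:.1f}, {c_hi:.1f})")
print("CIs overlap -> no clear difference" if overlap
      else "CIs do NOT overlap -> statistically significant difference")
```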

30

data collection methods

what types of data would you collect with each method?

31

definition of reliability

means CONSISTENCY

  • getting the same thing over and over again

  • without reliability, we would have no confidence in the data we collect

32

definition of validity

ACCURACY, TRUENESS

  • the extent to which the instruments used measure exactly the concept that you want them to measure

33

types of validity

CONTENT, CRITERION, CONSTRUCT

34

what is content (type of validity)

involves the degree to which the content of the test matches a content domain associated with the construct

35

what is criterion (type of validity)

the correlation between the test and available criterion variables taken as representative of the construct

36

what is construct (type of validity)

the extent to which your test or measure accurately assesses what it's supposed to

37

types of bias from structured observation and surveys

  • social desirability

  • recall bias

  • response bias

  • extreme response bias

  • acquiescence

38

what is a study design?

a guide for the research process

  • the blueprint

  • provides the structure needed to maintain control in the study

39

Quantitative research

uses numbers

40

qualitative research

everything BUT numbers

41

hierarchy of evidence

study designs arranged in terms of internal validity, i.e., how strongly they support cause and effect

  • randomized controlled trials sit at the top

  • the top of the hierarchy provides the strongest evidence

  • evidence that the independent variable changes the dependent variable

42

definition of experimental design

the process of carrying out research in an objective and controlled fashion so that precision is maximized

  • specific conclusions can be drawn regarding a hypothesis

43

3 required properties of true experimental design

R.M.C.

  1. Randomization

  2. Manipulation

  3. Control
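As a small illustration of the first property above (randomization), here is a hypothetical Python sketch (participant names and group sizes are made up) that randomly assigns 20 enrollees to treatment and control groups:

```python
import random

random.seed(7)

participants = [f"participant_{i}" for i in range(1, 21)]  # 20 hypothetical enrollees
random.shuffle(participants)        # the randomization step

half = len(participants) // 2
treatment_group = participants[:half]  # will receive the manipulated independent variable
control_group = participants[half:]    # will receive usual care / placebo (the control)

print("treatment:", treatment_group)
print("control:  ", control_group)
```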

44

definition of blinding

the concealment of group allocation from participants, clinicians, and/or outcome assessors

  • keeps groups equivalent in everything but the independent variable

45

definition of allocation concealment

hides the sorting of trial participants into groups so that this knowledge cannot be exploited

46

definition of intervention fidelity

participants receive the intervention or instructions exactly as described in the study protocol

47

definition of intention to treat analysis

analyzing participants in the group to which they were assigned, whether or not they complete the study

  • you are analyzed as randomized, regardless of whether you adhere to or complete the intervention

48

definition of independent variable

the variable you change (manipulate) in an experiment

  • the cause factor you are testing to see if it affects something else

49

definition of dependent variable

the results or the effects you measure in the experiment

  • what you observe or count to see if it changes when the independent variable changes

50

definition of Quasi-experimental design

research approach that aims to establish a cause-and-effect relationship, but LACKS random assignment of participants to groups. 

51

what makes quasi-experimental design different from an experimental design

  • DOES NOT have randomization

  • participants are placed in groups by some method OTHER than randomization

  • might NOT have a control group

52

definition of confounding

a variable that influences both the independent and dependent variables

  • can be measured and analyzed

  • can also be unmeasured

53

definition of bias

any deviation from the truth during the process that can lead to FALSE INFORMATION

54

strengths of quasi-experimental designs

  • practical

  • less expensive

  • more generalizable

55

weaknesses of quasi-experimental design

not able to truly test the cause and effect

56

definition of observational designs

researchers observe and record data about a phenomenon or group without intervening or manipulating any variables, allowing them to study naturally occurring events or behaviors

  • JUST OBSERVING BEHAVIORS

57

observational vs. experimental/quasi

observational: NO active manipulation

experimental/quasi: active manipulation

58

definition of cohort study

start with individuals WITH the exposure of interest AND individuals WITHOUT the exposure, then follow both groups over time for the outcome

59

good things about cohort study

  • calculate incidence, prognosis, natural history

  • give us temporality

  • we know the exposure

60

bad things about cohort study

  • expensive

  • time consuming

  • not good for rare outcomes

61

definition of case control study

individuals with the outcome of interest and individuals without the outcome of interest are identified

  • these 2 groups are studied retrospectively to compare the frequency of the exposure

62

good things about case control study

  • good fit for rare outcomes b/c they are based on outcomes

  • less expensive

63

bad things about case control study

  • limited to ONE outcome

  • recall bias

  • sampling bias

64

definition of cross-sectional study

a group of people is observed, or certain information is collected, at a single point in time or over a short period of time

65

good things about cross-sectional study

  • for prevalence

  • inexpensive

  • easy

66

bad things about cross-sectional study

  • no temporality

  • sampling bias

  • not good for rare outcomes

67

exposure relative to outcome

when the exposure (the factor of interest) is measured relative to when the outcome occurs

68

timeline terms (3)

  • prospective - look to the future

  • retrospective - looking back in time

  • simultaneous - in the moment

69

objective of the research definition

concise statement of the specific goals and aims of a research study

70

definition of internal validity

degree to which change in the dependent variable can be definitely attributed only to the independent variable and not to other variables

71

definition of external validity

the generalizability of findings beyond the experimental people and settings to other people and settings

72

what is a systematic review

studies of studies

  • uses clearly stated, scientific research methods designed to minimize bias

  • can be qualitative

73

what is a meta-analysis

  • a unique type of systematic review that statistically combines the results of multiple studies

  • can be quantitative

74

how to interpret a forest plot

Each horizontal line on a forest plot represents an individual study, with the result plotted as a box and the 95% confidence interval of the result displayed as the line

  • you want a study's horizontal line NOT to cross the middle vertical (no-effect) line, which indicates a statistically significant result
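For illustration only, a rough matplotlib sketch (all study names, estimates, and confidence intervals are made up) of how a forest plot is drawn, with one horizontal CI line per study and a vertical no-effect reference line:

```python
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D"]   # hypothetical studies
estimates = [0.80, 1.10, 0.65, 0.90]                      # made-up risk ratios
ci_half_widths = [0.15, 0.30, 0.20, 0.40]                 # made-up 95% CI half-widths

y = range(len(studies))
# one marker (the "box") and one horizontal CI line per study
plt.errorbar(estimates, y, xerr=ci_half_widths, fmt="s", capsize=3)
plt.axvline(1.0, linestyle="--")      # vertical "no effect" line (risk ratio = 1)
plt.yticks(y, studies)
plt.xlabel("risk ratio (95% CI)")
plt.title("Sketch of a forest plot")
plt.show()
```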

75

3 levels of measurement

  1. nominal

  2. ordinal

  3. interval

76

what is nominal level of measurement

  • categories (NO ranking)

    • Ex: sex, blood type, marital status

77

what is ordinal level of measurement

categories AND ranked (the intervals between ranks are not equal)

  • ex: scale → strongly agree, agree, disagree

78

what is interval level of measurement

  • ordered categories with equal intervals between values

79

types of error

  • type 1

  • type 2

80

type 1 error means

false positive

  • reject a true null hypothesis

81

type 2 error means

false negative

  • fail to reject a false null hypothesis
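A rough simulation sketch (hypothetical normally distributed data and a crude two-sample z-test, not a course formula) showing a type 1 error as a false positive under a true null and a type 2 error as a false negative under a false null:

```python
import math
import random
import statistics

def rejects_null(a, b):
    """Crude two-sample z-test: True if |z| > 1.96 (roughly p < 0.05, two-sided)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return z > 1.96

random.seed(3)
trials = 2000
type1 = type2 = 0

for _ in range(trials):
    # null hypothesis is TRUE: both groups drawn from the same distribution
    a = [random.gauss(0.0, 1.0) for _ in range(30)]
    b = [random.gauss(0.0, 1.0) for _ in range(30)]
    if rejects_null(a, b):
        type1 += 1   # rejected a true null -> false positive (type 1)

    # null hypothesis is FALSE: the groups truly differ by 0.5
    c = [random.gauss(0.0, 1.0) for _ in range(30)]
    d = [random.gauss(0.5, 1.0) for _ in range(30)]
    if not rejects_null(c, d):
        type2 += 1   # failed to reject a false null -> false negative (type 2)

print(f"type 1 error rate ≈ {type1 / trials:.1%}")  # should land near 5%
print(f"type 2 error rate ≈ {type2 / trials:.1%}")  # depends on effect size and sample size
```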

82

statistical significance VS. clinical significance

  • Statistical significance : indicates the likelihood that an observed effect isn't due to chance

  • clinical significance : assesses the practical importance and real-world impact of that effect on patient care

83

definition of emergent design

the initial plan is NOT tightly described; the design emerges and evolves as data are collected

84

definition of reflexivity

the researcher is an instrument, so the researcher needs to be aware of bias and experiences

85

definition of purposive sampling

not trying to be generalizable, BUT selecting individuals who can best answer the research question

86

definition of data saturation

how we get our sample size

  • keep sampling until no new information can be attained

87

definition of triangulation

using multiple sources to corroborate evidence to increase accuracy

88

definition of memoing

researchers record notes about the data and their emerging ideas while they observe

89

definition of bracketing

the researcher sets aside their biases and prior interpretations

90

definition of coding

organizing text into themes and common ideas

91

definition of phenomenology

aimed at understanding the lived experience of individuals

92

definition of grounded theory

seeks to develop a theory grounded in data, focusing on social processes and interactions

  • illustration or figure of a process
