Evidence-Based Practice & Research Methods
Evidence Based Practice (EBP)
make better decisions / get desired outcomes
make informed decisions (use reliable/relevant info)
making:
conscientious: commit effort/resources to finding evidence
explicit: describe in detail claims being made
judicious: make critical judgements of evidence presented
use of best available evidence
EBP: Multiple Sources
use diff sources, some may be:
more/less reliable/relevant
tell diff stories
clarify other evidence
EBP Main Sources Used
scientific literature (studies)
org data (internal)
stakeholders concerns
professional expertise
must ask questions to gather evidence to help identify the problem
EBP: Structured Approach (6 Steps)
Ask (translate into askable q → helps to find evidence to determine if issue must be tackled)
Acquire (ask questions within 4 sources + gather evidence)
Appraise (examine evidence, judge if reliable)
Aggregate (weigh & combine the available evidence)
Apply (use answers to make a better-informed decision of what to do / not do)
Assess (evaluate effects of choice)
Scientific Approach
=aims to understand/predict/control something of interest
logical approach to investigation
depends on data (lab/real world)
communicable, open, public (other scientists can access)
sets out to disprove theories/hypotheses
disinterestedness (objective, not influenced by biases)
Theory v. Hypothesis
T: explanation based on observations
H: educated guess for expected outcome before beginning experiment
Why do OB Scientists Engage in Research
= those making decisions in orgs can:
learn from past mistakes
improve decision making
Questions Considered Before Research Begins
research in lab/field? (L = ↑ control, but too artificial, can't generalize) (F = tests theories in real world, ↓ control, hard to determine causation)
who will participants be
how will participants be assigned to diff conditions (ex. random assignment)
variable of interest
how variables measured
→ determines research design
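One of the design questions above, random assignment, can be sketched in a few lines. A minimal illustration (the function name and data are invented, not from the notes):

```python
import random

def random_assignment(participants, conditions):
    """Shuffle participants, then deal them round-robin into conditions."""
    pool = list(participants)
    random.shuffle(pool)  # randomize order so assignment is unbiased
    groups = {c: [] for c in conditions}
    for i, person in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# 20 hypothetical participants split between control and experimental groups
groups = random_assignment(range(1, 21), ["control", "experimental"])
```

Because every participant has an equal chance of landing in either condition, pre-existing differences are spread evenly across groups on average.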
Population v. Sample
P: large group of interest the researcher selects a sample from
S: smaller group of people who participate in the study (should be representative of pop)
Variable
measures that can take on more than one value
independent: being manipulated, changing dependent
dependent: expected to vary as result of changes in independent
Moderating v. Mediating Variable
MO
3rd variable changes strength/direction of relationship b/w IV & DV
ME:
explains relationship b/w IV & DV (how they're related)
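One common way to formalize moderation is a regression with an interaction term: the IV→DV slope depends on the moderator's level. A hedged sketch with made-up coefficients (b0–b3 are illustrative, not from the notes):

```python
def predicted_dv(iv, moderator, b0=1.0, b1=0.5, b2=0.2, b3=0.8):
    """DV = b0 + b1*IV + b2*MOD + b3*IV*MOD.

    A nonzero b3 (interaction term) means MOD moderates the IV-DV link:
    the strength of the IV's effect changes with the moderator's value.
    """
    return b0 + b1 * iv + b2 * moderator + b3 * iv * moderator

# Slope of DV on IV when moderator = 0: b1 = 0.5
# Slope of DV on IV when moderator = 1: b1 + b3 = 1.3 (stronger relationship)
```

A mediator, by contrast, would sit *between* IV and DV in a causal chain (IV → mediator → DV) rather than altering the slope.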
3 Types of Research Design
Experimental: IV/DV, control/experimental, random assignment
Quasi-experimental: can't randomly assign participants to conditions, not a true experiment
Nonexperimental: gathers info w/o creating diff conditions (observations/surveys), most common in OB
Internal Validity v. Threats to Internal Validity
confidence of researcher that changes in DV are caused by changes in IV
threats: factors that are alternative explanations for results (esp. w/o control group/random assignment)
Threats to Internal Validity Examples
selection of participants: experimental group differs from control
testing: the pre-test itself might influence how participants respond afterward
instrumentation: differences in measures
statistical regression: extreme scores regress toward the mean over time
history: events during course of experiment causing changes
maturation: changes in participants w time passage
mortality: people dropping out, remaining people differ
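The "statistical regression" threat can be demonstrated with a small simulation. A sketch with illustrative numbers, assuming normally distributed ability and measurement noise:

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

# True ability is stable; each observed test score = ability + random noise
ability = [random.gauss(50, 10) for _ in range(1000)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Select the 100 highest scorers on test 1 (an "extreme" group)
top = sorted(range(1000), key=lambda i: test1[i], reverse=True)[:100]

mean_t1 = sum(test1[i] for i in top) / 100
mean_t2 = sum(test2[i] for i in top) / 100
# On retest the extreme group scores closer to the overall mean (mean_t2 < mean_t1),
# even though nothing was done to them between tests
```

This is why an apparent "improvement" in a group selected for extreme scores can be an artifact rather than a treatment effect.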
Why are Nonexperimental Designs Used Most Commonly in OB
experiments in lab are limited in their ability to simulate work as experienced by workers
experiments at the workplace are difficult to implement
experiments in lab rely on convenience samples (students)
Quantitative v. Qualitative
scientists use either to collect data
Quan: uses tests/rating scales/questionnaires, gives results using numbers (more objective, but eliminates context surrounding data)
Qual: observations/interviews/case studies/analysis of written documents (identifies context around behaviour)
Generalizability
can results be applied to other groups/settings?
only if sample = representative of pop
Reliability v. Validity
R: consistency of research subjects' responses
V: extent to which a measure truly reflects what it's supposed to measure
convergent: strong relationship b/w diff measures of same variable
discriminant: weak relationship b/w measures of diff variables, unrelated constructs
Observational Research (Participant v. Direct)
= examine natural activities in the org, observe what people say/do, objective
narrative form (case study)
participant: researcher is a functioning member of the unit being studied (pro: secrecy; those observed don't know they're being studied)
direct: researcher observes org behaviour w/o taking part in the studied activity
lack of control over environment, hard to generalize, best for initial examination
Correlational Studies
= examine relationships among variables w/o introducing manipulation
correlation doesn't equal causation
uses:
surveys: questionnaires gathering data
interview: quantitative/qualitative
existing data: info from org records
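Correlational data is typically summarized with a correlation coefficient. A minimal sketch with hypothetical survey numbers (the variables and values are invented for illustration):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation: sample covariance divided by the product of std devs."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical survey data: job satisfaction vs. weekly overtime hours
satisfaction = [7, 6, 5, 5, 4, 3, 3, 2]
overtime     = [1, 2, 3, 4, 5, 6, 7, 8]
r = pearson_r(satisfaction, overtime)
# r is close to -1: a strong negative association, but this alone
# says nothing about which variable (if either) causes the other
```

The coefficient quantifies the strength and direction of the relationship; it cannot distinguish causation from a third variable driving both.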
Cross Sectional v. Longitudinal Studies
C: IV/DV measured @ same time
L: IV measured @ one time, DV measured another (hard, time consuming)
Hawthorne Effect
favourable response by participants in an org experiment
result of a factor other than the IV being manipulated (e.g., the attention of being observed)