Research Methods in Applied Psychology
Data Types in Applied Psychology
Two main data types are used to assess outcomes in applied psychology:
Quantitative data
Qualitative data
Researchers may also use mixed-method designs that combine both data types
Quantitative Data
Definition: numerical observations and measurements that allow objective comparisons across groups or conditions; enables tracking changes over time
Example in therapy evaluation:
Randomly assign participants to receive a new therapy vs. no therapy (control)
Measure anxiety levels, depression levels, etc. at end of treatment
Use numerical data to compare outcomes between groups and determine therapy effectiveness
Timeframe example:
Measure anxiety before therapy and after a year of therapy to assess sustained change
Advantages:
Objectivity and ability to test hypotheses using statistical methods
Clear comparisons across conditions
Key terms to remember:
Depression/anxiety scores as dependent variables; treatment condition as independent variable
Ability to compute averages, variances, and change over time
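The group comparison described above can be sketched numerically; the anxiety scores below are hypothetical, purely for illustration.

```python
from statistics import mean, pstdev

# Hypothetical post-treatment anxiety scores (lower = less anxious);
# these numbers are illustrative, not from a real study.
therapy = [12, 15, 10, 14, 11, 13]
control = [20, 18, 22, 19, 21, 17]

# Treatment condition is the independent variable; the anxiety
# score is the dependent variable compared across groups.
print(mean(therapy), mean(control))   # group averages
print(mean(control) - mean(therapy))  # mean difference between groups
print(pstdev(therapy), pstdev(control))  # spread within each group
```

Averages and spreads like these are the raw material for the statistical tests that decide whether the therapy outperformed the control condition.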
Qualitative Data
Definition: non-numerical descriptions of a phenomenon, such as interview transcripts, field notes, or open-ended responses
Nature: highly subjective; shaped by participants’ experiences and researchers’ interpretations
Purpose: identify patterns and commonalities across participants rather than individual scores
Example: understanding fears
Conduct interviews in a class to identify greatest fears
Transcribe interviews and look for recurring themes (patterns across transcripts)
Discuss common fears (e.g., fear of dogs, fear of the dark)
Mixed interpretation: qualitative data can be coded and quantified
Example: coding responses on a five-point scale (1–5) to translate qualitative themes into quantitative data
Even when the final results are quantitative, the data collection remains qualitative in origin
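The coding step above can be sketched as follows; the fear themes and 1–5 intensity codes are hypothetical examples of how a coder might translate interview transcripts into numbers.

```python
from collections import Counter

# Hypothetical coded interview responses: each transcript is tagged
# with a fear theme and a coder-assigned intensity on a 1-5 scale.
coded_responses = [
    ("fear of dogs", 4),
    ("fear of the dark", 2),
    ("fear of dogs", 5),
    ("fear of public speaking", 3),
    ("fear of the dark", 1),
]

# Qualitative themes become countable categories...
theme_counts = Counter(theme for theme, _ in coded_responses)

# ...and the 1-5 codes become quantitative data that can be averaged.
ratings = [rating for _, rating in coded_responses]
average_intensity = sum(ratings) / len(ratings)

print(theme_counts.most_common(1))
print(average_intensity)
```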
Strengths and limitations:
Strength: rich context and insight into subjective experiences
Limitation: less generalizable and more interpretive; potential bias in interpretation
When used: often to complement quantitative data or to explore new areas where little is known
Mixed-Methods Design
Definition: combine quantitative and qualitative data within a single study
Rationale: leverage strengths of both approaches; cross-validate findings; gain both breadth (quantitative) and depth (qualitative)
Classroom example discussed:
Qualitative work on sentiments toward Christianity translated into quantitative scores, then analyzed statistically
Practical note: even when ending with quantitative results, the data collection may have strong qualitative roots
Why Applied Psychologists Collect Data
Purposes overlap with those of basic research, but the emphasis differs
Primary goal in applied settings: program evaluation and real-world impact
Test interventions, programs, or therapies to determine effectiveness
Produce findings that inform practice and policy in schools, communities, or larger systems
Unique position: test lab-developed theories in real-world contexts to see if trends hold outside controlled environments
Relationship to basic research:
Theories often originate in basic (lab) research and are tested in applied settings
Applied work can confirm, extend, or challenge theoretical predictions in real-world conditions
Distinctions from basic research:
Primary aim of applied research: evaluation of effectiveness, cost, feasibility, and practical impact
Importance of incorporating resource constraints and practical significance
Terminology:
Program evaluation: the systematic assessment of how well a program achieves its goals in applied settings
Priorities may shift from strict experimental control toward accountability and policy implications
Naturalistic Observation (Applied Settings)
Definition: researchers observe participants’ behavior in their natural environment
Two main modes:
Unobtrusive (passive) observation: participants unaware they are being studied
Embedded observation (field immersion): researcher becomes part of the group or setting
Ethical considerations:
In unobtrusive observations, informed consent may be challenging; in public places, there is greater latitude to observe without consent
Deception considerations: sometimes used to avoid demand effects, but ethical standards require debriefing and consent when possible
The Hawthorne effect: behavior changes due to being observed; a key caution in observational studies
Public vs. private context:
Public spaces: observational notes may be ethically permissible with proper justification
Private spaces: typically require informed consent and more rigorous ethical safeguards
Practical notes:
Observational coding can yield qualitative data or can be quantified (e.g., tallying behaviors from video)
Causality is difficult to establish in naturalistic observation; correlations can be drawn but not definitive causation without experimental control
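Tallying behaviors from a recording, as noted above, can be sketched like this; the behavior labels and event log are hypothetical.

```python
from collections import Counter

# Hypothetical event log from coding a classroom video: each entry
# is one observed behavior, recorded in the order it occurred.
observed_events = [
    "hand_raise", "off_task", "hand_raise", "peer_help",
    "off_task", "off_task", "hand_raise",
]

# Tallying turns qualitative observations into frequency counts,
# which can then be compared across sessions or settings.
tallies = Counter(observed_events)
print(dict(tallies))
```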
Forms of observational research:
Simple naturalistic observations in classrooms or home settings
Field experiments (manipulated scenarios within natural settings to study causal effects)
Longitudinal field observations in sociological or developmental contexts
Questions often addressed:
What behaviors occur under natural conditions?
How do group dynamics unfold in real-world settings?
Observational Approaches: Covert vs. Overt, Inside Groups
Covert observation: researcher participates without informing participants
Overt observation: participants know they are being observed
Key distinctions:
Covert helps avoid participant reactivity but raises ethical concerns
Overt improves transparency but may induce reactivity (participants modify behavior because they know they are watched)
Embedding in groups (ethnographic style):
Researchers become part of the group to observe genuine behavior over time
Ethical and methodological challenges: balancing insider access with objectivity; building enough trust to observe natural behavior
Applied examples:
Sociological fieldwork in cults or tight-knit groups; researchers live among participants to observe routines and norms
Ongoing terms:
Overt observation: clear identification and consent
Covert observation: hidden presence; higher ethical risk but can yield more natural behavior
Survey Methods in Applied Psychology
Purpose: gathering broad attitudes, satisfaction, opinions, or experiences from a sample
Example: employee satisfaction program
Pre-program survey to establish baseline satisfaction across work aspects
Implement program
Post-program survey to assess changes in satisfaction
Design limitations:
Pretest–posttest single-group designs are vulnerable to multiple confounds (history, maturation, measurement effects)
Without a control group, it is hard to attribute change to the program itself
Strengthening survey designs:
Use control or comparison groups (randomly assigned locations or departments) to isolate program effects
When possible, implement alternating or multiple sites to enable between-group comparisons
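The control-group logic above can be sketched as a comparison of change scores; the two sites and their satisfaction ratings are hypothetical.

```python
from statistics import mean

# Hypothetical satisfaction scores (1-10) at two sites: one receives
# the program, the other serves as a comparison group.
program_pre, program_post = [5, 6, 4, 5], [7, 8, 6, 7]
control_pre, control_post = [5, 5, 6, 4], [5, 6, 6, 5]

# Change within each group; comparing changes (not just the program
# group's pre/post difference) helps rule out history and maturation.
program_change = mean(program_post) - mean(program_pre)
control_change = mean(control_post) - mean(control_pre)

# The program's estimated effect is the difference in changes.
print(program_change - control_change)
```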
Data interpretation:
Quantitative aggregation: use averages and variances to assess changes rather than focusing on individual scores
Outliers exist; statistical methods (e.g., trimming, robust statistics) can help, but design quality remains critical
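Trimming, mentioned above, can be sketched with a simple trimmed mean; the ratings and the `trimmed_mean` helper are illustrative, not a standard library function.

```python
from statistics import mean

# Hypothetical satisfaction ratings with one extreme outlier.
scores = [6, 7, 5, 6, 7, 6, 1]

def trimmed_mean(values, trim=1):
    """Drop the `trim` lowest and highest values, then average.
    A simple robust alternative to the plain mean."""
    ordered = sorted(values)
    return mean(ordered[trim:len(ordered) - trim])

print(mean(scores))          # pulled down by the outlier
print(trimmed_mean(scores))  # less sensitive to the extreme score
```

As the notes caution, robust statistics soften the impact of outliers but cannot substitute for a sound design.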
Advanced designs:
Random assignment to experimental vs. control groups with multiple comparison groups
Use of multiple locations to enable between-group comparisons that strengthen causal inference
Ethical considerations: ensure informed consent and data privacy in survey administration
Case Studies
Definition: in-depth study of a small number of cases or participants
Characteristics:
Typically involve interviews or detailed data collection from a handful of participants
Can be qualitative or (less commonly) quantitative through coding and counting instances
Strengths:
Rich, contextual information; deep understanding of a phenomenon in a specific setting
Limitations:
Small sample sizes limit generalizability
Often used for exploratory work or theory-building
When used in applied psychology:
Useful for understanding unique or atypical cases, or for illustrating complex processes
Pretest–Posttest, Experimental, and Quasi-Experimental Designs
Pretest–posttest design (single-group):
Measure before intervention, then after intervention
Issues: history, maturation, regression to the mean; lacks a comparison group
Between-subjects experimental design (Randomized Controlled Trials, RCTs):
Randomly assign participants to different groups (e.g., experimental vs. control)
Can include multiple comparison groups to isolate aspects of the program/treatment
Gold standard for causal inference but often more labor-intensive and costly
Quasi-experimental designs:
Follow similar procedures to experiments but lack random assignment
Ethical or practical constraints may prevent randomization
Closer to causal conclusions than correlational studies but with greater risk of confounds
Longitudinal studies:
Track participants over extended periods
Provide information about change over time and long-term effects
Higher cost and attrition risk; useful for correlational analyses and time-series insights
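Random assignment, the core feature of the experimental designs above, can be sketched as a seeded shuffle; the participant IDs are hypothetical.

```python
import random

# Hypothetical participant pool; random assignment (not to be
# confused with random sampling) allocates them to conditions.
participants = [f"P{i:02d}" for i in range(1, 21)]

rng = random.Random(42)  # seeded so the allocation is reproducible
rng.shuffle(participants)

# Split the shuffled list into two equal-sized groups; chance, not
# the researcher, decides who lands in which condition.
half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print(len(treatment_group), len(control_group))
```

Because assignment is random, the groups should be roughly equivalent on both observed and unobserved variables before the intervention begins.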
Experimental Methods: Randomized Controlled Trials (RCTs) and Variants
Randomized controlled trials (RCTs):
Core feature: random assignment to conditions to create equivalence on both observed and unobserved variables
Can include multiple comparison groups or active control groups
Strength: strong internal validity; supports causal inferences
Variants and practical considerations:
Multiple comparison groups allow identification of what components are effective
In practice, randomized trials can be resource-intensive; not always feasible
When random assignment is not possible:
Quasi-experiments retain closer causal inference than purely observational studies but cannot fully rule out confounds
Sampling for Applied Research
Key concepts:
Population: the entire group you aim to learn about
Sample: a subset of the population actually studied
Sampling frame: the source from which the sample is drawn
Random sampling (probability sampling):
Purpose: ensure every member of the population has an equal chance of selection
Benefits: improves representativeness and generalizability
Potential issue: small random samples may, by chance, underrepresent small subpopulations (e.g., minority groups) in a heterogeneous population
Stratified random sampling: divide population into subgroups (strata) and sample from each stratum
Stratified random sampling example and formula:
Suppose a population of size N is divided into strata, with stratum i containing N_i individuals; for a total sample size n, the sample drawn from stratum i is n_i = n × (N_i / N)
Proportional allocation ensures the sample mirrors the population structure
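The proportional allocation described above can be sketched as follows; the strata names and sizes are hypothetical.

```python
# Hypothetical strata sizes for a population of N = 1000 students.
strata = {"first_year": 400, "second_year": 300, "upper_year": 300}
N = sum(strata.values())
n = 100  # desired total sample size

# Proportional allocation: n_i = n * (N_i / N), so each stratum's
# share of the sample matches its share of the population.
allocation = {name: round(n * size / N) for name, size in strata.items()}
print(allocation)
```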
Nonprobability sampling (non-random):
Convenience sampling (accidental sampling): participants are selected due to ease of access (e.g., first-year undergraduates)
Purposive sampling: deliberately targets specific individuals or groups relevant to the research question
Consequence: results may not generalize to the broader population; interpret with caution
Population and representativeness:
Representativeness means the sample reflects the population of interest well enough to generalize findings
Sampling frame choices influence representativeness (e.g., using undergraduates as a proxy for all university students)
Sampling strategies in practice:
Use random sampling when possible for generalizable results
When random sampling is impractical, use stratified sampling or carefully defined nonprobability samples with clear limitations declared
Distinction recap:
Random sampling = how you select participants from a population
Random assignment = how you allocate participants to experimental conditions
Ethics in Research with Humans (and Animals)
Why ethics approvals are required in academic settings:
Historical abuses in research (e.g., Nazi experiments; Tuskegee syphilis study) demonstrated the need for safeguards
Ethical protocols protect participants, ensure informed consent, minimize harm, and safeguard privacy
Key ethical concepts:
Informed consent: participants understand the study and agree to participate
Risk/benefit assessment: weighing potential harms against benefits
Confidentiality and data protection
Right to withdraw without penalty
Debriefing: explaining deception, if used, and study aims after participation
Public history context mentioned:
The Tuskegee syphilis study (1932–1972), in which participants were not informed of their diagnosis and were denied treatment even after it became available
The importance of oversight and ethical guidelines to prevent harm
Ethical implications for applied psychology:
Balancing scientific knowledge with participants’ rights and wellbeing
Ensuring research contributes to welfare without exploiting vulnerable groups
Practical and Philosophical Implications
The role of research design in real-world impact:
Internal validity (causal conclusions) vs. external validity (generalizability)
Trade-offs between rigorous control (laboratory) and ecological validity (real-world settings)
The legitimacy of deception and demand characteristics:
Deception can prevent bias, but raises ethical concerns and requires debriefing
The value of mixed methods:
Rich data from qualitative work paired with generalizable results from quantitative work
Real-world relevance:
Programs in schools, workplaces, and communities rely on rigorous evaluation to justify costs and guide improvements
Foundational principles:
Randomization, representativeness, transparency, and replication as cornerstones of credible applied research
Key Formulas and Notation (Illustrative)
Sample mean estimator: x̄ = (x_1 + x_2 + … + x_n) / n
Random sampling probability (simple random sample): each population member’s chance of selection is n / N
Stratified random sampling allocation (proportional): n_i = n × (N_i / N)
Pretest–posttest change (for a single group): change = posttest score − pretest score
Conceptual difference in means (two groups): effect = mean(treatment) − mean(control)
Notes on interpretation:
Use of averages and variability helps mitigate the impact of individual outliers as sample size grows
Representativeness improves the likelihood that findings generalize to the population of interest
Quick Reference: Glossary Highlights
Applied psychology: research focused on evaluating and improving real-world programs and interventions
Naturalistic observation: observing behavior in natural settings with or without participant awareness
Hawthorne effect: changes in behavior due to awareness of being observed
Randomized Controlled Trial (RCT): participants randomly assigned to conditions to infer causality
Quasi-experiment: similar to RCT but without random assignment
Longitudinal study: data collected from the same participants over time
Stratified random sampling: sampling within defined subgroups to ensure representation
Convenience sampling: nonrandom sampling from readily available participants
Purposive sampling: nonrandom sampling targeting specific individuals to study particular traits
Informed consent, debriefing, and ethical approval: safeguards for participants in research