IB Psychology – Key Concepts, Research Methods & Exam Framework
Conceptual Learning & Hierarchy of Concepts
- Modern IB pedagogy stresses “conceptual learning / concept-based teaching” → students must recognise broad ideas that organise specific facts.
- Definition of a concept
- A general idea applicable to many specific instances.
- Example: chair, sofa, bed = instances; “furniture” = overarching concept.
- Knowing the higher-level concept accelerates understanding of new instances.
- Concept hierarchy in every discipline
- Bottom: very specific terms (e.g., Multi-Store Model of Memory, serotonin, visual cortex).
- Middle: more generic, explanatory terms (e.g., chemical messengers, localization of function, cognitive processes).
- Top: key concepts that capture the discipline’s essence → in IB Psychology there are six.
Six Key Concepts in IB Psychology
- Bias
- Causality
- Perspective
- Measurement
- Change
- Responsibility
- Any topic can be viewed through any concept, and the concepts overlap (e.g., explaining behaviour involves causality; using explanations to predict or alter behaviour involves change; the ethical use of findings involves responsibility).
Bias
- General definition: systematic deviation from truth (≠ random error).
- Requires a source; identifying sources helps eliminate/adjust for bias.
- Three broad manifestations
- Bias in research procedures
- Researcher bias, participant bias, sampling bias, publication bias, confirmation bias, gender/cultural bias, dominant respondent bias.
- Additional sources even when the word “bias” is absent: order effects, demand characteristics, experimental mortality, social desirability.
- Link: credibility (trustworthiness, validity, reliability) ↔ absence of bias.
- Bias in interpretation of findings (theoretical orientations)
- Biological or environmental reductionism → inflate some variables, downplay others.
- Bias in behaviour itself
- Cognitive biases, stereotypes, discrimination, diagnostic bias in mental disorders.
- Acceptable exam links: any justified connection between bias & studied content.
Causality
- Relationship where one variable causes another.
- Four scientific functions: Describe → Explain → Predict → Control.
- Example: astronomy moved from description of orbits to causal laws (gravity).
- Psychology’s challenge: human behaviour is complex & multi-determined.
- IB organises factors into biological, cognitive, sociocultural.
- Experiments try to isolate factors; they are the only method enabling cause–effect inferences.
- Evaluative questions
- Bidirectional ambiguity?
- Direct vs mediated causation?
- Short- vs long-term?
- Side effects?
- Domino (chain) causality?
- Strong causal claims demand high internal validity & bias control.
Perspective
- A “way of looking” at phenomena.
- IB tri-level framework: Biological, Cognitive, Sociocultural perspectives.
- Additional levels
- Competing psychological theories/models as perspectives.
- Alternative interpretations of the same data.
- Holistic understanding usually needs multiple perspectives.
Measurement
- Broadly: using research to obtain data about behaviour.
- Includes quantitative and qualitative data (not only numbers).
- The quality of measurement determines the depth and objectivity of psychological knowledge.
- Key considerations & examples
- Choosing an appropriate research method/tool.
- Impact of instruments on credibility.
- Brain-imaging techniques (strengths, limits).
- Effect size, statistical significance.
- In qualitative work: reflexivity, trustworthiness.
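The effect-size idea listed above can be made concrete with a minimal sketch: Cohen's d is the standardised difference between two group means, divided by the pooled standard deviation. The memory-test scores below are invented purely for illustration.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: standardised mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = sqrt(((n1 - 1) * stdev(group1) ** 2 +
                      (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical recall scores for two conditions (not real data)
treatment = [14, 15, 16, 17, 18]
control = [10, 11, 12, 13, 14]
d = cohens_d(treatment, control)
```

By convention, d ≈ 0.2 is read as a small effect, 0.5 medium, and 0.8 large; reporting d alongside a p-value separates "how big" from "how unlikely by chance".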
Change
- Generic meaning: process through which something becomes different.
- Two high-frequency IB examples
- Natural development (e.g., cognitive maturation, symptom progression, social learning, enculturation/acculturation).
- Purposeful intervention (e.g., therapy, technological aids, compliance strategies, operant conditioning, prevention programmes).
- Less-frequent example: changing prevalence rates across time/populations.
Responsibility
- Psychologists wield power → responsible for process & consequences of research.
- Predominantly ethical considerations
- During research (deception, consent, protection from harm, right to withdraw, debrief).
- Animal research specifics.
- Data handling: anonymisation, data sharing.
- Reporting & dissemination: avoid stigma, oversimplification; advocate policy responsibly.
- Ethics committees’ decision-making.
- Responsibility considerations permeate every topic because all knowledge stems from research.
Psychology as a Scientific Study
- Working definition: “Psychology is the scientific study of behaviour and mental processes.”
- Demarcation from non-science (TOK crossover)
- Based on empirical evidence.
- Falsifiability: theories must be refutable.
- Replication: history of independent tests.
- Clever Hans illustration
- Initial claim: horse performed arithmetic.
- Pfungst’s systematic falsification revealed experimenter cues.
- Demonstrated empirical testing & guarding against bias.
Behaviour vs Mental Processes
- Behaviour = directly observable (actions, facial expressions, physiological responses).
- Mental processes = internal (attention, perception, memory, thinking) → inferred through behaviour.
- Researchers use behavioural indicators (e.g., eye gaze for attention, cortisol for anxiety).
Overview of Research Methods
Quantitative vs Qualitative
| Aspect | Quantitative | Qualitative |
|---|---|---|
| Aim | Nomothetic – derive universal laws | Idiographic – in-depth understanding |
| Data | Numbers | Texts (transcripts, notes) |
| Researcher role | Detached / objective | Involved / reflexive |
Quantitative Core Concepts
- Variable: measurable characteristic.
- Construct: theoretically defined variable (e.g., anxiety, love).
- Operationalisation: turning constructs into observable measures (e.g., “number of swear words” for verbal aggression).
Three Quantitative Methods
- Experiment
- Manipulate IV → observe DV; controls → causal inference.
- Correlational study
- Measure variables, compute relationship; no causality.
- Quantitative descriptive (survey)
- Describe distribution of a variable; no relation tested.
Qualitative Methods
- Observation (naturalistic / controlled; overt / covert; participant / non-participant).
- Interview / Focus group.
- Case study.
Analysing Research Quality
Generalizability (External Validity)
- Sample → population
- Population validity (quantitative) ↔ Sample-to-population / inferential generalizability (qualitative).
- Setting → real life
- Ecological validity (quantitative) ↔ Transferability / case-to-case generalizability (qualitative).
- Data → construct/theory
- Construct validity (quantitative) ↔ Theoretical generalizability (qualitative).
Credibility / Internal Validity & Bias
- Quantitative term: internal validity (was DV change due solely to IV?).
- Qualitative: credibility / trustworthiness (do findings reflect participants’ reality?).
- Sampling techniques & biases differ by paradigm.
Experimentation in Detail
Confounding Variables
- Extra variables that distort IV → DV relationship (e.g., sleep environment in sleep-memory study). Must be controlled.
Sampling Techniques
- Random sampling: every population member equal chance.
- Stratified sampling: mirror key population strata (age × GPA example).
- Convenience (opportunity): easily available; limits representativeness.
- Self-selected (volunteer): wide reach but motivation bias.
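The difference between simple random and stratified sampling can be sketched in a few lines of Python; the population, age bands, and function names here are hypothetical illustrations, not IB-prescribed procedures.

```python
import random

def random_sample(population, n, seed=0):
    """Simple random sample: every member has an equal chance."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, key, per_stratum, seed=0):
    """Draw the same number from each stratum (e.g., age band),
    so the sample mirrors the population's key strata."""
    rng = random.Random(seed)
    strata = {}
    for member in population:
        strata.setdefault(key(member), []).append(member)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, per_stratum))
    return sample

# Hypothetical population of (id, age_band) tuples: 50 per band
population = [(i, "16-17" if i % 2 else "18-19") for i in range(100)]

simple = random_sample(population, 10)
stratified = stratified_sample(population, key=lambda m: m[1], per_stratum=5)
```

Note that the stratified sample guarantees equal representation of each band, whereas the simple random sample may not.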
Experimental Designs
- Independent measures
- Random allocation into distinct groups.
- Matched pairs
- Groups matched on a critical variable (matching variable) before random assignment.
- Repeated measures
- Same participants in all conditions; vulnerable to order effects → controlled by counterbalancing.
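Counterbalancing in a repeated-measures design can be sketched by rotating condition orders across participants so each order occurs equally often, cancelling out order effects on average. The function name and participant count below are illustrative.

```python
from itertools import permutations

def counterbalanced_orders(conditions, participants):
    """Assign each participant a condition order, cycling through
    all possible orders so each appears (roughly) equally often."""
    orders = list(permutations(conditions))
    return [orders[i % len(orders)] for i in range(participants)]

# Two conditions A/B with 6 participants:
# half complete A then B, half complete B then A
schedule = counterbalanced_orders(["A", "B"], 6)
```

With more than two conditions the number of full permutations grows quickly, which is why partial schemes such as Latin squares are often used instead.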
Validity Trio
- Construct validity: adequacy of operationalisations.
- Internal validity: freedom from confounds/bias.
- External validity: population + ecological.
- Typically a trade-off: as internal validity rises (tighter control), ecological validity tends to fall.
Common Threats to Internal Validity (Biases)
- Selection bias – non-equivalent groups.
- Maturation – natural developmental change over time.
- Testing effect – earlier measurement influences later ones.
- Instrumentation – changes in measurement tools.
- History – external events.
- Regression to the mean – extreme scores drift.
- Experimental mortality – differential drop-outs.
- Demand characteristics – participants guess aim.
- Experimenter bias – researchers unintentionally influence outcomes (Rosenthal’s maze-bright vs maze-dull rats).
- Solution: double-blind design.
True, Quasi & Non-Experiments
- True experiment: random group allocation, controlled IV.
- Quasi-experiment: groups formed by natural allocation; some control (e.g., Sharot 9/11 proximity, Maguire taxi-drivers).
- Non-experiment: comparison of pre-existing groups without treatment.
- Continuum: more control → stronger causal inference.
Natural, Laboratory & Field Experiments
- Natural experiment: naturally occurring IV (e.g., government subsidy, Charlton TV on St. Helena).
- Laboratory: high control, low ecological validity.
- Field: real-life setting (e.g., Piliavin subway), higher ecological, lower control.
Research Methods ↔ Key Concepts
- Causality: experiments needed to make cause–effect claims.
- Measurement: method choice + data interpretation = essence of measurement.
- Bias: many biases stem from sampling & procedure.
- Perspective: qualitative vs quantitative can be seen as differing perspectives.
- Change: longitudinal designs capture behavioural change.
- Responsibility: ethics underpin all methods.
IB Psychology Assessment Overview (External & Internal)
- External = Papers 1–3; Internal = IA.
- Paper 1
- Section A: two 4-mark SAQs on the content list (biological, cognitive, sociocultural).
- Section B: two 6-mark SAQs applying content to unseen scenarios.
- Section C: one 15-mark ERQ – concept-based; combines a Key Concept + Content Unit + Context (Learning & Cognition, Development, Health, Relationships).
- Paper 2
- Section A: Four Qs (4+4+6+6 marks) based on your four class practicals; assesses method application, concept link, comparison & study design.
- Section B: 15-mark ERQ – discuss unseen study through ≥2 concepts.
- Paper 3 (HL only)
- Resource booklet with 5 sources.
- Q1: graph interpretation (3 marks).
- Q2: data-to-conclusion analysis (6 marks).
- Q3: qualitative credibility/bias/transferability (6 marks).
- Q4: synthesis ERQ (15 marks) using ≥3 sources + own knowledge, linked to HL extensions (Culture, Motivation, Technology).
- IA: Proposal & report of own investigation; demonstrates methodological competence.
Exam Skills Tips (selected)
- SAQs: brief, focused; ~10 min each.
- Transfer skills vital for Section B – practise applying theories to novel scenarios.
- ERQs: argument-driven, balanced evaluation; allocate ~40 min.
- Paper 2A Q1 sets context for examiner – include aim, method, sample, procedure.
- Paper 3 Q2/Q4: scrutinise wording of claims; adjust for causality vs association.
Mathematical & Statistical References (examples)
- Correlation coefficient notation: r(48) = 0.30, p = 0.034.
- Confidence interval on a bar graph: mean ± 1.96 × SE (95% CI), where SE = SD / √n.
- Science’s four functions expressed sequentially: Describe → Explain → Predict → Control.
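The notation above can be reproduced with a short sketch that computes Pearson's r from paired scores and a 95% confidence interval for a mean (mean ± 1.96 × SE). The sleep/recall numbers are invented for illustration only.

```python
from math import sqrt
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient from paired scores."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) *
               sum((b - my) ** 2 for b in y))
    return num / den

def ci95_of_mean(scores):
    """Approximate 95% CI for a mean: mean +/- 1.96 * SE,
    where SE = SD / sqrt(n)."""
    se = stdev(scores) / sqrt(len(scores))
    m = mean(scores)
    return (m - 1.96 * se, m + 1.96 * se)

# Hypothetical data: hours of sleep vs recall score (not a real study)
sleep = [6, 7, 8, 5, 9]
recall = [10, 12, 14, 9, 15]
r = pearson_r(sleep, recall)
lo, hi = ci95_of_mean(recall)
```

The p-value in a report such as r(48) = 0.30, p = 0.034 additionally requires a significance test (e.g., a t-distribution lookup), which statistical software performs.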
Ethical, Philosophical & TOK Links
- Demarcation of science (empirical evidence, falsifiability, replication).
- Methodological analysis across Areas of Knowledge: procedures to obtain knowledge determine strength of claims (TOK reflection).
- Responsibility concept overlaps with real-world policy decisions (e.g., sensitive stereotype research; drug studies with side effects).
Study & Revision Recommendations
- Regularly revisit definitions & examples of six concepts; practise mapping them onto new studies.
- After each research study read, explicitly label: method, sampling, biases, validity types, ethical issues.
- For operationalisation practice: select abstract constructs (e.g., "wisdom") and design multiple observable measures.
- Use generative AI tactically: formulate precise prompts to test understanding, generate follow-up inquiry questions, or simulate exam scenarios.