Laboratory Experiment
Highly controlled environment; IV is manipulated. +High internal validity, −Low ecological validity.
Field Experiment
IV manipulated in a natural setting. +More realistic, −Less control of extraneous variables.
Natural Experiment
IV occurs naturally, not manipulated. +Useful when unethical to manipulate IV, −No random allocation.
Quasi-Experiment
IV based on existing differences (e.g., gender). +Can study unique groups, −No causal conclusions.
Naturalistic Observation
Behaviour studied in natural environment. +High ecological validity, −Lack of control.
Controlled Observation
Some variables are controlled by researcher. +Easier to replicate, −Lower ecological validity.
Covert Observation
Participants unaware they're being observed. +Less demand characteristics, −Ethical concerns.
Overt Observation
Participants know they're being watched. +Ethically acceptable, −Demand characteristics.
Participant Observation
Researcher becomes part of the group. +Greater insight, −Researcher bias.
Non-Participant Observation
Researcher remains separate. +Objective, −Less insight into behaviour.
Questionnaires
Set of written questions. +Efficient data collection, −Social desirability bias.
Interviews
Structured: fixed questions; Unstructured: flexible. +Rich data (unstructured), −Time-consuming.
Correlations
Analyse relationship between co-variables. +Can identify links, −No cause-and-effect.
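A minimal sketch (assuming Python with NumPy; the variable names and data are hypothetical) of how a correlation coefficient between two co-variables might be calculated:

```python
import numpy as np

# Hypothetical co-variables: hours of revision and exam score for six participants
revision_hours = np.array([2, 5, 1, 8, 4, 6])
exam_scores = np.array([40, 62, 35, 80, 55, 70])

# Pearson correlation coefficient (r ranges from -1 to +1)
r = np.corrcoef(revision_hours, exam_scores)[0, 1]
print(f"r = {r:.2f}")  # a strong positive r suggests a link, not a cause
```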
Difference Between Correlational Studies and Experiments
Experiments manipulate the IV to establish cause and effect; correlational studies only measure co-variables, so they cannot show causation.
Content Analysis
Systematic coding of qualitative data into categories. +Can convert qualitative to quantitative, −Subjective interpretation.
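As a rough illustration (not part of the original notes; the categories and data are invented), coded units from a content analysis could be tallied into frequencies like this:

```python
from collections import Counter

# Hypothetical coding of TV adverts into behavioural categories
coded_units = ["aggression", "helping", "aggression", "neutral",
               "helping", "aggression"]

# Convert qualitative codes into quantitative frequency counts
frequencies = Counter(coded_units)
print(frequencies)  # e.g. Counter({'aggression': 3, 'helping': 2, 'neutral': 1})
```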
Case Studies
In-depth investigation of one person/small group. +Rich detail, −Low generalisability.
Aim
Statement of a study's purpose. Distinct from a hypothesis, which is a testable prediction.
Hypothesis
Directional: predicts specific effect; Non-directional: predicts a difference without stating direction.
Population vs. Sample
Population = larger group of interest; Sample = smaller group studied to represent population.
Random Sampling
Every member of the target population has an equal chance of selection. +Unbiased, −Time-consuming.
Systematic Sampling
Every nth participant. +Objective, −May not be representative.
Stratified Sampling
Proportional representation of subgroups. +Highly representative, −Complex.
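A hedged sketch of how random, systematic, and stratified samples could be drawn in Python; the population list and strata below are hypothetical:

```python
import random

population = [f"P{i}" for i in range(1, 101)]  # hypothetical sampling frame of 100 people

# Random sampling: every member has an equal chance of selection
random_sample = random.sample(population, 10)

# Systematic sampling: every nth member (here n = 10)
systematic_sample = population[::10]

# Stratified sampling: sample each subgroup in proportion to its size
strata = {"under_18": population[:30], "adult": population[30:]}
stratified_sample = []
for name, group in strata.items():
    n = round(len(group) / len(population) * 10)  # proportional share of a sample of 10
    stratified_sample += random.sample(group, n)

print(random_sample, systematic_sample, stratified_sample, sep="\n")
```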
Opportunity Sampling
Selects whoever is available at the time. +Convenient, −Biased sample.
Volunteer Sampling
Participants self-select. +Easy, −Volunteer bias.
Sampling Bias
When the sample is not representative. Affects generalisability.
Pilot Studies
Small-scale practice run to identify issues in procedures, measures, etc.
Experimental Designs
Repeated Measures, Independent Groups, Matched Pairs.
Repeated Measures
Same participants in each condition. +No participant variables, −Order effects.
Independent Groups
Different participants in each condition. +No order effects, −Participant variables.
Matched Pairs
Different but matched participants. +Controls participant variables, −Hard to match accurately.
Observational Design
Use of behavioural categories, event sampling, and time sampling.
Behavioural Categories
Clearly defined behaviours to record during observation.
Event Sampling
Counting each time a behaviour occurs. +Good for infrequent behaviour, −May miss behaviours.
Time Sampling
Recording behaviour at regular intervals. +Reduces overload, −May miss important events.
Open Questions
Allow detailed, qualitative responses. +Rich data, −Harder to analyse.
Closed Questions
Fixed responses (e.g., yes/no). +Easy to quantify, −May lack depth.
Independent Variable (IV)
Variable that is manipulated.
Dependent Variable (DV)
Variable that is measured.
Extraneous Variables
Uncontrolled variables that may affect the DV.
Confounding Variables
Variables that vary systematically with the IV and affect the DV, so the true cause of any change in the DV cannot be determined.
Operationalisation
Making variables measurable (e.g., "aggression = number of punches").
Random Allocation
Ensures each participant has an equal chance of being placed in any condition. +Reduces bias.
Counterbalancing
Controls order effects in a repeated measures design (e.g., ABBA).
Randomisation
Using chance to reduce investigator bias in design (e.g., random order of trials).
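A small sketch (assumptions: Python, hypothetical participant IDs and trial names) showing random allocation to conditions, the ABBA counterbalancing pattern, and randomisation of trial order:

```python
import random

participants = list(range(1, 13))  # 12 hypothetical participant IDs

# Random allocation: shuffle, then split equally between the two conditions
random.shuffle(participants)
condition_a, condition_b = participants[:6], participants[6:]

# Counterbalancing in a repeated measures design: each participant follows
# the ABBA pattern so order effects are spread evenly across conditions
abba_order = ["A", "B", "B", "A"]

# Randomisation: trial order decided by chance to reduce investigator bias
trials = ["word_list_1", "word_list_2", "word_list_3"]
random.shuffle(trials)

print(condition_a, condition_b, abba_order, trials, sep="\n")
```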
Standardisation
Keeping procedures the same for all participants. +Improves reliability.
Demand Characteristics
Participants guess the aim of the study and change their behaviour.
Investigator Effects
Researcher unconsciously influences results.
Ethical Issues
Informed consent, deception, protection from harm, right to withdraw, confidentiality.
BPS Code of Ethics
Provides guidelines for ethical psychological research and how to address issues.
Dealing with Ethics
Debriefing, right to withdraw, anonymity, ethical committees.
Peer Review
Process of assessing research before publication to check quality and credibility.
Psychology and the Economy
Research can inform public policy, education, mental health services (e.g., reducing absence, improving treatments).
Reliability
Consistency of a measure. Includes test-retest and inter-observer reliability.
Test-Retest
Same measure administered to the same people twice; a high correlation between the two sets of scores = reliable.
Inter-Observer Reliability
Two observers' records compared. High agreement = reliable.
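A sketch (Python with NumPy assumed; all scores and codes are invented) of how test-retest and inter-observer reliability might be checked:

```python
import numpy as np

# Test-retest: the same questionnaire given to the same people on two occasions
time_1 = np.array([12, 18, 9, 20, 15])
time_2 = np.array([11, 19, 10, 21, 14])
test_retest_r = np.corrcoef(time_1, time_2)[0, 1]  # a high r suggests reliability

# Inter-observer reliability: two observers code the same behaviour independently
observer_1 = ["hit", "push", "none", "hit", "none"]
observer_2 = ["hit", "push", "hit", "hit", "none"]
agreement = sum(a == b for a, b in zip(observer_1, observer_2)) / len(observer_1)

print(f"test-retest r = {test_retest_r:.2f}, observer agreement = {agreement:.0%}")
```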
Improving Reliability
Standardised procedures, clearer behavioural categories, training observers.
Face Validity
Does the test appear to measure what it's supposed to?
Concurrent Validity
Does the test correlate with a well-established measure?
Ecological Validity
Extent to which findings generalise to real-life settings.
Temporal Validity
Extent findings apply over time.
Improving Validity
Use of standardisation, control of EVs, blind procedures, realistic tasks.
Objectivity
Unbiased research. Improves scientific credibility.
Empirical Method
Gathering evidence through observation and experiment.
Replicability
Ability to repeat a study and obtain the same results.
Falsifiability
Scientific theories must be testable and disprovable.
Theory Construction
Building explanations based on evidence.
Paradigm
A shared framework of assumptions in a scientific field.
Paradigm Shift
When a dominant theory is replaced following contradictory evidence.
Scientific Report Sections
Abstract, Introduction, Method (design, sample, procedure), Results, Discussion, References.