Experimental method: types of experiment, laboratory and field experiments; natural and quasi-experiments and their strengths and limitations
Laboratory experiment
Conducted under controlled conditions where the researcher manipulates the independent variable
+ highly controlled
+ establish causation
- demand characteristics
- low ecological validity
Field experiment
Conducted in a natural environment where the researcher manipulates the independent variable
+ high ecological validity
+ fewer demand characteristics
- extraneous variables
- hard to replicate
Natural experiment
Conducted when the IV is naturally occurring
+ high ecological validity
+ fewer demand characteristics
- difficult to establish causation
Quasi-experiment
The IV is based on an existing difference between people (e.g. age), so it is not manipulated by the researcher and participants cannot be randomly allocated
+ often carried out under controlled conditions, sharing many strengths of lab/field experiments
- confounding variables, as participants cannot be randomly allocated to conditions
Observational techniques: types of observation – naturalistic and controlled, covert and overt, participant and non-participant observation and their strengths and limitations
Naturalistic
Carried out in a natural environment
+ high ecological validity
+ allows study that would otherwise be unethical
- cannot control extraneous variables
- difficult to replicate
Controlled
Carried out in a lab environment
+ easy to replicate
+ extraneous variables can be controlled
- demand characteristics
- low ecological validity
Covert
Participants do not know that they are being observed
+ fewer demand characteristics
- ethical issues (no informed consent to being observed)
Overt
Participants aware that they are being observed
+ more ethical
- more demand characteristics
Participant
Researcher involves themselves in the observation
+ gain a fuller understanding
- may be difficult to record behaviour
Non-participant
Researcher does not involve themselves in the observation
+ easier to record behaviour more objectively
- may be harder to have a full understanding of the behaviour
Self-report techniques: questionnaires; interviews – structured and unstructured, and their strengths and limitations
Structured questions
+ easy to replicate
- socially desirable answers
Unstructured questions
+ rich qualitative data
- difficult to replicate, not standardised
Correlations: analysis of the relationship between co-variables, the difference between correlations and experiments, strengths and limitations
There are no IVs or DVs, but the two variables being measured are co-variables
Plotted on a scattergram
Types of correlation:
Positive- high scores on one variable go with high scores on the other variable
Negative correlation- high scores on one variable go with low scores on the other variable
No correlation- scores are not connected in any way
Evaluation:
+ allows the relationship between two variables to be examined when a controlled experiment may not be possible due to ethical or practical reasons
+ can be a good starting point for further research
- it is not possible to establish cause and effect
- correlations can be misinterpreted as showing cause and effect when a third variable may be responsible
Case studies: strengths and limitations
Case studies involve the detailed study of a single individual or small group.
They often focus on unusual or rare cases
Case studies are generally longitudinal
Evaluation:
+ detailed qualitative data, avoids reductionism
- many case studies have ethical issues such as lack of anonymity and psychological harm
Aims: stating aims, the difference between aims and hypotheses
An aim is a general statement about the intended purpose of a study
Hypotheses: directional and non-directional
A prediction about what will happen in a study
Precise, testable statements
Types:
Directional
States the direction of the predicted difference or relationship
Used when there is supporting evidence
Non-directional
States that there will be a difference, but not which direction it will be
Used when there is little previous research or the existing findings are contradictory
Null
States that there will not be a difference
Sampling: population vs. sample; techniques – random, systematic, stratified, opportunity, volunteer; implications including bias and generalisation
Population- the whole group of people the researcher is interested in studying (the target population)
Sample- a smaller group drawn from the population to take part in the study, intended to be representative of it
Types of sampling (a sketch of how some of these samples can be drawn follows this list)
Opportunity
Participants selected by convenience
+ convenient in terms of time and cost
- sample is likely to be biased
Volunteer
Participants self-select/volunteer to take part
+ participants motivated to complete the study
- volunteer bias: may attract a particular type of participant
Systematic
A system is used to select participants, e.g. every 3rd person on a list is selected
+ avoids researcher bias in who is selected
- participants may refuse to take part
Random
Participants selected at random
+ less likely to be biased
- participants may refuse to take part
Stratified
Participants selected so that sub-groups (strata) appear in the same proportions as in the target population
+ representative of target populations
- complex and time consuming
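A minimal Python sketch, using a hypothetical population of students and non-students, of how random, systematic and stratified samples could be drawn:

```python
import random

# Hypothetical target population: 40 students and 60 non-students
population = [("student", f"S{i}") for i in range(40)] + \
             [("non-student", f"N{i}") for i in range(60)]
sample_size = 10

# Random sampling: every member has an equal chance of being selected
random_sample = random.sample(population, sample_size)

# Systematic sampling: select every nth person from the list (here every 10th)
interval = len(population) // sample_size
systematic_sample = population[::interval]

# Stratified sampling: sample each stratum in proportion to its share of the population
stratified_sample = []
for stratum in ("student", "non-student"):
    members = [p for p in population if p[0] == stratum]
    k = round(sample_size * len(members) / len(population))
    stratified_sample.extend(random.sample(members, k))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```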
Pilot studies and the aims of piloting
Trial run of the research study on a smaller scale
Pilot studies aim to:
Find out if aspects of the design do or don’t work
Check whether parts of the design make the aims of the research obvious (demand characteristics)
See if timings for the tasks are appropriate
Experimental designs: repeated measures, independent groups, matched pairs
Independent groups:
Different participants used in each condition
+ lack of order effects
+ lack of demand characteristics
- needs more participants
- participant variables can leave the groups unbalanced
Matched pairs:
Each participant matched based on key characteristics
+ reduces unbalanced groups, as participant variables are matched
+ lack of demand characteristics
- needs more participants, and matching on many characteristics is imperfect
- time consuming and difficult
Repeated measures
Same participants used in each condition
+ fewer participants
+ no individual differences between conditions
- more order effects
- more demand characteristics
Observational design: behavioural categories; event sampling; time sampling
Behavioural categories- the target behaviour broken down into specific, observable components that are defined before the observation starts
Event sampling:
The collection of data every time an event happens in an observation
Evaluation:
+ useful when things happen infrequently
- can be hard to see everything
Time sampling:
The collection of data at pre-determined time intervals
Evaluation:
+ reduces number of observations
- could miss important behaviours
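A minimal sketch, assuming a hypothetical stream of observed behaviours, contrasting event sampling (tally every occurrence) with time sampling (record only at set intervals):

```python
from collections import Counter

# Hypothetical second-by-second record of which behavioural category was observed
observed = ["on-task", "on-task", "talking", "on-task", "out-of-seat",
            "talking", "on-task", "on-task", "talking", "on-task"]

# Event sampling: tally every occurrence of each behaviour
event_tally = Counter(observed)

# Time sampling: record the behaviour only at pre-determined intervals (every 3rd observation here)
interval = 3
time_tally = Counter(observed[::interval])

print("Event sampling:", dict(event_tally))
print("Time sampling:", dict(time_tally))
```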
Questionnaire construction: open and closed questions; design of interviews
Questionnaire
Set of standardised questions handed out for participants to complete
+ easily distributed
+ standardised
- socially desirable answers
Interviews:
Verbal questioning of participants usually done face to face
+ able to explain questions to ensure understanding
- socially desirable answers
Open questions
+ produce qualitative data
- the data can be difficult to analyse and summarise
Closed questions
+ easier to analyse
- responses lack depth and detail, so the data may be oversimplified
Variables: manipulation and control – independent, dependent, extraneous, confounding; operationalisation
IV- characteristic that is manipulated in the study that causes the DV to change
DV- variable that is measured that changes throughout the experiment as a result of the IV
Extraneous variable- any variable other than the IV that might affect the results of the DV
Confounding variables- extraneous variables that vary systematically with the IV, so they may have caused the change in the DV
Operationalisation- making sure a variable being studied is clearly defined and in a form that can be easily measured
Control: random allocation, counterbalancing, randomisation, standardisation
Random allocation- each participant has an equal chance of being placed in any condition, minimising the effect of participant variables
Standardisation- ensures that all procedures and instructions are kept the same
Counter-balancing- attempts to balance out order effects by having half the participants complete the conditions in order AB and the other half in order BA (see the sketch below)
Single blind- when participants are unaware of the research aims and do not know which condition they are in
Double blind- when neither the observer nor the participants know the true aim of the study
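A minimal sketch of AB/BA counterbalancing combined with random allocation, using a hypothetical list of participants:

```python
import random

participants = [f"P{i}" for i in range(1, 9)]  # hypothetical repeated-measures sample
random.shuffle(participants)                   # random allocation to an order

# Counterbalancing: half complete condition A then B, the other half B then A
half = len(participants) // 2
orders = {p: ("A", "B") for p in participants[:half]}
orders.update({p: ("B", "A") for p in participants[half:]})

for participant, order in orders.items():
    print(participant, "->", " then ".join(order))
```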
Demand characteristics and investigator effects
Demand characteristics- clues which help a participant guess the true aim of the study
Investigator effects- refers to any unwanted influence of the investigator on the dependent variable
Ethics: British Psychological Society’s code, ethical issues in design/conduct of studies, dealing with ethical issues
Ethics- standards that govern how research should be conducted so that participants are treated fairly and protected from harm
The British Psychological Society (BPS) publishes a code of ethics that researchers are expected to follow so that participants are not harmed
Types of ethical issues:
Protection from harm
Participants protected from physical and psychological harm
Dealt with by attempting to rectify any unexpected harm, e.g. by debriefing participants
Privacy and confidentiality
Personal information should be kept private
Dealt with by only observing people in places where they would expect to be observed, and by keeping data and names confidential or anonymous
Deception
Not telling participants the true aim of the study
Dealt with by debriefing participants afterwards, or by reconsidering how the experiment is carried out
Informed consent
Consent from people who fully understand what is happening
Dealt with by gaining alternative forms of consent (e.g. presumptive or retrospective consent) and by debriefing
Implications of psychological research for the economy
If more effective treatments for mental health issues are developed, more people will be in work
Ineffective treatments may waste time and money
If treatments are effective, implementing these treatments may be costly
Features of a science: objectivity, empirical method, replicability, falsifiability, theory construction, hypothesis testing, paradigms, paradigm shifts
Empirical methods:
These methods gain information through direct observation or experimentation rather than from unfounded beliefs or claims
Important because anyone can make claims; knowledge only counts as scientific when it is supported by direct observation
Objectivity
Data is not affected by the expectations and biases of the researcher
Data is collected under controlled conditions
Falsifiability
Theories should be testable in a way that makes it possible for them to be proven false
Even if a theory has been repeatedly tested, it is not proven true; it has simply not yet been proven false
Theory construction
The construction of a theory occurs through gathering evidence using empirical methods
It is possible to make clear and precise predictions on the basis of a theory
The process of deriving new hypotheses from existing theories is known as deduction
Replicability
If a theory is to be trusted, it must be shown to be repeatable across a range of different contexts and circumstances
Replication is also used to assess the validity of a finding
Paradigms
A paradigm is a shared set of assumptions and methods
It has been suggested that psychology is a pre-science as it does not have a universally accepted paradigm
Paradigm shifts
Happen when an existing paradigm is questioned by a small number of researchers and contradictory evidence builds up until it is too much to ignore
A new paradigm causes a scientific revolution
Reporting psychological investigations: sections of a report – abstract, introduction, methods, results, discussion, referencing
Abstract
A short summary of all the major elements of a report, including the aims and hypotheses, methods/procedure, results and conclusion
It goes at the beginning of a report, although it is usually one of the last things that are written
Introduction
Gives details of literature that is relevant to the study taking place
Starts with broad, general research and becomes progressively more specific and relevant to the study
At the end of the introduction aims and hypotheses are presented
Method
Should be detailed enough to be replicable
Split into several sub-sections
Design- research methods and experimental design
Sample- how many participants, sampling method, target population
Apparatus/materials
Procedure- everything that happened in the investigation from the participants' perspective, from beginning to end
Ethics- how ethical issues are handled
Results
Summarises key findings, including
Descriptive statistics
Inferential statistics
Qualitative data
Raw data does not go here, it goes in the appendix
Discussion
Summary of the results in words, linked back to past research
Limitations of the study
Wider implications of the study
Referencing
Format (mnemonic: 'Should I Do This Like a Pro'):
Surname, Initial. (Date). Title of article/book. Place published: publisher name
The role of peer review in the scientific process
Peer review is the independent assessment of a research paper by experts in the field
Done in order to evaluate the paper's quality and suitability for publication
Quantitative and qualitative data; distinction in collection techniques
Qualitative data:
Data that consists of words/longer answers
+ can provide large amounts of detail
- can be hard to analyse/display
Quantitative data:
Numerical data
+ easier to analyse/display
- can be less useful without a large amount of data being collected
Primary and secondary data, including meta-analysis
Primary data
Data collected by a researcher specifically for the purpose of their study
+ can ensure data is accurate
- requires planning and resources
Secondary data
Data which has already been collected by someone else
+ inexpensive, requiring minimal effort
- can be less accurate/relevant
Meta-analysis- when a variety of studies on a particular topic area are summarised together and their findings collated
Descriptive statistics: central tendency (mean, median, mode), dispersion (range, standard deviation)
Central tendency:
Mean- the average of all of the data
Mode- most common value in a set of data
Median- central value in a set of data
Measures of dispersion:
Standard deviation- how far on average each score is in a set of data from the mean
Range- the difference between the highest and lowest values, showing how spread out the data is
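A minimal sketch, using a hypothetical data set, of how these measures can be calculated with Python's statistics module (stdev gives the sample standard deviation):

```python
import statistics

scores = [4, 7, 7, 8, 10, 12, 15]  # hypothetical data set

mean = statistics.mean(scores)          # average of all the scores
median = statistics.median(scores)      # central value when the scores are ordered
mode = statistics.mode(scores)          # most common value
data_range = max(scores) - min(scores)  # highest score minus lowest score
sd = statistics.stdev(scores)           # average distance of scores from the mean (sample SD)

print(mean, median, mode, data_range, round(sd, 2))
```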
Analysis and interpretation of correlation, including correlation coefficients
Types of correlation:
Positive- high scores on one variable go with high scores on the other variable
Negative correlation- high scores on one variable go with low scores on the other variable
No correlation- scores are not connected in any way
Correlation co-efficient
A number between -1 and +1, telling us the strength and type of correlation
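A minimal sketch, using hypothetical co-variables, showing how such a coefficient can be obtained (statistics.correlation needs Python 3.10 or later):

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical co-variables: hours of revision and test score for six participants
revision_hours = [1, 2, 3, 4, 5, 6]
test_scores = [40, 45, 50, 55, 65, 70]

r = statistics.correlation(revision_hours, test_scores)  # Pearson's r, between -1 and +1
print(round(r, 2))  # close to +1, indicating a strong positive correlation
```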
Levels of measurement: nominal, ordinal, interval
Nominal- data is categorised into groups, but the groups have no order
Examples: eye colour, male/female/other
Ordinal- data is ranked or ordered- we know the order, but not by how much one is more than the other
Examples: finishing position in a race, Likert scale responses
Interval- data is numerical, where equal intervals between values mean equal differences
Examples- IQ scores, temperature
Content analysis and coding
A method used to analyse qualitative data
The researcher must decide how to systematically sample whatever form of media they are analysing
Five types of text used:
Written text
Oral text
Iconic text
Audio-visual text
Hypertexts (texts found on the internet)
The data is then coded by creating categories (by skimming the material and making a list of the main categories)
The categories must be operationalised, comprehensive and mutually exclusive (not overlapping)
Data in each category is usually quantitative (tallies), however it may be qualitative if the researcher describes some examples
Evaluation:
+ inter-rater reliability can be used
- observer bias/subjectivity
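A minimal sketch of the coding stage, with hypothetical categories and texts, showing how tallies turn qualitative material into quantitative data:

```python
# Hypothetical operationalised categories, each defined by the words that count towards it
categories = {
    "aggression": ["fight", "hit", "shout"],
    "affection": ["hug", "kiss", "smile"],
}

# Hypothetical texts sampled for the analysis
texts = [
    "they started to shout and then a fight broke out",
    "she gave him a hug and a smile",
]

# Coding: add a tally for each category word found in each text
tallies = {name: 0 for name in categories}
for text in texts:
    words = text.split()
    for name, keywords in categories.items():
        tallies[name] += sum(words.count(word) for word in keywords)

print(tallies)  # quantitative data (tallies) produced from qualitative material
```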
Thematic analysis
A method of analysing qualitative data by identifying recurring themes; unlike content analysis, the data stays qualitative
The first step is to transcribe the data if necessary
The data is then read over repeatedly
The themes are then identified and re-analysed so they become clear
The researcher can then annotate the transcript with the themes that have been identified
Evaluation:
+ tends to have high ecological validity because it is based on observations of real materials
- process is unscientific and open to researcher bias
Introduction to statistical testing; the sign test – when and how to use it
The sign test is a method used in inferential statistics to determine whether or not an observed result is significant
It is a non-parametric test- there is no assumption that the data will follow a normal distribution
It is known as the sign test as it is based on the number of plus or minus signs present in the data after the calculations have taken place
When to use it:
If the research investigates a difference (an experiment rather than a correlation)
Repeated measures design
Nominal data (data in categories, such as exercised for 30 minutes/did no exercise)
The hypothesis may be directional or non-directional; this determines whether a one-tailed or two-tailed critical value is used
How to use it (a worked sketch follows these steps):
State your hypothesis:
E.g. There will be a difference in stress levels before and after CBT.
Find the differences:
For each pair (before/after), work out if the value increased (+), decreased (−), or stayed the same (0).
Ignore the 0s:
Any cases where there is no change (0) are removed from the test.
Count the signs:
Count how many + signs and how many − signs there are.
Find S (the sign test score):
S = the smaller number of + or − signs.
Find the critical value:
Use a sign test table based on the number of non-zero scores (n) and your significance level (e.g. 0.05 for 5%).
Compare your S to the critical value:
If S is less than or equal to the critical value → the result is significant (reject the null hypothesis).
If S is more than the critical value → the result is not significant (fail to reject the null).
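A minimal sketch following the steps above, with hypothetical before/after ratings; the critical value is assumed here and would normally be looked up in a sign test table for the given n and significance level:

```python
# Hypothetical stress ratings (out of 10) before and after CBT for ten participants
before = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7]
after  = [6, 7, 7, 5, 9, 5, 6, 7, 5, 6]

# Steps 1-3: find the sign of each difference, ignoring pairs with no change
signs = []
for b, a in zip(before, after):
    if a > b:
        signs.append("+")
    elif a < b:
        signs.append("-")
    # a == b: no change, so the pair is dropped from the test

# Steps 4-5: count the signs; S is the smaller of the two counts
plus, minus = signs.count("+"), signs.count("-")
S = min(plus, minus)
n = len(signs)  # number of non-zero differences

# Steps 6-7: compare S with the critical value from a sign test table
critical_value = 1  # assumed here for n = 9 at the 0.05 level (two-tailed); check a table
if S <= critical_value:
    print(f"S = {S}, n = {n}: significant, reject the null hypothesis")
else:
    print(f"S = {S}, n = {n}: not significant, fail to reject the null hypothesis")
```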
Factors affecting choice of a statistical test, including level of measurement and experimental design
3 distinct criteria:
Have they conducted a test of difference (e.g. a lab experiment) or a test of correlation?
If they have conducted a test of difference, did they use an independent measures design, repeated measures design, or a matched pairs design?
an unrelated design refers to independent measures/groups
a related design refers to repeated measures and matched pairs
Have they collected nominal, ordinal or interval data?
| | Test of difference (unrelated design) | Test of difference (related design) | Correlation |
|---|---|---|---|
| Nominal data | Chi-Squared | Sign test | Chi-Squared |
| Ordinal data | Mann Whitney U | Wilcoxon T | Spearman's rho |
| Interval data (parametric tests) | Unrelated t-test | Related t-test | Pearson's r |
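A minimal sketch of the decision logic in the table above, assuming the three criteria have already been answered:

```python
# The decision logic of the table above, keyed by the three criteria
TESTS = {
    ("difference", "unrelated", "nominal"): "Chi-Squared",
    ("difference", "related", "nominal"): "Sign test",
    ("difference", "unrelated", "ordinal"): "Mann Whitney U",
    ("difference", "related", "ordinal"): "Wilcoxon T",
    ("difference", "unrelated", "interval"): "Unrelated t-test",
    ("difference", "related", "interval"): "Related t-test",
    ("correlation", None, "nominal"): "Chi-Squared",
    ("correlation", None, "ordinal"): "Spearman's rho",
    ("correlation", None, "interval"): "Pearson's r",
}

def choose_test(looking_for, design, level):
    """Return the appropriate test; the design criterion is ignored for correlations."""
    if looking_for == "correlation":
        design = None
    return TESTS[(looking_for, design, level)]

print(choose_test("difference", "related", "ordinal"))  # Wilcoxon T
print(choose_test("correlation", None, "interval"))     # Pearson's r
```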