Aim
Sets out what a researcher wants to investigate or measure
Hypothesis
•A testable prediction, often implied by a theory
•measurable statement
difference between aim and hypothesis
the aim identifies what the researcher wants to investigate
whereas the hypothesis is a testable prediction of what the results will be
Types of hypothesis
alternative and null
alternative hypothesis
'there will be a difference' between two conditions
testable statement which predicts the effect of one variable on another
null hypothesis
testable statement which predicts that one variable will not have an effect on another
'there will be no difference' between the conditions
Types of alternative hypothesis
directional and non-directional
directional hypothesis (one-tailed)
States the direction of the difference or relationship
non-directional hypothesis (two-tailed)
Simply predicts that there will be a difference or relationship between two conditions.
results could go either way
independent variable
variable that is manipulated
dependent variable
the variable that is measured in an experiment
Operationalisation definition
•putting something into operation
making it usable
•clearly defining variables in terms of how they can be measured
•make the variables measurable
how do you operationalise an IV
specifying how the IV will be manipulated
how do you operationalise a DV
specify how the DV will be measured
extraneous variable
•In an experiment, a variable other than the IV that might cause unwanted changes in the DV.
•not the IV but can affect the DV
'extra' variable
how do we 'get rid' of the extraneous variables?
control them
how do we control variables?
standardisation or randomisation
standardisation
make sure all conditions are the same for each ppt
e.g. set of instructions given to all ppts
randomisation
leaving the differences in conditions up to chance
remove bias
e.g. random allocation of ppts, random allocation of materials (e.g. words in memory test)
confounding variable
variable whose influence on the dependent variable cannot be separated from the independent variable being examined
•occurs at the same time as the IV is manipulated or ppts are allocated, adding an unwanted third variable to the study
Extraneous variables can be either:
participant variables
or
situational variables
Participant variables
Individual differences in the personal characteristics of research participants that, if not controlled, can confound the results of the experiment.
•anything to do with the people used in the study, other than the IV, which could affect the DV
examples of participant variables:
age, gender, mental illness, IQ score
Situational variables
features of the environment the study is conducted in, other than the IV, which could affect the DV
examples of situational variables
Noise, temperature, smells, and lighting.
Reliability
consistent results
validity
accuracy - the extent to which the test measures precisely what it aims to measure, so the data collected is accurate and reflects a genuine effect rather than something true only within the study
two main types of validity
internal and external
internal validity
Extent to which procedure/instrument is measuring what it intends to measure.
examples of measures assessed for internal validity
IQ tests, personality questionnaires
types of internal validity
Face validity and concurrent validity
What is face validity?
whether the instrument looks like it is measuring what it claims to measure
What is concurrent validity?
How well scores on one measure correlate with those on a related measure
•comparing a new test with an existing one to see whether they produce similar results...if the new test does, it has concurrent validity
what is external validity?
the degree to which the investigator can extend or generalize a study's results to other subjects and situations.
•is it an accurate representation of behaviour in real life?
what are the three types of external validity?
ecological, population, temporal
what is ecological validity
does the experiment relate to the real world?
can we generalise to the real world?
what is temporal validity?
can we generalise it to current time period? is it representative of modern day society?
Population validity
can it be generalised to target population
does it represent the whole population?
Two ways of assessing reliability
Test-retest
Inter-observer reliability
what is test-retest reliability?
the same method is given to the same ppts on two occasions, to see if the same results are obtained
(are the same results obtained if the process is repeated?)
What is inter-observer reliability?
The extent to which there is agreement between two or more observers involved in observations
must have 0.8+ correlation coefficient
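Not from the cards, but a minimal Python sketch of how either reliability check might be assessed, assuming each observer's (or each test occasion's) scores are simply lists of numbers; the ratings below are made-up illustrative values.

```python
# Minimal sketch: checking reliability with a Pearson correlation coefficient.
# The observer ratings are hypothetical illustrative numbers, not real data.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

observer_a = [4, 7, 6, 9, 5, 8]   # observer A's ratings (hypothetical)
observer_b = [5, 7, 6, 8, 5, 9]   # observer B's ratings of the same ppts

r = pearson_r(observer_a, observer_b)
print(f"correlation = {r:.2f}, reliable = {r >= 0.8}")  # 0.8+ counts as reliable
```

The same function works for test-retest reliability by passing in the scores from the two test occasions instead of two observers.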
Types of experiments
Laboratory, field, natural, quasi
Lab experiment
An experiment that takes place in a controlled environment where the researcher manipulates the IV and records the effect on the DV while maintaining strict control of extraneous variables.
•high control of variables
•manipulation of IV
•random allocation of ppts
strengths of lab experiment
•high control - manipulation of IV, standardisation, remove extraneous variables - increases internal validity
•Replicable - repeated easily due to high control - increases reliability - consistency
Weakness of a lab experiment
•low ecological validity (external validity) - doesn't reflect real life - too artificial
•artificial, highly controlled setting - more susceptible to demand characteristics - cues in the environment lead ppts to guess the aim - socially desirable behaviour to please the experimenter
field experiment
•an experiment conducted in the participants' natural environment
•but IV is manipulated
Strength of field experiment
•High ecological validity - reflects real life - low control - ppts' own setting
•often without ppt awareness - act more naturally - less social desirability - increases internal validity
Weakness of field experiment
•not easily repeated - low control - not much standardisation - decreases reliability - can't find the same results again
•low internal validity - low control - no standardisation - decreases internal validity
natural experiment
no manipulation of IV
natural environment
naturally occurring IV
strength of natural experiment
•high ecological validity - reflects real life - low control
•less social desirability - ppt no awareness that they're being studied - increase internal validity
•can be used, in situations which would be unethical to manipulate the IV
weakness of natural experiment
•not easily repeated - low control - decreases reliability
•low internal validity - low control - no standardisation - decreases internal validity
•more time- consuming + expensive than lab
quasi-experiment
high in control/all variables controlled
IV = a pre-existing/naturally occurring difference amongst ppts (e.g. age or gender)
so ppts can't be randomly assigned to conditions
Strength of quasi experiment
•high control - standardisation - increases internal validity
•replicable - repeated easily due to high control - increases reliability
Weakness of Quasi experiment
•low ecological validity - doesn't reflect real life - too artificial
•high control - more demand characteristics - socially desirable behaviour as it's easier to guess the aim
types of experimental design
Independent measures, repeated measures and matched pairs
independent group design
ppts only go through one condition
repeated measures design
ppts go through BOTH conditions
matched pairs design
ppts matched based on similar characteristics (e.g. gender or IQ) then allocated to either condition (one from each pair does condition A, the other does condition B)
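A minimal sketch (not from the cards) of how matched pairs allocation could be done, assuming ppts are matched on IQ; the names and scores are hypothetical.

```python
# Minimal sketch of matched pairs allocation, matching ppts on IQ (hypothetical data).
import random

ppts = [("Alex", 101), ("Sam", 118), ("Jo", 99), ("Kim", 120), ("Ria", 105), ("Max", 107)]

# Sort by the matching characteristic so adjacent ppts have similar IQs.
ppts.sort(key=lambda p: p[1])

condition_a, condition_b = [], []
for first, second in zip(ppts[0::2], ppts[1::2]):
    pair = [first, second]
    random.shuffle(pair)            # randomly decide who does A and who does B
    condition_a.append(pair[0][0])
    condition_b.append(pair[1][0])

print("Condition A:", condition_a)
print("Condition B:", condition_b)
```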
strengths of independent group designs
•no order effects - only going through one condition - ppts won't become bored, tired or practiced
•reduced demand characteristics - harder to guess the aim as ppts only go through one condition
weakness of independent group design
•no control over ppt variables - individual differences
•require LOTS/MORE ppts
Strength of repeated measures
•ppt variables controlled - ppts take part in both conditions...experimenter comparing like with like (removes ppt variables) increasing internal validity
•fewer ppts needed
(removes ppt variables and extraneous variables...high internal validity)
Weakness of repeated measures
•increased order effects - ppts complete both conditions - can become bored/tired/get better with practice (practice effect)
how to deal with the weakness of independent group design
random allocation of ppts - reduces experimental bias + theoretically distributes variables evenly
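A minimal sketch of random allocation, assuming 20 ppts identified only by hypothetical ID numbers.

```python
# Minimal sketch of random allocation to two groups (hypothetical ppt IDs).
import random

ppts = list(range(1, 21))            # ppt ID numbers 1-20
random.shuffle(ppts)                 # order left to chance - removes allocation bias
group_a, group_b = ppts[:10], ppts[10:]
print("Group A:", sorted(group_a))
print("Group B:", sorted(group_b))
```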
how to deal with the weakness of repeated measure designs
counterbalancing - ensures each condition is tested first and second in equal amounts - does not remove order effects but attempts to balance them out between the two conditions
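A minimal sketch of one common counterbalancing scheme (AB/BA), assuming an even number of hypothetical ppts.

```python
# Minimal sketch of counterbalancing in a repeated measures design:
# half the ppts do condition A then B, the other half do B then A.
ppts = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

orders = {}
for i, ppt in enumerate(ppts):
    orders[ppt] = ["A", "B"] if i % 2 == 0 else ["B", "A"]

for ppt, order in orders.items():
    print(ppt, "->", " then ".join(order))

# Each condition comes first for exactly half the ppts, so order effects
# (practice, fatigue, boredom) are balanced between conditions, not removed.
```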
strengths of matched pair design
only take part in one condition - order effects = removed
demand characteristics less of an issue
lower individual differences and the effect they have
weakness of matched pair design
ppts can never be matched exactly! - ppt variables are only reduced! not removed!
time-consuming - ppts need to complete a task before the experiment in order to be matched
Sample
a group of people who take part in the study
sampling
process of gaining the sample
target population
the larger group of people that the experimenter is aiming the study at - sample taken from this group
representative
whether the sample reflects the target population
generalisation
The extent to which findings and conclusions from a particular experiment can be broadly applied to the population. This is possible if the sample of people is representative of the population.
what are the 5 types of sampling?
random, opportunity, volunteer, stratified, systematic
random sampling
every member of the target population has an equal chance of being selected
method of random sampling
each member of the target population is given a number....all the numbers are put into a random number generator - numbers are selected one at a time until the desired sample size is reached - ignore any repeats
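A minimal sketch of that procedure, assuming a hypothetical target population of 100 people identified by number.

```python
# Minimal sketch of random sampling: every member has an equal chance of selection.
import random

target_population = list(range(1, 101))   # each member given a number 1-100
sample_size = 10

# random.sample draws without replacement, so repeats are ignored automatically.
sample = random.sample(target_population, sample_size)
print(sorted(sample))
```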
Strengths of random sampling
highly representative - every member = equal chance of being selected - removes experimenter bias - higher population validity
Weakness of random sampling
Cannot guarantee the sample is totally representative of the target population - by chance it could still be biased
time consuming
systematic sampling
picking every nth person from a list of names or numbers (e.g. every 5th name)
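A minimal sketch of systematic sampling, assuming a hypothetical list of 40 names and an interval of 5.

```python
# Minimal sketch of systematic sampling: take every nth name from a list.
names = [f"Ppt{i}" for i in range(1, 41)]   # hypothetical list of 40 names

n = 5                                       # sampling interval (every 5th name)
start = 2                                   # starting point (often picked at random)
sample = names[start::n]                    # every nth name from the start point
print(sample)
```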
strength of systematic sampling
removes experimenter bias - higher population validity - more representative
weakness of systematic sampling
by chance - sample can still be biased - therefore less representative - lower population validity
volunteer sample
consists of people who are willing to volunteer for a study - ppts choose to take part
strength of volunteer sample
Easy to collect - quick
Participants are more engaged since they want to be there.
weakness of volunteer sample
Volunteer bias is a problem: asking for volunteers may attract a specific type of person who is keen to please the researcher. This may affect how findings can be generalised.
opportunity sample
A sampling technique where participants are chosen because they are easily available + willing to take part
strength of opportunity sample
quick and easy - data will be collected at a faster rate
weakness of opportunity sample
- experimenter bias - researcher could ask ppts who they think would prove the hypothesis - reduces population validity + makes sample less representative
- ppts will be collected from the same place, which means they are likely to have similarities - not reflective of the whole target population
order effects
An extraneous variable arising from the order in which ppts take part in the conditions, so their performance improves/worsens across conditions through practice/fatigue.
way to reduce order effects
counterbalancing - balances out the effects of fatigue/boredom/practice between conditions
participant variables
Individual differences in the personal characteristics of research participants that, if not controlled, can confound the results of the experiment.
way to reduce participant variables
matched pairs
situational variables
features in the environment that, if not controlled, may alter the results of the experiment
way to reduce situational variables
lab experiments - have minimal situational variables
investigator effects
Any effect of the investigator's behaviour (conscious or unconscious) on the research outcome (the DV).
This may include everything from the design of the study to the selection of, and interaction with, participants during the research process.
way to reduce investigator effects
standardised instructions
double-blind technique
pilot study
-> small-scale trial run of the study
-> before the real thing
to check the investigation runs smoothly - solve methodological problems/gain feedback (e.g. ppts guessing the aim, not understanding the task) - remove extraneous variables
what are the 6 ethical guidelines according to the BPS (British Psychological Society)?
Deception
Informed Consent
Protection of ppts
Confidentiality
Right to withdraw
Debrief
protection of ppts
ppts cannot be subject to any:
psychological harm or physical harm
solution of protection of ppts
debriefing
Confidentiality
keeping data safe/protected:
ppts results must remain anonymous
and must be kept secret/locked away at all times
solution of Confidentiality
encrypted data/password protected data
no names + use of numbers or initials instead