Ethical issues
A situation where the moral rights and welfare of participants might conflict with the goals of scientific research.
British Psychological Society Code of Ethics and Conduct
A set of guidelines that psychologists in the UK must follow to ensure their work is ethical, professional and respectful of the rights and dignity of individuals.
Informed consent
Participants must agree to take part with full knowledge of the study’s purpose and procedures - sometimes researchers can’t give full details (to avoid demand characteristics), meaning consent might not be fully informed.
Deception
Participants should not be deliberately misled unless absolutely necessary - if deception is used, it can cause distress or make participants feel tricked.
The right to withdraw
Participants should know they can leave the study at any time without penalty or pressure - if they don’t feel free to leave, their rights are violated.
Protection from physical and psychological harm
Researchers must protect participants from physical or psychological harm (e.g. stress or embarrassment) - some studies can result in emotional distress (e.g. Milgram), even if unintentionally.
Confidentiality
Participants’ personal information must be kept private and not shared without consent - breaching this can cause damage to trust and harm participants socially or emotionally.
Privacy
Participants should only be studied in environments where they would expect to be observed - observing people without consent in a private setting (like their home) would be unethical.
Debrief
After the study, participants should be fully informed about its true aims and purpose, especially if deception was used - without proper debriefing, participants may leave confused, distressed or misinformed.
Presumptive consent
Asking a group of people similar to the intended participants whether they would agree to take part in the study if they were fully informed - if most say ‘yes’, researchers assume (presume) that the actual participant would also agree.
Ethics committee
A panel of experts (including psychologists, laypeople, sometimes lawyers or medical professionals) who evaluate research proposals to make sure that participants will be protected from harm and that ethical standards are upheld.
Cost-benefit analysis
An ethical decision-making tool where researchers or ethics committees compare the possible ethical costs (e.g. harm, deception) to the scientific or practical benefits (e.g. increased knowledge, improved treatments) of a study.
Reliability
The extent to which a test or measurement produces consistent results - if the same study or test is repeated under the same conditions, it should produce similar results.
Replication
The process of repeating a psychological study using the same methods to check whether the results are consistent and reliable.
Validity
The extent to which results accurately measure what they are supposed to measure (do what you set out to do).
Internal validity
Whether the results of a study are due to the variables being tested, and not other factors (like confounding variables or biases) - whether the research did what it intended to do.
External validity
How well the results of a study can be generalised to other people, settings or time periods beyond the original research situation.
Ecological validity
Can the results be applied to real-life settings? (vs. artificial lab settings)
Temporal validity
Are the results still valid today? (or are they outdated?)
Population validity
Can the results be applied to other populations of people? (e.g. beyond just students?)
Generalisability
The extent to which the findings of a study can be applied to other settings, populations, times and measures.
Independent variable (IV)
The variable that the researcher changes or manipulates to see its effect.
Dependent variable (DV)
The variable that is measured to see if it’s affected by the IV.
Standardisation
Keeping procedures and instructions the same for all participants in a research study so that the investigation is fair.
Control Group
The group in an experiment that does not receive the experimental treatment but is otherwise treated the same (essentially the ‘baseline’) - used to determine whether the IV had any effect on the DV.
Extraneous Variables
Any variable other than the IV that could affect the DV if it is not controlled, e.g. time of day, background noise, task difficulty.
Operationalise
Clearly defining variables in a way that they can be measured or manipulated in a study - making a concept more specific, measurable and observable (allowing easy replication).
Demand characteristics
Clues in a research study that give away to participants what the researcher is investigating - which may lead to a change in behaviour from the participant(s) (consciously or unconsciously).
Natural Experiment
When the researcher takes advantage of a pre-existing independent variable
Can be anywhere
IV occurs naturally
Very low control
Field experiment
Done in a more natural/everyday setting (anywhere outside a laboratory) - participants are often unaware they’re participating
In a real-world setting e.g. a school
IV manipulated by researcher
Less control over extraneous variables (compared to lab)
Quasi experiment
Often resembles a proper laboratory experiment, but the experimenter doesn't directly manipulate the IV. It also resembles a natural experiment, except that quasi experiments are typically planned whereas natural experiments aren't
Can be anywhere
IV isn’t manipulated since it’s based on pre-existing differences (e.g. gender, age, etc.)
Control depends, often more than natural but less than lab
Laboratory experiment
An experiment conducted in a highly controlled environment (not always a lab), where the researcher manipulates the IV and records the DV whilst maintaining strict control of extraneous variables - participants are aware they're taking part but may not know the true aims of the study
In a highly controlled environment
IV is manipulated
High control
Low realism (artificial setting)
Risk of demand characteristics
Situational variables
Features of a research situation that may influence participants' behaviour e.g. order effects, time of day, temperature, noise, instructions given, lighting.
Participant variables
Characteristics of individual participants that may impact how they respond e.g. background differences, mood, anxiety, intelligence, awareness, or other characteristics that are unique to each person.
Investigator effects
Any (unintentional) influence of the researcher's behaviour/characteristics on participants/data/outcome - these cues may be unconscious nonverbal cues, such as muscular tension or gestures, or vocal cues like tone of voice.
Order effects
Occur in a repeated measures design when the order of the conditions has an effect on participants' behaviour, e.g. practice effects (performance in the second condition may be better because participants know what to do), fatigue effects (performance may be worse in the second condition because participants are tired) or boredom effects.
Counterbalancing
A control method used in repeated measures design to deal with order effects (e.g. fatigue, boredom, practice). It helps spread out order effects across conditions so they don't bias the results.
It varies the order in which participants complete the experimental conditions e.g. half of the participants do condition A then B, while the other half do B then A
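A minimal sketch of counterbalancing in Python, assuming a made-up list of participant IDs and two hypothetical conditions A and B (the names and conditions are illustrative, not from any real study):

```python
import random

# Made-up participant IDs for illustration only
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
random.shuffle(participants)

# Half the sample completes condition A then B; the other half B then A,
# so practice/fatigue/boredom effects are spread evenly across conditions.
half = len(participants) // 2
order_AB = participants[:half]   # do condition A first, then B
order_BA = participants[half:]   # do condition B first, then A

print("A then B:", order_AB)
print("B then A:", order_BA)
```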
Matched Pairs Design
Involves using different but similar participants in each condition of an experiment. The researcher matches the participants in each condition on important characteristics that may affect performance e.g. alcohol tolerance, driving ability etc.
Independent Groups Design
Involves using different participants for each condition of the experiment e.g. giving one group of participants a driving test with no alcohol and a different group of participants the same test after a pint of beer.
Repeated Measures Design
Involves using the same participants in each condition of an experiment e.g. giving a group of participants a driving test with no alcohol, then the same test after a pint of beer at a later time.
Random allocation
A method used in experiments to assign participants to different conditions by chance, rather than by choice or pattern. This helps ensure that each participant has an equal chance of being in any condition.
This can be done by writing everyone's name on slips of paper, putting them in a hat and then randomly drawing the names for each group
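The hat procedure can also be sketched in code; this is only an illustration with made-up names and two hypothetical groups:

```python
import random

# Made-up names standing in for the real sample
names = ["Alice", "Ben", "Chloe", "Dan", "Ella", "Finn"]

random.shuffle(names)  # every person has an equal chance of any position
experimental_group = names[:len(names) // 2]  # first half 'drawn from the hat'
control_group = names[len(names) // 2:]       # remaining half

print("Experimental:", experimental_group)
print("Control:", control_group)
```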
Single blind design
When participants don’t know which conditions of the experiment they are in - for example whether they are receiving the real treatment or a placebo (done to reduce demand characteristics).
E.g. when testing whether an energy drink improves reaction time, half the participants get the real energy drink, while the other half get a drink that looks and tastes the same but has no caffeine
Double blind design
When neither the participants nor the researchers (who interact with them or measure the results) know which condition each participant is in (controls for demand characteristics and researcher bias).
E.g. in a medical trial, participants don't know if they've received the real drug or a placebo, and the researcher giving the pills also doesn't know which is which (only a third party does, like a computer or another team member)
Standardised Instructions
When the researcher gives all participants the same directions and information in the same way, so every participant knows exactly what to do, what's expected of them, and experiences the experiment in the same manner. This is used to control extraneous variables, especially investigator effects like tone of voice.
Correlation
A technique that measures whether or not there’s a relationship between two variables (co-variables).
Positive correlation
As one variable increases, the value of the other variable will also increase e.g. the more time spent studying, the higher the exam results.
Negative correlation
As one variable increases, the other variable being measured decreases (and vice-versa) e.g. the more time spent socialising with friends, the less revision gets done.
Zero correlation
The two variables are not related at all e.g. the earnings of a fisherman and the size of a portion of chips.
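A small numerical sketch of the three types of correlation, using made-up co-variable data and Pearson's r from Python's standard statistics module (available from Python 3.10):

```python
from statistics import correlation

# Made-up data illustrating each type of relationship
hours_studied      = [1, 2, 3, 4, 5]
exam_marks         = [40, 50, 55, 65, 70]      # rises as study time rises
hours_social       = [1, 2, 3, 4, 5]
revision_done      = [10, 8, 7, 4, 2]          # falls as socialising rises
fisherman_earnings = [200, 250, 220, 300, 280]
chip_portion       = [30, 30, 90, 30, 90]      # no systematic relationship

print(correlation(hours_studied, exam_marks))        # close to +1 (positive)
print(correlation(hours_social, revision_done))      # close to -1 (negative)
print(correlation(fisherman_earnings, chip_portion)) # around 0 (zero correlation)
```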
Pilot study
A small scale trial run of the actual investigation that researchers carry out before the main study.
To check if instructions are clear to participants
See if questions make sense (aren’t confusing)
Identify and fix any practical problems
Check validity and reliability
Aim
The intended purpose of an investigation i.e. what it is actually trying to discover.
Hypothesis
A testable statement. It makes a general prediction about what will happen in an experiment - usually about how the IV will affect the DV.
Alternative hypothesis
Predicts that the IV will have an effect on the DV (that something will change or there will be a difference). The opposite of the null hypothesis.
Null hypothesis
Predicts that there will be no difference or relationship (any results found are due to chance).
Directional hypothesis
Predicts the direction of the expected effect or difference. It tells you how one variable will affect the other - whether it will increase/decrease or improve/worsen.
Used when there is previous research suggesting what direction the results will go in - if you have a good reason to expect a certain outcome
Non-directional hypothesis
Predicts there will be a difference or relationship, but doesn’t state what direction it will go in - saying ‘something will change, but we’re not sure which way’.
Used when there’s little or no previous research to suggest what the outcome will be, or if the evidence from past studies is mixed
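As a worked illustration, for a hypothetical study comparing exam scores with and without revision (where μ denotes each group's mean score), the hypotheses could be written:

```latex
\begin{align*}
H_0 &: \mu_{\text{revision}} = \mu_{\text{no revision}} && \text{(null: no difference; any difference is due to chance)}\\
H_1 &: \mu_{\text{revision}} \neq \mu_{\text{no revision}} && \text{(non-directional: a difference, direction unspecified)}\\
H_1 &: \mu_{\text{revision}} > \mu_{\text{no revision}} && \text{(directional: revision improves scores)}
\end{align*}
```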