What are unstructured observations?
the observer notes down the nature of key behaviours + when they occur
likely to produce qualitative data.
Define aim.
A general statement about the purpose of an investigation.
How is an aim usually phrased?
To investigate the relationship between...
To find out whether...
Define hypothesis.
A specific, testable statement or prediction regarding the outcome of an investigation.
Define operationalisation.
Defining a variable in a way that makes it measurable.
What are the two main types of hypotheses?
experimental
null
What are the two types of experimental hypothesis?
non-directional (2 tailed)
directional (1 tailed)
What does an experimental hypothesis say?
There will be an effect or difference or relationship.
What's the difference between a non-directional and directional experimental hypothesis?
non-directional is more general, it doesn’t state the direction of the findings
directional is more specific, it states the direction of the findings
Define dependent variable.
Affected by the independent variable + measured by the researcher.
What do standardised instructions control?
Less likely to get investigator effects/ bias + demand characteristics.
How can reliability in experiments be improved?
Use lab experiments to increase control over variables so replication is easier.
Define internal validity.
The extent to which the test measures what it intends.
What are the five types of sampling?
random
opportunity
volunteer
stratified
systematic
Name the four experimental methods.
laboratory
field
natural
quasi
What is a lab experiment?
in a lab with highly controlled conditions
independent variable is manipulated directly by the researcher
What are the advantages of natural experiments?
provide opportunities for research that may not otherwise be done for practical or ethical reasons (e.g. Romanian orphans)
high ecological validity because they involve the study of real life issues + problems as they happen
What are the disadvantages of quasi experiments?
participants can't be randomly allocated, so there may be participant variables
demand characteristics, which lower internal validity
What are the disadvantages of independent groups design?
participant variables are likely
more participants are needed
What is a participant observational method?
When the researcher joins the group they are observing + takes part in their activities.
What are the strengths of a controlled observation?
High control over variables makes replication easier, so reliability can be checked. This increases the internal validity + makes the research more scientific.
What are the weaknesses of event sampling?
If the specified event is too complex, the observer may overlook important details.
What are the weaknesses of questionnaires?
responses may not be truthful (social desirability bias)
often produce response bias (respondents respond in similar ways)
acquiescence bias- people tend to agree regardless of question
What are the strengths of structured interviews?
Easy to replicate due to standardised format.
What are case studies?
in-depth investigation, description + analysis of an individual, small group, institution or event
involve a range of methods to gather data
researchers build up a case history
highly idiographic
use retrospective + longitudinal techniques.
What does retrospective mean?
A type of study which collects data about something which happened in the past.
What is a double blind procedure?
When neither the participant nor the researcher knows which condition is which, or the aim of the research.
What are standardised instructions?
When the same instructions are given to each participant.
Define validity.
The extent to which the test measures what it intends to measure.
What are two ways of assessing validity?
face validity
concurrent validity
What are naturalistic observations?
When the behaviour is observed in the natural setting. The psychologist doesn't influence the behaviour of those being observed.
How is random sampling carried out?
A random method selects the sample from a list of the target population.
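Purely as an illustrative sketch (the population names and sample size are hypothetical), selecting from a list of the target population so that every member has an equal chance could look like this in Python:

```python
import random

def random_sample(target_population, n, seed=None):
    """Select n participants; every member has an equal chance of selection."""
    rng = random.Random(seed)          # seed only to make the example repeatable
    return rng.sample(target_population, n)

# Hypothetical target population of 100 people
population = [f"P{i}" for i in range(1, 101)]
sample = random_sample(population, 10, seed=42)
```

The seed is only there so the example is repeatable; in real research the selection would not be fixed in advance.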
Define volunteer sampling.
Participants select themselves to be part of the sample (self-selected sampling).
What is independent groups design?
Participants are randomly allocated to one condition only.
What is a disclosed (overt) observational method?
When the participant being observed knows they are being observed for the purpose of research.
What are unstructured interviews?
More open in nature: the interviewer asks questions in response to the interviewee's previous answers. The researcher is mindful of steering the interview towards topics they need data on.
What does a null hypothesis say?
There will be no effect/ difference/ relationship.
When are you more likely to choose a directional hypothesis?
When there is already previous research.
Define independent variable.
Variable the researcher may manipulate which affects the dependent variable.
Why is it important to control variables?
So that we can be sure that it is the independent variable that has affected the dependent variable.
Define confounding variable.
may affect the dependent variable + vary systematically with the independent variable
due to systematic errors
What are systematic errors?
error affects all participants in the same way, therefore affects the dependent variable in a consistent way so is more serious
lead to confounding variables
Define extraneous variable.
May affect the dependent variable
What are non-systematic (random) errors?
Error does not affect all participants in the same way.
What type of variables are demand characteristics and investigator effects?
Extraneous variables.
What are demand characteristics?
clues in an investigation which may convey information about the purpose of the research to the participants
may lead to the participants working out the hypothesis and changing their behaviour
How are demand characteristics controlled?
single blind procedures
deception
What are investigator effects?
Aspects of the investigator + their presence that can influence the participants or the responses they give (e.g. age or gender).
What is experimenter bias?
When the experimenter only sees the results they expect to find.
How are investigator effects limited?
double blind procedures
standardised instructions
What does random allocation to conditions control?
Any participant variables are divided equally across conditions, which reduces investigator bias.
What does counterbalancing control?
Order effects are balanced across conditions.
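As a minimal sketch (participant names are hypothetical), counterbalancing with two conditions alternates the order so half the participants do A then B and half do B then A:

```python
def counterbalance(participants):
    """Alternate condition order so order effects cancel across the group."""
    return {p: ("A", "B") if i % 2 == 0 else ("B", "A")
            for i, p in enumerate(participants)}

orders = counterbalance(["P1", "P2", "P3", "P4"])
```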
Define standardised procedures.
When the procedure is carried out in the same way each time.
What do standardised procedures control?
Less likely to get investigator effects/ bias + demand characteristics.
What is a single blind technique?
When participants don't know which condition they're in, or the aim of the research.
What does a single blind technique control?
Less likely to get demand characteristics.
Define reliability.
The extent to which a method of measurement or test produces consistent findings.
What are two ways of assessing reliability?
test-retest method
calculate inter-observer reliability
Describe the test-retest method.
involves administering the same test on different occasions
if the test is reliable, results should be the same (correlation of 0.8 or higher)
there must be enough time between tests that participants can't remember their answers, but not so long that their opinions have changed
Describe inter-observer reliability.
extent to which there is agreement between two or more observers involved in observations
pilot study should be done to check behavioural categories are being used in the same way
2 separate observers watch the same event but record data independently
scores are correlated
How can reliability in questionnaires be improved?
Use closed questions. They're less ambiguous because everyone will respond in the same way without misinterpretation.
How can reliability in interviews be improved?
use more structured interviews with fixed questions so the data is less ambiguous
record the conversation so information isn't missed.
How can reliability in observations be improved?
operationalised categories
video recording so info isn't missed
Define generalisability.
The extent to which findings can be applied to the population.
What are the two types of validity?
internal
external
Define external validity.
The extent to which the findings of a study can be generalised to other situations.
What are the three types of external validity?
temporal
ecological
population
Define temporal validity.
Whether findings hold true over time.
Define ecological validity.
Extent to which findings can be generalised to other settings + situations.
Define population validity.
Whether the participants in the study accurately represent the target population.
What are the six observational methods?
naturalistic
controlled
participant
non-participant
overt
covert
What are controlled observations?
psychologist attempts to control some variables
mostly done in a lab
What are participant observations?
When the researcher joins the group they're observing.
What are non-participant observations?
When the psychologist observes the group from the outside.
What are overt observations?
The participants know they're being observed for the purpose of research.
What are covert observations?
The participants don't know they're being observed.
What are the strengths of naturalistic observations?
High external validity as behaviour is studied in real-life situations. Findings can be generalised.
What are the limitations of naturalistic observations?
lack of control makes replication difficult, so less scientific as you can't check for reliability
may be uncontrolled variables which reduce internal validity
Define face validity.
Whether a measure appears to measure what it aims to measure.
Define concurrent validity.
If results obtained are close to those obtained on other recognised + well established tests.
Define target population.
The group of people the researcher is interested in.
Define random sampling.
Where all the members of the target population have an equal chance of being selected.
What are the strengths of random sampling?
No researcher bias.
What are the limitations of random sampling?
difficult + time consuming to conduct
may still end up with an unrepresentative sample
Define opportunity sampling.
Where a researcher decides to select anyone who happens to be available at the time + location of the study.
How is opportunity sampling carried out?
The researcher asks whoever is around at the time + place of their study if they'd like to participate.
What are the strengths of opportunity sampling?
convenient
saves time + effort
What are the limitations of opportunity sampling?
sample isn’t representative of target population
researcher bias
How is volunteer sampling carried out?
Researcher advertises research + participants respond.
What are the strengths of volunteer sampling?
easy + less time consuming
used for socially sensitive research topics (e.g. mental health) where it would be inappropriate to approach people + impossible to gain a list of the target population (doctor confidentiality)
What are the limitations of volunteer sampling?
Volunteer bias, so sample may not fully represent target population, which lowers population validity.
Define stratified sampling.
The composition of the sample reflects the proportions of people in certain subgroups/ strata within the target population.
How is stratified sampling carried out?
Researcher identifies the strata of the population + works out the proportions needed for the sample to be representative. Participants are then selected at random from each stratum.
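As a hedged sketch (strata names and sizes are hypothetical), the procedure above — work out each stratum's share, then select at random within it — could be expressed as:

```python
import random

def stratified_sample(strata, total_n, seed=None):
    """strata maps each stratum name to its list of members.
    Each stratum contributes in proportion to its share of the population;
    rounding may make the final sample differ slightly from total_n."""
    rng = random.Random(seed)
    pop_size = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        k = round(total_n * len(members) / pop_size)  # proportional allocation
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical target population: 60 students, 40 teachers; sample of 10
strata = {"students": [f"S{i}" for i in range(60)],
          "teachers": [f"T{i}" for i in range(40)]}
sample = stratified_sample(strata, 10, seed=1)
```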
What are the strengths of stratified sampling?
no researcher bias
high population validity, sample is more representative so can be generalised + more valid conclusions are made
What are the limitations of stratified sampling?
difficult to reflect the exact proportions of the target population
time consuming
Define systematic sampling.
When every nth member of the target population is selected.
How is systematic sampling carried out?
A list of the target population in a specific order is made. A sampling interval, n, is nominated (possibly at random), and every nth person is selected.
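As a minimal illustration (the list of 20 people is hypothetical), "every nth member" is just a stride over the ordered list:

```python
def systematic_sample(ordered_list, n, start=0):
    """Select every nth member of an ordered list, beginning at index start."""
    return ordered_list[start::n]

# Hypothetical ordered list of 20 people; take every 5th person
population = [f"P{i}" for i in range(1, 21)]
sample = systematic_sample(population, 5)  # -> P1, P6, P11, P16
```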
What are the strengths of systematic sampling?
No researcher bias, which improves validity.
What are the limitations of systematic sampling?
Sample may not be as representative as random sampling, as not all members of the target population have an equal chance of being selected. This reduces the population validity and generalisability of the findings.
What is a field experiment?
in a more naturalistic, real-world situation
independent variable is manipulated directly by the researcher