What is a type I error?
False positive (we assume our findings show something when they don't)
p = 0.10
What could making a type I error result in?
- Rejecting the null hypothesis when it is true
- Accepting the alternative hypothesis when it is false
- Assuming the results are due to IV when they were due to chance
How do we make a type I error?
When the p value is too high e.g. p = 0.1 (allows for chance to impact too much)
What is a type II error?
False negative (we assume our findings don't show anything when they do)
p = 0.01
What could making a type II error result in?
- Accepting a null hypothesis that is false
- Wrongly rejecting an alternative hypothesis that is true
- Assuming the results were due to chance when they were due to the IV
How do we make a type II error?
When we set the p value too low e.g. p = 0.01
Why might we use p = 0.01?
It can be necessary for testing new medical treatments such as drugs. This is because you have to be very confident when it comes to drug trials that the drug does/doesn't work
What does p = 0.05 mean?
There is only a 5% probability that the results are due to chance, so we can be 95% confident that the IV affected the DV
Why do we use p = 0.05 in psychology?
To reduce the risk of both type I and II errors (0.05 sits between the lenient and strict cut-offs that make each error more likely)
Why do we not want to make type I or II errors?
We want to ensure we correctly accept or reject our hypotheses
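The link between the p-value cut-off and type I errors can be illustrated with a short simulation (a hypothetical sketch, not from the cards): when the null hypothesis is actually true, p-values fall uniformly between 0 and 1, so the long-run proportion of "significant" results - each one a false positive - matches whatever cut-off we chose.

```python
import random

def type_i_error_rate(alpha, trials=100_000, seed=42):
    """Simulate repeated tests where the null hypothesis is TRUE."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        # Under a true null hypothesis, p-values are uniform on [0, 1]
        p = rng.random()
        if p < alpha:       # we "reject the null" -> a type I error
            errors += 1
    return errors / trials

rate_10 = type_i_error_rate(0.10)  # lenient cut-off: ~10% false positives
rate_01 = type_i_error_rate(0.01)  # strict cut-off: ~1% false positives
```

This shows the trade-off on the cards: a lenient cut-off like p = 0.10 lets more chance results through (type I), while a strict one like p = 0.01 makes it harder for real effects to reach significance (type II).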
Who came up with thematic analysis?
Braun and Clarke (2006)
What kind of data does thematic analysis look at?
Qualitative
Summarise what thematic analysis is.
Breaking qualitative data down into smaller chunks. Not interested in number of behaviours, interested in patterns of behaviours
Name the 5 stages in thematic analysis.
1. Familiarisation with the data
2. Generating initial codes
3. Searching for themes
4. Reviewing themes
5. Defining/naming themes
Describe how to carry out thematic analysis.
1. Perform stages 1,2 and 3 all at once
2. Make conclusions from the data connections you have made in these stages and design a new study to test this theory more specifically (this is stage 4)
3. The researcher would then finalise their themes (if the results supported this) and publish (this is stage 5)
Describe familiarising oneself with the data.
Reading/looking over all the qualitative data and looking for behaviour patterns of interest such as body language or eye contact (get familiar with what it shows)
(At this stage you are not creating codes, you are just looking)
Describe generating initial codes.
Looking at all/any overarching themes/categories as a result of the behaviour patterns noticed in stage one e.g. when alighting the bus in the learning practical people say thank you and raise their hand too/other forms of non-verbal thank yous
Describe searching for themes.
Making the initial codes specific (fixed categories) - selecting the initial codes that have the most significant relationship. E.g. in the example above, we would be looking at verbal thank yous and non verbal thank yous (this category was developed from the behaviours we put into general initial categories/codes)
Describe reviewing themes.
Performing the study/a slightly different study again and seeing if qualitative data collected fits within the themes laid out
Describe defining themes.
The researcher now clearly defines the themes to publish
Why do we use thematic analysis?
Looks at more complex behaviour. E.g. by counting pieces of graffiti in a city we don't know why people have done the graffiti - a thematic analysis could help us make conclusions about that purpose (e.g. by looking at the writing (whether +ve or -ve) and the images next to it (displaying political figures etc.))
Assess thematic analysis in terms of validity.
+ Data more ecologically valid as it is looking at behaviour in context of other factors (this is much closer to real life as humans do not usually perform an action in isolation)
- Poor internal validity as studies don't have a determined aim, they just look generally for patterns
Assess thematic analysis in terms of subjectivity.
Could be subjective as the experimenter's aims/biases could be projected onto the themes/patterns noticed
Assess thematic analysis in terms of amount of information.
+ Familiarising oneself with the data and making initial codes considers all the information gathered in a study, and does not just focus on one aspect
- This type of analysis takes a significant period of time (e.g. analysing ALL the data, and re-performing a study to further confirm conclusions)
Assess thematic analysis in terms of holism.
+ Similar to validity, looks at behaviour in context rather than in isolation (gives a better representation)
- Could be less holistic as some patterns could overshadow others, meaning some important conclusions about behaviour are missed
- Sometimes quantitative data is also very valuable (could we gain even more by doing a quantitative analysis also)
What are the different types of sampling?
- Random
- Stratified
- Volunteer/self selecting
- Opportunity
Describe random sampling.
When pp are drawn randomly and everyone has an equal chance of being selected. This can be done through methods like random number generators etc
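Random sampling can be sketched in a few lines (a hypothetical illustration; the population list is invented): the key property is that every member of the target population has an equal chance of selection.

```python
import random

# Invented target population of 100 numbered participants
population = [f"participant_{i}" for i in range(1, 101)]

rng = random.Random(7)                 # seeded so the draw is repeatable
sample = rng.sample(population, k=10)  # 10 distinct members, each equally likely
```

`random.sample` draws without replacement, so no participant can be selected twice.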
Give a strength of random sampling.
Ensures there will be a variety of participants
Give a weakness of random sampling.
Can take a longer amount of time to define and list the target population and then select the sample
Describe stratified sampling.
When the sample statistically represents the target population. E.g. if the target population is 60% male and 40% female the sample should reflect this.
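The 60%/40% example above amounts to simple arithmetic: each stratum gets its share of the sample size. A hypothetical sketch (the proportions and sample size of 50 are assumed for illustration):

```python
def stratum_sizes(proportions, sample_size):
    """Round each stratum's share of the sample to the nearest whole person."""
    return {group: round(p * sample_size) for group, p in proportions.items()}

# 60% male / 40% female target population, sample of 50
sizes = stratum_sizes({"male": 0.60, "female": 0.40}, sample_size=50)
# sizes -> {"male": 30, "female": 20}
```

Within each stratum, participants would then typically be selected at random.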
What is a strength of stratified sampling?
- It allows the sample to be more representative of the target population
- Potential extraneous variables can be controlled for by selecting more specific sub groups
What is a weakness of stratified sampling?
- It is very time consuming as you have to carefully select and count the number of pp.
- Also works badly for certain factors (not every characteristic splits into clear sub-groups)
- There is also a possibility that people may be grouped inaccurately
Describe volunteer/self selecting sampling.
Waiting for volunteers to sign up. Here the pp select themselves through advertisements or surveys
What is a strength of volunteer sampling?
- PP are unlikely to drop out of the study as they volunteered. This makes the study less likely to be interrupted
What is a weakness of volunteer sampling?
- Volunteers may be biased
- They may also be more likely to guess or want to guess the purpose of the study
Describe opportunity sampling.
Take samples from members of the population you have access to until you have a sample of the desired size.
Give strengths of opportunity sampling.
It is very fast and convenient as it requires little preparation and no list of the target population
Give a weakness of opportunity sampling.
Sample of pp are less likely to accurately represent target population
What is a target population?
The group the participants are drawn from
What is the sample?
The subset of the population who actually participate in the study
What is a research method?
Different ways to collect information during a study or experiment.
Give examples of research methods.
- Lab experiments
- Case studies of BDPs
- Animal studies
- Scanning studies
- Self report methods (questionnaires and interviews)
What research methods do the topics use?
- Cognitive- lab, BDP case studies
- Social - self report
- Biological - scanning
- Learning - animal studies, observations
Disadvantage of self report methods?
- Not everyone completes questionnaires
- Questions can be hard to answer (poorly set up)
- People can choose the midpoint if one is offered - gives no results
- People lie
Advantage of self report methods?
- Ask lots of people
- Fast and easy
- Ethical
Why do people lie in self report studies?
- Social desirability - pp give answers to make themselves look good
- Demand characteristics - pp give answers they think fit the researchers aims
- 'Screw you' effect - pp give answers they think the researcher doesn't want (this mostly happens in questionnaires not interviews)
Self report studies often have a pilot study first. What is this and why?
This is a small scale study to test the real questionnaire. The pilot study allows for issues to be found and modified.
What are the different types of questions in questionnaires?
- Open
- Closed
- Likert
- Contingent
Describe open questions.
Give qualitative data but hard to analyse. No fixed answer, encourages pp to talk personally and in more depth
Describe closed questions.
Force pp to have an opinion (yes/no etc). Quantitative data which is easy to analyse but lack detail.
Describe Likert scale questions.
Assesses the strength of an opinion. Gives quantitative data. However people tend to opt for middle values.
Describe contingent questions.
These questions depend on the answer to the previous question (if yes go to... etc)
What is another name for questionnaires?
Inventory
Describe interviews.
Face to face conversations between the researcher and pp. Usually have more open questions
What are the different types of interview?
- Structured
- Semi-structured
- Unstructured
Describe a structured interview.
An oral questionnaire
Describe a semi-structured interview.
Basic outline of questions but researcher can deviate from this
Describe an unstructured interview.
No set questions. Basically a conversation
Advantages of interviews.
- Generate detailed qualitative data
- Verbal and non-verbal communication and thus data
- More difficult to lie
- Semi-structured interviews can lead to serendipitous findings
- More validity as rapport built between people
Disadvantages of interviews.
- Researcher bias
- Demand characteristics and social desirability
How are observations used in psychology?
As a method or technique
Describe observations.
They are a non-experimental research method - no manipulation of variables. This is just a researcher observing behaviour and looking for patterns
Describe issues with observation as a research method.
We cannot ascertain cause and effect, merely note a correlation
Describe observational research.
Where an entire study is an observation (no control of variables)
Are observation studies ethical and why?
They are ethical. This is because the people being observed are in a public place and are 'expecting' to be observed
What types of data can be collected by observation?
Qualitative and quantitative
What are the different types of observation?
- Naturalistic or controlled
- Structured or unstructured
- Participant or non-participant
- Overt or covert
(these can be combined)
What are naturalistic and controlled observations?
- In a naturalistic observation, behaviour is studied in a natural situation where everything has been left as it normally is.
- In a controlled observation, some variables are controlled by the researcher, reducing the naturalness of the behaviour being studied. Participants are likely to know that they are being observed and the study may take place in a lab.
What are participant and non-participant observations?
- Pp observation is where the researcher actively participates in the thing they are observing e.g. if an experimenter is looking at what parents talk about at the park, they may pretend to be a parent.
- A non- participant observation is where the researcher observes the pp without actively participating
In a non-participant observation, it is easier to record data and the observer can remain unbiased.
What is a covert and overt observation?
- A covert observation is where the participants are unaware they are being observed. This means the observer is hidden e.g. the experimenter hides in a van to watch people on the street.
- An overt observation is where the participants are aware they are being observed and the observer is not hidden e.g. the observer is sat in an open space with a clipboard making notes.
What is a structured and unstructured observation?
- In an unstructured observation, the researcher will record all relevant behaviour but has no system. This is because the behaviour that is being studied may be unpredictable. An unstructured observation may be the first step in an investigation, and the results may be used to create a coding system for further structured observations
- A structured observation is when the researcher has a system that is used to record behaviour. They will use a coding system (see below) to tally the number of times a behaviour occurs (quantitative data), and they will also use various sampling procedures (again, see below) to decide what to observe and when.
What is a coding system?
When behaviour is operationalised by being broken into different categories.
What are sampling procedures?
Rules for when and what the observer should record of the behaviour being studied. This includes:
- Event sampling: counting each time a particular behaviour is observed
- Time sampling: recording data at particular intervals. For example, what is an individual doing every 30 seconds?
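The two procedures can be contrasted with a small sketch (hypothetical; the stream of timestamped behaviours is invented for illustration):

```python
# Invented observation record: (timestamp in seconds, coded behaviour)
observations = [
    (3, "smile"), (12, "wave"), (31, "smile"), (45, "frown"),
    (58, "smile"), (64, "wave"), (95, "smile"),
]

def event_sample(obs):
    """Event sampling: tally every occurrence of each coded behaviour."""
    tally = {}
    for _, behaviour in obs:
        tally[behaviour] = tally.get(behaviour, 0) + 1
    return tally

def time_sample(obs, interval=30, duration=90):
    """Time sampling: note the most recent behaviour at fixed intervals."""
    snapshots = []
    for t in range(interval, duration + 1, interval):
        seen = [b for ts, b in obs if ts <= t]
        snapshots.append(seen[-1] if seen else None)
    return snapshots

events = event_sample(observations)  # every instance counted
snaps = time_sample(observations)    # what was happening at 30s, 60s, 90s
```

Event sampling counts everything but can overwhelm the observer; time sampling is more manageable but misses behaviour between intervals.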
What is peer review?
A process in which scientists examine other scientists' research
Who performs peer review?
Researchers in the same field as the experimenter and research article in question
Why do we peer review research?
To prevent:
- Fraudulent
- Poor quality
- Repetitive information
What is an important example of what happens when peer review does not work?
Andrew Wakefield's study on MMR vaccine
Describe the process of peer review.
1. Researcher submits an article to a journal
2. Journal sends it to appropriate experts
3. Experts review originality, methodology, validity, bias etc
4. Feedback from reviewers determines whether the article is accepted, rejected or in need of revision
5. Editors then make the final decision as to whether to accept or reject the research based on reviewer's comments/ recommendations
Where should fraudulent research be stopped in the peer review process?
At the stage where experts are reviewing methodology/validity/bias etc
How much research is rejected/accepted after peer review?
Roughly 90% of submissions are usually rejected (around 10% accepted)
Why is peer review so important?
- Research will be published, read and used by other people. It could act as a basis for theories or even real life processes so it is very important that it is accurate
- Also useful to identify red flags such as bad researchers (protect integrity and also actually protecting people/animals)
Why can having published research be useful for researchers?
They can become known names and get more funding etc
What are some strengths of peer review?
- Can prevent fraudulent research being published and having negative effects
What are some issues with peer review?
- It doesn't always work e.g. Andrew Wakefield (studies can be peer reviewed but things can be missed)
- Once studies are published there is little that can be done to control how they are used (you cannot draw conclusions for people - research can be used to support negative things)
Publishing biases:
- Can be hard to find experts if the field is very niche (what makes a reviewer qualified?)
- A lot of research that finds non-significant results is not published
- Editors/reviewers may be biased (have own research they want to publish, don't like researcher etc)
- Researchers may be paid to research which can bias results/methodology etc (funding provided to editors etc can bias)
- Much research is not published (based on bias of editor not necessarily on factual issues)
- Not being published in a scholarly article does not mean that the research is not very important
Objectivity
Where the research process should not be influenced by the experimenters emotions, opinions or expectations
Subjectivity
Where the research process is influenced by the researchers emotions, opinions or expectations
Generalisability
Extent to which results can be applied to the population
Validity
Legitimacy of data collected
Ecological validity
Extent to which findings can be generalised beyond the particular study
Reliability
Ensuring that all procedures are the same for each pp so their results are comparable