Define correlations.
A method of measuring the relationship/association between two variables.
Define co-variables.
The variables investigated in a correlation.
Three strengths of correlations.
-Quick and economical to carry out.
-Can be used when it is unethical to manipulate the IV.
-If a relationship is found, it can justify further research.
Two limitations of correlations.
-No cause and effect determined.
-Untested variable could be causing the relationship.
Define case-study.
An in-depth investigation, description or analysis of a single individual, group, institution or event, e.g. the case of HM, Phineas Gage, the 2011 London Riots.
Characteristics of case studies.
-Often use qualitative data.
-Often takes place over a long period of time (longitudinal).
-Uses a range of different research methods thus increasing reliability, by the process of triangulation.
Three strengths of case studies.
-Rich in detail.
-Shed light on rare or atypical behaviour.
-Start of future research.
Four limitations of case studies.
-Hard to generalise.
-Content is subjective.
-Personal accounts of participants/ their family and friends are prone to inaccuracies.
-Based on retrospective accounts.
Define content analysis.
A technique for turning qualitative data (e.g. conversations, speeches and texts) into numerical data, so that the data can be analysed and patterns identified.
Process of content analysis.
1. Read and re-read item.
2. Find common themes within the data.
3. Give examples of identified themes.
4. Count and tally each time a theme occurs.
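The counting step above can be sketched in code. This is a minimal Python sketch; the themes, keyword lists and transcript are all hypothetical, not from a real study:

```python
from collections import Counter

# Hypothetical behavioural themes and the keywords that signal them.
themes = {
    "aggression": ["shout", "hit", "argue"],
    "affection": ["hug", "smile", "praise"],
}

# Hypothetical transcript to analyse.
transcript = "They argue, then hit out; later a hug and a smile, then they argue again."

# Split into words, stripping simple punctuation.
words = transcript.lower().replace(",", " ").replace(";", " ").replace(".", " ").split()

# Tally each time a theme's keyword occurs.
tallies = Counter()
for theme, keywords in themes.items():
    for keyword in keywords:
        tallies[theme] += sum(1 for w in words if w.startswith(keyword))

print(dict(tallies))  # {'aggression': 3, 'affection': 2}
```

In a real content analysis the categories would be operationalised in advance and checked between observers, rather than matched on keywords alone.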
Process of thematic analysis.
1. Read and re-read item.
2. Find common themes.
3. Under each theme provide evidence from the text that shows it fits that theme and then state where it came from, e.g. line 14.
Two strengths of content analysis.
-High external validity - the data being analysed can be generalised to everyday life as it is taken from real life sources.
-Flexible - can produce both qualitative and quantitative data.
Two limitations of content analysis.
-Observer bias reduces the objectivity and validity of findings as different observers may interpret the meaning of behavioural categories differently.
-People are studied indirectly and so the communication they produce is analysed outside of the context within which it occurred.
Define reliability.
A measure of consistency. If a particular measurement can be repeated, giving the same results, then that measurement is described as being reliable.
Process of test-retest method.
1. Give the same test or questionnaire to the same person on different occasions.
2. If the test/ questionnaire is reliable then the results should be the same or very similar.
3. The two scores will then be correlated to check if they are similar.
4. If the correlation is +0.8 or above then the test has good reliability.
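Steps 3-4 above can be sketched as follows; the participant scores are hypothetical, and Pearson's r is used here as the correlation measure:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for five participants who sat the same test twice.
test1 = [12, 15, 9, 20, 17]
retest = [13, 14, 10, 19, 18]

r = pearson_r(test1, retest)
# +0.8 or above indicates good test-retest reliability.
print(round(r, 2), "good reliability" if r >= 0.8 else "poor reliability")
```

The same correlation check applies to inter-observer reliability, with the two observers' tallies in place of the two test sittings.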
Process of inter-observer reliability.
1. Observers should conduct their research in teams of at least two.
2. Observers watch the same event as each other but record data on their own.
3. Data collected by observers should be correlated to assess its reliability. (+0.8 = good reliability)
How to improve the reliability of interviews?
-Interviewers must be properly trained and use no leading questions.
-Structured interviews - interviewer's behaviour is more controlled due to fixed questions.
How to improve the reliability of experiments?
-Standardised procedures - procedures the same for all participants makes results reliable.
How to improve the reliability of observations?
-Observers should use operationalised behavioural categories so that each observer does not interpret the categories differently.
Define validity.
Testing whether a study is measuring what it intends to.
How to use face validity to assess validity?
-This can be determined by simply looking through the test or scale or by passing it to an expert to check.
-E.G. give IQ test to expert.
How to use concurrent validity to assess validity?
-Involves comparing a test with another recognised and well-established test that measures the same topic.
-If the results of the new test are very similar (e.g. a correlation of +0.8 or above), then the test has high concurrent validity.
How to improve the validity of experiments?
-Use a control group.
-Use a single blind procedure - reduce demand characteristics.
-Use double blind procedures - reduce demand characteristics and investigator effects.
How to improve the validity of questionnaires?
-Assure respondents that all data submitted will remain anonymous meaning they will be more truthful.
How to improve the validity of observations?
-Covert observation ensures participants are not aware that they are being observed, making their behaviour more natural and realistic.
-Behavioural categories need to be specific and easy to measure.
How to improve the validity of qualitative methods?
-More depth and detail in case studies and interviews.
-This extra depth better reflects the real life behaviours of participants.
-Qualitative methods have higher ecological validity than quantitative.
What do stats tests tell us?
-Used to decide whether any pattern in the data is significant or whether the pattern was caused by chance.
-When conducting a stats test, psychologists will set a significance level - the p value (usually 0.05).
-This is the minimum level of risk that psychologists are willing to take that the results are due to chance.
-E.G. p<0.05
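The significance decision above can be sketched as a small function; the p-values passed in are hypothetical examples:

```python
def decide(p_value, alpha=0.05):
    """Accept the alternative hypothesis only if p falls below the significance level."""
    return "accept alternative hypothesis" if p_value < alpha else "accept null hypothesis"

print(decide(0.03))  # below 0.05, so the result is significant
print(decide(0.20))  # likely due to chance, so the null is accepted
```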
What hypothesis do we accept if it was highly probable that results were caused by chance?
Null hypothesis.
What hypothesis do we accept if it was highly improbable that results were caused by chance?
Research hypothesis/ alternate hypothesis.
Give three experimental designs.
Independent groups design, repeated measures design, matched pairs design.
Give the three levels of measurement.
Nominal data, ordinal data, interval data.
Define nominal data.
A level of measurement where data are in separate categories.
Define ordinal data.
A level of measurement where data are ordered in some way, but the differences between units are not equal.
Define interval data.
A level of measurement where units are equal and precisely defined.
Give the way to remember how to choose a statistical test.
Carrots Should Come Mashed With Swede Under Roast Potatoes - the first letters give the tests row by row (unrelated design, related design, correlation):
-Nominal: Chi-squared, Sign test, Chi-squared.
-Ordinal: Mann-Whitney, Wilcoxon, Spearman's rho.
-Interval: Unrelated t-test, Related t-test, Pearson's r.
When to use a one-tailed test?
If a directional hypothesis was used.
When to use a two-tailed test?
If a non-directional hypothesis was used.
Define type one error.
Incorrectly rejecting the null hypothesis (a false positive). More likely when the significance level is too lenient (e.g. 0.1), as it is then easier for results to appear significant.
Define type two error.
Incorrectly accepting the null hypothesis (a false negative). More likely when the significance level is too strict (e.g. 0.01), as it is then harder for results to appear significant.
What is an abstract?
-First section of a journal article; a short summary (around 200 words).
-Includes the major elements of the research: aims and hypothesis, method, results.
-Allows the reader to get a quick picture of the journal.
What is an introduction?
-Begins with a review of previous research.
-This is so the reader knows what other research has been done.
-Should start broad and become more specific.
What makes up a method part of a journal?
-Research design and justification.
-Sample.
-Apparatus.
-Material.
-Procedure.
-Ethics.
What makes up a results part of a journal?
-Summarise the key findings from investigation.
-Descriptive statistics - tables, graphs, measures of dispersion.
-Inferential statistics.
-Raw data should appear in the appendix.
What makes up a discussion part of a journal?
-Summary of results verbally.
-Relationship to previous research.
-Criticism or praise of methodology used.
-Implications for psychological theory and possible real-world applications.
-Suggestions for future research.
How to reference a book?
Surname, First initial. (Year published). Title of publication. Place of publication: Publisher.
Define empirical methods.
-Information gained through direct observation or experiment rather than by argument or belief.
-Scientists look for empirically based facts.
-Claims are verified through direct testing to ensure they are correct.
Define objectivity.
-Empirical data should be scientific.
-Not affected by the expectations, personal opinions and biases of the researcher.
Define replicability.
-Repeat study to check validity.
-A study can be repeated over a number of different contexts and circumstances.
-If the results are the same then this demonstrates that findings can be generalised.
-It is important for scientists to have standardised procedures so other scientists can repeat them to verify results.
Define theory construction.
-A theory is a collection of general principles that explain observations and facts.
-Such theories help us understand and predict the natural phenomena around us.
-Scientists can then test these theories in their studies.
Define hypothesis testing.
-Should be possible to make clear and precise predictions on the basis of a theory.
-Theories can be scientifically tested.
-A hypothesis should be tested using systematic and objective methods to determine whether it will be supported or refuted.
Define falsifiability.
-Karl Popper (1934) argued that the key criterion of a scientific theory is its falsifiability.
-Popper suggested that genuine scientific theories should hold themselves up for hypothesis testing and the possibility of being proven false.
-He believed that even when a scientific principle had successfully and repeatedly been tested it was not necessarily true as instead it has simply not been proven false yet.
Define a paradigm.
-Thomas Kuhn suggested that what distinguishes scientific disciplines from non-scientific disciplines is a shared set of assumptions and methods - a paradigm.
-Social sciences lack a universally accepted paradigm.
-Natural sciences are characterised by having a number of core unified principles.
-Kuhn argued that psychology is marked by too many internal disagreements and conflicting approaches to qualify as a science.
Define a paradigm shift.
-Kuhn suggested that in science, one theory remains dominant despite occasional challenges from disconfirming studies.
-If evidence against the dominant theory builds up, the dominant theory can no longer be maintained.
-When this happens, the dominant theory is overthrown and replaced - a paradigm shift.