Research methods - Y13

53 Terms

1

Define correlations.

A measure of the relationship/association between two variables.

2

Define co-variables.

The variables investigated in a correlation.
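
Not part of the original cards: a minimal Python sketch of how a correlation between two co-variables might be computed. The co-variables and scores are invented for illustration, and scipy is assumed to be available.

```python
# Hypothetical scores for two co-variables, e.g. hours of revision
# and exam mark for seven participants (numbers are illustrative).
from scipy.stats import pearsonr

revision_hours = [2, 4, 5, 7, 8, 10, 12]
exam_marks = [35, 42, 50, 55, 61, 70, 78]

# pearsonr returns the correlation coefficient and a p-value.
r, p_value = pearsonr(revision_hours, exam_marks)
print(f"correlation coefficient r = {r:.2f}, p = {p_value:.3f}")

# r close to +1 = strong positive association; close to -1 = strong
# negative association. Note: this says nothing about cause and effect.
```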

3

Three strengths of correlations.

-Quick and economical to carry out.

-Can be used when it is unethical to manipulate the IV.

-If a relationship is found, it can then be investigated further in future research.

4

Two limitations of correlations.

-Cause and effect cannot be determined.

-An untested (third) variable could be causing the relationship.

5

Define case-study.

An in-depth investigation, description or analysis of a single individual, group, institution or event, e.g. the case of HM, Phineas Gage, or the London riots of 2011.

6

Characteristics of case studies.

-Often use qualitative data.

-Often takes place over a long period of time (longitudinal).

-Uses a range of different research methods, increasing the reliability of findings through the process of triangulation.

7

Three strengths of case studies.

-Rich in detail.

-Shed light on rare or atypical behaviour.

-Can act as a starting point for future research.

8

Four limitations of case studies.

-Hard to generalise.

-Content is subjective.

-Personal accounts of participants/ their family and friends are prone to inaccuracies.

-Based on retrospective accounts.

9

Define content analysis.

A technique that allows us to turn qualitative data (e.g. conversations, speeches and texts) into numerical data, which can then be analysed to find patterns.

10

Process of content analysis.

1. Read and re-read item.

2. Find common themes within the data.

3. Give examples of identified themes.

4. Count and tally each time a theme occurs.
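
Not from the original cards: a small Python sketch of step 4 (counting and tallying themes). The themes and coded extracts are invented for illustration.

```python
from collections import Counter

# Suppose steps 1-3 have been done by hand and each extract of the
# source material has been coded under a theme (hypothetical labels).
coded_extracts = ["aggression", "helping", "aggression",
                  "humour", "aggression", "helping"]

# Step 4: count and tally each time a theme occurs.
tallies = Counter(coded_extracts)
for theme, count in tallies.most_common():
    print(f"{theme}: {count}")

# The tallies turn the qualitative material into quantitative data
# that can then be summarised or tested statistically.
```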

11

Process of thematic analysis.

1. Read and re-read item.

2. Find common themes.

3. Under each theme, provide evidence from the text that shows it fits that theme, and state where it came from, e.g. line 14.

12

Two strengths of content analysis.

-High external validity - the data being analysed can be generalised to everyday life as it is taken from real life sources.

-Flexible - can produce both qualitative and quantitative data.

13

Two limitations of content analysis.

-Observer bias reduces the objectivity and validity of findings as different observers may interpret the meaning of behavioural categories differently.

-People are studied indirectly and so the communication they produce is analysed outside of the context within which it occurred.

14

Define reliability.

A measure of consistency. If a particular measurement can be repeated, giving the same results, then that measurement is described as being reliable.

15

Process of test-retest method.

1. Give the same test or questionnaire to the same person on different occasions.

2. If the test/ questionnaire is reliable then the results should be the same or very similar.

3. The two scores will then be correlated to check if they are similar.

4. If the correlation is +0.8 or above then the test has good reliability.
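
A hedged sketch of the test-retest check in Python, assuming scipy is available; the questionnaire scores are invented. The same correlation check can be applied to inter-observer reliability (card 16) by correlating the two observers' tallies instead.

```python
from scipy.stats import pearsonr

# Hypothetical questionnaire scores for the same ten people
# on two different occasions.
first_occasion = [12, 15, 9, 20, 18, 14, 11, 17, 16, 13]
second_occasion = [13, 14, 10, 19, 18, 15, 11, 16, 17, 12]

r, _ = pearsonr(first_occasion, second_occasion)
print(f"test-retest correlation = {r:+.2f}")

# +0.8 or above is taken as good reliability.
print("good reliability" if r >= 0.8 else "reliability is questionable")
```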

16

Process of inter-observer reliability.

1. Observers should conduct their research in teams of at least two.

2. Observers watch the same event as each other but record data on their own.

3. Data collected by the observers should be correlated to assess its reliability (+0.8 or above = good reliability).

17

How to improve the reliability of interviews?

-Interviewers must be properly trained and use no leading questions.

-Structured interviews - interviewer's behaviour is more controlled due to fixed questions.

18

How to improve the reliability of experiments?

-Standardised procedures - keeping the procedure the same for all participants makes results more reliable.

19

How to improve the reliability of observations?

-Observers should use operationalised behavioural categories so that each observer does not have a different perception of what each category means.

20

Define validity.

The extent to which a study measures what it intends to measure.

21

How to use face validity to assess validity?

-This can be determined by simply looking through the test or scale or by passing it to an expert to check.

-e.g. give an IQ test to an expert.

22

How to use concurrent validity to assess validity?

-Involves comparing a test with another recognised and well-established test that measures the same topic.

-If the results of the new test are very similar (e.g. a correlation of +0.8 or above), then the test is high in concurrent validity.

23

How to improve the validity of experiments?

-Use a control group.

-Use a single blind procedure - reduce demand characteristics.

-Use double blind procedures - reduce demand characteristics and investigator effects.

24

How to improve the validity of questionnaires?

-Assure respondents that all data submitted will remain anonymous meaning they will be more truthful.

25

How to improve the validity of observations?

-Covert observation ensures participants are not aware that they are being observed, making their behaviour more natural and realistic.

-Behavioural categories need to be specific and easy to measure.

26

How to improve the validity of qualitative methods?

-More depth and detail in case studies and interviews.

-This extra depth better reflects the real life behaviours of participants.

-Qualitative methods have higher ecological validity than quantitative.

27

What do stats tests tell us?

-Used to decide whether any pattern in the data is significant or whether the pattern was caused by chance.

-When conducting a stats test, psychologists set a significance level (usually 0.05) against which the calculated p-value is compared.

-This is the maximum level of risk that psychologists are willing to accept that the results are due to chance.

-E.g. p < 0.05.
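
Not from the original cards: a minimal sketch of the significance decision, using invented scores for two conditions and scipy's independent t-test.

```python
from scipy.stats import ttest_ind

# Hypothetical scores from two conditions of an experiment.
condition_a = [14, 16, 15, 18, 17, 16, 19, 15]
condition_b = [11, 12, 13, 10, 14, 12, 11, 13]

_, p_value = ttest_ind(condition_a, condition_b)
alpha = 0.05  # the chosen significance level

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: significant, reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: not significant, retain the null hypothesis")
```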

28

What hypothesis do we accept if it was highly probable that results were caused by chance?

Null hypothesis.

29

What hypothesis do we accept if it was highly improbable that results were caused by chance?

Research hypothesis/alternative hypothesis.

30

Give three experimental designs.

Independent groups design, repeated measures design, matched pairs design.

31

Give the three levels of measurement.

Nominal data, ordinal data, interval data.

32

Define nominal data.

A level of measurement where data are in separate categories.

33

Define ordinal data.

A level of measurement where data are ordered in some way; the difference between units is not the same.

34

Define interval data.

A level of measurement that includes units of equal, precisely defined size.

35

Give the way to remember how to choose a statistical test.

Carrots Should Come Mashed With Swede Under Roast Potatoes.
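
Assuming the mnemonic stands for the standard test-choice table (Chi-squared, Sign test, Chi-squared, Mann-Whitney, Wilcoxon, Spearman's rho, unrelated t-test, related t-test, Pearson's r), it can be written as a small lookup keyed by level of measurement and design. This sketch is an interpretation, not part of the original card.

```python
# (level of measurement, design) -> statistical test, following the
# "Carrots Should Come Mashed With Swede Under Roast Potatoes" order.
CHOOSE_TEST = {
    ("nominal", "unrelated"): "Chi-squared",        # Carrots
    ("nominal", "related"): "Sign test",            # Should
    ("nominal", "association"): "Chi-squared",      # Come
    ("ordinal", "unrelated"): "Mann-Whitney",       # Mashed
    ("ordinal", "related"): "Wilcoxon",             # With
    ("ordinal", "association"): "Spearman's rho",   # Swede
    ("interval", "unrelated"): "Unrelated t-test",  # Under
    ("interval", "related"): "Related t-test",      # Roast
    ("interval", "association"): "Pearson's r",     # Potatoes
}

print(CHOOSE_TEST[("ordinal", "association")])  # Spearman's rho
```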

36

When to use a one-tailed test?

If a directional hypothesis was used.

37

When to use a two-tailed test?

If a non-directional hypothesis was used.
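
Not from the original cards: a sketch of how this choice shows up in practice, assuming a recent version of scipy (which supports the alternative argument of ttest_ind); the data are invented.

```python
from scipy.stats import ttest_ind

condition_a = [14, 16, 15, 18, 17, 16]
condition_b = [11, 12, 13, 10, 14, 12]

# Directional hypothesis ("A scores higher than B") -> one-tailed test.
_, p_one_tailed = ttest_ind(condition_a, condition_b, alternative="greater")

# Non-directional hypothesis ("A and B differ") -> two-tailed test.
_, p_two_tailed = ttest_ind(condition_a, condition_b, alternative="two-sided")

print(f"one-tailed p = {p_one_tailed:.3f}, two-tailed p = {p_two_tailed:.3f}")
```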

38

Define type one error.

Incorrectly rejecting the null hypothesis (a false positive). More likely when the significance level is too lenient (e.g. 0.1), as this makes it easier to conclude that results are significant.

39

Define type two error.

Incorrectly accepting the null hypothesis (a false negative). More likely when the significance level is too strict (e.g. 0.01), as this makes it harder to conclude that results are significant.
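
Not from the original cards: a small simulation sketch (invented parameters, numpy and scipy assumed) showing the trade-off: a lenient significance level inflates Type I errors, while a strict one inflates Type II errors.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
N_EXPERIMENTS = 2000

def rejection_rate(true_effect, alpha):
    """Proportion of simulated experiments whose p-value falls below alpha."""
    rejections = 0
    for _ in range(N_EXPERIMENTS):
        group_a = rng.normal(0.0, 1.0, 20)
        group_b = rng.normal(true_effect, 1.0, 20)
        _, p = ttest_ind(group_a, group_b)
        rejections += p < alpha
    return rejections / N_EXPERIMENTS

# No real effect: any rejection is a Type I error.
print("Type I error rate at 0.10:", rejection_rate(0.0, 0.10))  # roughly 0.10
print("Type I error rate at 0.01:", rejection_rate(0.0, 0.01))  # roughly 0.01

# Real effect present: failing to reject is a Type II error.
print("Type II error rate at 0.10:", 1 - rejection_rate(0.5, 0.10))
print("Type II error rate at 0.01:", 1 - rejection_rate(0.5, 0.01))  # larger
```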

40

What is an abstract?

-The first section of a journal article: a short summary (around 200 words).

-Includes the major elements of the research: aims and hypothesis, method, results.

-Allows the reader to get a quick overview of the research.

41

What is an introduction?

-Begins with a review of previous research.

-This is so the reader knows what other research has been done.

-Should start broad and become more specific.

42

What makes up the method section of a journal article?

-Research design and justification.

-Sample.

-Apparatus.

-Materials.

-Procedure.

-Ethics.

43

What makes up the results section of a journal article?

-Summarises the key findings from the investigation.

-Descriptive statistics - tables, graphs, measures of dispersion.

-Inferential statistics.

-Raw data should appear in the appendix.

44

What makes up the discussion section of a journal article?

-A verbal summary of the results.

-Relationship to previous research.

-Criticism or praise of methodology used.

-Implications for psychological theory and possible real-world applications.

-Suggestions for future research.

45

How to reference a book?

Surname, first initial. (year published). Title of publication. Place of publication: publisher.
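
Not from the original card: a tiny sketch that builds a reference string in this pattern; the book details are invented.

```python
# Assemble a book reference following the pattern on the card.
reference = "{surname}, {initial}. ({year}). {title}. {place}.".format(
    surname="Smith", initial="J", year=2015,
    title="Research Methods in Psychology",
    place="London: Example Press",  # place of publication and publisher
)
print(reference)
# Smith, J. (2015). Research Methods in Psychology. London: Example Press.
```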

46

Define empirical methods.

-Information gained through direct observation or experiment rather than by argument or belief.

-Scientists look for empirically based facts.

-Claims are checked through direct testing to ensure they are correct.

47

Define objectivity.

-For empirical data to be scientific, it must be objective.

-Not affected by the expectations, personal opinions and biases of the researcher.

48

Define replicability.

-Repeat study to check validity.

-A study can be repeated over a number of different contexts and circumstances.

-If the results are the same then this demonstrates that findings can be generalised.

-It is important for scientists to have standardised procedures so other scientists can repeat them to verify results.

49

Define theory construction.

-A theory is a collection of general principles that explain observations and facts.

-Such theories help us understand and predict the natural phenomena around us.

-Scientists can then test these theories in their studies.

50

Define hypothesis testing.

-It should be possible to make clear and precise predictions on the basis of a theory.

-Theories can be scientifically tested.

-A hypothesis should be tested using systematic and objective methods to determine whether it will be supported or refuted.

51

Define falsifiability.

-Karl Popper (1934) argued that the key criterion of a scientific theory is its falsifiability.

-Popper suggested that genuine scientific theories should open themselves up to hypothesis testing and the possibility of being proven false.

-He believed that even when a scientific principle had successfully and repeatedly been tested it was not necessarily true as instead it has simply not been proven false yet.

52

Define a paradigm.

-Thomas Kuhn suggested that what distinguishes scientific disciplines from non-scientific disciplines is a shared set of assumptions and methods - a paradigm.

-Social sciences lack a universally accepted paradigm.

-Natural sciences are characterised by having a number of core unified principles.

-On this view, psychology is marked by too many internal disagreements and has too many conflicting approaches to qualify as a science.

53

Define a paradigm shift.

-Kuhn suggested that in science, one theory remains dominant despite occasional challenges from disconfirming studies.

-If evidence against the dominant theory builds up, the dominant theory can no longer be maintained.

-In this case the dominant theory is overthrown and replaced - a paradigm shift.