Research Methods Y13

110 Terms

1
New cards

What is a case study in psychology?

A detailed, in-depth analysis of an individual, group, institution, or event.

2
New cards

What types of cases do case studies examine?


Both unusual cases (e.g., rare disorders, 2011 riots) and typical cases (e.g., elderly recollections of childhood).

3
New cards

Who was Phineas Gage? (Case study example)

A man who survived an iron rod passing through his skull, providing insights into brain function and personality changes.

4
New cards

What methods are used to gather case study data? What type of data do they produce?

Interviews, observations, questionnaires, and sometimes experimental testing to collect qualitative and quantitative data.

5
New cards

Why are case studies often longitudinal?

They track changes over time and gather additional data from family, friends, and the individual.

6
New cards

Why are case studies useful, data-wise?

They provide detailed, valid insights into unusual behaviors, unlike superficial experiments or questionnaires.

7
New cards

How do case studies help psychology’s understanding?

They improve understanding of normal functions (e.g., HM’s memory study) and can inspire new theories. One contradictory finding can cause a whole theory to be revised.

8
New cards

Why is generalising case studies difficult?

Small sample sizes, researcher bias (they make subjective selections and interpretations), and unreliable personal accounts (memory decay) limit wider application.

9
New cards

What is reliability in psychology?

Reliability is a measure of consistency—a test or measure is reliable if it produces the same results when repeated.

10
New cards

What is the test-retest method?

A way to assess reliability by giving the same test to the same people on different occasions and comparing the results.

11
New cards

What types of tests commonly use the test-retest method?

Questionnaires, psychological tests (e.g., IQ tests), and sometimes interviews.

12
New cards

Why is timing important in test-retest?

The gap should be long enough to prevent recall but not so long that attitudes or abilities change.

13
New cards

How is test-retest reliability measured?

The two sets of scores are correlated—if the correlation is significant and positive, the test is reliable.

14
New cards

What is inter-observer reliability?

The extent to which two or more observers record behavior in the same way, reducing subjectivity and bias.

15
New cards

Why is inter-observer reliability important in observations?

It ensures that data is consistent and objective, preventing differences in interpretation between researchers.

16
New cards

How can inter-observer reliability be established?

By conducting a pilot study to ensure observers use behavioral categories consistently.

17
New cards

How should observers collect data?

They must watch the same events but record data independently to avoid influencing each other.

18
New cards

How is inter-observer reliability assessed?

By correlating the data from different observers—high correlation = high reliability.

19
New cards

Why is a correlation test used in reliability assessment?

To check if two sets of data match, ensuring consistency in test-retest reliability or inter-observer reliability.

20
New cards

What correlation coefficient indicates high reliability?

+0.80 or above—anything lower suggests the test or categories need redesigning.
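
For illustration only (not part of the original cards), a minimal Python sketch, assuming hypothetical scores from the same six participants on two occasions, showing how the two sets of scores can be correlated and checked against the +0.80 rule of thumb:

```python
# Hypothetical test-retest data: the same six participants' questionnaire
# scores on two occasions (made-up numbers, purely for illustration).
from scipy.stats import pearsonr

first_test = [12, 18, 9, 22, 15, 20]
second_test = [13, 17, 10, 21, 14, 19]

r, p_value = pearsonr(first_test, second_test)  # correlation coefficient and p-value
print(f"r = {r:.2f}, p = {p_value:.3f}")

# Rule of thumb from the card: +0.80 or above indicates high reliability.
if r >= 0.80:
    print("High test-retest reliability")
else:
    print("Low reliability - the measure may need redesigning")
```

The same approach applies to inter-observer reliability, correlating the tallies of two observers instead of two testing occasions.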

21
New cards

How is the reliability of questionnaires measured?

Using the test-retest method, where two sets of data should correlate above +0.80 for high reliability.

22
New cards

What can be done if a questionnaire has low reliability?

Rewrite or remove ambiguous questions to ensure consistency in responses over time.

23
New cards

How can question format affect reliability?

Closed questions (fixed-choice) improve reliability by reducing interpretation, unlike open-ended questions.

24
New cards

How can interview reliability be ensured?

Use the same interviewer each time to maintain consistency in questioning and interaction.

25
New cards

What if the same interviewer can't be used for all interviews? (Reliability)

Train all interviewers to avoid issues like leading or ambiguous questions, especially in structured interviews.

26
New cards

How does the interview type affect reliability?

Structured interviews are more reliable because they use fixed questions, while unstructured interviews are less reliable due to their free-flowing nature.

27
New cards

Why are lab experiments considered reliable?

They offer strict control over conditions, such as instructions and testing environment, ensuring consistency.

28
New cards

What does lab control mainly ensure?

Precise replication of the method, not necessarily the reliability of the findings themselves.

29
New cards

What factor can affect the reliability of lab experiment findings?

If participants are tested under slightly different conditions each time, it can undermine the reliability of the results.

30
New cards

How can the reliability of observations be improved?

By ensuring behavioral categories are properly operationalized, measurable, and self-evident (e.g., recording observable "pushing" rather than the more interpretive "aggression").

31
New cards

What is important when defining behavioral categories?

Categories should be distinct (e.g., avoid overlap like "hugging" and "cuddling") and cover all possible behaviors.

32
New cards

What happens if behavioral categories are poorly defined?

Observers may make inconsistent judgments and record data differently, reducing reliability.

33
New cards

What is content analysis?

A type of observational research where people are studied indirectly through the communications they produce.

34
New cards

What is the aim of content analysis?

To summarize and describe communications in a systematic way to draw overall conclusions.

35
New cards

What types of communication can be analyzed in content analysis?

Conversations, texts, books, and TV shows.

36
New cards

What is coding in content analysis?

Coding is the initial stage where large data sets (e.g., interview transcripts) are categorized into meaningful units.

37
New cards

How can coding produce quantitative data?

By counting the frequency of specific words or phrases, like counting derogatory terms for the mentally ill in newspaper reports.
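
For illustration only, a short Python sketch (the text and coding units below are hypothetical) of how coding can turn a communication into frequency counts:

```python
# Hypothetical coding example: count how often pre-defined coding units
# appear in a piece of text, producing quantitative data.
from collections import Counter
import re

text = "The crazy weather continued. Critics called the plan crazy and unstable."
coding_units = ["crazy", "unstable"]  # hypothetical terms being tallied

words = re.findall(r"[a-z]+", text.lower())  # simple word-level tokenisation
counts = Counter(words)

for unit in coding_units:
    print(unit, counts[unit])  # frequency of each coding unit
```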

38
New cards

What is thematic analysis?

A method for analyzing qualitative data by identifying and refining themes that represent key patterns in the data.

39
New cards

How is thematic analysis conducted?

After transcribing the data, researchers review it repeatedly to identify recurring themes, which are then refined and assigned codes.

40
New cards

How are identified themes used in thematic analysis?

Themes are used to support or challenge theories, with specific quotes or data serving as supporting evidence.

41
New cards

How are themes tested for validity?

Researchers collect new data to check if the themes adequately explain the new information before writing the final report.

42
New cards

How does content analysis avoid ethical issues?

Content analysis typically uses publicly available materials (e.g., TV shows, films, internet content) that do not require explicit consent. This reduces ethical concerns like harm or invasion of privacy. Even when the content is more sensitive, material that already exists in the public domain does not require permission, and such data tends to be high in external validity.

43
New cards

How is content analysis flexible?

Content analysis is flexible because it can generate both qualitative data (themes) and quantitative data (frequencies). This allows it to be adapted to different research goals, whether to explore patterns or measure trends.

44
New cards

What is a limitation of content analysis related to bias?

A key limitation is researcher bias, as content is often analyzed outside its original context. This can lead to misinterpretations of intentions or meanings, especially in more descriptive or thematic analyses. Although researchers acknowledge their biases, subjectivity can still affect the results.

45
New cards

Define internal validity

Whether the effect observed in an experiment is due to the manipulation of the IV and not another variable

46
New cards

What may affect internal validity?

If Ps respond to demand characteristics and behave in a way that’s expected of them

47
New cards

Define external validity and give 2 types

Whether data can be generalised to other situations outside of the research environment they were originally gathered in - ecological and temporal validity

48
New cards

Define ecological validity

Whether data is generalisable to the real world, based on the conditions research is conducted under and the procedures involved

49
New cards

Do lab experiments have high/low ecological validity?

Low - they exert a high degree of control over EVs and don't take place in a natural environment, so the results are too artificial

50
New cards

What can impact ecological validity (apart from the experiment’s setting?)

The task - if the task used to measure the DV has low mundane realism, this lowers ecological validity

51
New cards

Define temporal validity

Whether findings from a study/ concepts from a theory remain true over time - e.g. Freud’s ‘penis envy’ (reflects Victorian society’s patriarchal nature)

52
New cards

What is face validity and how is it assessed?

Face validity refers to whether a test appears to measure what it is supposed to. It can be assessed by eyeballing the measure or having an expert review it.

53
New cards

What is a limitation of face validity?

Face validity is subjective and does not guarantee that a test actually measures what it claims to. A test may look valid but lack scientific accuracy.

54
New cards

How is concurrent validity assessed?

A new test is compared with a well-established one. If the results are similar and the correlation is +.80 or above, the new test has high concurrent validity.

57
New cards

How can validity be improved in experimental research?

Use control groups to isolate the effect of the IV, standardise procedures, and use single-blind (to reduce demand characteristics) or double-blind (to reduce both demand characteristics and investigator effects) techniques.

58
New cards

How can validity be improved in questionnaires?

Include a lie scale to detect social desirability bias and ensure anonymity to encourage more truthful responses.

59
New cards

What factors improve validity in observational research?

Covert observations increase ecological validity by preventing participant awareness, and clearly defined behavioural categories prevent subjective interpretation.

60
New cards

Why do qualitative methods tend to have high validity?

They provide detailed, real-life insights that better reflect participant experiences. Case studies and interviews allow for deeper understanding compared to quantitative methods.

61
New cards

What is interpretative validity?

The extent to which the researcher’s interpretation of events matches those of their Ps

62
New cards

How can researchers ensure interpretative validity in qualitative research? (3 ways)

By the coherence of the researcher's reporting, the use of direct quotes from participants for accuracy, and triangulation (cross-checking data from multiple sources like interviews and observations).

63
New cards

What is statistical testing used for?

To determine whether the results from an investigation are statistically significant rather than just occurring by chance - used to decide whether to accept or reject the null hypothesis

64
New cards

What types of design are in a related and unrelated design?

Related - matched pairs and repeated measures; unrelated - independent groups

65
New cards

What is nominal data, and how is it categorized?

Nominal data is categorical (sorted into categories) and discrete, meaning each item fits into only one category. For example, counting how many people prefer apples, oranges, or bananas.

66
New cards

What is ordinal data and how is it different from interval data?

Ordinal data is ranked or ordered (e.g., rating psychology on a scale of 1-10). However, intervals between ranks are not equal, making it less precise than interval data.

67
New cards

Why is ordinal data sometimes considered "unsafe"?

It is subjective and lacks precision. Differences in scores (e.g., rating psychology as a "4" vs. an "8") may not be consistent across participants. This is why raw scores are converted to ranks for statistical testing.

68
New cards

What is interval data and what makes it more precise than ordinal data?

Interval data uses equal, standardized units of measurement (e.g., time, temperature, weight). It is more accurate and preserves more detail than ordinal data.

70
New cards

What mnemonic is used for statistical tests? Include whether critical values have to be greater than/less than the calculated value.

Carrots Should Come Mashed With Swede Under Roast Potatoes

Going down: Nominal, Ordinal, Interval (level of measurement)

Going across: Unrelated design, Related design, Correlation

Nominal: Chi-Squared / Sign test / Chi-Squared
Ordinal: Mann-Whitney / Wilcoxon / Spearman's rho
Interval: Unrelated (independent) t-test / Related t-test / Pearson's r

For the Mann-Whitney, Sign and Wilcoxon tests, the calculated value must be equal to or less than the critical value to be significant; for all other tests, it must be equal to or greater than the critical value.
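
For illustration only, a minimal Python sketch (hypothetical data, using the scipy library rather than hand calculation against critical-value tables) of two cells from this table: a Wilcoxon test for ordinal data from a related design, and Spearman's rho for a correlation between two ordinal co-variables:

```python
# Hypothetical ordinal ratings from the same eight participants in two
# conditions (related design, ordinal data -> Wilcoxon signed-rank test).
from scipy.stats import wilcoxon, spearmanr

condition_a = [4, 6, 5, 7, 3, 6, 5, 4]
condition_b = [6, 7, 6, 8, 5, 7, 6, 6]

stat, p = wilcoxon(condition_a, condition_b)
print(f"Wilcoxon T = {stat}, p = {p:.3f}")

# Two ordinal co-variables (rank 1 = best exam position) -> Spearman's rho.
hours_revised = [2, 5, 1, 6, 4, 3, 7, 2]
exam_rank = [6, 2, 8, 1, 4, 5, 3, 7]

rho, p = spearmanr(hours_revised, exam_rank)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")
```

Note that scipy reports a p-value directly, whereas these cards describe comparing a calculated value against a table of critical values; the choice of test is the same either way.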

71
New cards

Which experimental design/ test must you have for a sign test and which one for Spearman’s rho?

Sign - related; Spearman’s rho - correlation

72
New cards

What is an alternative hypothesis? What 2 types can you have?

The alternative hypothesis (H₁) predicts a difference or relationship between variables. It can be directional (specific prediction) or non-directional (just states a difference).

73
New cards

What is the purpose of a null hypothesis?

The null hypothesis (H₀) states that no significant difference or relationship exists between variables. Statistical tests determine whether we reject or fail to reject H₀.

74
New cards

How do psychologists determine whether to accept the null or alternative hypothesis?

If the evidence supports H₁, we reject H₀ and accept the alternative hypothesis. If not, we fail to reject H₀, meaning the findings are not strong enough to support H₁.

75
New cards

What is the difference between descriptive and inferential statistics?

Descriptive statistics summarize and organize data (e.g., mean, range), while inferential statistics analyze data to make predictions or draw conclusions about a larger population.
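
For illustration only, a minimal Python sketch (hypothetical memory scores) contrasting the two kinds of statistics:

```python
# Descriptive statistics summarise the sample; an inferential test
# (here an independent t-test from scipy) is used to draw a conclusion
# about whether the difference is statistically significant.
from statistics import mean
from scipy.stats import ttest_ind

group_a = [12, 15, 14, 10, 13, 16]  # hypothetical memory scores
group_b = [9, 11, 10, 8, 12, 10]

# Descriptive: summarise each group.
print("Mean A:", mean(group_a), "Range A:", max(group_a) - min(group_a))
print("Mean B:", mean(group_b), "Range B:", max(group_b) - min(group_b))

# Inferential: test the difference so it can be generalised beyond the sample.
stat, p = ttest_ind(group_a, group_b)
print(f"t = {stat:.2f}, p = {p:.3f}")
```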

76
New cards

Why do researchers use inferential statistics?

Since collecting data from an entire population is often impractical, inferential statistics allow researchers to make generalizations based on sample data.

77
New cards

How do descriptive statistics help researchers?

They summarize and simplify data, making patterns easier to see without making predictions (e.g., averages, percentages, variability).

78
New cards

What are the 6 main sections of a psychological report?

A psychological report follows a standard format: Abstract, Introduction, Methods, Results, Discussion, and References.

79
New cards

What is the role of the abstract in a psychological report?

The abstract provides a concise summary (around 150 words) of the study’s aims, hypothesis, method, results, and conclusions to help researchers quickly assess its relevance.

80
New cards

Why do psychologists read abstracts?

Abstracts allow psychologists to quickly evaluate multiple studies and decide which ones are relevant for further review.

81
New cards

What is included in the introduction of a psychological report?

The introduction provides a literature review of relevant theories, concepts, and studies. It follows a logical progression, starting broadly and becoming more specific, leading to the aims and hypothesis of the study.

82
New cards

Why is the methods section detailed in a psychological report?

The methods section provides enough detail for replication and includes subsections on design, sample, materials, procedure, and ethics to ensure clarity and transparency in how the study was conducted.

83
New cards

Why do researchers include both descriptive and inferential statistics?

Descriptive statistics (e.g., tables, graphs, averages) summarise data, while inferential statistics (e.g., statistical tests, significance levels) determine whether the results support the hypothesis.

84
New cards

What is the purpose of the discussion section?

The discussion summarises findings, relates them to previous research, acknowledges limitations, suggests improvements, and considers wider implications, such as real-world applications.

85
New cards

Why is referencing important in a psychological report?

Referencing acknowledges the work of other researchers, ensures academic integrity, and follows a standard format like Harvard Referencing for books and journals.

86
New cards

Why do psychologists rely on probability in statistical tests?

Psychologists use probability because they cannot test entire populations under all conditions. Statistical tests estimate the likelihood that findings are not due to chance, allowing researchers to draw reasonable conclusions from sample data.

87
New cards

What does 𝑝 ≤ 0.05 mean in psychology?

It means the probability that the results occurred by chance is 5% or less. If significance is found at this level, researchers can reject the null hypothesis and accept the alternative, though some uncertainty remains.
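
A minimal sketch of the decision rule this card describes (the function and example p-values are made up for illustration):

```python
# Compare a test's p-value against the chosen significance level (alpha).
def decide(p_value: float, alpha: float = 0.05) -> str:
    if p_value <= alpha:
        return "Significant: reject H0 and accept H1 (a Type I error is still possible)"
    return "Not significant: fail to reject H0"

print(decide(0.03))              # 0.03 <= 0.05 -> significant
print(decide(0.03, alpha=0.01))  # stricter level -> not significant
```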

88
New cards

Why might a researcher use a significance level of 𝑝 ≤ 0.01 instead of 𝑝 ≤ 0.05?

In high-risk studies, such as drug trials, stricter significance levels reduce the chance of errors, ensuring findings are highly reliable before making important conclusions.

89
New cards

What does a lower 𝑝-value indicate in statistical testing?

A lower 𝑝-value means results are less likely to be due to chance, making the findings more statistically significant and increasing confidence in the conclusion.

90
New cards

What is a Type I error in statistical testing?

A Type I error occurs when the null hypothesis is wrongly rejected, meaning the researcher claims to have found a significant result when none actually exists. This is also called a false positive or optimistic error.

91
New cards

What is a Type II error in statistical testing?

A Type II error happens when the null hypothesis is wrongly accepted, meaning a real effect goes undetected. This is known as a false negative or pessimistic error.

92
New cards

How do significance levels influence Type I and Type II errors?

A lenient significance level (e.g., 𝑝 ≤ 0.10) increases the risk of a Type I error, while a stringent significance level (e.g., 𝑝 ≤ 0.01) raises the chance of a Type II error. Psychologists use 𝑝 ≤ 0.05 to balance both risks.
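
For illustration only, a short simulation (made-up population values, using numpy and scipy) showing that when H0 is really true, the proportion of experiments wrongly declared significant sits at roughly the chosen alpha - i.e., the Type I error rate tracks the significance level:

```python
# Simulate many experiments in which H0 is true (both groups drawn from the
# same population) and count how often each alpha level would wrongly
# declare a significant difference (a Type I error).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_experiments = 5000
false_positives = {0.10: 0, 0.05: 0, 0.01: 0}

for _ in range(n_experiments):
    group_a = rng.normal(loc=100, scale=15, size=20)
    group_b = rng.normal(loc=100, scale=15, size=20)  # same population, so H0 is true
    _, p = ttest_ind(group_a, group_b)
    for alpha in false_positives:
        if p <= alpha:
            false_positives[alpha] += 1

for alpha, count in false_positives.items():
    print(f"alpha = {alpha}: Type I error rate ~ {count / n_experiments:.3f}")
# Expect roughly 0.10, 0.05 and 0.01 respectively; lowering alpha trades
# fewer Type I errors for a greater risk of Type II errors (missed effects).
```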

93
New cards

What is a paradigm in science?

A paradigm is a shared set of assumptions and methods that define a scientific discipline, shaping research and understanding.

94
New cards

Why did Kuhn argue that psychology lacks a paradigm?

Psychology has conflicting approaches (e.g., cognitive, behavioral), unlike natural sciences with unifying theories, making it a pre-science according to Kuhn.

95
New cards

What is a paradigm shift?

A paradigm shift happens when new evidence challenges existing beliefs, leading to a scientific revolution.

96
New cards

What is a theory in science?

A theory is a set of general laws or principles that explain events or behaviors, based on evidence gathered through empirical observation.

97
New cards

How are hypotheses tested in psychology?

A hypothesis makes a testable prediction based on a theory. It is tested using systematic, objective methods, which can either support or challenge the theory.

98
New cards

What is deduction in hypothesis testing?

Deduction is the process of deriving new hypotheses from an existing theory, allowing for further scientific testing and refinement.

99
New cards

What does falsifiability mean in science?

Falsifiability is the idea that a scientific theory must be testable and open to the possibility of being proven false through experimentation.

100
New cards

What did Karl Popper argue about scientific theories?

Popper believed that no theory is ever fully proven, only not yet falsified. Theories that survive repeated attempts to be disproven are considered the strongest.