What is the first step in quantitative research?
Theory. It is the starting point of quantitative research, reflecting a deductive approach and guiding what the researcher wants to explain.
What is a hypothesis in quantitative research?
A testable statement drawn from theory that indicates variables and their expected relationship. Note: Not always used; some quantitative research begins with broad questions.
What is the purpose of selecting a research design in quantitative research?
Choosing a design (e.g., experiment, cross-sectional survey, longitudinal) affects external validity and the ability to infer causality.
What is operationalization?
The process of turning abstract concepts into measurable indicators. This is a key issue in quantitative research, requiring valid and reliable measures.
What factors influence the selection of research site(s) in quantitative research?
The selection depends on the topic and access. Practical issues like obtaining permissions may force changes in initial plans (e.g., as seen in Teevan & Dryburgh's study of delinquent boys).
Why is an ethics review required in quantitative research?
Required for all research involving human participants to ensure their welfare, consent, and confidentiality.
What considerations guide Step 7: Selecting Participants / Sampling in quantitative research?
This depends on the research design. Large studies may use complex probability sampling, while experiments rarely use elaborate sampling. Practical issues like low response rates can also influence participant recruitment.
How are research instruments administered in experiments?
Step 8: Administering Research Instruments
For experiments, this involves:
Pre-test
Manipulating the independent variable for the experimental group
Post-test
How are research instruments administered for surveys and structured observation?
Step 8: Administering Research Instruments
Surveys: Involve structured interviews or self-completion questionnaires.
Structured observation: Requires observing a setting and recording predefined behavior categories.
What is coding in the context of data recording?
Converting responses or other information into numbers so data can be recorded systematically and accurately. Most data must be coded before analysis; only inherently numeric information (e.g., age, income) can be recorded directly.
What is the primary goal of data analysis in quantitative research?
To use statistical techniques to test relationships between variables, check the reliability of measures, and examine patterns, correlations, and differences.
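As an illustration of such statistical testing, here is a minimal Python sketch (the cards name no software, so scipy and all table values are assumptions) that tests whether two hypothetical categorical variables are related using a chi-square test:

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = gender, columns = volunteered last year (yes/no)
table = [[30, 20],
         [18, 32]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a small p-value suggests the variables are related
```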
What key questions are addressed during Step 11: Interpretation of Findings?
Step 11: Interpretation of Findings
Researchers link results to the research purpose, asking:
Did we answer the research question(s)?
Were hypotheses supported?
What do findings mean for theory?
Should theory be revised or rejected?
What is the purpose of Step 12: Writing Up in quantitative research?
To present findings publicly (e.g., paper, report) to demonstrate the relevance of conclusions, validity, and robustness of findings. Only published research influences knowledge.
What is a concept in research?
A category used to organize social reality, forming the foundations of theory. Examples include emotional labor, crime, and academic achievement.
Why is measurement important in quantitative research?
Measurement allows researchers to:
Identify subtle differences between people.
Provide consistent measurement across time and researchers (reliability).
Enable statistical analysis of relationships between variables.
Distinguish between a nominal definition and an operational definition.
A nominal definition describes what a concept means in words (like a dictionary definition), whereas an operational definition specifies the concrete indicators and procedures used to measure the concept.
What are indicators?
Observable measures that represent a concept (e.g., income as a direct indicator of wealth, absenteeism as an indirect indicator of morale).
What are common sources of indicators used to measure concepts in quantitative research?
Indicators are commonly drawn from survey questions (self-completion questionnaires or structured interviews), structured observation of behaviour, official statistics, and content analysis of documents or media. Multiple indicators are often preferred for improved validity and reliability.
What is a Likert Item, and how does it contribute to measurement?
Likert items are part of multi-indicator attitude scales consisting of a series of statements on the same theme. Respondents choose from ordered categories (e.g., agreement, evaluation), and scores from individual items are aggregated into an overall score.
What are the rules for constructing effective Likert scales?
Items must be statements, not questions.
All items must relate to the same topic and be interrelated (measure the same attitude).
Include positive and negative wording to avoid response sets.
Respondents who answer all items identically may need exclusion.
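A minimal Python sketch of how these rules might be applied when scoring a Likert scale (all item values and positions are invented): negatively worded items are reverse-coded before aggregation, and a respondent who answers every item identically is flagged.

```python
# One hypothetical respondent's answers to a five-item, 5-point Likert scale
answers = [5, 2, 4, 1, 5]
negative_items = {1, 3}  # 0-based positions of the negatively worded items

def scale_score(answers, negative_items, points=5):
    # Reverse-code negatively worded items so a high score always
    # indicates the same direction of attitude
    adjusted = [points + 1 - a if i in negative_items else a
                for i, a in enumerate(answers)]
    return sum(adjusted)  # aggregate item scores into an overall score

print(scale_score(answers, negative_items))  # 5 + 4 + 4 + 5 + 5 = 23

# Identical answers on every item may indicate a response set (rule 4)
if len(set(answers)) == 1:
    print("flag respondent for possible exclusion")
```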
Why is it often preferable to use multiple-item measures rather than single-item measures in quantitative research?
A single item may be misunderstood or capture only one aspect of a concept; multiple items reduce the impact of any one flawed item, cover more dimensions of the concept, allow finer distinctions between respondents, and make it possible to assess internal reliability.
How do dimensions of concepts impact measurement in quantitative research?
Many concepts have multiple components (dimensions). Measures should reflect all major dimensions to provide a comprehensive profile. For example, 'professionalism' can involve confidentiality, fiscal honesty, and continuing education. Simple concepts (e.g., age) can use single indicators if appropriate.
What matters most regarding measures of concepts?
Measures must be reliable and valid to accurately capture the intended concept.
What is unstructured data?
Information that is not pre-organized into fixed categories, such as answers to open-ended survey questions, interview transcripts, or documents, requiring coding before statistical analysis.
What is coding (for unstructured data)?
The process of identifying themes, patterns, or categories in qualitative, unstructured responses and assigning each theme a label (code), often converted into numbers for quantitative analysis.
Outline the steps involved in post-coding unstructured data.
Read all responses to identify themes.
Create a coding frame (list of categories).
Assign numbers to each category.
Re-read all responses and assign the correct code.
Enter coded data into a spreadsheet.
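A minimal Python sketch of steps 2-4 (the open question, themes, and code numbers are all invented): the coding frame maps each theme to exactly one code, with an "Other" category keeping it exhaustive.

```python
# Hypothetical coding frame for the open question
# "Why did you choose your program of study?"
coding_frame = {
    "career prospects": 1,
    "personal interest": 2,
    "family influence": 3,
    "other": 9,  # catch-all keeps the frame exhaustive
}

def assign_code(theme: str) -> int:
    """Map an identified theme to its numeric code; unknown themes go to 'other'."""
    return coding_frame.get(theme.lower(), coding_frame["other"])

themes = ["Career prospects", "Personal interest", "liked the campus"]
print([assign_code(t) for t in themes])  # [1, 2, 9] -> ready for the spreadsheet
```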
What are potential problems associated with post-coding?
Inconsistency between coders, measurement error, and reduced validity (codes may not accurately represent respondents’ intended meanings). Poorly designed open questions can exacerbate these issues.
What are the three principles of good coding categories?
Must Not Overlap: Each response must fit one and only one category, ensuring codes represent distinct concepts.
Must Be Exhaustive: All possible responses must fit into one of the codes, often with an “Other” category for rare or unexpected responses.
Clear Coding Rules: Provide instructions with examples for each category, improving coder consistency and ensuring inter-coder reliability.
How does coding in qualitative research typically differ from quantitative post-coding?
In qualitative studies, coding is often more interpretive and iterative. Its primary aim is to identify patterns for thematic analysis rather than for numerical statistical analysis, as seen in quantitative post-coding.
What is reliability in measurement?
The consistency of a measure, meaning it gives the same results under consistent conditions.
What are the three main forms of reliability?
Stability over time (assessed with the test-retest method), internal reliability (internal consistency), and inter-observer consistency.
How is 'stability over time' assessed using the Test-Retest Method?
The measure is administered at Time 1 (T1) and again to the same respondents at Time 2 (T2), with an expectation of a high correlation between the two observations (Obs1 and Obs2) if the measure is stable.
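A minimal Python sketch of this logic (the scores and the use of numpy are assumptions): the correlation between the T1 and T2 scores indicates stability.

```python
import numpy as np

# Hypothetical scores for the same six respondents at Time 1 and Time 2
t1 = np.array([10, 14, 9, 16, 12, 11])
t2 = np.array([11, 13, 9, 15, 13, 10])

r = np.corrcoef(t1, t2)[0, 1]  # correlation between Obs1 and Obs2
print(f"test-retest r = {r:.2f}")  # a high r suggests the measure is stable over time
```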
What are common problems or challenges with Test-Retest Reliability?
Answers at Time 1 (T1) may influence answers at Time 2 (T2), or events occurring between administrations (T1 and T2) may change the underlying attitudes being measured, even if the instrument itself is reliable. There is no ideal solution, and complex designs are often required.
What is Internal Reliability (Internal Consistency)?
Internal reliability concerns whether the multiple items used to measure a concept are consistent with each other. If items all measure the same concept, scores on each item should be related (e.g., agreeing on voting importance and freedom of speech).
What is Cronbach's Alpha used for?
The most common statistic for assessing internal reliability (internal consistency) of multiple items measuring a concept, ranging from 0 to 1. Higher values indicate better consistency, with .80 typically considered minimum acceptable and .60 acceptable for exploratory research.
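Cronbach's alpha is computed from the item variances and the variance of the summed scores: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals), where k is the number of items. A minimal Python sketch with invented scores (numpy assumed):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha for an (n_respondents, k_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: 5 respondents x 4 Likert items
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # compare against the .80 / .60 benchmarks
```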
Describe the Split-Half Method for assessing internal reliability.
A technique to test internal reliability where items are split into two halves (randomly or odd/even-numbered items). The correlation between each respondent’s score on the two halves is then calculated. A correlation of 1 signifies perfect consistency; 0 signifies no consistency.
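A minimal Python sketch of the split-half method (the scores are invented, and the Spearman-Brown correction at the end is a standard adjustment not mentioned in the card):

```python
import numpy as np

# Hypothetical scores: 5 respondents x 4 items on the same scale
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

odd = scores[:, 0::2].sum(axis=1)   # each respondent's score on items 1 and 3
even = scores[:, 1::2].sum(axis=1)  # each respondent's score on items 2 and 4
r = np.corrcoef(odd, even)[0, 1]    # 1 = perfect consistency, 0 = none

# Spearman-Brown correction estimates the reliability of the full-length scale
print(f"split-half r = {r:.2f}, corrected = {2 * r / (1 + r):.2f}")
```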
Why is inter-observer consistency important when multiple researchers are involved in subjective judgments?
It ensures reliability by assessing whether different observers agree in their assessments (e.g., coding open-ended questions or structured observations), thereby reducing measurement error and subjective bias. If observers classify the same behavior differently, reliability suffers.
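A minimal Python sketch of checking inter-observer agreement (the codes are invented; Cohen's kappa is a widely used chance-corrected agreement statistic, though the card does not name it):

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n   # observed agreement
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)               # agreement expected
              for c in freq_a.keys() | freq_b.keys())         # by chance alone
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to the same eight responses
coder1 = [1, 2, 2, 3, 1, 2, 3, 1]
coder2 = [1, 2, 3, 3, 1, 2, 3, 2]
print(round(cohen_kappa(coder1, coder2), 2))  # values near 1 indicate strong agreement
```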
What is measurement validity?
Measurement validity concerns whether an indicator (or a set of indicators) truly measures the concept it is intended to measure (e.g., Do IQ tests truly measure intelligence? Do multiple-choice exams truly measure academic ability?).
Name four forms of evidence used to demonstrate measurement validity.
Face validity
Concurrent validity
Construct validity
Convergent validity
Explain Face validity and how it is assessed.
Face validity is the most basic form of validity, referring to whether, on the surface, the measure appears to reflect the concept it claims to measure. It is an intuitive and subjective judgment, often assessed by asking experts in the field or by researchers' own intuition.
What is Concurrent validity and how is it assessed?
Concurrent validity is established when a measure correlates with a relevant criterion measured at the same time. It's assessed by identifying a criterion that theory suggests should be related to the concept, then measuring both around the same time and comparing results (e.g., high job satisfaction scores should correlate with lower absenteeism).
What is Construct validity and how is it assessed?
Construct validity assesses whether the measure behaves in ways consistent with theoretical expectations about the relationships between concepts. It's assessed by identifying theoretical predictions (e.g., job satisfaction increases when job variety increases) and testing whether variables correlate in the expected directions. If not, the measure, deduction, or theory may be flawed.
What is Convergent validity and how is it assessed?
Convergent validity is demonstrated when a measure correlates with other measures of the same concept (especially those using different methods). It's assessed by collecting data using two different methods that claim to measure the same construct and comparing the results (e.g., managers' estimates of time spent on activities vs. direct observation).
What are potential problems or nuances with Convergent Validity?
A lack of correlation between different measures of the same concept (e.g., victimization surveys not matching official police statistics) does not always mean one measure is wrong; the measures may tap different aspects of the same phenomenon, so a failure of convergence is ambiguous rather than clear evidence that either measure is invalid.
What is the relationship between reliability and validity?
If a measure is not reliable, it cannot be valid, because an inconsistent measure cannot accurately capture a single concept. However, a measure can be reliable but not valid (e.g., a bathroom scale that reliably adds 5 kg every time leads to consistent but inaccurate results).
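The bathroom-scale example in a few lines of Python (all numbers invented): the readings never vary, so the scale is perfectly reliable, yet every reading is 5 kg off, so it is not valid.

```python
true_weight = 70.0                   # the person's actual weight in kg
readings = [75.0, 75.0, 75.0, 75.0]  # the scale adds 5 kg every time

spread = max(readings) - min(readings)
bias = sum(readings) / len(readings) - true_weight

print(spread)  # 0.0 -> perfectly consistent (reliable)
print(bias)    # 5.0 -> systematically wrong (not valid)
```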
What are the practical implications of understanding reliability and validity in research?
Researchers should be cautious when assuming measures are valid just because they are commonly used. Limited testing can mean measures are weaker than assumed, requiring researchers to interpret findings with awareness of possible measurement issues.
What are the four main goals of quantitative researchers?
Measurement
Establishing Causality
Generalization of Findings (External Validity)
Replication
Explain Measurement as a goal of quantitative research, including key concerns.
Quantitative researchers aim to measure social phenomena (e.g., prejudice, homelessness) scientifically to detect social patterns and test theories. Key concerns are Reliability (consistency) and Validity (accuracy), as all subsequent analysis depends on strong measurement.
Why is 'establishing causality' a goal in quantitative research?
Beyond simply describing, quantitative researchers seek to understand why social patterns occur by identifying dependent variables (what is explained) and independent variables (possible causes). Causality is difficult due to social complexity and often unclear time order in cross-sectional designs.
What methods can improve causal inference in quantitative research?
Experiments: Offer stronger causal inference due to manipulation of independent variables.
Statistical controls: Help rule out alternative explanations.
Longitudinal designs: Track variables over time to clarify temporal order.
What is 'generalization of findings' (external validity) in quantitative research?
The goal of ensuring that research findings apply beyond the specific people or setting studied, encompassing both real-world settings and broader populations.
What are the two major aspects of external validity concerning generalization?
Generalizing to Real-World Settings: Whether findings apply in natural, everyday environments, as artificial settings (e.g., labs, staged interviews) can limit applicability.
Generalizing to Other People or Populations: Whether findings apply beyond the study participants, typically made possible by a representative (probability) sample drawn from a specific population. Results should not be generalized beyond the population from which the sample was drawn.
Why is replication important in quantitative research?
Replication reflects the scientific principle that findings should be reproducible, helping ensure results are not affected by researcher bias, values, or errors. This strengthens confidence in the validity of findings and addresses the higher risk of subjectivity in social research.
What are some challenges researchers face in achieving replication in social research?
Replication is often undervalued and rarely published. Exact duplication of complex social settings is difficult, and if conditions differ, discrepancies may be blamed on design differences rather than true error. Only careful replication can confirm that differing results reflect genuine social change rather than flaws in the original study.
List the six main critiques of quantitative research.
Treating people like natural phenomena
False sense of precision
Disconnection from everyday life
Abstract analysis of relationships
Explanations lack participant perspectives
Objectivist assumptions
Explain the critique of quantitative research regarding "treating people like natural phenomena." How do quantitative defenders respond?
Critics argue that quantitative researchers fail to distinguish humans from the natural world, suggesting that since human actions are interpreted and meaning-laden, natural science methods may not fully capture social phenomena. Defenders counter that humans are part of the natural world and can be studied scientifically, which is indispensable for detecting societal patterns.
Elaborate on the critique of quantitative research concerning "false sense of precision." How do quantitative researchers respond?
Quantitative measures can create an illusion of accuracy because fixed-choice questions may oversimplify and ignore the complexity of meaning (e.g., Cicourel, 1964). Quantitative researchers argue that careful question design and pretesting can partially address this issue.
What does the critique "disconnection from everyday life" imply about quantitative research? How do quantitative researchers mitigate this?
Research instruments like surveys or structured interviews can fail to reflect real-world contexts; respondents may lack knowledge or interest, and experiments often occur in artificial, short-term conditions. Quantitative researchers mitigate this with filter questions and careful sampling.
Describe the critique "abstract analysis of relationships" in quantitative research. What is the counter-argument?
Quantitative research focuses on relationships between variables, sometimes ignoring human interpretation and social context, leading to explanations that can feel remote from lived experience. Quantitative researchers counter that surveys can include questions about meaning and attitudes, though more participant-centered approaches are often qualitative.
How does the critique "explanations lack participant perspectives" challenge quantitative research?
While quantitative research may identify patterns (e.g., increasing rates of births to unwed mothers), it may not explain why these exist from the subjects’ point of view. It can accurately tabulate trends but fail to capture the lived experiences and social meanings behind them, potentially misinterpreting social phenomena.
Explain the critique of quantitative research's "objectivist assumptions." How do quantitative researchers respond?
Quantitative researchers often assume a social reality independent of observers or a fixed social order. Critics argue this ignores how people create social reality through interaction. Quantitative researchers respond that some social phenomena exist and have effects regardless of perception, and their scientific study is valid, even if the debate continues.