Sociology: Scientific Method and Research Methods Notes
Scientific Method in Sociology: Overview
- Social science research is grounded in gathering and analyzing empirical evidence.
- Quantitative sociologists often develop hypotheses to explain a social phenomenon they’re interested in (e.g., voting patterns).
- The field includes debates about whether to strictly follow the scientific method or adopt alternative approaches; some critique the method, but many still use it as a framework.
- The scientific method in sociology involves testing theories about the social world using evidence to confirm, disprove, or challenge those theories.
- Example given: a study finding that victims of violent crime are more likely to vote Republican. A theory linking victimization to voting behavior is tested using a survey of a random sample of the American population.
- Key takeaway: you start with a theory or question, gather evidence, and then assess whether the evidence supports or challenges the theory.
Steps of the Scientific Method (as highlighted in the course textbook)
- 1) Ask a research question
- Should be neither too vague nor too narrow.
- Examples of good scope: broad enough to be relevant beyond a single classroom, but specific enough to be researchable (e.g., how does class size or instructor characteristics influence student participation).
- Avoid overly broad questions like "How does society function?" or overly narrow classroom-specific questions without broader applicability.
- 2) Consult existing sources (literature review)
- Use scholarly sources (e.g., Google Scholar) to familiarize with prior findings, theories, methods, contradictions.
- Acknowledge that no one reads every article; aim to understand foundational and controversial work, recent developments, and gaps.
- The goal is to build on previous work and identify how your study fills a gap or resolves conflicting results.
- 3) Develop a hypothesis
- A hypothesis is an educated guess about how two or more variables are related.
- Introduce independent (causal) and dependent (effect) variables:
- Independent variable: the cause of the change
- Dependent variable: the effect
- Example framing: age (X) might influence the likelihood of severe COVID outcomes (Y); gender or salary could play analogous roles in other hypotheses.
- Notation example: for variables X (independent) and Y (dependent), a simple functional relationship can be written as
Y = f(X)
- In practice, a common statistical form is: Y = β0 + β1·X + ε (or Y = a + bX + ε)
- 4) Design and conduct the study
- Choose a method aligned with your question and hypothesis:
- Surveys, experiments, secondary data analysis, ethnography, interviews.
- The choice of method shapes how you collect data and what you can claim about causality, correlation, and generalizability.
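The hypothesis step above (a relationship like Y = a + bX + ε) can be illustrated numerically. Below is a minimal Python sketch with invented data; the variables, numbers, and the `fit_line` helper are all hypothetical, not from the lecture:

```python
# Toy illustration of the hypothesis Y = f(X): fit Y = a + b*X by least squares.
# All data are invented; X could stand for age, Y for some outcome score.

def fit_line(xs, ys):
    """Return intercept a and slope b minimizing squared error for Y = a + b*X."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

ages = [20, 30, 40, 50, 60]
scores = [1.0, 2.1, 2.9, 4.2, 5.0]   # roughly linear in age
a, b = fit_line(ages, scores)
print(a, b)  # a positive slope b suggests Y rises with X
```

A positive estimated slope would be consistent with the hypothesis; whether the relationship is causal depends on the study design, not the arithmetic.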
Operational Definitions and Measurement
- Define concepts precisely (operationalization): how you will measure your concepts.
- Example: bias
- Define what kind of bias you’re studying (racial, ethnic, gender, class, education) and whether you mean individual bias or institutional bias.
- Decide how to measure bias in your study (e.g., callbacks for job applications by race, survey attitudes, etc.).
- The importance of measurement validity and justification
- Regardless of the measure, you must clearly define what you’re studying and justify that your measure actually captures that concept (validity).
- Risk: researchers can claim to measure something but actually measure something else (misalignment between concept and measurement).
- Example in practice
- A study on race-based bias in the workplace: define the bias as race-based discrimination in hiring decisions; use a concrete measurement like callback rates for résumés with racially suggestive names.
- The study was published with explicit definitions of bias, population, and setting (workplace, not everyday interactions).
Measuring and Sampling Concepts
- Population vs. sample
- Population: the entire group you want to learn about (e.g., all US adults, all eligible US voters).
- Sample: a subset representing the population.
- Random sampling and representativeness
- Random sample: each member of the population has an equal chance of being selected.
- Example: selecting UNLV students by randomizing from an alphabetized registrar list.
- A good sample size is not a magic number; what matters is randomization and representativeness.
- Common questions on sample size
- For a population as large as US voters, a sample around 1,500 respondents (as in many Gallup polls) can yield representative results if the sampling is random and coverage is adequate.
- Population coverage and practical constraints
- Some populations are hard to access (e.g., LGBT individuals with Alzheimer’s in a local region); random sampling may be impractical, requiring alternative methods or targeted sampling.
- Random vs non-random samples
- Random sampling helps avoid skewing results by including diverse ages, races, education levels, etc.
- Non-random samples may bias results toward particular subgroups (e.g., college freshmen, a specific demographic).
- The relationship between research question, population, and data collection
- Your research question helps determine who your population is and whom you should survey or interview.
- The design of data collection methods (survey vs interview) should align with your population and research goals.
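The sample-size point above can be made concrete: for a simple random sample, the rough 95% margin of error on a proportion is about 1/√n, almost regardless of population size. A sketch (the population list is a made-up stand-in for a real sampling frame):

```python
import math
import random

def margin_of_error(n):
    """Approximate 95% margin of error for a proportion from a simple random
    sample of size n (worst case p = 0.5): 1.96 * sqrt(0.25 / n) ~ 1/sqrt(n)."""
    return 1.96 * math.sqrt(0.25 / n)

# A Gallup-style sample of ~1,500 gives roughly a 2.5-point margin of error,
# whether the population is 1 million or 250 million voters.
print(round(margin_of_error(1500) * 100, 1))  # ≈ 2.5 percentage points

# Simple random sampling: every member of the (hypothetical) population list
# has the same chance of selection.
population = list(range(10_000))          # stand-in for a sampling frame
sample = random.sample(population, 1500)  # each unit equally likely, no repeats
```

This is why a few thousand respondents can represent millions of voters: the math depends on the sample being random, not on sampling a fixed fraction of the population.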
Surveys (Quantitative)
- What surveys are
- Data collected via questionnaires, typically closed-ended rather than open-ended.
- Strengths: good for measuring opinions/attitudes; scalable to large populations.
- Weaknesses: weaker at measuring actual behavior; vulnerable to social desirability bias (respondents tell you what they think is socially acceptable rather than what they actually do).
- Social desirability bias
- Respondents may overreport desirable behaviors (e.g., gym attendance) or underreport undesirable ones.
- Classic problem: LaPiere’s study (1930s)
- LaPiere traveled with a Chinese couple, visiting 251 hotels and restaurants to test discrimination.
- In actual practice, nearly all of the establishments served the couple; yet when surveyed, the overwhelming majority said they would not serve Chinese guests, revealing a gap between stated attitudes and actual behavior.
- The value and limits of surveys
- Great for reaching large audiences quickly and cost-effectively; less effective for predicting precise behaviors.
- Measuring specific population samples
- Gallup polls illustrate using large, repeated samples to forecast electoral outcomes, though not with perfect certainty due to turnout variance and sample bias.
- Target population and random sampling in surveys
- A survey should target a defined population (e.g., all US eligible voters) and use a random sample to ensure equal probability of selection.
- Practical considerations when designing surveys
- If studying a particular niche population (e.g., a small, hard-to-reach group), surveys may be less suitable; alternative methods may be needed.
- Relationship to the research process
- Survey design comes at the data-collection stage but is shaped by the earlier steps: research question, theory, literature review, and hypothesis.
Experiments (Quantitative)
- What experiments are
- Involve manipulating one or more independent variables to observe effects on a dependent variable.
- Two main types: field experiments (in the natural environment) and lab-based experiments (controlled, artificial settings).
- Field experiments
- Conducted in real-world settings (e.g., sending resumes to employers and observing callback rates in actual recruitment environments).
- Pros: higher external validity; cons: less control over confounding variables.
- Lab-based experiments
- Conducted in controlled environments (classrooms, labs) with tighter control over variables.
- Pros: more precise control and measurement; cons: possible artificiality and participant awareness of being studied (Hawthorne effect).
- Key critiques
- Artificial lab settings may fail to capture real-world behavior; participants may alter behavior due to observation.
- Some scholars argue experiments mainly reveal behaviors of college freshmen (typical psychology samples) rather than general populations.
- Example: resume discrimination in field experiments
- Employers evaluated via submitted resumes with varying race-linked cues to test callback rates; results reveal bias in the field, not just in lab settings.
- Vignettes as a type of experimental method
- Short, carefully crafted hypothetical scenarios that vary on key characteristics.
- Used to test attitudes toward policies like the Affordable Care Act by controlling context (e.g., association with Obama).
- The debate on causal inference
- Experiments aim to establish causality (X causes Y) but require careful design to rule out confounds and ensure ethical conduct.
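The résumé field experiment described above ultimately comes down to comparing callback rates across the two résumé groups. A hedged sketch of that analysis, with invented counts (not the numbers from any published study):

```python
import math

# Sketch of a résumé audit analysis: compare callback rates between two groups.
# All counts below are invented for illustration.

def callback_rate(callbacks, sent):
    """Share of résumés that received an employer callback."""
    return callbacks / sent

rate_a = callback_rate(96, 1000)   # hypothetical: 96 callbacks out of 1,000 résumés
rate_b = callback_rate(64, 1000)   # hypothetical: 64 callbacks out of 1,000 résumés
gap = rate_a - rate_b

# Two-proportion z statistic: could a gap this large plausibly arise by chance?
p_pool = (96 + 64) / 2000
se = math.sqrt(p_pool * (1 - p_pool) * (1 / 1000 + 1 / 1000))
z = gap / se
print(f"callback gap: {gap:.1%}; z = {z:.2f}")  # z above ~1.96 suggests a real gap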
Secondary Data Analysis (Statistics, Archives, Unobtrusive Data)
- What it is
- Analyzing data that were collected for other purposes (e.g., historical texts, policy documents, newspaper archives).
- Benefits
- Unobtrusive; no interaction with subjects; can provide historical context and long-run perspectives.
- Limitations
- Data accuracy and quality can be difficult to verify; data may not perfectly fit your current question.
- Examples from the transcript
- Cristina Mora’s Making Hispanics: uses newspaper archives and policy texts to trace how the term Hispanic emerged historically.
- Rory McVeigh’s The Right: analyzes historical newspaper clips to study the rise and decline of a political movement across the 1910s–1930s.
- Strength: historical and contextual insight
- Particularly useful for understanding past social processes, meaning, and discourse without relying on current observations.
Ethnography (Qualitative)
- What ethnography is
- An immersive, in-depth study of social life in the subjects’ natural environment.
- Researchers engage with participants and become part of the setting to understand meanings from within (emic perspective).
- Emic perspective and thick description
- Emic perspective: understanding a culture from the inside, in its own terms.
- Thick description: describing not just actions but the meaning behind actions (e.g., why someone closes one eye, what that action signifies in context).
- Data collection in ethnography
- Fieldwork, participant observation, and direct involvement in daily life of the group studied (e.g., college classroom dynamics, social movements).
- Mundane vs. significant aspects
- Ethnography captures routine, day-to-day activities that surveys/interviews might miss, such as internal group rituals, informal gatherings, and the development of group norms.
- Process-driven nature
- Fieldwork is time-consuming and requires meticulous note-taking and later coding of data into themes.
- Field notes should be chronological (e.g., 08:20 arrival, 08:30 activity) and include who was present, what happened, where, who led, and how it compared with prior observations.
- Field notes and coding
- After fieldwork, researchers write detailed field notes and then code them to identify recurring themes.
- Coding highlights patterns and helps structure a narrative around the findings.
- Example: Eviction study by Matthew Desmond
- Focused on eight families but produced a deeply detailed account (hundreds of pages) to illustrate eviction dynamics.
- Process-driven nature and scope
- Ethnography often involves two modes:
- In-depth life histories or case studies of specific groups or events.
- Process-driven ethnography: how a culture or group develops over time (collective identity, culture formation).
- The fieldwork burden
- Ethnographers often accumulate extensive field notes (e.g., hundreds of pages) and spend substantial time coding and analyzing.
- Ethics and critique
- Ethnography is sometimes defended as offering genuine, insider perspectives, but it faces critiques about generalizability and the risk of the researcher’s influence on the observed group.
- Field notes vs. published work
- Field notes are the raw material; published ethnographies present interpreted, thick descriptions and theoretical insights.
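The note-coding step described above can be loosely mimicked in code. This toy sketch (codes, keywords, and excerpts are all invented) only illustrates the idea of grouping excerpts under recurring codes; real qualitative coding is interpretive work by the researcher, not keyword matching:

```python
# Toy sketch of coding field notes into themes. All excerpts and codes invented.
from collections import defaultdict

field_notes = [
    "08:20 arrival; members greet each other with handshake ritual",
    "08:30 informal gathering before the meeting; jokes about newcomers",
    "09:00 leader opens with the group's founding story (ritual retelling)",
    "09:45 newcomers asked to introduce themselves",
]

# A hypothetical codebook mapping each code to trigger keywords.
codebook = {
    "ritual": ["ritual", "handshake", "retelling"],
    "boundary-work": ["newcomer"],
    "informal interaction": ["informal", "jokes"],
}

# Group each note under every code whose keywords it contains.
coded = defaultdict(list)
for note in field_notes:
    for code, keywords in codebook.items():
        if any(kw in note.lower() for kw in keywords):
            coded[code].append(note)

for code, excerpts in coded.items():
    print(code, "->", len(excerpts), "excerpts")
```

The output pattern (which codes recur across many notes) is what the ethnographer then builds a narrative around.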
Interviews (Qualitative)
- What interviews are
- One-on-one conversations between a researcher and a participant, typically open-ended and flexible.
- Types of interviews
- In-depth interviews: thorough exploration of a topic.
- Semi-structured interviews: guided by an interview guide but allows the interviewer to deviate and explore unanticipated areas.
- Life history interviews: focus on the respondent’s entire life course.
- Interview guides and flexibility
- An interview guide provides a baseline set of questions.
- Semi-structured interviews allow interviewers to adapt questions based on the interviewee’s responses.
- Data collection and transcription
- Often recorded (with consent) and transcribed for coding and analysis.
- Coding and analysis
- After transcription, researchers code the transcripts to identify themes and patterns.
- A typical qualitative project might conduct around 40 interviews, requiring substantial transcription and coding time.
- Strengths of interviews
- Rich, detailed accounts that explain why people hold certain views or engage in certain behaviors.
- Better able to capture motivations, meanings, and explanations than surveys.
- Limitations
- Interviews are time-consuming and labor-intensive; findings may not be easily generalizable.
- Attitude-behavior link and fallacy
- Surveys typically measure attitudes rather than behaviors; assuming stated attitudes translate directly into behavior is the attitude-behavior fallacy.
- Interviews can provide a nuanced understanding of the reasons behind behaviors and offer deeper insights into decision processes.
- When to use interviews
- Particularly useful for past experiences, sensitive topics (e.g., sex life), or complex decision-making processes where direct observation is not possible.
Cross-Cutting Themes and Practical Implications
- The methodological spectrum in sociology includes: surveys (quantitative), experiments (quantitative), secondary data analysis (quantitative/qualitative), ethnography (qualitative), and interviews (qualitative).
- Surveys and experiments are typically quantitative; ethnography and interviews are qualitative; secondary data analysis can straddle both approaches.
- The choice of method should align with the research question, theory, and the type of data needed to answer it.
- An effective research plan often integrates multiple methods to triangulate findings and address potential biases.
- Ethical and practical considerations
- Be mindful of biases (e.g., social desirability in surveys, observer effects in ethnography).
- Consider validity, reliability, and generalizability when interpreting results.
- Summary of key ideas from the lecture
- Hypotheses are tested through a range of methods, with careful attention to operational definitions and measurement.
- Literature reviews inform theory development and help justify research questions and methods.
- Different data collection methods offer different strengths and weaknesses; no single method is universally best.
- Ethnography emphasizes depth, context, and meaning; surveys emphasize breadth; experiments emphasize causality under controlled conditions; secondary data analysis emphasizes unobtrusiveness and historical context.
- Independent vs. Dependent Variables
- Conceptual relation: If X is the independent (causal) variable and Y is the dependent (effect) variable, a simple model can be written as
Y = β0 + β1·X + ε
- Interpretation: changing X is associated with changes in Y, all else equal.
- Random Sampling Probability (conceptual)
- In a simple random sample drawn from a population of size N, each unit has an equal chance of selection; on any single draw,
P(select unit i) = 1/N for i = 1, 2, ..., N
- Example of a research question with a measurable outcome
- If studying how age affects the likelihood of severe COVID outcomes, you might model or test the relationship with a regression: Outcome = β0 + β1·Age + ε
- Conceptual distinction in measurement
- Operational definitions ensure concepts are measured, e.g., bias is operationalized as race-based hiring callbacks; attitude might be measured via survey responses; actual behavior might be measured via observed actions in the field.
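The equal-probability claim for random sampling can be checked by simulation: draw one unit at random many times and confirm each unit appears about 1/N of the time (the population size and draw count below are arbitrary choices):

```python
# Empirical check that random selection gives each unit an equal chance:
# draw one unit many times and compare observed frequencies to 1/N.
import random
from collections import Counter

random.seed(0)          # fixed seed so the check is reproducible
N = 10                  # small hypothetical population
draws = 100_000

counts = Counter(random.randrange(N) for _ in range(draws))
expected = draws / N    # 1/N of all draws per unit

for unit in range(N):
    # each unit should be drawn within ~5% of the expected count
    assert abs(counts[unit] - expected) / expected < 0.05
print("each of the", N, "units was drawn roughly", int(expected), "times")
```

The same logic is what justifies calling a sample "representative": no unit is systematically more likely to be included than any other.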
Notes on Major Studies and Examples Mentioned (from the transcript)
- LaPiere’s 1930s study on discrimination against a Chinese couple showed a discrepancy between stated attitudes (declared refusal to serve) and behavior (the couple was actually served in practice).
- The bias study in the workplace used a clear operationalization of race-based bias in hiring and included a timeline of publication and revision (2002/2004 initial publication and a 2017 update).
- The “Making Hispanics” study by Cristina Mora used secondary sources (newspapers, policies, op-eds) to trace the historical emergence of the term Hispanic, illustrating unobtrusive secondary data analysis.
- Rory McVeigh’s work on the Ku Klux Klan (The Right) used historical newspaper clips to understand the rise and fall of a political movement, again illustrating the value of secondary data analysis for historical questions.
- The Affordable Care Act vignette example demonstrates how vignettes reveal what aspects of policy framing influence attitudes (e.g., whether policy is associated with Obama).
- Eviction study by Matthew Desmond is cited as an example of in-depth ethnography focused on a small number of families to yield rich, nuanced understanding of broader social processes.
How to Use These Notes for Exam Preparation
- Be able to define and contrast the main research methods: surveys, experiments (field vs. lab), ethnography, interviews, and secondary data analysis.
- Understand what operational definitions are and why they matter for validity and reliability.
- Be able to explain independent vs. dependent variables and give concrete examples.
- Explain the role of literature reviews and how they help justify a new study.
- Be able to discuss the strengths and limitations of each method and give real-world examples from the notes (e.g., LaPiere, Gallup polls, ethnography of eviction).
- Recognize when a method is appropriate given the research question and population of interest, including when random sampling may be impractical.
- Recall key terms: thick description, emic perspective, field notes, coding, random sampling, population, sample, social desirability bias, vignettes, unobtrusive data, and the attitude-behavior fallacy.