
Research Methods Review Flashcards

Chapter 7: Experimental Design

  • Requirements of a True Experimental Design:
    • Random assignment of participants to experimental and comparison groups.
    • Manipulation of the independent variable by the researcher.
    • Measurement of the dependent variable in at least two groups, allowing causal comparison.
  • Criteria for Establishing Causality in True Experiments:
    • Covariation: There must be a correlation between the independent and dependent variables.
    • Temporal Order: The independent variable must precede the dependent variable.
    • Non-spuriousness: Alternative explanations for the relationship must be eliminated.
  • Quasi-Experimental Designs:
    • These designs resemble experimental designs but lack random assignment.
    • Major types include:
      • Nonequivalent control group designs
      • Before-and-after designs
      • Time series designs
  • Statistical Controls in Quasi-Experimental Designs:
    • Used to account for pre-existing differences between groups when random assignment is not possible.
    • Techniques may include analysis of covariance (ANCOVA) or propensity score matching.
  • Threats to Causal (Internal) Validity:
    • History: Events occurring during the study that could affect the dependent variable.
    • Maturation: Natural changes over time in participants.
    • Testing: The effect of taking a test on subsequent administrations of the same test.
    • Instrumentation: Changes in the measurement instrument.
    • Regression to the mean: The tendency for extreme scores to move closer to the average upon retesting.
    • Selection: Differences between groups due to non-random assignment.
    • Attrition: Loss of participants during the study.
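The ANCOVA-style statistical control noted above can be illustrated with a small sketch: the raw treatment-control gap on the outcome is corrected using the pooled within-group slope of the outcome on a pretest covariate (the standard adjusted-means formula). All data below are hypothetical.

```python
# ANCOVA-style covariate adjustment for a nonequivalent control group.
# Adjusted gap = (mean(y_t) - mean(y_c)) - b * (mean(x_t) - mean(x_c)),
# where b is the pooled within-group slope of y (posttest) on x (pretest).
from statistics import mean

def pooled_slope(groups):
    """Pooled within-group slope of y on x across the given groups."""
    num = den = 0.0
    for xs, ys in groups:
        mx, my = mean(xs), mean(ys)
        num += sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den += sum((x - mx) ** 2 for x in xs)
    return num / den

def adjusted_difference(x_t, y_t, x_c, y_c):
    """Treatment-control outcome gap, adjusted for pretest differences."""
    b = pooled_slope([(x_t, y_t), (x_c, y_c)])
    raw_gap = mean(y_t) - mean(y_c)
    return raw_gap - b * (mean(x_t) - mean(x_c))

# Hypothetical pretest (x) and posttest (y) scores:
x_t, y_t = [60, 70, 80, 90], [68, 77, 85, 94]   # treatment group
x_c, y_c = [50, 60, 70, 80], [52, 61, 70, 79]   # control group
print(round(adjusted_difference(x_t, y_t, x_c, y_c), 2))
```

The raw gap here is 15.5 points, but part of that reflects the treatment group's higher pretest scores; the adjustment removes the portion predicted by the pretest.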

Chapter 8: Survey Research

  • Definition of Survey Research:
    • A method of gathering information from a sample of individuals using a standardized questionnaire.
  • Features Making Survey Research Popular:
    • Versatility: Can be used to study a wide range of topics.
    • Efficiency: Data can be collected from large samples at relatively low cost.
    • Generalizability: Results from a probability sample can be generalized to the population from which the sample was drawn.
  • Rules and Pitfalls in Writing Clear and Meaningful Questions:
    • Use clear and concise language.
    • Avoid double-barreled questions (asking two things at once).
    • Avoid leading questions (that suggest a desired answer).
    • Avoid negative questions (that use double negatives).
    • Ensure questions are culturally appropriate.
  • Poorly Worded Survey Questions and Measurement Validity:
    • Can lead to misunderstanding, misinterpretation, and inaccurate responses.
    • Such questions threaten the measurement validity of the survey results.
  • Importance of Questionnaire Organization:
    • The order of questions can influence responses.
    • Start with easy, non-threatening questions.
    • Group related questions together.
    • Place sensitive questions later in the survey.
  • Basic Survey Designs:
    • Cross-sectional: Data collected at one point in time.
    • Longitudinal: Data collected at multiple points in time; three subtypes:
      • Trend: Data collected from different samples at different times.
      • Panel: Data collected from the same sample at different times.
      • Cohort: Data collected over time from samples drawn from a group sharing a common characteristic or experience (e.g., a birth year).
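Generalizability can be quantified: under simple random sampling, a sample proportion's margin of error shrinks with the square root of the sample size. A minimal sketch, using the conventional 95% confidence level and the worst-case proportion p = 0.5 (all numbers illustrative):

```python
# Margin of error for a sample proportion under simple random sampling:
# half-width of the ~95% confidence interval, z * sqrt(p(1-p)/n).
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p with sample size n."""
    return z * sqrt(p * (1 - p) / n)

# Quadrupling the sample size halves the margin of error:
for n in (100, 400, 1600):
    print(n, round(margin_of_error(0.5, n), 3))
```

This is why national surveys of roughly 1,000-1,500 respondents can estimate population attitudes within a few percentage points.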

Chapter 9: Qualitative Research

  • Origins of Qualitative Research:
    • Rooted in anthropology, sociology, and other social sciences.
    • Focuses on understanding the meaning and interpretation of social phenomena.
  • Types of Ethnographic Study:
    • Realist Ethnography: Objective account of a culture or group.
    • Critical Ethnography: Examines power relations and social inequalities.
    • Autoethnography: Researcher's personal experiences are central to the analysis.
  • Participation and Observation Roles in Field Research:
    • Complete Observer: Researcher does not participate in the activities of the group being studied.
    • Participant Observer: Researcher participates in the activities of the group being studied.
    • Complete Participant: Researcher fully integrates into the group and their identity as a researcher may be concealed.
  • Process of Participant Observation:
    • Entering the Field: Gaining access to the research site and establishing rapport with participants.
    • Developing & Maintaining Relationships: Building trust and maintaining ethical boundaries.
    • Sampling People & Events: Selecting participants and events that are relevant to the research question.
    • Taking Notes: Recording observations, interviews, and reflections.
    • Managing Personal Dimensions: Addressing emotional and ethical challenges.
  • Intensive Interviewing:
    • In-depth, open-ended interviews with a small number of participants.
    • Aims to understand participants' experiences, perspectives, and meanings.
  • Focus Groups:
    • Small group discussions facilitated by a moderator.
    • Used to explore attitudes, beliefs, and experiences related to a specific topic.
    • Samples are typically purposive, aiming for representation of key subgroups.

Chapter 10: Secondary Data and Content Analysis

  • Definition of Secondary Data:
    • Data that were collected by someone else for a different purpose.
    • Examples include government statistics, archival records, and survey data.
  • Factors Influencing the Quality of Secondary Data Analysis:
    • Data quality: Accuracy, completeness, and reliability of the data.
    • Data relevance: Appropriateness of the data for the research question.
    • Data access: Availability and accessibility of the data.
  • Ethical Issues in Secondary Data Analysis:
    • Privacy: Protecting the confidentiality of individuals.
    • Informed consent: Ensuring that participants have given consent for their data to be used.
    • Data ownership: Respecting the rights of the data owners.
  • Strengths and Weaknesses of Using Secondary Data:
    • Strengths: Cost-effective, time-saving, allows for large-scale analysis.
    • Weaknesses: Data may not be relevant, data quality may be questionable, limited control over data collection.
  • Benefits and Drawbacks of Using Historical Research:
    • Benefits: Provides insights into past events, trends, and patterns.
    • Drawbacks: Data may be incomplete, biased, or difficult to access.
  • Types of Comparative Research:
    • Cross-national: Comparing data across different countries.
    • Cross-cultural: Comparing data across different cultures.
    • Cross-temporal: Comparing data across different time periods.
  • Definition of Content Analysis:
    • A systematic method for analyzing the content of communication.
    • Typically applied to texts, speech, broadcasts, or visual images.
  • Difficulties in Developing Reliable and Valid Coding Procedures in Content Analysis:
    • Defining coding categories: Ensuring that categories are clear, mutually exclusive, and exhaustive.
    • Training coders: Ensuring that coders understand the coding scheme and apply it consistently.
    • Assessing inter-coder reliability: Measuring the extent to which different coders agree on the coding of the same content.
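Inter-coder reliability is often summarized with Cohen's kappa, which corrects raw percent agreement for the agreement two coders would reach by chance. A minimal sketch with hypothetical category codes from two coders:

```python
# Cohen's kappa for two coders assigning one category per item:
# kappa = (observed agreement - expected chance agreement) / (1 - expected).
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: product of each category's marginal proportions.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

coder_a = ["pos", "pos", "neg", "neutral", "pos", "neg"]
coder_b = ["pos", "neg", "neg", "neutral", "pos", "neg"]
print(round(cohens_kappa(coder_a, coder_b), 3))
```

Here the coders agree on 5 of 6 items (83%), but kappa is lower (about 0.74) because some of that agreement would occur by chance alone.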

Chapter 12: Evaluation Research and Policy Analysis

  • Definition and Purpose of Evaluation Research:
    • The systematic assessment of the design, implementation, or outcomes of a program or policy.
    • Purpose is to provide information for decision-making and improvement.
  • Inputs, Outputs, and Outcomes in Evaluation Research:
    • Inputs: Resources invested in a program (e.g., money, staff, materials).
    • Outputs: Direct products or services delivered by a program (e.g., number of clients served, number of workshops conducted).
    • Outcomes: Changes that occur as a result of the program (e.g., improved health, increased employment).
  • Types of Evaluation Research:
    • Needs Assessment: Identifies the needs of a population.
    • Evaluability Assessment: Determines whether a program can be evaluated.
    • Process Evaluation: Examines how a program is being implemented.
    • Impact Evaluation: Assesses the effects of a program on outcomes.
    • Efficiency Evaluation: Compares the costs and benefits of a program.
  • Importance of Design Decisions:
    • The choice of evaluation design can affect the validity and reliability of the findings.
    • Designs may include experimental, quasi-experimental, and non-experimental approaches.
  • Goal of Policy Research:
    • To provide evidence-based information to inform policy decisions.
  • "Evidence-Based" Policies:
    • Policies that are based on the best available research evidence.
  • Challenges of Implementing True Experimental Designs in Evaluation Research:
    • Lack of randomization: It is often impractical or ethically problematic to randomly assign individuals or groups to treatment and control conditions (e.g., withholding a potentially beneficial program from a control group).
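The efficiency evaluation listed above can be as simple as a cost-effectiveness ratio: total program cost divided by units of outcome achieved, compared across programs. All figures and program names below are hypothetical:

```python
# Cost-effectiveness comparison of two programs:
# cost per unit of outcome (here, dollars per client who found work).
def cost_effectiveness(total_cost, outcome_units):
    """Cost per unit of outcome; lower is more efficient."""
    return total_cost / outcome_units

programs = {
    "Job Training A": (120_000, 60),  # (total cost, clients employed)
    "Job Training B": (90_000, 36),
}
for name, (cost, outcomes) in programs.items():
    print(name, cost_effectiveness(cost, outcomes))
```

Program A costs more overall but delivers each outcome more cheaply ($2,000 vs. $2,500 per client employed), illustrating why efficiency evaluation looks at ratios rather than totals.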