RESEARCH METHODS

VARIABLES AND CONTROLS

Independent Variable (IV)

  • Definition: A variable that is manipulated to test its effect on the dependent variable.

  • Conditions: At least two conditions; typically includes an experimental condition and a control condition.

    • Experimental condition: Group exposed to the IV (e.g., doodling group in Andrade).

    • Control condition: Group not exposed to the IV for comparison (e.g., control group in Andrade did not doodle).

  • Allocation: Participants can be assigned to conditions by researcher choice or by random allocation.

    • Advantage of random allocation: Reduces researcher bias, since every participant has an equal chance of being placed in either condition.

    • Weakness: Random allocation may still leave individual differences (e.g., intelligence) unevenly spread between conditions, lowering validity.
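The allocation procedure above can be sketched in a few lines of Python. This is a minimal illustration, not part of any cited study; the function name `randomly_allocate` and the condition labels are invented for the example.

```python
import random

def randomly_allocate(participants, conditions=("experimental", "control")):
    """Shuffle participants, then deal them into conditions in turn,
    so every participant has an equal chance of each condition."""
    pool = list(participants)
    random.shuffle(pool)  # the random step that reduces researcher bias
    groups = {condition: [] for condition in conditions}
    for i, participant in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

# Example: 20 participants split evenly into the two conditions.
groups = randomly_allocate(range(1, 21))
```

Note that even a perfectly random split cannot guarantee the two groups are matched on traits such as intelligence — which is exactly the weakness noted above.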

Dependent Variable (DV)

  • Definition: The variable being measured to determine the impact of the IV.

  • Example: In Andrade, the DV was the participants' scores on monitoring and recall tasks.

Confounding Variables

  • Definition: Variables other than the IV that may affect the DV, introducing confusion and lowering validity.

  • Types:

    • Participant variables: Characteristics like personality, age, gender, intelligence, and memory.

    • Situational variables: Conditions inherent to the study environment such as lighting, noise, etc.

    • Uncontrollable variables: Confounding variables that cannot be eliminated from the study, negatively impacting validity.

Controls

  • Definition: Measures taken to minimize or eliminate confounding variables in a study.

  • Example: Splitting participants into an experimental condition and a control condition (e.g., doodling vs. not doodling in Andrade) to compare effects on concentration.

  • Importance: Controlling variables enhances the validity and reliability of the results; high levels of control also standardize procedures, which enhances replicability.

VALIDITY AND ITS TYPES

Validity

  • Definition: Refers to how accurately a study measures what it is intended to measure.

  • Experiment validity hinges on ensuring that only the IV affects the DV, without confounding variables interfering.

  • Demand Characteristics: Participants knowing the study's true aim might change behavior, reducing validity.

  • Socially Desirable Responses: Participants may answer in a way they think is socially acceptable rather than truthfully.

Enhancing Validity

  • Double-blind Technique: Neither participants nor observers know which condition participants are in to avoid demand characteristics and researcher bias.

Ecological Validity

  • Definition: The degree to which a study's findings can be generalized to real-life settings.

  • Higher ecological validity arises from natural settings (field experiments), while artificial lab settings generally reduce it.

  • Mundane Realism: The similarity of tasks in a study to everyday tasks (e.g., helping a person vs. giving an electric shock).

Temporal Validity

  • Definition: The extent to which a study's findings remain valid over time and can be generalized to other historical periods.

Criterion Validity

  • Definition: Measures how well one variable predicts another.

  • Types:

    • Predictive Validity: Measure's ability to predict future outcomes (e.g., personality tests predicting job performance).

    • Concurrent Validity: Measure correlating well with criteria assessed simultaneously (e.g., depression scale correlating with a clinical diagnosis).

RELIABILITY AND ITS TYPES

Reliability

  • Definition: The consistency of a study or measure; high levels of control and standardization make procedures replicable and results repeatable.

  • Standardization is critical, ensuring uniform procedures across all participants.

Types of Reliability

  1. Inter-Rater Reliability: Consistency between two or more raters scoring (rating) the same behavior.

  2. Inter-Observer Reliability: Consistency between observers recording the same behaviors, rather than scoring them.

  3. Test-Retest Reliability: Checking for consistency of a questionnaire or task over separate occasions.

  4. Split-Half Method: Assessing questionnaire consistency by splitting it into halves and comparing results.
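The split-half method above can be sketched in Python: split each participant's questionnaire items into odd- and even-numbered halves, total each half, and correlate the two sets of totals. This is an illustrative sketch with invented example data; the function names are not from the source.

```python
def split_half_scores(item_scores):
    """Split one participant's item scores into odd- and
    even-numbered halves and return each half's total."""
    odd = sum(item_scores[0::2])   # items 1, 3, 5, ...
    even = sum(item_scores[1::2])  # items 2, 4, 6, ...
    return odd, even

def pearson_r(xs, ys):
    """Pearson correlation between the two sets of half-totals;
    a value near +1 suggests the questionnaire is internally consistent."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical item scores for three participants on a 4-item scale.
responses = [[4, 5, 4, 5], [2, 1, 2, 2], [3, 3, 4, 3]]
halves = [split_half_scores(r) for r in responses]
r = pearson_r([o for o, _ in halves], [e for _, e in halves])
```

A high correlation between halves indicates that all items measure the same underlying construct; a low one suggests inconsistent items.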

GENERALISABILITY

  • Definition: The extent to which study findings can be applied to a broader population.

  • Representative Sample: Larger, diverse samples yield higher generalizability compared to small or homogeneous samples.

  • Example: A study of 10 women from one town generalizes far less well than one of 5,000 people of varied genders and ages drawn from different regions.

ORDER EFFECTS

  • Definition: Changes in participant behavior due to task order.

  • Types:

    1. Practice Effects: Improvement on repeated tasks due to familiarity with the task or memorization of the material.

    2. Fatigue Effects: Decreased performance from tiredness or boredom.

  • Solutions:

    • Randomization: Random distribution of task sequences.

    • Counterbalancing: Balancing order (AB/BA) to mitigate order effects.

    • Independent Measures Design: Avoids order effects entirely, because each participant completes only one condition.
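The AB/BA counterbalancing solution above can be sketched in Python: alternate participants between the two task orders so each order occurs equally often and practice/fatigue effects cancel out across the sample. The function name and labels are invented for the example.

```python
def counterbalance(participants):
    """Assign alternating participants to AB or BA task orders,
    so each order appears (as near as possible) equally often."""
    orders = (("A", "B"), ("B", "A"))
    return {p: orders[i % 2] for i, p in enumerate(participants)}

# Example: four participants, two per order.
schedule = counterbalance(["p1", "p2", "p3", "p4"])
```

Counterbalancing does not remove order effects for any individual participant; it balances them across conditions so they do not systematically favor one condition.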

DATA TYPES

Quantitative Data

  • Objective, numerical data suitable for comparisons, but it offers little insight into why outcomes occur.

Qualitative Data

  • Detailed, subjective, behavioral insights explaining participant actions.

ETHICS

Ethical Guidelines for Humans

  1. Deception: Misleading participants should be avoided where possible; when used, it must be justified and followed by a debrief.

  2. Informed Consent: Required permission from participants, with clarity on study purpose.

  3. Right to Withdraw: Participants may exit at any time.

  4. Protection from Harm: Ensuring physical and psychological safety.

  5. Confidentiality: Safeguarding personal data.

  6. Debriefing: Explaining the actual purpose post-study to alleviate any distress caused by deception.

Ethical Guidelines for Animals

  1. Numbers: Using the fewest animals necessary for valid results.

  2. Replacement: Using alternatives to animal testing where feasible.

  3. Pain and Distress: Minimizing animal suffering during research.

  4. Reward and Housing Considerations: Ensuring enrichment and appropriate social conditions.

RESEARCH METHOD TECHNIQUES

Experiments

  • Types: Lab, Field, Natural.

Observations

  • Types: Overt/Covert, Participant/Non-Participant, Structured/Unstructured, Naturalistic/Controlled.

Self-Report Methods

  • Types: Interviews (Structured, Unstructured, Semi-Structured), Questionnaires.

Case Studies

Correlations

  • Positive correlation: Both co-variables increase together.

  • Negative correlation: One co-variable increases as the other decreases.

  • Note: A correlation shows a relationship between co-variables, not cause and effect.

Longitudinal and Cross-Sectional Studies

  • Longitudinal studies follow the same participants over an extended period; cross-sectional studies compare different groups at a single point in time.

Experimental Design

  1. Independent Measures/Groups Design

  2. Repeated Measures/Groups Design

  3. Matched Pairs Design

SAMPLING METHODS

Types

  1. Opportunity Sample (convenient, but may lead to bias).

  2. Volunteer Sample (self-selecting, ethical, but time-consuming).

  3. Random Sample (every member of the target population has an equal chance of selection; reduces sampling bias, but is time-consuming and requires a complete list of the population).
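Random sampling, as described above, can be sketched directly with Python's standard library; the wrapper below is illustrative only.

```python
import random

def draw_random_sample(population, n):
    """Draw n members without replacement, giving every member of the
    target population an equal chance of selection."""
    return random.sample(list(population), n)

# Example: sample 10 people from a population register of 100.
sample = draw_random_sample(range(100), 10)
```

In practice the hard part is the sampling frame: without a complete, accurate list of the population, a truly random sample cannot be drawn.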
