Definition (Independent Variable, IV): A variable that is manipulated to test its effect on the dependent variable.
Conditions: At least two conditions; typically includes an experimental condition and a control condition.
Experimental condition: Group exposed to the IV (e.g., doodling group in Andrade).
Control condition: Group not exposed to the IV for comparison (e.g., control group in Andrade did not doodle).
Allocation: Participants can be assigned to conditions by researcher choice or by random allocation.
Advantages: Random allocation reduces researcher bias and increases validity because participant characteristics are spread across conditions by chance.
Weaknesses: Even with random allocation, chance individual differences between conditions (e.g., in intelligence) can remain and affect validity.
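Random allocation, as described above, can be sketched in a few lines of Python. The participant IDs and the two-condition split here are hypothetical illustrations, not taken from Andrade's study:

```python
import random

def randomly_allocate(participants, conditions=("experimental", "control")):
    """Shuffle participants, then deal them round-robin into the conditions,
    so every participant has an equal chance of ending up in either group."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

# Hypothetical participant pool of 20
groups = randomly_allocate([f"P{n}" for n in range(1, 21)])
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```

Note that chance can still produce uneven groups on unmeasured traits (the weakness above); shuffling only equalizes groups on average.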
Definition (Dependent Variable, DV): The variable being measured to determine the impact of the IV.
Example: In Andrade, the DV was the participants' scores on monitoring and recall tasks.
Definition (Confounding Variables): Variables other than the IV that may affect the DV, obscuring the effect of the IV and lowering validity.
Types:
Participant variables: Characteristics like personality, age, gender, intelligence, and memory.
Situational variables: Features of the study environment, such as lighting and noise.
Uncontrollable variables: Confounding variables that cannot be eliminated from the study, negatively impacting validity.
Definition (Controls): Measures taken to minimize or eliminate confounding variables in a study.
Example: Splitting participants into the AC condition and a control condition to compare effects on concentration.
Importance: Controlling variables enhances the validity and reliability of the results, and high levels of control standardize procedures, improving replicability.
Definition (Validity): How accurately a study measures what it is intended to measure.
Experiment validity hinges on ensuring that only the IV affects the DV, without confounding variables interfering.
Demand Characteristics: Participants knowing the study's true aim might change behavior, reducing validity.
Socially Desirable Responses: Participants may answer in a way they think is socially acceptable rather than truthfully.
Double-blind Technique: Neither participants nor observers know which condition participants are in to avoid demand characteristics and researcher bias.
Definition (Ecological Validity): The degree to which a study's findings can be generalized to real-life settings.
Higher ecological validity arises from natural settings (field experiments), while artificial lab settings generally reduce it.
Mundane Realism: The similarity of tasks in a study to everyday tasks (e.g., helping a person vs. giving an electric shock).
Definition (Construct Validity): Whether a measure genuinely reflects the underlying trait it claims to measure.
Definition (Criterion Validity): How well one measure predicts or agrees with another variable (the criterion).
Types:
Predictive Validity: Measure's ability to predict future outcomes (e.g., personality tests predicting job performance).
Concurrent Validity: Measure correlating well with criteria assessed simultaneously (e.g., depression scale correlating with a clinical diagnosis).
Definition (Reliability): The consistency of a study's results; high levels of control make procedures replicable and results repeatable.
Standardization is critical, ensuring uniform procedures across all participants.
Inter-Rater Reliability: Consistency between two observers rating the same behavior.
Inter-Observer Reliability: Consistency between observers recording (rather than rating) the same behaviors.
Test-Retest Reliability: Checking for consistency of a questionnaire or task over separate occasions.
Split-Half Method: Assessing questionnaire consistency by splitting it into halves and comparing results.
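The split-half method above amounts to correlating scores from two halves of the same questionnaire. A minimal sketch, with hypothetical item scores and Pearson's r computed by hand (a strong positive correlation between halves suggests internal consistency):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)  # assumes neither half has zero variance

def split_half(responses):
    """responses: one list of item scores per participant.
    Split each participant's items into odd/even halves and
    correlate the two half-totals across participants."""
    first_half = [sum(r[::2]) for r in responses]
    second_half = [sum(r[1::2]) for r in responses]
    return pearson(first_half, second_half)

# Hypothetical questionnaire: 4 participants, 6 items each
scores = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 4, 5],
]
print(round(split_half(scores), 2))  # close to 1 for a consistent questionnaire
```

The same `pearson` helper could be reused for test-retest reliability by correlating scores from two separate occasions instead of two halves.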
Definition (Generalizability): The extent to which study findings can be applied to a broader population.
Representative Sample: Larger, more diverse samples yield higher generalizability than small or homogeneous samples.
Example: Findings from a study of 10 women generalize less well than findings from 5,000 participants of mixed genders, ages, and regions.
Definition (Order Effects): Changes in participant behavior caused by the order in which tasks are completed.
Types:
Practice Effects: Improvement on repeated tasks due to growing familiarity or practice.
Fatigue Effects: Decreased performance from tiredness or boredom.
Solutions:
Randomization: Random distribution of task sequences.
Counterbalancing: Balancing order (AB/BA) to mitigate order effects.
Independent Measures Design: Prevents exposure as different participants engage in each condition.
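Counterbalancing from the solutions above can be sketched as alternating participants between AB and BA orders, so any practice or fatigue effect falls equally on both conditions. The participant IDs and condition labels are hypothetical:

```python
from itertools import cycle

def counterbalance(participants, orders=(("A", "B"), ("B", "A"))):
    """Assign task orders round-robin: P1 gets AB, P2 gets BA, P3 gets AB, ...
    With an even number of participants, each order is used equally often."""
    return {p: order for p, order in zip(participants, cycle(orders))}

plan = counterbalance([f"P{n}" for n in range(1, 5)])
print(plan)  # P1/P3 do A then B; P2/P4 do B then A
```

Randomization (the other solution listed) would instead shuffle each participant's task sequence independently, e.g. with `random.shuffle`.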
Quantitative data: Objective, numerical data suited to comparisons, but it offers little insight into why outcomes occur.
Qualitative data: Detailed, subjective, behavioral insights that help explain participant actions.
Deception: Ethical concerns regarding misleading participants.
Informed Consent: Required permission from participants, with clarity on study purpose.
Right to Withdraw: Participants may exit at any time.
Protection from Harm: Ensuring physical and psychological safety.
Confidentiality: Safeguarding personal data.
Debriefing: Explaining the actual purpose post-study to alleviate any distress caused by deception.
Numbers: Using the fewest animals necessary for valid results.
Replacement: Using alternatives to animal testing where feasible.
Pain and Distress: Minimizing animal suffering during research.
Reward and Housing Considerations: Ensuring enrichment and appropriate social conditions.
Experiments: types include Lab, Field, and Natural.
Observations: types include Overt/Covert, Participant/Non-Participant, Structured/Unstructured, Naturalistic/Controlled.
Self-reports: types include Interviews (Structured, Unstructured, Semi-Structured) and Questionnaires.
Correlations: can be positive or negative; interpret them in the context of the findings.
Time frames: differences in how participants are observed across time frames.
Independent Measures/Groups Design
Repeated Measures/Groups Design
Matched Pairs Design
Opportunity Sample (convenient, but may lead to bias).
Volunteer Sample (self-selecting and ethical, but time-consuming and prone to volunteer bias).
Random Sample (gives every member an equal chance of selection, though the resulting sample may still be unrepresentative by chance).
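The difference between the sampling techniques above can be sketched in Python. The population of student IDs is hypothetical; the point is that a random sample gives every member an equal chance, while an opportunity sample just takes whoever is most readily available:

```python
import random

# Hypothetical sampling frame of 100 students
population = [f"student_{n}" for n in range(1, 101)]

def random_sample(pop, n):
    """Every member has an equal chance of being selected."""
    return random.sample(pop, n)

def opportunity_sample(pop, n):
    """Take whoever is most readily available (here: the first n).
    Convenient, but systematically biased toward whoever is 'nearby'."""
    return pop[:n]

print(random_sample(population, 10))       # 10 students drawn by chance
print(opportunity_sample(population, 10))  # always the same first 10 students
```

A volunteer sample has no simple code analogue, since it depends on who chooses to respond, which is exactly why it is prone to self-selection bias.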