Notes on Scientific Method & Experimental Design (No Single Title)
Scientific Method: Key Steps and Concepts
The video emphasizes the scientific method as a framework for logically and systematically exploring questions to find answers. Core steps discussed include: forming a question, researching background information, making a hypothesis, doing an experiment, analyzing data, and communicating the results.
The process is presented as iterative and non-linear in practice: scientists may circle back to earlier steps, revise hypotheses, and pursue new questions based on unexpected data. This iterative nature is a hallmark of true science.
Real-world context is used to illustrate ideas (e.g., gravity, genetics, medical trials) and to highlight the importance of replication and publication for establishing reliable knowledge.
Serendipity is acknowledged as a part of science: chance observations can lead to important discoveries when data are preserved and revisited.
The lecture stresses that a robust scientific process requires careful design, repeatable measurements, and the ability for others to reproduce results.
Formulating a Testable Question and Background Research
A scientific question must be testable: it should be something you can investigate with an experiment, not something answerable with a mere recipe or a purely descriptive account.
If initial questions are too broad, start with a general question and refine it after background research.
Example from the lecture: questions about popping boba can be made more specific (e.g., what kinds of liquids can be turned into popping boba? or what determines the size/shape of popping boba?) rather than asking how popping is made in general.
Background research involves gathering information from books, the Internet, and experts to learn what data are needed and how to design the experiment. This helps identify variables, methods, and potential confounds.
Hypothesis Construction
A hypothesis is an educated guess that connects background research to the expected outcome. It typically takes an if-then form and includes a measurable prediction.
Example from popping boba: if I use liquids that are very acidic, then the popping boba will be less spherical (based on sodium alginate’s behavior in non-acidic environments).
A hypothesis should be testable and specify what you will measure (e.g., diameter, height, or shape) to determine support for the claim.
Variables in an Experiment
Independent variable (IV): the factor deliberately changed in the experiment (the cause). Examples: acidity level of the liquid, caffeine dose.
Notation: IV is what you manipulate and observe its effect on the outcome.
Dependent variable (DV): the data or outcome measured to assess the effect of the IV (the effect). Example: reaction time, diameter of popping boba, blood pressure change.
Notation: DV is the data that depends on the IV.
Controlled variables (CV): all other conditions kept the same across experimental groups to avoid confounding effects (e.g., temperature, quantities of ingredients, time of day).
Experimental design essentials:
Include a control group that does not receive the experimental treatment (or receives a placebo) to establish a baseline.
Random assignments to groups to reduce biases related to participant characteristics (age, gender, health status).
In some studies, a double-blind design is used where neither participants nor researchers know who is receiving the real treatment, reducing bias in observation and reporting.
In some cases, a single-blind design is used where only participants are unaware of the treatment; researchers may know for safety or procedural reasons.
Practical note: experiments should be designed so that only one independent variable is tested at a time to isolate its effect.
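The one-variable-at-a-time rule above can be sketched as a quick programmatic check that two experimental groups differ in only the IV. The group dictionaries, factor names, and the `changed_factors` helper are illustrative assumptions, not from the lecture:

```python
# Sketch: verify two groups differ in exactly one factor (the IV).
# Group dictionaries and variable names are hypothetical examples.

def changed_factors(group_a: dict, group_b: dict) -> list:
    """Return the factors whose values differ between two groups."""
    return [k for k in group_a if group_a[k] != group_b[k]]

# Controlled variables (temperature, alginate amount) are identical;
# only the independent variable (liquid acidity) changes.
control = {"liquid_acidity": "neutral", "temperature_C": 22, "alginate_g": 5}
experimental = {"liquid_acidity": "acidic", "temperature_C": 22, "alginate_g": 5}

diff = changed_factors(control, experimental)
print(diff)  # only the IV should appear
assert diff == ["liquid_acidity"], "more than one variable changed!"
```

If the assertion fails, the design has a confound: any observed effect could not be attributed to a single cause.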
Experimental Controls and Placebos
Control group: a baseline group that does not receive the experimental treatment, or receives a placebo, to compare against the treated group.
Placebo: a fake treatment that resembles the real one but has no therapeutic effect; helps control for psychological effects and bias.
Random assignment: subjects are randomly placed into control or experimental groups to distribute personal characteristics evenly and reduce systematic bias.
Double-blind design: neither the participants nor the researchers know who is in which group, preventing both observer and participant bias.
Single-blind design: only the participants are unaware of group assignment; researchers know, which can introduce some bias but may be necessary for safety or logistics.
Ethical considerations: when testing drugs or treatments in humans, safety and ethical concerns can influence whether double-blind is feasible; placebo use is common in medicine but must be justified by potential benefits and minimal risks.
Bias and confounding factors: failure to randomize or blind can introduce biases; controlling for variables such as stress, other medications, or environmental factors is crucial to ensure valid results.
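Random assignment as described above can be sketched in a few lines: shuffle the participant pool, then split it into control and experimental groups. The participant labels, group sizes, and fixed seed are illustrative assumptions:

```python
# Sketch: random assignment of participants to control vs experimental groups.
# Participant names and group sizes are made up for illustration.
import random

participants = [f"subject_{i}" for i in range(20)]

rng = random.Random(42)   # seeded only so the sketch is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

control = shuffled[:10]        # receives the placebo
experimental = shuffled[10:]   # receives the real treatment

# Every participant lands in exactly one group, and the sizes are balanced.
assert len(control) == len(experimental) == 10
assert set(control) | set(experimental) == set(participants)
```

Because the split is random, characteristics like age or health status tend to distribute evenly across both groups rather than clustering in one.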
Data Collection, Observation, and Reliability
Observations vs. data: make observations as part of data collection, aiming for objective, repeatable measurements rather than subjective impressions.
Repetition for reliability: conduct multiple trials (recommended 3–5 or more) to obtain a reliable estimate of the outcome and to assess variability.
Reliability concerns: if participants show wide variability in baseline measurements or responses, the experiment may not yield meaningful conclusions.
Documentation: keep a detailed lab notebook, record methods, measurements, and observations, and consider videos, photos, or drawings to document procedures and results.
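The repeated-trials advice above can be sketched by summarizing several measurements with a mean and a standard deviation; a large spread relative to the mean signals unreliable data. The reaction-time values here are made-up illustrative numbers:

```python
# Sketch: summarizing repeated trials with mean and standard deviation.
# The five reaction times (seconds) are hypothetical data.
from statistics import mean, stdev

trials = [0.42, 0.39, 0.45, 0.41, 0.43]   # five repeated measurements

avg = mean(trials)
spread = stdev(trials)   # sample standard deviation across trials

print(f"mean = {avg:.3f} s, stdev = {spread:.3f} s")
# A stdev that is large relative to the mean suggests the experiment
# may not support a meaningful conclusion.
assert spread < 0.1 * avg   # rough reliability check for this example data
```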
Analyzing Data and Graphing with DRY MIX
The DRY MIX mnemonic helps organize graphing of variables:
D = Dependent variable (the data you measure).
R = Responding variable (another name for the DV: it responds to changes in the IV).
Y = Y-axis (the dependent variable is plotted on the Y-axis).
M = Manipulated variable (the factor you deliberately change).
I = Independent variable (another name for the manipulated variable).
X = X-axis (the independent variable is plotted on the X-axis).
Practical steps for graphing:
Identify IV (independent) and DV (dependent).
Set X-axis to the independent variable (e.g., dose of caffeine, number of reps per week).
Set Y-axis to the dependent variable (e.g., reaction time, max bench press).
Use consistent increments (e.g., equal steps in the same unit, such as minutes) and avoid skipping increments unless justified (an axis-break squiggle can denote an intentional gap).
Label axes with units (e.g., x: dose in grams; y: reaction time in seconds).
Example 1: Bench press data
IV: number of reps per week → X-axis
DV: max bench press (lbs or kg) → Y-axis
Data points example: (20, 145), (30, 165), (40, 165), (70, 220)
Example 2: YouTube data
IV: number of uploads per week → X-axis
DV: number of views → Y-axis
Data points example: (20, 33), (25, 145), (50, 180), (70, 150)
Why graphing matters: converts data into a visual form to reveal trends (rising, falling, or non-linear patterns) and facilitates comparison across conditions.
Note on units and consistency: keep units constant across trials (e.g., all times in seconds, all distances in inches or millimeters) and maintain consistent increments to avoid misinterpretation.
Worked Case Studies and Activities
Cough syrup study (case study):
Independent variable: type of cough syrup (new vs standard) or formulation differences.
Dependent variable: patient-reported symptoms (e.g., coughing frequency, severity).
Bias and controls: bias can arise if participants know which syrup they receive; placebo and double-blind designs help reduce bias.
Placebo + double-blind rationale: to measure true pharmacological effect separate from psychological expectations.
Memory and jogging study (case study):
Design flaws noted: jogging group also consumed energy drinks; quizzes were not identical across groups; these introduce confounding variables.
Independent variable: jogging (yes/no).
Dependent variable: memory test score.
Control variables that should have been held constant: the memory quiz questions and the energy drink condition.
Key learning: when multiple variables are changed, it becomes impossible to attribute effects to a single cause.
Blood pressure medication case (case study): Cardius vs Presterol
Data discussed: Cardius group saw an 18-point drop in blood pressure; Presterol group saw a 12-point drop.
Reported interpretation: Cardius appeared more effective; the transcript describes it as 33% more effective, but the percentage depends on the chosen reference baseline:
Absolute difference: 18 mmHg vs 12 mmHg (a 6 mmHg gap)
Relative improvement: (18 − 12) / 12 = 0.50 (50%) when measured against Presterol's drop, but (18 − 12) / 18 ≈ 0.33 (33%) when measured against Cardius's drop.
Important caveats: broader data, long-term effects, side effects, and patient well-being need to be considered before concluding overall superiority.
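The baseline-dependence point in this case study reduces to a two-line calculation, sketched below with the blood-pressure drops from the notes:

```python
# Sketch: why a "33% more effective" claim depends on the chosen baseline.
# Blood-pressure drops (mmHg) come from the case study in the notes.
cardius_drop = 18
presterol_drop = 12

diff = cardius_drop - presterol_drop   # absolute difference: 6 mmHg
vs_presterol = diff / presterol_drop   # 0.50 -> "50% more effective"
vs_cardius = diff / cardius_drop       # ~0.33 -> the transcript's "33%"

print(f"relative to Presterol baseline: {vs_presterol:.0%}")
print(f"relative to Cardius baseline:   {vs_cardius:.0%}")
```

Both numbers describe the same 6 mmHg gap; only the denominator changes, which is why reports of relative effectiveness must state their baseline.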
Randomization, placebo, and bias questions from the activity:
Why random assignment matters: to avoid systematic bias where certain characteristics cluster in one group.
Why placebo and double blind improve experiments: reduce expectancy effects and observer bias.
Ethical and practical notes: safety, patient welfare, and the complexity of human trials often shape design choices.
Iterative Science: Serendipity, Replication, and Machines
Iteration and revision: scientists frequently revisit early steps when new data arrive, refining questions and hypotheses rather than following a rigid linear path.
Serendipity: accidental discoveries can become meaningful findings when data and observations are kept and revisited.
Replication and publication: repeating experiments under the same conditions by others strengthens confidence in results; publication enables peer review and broader validation.
Machines vs humans in measurement: machines can provide objective, repeatable measurements with calibrated accuracy, helping to reduce human bias; complex biology often benefits from instrumented measurements to improve reliability.
Real-world reminder: many grand scientific advances arose from collaborative, multi-lab efforts rather than a single experiment, highlighting the importance of reproducibility and shared methodology.
Practical Takeaways and Quick Reference
Always start with a clear, testable question; refine via background research before forming a hypothesis.
Identify and separate IV (what you change), DV (what you measure), and CVs (factors you keep constant).
Use random assignment and placebo controls when feasible to mitigate biases; consider double blind when safety and ethics allow.
Collect multiple trials and document methods meticulously in a lab notebook; graphs should clearly label axes and units.
Be prepared to revise hypotheses and questions based on data; results may raise new questions and lead to further experiments.
Understand the limitations of studies involving humans; account for confounding variables like stress, medications, diet, time of day, and measurement conditions.
When communicating results, present data, analysis, and conclusions clearly, and consider publication and replication to establish reliability.
Keep a curious and flexible mindset: science is iterative, collaborative, and often nonlinear, with serendipitous discoveries alongside carefully controlled experiments.
Example caffeine result: if a person ingests ten grams of caffeine, their reaction rate improves by a factor of 3.
Blood pressure study data discussed: Cardius vs Presterol
\Delta BP_{\text{Cardius}} = -18\,\text{mmHg}, \quad \Delta BP_{\text{Presterol}} = -12\,\text{mmHg}
Relative improvement (illustrative): (18 - 12)/12 = 50\% against the Presterol baseline, versus (18 - 12)/18 \approx 33\% against the Cardius baseline.
Note: the transcript's 33% figure highlights the importance of stating the reference baseline and units when reporting relative effects.
Final reminder: the scientific method is a practical, living process—success hinges on careful design, transparent data, replication, and ethical consideration in study conduct.