Unit 2

Learning outcomes

  1.  Describe science;

  2.  Differentiate between inductive and deductive approaches to science;

  3.  Differentiate between basic (pure) and applied science;

  4.  Describe the steps in the scientific method;

  5.  Give an example of the use of the scientific method; and

  6.  Explain why scientific uncertainty is inherent in the scientific method.


  • Science: a way of knowing or seeking reliable but not infallible knowledge about the real, observable, natural, and physical world

    • Falsely understood as esoteric, abstract, and complex

    • Scientia: “to know”

    • Science gathers, processes, classifies, analyzes, and stores information on everything observable in the universe

    • The use of evidence derived from observation as a means of building truths

    • Measurable

  • Why science matters

    • Almost all of the materials around us and the activities we do result from science

    • Understanding science helps one make intelligent consumer choices about products and services

      • Eg, diet, health care, exercise, etc.

    • Scientifically literate citizens can ensure that the government uses scientific research properly to make decisions on environmental, social, economic, and military policies

      • Evaluate claims

  • Disciplines:

    • Physical science deals with the nature of matter and energy. It includes disciplines such as physics and chemistry.

    • Earth science, including geology and astronomy, studies non-living matter on Earth and in the universe.

    • Behavioural science examines human behaviour and organization within individuals (psychology) or groups (social sciences).

    • Biological science, such as zoology and botany, investigates living things

    • Many cross the boundaries

      • Eg. anthropology is both biological and behavioural; geography is both earth and behavioural

  • Subdisciplines

    • Zoology

      • Specializations

        • Entomology is the study of insects

        • Herpetology is the study of amphibians and reptiles

        • Ornithology is the study of birds

        • Mammalogy is the study of mammals

        • Ichthyology is the study of fish

          • Study topics:

            • Development, evolution, ecology, physiology, and behaviour

  • Ancient Greeks: used reason to explain phenomena

    • Conclusion is valid if the argument is sound

    • Issue: what you think and the reality are often not the same

    • Still popular in math and logic

  • Empiricism (Renaissance): demands that ideas be tested in the real world, making observation, measurement, and experiments the central aspects of science

    • Induction and deduction (https://www.livescience.com/21569-deduction-vs-induction.html)

    • Inductive method: first collects and analyzes data to form a hypothesis

      • Derives a general conclusion from specific premises or examples

      • The truth of the premises merely makes it probable but not necessary that the conclusion is true

      • Researchers are not guided by method to determine what should be measured or how it should be measured

        • Doesn’t start with hypothesis

      • Does ensure greater inclusion of data

      • Bottom-up approach

      • Leads to hypothesis generation

  • Deductive method: starts with possible explanation of how or why something is the way it is, then collects data on the subject and analyzes it to examine the hypothesis

    • Top-down approach

    • Derives a specific conclusion from a general premise

    • Generally more efficient for advancing scientific knowledge

    • Reduced chance that research will be led astray

    • Used more frequently in sciences that already have well established hypotheses and theories

    • Scientific method is derived from deductive principles

    • An organized process scientists use to solve problems and find answers to questions

  • Induction can be used when little is known about a subject to gather initial information and then form a hypothesis for future investigation

    • Eg. how the theory of evolution by natural selection (Darwin) was formed

  • Two categories based on the objective of research: basic (or pure) and applied (or practical)

  • Applied science is the practical application of basic scientific research

    • Not all basic science has an immediate application (sometimes considered useless, a misguided view, since applied science often builds on basic science)

  • Basic: seeks knowledge to satisfy the disciplinary curiosity about something

    • Carried out in academic institutions by scientists who publish their research in journals

    • Eg. the role of calcium in zebra mussel shell growth

  • Applied: seeks to solve problems and often produces new technology or products

    • Scientists working in private industries

    • Findings are not usually made public

      • Some journals are devoted to publishing applied research

    • Eg. examining ways to prevent the spread of invasive zebra mussels into Ontario’s waterways

  • Applied and basic research are closely linked; sometimes distinctions are blurred (eg. biotechnology)

  • Science doesn’t need to be goal directed to be useful

  • Basic research drives ideas 

  • The scientific method

    • Hypothesis: must be testable

    • Prediction: derived from the hypothesis; states what should be observed if the hypothesis is correct

    • Experiment: must be reproducible so that findings can be confirmed or refuted; record the results

    • Hypothesis is accepted if measurements or observations agree with predictions; otherwise rejected

      • If hypothesis is accepted, it may be subject to further experimental tests, and if it continues to hold up, become a scientific theory

      • If the hypothesis is rejected, the researcher may examine alternative hypotheses to generate different predictions

        • Many hypotheses are proposed and tested before a scientific problem is understood

  • Scientists often repeat experiments to confirm findings or test different predictions of a hypothesis.

  • Consistent results across studies strengthen the hypothesis.

  • Testing all predictions is impossible, leaving room for new evidence to refute a hypothesis.

  • Predictions can be tested and proven true or false; hypotheses can be rejected but never fully proven true.

  • Science embraces varying degrees of uncertainty rather than absolute truths.

  • A well-tested hypothesis can become a theory, such as the theory of evolution.

  • Theories can change or be rejected based on new evidence or observations.

  • Skepticism, questioning beliefs, and relying on evidence are key traits of good scientists and critical thinkers.

  • The Case of the Peppered Moth

    • Initial observations

      • Pre-Industrial Revolution England: Light-coloured peppered moths were more common and camouflaged on lichen-covered trees, while dark moths were conspicuous.

      • Post-Industrial Revolution: Pollution killed lichen, darkening tree trunks, and dark moths became more common in urban areas.

    • Kettlewell’s Hypothesis

      • Moth colouration is adaptive based on tree colour, protecting moths from bird predation

    • Experiments and Findings:

      • Barrel Experiment

        • Moths preferred backgrounds matching their colour (dark moths on dark strips, light moths on light strips).

      • Bird Predation Experiment

        • Birds preferentially ate moths on contrasting backgrounds, supporting the adaptive hypothesis.

      • Mark-recapture experiment

        • More dark moths recaptured in polluted areas; more light moths in unpolluted areas, aligning with predictions


  • Scientific skepticism:

    • Alternative hypotheses could explain the results (e.g., moth movement or other causes of disappearance).

    • Despite limitations, repeated testing supported Kettlewell’s hypothesis.

  • Scientific process:

    • Testing hypotheses through multiple methods and successive studies is essential for scientific progress.

  • Sampling in studies:

    • Measuring an entire population is often impractical; instead, scientists use a sample (n) to infer results for the population

    • A representative sample ensures valid inferences about the entire population

  • Sampling bias/error

    • Occurs when the sample does not represent the entire population (eg. sampling only Canadians for a global diet study)

    • Results may only apply to the sampled group and not the entire population
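
  • A minimal sketch of the sampling points above (Python, with made-up numbers): a random sample usually gives a mean close to the population mean, while a biased sample drawn from only one subgroup can miss it.

    import random
    import statistics

    random.seed(1)

    # Hypothetical "population": daily fat intake (g) for 1,000 people from two regions
    population = ([random.gauss(70, 10) for _ in range(500)]
                  + [random.gauss(90, 10) for _ in range(500)])

    # Representative sample: drawn at random from the whole population
    representative = random.sample(population, 50)

    # Biased sample: drawn from only one region (like sampling only Canadians for a global diet study)
    biased = random.sample(population[:500], 50)

    print("population mean:   ", round(statistics.mean(population), 1))
    print("random sample mean:", round(statistics.mean(representative), 1))
    print("biased sample mean:", round(statistics.mean(biased), 1))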

  • Controlled experiments:

    • Involve assigning subjects to treatment groups (eg. regular vs. high fat diets)

    • Differences in initial conditions (eg. weight, diet, history) can introduce sampling error

  • Minimizing sampling error:

    • Design studies with diverse and randomized samples

    • Control for variables like age, gender, initial weight, and diet history

  • Be aware of sampling errors when interpreting study results, especially those reported in the media

  • Role of Statistics:

    • Statistics describe outcomes of scientific studies and help evaluate results using probabilities.

    • Basic statistical tools include descriptive statistics like the mean and variance.

  • Mean (Average):

    • Represents the central tendency of a dataset.

    • Calculated by summing all observations and dividing by the sample size (n).

    • mean = (sum of all observations) / n (see the short numerical sketch after the frequency distribution notes below)

  • Variance:

    • Describes the spread or distribution of data within a sample.

    • Low variance indicates data points are close to the mean, while high variance shows greater spread.

  • Frequency Distribution:

    • Histograms visualize variance by showing how often values occur in a dataset.

    • Two samples can have the same mean but differ in variance.
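
  • A short numerical sketch (Python, hypothetical values) of the mean and variance notes above: the two samples below share the same mean, but the second is far more spread out, which a histogram of each sample would also show.

    import statistics

    # Two hypothetical samples with the same mean but different spread
    sample_a = [9, 10, 10, 11, 10]   # values cluster tightly around the mean
    sample_b = [2, 18, 5, 15, 10]    # same mean, much wider spread

    for name, data in (("A", sample_a), ("B", sample_b)):
        mean = sum(data) / len(data)        # mean = sum of observations / n
        var = statistics.variance(data)     # sample variance: spread around the mean
        print(f"sample {name}: mean = {mean}, variance = {var}")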

  • Confidence in Results:

    • Higher variance reduces confidence that a sample’s mean represents the population mean.

    • Large variances in treatment groups may make differences in means unreliable, suggesting the need for larger sample sizes.

  • Critical Evaluation of Studies:

    • Reported differences in means should be viewed skeptically without considering the variance.

    • Low variance increases confidence that observed differences are real and not due to chance or sampling error.
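
  • A rough sketch of this point (Python, hypothetical weight-gain data): both comparisons below have the same difference in group means, but the noisier pair gives a much smaller standardized difference, so the same gap is less convincing. Welch's t statistic is used here only as one common way to standardize a difference; the notes above do not prescribe a particular test.

    import math
    import statistics

    def welch_t(group1, group2):
        # Standardized difference between two group means (Welch's t statistic)
        m1, m2 = statistics.mean(group1), statistics.mean(group2)
        v1, v2 = statistics.variance(group1), statistics.variance(group2)
        return (m1 - m2) / math.sqrt(v1 / len(group1) + v2 / len(group2))

    # Hypothetical weight gains (kg): both pairs differ by 1.0 kg in their means
    low_var_regular   = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]
    low_var_high_fat  = [2.0, 2.1, 1.9, 2.2, 2.0, 1.8]
    high_var_regular  = [0.1, 3.0, 0.2, 2.1, 0.3, 0.3]
    high_var_high_fat = [1.0, 4.2, 0.9, 3.4, 1.1, 1.4]

    print("low-variance groups:  t =", round(welch_t(low_var_high_fat, low_var_regular), 2))
    print("high-variance groups: t =", round(welch_t(high_var_high_fat, high_var_regular), 2))
    # The larger the standardized difference, the less likely the gap is due to chance alone.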

  • Two Approaches to Assess Study Results:

    1. Assessing Differences Between Groups:

      • Compares means and variances of two or more groups (e.g., weight differences in diet groups).

    2. Assessing Relationships Between Variables:

      • Examines how two variables relate, such as fat intake and weight gain.

  • Graphing Relationships:

    • Independent Variable (x-axis): Predicted to influence the dependent variable (e.g., fat intake).

    • Dependent Variable (y-axis): Predicted to be affected by the independent variable (e.g., weight gain).

  • Correlation:

  • Positive correlation: Both variables increase together.

  • Correlation does not imply causation; relationships may be influenced by third variables (e.g., exercise).
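
  • A small illustration of correlation (Python, made-up numbers): the Pearson coefficient r ranges from -1 to +1, and a positive r means the two variables tend to increase together; even an r near +1 says nothing by itself about causation.

    import math

    def pearson_r(x, y):
        # Pearson correlation coefficient for two equal-length lists of numbers
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
        sx = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
        sy = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
        return cov / (sx * sy)

    # Hypothetical data: fat intake (g/day) as the independent variable (x-axis),
    # weight gain (kg) as the dependent variable (y-axis)
    fat_intake = [40, 55, 60, 70, 85, 90]
    weight_gain = [0.5, 0.9, 1.0, 1.4, 1.7, 2.1]

    print("r =", round(pearson_r(fat_intake, weight_gain), 2))   # close to +1: positive correlation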

  • Controlling Variables:

    • Experiments that control external factors (e.g., exercise) provide more reliable evidence for causal relationships.

    • Controlled experiments are needed to confirm causal relationships and rule out other influencing variables.

  • Placebo: fake treatment; treatment that is known to have no medical effect

  • Placebo effect: psychological effect of expecting something to happen and in turn something happening

  • Placebos can be used for a control group

  • A control group gives you something to compare your results to

  • Randomly assign participants into two groups: control and experimental

    • Both groups need to receive the same thing (controls for other variables)

    • Eg. same diet, amount of sleep, exercise, etc.

    • The only difference is that one group gets the real pill and the other gets a placebo

  • Plants wouldn't need a placebo since they don't have a nervous system to experience the placebo effect

  • The control group cannot know whether they got the placebo or the treatment

  • To prevent bias or manipulation, the experimenter can’t know who got the placebo or the real thing

  • Double blind experiment: the experimenter and the subjects do not know which of the groups being studied is the experimental group, and which of the groups is the control group

    • Good for eliminating biased research results
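
  • A minimal sketch of random assignment with blinded group labels (Python, hypothetical participants): participants are shuffled into two equal groups that are referred to only by codes, so neither the participants nor the person analyzing the data knows which code received the placebo until the study is unblinded.

    import random

    random.seed(42)

    participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participant IDs
    random.shuffle(participants)                         # random assignment

    groups = {"X": participants[:10], "Y": participants[10:]}   # coded group labels only

    # The key linking codes to treatments is held by a third party until unblinding
    sealed_key = {"X": "placebo", "Y": "real pill"}

    for code, members in groups.items():
        print(f"group {code}: {members}")

    # Only after all data have been collected and analyzed:
    print("unblinded:", sealed_key)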

  • Science vs. Subjectivity:

    • Science relies on observation and measurement, aiming for objectivity.

    • Unlike arts or philosophy, science avoids subjective value judgments (e.g., morality or beauty).

  • Bias in Science:

    • Scientists are influenced by personal factors like knowledge, culture, religion, politics, and socioeconomic background.

    • These biases can shape their choice of topics, experimental design, data analysis, and interpretation.

  • Challenges to Objectivity:

    • Hanks (1996): Believing scientists are completely objective is misguided; their individual approaches may influence their research.

    • Alberts (2011): Science thrives in systems where accountability, meritocracy, and fairness (irrespective of status or connections) guide evaluation.

    • Perez et al. (2022): Advocates for nurturing humanity in science, creating spaces for connection and resistance, particularly for racially minoritized scholars.

  • Key Takeaway:

    • While science strives for objectivity, acknowledging and addressing inherent biases is essential for maintaining credibility and inclusivity in research.

  • The New Egypt (Alberts, 2011) notes

  • Egypt's Revolutionary Context: Following peaceful demonstrations in Tahrir Square, Egypt is experiencing a mix of exhilaration and uncertainty as it works toward establishing a functional democracy.

  • Role of Science in Democracy:

    • Science provides global knowledge benefiting labor, health, and prosperity, essential for national development.

    • Democracies thrive on qualities inherent in science: creativity, rationality, openness, tolerance, and respect for evidence.

    • A "scientific temper," as advocated by India’s Jawaharlal Nehru, is vital for democratic progress.

  • Science as a Meritocracy:

    • Success in science demands merit-based evaluations, where ideas and results matter more than the source.

    • Egypt must adopt merit-based systems in academia, research funding, and public institutions to foster excellence.

  • Lessons from Other Nations:

    • In India and Egypt, granting life tenure in government positions after a short period has led to inefficiency.

    • Thriving institutions require accountability and merit-based advancement, not favoritism or social connections.

  • Challenges in Implementation:

    • Merit evaluation requires credible and unbiased peer review, as seen in the U.S. higher education system.

    • Egypt, like the U.S., faces the challenge of transitioning public institutions, such as schools, to a trust-based, meritocratic system.

  • Key Takeaway: Establishing meritocracy in science and governance is critical for Egypt’s success in both democracy and national development.

  • Purpose and Process of Peer Review:

    • Peer review is a system designed to ensure the quality and reliability of scientific studies, despite its limitations in fostering innovation.

    • Scientists submit papers to peer-reviewed journals, which undergo evaluation by editors and independent experts in the field.

  • Steps in Peer Review:

    • Editors assess submitted papers for relevance and interest.

    • Papers deemed suitable are reviewed by experts for experimental design, statistical analysis, and logical conclusions.

    • Based on feedback, papers are either accepted, revised, or rejected.

    • Some journals employ a double-blind system, where neither authors nor reviewers know each other's identities, to reduce bias.

  • Structure of Research Articles:

    • Primary Research Articles:

      • Include sections like introduction, methods, results, discussion, conclusion, and references.

      • The introduction establishes the rationale for the research. The methods section describes how the study was performed in enough detail that other scientists could reproduce the experiment. The results section presents original data, often including statistics, graphs, and tables. The discussion is an opportunity to reflect on the results in context, identify any limitations, and recommend future research pathways.

      • Present original data, often with technical terms and statistical analyses.

      • Serve as primary sources, as they report firsthand research findings.

    • Review Articles:

      • Summarize and analyze existing research on a specific subject.

      • Do not present new data but synthesize findings from primary sources.

      • Considered secondary sources, published in peer-reviewed journals or specialized review journals like Trends in Ecology and Evolution.

  • "Publish or Perish" Culture:

    • A scientist’s reputation and career often depend on the quantity and quality of their published work.

    • While rigorous, the peer-review system does not make papers immune to criticism or error.

  • Key Takeaway: Peer review plays a central role in maintaining scientific integrity, but it is not without challenges. Understanding the publication process helps evaluate the reliability and context of scientific findings.

  • Publish or Perish (Clapham, 2005) notes

    • Publishing vs. Non-Publishing Scientists:

      • Some scientists prioritize frequent publication over thorough analysis, while others fail to publish at all, losing valuable knowledge.

      • Unpublished work risks being lost forever, especially in cases where scientists pass away or fail to document their findings comprehensively.

    • Obligation to Publish:

      • Scientists, particularly those receiving public funding, have a duty to share their research.

      • Published findings inform conservation efforts, scientific progress, and evidence-based management.

    • Benefits of Publishing:

      • Advances Science: Publications stimulate ideas, modify hypotheses, and drive new research, forming the backbone of the scientific method.

      • Enables Peer Review: Peer review, despite its imperfections, helps refine research, identify flaws, and improve study design.

      • Encourages Precision: Writing forces scientists to organize data, critically analyze results, and articulate insights clearly, leading to deeper understanding.

    • Consequences of Not Publishing:

      • Long-term studies that remain unpublished may be criticized for design flaws too late to correct.

      • Speaking about ideas without scrutiny risks creating an illusory sense of validation.

    • Personal and Professional Growth:

      • Writing helps scientists think more deeply and systematically about their work.

      • Publications enhance career prospects, signaling research competence and dedication.

    • Overcoming Challenges:

      • Novice scientists often struggle with writing but improve with practice and mentorship.

      • Collaborating with skilled writers, such as graduate students, can help overcome barriers to publishing.

    • Call to Action:

      • Scientists should prioritize publishing significant findings, dedicating uninterrupted time to writing.

      • Publications are a scientist’s legacy, critical for advancing both personal careers and the collective body of scientific knowledge.

    • Key Message: Publishing research is essential for scientific progress, personal growth, and professional success. Scientists must overcome barriers, prioritize dissemination, and ensure their work contributes to the broader scientific community

  • Popular science articles

    • Found in magazines like Scientific American and Discover.

    • Simplify scientific research for a general audience.

    • Not peer-reviewed and lack the detail needed to replicate experiments.

    • Usually written by professional writers, not scientists.

  • Internet Accessibility:

    • Peer-reviewed journals and popular science content are widely available online.

    • Includes science news websites, magazines, and books.

  • Cautions with Online Scientific Information:

    • Some websites prioritize entertainment, marketing, or advocacy over accuracy.

    • Quality assurance is limited, as anyone can publish online.

    • Readers must critically evaluate the credibility and reliability of sources.

  • Criteria for evaluating the quality of internet research sources:

  • Purpose: What is the purpose or intent of the website, and how well is this conveyed to the reader? Is the purpose best described as i) scholarly research or reference material, ii) business or marketing, iii) entertainment, iv) news, or v) advocacy? Is the purpose stated on the website, and is it clear to the reader?

  • Authors and sponsors: Who is the website's author, and who produced or sponsored it? Was it i) an individual, ii) a commercial enterprise, iii) a non-profit organization, iv) an academic institution, v) a research agency, or vi) a government agency? Are the authors identified by name, or are they anonymous?

  • Qualified scientific source: Are the authors a well-qualified scientific source? Are the credentials of the individuals listed, or is there any indication that they are experts in their field? Is it a reputable scientific organization, institution, agency, or department?

  • Source bias-reduced: Is the source bias-reduced? Here are some things that might suggest that the source is biased: i) the authors are trying to sway the opinion of the audience, ii) they are selectively choosing the information that they present in favour of their viewpoint, iii) they will profit directly from the information provided, and iv) they are trying to sell something.

  • Empirical evidence: Are data from scientific studies, reports, surveys or reviews provided on the website (e.g., averages, percentages, statistically significant differences, graphs)?

  • Accuracy: How reliable, accurate, and error-free is the information? Is the empirical evidence the results of original research by the authors? If the information is from other sources, are the references provided so that the accuracy of the information can be verified?

  • Information reviewed: Has the information on the website been reviewed in any way to ensure its quality? Has the website been reviewed and rated by an independent internet reviewing service? Is there any indication that there has been some editorial control of the content? Has the information been peer-reviewed for its scientific merit?

  • Up-to-date: Is the information current? When was the website created, updated or revised, published or copyrighted? Can you determine how regularly the website is updated? Is the website well-maintained, or are there links that don’t work?

  • Scientific Misconduct (Goodstein, 2002)

  •  Introduction to Scientific Misconduct

  • Scientific misconduct is a critical issue in research, involving fraud and misrepresentation in scientific studies and publications.

  • Author David Goodstein recounts his experience with scientific misconduct, emphasizing the importance of formal regulations in universities.

  • Understanding Scientific Misconduct

  • Definition: Scientific misconduct involves fraudulent misrepresentation of research results or methods, primarily in biomedical sciences.

  • Goodstein notes that while serious misconduct (e.g., faking data) is rare, its implications for scientific integrity are significant.

  • The process of science is generally self-correcting; however, the contamination of the scientific record by fraudulent work remains a serious concern.

  • Government Regulations and Case Studies

  • Historical issues with government handling of scientific misconduct cases, often conflating serious fraud with lesser misconduct.

  • Goodstein's involvement in drafting regulations at the California Institute of Technology illustrates the practical implications of regulatory frameworks.

  • A famous case highlighted the role of proper regulations and protocols during misconduct investigations.

  • The Nature of Fraud

  • Intent to Deceive: Fraud is differentiated from minor errors or misconceptions. It requires intentional misrepresentation of data or methods.

  • Common behavior includes omitting failures and exaggerating successes in research papers, but this does not qualify as fraud.

  • Research suggests the prevalence of misconduct is higher in biomedical sciences compared to other fields such as physics and geology.

  • Factors Contributing to Scientific Misconduct

  • Career Pressure: Universal motivator among researchers, pushing some to cut corners rather than adhere to rigorous methodologies.

  • Misunderstanding of Reproducibility: Many scientists believe they know the expected results without thorough experimentation, leading to a willingness to misrepresent findings.

  • Field-Specific Issues: Biomedical sciences, characterized by variability in biological results, may provide cover for potential fraud.

  • Goodstein cites the Piltdown Man case as a historical example of fraud that was eventually rejected by the scientific community despite initial acceptance.

  • Changes in Regulatory Definitions

  • Differences in definitions of scientific misconduct between federal agencies and institutional policies

  • The federal definition originally included a catch-all phrase about deviation from accepted practices, causing controversy in the scientific community.

  • In 2000, a new guideline refined the definitions and criteria for misconduct, stressing that misconduct must be committed knowingly and proven by a preponderance of the evidence.

  • Current and Future Impacts of Misconduct

    • Growing pressures in science today stem from competition for research funding and positions, augmenting the potential for misconduct.

    • The reliability of peer review, crucial for maintaining scientific integrity, is jeopardized by personal interests of referees competing for the same resources.

    • Goodstein emphasizes that the 'Myth of the Noble Scientist' presents a false image of researchers, who, despite their adherence to honesty in data reporting, engage in competitive behavior reflective of ordinary human ambition.

  • Conclusion

    • Acknowledging the competitive nature of scientific research is essential for addressing the realities of scientific misconduct.

    • Differentiating between minor scientific conduct errors and serious misconduct is crucial for the integrity of the scientific enterprise.

    • As the scientific landscape continues to evolve, proactive measures and honesty about the realities of research will be key in mitigating misconduct.

  • Claudia Lopez Lloreda article (https://www.science.org/content/article/university-investigation-found-prominent-spider-biologist-fabricated-falsified-data)

    • McMaster University Investigation:

      • Concluded that behavioural ecologist Jonathan Pruitt engaged in data fabrication and falsification.

      • The investigation revealed Pruitt failed to meet the research integrity expectations of a tenured professor.

      • McMaster's findings were made public after two years of investigation and a settlement.

    • Reactions:

      • Co-author Kate Laskowski, who had previously raised concerns about Pruitt’s work, expressed relief at the university’s clear labeling of the misconduct.

      • Laskowski had retracted papers due to data anomalies, which led to the retraction of 15 of Pruitt’s papers over three years.

  • Pruitt’s Career and Response:

    • Pruitt had received a prestigious Canada 150 Research Chair title in 2018 but resigned after the misconduct investigation.

    • Pruitt declined to comment on the findings but indicated they would speak after the release of their first book.

  • Journal Retractions:

    • Several journals, including Animal Behavior, retracted papers before McMaster's findings were made public, after reviewing the original data.

    • McMaster’s investigation covered eight of Pruitt’s papers, finding issues like data duplication and inadequate record-keeping.

  • Impact on Research Community:

    • The official findings confirm data fabrication and falsification, adding closure for Laskowski and other co-authors.

    • The case is seen as a cautionary tale, with lessons learned for data integrity in the research community

  • Findings of Misconduct (ORI, 2015)

    • Defining Research Misconduct

      • Any institution that applies for or receives Public Health Service (PHS) support is subject to ORI's guidelines.

      • Applicable activities include:

        • Grant applications or proposals

        • Research training or activities related to that training

        • Operation of tissue and data banks or dissemination of research information

        • Plagiarism of research records

  • ORI defines research misconduct as:

    • Fabrication: Making up data or results.

    • Falsification: Manipulating research materials or changing data to misrepresent research.

    • Plagiarism: Appropriation of another person’s work without credit.

  • Conditions for misconduct:

    • Must represent a significant departure from accepted practices.

    • Must be committed intentionally, knowingly, or recklessly.

    • Must be proven by a preponderance of evidence.

  • Consequences of Misconduct

    • Findings may lead to debarment or voluntary exclusion agreements.

    • In 2005, ORI received 265 allegations; only 8 resulted in official action.

    • Debarred individuals/institutions cannot participate in federally funded research; debarments apply government-wide.

    • Voluntary Exclusion Agreements are negotiated to avoid full investigation.

    • Excluded entities are listed on ORI’s PHS Administrative Action Bulletin Board for three years.

    • ORI data for 2006-2007 showed 24 investigations led to 15 voluntary exclusions and 8 debarments.

    • Specific cases included falsification of data, fabrication of records, and severe consequences like criminal charges.

  • Causes of Misconduct

  • The integrity of science relies on the accurate presentation of data and methods.

  • Risk factors for misconduct:

  1. Career pressure among researchers.

  2. Anticipation of results causing researchers to skip proper methods.

  3. Low reproducibility in specific fields encourages misconduct.

  • Falsifications tend to be uncovered through colleague reports, admissions, or failed replication.

  • Example Cases

    • UAB: The BCX-34 and BioCryst Case

      • Involves misconduct by UAB President Claude Bennett and BioCryst concerning clinical trials for BCX-34.

      • Image manipulation and data fabrication were significant components of the misconduct.

      • Investigations revealed discrepancies between data provided to FDA and what was actually produced.

      • Consequences included severe criminal charges and lifetime bans from drug testing for involved researchers.

    • Hwang Woo-suk’s Cloning and Stem Cell Research

      • Notably cited for fraudulent data on stem cell cloning involving fabricated and manipulated images.

      • Misrepresentation of DNA tests and consent breaches highlighted ethical violations.

      • Coercion of colleagues for participation raises concerns about autonomy and informed consent.

      • The public trust in scientific research is adversely affected, underscoring the need for integrity.

  • Conclusion

    • Research misconduct has severe consequences that extend beyond individuals to affect public health and trust in science.

    • Continuous scrutiny and adherence to ethical standards are vital for the integrity of scientific research.

Definitions in Science

1. Alternative Hypothesis

A statement proposing an alternative explanation for a phenomenon, tested against the null hypothesis in scientific experiments.

2. Applied Science

The practical application of scientific knowledge to address real-world problems and develop new technologies or products.

3. Basic Science

Research aimed at increasing fundamental understanding without immediate application; driven by curiosity and knowledge-seeking.

4. Deductive Reasoning

A logical process where a general premise leads to a specific conclusion, often used to test hypotheses.

5. Experiment

A structured procedure carried out to investigate a hypothesis, involving manipulation and control of variables.

6. Falsifiable

The characteristic of a hypothesis or theory that allows it to be proven wrong through evidence or experimentation.

7. Hypothesis

A testable, specific prediction about the relationship between variables based on prior knowledge or observation.

8. Inductive Reasoning

A logical process that involves forming general conclusions from specific observations or data.

9. Observation

The act of monitoring and recording phenomena or behaviors as part of scientific inquiry.

10. Prediction

A forecast based on a hypothesis about what will occur under specific conditions during an experiment.

11. Skepticism

A critical approach that involves questioning the validity of claims and requiring evidence before acceptance.

12. Science

A systematic way of acquiring knowledge about the natural world through observation, experimentation, and analysis.

13. Scientific Method

An organized process that scientists use to systematically investigate phenomena, formulate hypotheses, conduct experiments, and analyze results.

14. Scientific Theory

A well-substantiated explanation of an aspect of the natural world based on a body of evidence and tested hypotheses.

15. Testable

Describes a hypothesis or statement that can be evaluated through observation or experimentation.

16. Uncertainty

The inherent lack of exactness or predictability in scientific results and measurements, reflecting varying degrees of confidence.

17. Bias

A systematic tendency to favor certain outcomes over others, potentially distorting research results.

18. Blind Experiment

An experimental procedure where subjects do not know whether they are receiving the treatment or a placebo to reduce bias.

19. Control

A standard of comparison in experiments that isolates the variable being tested to determine its effect.

20. Correlation

A statistical measure that describes the extent to which two variables change together, but does not imply causation.

21. Data

Factual information collected during an experiment, used for analysis and inference.

22. Double-Blind

An experimental design where neither the participants nor the experimenters know who belongs to the experimental or control group to prevent bias.

23. Mean

The average of a set of numbers calculated by dividing the sum of the values by the total number of values.

24. Placebos

Inert substances used in controlled experiments to assess the effectiveness of a treatment while controlling for psychological effects.

25. Random Assignment

The random placement of subjects into experimental groups to ensure that each group is comparable and reduce bias.

26. Sample

A subset of a population used to make inferences about the larger group from which it is drawn.

27. Sample Size

The number of subjects included in a study; larger sample sizes generally provide more reliable and valid results.

28. Sampling Error

The error that occurs when a sample does not accurately represent the population from which it was drawn.

29. Statistical Significance

A measure that indicates whether the observed difference between groups is likely due to chance or represents a true effect.

30. Statistical Tests

Mathematical procedures applied to data to determine the likelihood that an observed effect is due to chance.

31. Statistics

The science of collecting, analyzing, presenting, and interpreting data.

32. Variance

A statistical measure that represents the spread or distribution of a set of data points around the mean.

33. Peer Review

A process of evaluating scientific work by experts in the same field before publication to ensure research quality and credibility.

34. Popular Source

Materials aimed at a general audience that simplify and communicate scientific concepts but are not peer-reviewed.

35. Primary Source

Original research articles that present firsthand findings or data.

36. Scholarly Source

Research published in peer-reviewed journals, characterized by rigorous standards and credibility.

37. Scientific Misconduct

Unethical behavior in research, including fabrication, falsification, and plagiarism of scientific results.

38. Secondary Source

Materials that summarize, interpret, or analyze primary research, such as review articles or textbooks.
