SIFT, Science vs Pseudoscience, and Research Methods Notes

SIFT Model for Evaluating Information

  • SIFT stands for Stop, Investigate the source, Find better coverage, Trace claims to the original context.
    • Stop: pause and check your emotions before reacting to information. Example: TikTok or other social media content about sensitive topics (e.g., school shootings) can provoke strong emotions; pause and reassess before forming a judgment.
    • Investigate: identify the source and its background.
      • Ask: Who is the source? Is it a scientific journal, an expert, or another kind of outlet?
      • Consider the expert's background (the method of authority) and remember that not everyone presented as an expert truly is one.
      • Assess the agenda or motives of the source.
      • Do not rely on a single source.
    • Find: locate the best available information on the topic from multiple trusted sources.
      • Look for credible outlets, journals, and corroborating evidence.
    • Trace: trace claims back to their original sources.
      • Check for original peer-reviewed content (e.g., in prestigious journals such as JAMA).
      • Trace quotes or claims to the primary material to verify accuracy.
  • Example applied to a TikTok claim (circulating a few years ago) that Disney was lowering the drinking age to 18:
    • Stop: manage the emotional reaction to the claim.
    • Investigate: identify Mousetrap News as the source; note that it provided no court filings or other evidence, and that the piece had a clunky write-up and no byline.
    • Find: compare with other outlets (Yahoo News, MSN, etc.); check for legitimate reporting.
    • Trace: Mousetrap News bills itself as "real Disney news," but it is a 100% fake (satire) site; verify the site's About page and other sources, check whether other outlets carried the story; no credible peer-reviewed or original reporting was found.
  • Additional credibility check tips:
    • Look for a byline and the author’s credentials.
    • Check the article’s evidence, such as court filings, official documents, or data.
    • Seek multiple sources to confirm the claim.
    • Be wary of articles lacking verifiable sources or journalistic quality (punctuation errors, no author, no corroboration).
  • Distinguishing myth vs fact example:
    • Myth: “your sight deteriorates if you read in the dark or on a monitor.”
    • Fact: the eyes tire quickly but recover after rest; the issue is temporary fatigue, not permanent deterioration.
  • Purpose of SIFT: to cultivate critical evaluation of information, especially in an age of easy dissemination on social media and diverse news outlets.

Science vs Pseudoscience

  • Pseudoscience: appears scientific but is often deceptive.
    • Uses scientific-sounding terms and jargon to give an illusion of legitimacy and to persuade lay audiences.
    • Hypotheses are often unverifiable and rest on anecdotal evidence.
    • Claims tend to be framed so they cannot be refuted, and are not readily tested against disconfirming evidence.
    • Relies on selective evidence.
    • Demonstrates confirmation bias and belief perseverance.
  • Science: evidence-based, testable, and self-correcting.
    • Questions asked are solvable and testable through empirical methods; hypotheses concern phenomena that can be observed directly or measured with instruments.
    • Hypotheses are empirical, systematic, falsifiable, and open to public verification.
    • Public verification: findings are shared with the scientific community through journals and conferences; others can critique and replicate.
    • Replication is central to verification and self-correction; findings may be preregistered and data openly shared.
  • Key characteristics of science (four):
    • Empirical: based on observable data.
    • Systematic: bias-minimized, structured, and repeatable methods.
    • Falsifiable: capable of being proven false.
    • Public verification: results subjected to peer review and replication.
  • Important processes in science:
    • Preregistration: researchers publicly declare hypotheses and analysis plans before data collection; helps prevent HARKing (hypothesizing after results are known).
    • Replication: independent repetition of studies to verify results.
    • Distinction between fraud (deliberate deception) and ordinary errors/mistakes; science is self-correcting but requires ongoing verification.
  • Post hoc vs a priori hypotheses:
    • A priori: predictions made before data are collected; stronger evidentiary value.
    • Post hoc: hypotheses generated after examining the data; higher risk of Type I error.
  • Practical implication: in the real world, pseudoscience can masquerade as science; science emphasizes testability, openness, and falsifiability.

The Four Primary Goals of Science

  • Describe behavior: describe what is observed in a given situation using systematic observations.
  • Predict behavior: forecast when and under what conditions a behavior will occur; uses models and analyses (e.g., regression) to make these forecasts.
  • Explain (determine causes of) behavior: true experiments enable causal inferences due to randomization, manipulation of variables, and control of extraneous factors.
    • True experiments require:
      • Random assignment to conditions.
      • Manipulation of the independent variable.
      • Control over confounding (extraneous) variables.
    • Temporal precedence: the cause must occur before the effect, i.e., t(cause) < t(effect).
    • Covariation/correlation: the variables must covary to support a potential causal link, but covariation alone does not prove causation.
    • Directionality and third-variable problems: correlational designs cannot establish causality because of potential reverse causation and confounds.
  • Modify/Change behavior: research can aim to alter behavior (applied aims) based on underlying theories.
  • Distinctions within the reasoning framework:
    • A priori predictions are made before data collection; post hoc explanations are generated after data analysis.
    • A key distinction is between causal inference (need true experiments) and correlational findings (cannot establish causality without temporal precedence and control).
  • What makes a strong hypothesis (four criteria):
    • Empirical: observable and measurable.
    • Systematic: conducted under controlled and unbiased procedures.
    • Falsifiable: possible to be proven false.
    • Public verification: results shared for scrutiny and replication.
  • Important statistical concepts touched on in this section:
    • Type I error: rejecting a true null hypothesis; its probability is the significance level, α = P(reject H0 | H0 is true).
    • Null hypothesis: a statement of no effect or no difference, tested against the data.
    • Post hoc analyses increase the risk of Type I error if not properly controlled (illustrated in the sketch below).
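  • A minimal simulation sketch of the point above (not from the lecture; the sample sizes, alpha level, and number of tests are made-up illustration values): when the null hypothesis is true, a single planned test at α = .05 yields a false positive about 5% of the time, but running many uncorrected post hoc tests makes at least one false positive far more likely.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      ALPHA = 0.05
      N_SIMS = 5000          # simulated experiments in which H0 is true by construction
      N_PER_GROUP = 30       # participants per condition
      N_TESTS = 10           # uncorrected post hoc tests per experiment

      first_test_rejections = 0   # mimics one planned, a priori test
      any_rejection = 0           # at least one false positive among the post hoc tests

      for _ in range(N_SIMS):
          rejected_any = False
          for t in range(N_TESTS):
              # Both groups come from the same population, so any "effect" is a false positive.
              group_a = rng.normal(0.0, 1.0, N_PER_GROUP)
              group_b = rng.normal(0.0, 1.0, N_PER_GROUP)
              _, p = stats.ttest_ind(group_a, group_b)
              if p < ALPHA:
                  rejected_any = True
                  if t == 0:
                      first_test_rejections += 1
          if rejected_any:
              any_rejection += 1

      print(f"One planned test: false-positive rate = {first_test_rejections / N_SIMS:.3f} (expected ~{ALPHA})")
      print(f"{N_TESTS} uncorrected tests: at least one false positive in "
            f"{any_rejection / N_SIMS:.3f} of experiments (expected ~{1 - (1 - ALPHA) ** N_TESTS:.3f})")

    • The second number comes out near 1 - (1 - .05)^10 ≈ .40, which is why preregistered, a priori hypotheses carry more evidentiary weight than uncorrected post hoc findings.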

Experimental vs Nonexperimental Methods; Basic vs Applied Research

  • Nonexperimental (descriptive) research: describes behavior but does not explain why or how; methods include:
    • Observational (naturalistic and laboratory settings): watch and record behavior.
    • Case studies: intensive examination of a single case or a few individuals.
    • Surveys: questionnaires and interviews to gather broad data.
    • Historical/archival research: examining past records to relate to current events.
    • Correlational research: examines relationships between variables but cannot prove causation.
    • Qualitative vs quantitative approaches; mixed methods combine both.
  • Experimental research: true experiments that establish cause-effect relationships through manipulation and control; includes random assignment and manipulation of the independent variable.
  • Basic (pure) vs Applied research:
    • Basic research seeks to build and refine knowledge and theories, sometimes with no immediate practical application (e.g., theoretical issues in cognitive psychology).
    • Applied research seeks solutions to real-world problems and often involves program evaluation and immediate applicability.
    • These two can overlap and inform each other; feedback between basic findings and applied contexts often leads to new hypotheses and revised theories.
  • Examples discussed in class:
    • Classical conditioning (Pavlov) discovered serendipitously during digestion studies; led to therapies like systematic desensitization.
    • Human factors example: designing dorms and dining halls to improve efficiency and the student experience.
    • Applied research example: program evaluation in education and professional development (workshops, institutes).
  • The two-arrow concept on the slides: basic research informs applied work, and applied findings can refine or challenge theories in basic research.
  • Research design considerations mentioned:
    • Selection bias and the importance of random assignment to avoid systematically different groups (see the sketch at the end of this section).
    • Quasi-experiments: use preexisting groups when random assignment is not possible; they can suggest cause-effect relationships, but only with important limitations.
    • Mixed designs: combine quasi-independent (nonmanipulated) variables with truly manipulated variables.
    • Temporal considerations and control of extraneous factors are crucial for drawing conclusions about causality.
  • The research process and real-world constraints:
    • Idea generation often comes from prior literature, theory, observation, and personal experiences.
    • Reading the literature, examining references, and identifying gaps are essential steps.
    • Theories provide a framework to generate hypotheses and predictions; testing against competing theories is common (e.g., false memory theories: fuzzy-trace theory vs. source-monitoring theory).
  • Practical considerations for applied settings:
    • Program evaluation in education and organizational settings.
    • Interdisciplinary areas such as human factors combine cognitive science and engineering to improve system design.
    • Real-world problems can drive research questions and experimental designs.
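  • A minimal sketch of random assignment (not from the lecture; the function name and participant IDs are hypothetical): shuffling the roster before splitting it spreads preexisting differences across conditions by chance, which is what protects a true experiment against selection bias.

      import random

      def randomly_assign(participants, conditions, seed=None):
          """Randomly assign participants to conditions (illustrative helper, not a library function)."""
          rng = random.Random(seed)
          shuffled = list(participants)   # copy so the original roster is untouched
          rng.shuffle(shuffled)
          assignment = {condition: [] for condition in conditions}
          for i, person in enumerate(shuffled):
              # Deal participants out like cards so group sizes stay as equal as possible.
              assignment[conditions[i % len(conditions)]].append(person)
          return assignment

      roster = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participant IDs
      groups = randomly_assign(roster, ["treatment", "control"], seed=7)
      for condition, members in groups.items():
          print(condition, members)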

Primary vs Secondary Sources

  • Primary sources: the original or firsthand accounts of observations and results.
    • Examples: journal articles, theses and dissertations, conference papers/posters, interview transcripts, raw data (e.g., recorded and transcribed interviews), original artifacts.
    • Characteristics: firsthand reports and raw observations directly from the researcher(s).
  • Secondary sources: secondhand reports that discuss or summarize primary sources.
    • Examples: textbooks, review papers, meta-analyses, popular articles that synthesize others’ research.
    • Risks: misinterpretation, out-of-context quotations, incomplete representations of original work.
  • Practical guidance:
    • When in doubt, seek the primary source to verify details.
    • Be cautious with secondary sources that may omit nuances or misrepresent findings.
    • Meta-analyses synthesize data across studies but still rely on the quality of the included primary studies.
    • News outlets and popular media may not provide complete methodological details; verify with primary sources.
  • Special notes:
    • The file-drawer problem: some studies with null results remain unpublished; this affects the completeness of the literature and calls for including null results when possible (e.g., journals dedicated to null findings).
    • The Journal of Null Findings is an example of a venue that highlights null results to address publication bias.

Research Databases and Literature Searching (Berry Library Context)

  • Library resources and search tools mentioned:
    • Berry library website: www.berry.edu/library
    • Journal locator, online catalog, research videos, research guide, and access to research databases (left sidebar).
    • WorldCat as a broad search tool for articles, books, and videos.
    • Focus on the psychology databases, e.g., PsycArticles (referred to in the lecture as both "Psych Articles" and "PsycArticles").
  • Practical search workflow:
    • Start with the research databases section and browse to the psychology databases (under P), such as PsycArticles.
    • Use Advanced Search rather than Basic Search for more control over search fields and operators.
    • Begin with a general topic idea and generate keywords or subject terms to use in searches.
    • Consider starting with a secondary source to gain a broad overview and then drill down to primary sources.
    • Build a literature map by tracing back to classic foundational papers and then follow newer papers that cite them to see how lines of thought have evolved.
    • Use multiple databases to ensure coverage and to locate both classic and current work.
    • The tree-branch analogy: classic foundational work forms the trunk; recent work forms the tips, highlighting how research has branched and evolved.
  • Practical tips for students new to psychology literature searching:
    • Don’t rely solely on Google or AI-generated resources; use library databases for peer-reviewed and credible sources.
    • When you locate a relevant article, check its references for additional primary sources.
    • For assignments, you may be asked to annotate several articles and discuss their variables and relationships; the goal is usually not just description but identifying relationships among variables and possible hypotheses.
    • The first writing assignment will build on mastering these sources and the ability to distinguish primary from secondary literature.
  • Closing note on research workflow:
    • Start with a topic you find interesting, locate several articles, extract key variables, examine how they relate, and identify potential hypotheses and study designs for future work.
    • The process emphasizes critical thinking, careful sourcing, and an understanding that theory and data guide inquiry—often in a dynamic, iterative loop between basic and applied questions.