Science
A means of/approach to acquiring knowledge.
Intuition
Relying on gut feelings, emotions, or instincts to guide decisions.
Authority
Accepting ideas because authority figures state they are true.
Rationalism
Using logic and reasoning to acquire knowledge.
Empiricism
Acquiring knowledge through observation and experience.
Common sense
Folk psychology; intuitive beliefs about behavior.
Tenacity
Holding onto beliefs despite contradictory evidence.
Mysticism
Knowledge through spiritual or supernatural means.
Empirical
Knowledge based on experience, observation, and quantitative data.
Objective
Free from bias, opinion, preferences, and values.
Theory-driven
Based on general principles explaining why and how things occur.
Tentative
No jumping to conclusions; knowledge subject to revision.
Progressive and incremental
Builds on existing body of knowledge.
Parsimonious
Often the simplest explanation is the best; Occam's razor.
Public
Science produces public knowledge; results shared with scientific community.
Systematic
Carefully planned procedures for data collection and analysis.
Realism
The objects of our perceptions exist outside the mind; an external reality exists independently of the observer.
Regularity
Recurring patterns in phenomena; non-randomness of behavior.
Determinism/Causality
Phenomena result from causes that precede them.
Discoverability
Solutions or explanations exist for posed queries.
Quantifiability/Testability
Phenomena of interest can be operationalized and observed.
Falsifiability/Refutability
Claims/hypotheses can be wrong/refuted.
Confirmation bias
Tendency to focus on cases that confirm beliefs while ignoring contradictory evidence.
Theory
Broad explanation of observed phenomena that allows predictions.
Karl Popper's approach
Scientific claims must be expressed so observations could potentially count as evidence against them.
Relationship between universality and falsifiability
Greater universality leads to greater falsifiability.
Broad claim
A broader claim can be proven wrong in more ways, and therefore has greater falsifiability.
CAVEAT
The claim must be specific enough to test.
Example of a non-falsifiable claim
'Everyone is exactly where they are in life because of fate.'
Example of a vague claim
'Everyone creates their own reality through perception.'
Three goals of science
1. Describe, 2. Predict, 3. Explain.
Describe (goal of science)
Make careful observations to document phenomena.
Example of describing
Accessing records at medical marijuana licensing centers to see which conditions people are getting licensed to use medical marijuana.
Predict (goal of science)
Identify how two behaviors or events are systematically related and use that information to predict whether an event or behavior will occur in a certain situation.
Example of predicting
Predicting that an individual who uses medical marijuana likely experiences pain.
Explain (goal of science)
Determine the cause of behavior.
Example of explaining
Understanding the mechanisms through which marijuana reduces pain.
Conceptual definition
Abstract or general meaning of the construct.
Example of conceptual definition
Depression might be conceptually defined as 'people's tendency to experience negative emotions such as anxiety, anger, and sadness across a variety of situations.'
Operational definition
How the construct is measured or observed.
Example of operational definition
Depression can be operationally defined as scores on the Beck Depression Inventory, number of depressive symptoms, or official diagnosis.
Main stages of the scientific method
1. Observation, 2. Develop a Theory, 3. Develop a Testable, Refutable Hypothesis.
Observation (stage of scientific method)
Descriptive Questions and Cause-and-Effect Questions.
Develop a Theory (stage of scientific method)
Identify the variables presumed to be associated with your observations.
Variable
Behavioral/environmental characteristics that vary across/between individuals/settings.
Constructs
Variables that represent behavior or psychological processes.
Develop a Testable, Refutable Hypothesis (stage of scientific method)
Hypothesis must be stated in such a way that it can potentially be wrong/refuted.
Experimental Control
Controlling confounding variables through study design, chiefly random assignment of participants to conditions.
Statistical Control
Measure confound and statistically remove its effect.
Validity
The key criterion in the evaluation of any piece of research or test (measure); the appropriateness of inferences drawn from data.
Research Validity
Data = results of research study.
Test and Measurement Validity
Data = test scores.
Internal Validity
Extent to which causal inferences about observed relationships are sound (relationship is real & not artifactual).
External Validity
Extent to which results can be generalized to and across alternate measures, participants/populations, settings, times.
Statistical-conclusion Validity
Appropriateness of inferences made from data as a function of conclusions drawn from statistical analyses.
Construct Validity
Extent to which study/operationalizations align with theory/conceptualizations (manipulation for experiments).
Population Validity
Generalizability across different groups of people; representative sampling from target population.
Ecological Validity
Generalizability across different settings and contexts; real-world applicability of findings.
Temporal Validity
Generalizability across different time periods; stability of findings over time.
History Effects
External events occurring between measurements that could affect the outcome.
Maturation Effects
Natural changes that happen to participants over time due to aging, learning, or development.
Testing Effects
When taking a pretest influences performance on later tests.
Mortality/Attrition Effects
When participants drop out of your study, potentially creating bias.
Selection Effects
Systematic differences between groups that exist before treatment begins.
Diffusion/Imitation of Treatment Effects
When control group participants learn about and copy the treatment, reducing differences between groups.
Regression to the Mean Effects
When participants with extreme scores naturally move toward average scores on retesting.
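Regression to the mean can be shown with a short simulation (a sketch, not part of the original deck; the population parameters are made up): if each score is stable ability plus independent measurement noise, people selected for extreme pretest scores drift back toward the average on retest with no treatment at all.

```python
import numpy as np

rng = np.random.default_rng(3)  # fixed seed so the sketch is reproducible

# Made-up model: stable true ability plus independent measurement noise.
ability = rng.normal(100, 10, 5000)
pretest = ability + rng.normal(0, 10, 5000)
posttest = ability + rng.normal(0, 10, 5000)

# Select only the extreme pretest scorers (top 5%).
extreme = pretest > np.quantile(pretest, 0.95)

# With no treatment at all, their retest average falls back toward the
# population mean of 100 -- regression to the mean.
pre_mean = pretest[extreme].mean()
post_mean = posttest[extreme].mean()
```

The extreme group scored high partly because of lucky noise at pretest; the noise is redrawn at posttest, so only the ability component persists.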
Instrumentation effects
Changes in measurement tools or procedures between pre- and post-tests, such as using different test durations or having observers become more skilled over time.
Noncompliance effects
When participants do not follow the treatment protocol as intended, making it unclear whether results reflect the treatment's true effectiveness.
Extraneous variables
Any variable other than those being studied; not necessarily problematic unless it becomes a confound.
Confounding variables
Extraneous variables that systematically vary with the independent variable, providing alternative explanations for results.
Random Sampling
Ensures representativeness and external validity by choosing a representative sample from the entire population, where every member has an equal and independent chance of selection.
Random Assignment
Ensures internal validity by controlling confounding variables, equating groups by giving every sample member an equal chance of assignment to any condition.
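The sampling/assignment distinction above can be sketched in Python (a minimal illustration using a hypothetical population of 1,000 person IDs; the numbers are arbitrary):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 1,000 person IDs (illustration only).
population = list(range(1000))

# Random SAMPLING: every member has an equal, independent chance of being
# drawn -> supports representativeness and external validity.
sample = random.sample(population, 40)

# Random ASSIGNMENT: every sampled member has an equal chance of landing
# in any condition -> equates groups on confounds, on average, supporting
# internal validity.
random.shuffle(sample)
treatment, control = sample[:20], sample[20:]
```

Note that the two operations answer different questions: sampling decides who is in the study; assignment decides which condition each participant receives.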
Simple Random Sampling (SRS)
A probability sampling method where every individual and every possible sample has an equal chance of selection, considered the gold standard.
Stratified Random Sampling
A probability sampling method that involves dividing the population into distinct groups (strata) and taking separate simple random samples from each stratum to ensure representation of important subgroups.
Convenience Sampling
A nonprobability sampling method where individuals who are easiest to reach are selected, leading to systematic bias favoring accessible individuals.
Voluntary Response Sampling
A nonprobability sampling method where individuals self-select into the sample based on general appeal, leading to systematic bias favoring motivated individuals.
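The two probability methods above can be contrasted in a short sketch (the population and strata here are invented for illustration): stratified sampling is just a separate simple random sample within each subgroup, with sizes proportional to the subgroup's share of the population.

```python
import random
from collections import defaultdict

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population: (id, stratum) pairs; strata are made-up class years.
years = ["freshman", "sophomore", "junior", "senior"]
population = [(i, random.choice(years)) for i in range(1000)]

# Simple random sampling (SRS): every individual has an equal chance.
srs = random.sample(population, 40)

# Stratified random sampling: a separate SRS from each stratum,
# proportional to stratum size, so every subgroup is represented.
strata = defaultdict(list)
for person_id, year in population:
    strata[year].append((person_id, year))

stratified = []
for year, members in strata.items():
    k = round(40 * len(members) / len(population))  # proportional allocation
    stratified.extend(random.sample(members, k))
```

SRS can, by chance, miss a small subgroup entirely; the stratified sample guarantees each stratum appears.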
Low statistical power
Insufficient ability to detect true effects, often due to a small sample size; controlled by conducting a power analysis to ensure adequate power (conventionally .80).
Violations of statistical test assumptions
When the assumptions required for a statistical test are not met, which can lead to incorrect conclusions; controlled by meeting test assumptions.
Poor reliability of measures
Measurement error that reduces the ability to detect relationships; controlled by using reliable measures.
Sample size (N)
The number of participants needed in a study; larger samples increase statistical power.
Effect size
The magnitude of the effect being studied; larger effects are easier to detect.
Statistical power (1 − β)
The probability of detecting an effect if it truly exists; the conventional target is .80 (80%).
Sampling Error
Random variability in statistics from sample to sample. Statistics from different samples drawn from the same population will rarely be exactly the same.
Relationship with Sample Size
Larger sample sizes increase statistical power and decrease sampling error; smaller sample sizes decrease statistical power and increase sampling error.
Statistical Significance
Whether results are unlikely due to chance (p < .05). Affected by sample size.
Practical Significance
Whether results are meaningful in real-world context. Measured by effect size.
r (Correlation coefficient)
YES, this is an effect size. Measures strength and direction of relationships (−1.00 to +1.00).
d (Cohen's d)
YES, this is an effect size. Measures magnitude of difference between groups.
p (p-value)
NO, this is NOT an effect size. Probability of results if null hypothesis is true.
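The significance/effect-size distinction above can be demonstrated numerically (a sketch with invented parameters): in a very large sample, even a negligible true effect produces a "statistically significant" test statistic while Cohen's d stays tiny.

```python
import numpy as np

rng = np.random.default_rng(7)  # fixed seed so the sketch is reproducible

# A tiny true effect (Cohen's d = 0.05) measured in a huge sample.
n = 50_000
control = rng.normal(0.00, 1.0, n)
treatment = rng.normal(0.05, 1.0, n)

# Effect size (practical significance): standardized mean difference.
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd

# Test statistic (statistical significance): grows with sample size.
se = np.sqrt(control.var(ddof=1) / n + treatment.var(ddof=1) / n)
t = (treatment.mean() - control.mean()) / se
```

Here |t| lands far beyond the 1.96 cutoff while d remains well under the usual "small effect" benchmark of 0.2, which is why effect sizes must be reported alongside p-values.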
Primary Threat to Construct Validity
Loose connection between theory and research study. Misalignment between conceptual and operational definitions.
Experimenter Effects
Characteristics of experimenter influence results.
Experimenter Expectancy
Unconscious expectations influence outcomes.
Demand Characteristics
Participant expectations about study aims.
Social Desirability
Responding in socially acceptable ways.
Hawthorne Effect
Changing behavior due to being observed.
Construct
A 'psychological variable' that cannot be directly observed. It involves internal processes & behavioral tendencies.
Independent Variable (IV)
Variable that is manipulated or controlled by the researcher.
Dependent Variable (DV)
Variable that is measured to see if it is affected by the independent variable.
Continuous Variable
A variable that can take on an infinite number of values within a given range.