Four ways of knowing
Intuition
Authority
Rationalism
Empiricism
Wilhelm Wundt
Set up the first lab to study conscious experience
Essentially, measuring reaction time
Wanted to show that the conscious experience can be studied/measured
Loved to muse about consciousness and what it means to exist
William James (functionalism)
Believed consciousness is an ever-changing flow of images and sensations
Concerned with how the mind functions to adapt to the environment
Admired Darwin and his theory of natural selection
Watson and Skinner (behaviorism)
Psychology MUST study observable behavior objectively
Watson – Little Albert
Skinner – Animals
Uncritical acceptance
Tendency to believe positive or flattering descriptions of yourself
Dunning-Kruger Effect
People with low ability at a task overestimate their ability (don’t know enough to know how bad they are)
Confirmation bias
Tendency to search for, interpret, and remember information that confirms one's preconceptions
Where do research questions come from?
Theory
Layperson definition – a hunch about how something works or why something happens
Scientific definition – a coherent explanation or interpretation of one or more phenomena
Hypothesis
Testable hunch or educated guess about behavior
Null Hypothesis
Predicts no effect or no difference between groups/conditions.
Acts as the “default” assumption to be tested
Alternative Hypothesis
Predicts there is an effect or a difference.
What the researcher expects or is testing for.
Operational Definition
States the exact procedures used to represent a concept.
Allows abstract ideas to be tested in real-world terms.
This is a working definition of a variable (capable of being objectively measured) and is vital when building hypotheses
Conceptual Definition
The abstract, theoretical meaning of a construct (what the idea means, as opposed to how it is measured)
Independent Variable
The variable that the researcher thinks will influence or predict the outcome
Sometimes it is manipulated (e.g., type of therapy: CBT vs. no treatment).
Sometimes it is a pre-existing difference (e.g., sex, smoker vs. nonsmoker).
Dependent Variable
The variable that is measured to see if it changes because of the IV.
Think of it as the effect or outcome.
Example: Do exam scores (DV) differ depending on whether students had coffee (IV) that morning?
Confounds/Extraneous Variables
An unmeasured third variable that influences, or “confounds,” the relationship between an independent and a dependent variable by suggesting the presence of a spurious correlation
Simple definition: hidden third variables
Basic research
A research question that focuses on understanding fundamental principles and theories without immediate practical application
Example: How many items (digits) can be stored in our short-term memory?
Applied research
A research question that aims to solve specific, practical problems using psychological principles and theories.
Example: Which intervention best slows memory deterioration in aging populations?
Correlational research
Measure two variables, see if they relate
Correlation ≠ causation
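As a sketch, the strength of a correlation can be quantified with Pearson's r (the data below are made up purely for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the
    product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5]
scores = [55, 60, 70, 72, 85]
r = pearson_r(hours, scores)  # strong positive r, but says nothing about causation
```

A large r only says the two variables move together; a confound (e.g., motivation) could be driving both.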
Random assignment
Each participant has an equal chance at being assigned to all conditions
Randomization spreads out extraneous and confounds (known or unknown)
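A minimal sketch of random assignment (the `randomly_assign` helper is hypothetical, not from any named study):

```python
import random

def randomly_assign(participants, conditions):
    """Shuffle participants, then deal them round-robin into conditions,
    giving everyone an equal chance of any condition and balanced groups."""
    shuffled = list(participants)
    random.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = randomly_assign(range(20), ["treatment", "placebo"])
```

Because assignment depends only on chance, known and unknown confounds end up spread roughly evenly across conditions.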
Single-blind experiments
Participants don’t know which group they belong to (treatment VS placebo)
Double-blind experiments
Participants don’t know which group they belong to (treatment VS placebo)
Experimenters administering the conditions also don’t know which group they are treating (treatment VS placebo)
Lab study
High internal validity
Low external validity
Field study
High external validity
Low internal validity
Belmont Report Principles
Justice
Fair distribution of risks & benefits
Equitable burdens
Equal access to research findings
Beneficence
Do good, minimize harm
Maximize benefits to individuals & society
Avoid unnecessary risks
Respect for Persons
Informed consent & confidentiality
Respect for autonomy
Cultural sensitivity
Informed consent
A process where participants are fully informed before agreeing.
Requires clear, understandable language (no jargon).
Participation must be voluntary and free from coercion.
Anonymity
responses cannot be linked to individual participants because no identifying information is collected
Confidentiality
identifying information is collected but protected from unauthorized access or disclosure
Institutional Review Board (IRB) Composition
Every IRB must have at least 5 members, from diverse backgrounds.
Must include scientists, non-scientists, and a community member unaffiliated with the institution
IRB Levels of Research Risk - Exempt
Low risk, standard measures, existing data
IRB Levels of Research Risk - Expedited
Minimal risk (cognitive testing, memory testing, blood draws)
IRB Levels of Research Risk - At-Risk
Greater than minimal risk
Reviewed by full board of IRB members
Used for at-risk populations (e.g. children) or invasive methods
Conflicting interests between groups
An unavoidable ethical conflict
Use of deception
An unavoidable ethical conflict - some deception is necessary for valid results
Wakefield Study Issues
Sample size: only 12 children
Invasive procedures
Conflict of interest
Manipulated patient records
Science is probabilistic
Physics: laws (apples fall)
Psychology: tendencies (not everyone fits)
Findings are probabilistic, not absolute
This is why replication is essential
Red flags in science
Overclaiming ('proves')
Unfalsifiable ideas
Cherry-picked data
No replication / single-study claims
Constructs
Abstract psychological ideas that require careful definitions to measure
Psychometrics
This is the science of measuring psychological constructs
How do we make a good scale
How do we test reliability and validity
How do we model constructs
Levels of measurement
Nominal – categories with no order (e.g., eye color)
Ordinal – ordered categories with unequal spacing (e.g., finishing place in a race)
Interval – equal intervals, no true zero (e.g., temperature in °C)
Ratio – equal intervals with a true zero (e.g., reaction time in ms)
Reliability
Reliability refers to the consistency of a measure
Test-Retest Reliability
Tests how consistent scores on a measure are across time
Examples: IQ tests, Big Five personality tests
Internal Consistency
refers to the consistency of people’s responses across items on a multiple-item measure (e.g., there is more than one question to respond to)
no unrelated items in the scale
Cronbach's Alpha: measure of internal consistency
assesses reliability
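Cronbach's alpha follows directly from the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores); a minimal sketch:

```python
def cronbach_alpha(items):
    """items: one list of responses per scale item (same respondents, same order).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)"""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))
```

Items that all move together push alpha toward 1; unrelated items drag it down, which is why alpha is read as a check for internal consistency.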
Interrater Reliability
Inter-rater reliability is the extent to which different observers are consistent in their judgments
Validity
Validity is the extent to which the scores from a measure represent the variable they are intended to measure
Face Validity
the extent to which a measurement method appears “on its face” to measure the construct of interest
Content Validity
The extent to which a measure covers the entire construct of interest
Criterion Validity
The extent to which people’s scores on a measure are correlated with an outcome (known as criteria) that one would expect them to be correlated with
Concurrent Validity
The extent to which a measure correlates with other measures or outcomes assessed at the same time.
Predictive Validity
The extent to which a measure predicts future outcomes or behaviors
Discriminant Validity
the extent to which a measure does not correlate with measures of conceptually distinct variables
Convergent Validity
The extent to which a measure correlates with other measures of the same construct
Demand Characteristics
Cues in an experimental setting that influence participants' behavior by suggesting the experimenter's expectations, potentially biasing the results.
(participants guess purpose)
Experimenter Expectancy Effect
How a researcher's expectations can inadvertently influence the behavior of participants and the outcomes of an experiment.
Subtle influence
Type 1 Error
rejecting the null hypothesis when it is actually true
false positive
Type 2 Error
failing to reject the null hypothesis when it is actually false
false negative
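A seeded toy simulation (a made-up decision rule, not a real significance test) shows why false positives occur even when there is no real effect:

```python
import random

random.seed(0)

def looks_biased(heads, lo=40, hi=60):
    """Made-up rule: call a 100-flip coin 'biased' if heads falls outside [lo, hi]."""
    return heads < lo or heads > hi

# The coin is genuinely fair, so every 'biased' verdict is a Type 1 error.
trials = 2000
false_positives = sum(
    looks_biased(sum(random.random() < 0.5 for _ in range(100)))
    for _ in range(trials)
)
rate = false_positives / trials  # small, but never exactly zero
```

Tightening the rule (a wider [lo, hi] band) lowers the Type 1 error rate but makes genuinely biased coins harder to detect, raising the Type 2 error rate.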
Explain the difference between basic and applied research. Provide a personally relevant example.
Basic: A research question that focuses on understanding fundamental principles and theories without immediate practical application
Example: How does social media impact interpersonal relationships among teenagers?
Applied research: A research question that aims to solve specific, practical problems using psychological principles and theories.
Example: What strategies can be implemented to improve student engagement in online courses?
Explain the difference between reliability and validity. Can you have one without the other? Provide an example.
Reliability: refers to the consistency of a measure
Validity: the extent to which the scores from a measure represent the variable they are intended to measure
You can have one without the other. For example, a scale can consistently read one weight and be wrong (invalid).