Research
gathering information (online, hard copies, etc.)
Scientific Research
collecting & analyzing data
How do we measure theories and hypotheses?
• Self-report (the most common way to gather data, though it can be biased); observation
Examples of Scientific Method
Human/animal subjects, often in lab or field settings
Scientific Method Steps
Steps: 1. Produce a theory; 2. Form a hypothesis (a precise, testable
prediction); 3. Gather data; 4. Analyze results and draw conclusions; 5.
Report the findings
• Use of Scientific Method (Steps):
a systematic method for collecting and evaluating data
producer of research
- For coursework in psychology
- For graduate school
- For working in a research lab
- Not very likely
Consumer of research
- For psychology classes
- When reading printed or online news stories based on research
- For your future career
- All consumers
- Benefits of being a good consumer: don't waste money
Metaphysical (Supernatural) Systems (unexplainable, paranormal)
- Animism: belief of ancient people (soul/spirit in everything)
- Mythology & Religion: explaining behavior through religion (gods, goddesses)
- Astrology: palm readings, zodiac signs
Philosophy
- Philosophers: think, question things, ponder (Plato: proponent of nature;
Aristotle: you are who you are because of your experiences, i.e., nurture)
- Speculation versus experimentation (years ago = speculation; today =
experimentation)
- Years ago, questions were approached subjectively; today, research methods
are objective
- Empiricism: knowledge from experience/observation; systematic empiricism: gathering information from experience in a structured way (Descartes)
Physiology
the study of the functions of living things (animals and people)
- E.g., Galvani (1700s):
frog studies showing that nerve impulses are electrical
Experimental Psychology
founded in Germany
- Wundt, 1879 (the birth of psychology as a science): established the first psychology laboratory
- Schools of Thought: different ways of thinking, such as focusing on the unconscious, the conscious,
etc.
- Being able to say X caused Y lets us predict and explain behavior and apply findings in the real world
- Most research data are gathered via self-report
- Correlational research is more common than experimental research because
causation is hard to establish
How Scientists Approach Their Work
• Scientists are empiricists (relying on experience/observation), but their
observations need to be systematic
• Scientists test theories indirectly by testing hypotheses: the theory-data cycle
• Scientists tackle applied and basic problems.
• Scientists dig deeper (investigate questions).
• Scientists make it public: the publication process (findings are questioned and possibly replicated; publishing findings is essential)
• Scientists talk to the world: from journal to journalism.
• Journalists sometimes skim a study and draw a cause-and-effect conclusion from little more than the headline
Four Cycles
• Theory-Data
• Basic-Applied
• Peer Review
• Journal to Journalism
Theory-Data Cycle
• Theory: general statement about relationship between 2 ideas/variables
• Hypothesis: precise testable prediction, follows theories
• Data: set of observations, "does my data support my prediction?"
• Empiricism and scientific method
• Most important cycle in science
• Ask questions based off theories, make predictions, collect data
• If p is greater than .05, the result is not statistically significant
• If the data support the prediction, the theory is strengthened
Theory example
Is there a relationship between studying and test performance?
Research question example
Are there different study techniques that would influence grade received?
Research design:
How will I test my hypothesis?
Hypotheses example
Those who study over time get better grades than those who cram. Two techniques: spaced studying vs. cramming (research tells us people prefer spaced studying)
Data and example
we want to know whether the data did or did not support the hypothesis; supportive data strengthen the theory, while non-supportive data mean revising the theory or improving the research design (ex: testing cramming spread over time instead of just the night before)
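The hypothesis-data-conclusion steps above can be sketched numerically. Below is a minimal Python permutation test on made-up exam scores (all numbers are hypothetical, purely for illustration): it asks how often randomly relabeling the two groups would produce a difference as large as the one observed, which is one simple way to obtain a p value.

```python
import random
from statistics import mean

random.seed(1)  # reproducible shuffles

# Hypothetical exam scores (illustrative only, not real data).
spaced = [88, 92, 85, 90, 94, 87]   # studied over time
crammed = [78, 83, 80, 75, 85, 79]  # crammed the night before

observed = mean(spaced) - mean(crammed)

# Permutation test: how often does a random relabeling of the pooled
# scores produce a group difference at least as large as the observed one?
pooled = spaced + crammed
n = len(spaced)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        extreme += 1
p = extreme / trials

print(f"observed difference: {observed:.2f}, p = {p:.4f}")
```

If p comes out below .05, the data count as supporting the prediction; otherwise the theory or the research design gets revised, as the note above describes.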
Cupboard theory (Bowlby):
the baby is attached to the mother because she is a source
of food
Contact comfort theory (Harlow):
infant monkeys attach to the mother because she provides comfort
Good Theories
• Supported by Data
• Falsifiable: the theory enables hypotheses that could, in principle, be proven wrong
• Parsimonious (simple)
• Weight of evidence: how much of the accumulated data supports (or fails to support) the theory
Basic-Applied Cycle
• Basic
- Goals: enhance the general body of knowledge
- Examples: how stress levels influence academic cheating; are men or women
more likely to suffer from depression (self-report)
• Applied
- Goals: conducting research to solve practical problem(s)
- Examples: if more stress means more cheating, how could we reduce stress levels? Ask further questions: what about being female leads to depression? Psychological differences?
Peer-Review Cycle: How do scientists share the results of their research?
- Scientific Journals: usually published monthly; submit to the editor of the journal (the editor takes the submission and forwards it to a collection of reviewers)
- Scholarly validity: are you investigating what you say you are?
- Peer-reviewed: been reviewed by scholarly researchers
- Periodical: appears in regular intervals
Journal to Journalism Cycle: Journal versus Journalism
journals (scientific, peer-reviewed; audience: scientific scholars) and journalism (the activity/profession of journalists, e.g., a newscaster; audience: the general public; journalism tries to make information easy to understand and interesting to read, and it needs to be accurate)
Journal to Journalism Cycle: Benefits and risks of journalism coverage
- Journalism usually covers a single study
- The sample may not represent the general public (sometimes the research involved animals or a different country)
- Journalists sometimes draw a causal conclusion when there is none; when a study is later disconfirmed, journalists rarely follow up to report it
- Coverage can still be useful in daily life when it is accurate
Direct experience:
you saw the movie yourself
Indirect experience:
Friend saw movie, tells you it's good (based off someone
else's experience)
Control Group:
doesn't get the treatment
Comparison Group:
broad term can be controlled/experimental group, just a group you compare to
Experimental Group:
gets the treatment (ex: applies coconut oil)
Confound Variable:
a variable that systematically changes along with the independent variable; the worst-case scenario for internal validity
Extraneous variables:
third variables; things you can't eliminate entirely but can try to minimize
Internal validity:
cause and effect; high internal validity = high confidence that X causes Y; low internal validity = not confident that X causes Y
Research vs intuition
Ways in which intuition is biased
o Being swayed by a good story
o Availability heuristic: a mental shortcut; whatever is more available (recalled
more easily) seems more common or reliable = bias
o Ex) a salesperson on a plane mentions a plane accident while trying to sell you life insurance; the accident is easy to recall, so the risk feels higher
o Confirmation bias ("tunnel vision"): we tend to look for information that confirms our
preconceptions
o Ex) two guys meet a blonde girl; one stops talking to her because he assumes she's a
"dumb blonde," while the other keeps talking and finds out she's a
neurosurgeon
o Bias Blind Spot: biased about being biased; think we are exception
The intuitive thinker vs. the scientific reasoner
o Intuition guides decision making; thinking scientifically helps prevent bias
Trusting authorities on the subject
What to consider?
o Source
o Will they profit in some way?
o Education level/credentials
o Did they gather the information scientifically or intuitively?
Finding and reading the research
Consulting scientific sources
Finding scientific sources
Reading the research
Finding research in less scholarly places
Empirical journal articles:
information being presented for the first time; most articles are empirical
Review journal articles:
a summary of already-published studies; looks like a long introduction with no method section
Qualitative
one long introduction describing others' research
Meta-analysis (quantitative)
usually has "meta-analysis" in the title; statistically combines many existing studies to draw conclusions, without collecting new data of its own
Chapters in edited books:
every chapter is written by a different scientist/psychologist, usually on a
common theme; audience: psychologists and psychology students (scholarly)
Full-length books:
usually for the general population; the least common source in psychology; written by one author
Finding Scientific Sources
• PsycINFO: a database for finding psychology research, managed by the APA; mostly peer-reviewed sources; typically accessed through a college/university
• Google Scholar: not limited to psychology articles; doesn't indicate whether a source is peer-reviewed; accessible to everyone
Reading Research
• Components of an empirical journal article
- Abstract: summary of article, important to see what research is about
- Introduction: explains the topic, the theoretical background from past research, the current study, and the hypotheses (background, rationale, hypotheses)
- Method: participants, material, procedure (step-by-step)
- Results: stats, analysis
- Discussion: description of results in words, limitations of research, future research
- References
Reading with a Purpose
• Empirical journal articles
- What is the argument?
• 1st place to look: abstract (to decide whether to read further)
- What is the evidence to support the argument?
• 2nd: end of the introduction (hypotheses)
• 3rd: first part of the introduction
• 4th: first paragraph of the discussion section (key findings)
• 5th: method section
Finding Research in Less Scholarly Places
• All made for general audience
• The retail bookshelf: often with few references
• Wikis as a research source: not reliable
• The popular press
Variable versus constant:
- Vary or change (variable)
- Stays the same, no change (constant)
• Manipulated & Measured Variables
- Ex) effects of working on college performance
- 1. Measured (outcome): college performance (the behavior/response)
- 2. Manipulated (predictor): working (under the researcher's control)
- 3. Constant: college students (held the same)
• From conceptual variable to operational definition (operationalized variables)
- Specifying exactly how a conceptual variable will be measured or manipulated (quantified)
Frequency Claims
A frequency claim describes a particular rate or degree of a single variable. Frequency claims involve only one measured variable.
Examples:
a. 2 out of 5 Americans worry every day
b. 75% of the world smiled yesterday
Association Claims
• An association claim argues that one level of a variable is likely to be associated with a particular level of another variable.
• Association claims involve at least two measured variables.
• Variables that are associated are said to correlate.
Examples:
• Ex) Romantic partners who express gratitude (+) are 3x more likely to stay together
• Ex) People who multitask the most are the worst at it (-)
• Goal: predict behavior
Positive Association
(Claim: romantic partners who express gratitude are 3 times more likely to stay together)
Negative Association
(Claim: people who multitask the most are the worst at it)
Zero Association
(Claim: a late night dinner is not linked to childhood obesity)
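The three patterns above can be illustrated with Pearson's r, whose sign matches the type of association. The data below are invented for illustration (not from the studies the claims refer to), and the correlation is computed directly from the textbook formula:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from its definition."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data, invented for illustration.
gratitude = [1, 2, 3, 4, 5]            # expressions of gratitude per week
months_together = [6, 10, 14, 20, 24]  # relationship length

multitask_freq = [1, 2, 3, 4, 5]       # how often each person multitasks
multitask_skill = [9, 8, 6, 5, 3]      # score on a multitasking test

r_pos = pearson_r(gratitude, months_together)       # variables rise together: r > 0
r_neg = pearson_r(multitask_freq, multitask_skill)  # one rises as the other falls: r < 0
print(r_pos, r_neg)
```

A zero association, like the late-night-dinner claim, would give an r near 0.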
Causal Claims
• Two variables, one of which causes the other
• One manipulated variable (x) & one measured variable (y)
• Ex) Music lessons cause higher IQ
• Ex) Smoking causes liver cancer
Interrogating Frequency Claims
• Example: 80% of college students have been depressed during the last year.
- Construct validity: How are we measuring depression?
- External validity, or generalizability: Is this sample representative? Can we apply the finding from our sample to the rest of the college student population?
Interrogating Association Claims
Example: People who multitask are the worst at it.
- Construct validity: Am I accurately measuring the frequency of multitasking and how good each
individual is at it?
- External validity: Do the findings in this sample apply to the larger population of multitaskers?
- Statistical validity: Is the statistical test I chose giving accurate, reasonable conclusions?
Construct validity:
the extent to which variables measure what they are supposed to measure
External validity:
extent to which we can generalize findings to real-world settings
Statistical validity
the extent to which statistical conclusions derived from a study are accurate and reasonable
statistical validity of association claims
• Strength and Significance
• Avoiding two mistaken conclusions
- Type I error: rejecting the null hypothesis when we shouldn't have (false positive)
- Type II error: failing to reject when we should have (false negative)
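The two error types above can be laid out as a small decision table. The helper function below is purely illustrative (not a standard library API):

```python
def decision_outcome(null_is_true: bool, reject_null: bool) -> str:
    """Classify the outcome of a significance decision."""
    if null_is_true and reject_null:
        return "Type I error (false positive: rejected when we shouldn't have)"
    if not null_is_true and not reject_null:
        return "Type II error (false negative: failed to reject when we should have)"
    if null_is_true:
        return "Correct decision: retained a true null"
    return "Correct decision: rejected a false null"

print(decision_outcome(null_is_true=True, reject_null=True))
print(decision_outcome(null_is_true=False, reject_null=False))
```

The two remaining cells of the table (retaining a true null, rejecting a false null) are the correct decisions.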
Covariation:
two things change together, like studying and test scores; they covary
Temporal Sequence (directionality):
x precedes y; to establish this, the independent variable must be manipulated, which is why experimental research, not survey research, is used
Eliminating confounds:
a variable that changes systematically with the
independent variable, obscuring our ability to see whether x causes y
- Ex) with study time as the IV, if one group studied in the morning and another in the afternoon, one group could be more tired (a confound)
Experiment
manipulate at least one variable and look for changes in a second
Independent variable:
creates our groups; usually manipulated, but can be measured (ex: helping
behavior across low and high self-esteem; independent variable: self-esteem, which is measured)
Dependent variable:
always measured; (in prev. example dep variable: helping behavior)
Random assignment:
assigning participants to groups by chance, so the groups are equivalent before the manipulation
When Causal Claims Are a Mistake
• Does social media pressure cause teen anxiety?
- Covariance: is there an association between social media pressure and teen anxiety?
• Covariance is the only criterion that can be established by survey/observational research
- Temporal sequence: x (social media pressure) must precede y (teen anxiety)
- Confounds (internal validity): if we can't eliminate confounds, internal validity is low
Other Validities to Interrogate in Causal Claims
• Construct validity: measuring what you said you would measure (ex: anxiety, social media pressure)
• External validity/generalizability: target audience (teens)
- Relationship between external and internal validity: increasing one often decreases the other
• Statistical validity: using appropriate statistics (p value less than .05; is the statistical difference a true difference or just chance?)
• Increasing internal validity: use random assignment (this decreases external validity, because controlling variables creates an artificial situation whose findings are harder to take out into the real world)
Prioritizing Validities
- Which of the four validities is the most important?
~ For causal claims: internal validity
-It depends on what kind of claim the researcher is making and the researcher's priorities.
Three common types of measurement
- 1. Behavioral: observe behavior
- 2. Physiological: measuring internal bodily functions (heart rate, blood pressure, MRI brain scans, EEG recordings of the brain's electrical activity)
- 3. Self-report: questionnaire or verbal report (questionnaires tend to get more honest responses and are more common; respondents have more time to think of answers, and questionnaires take less time, cost less, and can gather information from a large group)
Converging Measures
measuring something in more than one way
Categorical versus Continuous:
categorical (qualitative labels: differences in kind/type, e.g., gender, political party); continuous (quantitative: differences in amount, e.g., age, income)
Measurement Scales
• The scale that is used determines the type of statistical test conducted
nominal
most basic; weakest
- Detects differences between categories but not within them
- Events assigned to categories
- Arbitrary # assignment
- No order implied
- Weakest level
- Examples: Do you like spinach? (a yes/no answer is nominal; a scale of how much you truly like spinach would carry more information)
Ordinal
- Stronger level
- Ordering of objects, behaviors or individuals on some dimension
- Can't quantify differences between categories (ex: four labels: freshman, sophomore, junior, senior; we can't quantify the differences between them)
- Examples: ranking the top ten college football teams 1-10 (the ranking doesn't tell you how much better one team is than another)
Interval (equal interval scale)
- Stronger level
- Differences between categories are meaningful quantities
- Differences represent equal increments (intervals)
- Not a true zero (0 doesn't mean 0 amount)
- Examples
• Temperature
• Semantic differential (a list of opposing adjectives; bubble in a response) & Likert scales (strongly agree to strongly disagree; intervals between points are treated as equal; well-established scales)
Ratio
- Provides the most information; strongest scale
- Ordering of scores
- Equal intervals
- True zero (0 means a complete lack of the quantity)
- Physical attributes of objects
- Examples: How much money do you make per year? How many pets do you own? How much did you spend on dinner last night?
- Counts of things (a number of anything) are on a ratio scale
Importance of Measurement Scales
• Determines the amount of information provided by a particular measure
• Determines the types of statistical analyses that can be done
Three Types of Reliability
• 1. Test-retest: taking the test again and getting consistent results; the correlation should be at least .70
• 2. Interrater (inter observer)
- Correlation (.70) (if data is continuous)
- Percent agreement (85%) (if data is categorical)
• 3. Internal (internal consistency)
- Cronbach's Alpha (.9+ = excellent, .8+ = good, .7+ = acceptable, below .5 =
unacceptable): indicates whether the items consistently measure the same construct
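Two of the statistics above can be computed directly. The sketch below uses hypothetical rater codes and item scores (invented for illustration) to compute percent agreement between two raters and Cronbach's alpha from its standard formula:

```python
from statistics import variance

def percent_agreement(rater1, rater2):
    """Share of observations on which two raters gave the same categorical code."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return 100 * matches / len(rater1)

def cronbach_alpha(items):
    """Standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` is a list of per-item score lists, one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# Hypothetical interrater codes: 9 of 10 observations agree -> 90%.
r1 = ["hit", "help", "help", "hit", "help", "hit", "help", "help", "hit", "help"]
r2 = ["hit", "help", "help", "help", "help", "hit", "help", "help", "hit", "help"]
print(percent_agreement(r1, r2))  # meets the 85% guideline above

# Hypothetical questionnaire: 3 items, 5 respondents, items track each other closely.
items = [[4, 5, 3, 5, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 3, 4, 2]]
print(round(cronbach_alpha(items), 2))  # lands in the "excellent" range
```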
Face Validity and Content Validity: Does It Look Like a Good Measure?
Both face validity and content validity are subjective ways to assess validity.
Face validity:
It looks like what you want to measure.
Content validity
The measure contains all the parts that your theory says it should contain.
• Versus internal reliability (consistency): content validity asks whether a measure covers the construct, not whether its items agree with one another
Another way to gather evidence for criterion validity is to use a known-groups paradigm.
The Relationship Between Reliability and Validity: Bathroom scale example
- If someone who weighs 200 lbs. steps on the scale 10 times and it reads "200" each time, the measurement is both reliable and valid. If the scale consistently reads "150," it is not valid, but it is still reliable because the readings are very consistent. If the readings vary a lot around 200 (190, 205, 192, 209, etc.), the scale is not reliable, and an unreliable measure cannot be valid.
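The bathroom-scale example can be mirrored in code: treat bias (distance of the average reading from the true weight) as a stand-in for validity, and spread (standard deviation of the readings) as a stand-in for reliability. All readings below are hypothetical numbers echoing the example:

```python
from statistics import mean, pstdev

TRUE_WEIGHT = 200  # the person's actual weight in the example

def bias_and_spread(readings):
    """bias ~ invalidity: how far the average sits from the true value.
    spread ~ unreliability: how much readings vary among themselves."""
    return abs(mean(readings) - TRUE_WEIGHT), pstdev(readings)

reliable_valid = [200, 200, 200, 200, 200]    # consistent and accurate
reliable_invalid = [150, 150, 150, 150, 150]  # consistent but biased
unreliable = [190, 205, 192, 209, 204]        # averages near 200, but inconsistent

for label, r in [("reliable & valid", reliable_valid),
                 ("reliable, not valid", reliable_invalid),
                 ("not reliable", unreliable)]:
    bias, spread = bias_and_spread(r)
    print(f"{label}: bias={bias:.1f}, spread={spread:.1f}")
```

Note that the third scale averages out near 200, yet its large spread marks it as unreliable, and an unreliable measure cannot count as valid.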
Why are both producers and consumers of research important?
Producers generate new knowledge through experiments and studies, while consumers apply research findings in various fields such as clinical practice, education, and policy-making.
What are the historical perspectives on human knowledge?
1. Metaphysical (animism, supernatural beliefs)
2. Philosophy (speculation-based knowledge)
3. Physiology (scientific foundations of psychology)
What is empiricism?
A method of gaining knowledge through observation and experimentation.
What is the theory-data cycle?
The continuous process of forming theories, testing hypotheses, and collecting data to refine theories.
What makes a "good" theory?
- Falsifiable (can be tested)
- Parsimonious (simple explanations preferred)
- Supported by the weight of evidence
What are the differences between applied, basic, and translational research?
- *Basic*: Expands knowledge without immediate application.
- *Applied*: Solves practical problems.
- *Translational*: Bridges basic and applied research.
What is the peer-review cycle?
The process of experts evaluating research before publication to ensure quality and validity.
What are the risks and benefits of scientific journalism?
- *Benefits*: Increases public awareness and accessibility.
- *Risks*: Potential misinterpretation or sensationalization of findings.
How is research probabilistic?
It aims to generalize findings based on probability, not absolute certainty.
What is the difference between research and experience?
Research includes control groups and comparison groups, while personal experience lacks systematic controls.
What are common confounds in research?
Variables that systematically vary with the independent variable, potentially skewing results.
What are cognitive biases that affect intuition?
- Availability heuristic
- Confirmation bias
- Bias blind spot