Types of psychological questions:
-Description: How are people thinking, feeling, or acting in response?
-Explanation: Understand what causes an event to occur ("how," "why")
-Prediction: Predict future events based on previous observations
-Application: How can we help change people's behaviours and improve lives?
Prior Research on the topic: Covid vaccine
-Hornsey, Harris & Fielding (2018) looked at anti-vax attitudes in 24 countries. Attitudes were associated with motivational factors like reactance and disgust
-people don't like to be told what to do
What is Science?
-Must be an observation: systematic empiricism
-Must be testable: empirical research questions
-Results must be shared: public knowledge
What makes an idea testable:
-Can be supported or opposed with data: value judgements can't be tested
-Can be falsified
What isn't science:
-Pseudoscience: activities and beliefs that pretend to be science but don't follow scientific principles
-Pseudoscience uses ad hoc hypotheses to make data fit the theory, is subjective, avoids peer review, and its studies are vaguely described and can't be reproduced
Basic vs Applied Research
-Basic: Conducted to gain a better understanding of human behaviour, without trying to solve a problem
-Applied: Conducted to try and solve a problem
Why is Psychology a science?
-Because it adheres to the scientific method: it uses empirical observation, is testable and falsifiable, and shares results
Qualitative methods:
Methods that produce data such as written text, photos, interviews, videos, etc.
structured vs unstructured interviews
-Structured interviews: asks client a list of questions, record responses
-Unstructured interviews: Let client lead the conversation
cross sectional surveys
Measure some constructs; see how they are associated
-Experiments: Manipulate one construct, then measure another (e.g. randomly assign some people to experience awe (experimental condition) while others do not (control), then measure prosocial behaviour)
Longitudinal studies:
Measure constructs repeatedly to see how they change over time
Multimethod Design
Use of multiple designs incorporated into one: Applied question: If we get people to spend more time in nature, will
that increase prosocial behavior over time?
■ Test with a combination of research design elements
● E.g. an experimental manipulation + longitudinal follow ups
Generating research ideas (what to think about first)
■ Think groundbreaking
■ Basic or Applied?
Degrees of Scientific Progress:
Large, groundbreaking progress tends to:
■ Tackle questions of broader significance
■ Be relevant to a number of different research areas
■ Shift how researchers conceptualize a topic
Small, incremental progress tends to:
■ Advance a specific question, limited in scope
■ Be relevant to a specialized area
Groundbreaking Research: The Basic Approach
What's an important phenomenon that we do not understand?
○ What's been holding us back from understanding it?
○ What are some new ways we can bridge that knowledge gap?
Important Basic Research Advancement Occurs:
-When a new theoretical model is developed that parsimoniously explains
a phenomenon
-When a key idea (an existing theory, assumption, piece of conventional
wisdom, etc.) is challenged
-When a new method is uncovered that can tackle previously-unexplored
questions
Need to Belong Theory (Baumeister & Leary, 1995)
People have a fundamental need for social connection,
similar to our need for food
● We need frequent, pleasant interactions with others
● We need relationships with those others, stable, enduring,
with concern for each other’s welfare
● When these needs aren’t met, we suffer
● We only need so many relationships (satiation)
Are Humans Inherently Self Interested? Yamagishi et al., 2014:
People played an online economic game where they
allocated real money to self vs stranger
○ Only 7% of respondents kept all the money
Sebastian-Enesco et al., 2013
Toddlers played a game with an adult
○ Prosocial option (both self and partner benefit) vs
selfish option (only self benefits)
○ Toddlers consistently chose prosocial option, even if
adult partner chose selfish option
Groundbreaking Research: The Applied Route
What's an important societal problem?
○ What's a factor that we think or know to be causing that problem?
○ What are some new ways we can solve that problem?
Approaches to applied advancement (4):
■ Interventions:
When a new exercise, treatment, way of thinking, etc. can be
implemented to help with a problem
Better Decision Making:
■ When making a certain kind of choice helps the problem
○ Persuasion:
■ When people can be convinced that something is the problem
○ Policy Implementation:
■ When there’s something that the government or another
organization can do to help solve the problem
The Wrong Way to Do a Lit Review:
■ Exhaustively find and read ALL the research that’s been done on
the topic
● Do not discriminate between different sources and journals –
give everything equal weight
■ Make sure your lit review reads like a list of previous work
● Don’t try to tie it together into a narrative – just read and
describe them study by study
■ Then, slap your own research idea on top of that
● Don’t use literature to inform your idea. Just tell us what they
did, and then what you’re going to do, without integration
Conducting a Literature Review
Reviewing existing literature is essential for:
● Figuring out what’s already done and what has not
● Informing your hypothesis
● Informing your design
■ In research, it’s often good to be unoriginal (don’t reinvent the
wheel)
low vs high quality sources (lit review)
■ You want to draw from experts' work first and foremost
■ Consider the source: Research can be everywhere
○ E.g. academic journals, textbooks, news articles,
blogs, pop-psych books, garbage social media posts
● Differ in Target audience: Scientific vs lay audience
● Differ in terms of originality: Primary sources (original research) vs secondary
sources (summaries of research)
○ Primary Sources: Place where the research was originally
published
■ Written for other experts; preferred by researchers because they
typically include the full methods and results of a study
● Cite claims that are made
What is a literature review?
A summary of the most relevant work that has previously been
published on your specific topic of interest
■ It is NOT an exhaustive list of everything ever done
Secondary Sources:
Summarizes information from primary sources
■ Typically written to be accessible to a
non-expert
■ Less preferred by researchers because they
may be: Incomplete or Inaccurate
-not peer reviewed (quality control problem)
Research ethics: Core principles 1
Core Principle 1: Respect for Persons (most important)
● Respect people’s autonomy with informed consent: Participants should always give free, informed and
ongoing consent
● Ways to Violate the Respect Principle: Failing to disclose risks:
■ Milgram study had this problem
○ Failing to make the study understandable
■ Legalese (complicated legal wording); improperly translated (misinformation); populations with diminished capacities (e.g. kids)
○ Coercive incentives: An offer they can’t refuse
Ethics Core Principle 2:
Core Principle 2: Concern for Welfare
The Benefits of the study must outweigh the risks
● Ways to Violate the Welfare Principle: Not doing everything possible to avoid risks
■ There are usually risk reduction steps available, and the onus is on the researcher to
take them
○ Conducting a bad study: Bad research wastes everyone’s time
Ethics core principle 3 (Tuskegee study example)
Core Principle 3: Seek Justice:
● The study should be conducted justly, meaning: participants must be compensated fairly, risks and benefits of the study must be distributed equitably across groups, and researchers must act with integrity
● Ways to Violate the Justice Principle: Unreasonably low compensation; one group participates and another group benefits; lying to the participants or to the scientific community
● Researchers must uphold ethical standards to maintain participants' trust in us
○ Tuskegee Study: No informed consent, Failure to mitigate risks, Benefits of study did not outweigh risks, Study continued when syphilis treatment already existed. 128 people died, 40 wives infected, 19 children born with syphilis
■ The study specifically recruited poor Black men and was deeply unjust
■ Long-Term Impact: Widespread distrust in medical research within Black communities
What is a Theory
A coherent explanation or interpretation of one or more phenomena
○ It is the answer to the “why” question. What is the mechanism?
○ Theories often deal with theoretical constructs:
■ Variables that can’t be directly observed
■ E.g. memory, costs/rewards, schizophrenia
What's a Phenomenon?
○ An established finding
○ Something we know to be true from repeated observation
○ Phenomenon: Over generations, species become increasingly well suited to new
climates
○ Theory: This occurs because of natural selection (Theory of Evolution)
Attachment Systems (relationships + family)
Attachment System (Bowlby, 1979)
○ Theory: We have a biologically based system that promotes attachment to
close others, often one person in particular (attachment figure)
● Origins of Attachment System: Babies are very weak and helpless
■ Staying close to parents in childhood promotes survival
○ Attachment system is adaptive because it promotes infant-caregiver
bonding
● Attachment in Adulthood: Pair bonding is also adaptive:
■ Children were traditionally more likely to survive with the help of
both parents
○ The attachment system, which we use to attach to parents in childhood,
transfers to romantic partners in adulthood
■ Fraley et al, 2005
● Activation of the Attachment System
○ When we are not distressed, there’s no attachment activation
○ When we are distressed, the attachment system activates and motivates
us to seek out our attachment figure
● Activation Theory Explains: New couples rely on each other for support and don't like to be apart. Couples show physiological stress symptoms when one partner travels for
work
○ Breakups are highly distressing and take about a month to recover from
○ People try to re-establish contact with loved ones during disasters
Perspective
A broad approach to explaining a phenomenon
● At what level are you measuring things?
● E.g. Developmental, biological, cognitive
■ Broader and vaguer than a theory
models
Models:
■ A precise explanation of a specific phenomenon
● E.g. Why are some people really clingy with their friends and
partners?
■ Narrower and more specific than a theory
What are theories helpful for
■ Organizing what we know
■ Making testable predictions
Conscientiousness
● Conscientiousness: The propensity to control impulses, be goal directed,
plan, delay gratification
hypothesis
hypothesis: Tentative statements about the association between
variables that can be directly tested
● Theoretical claim: Conscientious people are better at controlling
impulses
● Hypothesis: Conscientious people are less likely to cheat on their
romantic partner
Theory: Formality
■ Formality: How clearly specified is the theory? How specific and detailed are the components of the theory?
■ Informally described: Losses are more painful than gains
Theory: Scope
How broad a range of behaviours is captured by the theory?
● How broad is the theory?
○ How many phenomena does it attempt to explain?
Theoretical Approach
● What kinds of theoretical ideas is the theory constructed
from
● Are you trying to explain how something happens, or why it
happens?
Functional Theories:
■ Explaining something’s function or purpose (the why)
■ Evolutionary psychology contains many functional theories
Mechanistic Theories:
■ Explaining something’s mechanism (the how)
■ Neuroscience contains many mechanistic theories
Theories Are Working Truths
○ Testing hypotheses can modify theories: if the hypothesis is supported, the theory is supported
○ A single study cannot prove or disprove a theory
○ Replication is critical for confidence in a theory
Direct vs Conceptual replication
Direct Replication:
■ Repeating the study in the same manner
■ Increases confidence in the hypothesis
Conceptual Replication:
■ Using different methods to test the same research question
■ Increases confidence in theory
Open science movement: What creates false positives?
■ Incentives to publish: Academics are rewarded for publishing (with jobs, grants,
tenure, respect, etc.) which can motivate people to take
shortcuts
■ Assuming your effect doesn't exist, how likely are these results?
○ With α = .05, you should get a false positive 1/20 times
○ But if you test the same thing 20 different ways, you are very
likely to get at least one false positive
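The 1-in-20 figure comes from the conventional α = .05 threshold. A minimal sketch (plain Python, no external libraries; the loop values are just for illustration) shows how fast the chance of at least one false positive grows when a true null effect is tested many different ways:

```python
# Chance of a false positive on any single test at the usual threshold
ALPHA = 0.05  # i.e. 1 in 20

# If the null hypothesis is true, each independent test still comes up
# "significant" (a false positive) with probability ALPHA. Across n tests:
#   P(at least one false positive) = 1 - (1 - ALPHA) ** n
for n_tests in (1, 5, 20):
    p_any = 1 - (1 - ALPHA) ** n_tests
    print(n_tests, round(p_any, 2))  # at 20 tests, roughly a 64% chance
```

This is why running many analyses and reporting only the one that "worked" inflates false positives far beyond the nominal 5%.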
What's a p-value?
The probability that you would get these results if the
null hypothesis were true
What does it mean to "add or drop experimental conditions"?
It refers to changing the experimental setup by including or excluding certain conditions that were initially planned to be tested.
How do open materials and data improve research transparency?
Allow readers to see and test all dependent variables (DVs).
Readers can analyze data with and without covariates (can influence but not main interest).
Ensure that experimental conditions are visible and testable.
potential downsides of open sharing
Sharing data can have ethical implications.
direct replication
Direct replication: Conducting the exact same study again to see if you obtain the same results.
The best replication attempts use a very large sample (e.g., collected from across many labs).
Most journals publish these now, even if they fail.
Credibility Revolution
The movement has expanded to many other ways we can improve research practice.
Credibility is not just about statistical results.
How can we improve the validity of:
Our measures?
Our experimental manipulations?
Our samples? (e.g., greater diversity)
constructs
Variables that cannot be observed directly
■ E.g. traits, emotions, attitudes, abilities
conceptually vs operationally defining constructs
Conceptual Definition: Explains what a construct means in theory (e.g., happiness is a state of well-being and contentment).
Operational Definition: Specifies how the construct is measured or observed in practice (e.g., happiness is measured by self-reported satisfaction on a scale of 1-10).
Types of measurement
● Self report measures: Interviews or questionnaires
○ People report their beliefs, behaviour, history, etc.
● Behavioural measures: Observations of behaviour
○ Could be naturally occurring or lab induced
● Physiological measures: Assessment of bodily states
■ E.g. brain imaging (fMRI, PET); heart rate
Methodological Advances
New measurement options can become available with
new technology
Feasibility
Resource limitations (e.g. time, money) may constrain
your choice
Reliability (true score, obtained score)
■ Does your measurement consistently measure the same thing?
■ Accuracy: No measure is going to be completely accurate
○ E.g. scale will be slightly off, questionnaire scores
won’t be identical
● True Score: The real score on the variable
● Obtained Score: The score the measure gives
measurement error
-Difference between true score and obtained score
● Want to minimize measurement error: Does your measure give consistent results under the same conditions?
● E.g. if nothing changes, a scale should give the same weight, and questionnaire results shouldn't change if taken twice
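The obtained score = true score + error idea can be simulated in a few lines (plain Python; the true score, error size, and seed are invented for illustration), which also shows why repeated measurement helps:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_SCORE = 70.0  # the real score on the variable (hypothetical)
ERROR_SD = 5.0     # spread of the random measurement error (hypothetical)

def measure() -> float:
    """One obtained score: the true score plus random error."""
    return TRUE_SCORE + random.gauss(0, ERROR_SD)

one_shot = measure()
averaged = sum(measure() for _ in range(100)) / 100

# Averaging many obtained scores cancels much of the random error,
# so the mean typically lands far closer to the true score than a
# single reading does.
print(round(one_shot, 1), round(averaged, 1))
```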
test-Retest reliability
○ Test it with test-retest correlation
○ Same test is given twice with some time in between
○ Good for stable qualities (e.g. personality), not good
for temporary states (e.g. mood)
Parallel Forms Reliability:
○ Different forms of the same test used
Internal Consistency:
Test with split-half correlation:
■ Top half of questionnaire is compared to
bottom half
interrater Reliability
Multiple raters observe behavior to increase reliability
Validity
■ Does your measurement consistently measure the right thing?
■ How do you test the validity of a measure?
Face Validity:
Does the measure appear, at face value, to measure the
thing it's meant to measure?
content validity
○ Does the measure capture all the important facets of
the construct?
Criterion Validity:
○ Convergent Validity:
■ Does it correlate with similar variables?
○ Predictive Validity:
■ Does it predict expected outcomes?
Discriminant Validity
○ The measure does not strongly correlate with unrelated constructs.
○ Also known as Divergent Validity
○ If you don’t achieve discriminant validity, your measure is likely too general or broad
Grit Measure:
■ “You have a certain amount of intelligence, and you can’t really do
much to change it”
■ “Your talent is something about you that you can’t change very much”
■ “You can learn new things, but you can’t really change your basic
intelligence”
categorical data
■ Represent with Pie charts and bar graphs
■ Each value represents a discrete category
■ Order does not matter
Numerical Data
■ Represent with histograms and scatter plots
● Sometimes time series graphs, if data collected over time
■ Each value represents either a real number (e.g. age) or a place on
a continuum (e.g. a rating scale)
■ Order Matters
● E.g. Happiness where 1 = very unhappy, 7 = very happy
discrete vs continuous numerical data
■ Discrete: The variable has a discrete, finite number of values
○ E.g. day of the month on which you bought your last
avocado (only 31 possibilities)
■ Continuous: The variable has an infinite number of values
● Usually Assumed to be Normally Distributed
Time series graphs
■ A special kind of line graph that shows how something changes
over time
■ X-axis: Time, usually as a discrete variable
■ Y-axis: A continuous variable you care about
survey research
Survey research uses self-report
■ People are reporting on their own thoughts, feelings, behaviors, etc.
○ Survey research tries to obtain generalizable samples
■ Ideally large and random
Survey Advantages:
■ Can assess non-observable variables, as well as variables that you
cannot (ethically) manipulate
● Demographic information (e.g. sex, age, ethnicity)
● Attitudes and beliefs
● Past behaviour
● Current behaviour that cannot be observed
● Quick to administer and score
Can gather a lot of information
● Requires few resources
interviews
● Structured or unstructured:
● Costly
● Interviewer bias
● Social desirability concerns
phone survey
■ Phone surveys:
● Structured or unstructured
● (Used to be) easy to get random samples
○ Cell phones and telemarketing ruined that
● Cheaper
● Fewer social desirability concerns
questionnaires
● Paper or electronic
● Cheapest
● Fewest social desirability concerns
● We are focusing mainly on questionnaires
Survey Disadvantages
Accuracy may be low
● Participants may lack insight about certain variables
● May forget previous behaviour
● May respond in a socially desirable manner (i.e. lying)
The IV is not manipulated, so causation cannot be determined
● True of all correlational/non-experimental research
Developing Valid Survey Questions:
■ Each item should be BRUSO: Brief, Relevant, Unambiguous,
Specific, and Objective
■ A good survey item is Brief: Avoid long or run-on sentences, unnecessary words, technical terms, acronyms, and jargon
■ A good survey item is Relevant: Avoid the temptation to include lots of extra items “just in case” (Especially personal, “nosy” questions)
■ A good survey item is Unambiguous: Avoid vague or imprecise terms
● Avoid negative wording: “Do you disagree with the idea that parents should not
spank their children”
■ A good survey item is Specific: Avoid Questions that ask two things at once
■ A good survey item is Objective: Avoid leading questions, emotionally charged words
Open-Ended vs Closed-Ended Items:
Open-Ended Items: Allow participants to respond however they want
○ What is the most important thing in running a
business?
■ Closed-Ended Survey Items: Closed-ended questions give a limited number of responses
■ Closed-Ended Categorical Items: For categorical questions, simply provide a list of options
■ Closed-Ended Continuous Items: For continuous items, we use rating scales
○ Pick a number on a scale (e.g. 0 to 10)
● Key researcher decisions: How many points on the scale?
○ What are the anchors: The labels on the ends of the scale
● Likert Scale: Common type of rating scale used to assess degree
of liking or agreement
Simple Random Sampling:
■ Everyone in the population has an equal chance of participating
Stratified Random Sampling:
○ Important subgroups are identified:
● E.g., ethnicity, gender, age, income, etc.
■ Obtain a random sample of each subgroup to mirror the population
■ E.g., sample students from public, Catholic, and private elementary
schools in London
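The schools example can be sketched in plain Python (the population sizes, stratum names, and seed are invented): draw a simple random sample within each stratum, sized in proportion to that stratum's share of the population.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical population: student IDs grouped by school type
strata = {
    "public":   list(range(600)),        # 60% of the population
    "catholic": list(range(600, 900)),   # 30%
    "private":  list(range(900, 1000)),  # 10%
}
population_size = sum(len(students) for students in strata.values())
SAMPLE_SIZE = 100

# Random sample within each subgroup, sized so the overall sample
# mirrors the population's proportions (60/30/10 here)
sample = {
    school: random.sample(
        students, round(SAMPLE_SIZE * len(students) / population_size)
    )
    for school, students in strata.items()
}

print({school: len(chosen) for school, chosen in sample.items()})
# -> {'public': 60, 'catholic': 30, 'private': 10}
```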
Nonrandom Sampling
Not everyone in the population has an equal chance of
participating
● May lead to a biased sample: Characteristics differ from the population
● Often due to selection bias: Sampling procedures that favour certain characteristics
Convenience sampling:
● Use participants who are easily available
● Very common
● Limits external validity
Student Populations:
■ 81% of research participants are university students
■ Advantages:
● Easy access to students, free
● Educate students about research
■ Disadvantages:
● Less variability in age, education, intelligence, wealth
Internet Populations
■ Tend to: Have a lot of free time
● Be lower income
● Be more tech savvy
● Take research less seriously
Voluntary Participation
■ Ethically, participation must be voluntary, but this can affect external
validity
● Volunteer bias:
○ Volunteers are different than non-volunteers
○ More educated
○ Higher social class
○ Higher intelligence
○ Higher need for approval
○ More social
○ More “arousal-seeking”
○ Women volunteer more
Generalizability
■ Is random sampling always necessary?
■ Need to consider how much participant characteristics are likely to
affect results
● Sometimes, very much
○ E.g. political polling
● Sometimes, less so
○ E.g. Vision and reaction time
○ E.g. Mere exposure effect
What do employers want?
Communication skills, strong work ethic, sense of initiative, teamwork skills, interpersonal skills
Building an academic network
Advantages of an academic network:
■ Opportunities:E.g. scholarships, internships
■ Advocacy: E.g. reference letters
■ Mentoring and support
Getting to know your professors:
■ Office hours
■ Class discussions