1st Challenge in Going from Notions to Numbers
Judgment Phase - Are participants thinking about the same question as the researcher?
2nd Challenge in Going from Notions to Numbers
Response Translation Phase - Can participants translate internal psychological states into a value on a response scale?
Perspective Taking
Stepping into the shoes of your average participant and imagining how they will interpret a question
Focus Group
When a small, representative sample of participants from the group of interest meets to discuss their experiences
Open-Ended Questions
Allowing people to respond in their own words
Pros of Open-Ended Questions
- Refine structured rating scales
- Use as primary source of data
- Code in a number of ways
Cons of Open-Ended Questions
- Coding is resource intensive
- Reverts the data to numbers that participants could simply have reported directly, if asked
- May be vague and not provide much of an answer
Structured Self-Report Scales
People answer on a (usually) 1-7 scale, where each value represents the extent to which the question applies
Pros to Structured Self-Report Scales
- Greater Reliability
- Establish the judgmental context
- Protects participants' anonymity
Cons to Structured Self-Report Scales
- Limits contextual information and richness
- Tricky to assess sensitive questions
- Must take great care to develop items
Anchors
The words that label each end of the scale
True or False - Anchors are ALWAYS placed only at the ends of the scale
False
True or False - When asking a question, you should keep it simple and clear
True
Things to Avoid with Writing a Question
- Negations (“Did you not help due to safety concerns?”)
- Jargon (“What prosocial behaviors do you engage in regularly?”)
- Double Negatives (“No one should feel guilty for not helping”)
- Double Barreled Questions (“How much do you like giving advice to fix other people’s problems?”)
- Forced Choice Questions (“Is it more important for you to volunteer your time or donate to charity?”)
True or False - Open ended questions are great for NEW areas of research, while structured self-report questions are good for contributing to ESTABLISHED research
True
Double-Barreled Question
Unintentionally asks 2 things at once.
EX: "Was her confrontation of the offensive remark helpful?"
Asks: Was the remark offensive?
Was her confrontation helpful?
Forced-Choice Question
Intentionally makes you choose between 2 or more options
EX: “Would you rather help by donating time or money?”
Better: Ask about each option separately ("How likely are you to donate to ___?")
Questions to ask when creating a response scale
What are the types of scales we may use?
How many numbers to use?
What anchors will you build into your scale?
What is the best numbering system?
True or False - Within rating numbers, giving too few numbers leads to indecision and less precision, while giving too many is crude.
False
Unipolar Numbering System
Ratings begin at a very low value and move up to a subjective maximum point on the dimension of interest
Bipolar Numbering System
Ratings capture a quality that can deviate in both directions from a zero point
When combining items into scales, you should…
- Think about what you want to measure
- Write a lot of questions
- Use data analysis to identify the best items
True or False - When creating a scale, you SHOULDN’T avoid questions that don’t yield variance
False
Floor Effects
When scores tend to be at the bottom of a scale
Ceiling Effects
When scores tend to be at the top of a scale
Factor Analysis
Determines if all items on a measure assess the same psychological construct or if some items are influenced by a different construct
Tells us how many factors/components/subscales are present
True or False - .3 or above is good for Eigen values (factor analysis)
True
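A minimal sketch of the eigenvalue step of a factor analysis, using only NumPy and simulated item responses (the two latent constructs, item counts, and noise levels are invented for illustration; real analyses typically use a dedicated package such as factor_analyzer or SPSS/R):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Two hypothetical latent constructs (e.g., "helping attitudes" and "donation attitudes")
latent_a = rng.normal(size=n)
latent_b = rng.normal(size=n)

# Six items: 1-3 driven by construct A, 4-6 by construct B, plus noise
items = np.column_stack(
    [latent_a + rng.normal(scale=0.6, size=n) for _ in range(3)]
    + [latent_b + rng.normal(scale=0.6, size=n) for _ in range(3)]
)

corr = np.corrcoef(items, rowvar=False)        # 6 x 6 inter-item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # largest first

print("Eigenvalues:", np.round(eigenvalues, 2))
# A common rule of thumb (Kaiser criterion): keep components with eigenvalue > 1
print("Suggested number of factors:", int(np.sum(eigenvalues > 1)))
```

With this simulated data the sketch recovers two factors, matching the two constructs built into the items.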
Statistical Reliability
Cronbach’s alpha determines inter-item reliability within the same scale
Tells us if our scale is reliable, and is used on only 1 factor/component/subscale
True or False - .7 or above is good for Cronbach’s alpha (statistical reliability)
True
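Cronbach's alpha can be computed directly from the item variances and the variance of the total score. A small sketch with made-up scores (the participant x item matrix below is purely hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x items matrix of scores for ONE factor/subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 5 participants x 3 items on a 1-7 scale (invented for illustration)
scores = np.array([
    [6, 7, 6],
    [4, 4, 5],
    [2, 3, 2],
    [7, 6, 7],
    [3, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # .70 or above is conventionally "good"
```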
Pseudo-Experiment
Tests a claim about a variable by exposing people to the variable of interest and then noting that these people feel, think, or behave as expected
EX: “Eating Fruit Loops causes kids to grow”
Control Group
Used to assess effects of experimental manipulation
EX: Group A - Fruit Loops, Group B - Other cereal
Pretest-Posttest
Data collected from two presumably comparable groups of participants, before and after one group receives a manipulation
Selection Bias
A threat to External Validity in which people are sampled from unrepresentative groups because of imperfect sampling techniques
Fix - Random sample and random assignment
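The fix distinguishes random sampling (who gets into the study, supporting external validity) from random assignment (who gets which condition, supporting internal validity). A quick sketch of both, using a hypothetical population list:

```python
import random

random.seed(42)

population = [f"person_{i}" for i in range(10_000)]  # hypothetical population

# Random SAMPLING: who gets into the study at all
sample = random.sample(population, k=100)

# Random ASSIGNMENT: which condition each sampled person receives
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:50], shuffled[50:]

print(len(treatment), len(control))  # 50 50
```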
Non-Response Bias
A threat to External Validity in which respondents themselves are the source of bias by not answering.
EX: People who choose to answer surveys systematically differ from people who choose not to do so
Fix - Encourage participants to participate through rewards, clarity, etc
Mere Measurement Effect
A threat to External Validity in which there’s a tendency for participants to change their behavior simply because they have been asked how they will act in the future
Fix - A control group not asked about behavior
History
A threat to Internal Validity in which there’s changes that occur across the board in a very large group if people. Unrelated to the IV, but may appear as a treatment effect.
EX - Thanksgiving, weather, etc
Maturation
A threat to Internal Validity in which there’s changes that occur over time in a specific person or group of people due to normal development or experience (growth & learning)
Fix - Control group
Hawthorne Effect
A threat to Internal Validity, and a subset of "studying people can change people," in which workers increase productivity when they think they are being studied, which may be mistaken for a treatment effect
Testing Effects
A threat to Internal Validity, and a subset of "studying people can change people," in which most participants perform better on a test the second time they take it. Tests - learning and strategy. Physical tasks - practice. Attitudes - polarization and desirable responses.
1st Testing Effects Fix
Hold a true experiment with pre-tested control group
Separate testing effects from the treatment effect.
2nd Testing Effects Fix
Eliminate the pretest
If participants are randomly assigned to treatment or control, assume any difference after treatment is due to the treatment.
3rd Testing Effects Fix
Wait as long as possible to administer posttest
Participants might forget what they learned.
Best to do when having a control group is unethical.
Regression to the Mean
A threat to Internal Validity in which people who receive extreme scores on a measure tend to score closer to the mean on a later test.
True or False - Regression to the mean is caused by chance.
True
True or False - You can have both testing effects and regression to the mean at the same time
True
Testing - Low and high scores both go up
Regression - Low scores go up, high scores go down
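Because regression to the mean is driven by chance, it can be reproduced with a simple simulation: scores are true ability plus luck, and people who were extreme on the first test drift back toward the mean on the retest even though nothing was done to them (all numbers below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

true_ability = rng.normal(100, 10, size=n)
test1 = true_ability + rng.normal(0, 10, size=n)  # score = ability + luck
test2 = true_ability + rng.normal(0, 10, size=n)  # new luck on the retest

extreme_high = test1 > np.percentile(test1, 95)   # "extreme" scorers at time 1
print(round(test1[extreme_high].mean(), 1))       # well above 100 at time 1
print(round(test2[extreme_high].mean(), 1))       # closer to 100 at time 2, by chance alone
```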
Confound
Something that should have remained constant, but instead varied. Changes systematically with the IV
A threat to Internal Validity
Confounds Co-vary (with the IV)
Artifact
Something that should have varied, but instead remained constant. A who/what/when that represents a restricted context in which the IV impacts the DV.
A threat to External Validity
An Artifact puts a Boundary on our Conclusions
True or False - Mere Measurement Effects are very common in market and consumer research
True
Experimental Mortality and Attrition
A threat to Internal Validity in which there’s failure of some participants to complete the study.
Homogeneous Attrition
A threat to External Validity in which there’s an equal level of attrition across all conditions. The people that remain are likely to be different from the previous total - sample change.
Heterogeneous Attrition
A threat to Internal Validity in which attrition rates are noticeably different in two or more conditions. It erases normal benefits of random assignment.
Fixes for Attrition
- Carefully consider procedures and materials
- Do everything within reason to keep people from dropping out of a study through information & enthusiasm, inoculation, breaks, and increasing incentive.
Experimenter Bias
Experimenters’ expectations about their studies bias their experimental observations
Fix - Double-blind procedure, in which both experimenter and participants are kept unaware of the participants’ treatment condition.
1st Type of Experimenter Bias
When experimenters see what they expect to see, while others don't
2nd Type of Experimenter Bias
Experimenters treat participants differently and according to their expectations
Participant Reaction Bias
Occurs when people realize they’re being studied and behave in ways they normally wouldn't
Meet Expectations
A part of Participant Reaction Bias in which participants try to please the experimenter, feel normal, and perform their "duties" as a participant; they usually guess the wrong hypothesis.
Demand Characteristics
A part of “Meet Expectations” within Participant Reaction Bias in which it’s the characteristics of an experiment itself that subtly suggest how people are expected to behave.
Disconfirm Expectations - Participant Reactance
A part of Participant Reaction Bias in which there’s a tendency of participants to try to disconfirm an experimenter’s hypothesis
Look Good - Evaluation Apprehension
A part of Participant Reaction Bias in which people’s concerns about being judged favorably or unfavorably by another person affects their behavior
3 Ways of Reducing Participant Reaction Bias
- Reduce evaluation apprehension
- Unobtrusive Observation
- Indirect Measures
Reduce evaluation apprehension
The first way of reducing participant reaction bias in which you a) increase anonymity and b) use a cover story
Unobtrusive Observation
The second way of reducing participant reaction bias in which you keep participants unaware that they are being observed, through either a) hidden cameras, or b) a bogus pipeline (getting a pre-measure)
Indirect Measures
The third way of reducing participant reaction bias in which you assess how people judge others' behaviors indirectly, e.g., through subtle questions or by measuring reaction time (IAT)
Non-Experimental Design
Does - Attempt to understand and interpret behavior
Doesn’t - Manipulate a variable, random assign, have control over who receives a treatment or at what time, nor searches for the cause of a behavior
Archival Research
The first type of Non-Experimental Design in which you examine naturally existing public records to test a theory or hypothesis.
Hospital records, marriage licenses, political speeches, etc
True or False - You should use archival research when studying ethically sensitive topics, when variables are hard to manipulate, and when you want to generalize
True
Pros to Archival Research
- Potential for high external generalizability
- No participant reactance or social desirability
Cons to Archival Research
- Difficult to have high internal validity
- Many potential third variables / confounds
- Data sets missing variables of interest
- Reverse causality
Case Studies
The second type of Non-Experimental Design in which you make careful analyses of the experiences of a particular person or group
True or False - Case studies try to explain unusual events by NOT relying on established scientific principles
False
Pros to Case Studies
- Usually the first step in research on a topic
- Uncovers general psychological principles
- Involves people with rare conditions
- Studies experience that would be difficult or impossible to recreate in the lab
Cons to Case Studies
- Cannot test causality or use statistical analysis
- Limits on external generalizability
Single Variable Research
The third type of Non-Experimental Design in which a study is designed to describe some specific property of a large group of people. Typically descriptive.
True or False - Single Variable Research can test causality
False
Population Survey
A subset of Single Variable Research in which the goal is to represent THE population of interest by asking lots of people that are representative of the full population
Census
The first type of Population Survey (Single Variable Research) in which there’s a body of data collected from every member of a population of interest
Survey
The second type of Population Survey (Single Variable Research) in which you identify a subset of people in the population and then use their answers to estimate answers of the entire population.
True or False - Surveys use random selection, as the goal is to select the right sample in order to describe the POPULATION
True
Cluster Sampling
A subset of Surveys in which a modified version of random selection is used:
1) Create a manageable list of all possible locations to find members of a population
2) Select a reasonable number of locations from total list of all locations
3) Randomly sample people from each selected location
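A sketch of the three steps with hypothetical schools as the locations (the names and sizes below are invented for illustration):

```python
import random

random.seed(7)

# Step 1: a manageable list of locations where population members can be found
# (hypothetical schools, each with a hypothetical student roster)
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(500)] for i in range(200)}

# Step 2: select a reasonable number of locations from the full list
chosen_schools = random.sample(list(schools), k=10)

# Step 3: randomly sample people from each selected location
sample = [
    person
    for school in chosen_schools
    for person in random.sample(schools[school], k=30)
]

print(len(sample))  # 10 schools x 30 students = 300 participants
```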
Sampling Error (margin of error)
A subset of Surveys that reflects the likely discrepancy between the results we get in a specific sample and the results we probably would've gotten from the entire population
True or False - Sampling error is only positive
False
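For a sample proportion under simple random sampling, the 95% margin of error is commonly approximated as z * sqrt(p(1 - p) / n). A small sketch with made-up poll numbers; the discrepancy can fall in either direction, which is why the statement above is false:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (simple random sampling assumed)."""
    return z * math.sqrt(p * (1 - p) / n)

# E.g., 52% of 1,000 respondents favor a policy (numbers invented for illustration)
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe * 100:.1f} points")  # the true population value could be higher OR lower
```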
Public Opinion
A subset of Surveys where research is designed to determine the attitudes and preferences of specific populations. Used by Marketing Research to determine preferences for products and services
Epidemiology
A subset of Surveys in which there’s scientific study of the causes of disease. Within psychology, they’re descriptive studies that focus on the prevalence of psychological disorders within meaningful, well-defined populations
Observational Research
The fourth type of Non-Experimental Design in which investigators record the behavior of people in their natural environments
Data Collection
The first step of Observational Research in which collecting data should be unobtrusive (don’t interfere with natural behavior and people don’t know they’re being studied), and should avoid selective perception
Data Analysis
The second step of Observational Research in which large amounts of data are examined thoroughly, searched for patterns, and used to generate hypotheses to test with experiments
Pros of Observational Research
- Describes real behavior in natural settings (generalize)
- Shows how behavior unfolds over time
Cons of Observational Research
- Qualitative, relies on subjective judgment
- Doesn’t tell us how one variable influence another
Similarities between Observational and Archival Research
- Behaviors observed are always real
- Researchers do not manipulate anything
Differences between Observational and Archival Research
- In observational, researchers make the observations rather than them already being made, increasing control over what they observe and how they observe it
- Observational is usually on a smaller scale than archival
Correlational Studies
The fifth type of Non-Experimental Design in which you gather observations about a group of people and test for associations between variables. An indicator of the strength of association between two variables
True or False - The range for correlational studies is -1 to 0 and 0 to 1, but never 0
False
True or False - Theoretical conclusions are easy to get from Correlational Studies.
False
True or False - Correlation does not equal Causation.
True
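A quick sketch of computing a correlation coefficient r on simulated data (the variables and effect size are invented); the value always falls between -1 and +1, and by itself says nothing about causation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

hours_volunteered = rng.normal(5, 2, size=n)                     # hypothetical variable 1
wellbeing = 0.5 * hours_volunteered + rng.normal(0, 2, size=n)   # hypothetical variable 2

r = np.corrcoef(hours_volunteered, wellbeing)[0, 1]
print(f"r = {r:.2f}")  # positive but not perfect; association, not causation
```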
Person Confound
Occurs when a variable seems to cause something because people who are high or low on the variable also happen to be high or low on some individual difference variable that is associated with the outcome variable of interest
Environmental Confound
Similar to person confounds except it refers to situational rather than personal variables
Events outside of the person mimic the influence of the IV on the DV
Operational Confound
Occurs when a measure designed to assess a specific construct inadvertently measures something else as well. Deals in how items are operationally defined
Easy to spot when the other variable has nothing to do with the IV and has a lot to do with the DV
Reverse Causality
What you think is the DV might actually be the IV
Longitudinal Studies
The sixth type of Non-Experimental Design in which you follow people over a long period of time and make repeated assessments of the variables of interest