Historical Context of research
Empirical approach (direct observation)
Sociocultural context of research
Influences researchers' choice of topics, society's acceptance of findings, and the locations in which research takes place.
Moral context of research
Demands that researchers maintain the highest standards of ethical behavior
Multi-method approach to research
Search for an answer using various research methodologies and measures of behavior
Independent variable
The experimental factor that is manipulated
Constructs
Concepts or ideas (Intelligence, depression, aggression, and memory)
Operational Definition
Explains a concept solely in terms of observational procedures used to produce and measure it
Reporting
What separates what you have observed from what you infer
Reliability:
Consistency; the stability of a measurement
Validity:
Accuracy; how well a test actually measures what it is supposed to measure, and whether we can make accurate decisions from it
Goals of the scientific method
Description (seek to describe events/relationship)
Prediction (hypothesis)
Explanation (understanding cause)
Application (Apply knowledge and research methods)
What is the IRB
Institutional Review Board
It reviews research proposals to ensure ethical considerations are met.
Minimal Risk in research is-
When procedures or activities in the study are similar to those experienced in everyday life
Informed Consent
A person explicitly expresses willingness to participate in a project based on a clear understanding of the research
Privacy
The right of the individual to decide how information about them is communicated to others
Steps for ethical compliance
1) Review the facts of the proposed research situation
2) Identify the relevant ethical issues
3) Consider multiple viewpoints
4) Consider alternative methods or procedures and their consequences
Why should we care about research?
Foundation of psychology, and relevant to everyday life
What do producers do in research?
Create information (researchers, academics, etc.)
What do consumers do in research?
"interrogate" information (therapists, teachers, etc)
Names of the Four Scientific Cycles
Theory-Data cycle
Basic-Applied research cycle
Peer-review cycle
Journal to journalism cycle
The Four Scientific Cycles:
Theory-Data cycle
Scientists collect data to test, change, or update theories
The Four Scientific Cycles:
Theory-Data cycle
Theory-
A statement that describes general principles about how variables relate to one another.
The Four Scientific Cycles:
Theory-Data cycle
Empiricism-
Collect data to figure out or challenge theory
The Four Scientific Cycles:
Theory-Data cycle
What makes a good theory
Supported by data
Falsifiable
Parsimonious (all things being equal, the simpler theory is better)
The Four Scientific Cycles:
Basic-Applied research cycle
Basic research:
Applied research:
Basic research: goal is simply to enhance general knowledge
Applied research: done with a practical problem in mind; the research will be directly applied
The Four Scientific Cycles:
Peer-review cycle:
Submission -> review -> feedback -> revisions -> review
The Four Scientific Cycles:
Journal to journalism cycle
Popular press picks up a topic in the field
Research vs. Experience-
Experience has NO comparison group
Ex: punching a punching bag to release anger vs. sitting quietly to release anger
The good story/it makes sense-
Cognitive bias
Punching a punching bag just makes sense
The present/present bias-
Cognitive bias
Can't remember times when you didn't punch a punching bag
Pop-up principle/availability heuristic-
Cognitive bias
We are more likely to base judgements on information that is more salient
Ex: fearing plane crashes more than car accidents because plane crashes receive more intense, memorable media coverage, despite being statistically less frequent.
Cherry picking the evidence/confirmation bias
Cognitive bias
Seeing what we want to see
Asking biased questions-
Cognitive bias
Being overconfident in research-
Cognitive bias
Confidence is not a good indicator of accuracy
Confounds:
Alternative explanations
3rd variable
Tuskegee Syphilis Study:
Government study from 1932-1972 which investigated effects of untreated syphilis on African American males.
They were told they were being treated, when they weren't.
Milgram obedience study:
Ethical considerations:
stressful to participants
lasting effects of study
Beneficence (ethics):
Cost-benefit analysis
Does it do more harm than good
Belmont report:
Respect for persons: informed consent
Beneficence: cost-benefit analysis for participants and society
Justice: how are participants selected?
APA ethical principles (5):
1) Beneficence and non-maleficence
2) Fidelity and responsibility
3) Integrity
4) Justice
5) Respect for people's rights and dignity
Deception:
Through omission:
Through commission:
Through omission: Leaving out info
Through commission: lying
Frequency Claim
A statement about how often or how much people do something (describes a single variable)
Association claims-
Positive:
Negative:
Curvilinear:
Zero:
Positive: as one increases the other increases, vice versa
Negative: As one increases, the other decreases
Curvilinear: inverted u; positive up to a certain point then negative
Zero: no relationship
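The four association patterns above can be illustrated numerically. A minimal sketch (the toy data and the helper `r` are invented for illustration, not from the notes) using NumPy's Pearson correlation:

```python
# Hypothetical toy data illustrating the four association patterns
# with Pearson's r (computed via NumPy's corrcoef).
import numpy as np

x = np.arange(-10, 11, dtype=float)      # values -10 .. 10

positive = 2 * x + 1                     # as x increases, y increases
negative = -3 * x                        # as x increases, y decreases
curvilinear = -(x ** 2)                  # inverted U: rises, then falls
zero = np.random.default_rng(0).normal(size=x.size)  # no relationship built in

def r(a, b):
    """Pearson correlation coefficient between two arrays."""
    return np.corrcoef(a, b)[0, 1]

print(r(x, positive))     # +1 (perfect positive)
print(r(x, negative))     # -1 (perfect negative)
print(r(x, curvilinear))  # ~0: r misses the strong curvilinear pattern
print(r(x, zero))         # no systematic relationship
```

Note that r is about zero for the inverted-U data even though the relationship is strong, which is why curvilinear patterns need a scatterplot, not just a correlation coefficient.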
Causal Claim:
One variable causes another
Dependent Variable:
What's being measured
Conceptual definition:
Researcher's definition of a variable at an abstract level
Construct validity:
How well the variables were measured and manipulated
External validity:
To whom or what can you generalize the claim?
Statistical validity:
How well does your data support the conclusion
Internal validity:
Are there alternative explanations for the outcomes
Co-variance
Show the statistical significance between both variables (As one changes, the other does too)
Temporal precedence:
A comes first in time, before B
Randomized experiment:
Manipulation of IV
Random assignment
Measurement of DV
Quasi experimental:
Manipulation of IV
Nonrandom assignment
Measurement of assumed DV
Non experimental:
Measurement of assumed IV
Nonrandom assignment
Measurement of assumed DV
Are survey and poll interchangeable?
Yes
Survey and poll:
Do you need a large sample or a high response rate?
Not necessarily; representativeness is more important.
Survey and poll:
Pre-testing:
Piloting a survey before the actual experiment; detects possible problems beforehand
Survey and poll:
Interviewing style:
Rigid vs. conversational; opportunities to clarify meaning can increase validity
Survey and poll:
Open-ended questions
Rich information, but requires coding
Survey and poll:
Closed/forced choice format
Respondent picks from options
Gathers the presence or absence of the constructs listed
Survey and poll:
Optimizing
The way we want people to process surveys. Search memory for relevant information & integrate into judgement; translate judgement into response.
Survey and poll:
Satisficing:
A revised response strategy that requires less effort; mental shortcuts used when people are fatigued or questions are too demanding. The respondent's tendency to give low-effort, "good enough" answers rather than doing the mental work needed for accurate, optimal responses.
Survey and poll:
Conversational conventions:
Norms and expectations about everyday conversations influence the interpretation of question and response meanings (EX: info presented at the beginning is unimportant, so people skim it)
Survey and poll:
Response alternatives in questions can...
Clarify the intended meaning or focus of a question; remind respondents of material they may not otherwise consider
Survey and poll:
Other considerations for developing questions for participants
Context matters (give info they THINK the researchers want)
Order effects
Adjacent questions/subsequent judgment
Social desirability
Acquiescence
Survey and poll:
Primacy & Recency effects-
the tendency to show greater memory for information that comes first or last in a sequence
Survey and poll:
Adjacent questions/ subsequent judgement-
The content of earlier questions influences the interpretation of later questions
Survey and poll:
Social desirability:
Desire to present oneself in a positive manner
Survey and poll:
Acquiescence
Tendency to endorse any assertion made in a question, regardless of its content
Writing Well-Worded Questions:
Leading question-
May prime or bias the respondent;
"Do you think that relations between Blacks and Whites...
Will always be a problem, or a solution will eventually be worked out (negative)
VS
Are as good as they're going to get or will they eventually get better (positive)
Writing Well-Worded Questions:
Double-barreled questions-
Asks two questions in one
Writing Well-Worded Questions:
Double negative
Difficult to interpret; "I wouldn't never do that to someone"
Writing Well-Worded Questions:
Question order matters?
Can provide meaning; can prime
Response Set Types
Yea-saying-
Saying 'yes' or 'strongly agree' to everything
Response Set Types
Nay-saying:
Saying 'no' or 'strongly disagree' to everything
Response Set Types
Fence-sitting
Answering in the middle
Correlational research-
Provides basis for making predictions
How are surveys used?
Assess people's thoughts, opinions, and feelings
Response rate bias
Threat to the representativeness of a sample that occurs when some participants selected to respond to a survey systematically fail to complete the survey (e.g., due to failure to complete a lengthy questionnaire or to comply with a request to participate in a phone survey).
Sampling-
Careful selection of participants that allows researchers to generalize findings from the sample to the population
Representativeness-
The ability to generalize from a sample to a population
Selection bias-
When procedures used to select the sample result in over-representation of some segment of the population
probability sampling-
Selecting people randomly from a list
Probability sampling- Selecting people randomly from a list
Sampling frame:
a list of all of the members of a population
Non-probability sampling-
Non-randomly selecting people
Probability sampling
Simple random sampling-
Each element of the population has an equal chance of being included in the sample
Probability sampling
Cluster sampling
randomly select clusters of participants from population, then include every element in that cluster
Probability sampling
Cluster sampling
Multistage sampling-
2 random samples are taken from population –
cluster from population, then sample from cluster
Probability sampling
Stratified random sampling-
Divide population into subpopulations, then randomly select participants
Probability sampling
Stratified random sampling- Divide population into subpopulations, then randomly select participants
Strata:
Subpopulations of interest (young, middle age, old)
Probability sampling
Stratified random sampling- Divide population into subpopulations then randomly select participants
Oversampling:
One or more groups are intentionally overrepresented
Probability sampling
Stratified random sampling- Divide population into subpopulations then randomly select participants
Why would a researcher use oversampling?
To improve the representation of small or minority subgroups that a simple random sample would capture too few of
Probability sampling
Systematic sampling
choosing every nth person in a population
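The probability-sampling schemes above can be sketched in code. A minimal illustration (the toy population, strata, and cluster labels are invented for the example) using Python's standard library:

```python
# Sketch of four probability-sampling schemes on a hypothetical population.
import random

random.seed(42)

# Toy population: 90 people tagged with an age stratum and a cluster (class).
population = [{"id": i,
               "stratum": ["young", "middle", "old"][i % 3],
               "cluster": i // 10} for i in range(90)]

# Simple random sampling: every element has an equal chance of inclusion.
simple = random.sample(population, k=9)

# Stratified random sampling: split into strata, then sample within each.
stratified = []
for stratum in ("young", "middle", "old"):
    members = [p for p in population if p["stratum"] == stratum]
    stratified.extend(random.sample(members, k=3))

# Cluster sampling: randomly pick whole clusters, keep every member.
chosen = random.sample(range(9), k=2)        # clusters are numbered 0..8
cluster = [p for p in population if p["cluster"] in chosen]

# Systematic sampling: take every nth person after a random start.
n = 10
start = random.randrange(n)
systematic = population[start::n]

print(len(simple), len(stratified), len(cluster), len(systematic))
# → 9 9 20 9
```

Stratified sampling guarantees each stratum appears in fixed proportion, whereas simple random sampling only does so on average; oversampling would just raise `k` for the small strata.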
Are larger samples better?
Not necessarily; not useful if not representative.
Generalizability:
How well does the relationship you found in your sample represent the relationship in the population?
Non-probability Sampling include:
Accidental sampling; quota sampling; purposive sampling; snowball sampling
Non-probability sampling-
Convenience/accidental samples
Data is collected from the cases at hand until the sample reaches a designated size