What are the four sources of knowledge?
- Authority
- Personal experience
- Tradition
- Intuition
Authority (source of knowledge)
I believe it’s true because Dr. Jones says it’s true
Personal experience (source of knowledge)
I believe it's true because I've experienced it
Tradition (source of knowledge)
I believe it because it's always been that way
Intuition (source of knowledge)
I believe it's true because I feel it
confounding variable
a variable that is related to both your IV and DV and may distort their true relationship
Empiricism
I believe it is true because I can measure it
Reasoning
I believe it is true because it is logically derived
Science is a continual interaction between these sources of knowledge
Reasoning and empiricism
What are the four objectives of science
Describe, explain, predict, control
What are the 5 tenets of science
Determinism, empiricism, replicability, falsifiability, parsimony
definition of research and why we do it
the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions
the scientific method
Assume a natural cause for the phenomenon
Make an educated guess about the cause
Test your guess
Revise your hypothesis
Re-test your guess
Draw a conclusion
What things do critical thinkers do
Avoid oversimplification
Consider alternative explanations
Tolerate uncertainty
Maintain an air of skepticism but be open-minded
N of one fallacy
drawing conclusions/generalizations from anecdotal evidence (a single case)
the difference between descriptive and explanatory research
Descriptive: fills in the research community's understanding of the initial exploratory research
Explanatory: attempts to connect ideas to understand cause and effect
the difference between quantitative and qualitative research, and be able to recognize examples
Quantitative Research:
Uses numerical data and statistical analysis
Identifies patterns, trends, and relationships
Provides objective and precise results
Qualitative Research
Analyzes non-numerical data (interviews, observations)
Focuses on understanding meaning, context, and subjective experience
Basic research
to enhance the general body of knowledge rather than address a specific, practical problem
Applied research
done with a practical problem in mind; the researchers conduct their work in local, real-world contexts
the steps of conducting research
Formulate hypotheses
Select appropriate IV and DV
Limit alternative explanations for variation
Manipulate IVs and measure DVs
Analyze variation in DVs
Draw inferences about the relationship
Research vs personal experience
Experience has no comparison group
Experience is Confounded
Research is better than experience
- example: research found that venting one's anger does not help one feel better
Research Results are Probabilistic
Its findings do not explain all cases all of the time
Availability heuristic
things that pop up easily in our mind tend to guide our thinking
Present/present bias
when we notice what is present but fail to look for absences, and so fail to consider the proper comparison group
Confirmation bias
the tendency to only look at information that agrees with what we already believe
Bias blind spot
the belief that we are less likely than other people to fall prey to these biases
When should we trust what authority figures tell us?
- if the authority systematically and objectively compared different conditions like a researcher
- if they've read good research and are interpreting it for you
- if they're basing their conclusions on empirical evidence
Know the basic publication process (including the peer-review process and blind review)
Scientists publish their research in journals, following a peer-review process that leads to sharper thinking and improved communication
Know the basic layout of a journal article and the purposes of each section
- Abstract
- Introduction
- Method
- Results
- Discussion
- References
Know the difference between independent and dependent variables.
Independent: the variable being manipulated in order to measure its effect on the dependent variable
Dependent: variable being predicted and measured
Recognize the difference between a measured and manipulated variable, and a conceptual and operational variable
measured: simply observed and recorded (dependent)
manipulated: controlled (independent)
conceptual: abstract concepts, sometimes called constructs
- must be carefully defined
operational: a measurable or manipulable variable, derived from the conceptual variable by operationalizing it
In terms of operationalization, understand the three levels of hypotheses
Conceptual - State expected relationships among concepts
Research - Concepts are operationalized so that they are measurable
Statistical - State the expected relationship between or among summary values of populations, called parameters
Know the difference between the three types of claims, and be able to recognize examples
Frequency claims: describe a particular rate or degree of a single variable (proportions)
- ex: 2 out of 5 Americans worry daily
Association claims: argues that one level of a variable is likely to be associated with a particular level of another variable (sometimes said to correlate)
- ex: people with higher incomes spend less time socializing
Causal claims: arguing that one of the variables is responsible for changing the other
- ex: music lessons enhance IQ
internal validity
how pure a test is; how well it ensures that a test is only looking at the relationship between A and B and not other variables
external validity
how well the results generalize to other people, settings, and times
content validity
a measure must capture all parts of a defined construct (e.g., capture all the reasons something is a good gym)
statistical validity
the extent to which a study's statistical conclusions are accurate and reasonable
What is power
The probability of NOT missing a real effect, i.e., of detecting an effect that actually exists (1 − β)
How do you increase power
maximize treatment
increase sample size
control environment
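The effect of sample size on power can be seen in a small Monte-Carlo sketch in Python (the function name and the simple two-group z-test are illustrative, not from the course):

```python
import math
import random

def estimate_power(effect=0.5, n=20, sims=2000, z_crit=1.96):
    """Monte-Carlo power estimate: the fraction of simulated two-group
    experiments (true mean difference = `effect` SDs) whose z-test
    comes out significant."""
    random.seed(1)  # deterministic for illustration
    hits = 0
    for _ in range(sims):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(effect, 1) for _ in range(n)]
        se = math.sqrt(2 / n)  # SE of a mean difference when SD = 1
        z = (sum(b) / n - sum(a) / n) / se
        hits += abs(z) > z_crit
    return hits / sims

small_n_power = estimate_power(n=20)
large_n_power = estimate_power(n=80)  # larger sample -> higher power
```

With the same true effect, quadrupling the per-group sample size sharply raises the chance of detecting it.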
Know the difference between Type I and Type II Error
Type I: false positive -- finding an association between two variables when no association exists
Type II: false negative, miss - error made if you don't have sufficient power for a study and miss finding a real effect/result that is actually there
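The Type I error rate can be checked by simulation: when the null is true (no real effect), a 5% significance threshold should produce false positives about 5% of the time. A minimal Python sketch (function name illustrative):

```python
import math
import random

def false_positive_rate(n=30, sims=4000, z_crit=1.96):
    """Simulate experiments in which the null hypothesis is TRUE
    (both groups drawn from the same distribution) and count how
    often a z-test still declares significance (a Type I error)."""
    random.seed(2)  # deterministic for illustration
    false_positives = 0
    for _ in range(sims):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        z = (sum(b) / n - sum(a) / n) / math.sqrt(2 / n)
        false_positives += abs(z) > z_crit
    return false_positives / sims
```

The rate lands near alpha = .05; a Type II error is the mirror image, a real effect the test fails to detect.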
Know what elements are necessary when making a causal claim
Covariance: the extent to which two variables are observed to go together determined by the results of the study
Temporal precedence: means that one variable comes first in time, before the other variable
Internal validity: a study's ability to eliminate alternative explanations for the association
Face validity
whether the measure seems to be a reasonable measure of the construct
Construct validity
How well the construct is operationalized (e.g., what counts as a "gym")
content validity
when you make sure you capture all the dimensions of the variable you are going to measure (gym example)
Understand the role of the IRB
Institutional Review Board
- this group reviews and approves studies at a university before the study is conducted
- interprets ethical principles and ensures research using human participants is conducted ethically
Know the 5 General Principles of the APA Ethics Code
- beneficence and nonmaleficence: take precautions to protect participants from harm and to ensure their well-being
- justice: treat all groups fairly
- fidelity and responsibility: establish relationships of trust
- respect for persons: people should be treated as autonomous agents
- integrity: teach accurately and truthfully
Understand the importance of confidentiality, informed consent, and avoiding multiple relationships.
limits of confidentiality -- suicidal ideation, homicidal ideation, suspected abuse of children, the elderly or disabled, court order from a judge
- informed consent: each person learns the project, its risks and benefits, and decides whether to participate
special populations in research that require extra protection
children, pregnant women, prisoners, etc.
Understand plagiarism
representing the ideas or words of others as one's own
What is deception in research
deception: withholding some details of the study from participants
what is the necessity of deception
to observe participants without them knowing it's a study so we can see their natural behaviors
what is debriefing and what is its purpose
takes place after deception to describe the nature of the deception and why it was necessary
- attempts to restore an honest relationship with participant
Data fabrication
inventing data to fit a hypothesis
what is data falsification
when results are influenced by selectively deleting observations or influencing subjects to act in the hypothesized way
Know the basics of conducting ethical animal research
the three R's
- Replacement: researchers should find alternatives to animals in research when possible (computer simulations)
- Refinement: researchers must modify procedures and other aspects of animal care to minimize or eliminate animal distress
- Reduction: researchers should adopt experimental designs and procedures that require the fewest animals possible
Understand the difference between qualitative (nominal) and quantitative variables
Qualitative (nominal) variables are categorical and describe qualities or characteristics.
eye color or gender
Quantitative variables are numerical and represent quantities or amounts.
Understand the difference between nominal, ordinal, ratio, and interval variables
nominal: names
ordinal: rank order of quantitative variables
interval: the numerals of a quantitative variable that meets two conditions: the numerals represent equal distance between levels, and there's no true 0
ratio: when the numerals have equal intervals and there is a true 0 (means none)
Pros and Cons of: Self-report vs. observational measures vs. physiological measures vs. open ended questions
self-report: operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview
observational: operationalizes by recording observable behaviors or physical traces of behaviors
physiological: operationalizes by recording biological data, such as brain activity, hormone levels, or heart rate
pros and cons of open-ended questions:
- pro: good amount of info, allows them to respond how they want
- con: how to code the info in a meaningful way, much more time consuming
fixed alternative
Multiple choice, yes/no, T/F
Likert
rating ATTITUDE or OPINION of participants
Rating scale
rating FREQUENCY or AMOUNT of behavior
social desirability responding
giving answers that make respondents look better than they really are because they are shy or embarrassed
- can be avoided by anonymity
response sets
response sets: aka non-differentiation -- a type of shortcut respondents can take: rather than thinking carefully about each question, they answer all of them the same
- can be avoided by forced-choice questions or attention checks
Test-retest reliability
if the test is taken twice, how close are Time A and Time B scores?
interrater reliability
whether two observers give consistent ratings to a sample of targets
Internal reliability (internal consistency)
a study participant gives a consistent pattern of answers, no matter how the researcher phrases the question
- to see how well the items group together
- look at Cronbach's alpha to show how strong your scale is
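Cronbach's alpha can be computed directly from item scores; a minimal Python sketch (the example scores are made up):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per scale item, same respondents in order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)"""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]
    item_var = sum(pvariance(i) for i in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three items that track each other closely -> alpha close to 1
item1 = [1, 2, 3, 4, 5]
item2 = [2, 2, 3, 4, 5]
item3 = [1, 3, 3, 4, 5]
alpha = cronbach_alpha([item1, item2, item3])
```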
Criterion Validity
how well your measure correlates with a relevant behavioral outcome (collect data showing the measure correlates with expected behavioral outcomes)
Convergent validity
how strongly your measure correlates with other measures of the same or similar constructs
Divergent validity
how weakly your measure correlates with measures of different, unrelated constructs
reliability vs validity
Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).
Know the ways psychologists might measure behavior in an observational study
Observational Measures
behavior is observed
Then it is coded by researchers
Physiological measures
Objective measurements are taken
Biomarkers, EEG, brain scans, BMI, heart rate
Know the concept of a p value
the probability of obtaining a result at least this extreme if the null hypothesis were true (i.e., if there were no real effect); a small p value (conventionally p < .05) indicates the result is unlikely to be due to chance alone
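One way to make the idea concrete is a permutation test, which estimates a p value directly: shuffle the group labels many times and count how often chance alone produces a gap as large as the one observed. A minimal Python sketch (data made up):

```python
import random

def permutation_p(a, b, sims=5000):
    """Two-sided permutation p value for a difference in group means."""
    random.seed(3)  # deterministic for illustration
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    extreme = 0
    for _ in range(sims):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        extreme += abs(sum(x) / len(x) - sum(y) / len(y)) >= observed
    return extreme / sims

# Groups that barely overlap -> shuffled data rarely match the gap -> small p
p = permutation_p([5, 6, 7, 8, 9, 10], [1, 2, 3, 4, 5, 6])
```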
Know the difference between a scale, an inventory, and a test, and be able to recognize examples of each
Scale: how variables are defined or organized
Inventory: any checklist, questionnaire or personality measure
Test: an objective and standardized measure of an individual's mental and/or behavioral characteristics
Why does question order matter and what are order effects?
- matters because being exposed to one condition changes how participants react to the other condition
- order effects: happen when exposure to one level of the independent variable influences responses to the next level
- in a within-groups design, order effects are a confound
- practice effects: a long sequence makes participants better with practice, or bored or tired, toward the end
- carryover effects: some form of contamination carries over from one condition to the next
Know how negative wording, leading questions, and double barreled questions may affect survey results
- negative wording: negative phrasing can cause confusion and reduce the construct validity
- double-barreled question: asks two questions in one - have poor construct validity because people might only respond to one part of the question
- leading question: one whose wording leads people to a particular response
Understand the pros and cons of open-ended questions vs. forced-choice questions
open-ended
pros: spontaneous, rich information
cons: responses must be coded and organized, which is time consuming and difficult
forced choice
pros: easy to code and efficient
cons: wording and order of questions are much more important
Know how to avoid survey issues
- be careful with question wording
- avoid double-barreled, leading, and negative wording questions
Understand some accuracy limitations of self-reporting
- can be inaccurate unintentionally
- people may not be able to accurately explain why they acted as they did
Why might an observational study design be better than a self-report measure in some cases?
- people can't always accurately report the reason behind their behaviors
- people's memories about events in which they participated are not very accurate
Observer bias
researcher expectations influence their interpretation of a group's behavior
observer effect
researcher behavior influences the behavior of the group
reactivity
a change in behavior (for better or worse) when study participants know another person is watching
ways to prevent observer bias and effect
- train observers well, develop clear rating instructions, use multiple observers, use a masked design
ways to avoid reactivity and effects
blend in -- make unobtrusive observations
use a one-way mirror
wait it out
measure the behavior's results
Biased sampling
unrepresentative
easy to get, cheap, fast, convenient
unbiased sampling
Representative
random
takes more work, time and/or money
simple random sample
use a basic form of random selection from a population to get your participants
probability sampling
a technique for which you can specify the probability that a given participant will be selected from a population
allows broader generalizations
non-probability sampling
it is impossible to specify the probability of selecting any one individual
the sample may or may not be representative of the population
limited external validity but can test specific theories
systematic sampling
population size divided by sample size gives you a number k; then, from a random starting point, you select every kth individual
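The every-kth-individual procedure, as a short Python sketch (function name illustrative):

```python
import random

def systematic_sample(population, sample_size):
    """Pick a random starting point, then select every kth individual,
    where k = population size // sample size."""
    k = len(population) // sample_size
    start = random.randrange(k)
    return population[start::k][:sample_size]

random.seed(0)  # deterministic for illustration
chosen = systematic_sample(list(range(1, 101)), 10)  # every 10th person
```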
cluster sampling
take a group of clusters (e.g., schools or counties) from a state, then use all individuals in those clusters
multistage sampling
take a group of clusters, then take a random sample from within those clusters; not all individuals are used
stratified random sampling
deliberately recruiting a certain percentage of participants from each of several subgroups (strata)
convenience sampling
using whatever participants are easily available
purposive sampling
convenience sampling in which the goal is to select participants with particular characteristics
quota sampling
convenience sampling in which the goal is to select participants with particular characteristics until you have enough
snowball sampling (or referral sampling)
involves including participants in the sample who have been referred by other participants
random assignment vs random sampling
sampling involves selection of overall participants
assignment involves selection to groups
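The distinction in code (names illustrative): sampling decides who enters the study; assignment decides which condition each participant gets.

```python
import random

random.seed(4)  # deterministic for illustration
population = [f"person_{i}" for i in range(1000)]

# Random SAMPLING: selecting participants from the population
# (supports generalizing back to the population -- external validity)
sample = random.sample(population, 40)

# Random ASSIGNMENT: placing the sampled participants into conditions
# (equates the groups -- internal validity)
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:20], shuffled[20:]
```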
Choosing a sample size
how much money, time, resources, and person power you have; expected effect size; type of analysis; variability of the data; number of conditions; diminishing returns
Bivariate correlations measure
an association that involves exactly two variables (measured at the same time in the same group of people)
use r to describe the association
use a t test when there are two categorical levels
use an ANOVA (F test) when there are three or more categorical levels
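The correlation coefficient r itself is simple to compute; a minimal Python sketch (data made up):

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance scaled by both SDs, so -1 <= r <= 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfect positive: 1.0
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])  # perfect negative: -1.0
```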
how to avoid effects of outliers in our results
data transformation (e.g., taking square roots of values)
changing the score (e.g., to the next-highest value plus one, or to the mean plus two SDs)
remove cases
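The three options, sketched in Python on made-up scores:

```python
import math
from statistics import mean, pstdev

scores = [10, 12, 11, 13, 12, 95]  # 95 is an outlier

# 1. Data transformation: square roots compress extreme values
transformed = [math.sqrt(s) for s in scores]

# 2. Changing the score: cap at the mean plus two SDs
cap = mean(scores) + 2 * pstdev(scores)
capped = [min(s, cap) for s in scores]

# 3. Removing the case entirely
trimmed = [s for s in scores if s <= cap]
```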