Knowledge through tradition + limitations
-passed on through SOCIALIZATION, culture, institutions, common sense knowledge
-cumulative knowledge
-we often want to conform with collectively agreed-upon knowledge and principles, which may limit inquiry in favour of social harmony
-can create prejudice, closed-mindedness, cultural relativism
Knowledge through authority + limitations
-from those holding status or expertise
-trusting EXPERTS' judgements can help inquiry and give more confidence that info is correct
-authority figures can make errors or misuse authority
-advertising industry commonly misuses authority to sell products
Knowledge through rationalism
-knowledge through logic and reasoning
-premises stated and logical rules followed to arrive at sound conclusions
-errors can be made in the premises
-if one does not know formal logic rules, it is easy to make errors
Science
the systematic study of structure and behavior of physical and natural phenomena through experimentation and observation
-builds knowledge through TESTABLE explanations and PREDICTIONS
-protects from errors of "common sense knowledge" and individual knowledge acquisition
3 features of science
1. systematic empiricism
2. empirical questions
3. public knowledge
Systematic empiricism
-general scientific approach
-knowledge via experience
-meticulous form of observation involving planning, testing, and analysis
empirical questions
questions about the way the world is
-can be answered through testing/observation
Public knowledge
-when research is published, it can be accessed to create public knowledge
-publication helps expand CUMULATIVE knowledge in the field
-also allows science to be SELF-CORRECTING so that it actually reflects how the world works
Scientific enterprise consists of...
theory, data collection, data analysis
Theory
explanation for observations that relate to a specific facet of life (ex. gender, crime, etc.)
-describes what IS, not what should be; no value judgements unless based on agreed-upon criteria
-explains WHAT is observed and WHY it is so
Social regularity
social research aims to find patterns in human behavior and social life, things like social norms create predictable regularities
Aggregate
-combined units that lose individual detail, identifies broader patterns
-ex. looking at AVERAGE income, not individual income (can vary for many reasons)
-social theory deals with AGGREGATED behavior, explains REGULAR patterns to understand systems by which we operate
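The idea of aggregation can be sketched in a few lines of Python (the incomes below are made-up values, not data from the source):

```python
# Hypothetical incomes for five individuals (made-up values).
incomes = [32_000, 48_500, 51_000, 75_000, 29_500]

# Aggregation discards individual detail: the mean describes the
# group as a whole, not any one person in it.
average_income = sum(incomes) / len(incomes)
print(average_income)  # 47200.0
```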
Quantitative vs categorical variables
-quant measured by assigning numbered values to individuals
-categorical is a quality, measured by assigning a label to each (ex. country of birth)
-this is distinct from qualitative
Dialectics in social research
Refers to tension between explanations:
1. Idiographic and nomothetic explanations
2. Inductive and deductive theory
3. Quantitative and Qualitative data
-dictate and underlie varying research approaches
Idiographic
discovering UNIQUE characteristics about a group, concerns interpretations and meanings by that group
-QUALITATIVE
-ex. studying street gangs, in-depth life study
Nomothetic
uses general principles that are generalizable beyond the group of study
-ex. studying street gangs, what neighbourhood factors influence probability of joining a street gang
Inductive
-not based on prior theory, large, particular observations yield theories that explain them
-specific observations to general explanations of orderly patterns
-finding -> theory
Deductive
-specific predictions that begin with general statements
-theoretically expected pattern, observations test it
-theory -> finding
Hypothetico-deductive method
-primary way researchers use theories
-beginning with phenomena and then either using an existing theory to explain it or constructing one
-make prediction about new phenomenon that should be observed if theory is correct (hypothesis)
-conduct study then reevaluate theory
-works in a cycle, allows for revisions and creations of new studies
Quantitative data
numerical data, can be aggregated, compared, averaged, and summarized
-common in experiments and surveys
-ex. correlation btwn years of education and income
-typically DEDUCTIVE, tests theory
Qualitative data
-non-numerical data, in depth information, captures complexity of social phenomena
-common in interviews and participant observation studies
-ex. experiences of ethnic minorities with workplace discrimination
-typically INDUCTIVE, generates theory
Pseudoscience
beliefs that claim to be science but are not
-lacks one or more of the three features of science (systematic empiricism, empirical questioning, public knowledge)
-may not be falsifiable (framed so that it cannot be empirically tested)
3 goals of science
-to DESCRIBE: making careful observations
-to PREDICT: by regularly observing that two events are systematically related, we can predict when one may be followed by the other under certain conditions
-to EXPLAIN: determining causes of behavior
Basic research
conducted for achieving an accurate and detailed understanding of behavior without trying to SOLVE problems
Applied research
addresses a practical problem, can influence policy, laws, etc.
Heuristics
mental shortcuts we rely on to form and maintain beliefs
-if something makes intuitive sense or is a widely shared belief, we typically accept or reject it on that basis instead of assessing the info systematically
-we tend to focus on cases that confirm our beliefs (confirmation bias)
Research literature
published research in a field
-tells you if a research question has been answered
-evaluates relevance of research question
-gives ideas on how to conduct studies
-tells how your study may fit into literature
Professional journals
-periodicals that publish original research articles, published in issues containing several articles
Empirical research reports
-one of two common types in professional journal articles
-describe one or more new empirical studies conducted by the authors
-introduce a question, explain its importance, review previous research, report methodology and results, draw conclusions
review articles
-other common article type
-summarize previously published research on a topic and offer new ways to organize/explain results
-when devoted to presenting a new theory, it is called a THEORETICAL ARTICLE
-when it is a summary of previous results in the field, it is called a META ANALYSIS
Double-blind peer review
process that most psych journals use, where the authors don't know the identity of the reviewers, and vice versa
Scholarly books
books written by researchers and practitioners mainly for use by other researchers and practitioners
monograph
written by one or few authors, presents topic coherently like an extended review article
Edited volumes
recruits many authors to write different chapters on different aspects of the same topic, generally follows similar perspective
Most common sources of inspiration for research questions
1. informal observations (ex. milgram's experiment on obedience based on nazi war criminals)
2. practical problems (ex. effectiveness of psychotherapy)
3. previous research
How to evaluate research questions
1. Interestingness - is the answer in doubt? does it fill a gap in literature? does it have important practical implications?
2. Feasibility - can it be successfully answered with reasonable time, money, access to participants, and resources?
Research design
Framework for collection and analysis of data
-What are you trying to learn?
-What is the nature of the question?
-What type of explanation? (nomothetic, idiographic)
Nomothetic explanations in research design
-cause and effect, expressed based on general rules and principles, often QUANTITATIVE
Idiographic explanations in research design
rich, particular description of a person or group, not generalized, EMPATHETIC understanding, often QUALITATIVE
Research design types
Experimental
Cross-sectional
Longitudinal
Case study
Experimental design
only design where you can claim causality, requiring
1. Co-variation (Association)
2. temporal order (x comes before y)
3. non spuriousness (elimination of extraneous variables)
ex. potential spuriousness: comparing grades of students at the front vs back of class, where motivation is a potential third variable
WEAKNESSES: some IVs can't be manipulated (practically or ethically), long-term causes cannot be simulated
Extraneous variables and confounds
extraneous: any variable other than the IV that CAN influence the DV
confound: specific type of EV that provides an alternate explanation for results by varying alongside the variables under investigation (does influence outcome, cannot pinpoint reason for outcome)
Cross sectional designs
taking observations at ONE point in time (no before vs after)
-NO treatment/variable manipulation, two or more variables MEASURED for association
-ex. a national census to look at a country's condition at one point in time
-collects data from a large pool of subjects and compares differences
-often with questionnaires, interviews, observation
-can examine effects of variables that CANNOT be manipulated in experiments (ex. age, socioeconomic status, race)
Cross sectional designs issues
-internal validity in establishing direction of causation
-ex. if there is an association btwn well being and income, which influences the other first
-causation can be RECIPROCAL
-external validity (generalizability)
-if random methods are not used for participant selection, cannot ensure findings are reflective of population
Longitudinal designs
cases examined at one point in time, then again at one or more later times
-provides info about time order and variable changes
-helps establish DIRECTION of causation (ex. if income increases at T1 and life satisfaction increases at T2, the time order supports a causal direction)
Two basic types of longitudinal designs
1. Panel study - same people, households, etc. at diff times
2. Cohort study - people sharing the same EXPERIENCE studied at diff times, but different people may be studied at each time (ex. studying people born in the 90s at 3 diff years, with diff subjects each time)
Longitudinal designs issues
-attrition over time (people quitting participation)
-can be difficult to determine optimal time frames of study
-panel conditioning: attitudes may change because they're being studied
Case study design
in depth study of a single case, can be a person, organization, etc.
-can be qual/quant
-less emphasis on external validity, more about providing an in-depth picture of a specific case that is not achievable with other methods
Case study design types of cases
critical case: illustrates the conditions under which a hypothesis does or doesn't hold
ex. studying a person for whom certain counselling techniques are (un)successful
extreme/unique case: unusual cases, helps understand common cases ex. studying someone who has had several marriages helps understand common marriage patterns
revelatory case: examines a case/context that hasn't been studied yet (ex. when declassified info is released)
Case study design issues
-external validity
Concept
ideas/mental representations of things
-can be independent or dependent variables (ex. taking Tylenol as the independent variable affecting the dependent outcome of a headache)
-concepts include crime, substance use, research ethics, etc.
Why measure concepts?
delineates fine differences between people/issues being studied
-consistently identifies differences
-allows us to estimate strength of correlation
Indicators two kinds
Nominal: describes a concept in words (ex. defining a political party or alcoholism)
Operational: describes how a concept is to be measured
(ex. questions about political affiliation and beliefs in a questionnaire)
Indicators
tell us whether there is a link and how strong it is
-one indicator per concept is adequate but more is better
Multiple-item indicators
-more than one variable/question to measure a concept
-reduces likelihood of incorrect answers due to misinterpretation of questions or definitions
-ensures the definition is correctly understood
-can make finer distinctions and access wider range of issues about concept
-allows for faster analysis and cluster analysis
Operationalization
process of converting concepts into indicators or specific questions in a questionnaire/interview
Coding unstructured data
- Derive codes (labels or titles given to the themes or categories)
- Assign numbers to the codes
- Basic principles to observe: Categories must not overlap, categories must be exhaustive, there must be clear rules for how codes are applied
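A minimal sketch of these coding principles in Python, using a hypothetical commuting example (the codes and responses are invented for illustration):

```python
# Hypothetical coding scheme derived from interview data about
# commuting (invented example). Each theme/category gets a number.
codes = {
    "drives alone": 1,
    "carpools": 2,
    "public transit": 3,
    "walks or cycles": 4,
    "other": 5,  # residual category keeps the scheme exhaustive
}

responses = ["carpools", "public transit", "drives alone", "other"]

# The coding rule maps each response to exactly one number,
# so categories cannot overlap.
coded = [codes[r] for r in responses]
print(coded)  # [2, 3, 1, 5]
```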
Reliability
consistency of measurement, based on:
1. Stability over time
2. internal reliability
3. inter-observer consistency
Reliability - stability over time
whether results of a measure fluctuate over time even though what is being measured isn't changing
-stability can be measured by retesting
-difficult to measure stability bc of extraneous variables
Reliability - internal reliability
whether multiple measures administered at a time are consistent
-measured with alpha coefficient or split half method
-correlation of at least 0.8 is minimum internal reliability
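As a sketch of how the alpha coefficient works, here is Cronbach's alpha computed by hand in Python on made-up scores (4 participants answering 3 scale items; the data are invented):

```python
from statistics import pvariance

# Made-up responses: 4 participants x 3 items on a 1-5 scale.
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]

def cronbach_alpha(rows):
    """alpha = (k/(k-1)) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = list(zip(*rows))                       # one tuple per item
    item_var = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # 0.89 -- above the 0.8 minimum noted above
```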
Reliability - inter observer consistency
all observers should classify behaviors/attitudes in the same way
-ex. if two observers are recording the amount of aggression on a playground at the same time, their estimates should agree if their definitions are the same
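Inter-observer consistency is often quantified as percent agreement; a minimal sketch with invented ratings (Cohen's kappa, which corrects for chance agreement, is a common refinement):

```python
# Two observers code the same 8 playground intervals as
# aggressive (1) or not (0) -- invented ratings.
observer_a = [1, 0, 0, 1, 1, 0, 1, 0]
observer_b = [1, 0, 1, 1, 1, 0, 1, 0]

# Percent agreement: the share of intervals where the observers
# assign the same code.
agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / len(observer_a)
print(agreement)  # 0.875
```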
Measurement validity
concerned with whether one is measuring what they are actually trying to study
Measurement validity note
-unreliable measures are invalid, data is not usable
-measures can be invalid but reliable, data can yield same result but still not be measuring what it intends to, can still inform research to a degree
Diff types of measurement validity
1. Face validity - measure appears to be valid at first glance, often weak
2. Concurrent validity - measure correlates with general relevant criteria to concept
3. Construct validity - concepts relate to each other consistently with the researcher's theory; confirmed if results match prediction
4. Convergent validity - measure of concept correlates with second measure using diff technique
criterion validity
the extent to which a measure is related to an outcome
-type of concurrent validity
-if the predictions coincide with findings it is valid
-criterion: any variable that is likely to be correlated with the construct being measured (ex. anxiety, testing performance)
-called concurrent when measured at the same time as the construct, but if the criterion is measured FOLLOWING the construct, it is called predictive validity (ex. how do current grades correlate with later college GPA?)
discriminant validity
scores on the measure are NOT related to other measures that are conceptually different (Ex. that self esteem and mood do not correlate)
Characteristics of a good hypothesis
-Testable and falsifiable
-logical (informed by previous theory, reasoning, and observation)
-positive (stating a relationship DOES exist)
Statistical significance
unlikely due to random chance, likely due to real difference in population
Type 1 error
false positive, assuming results are statistically significant when no real relationship exists
Type 2 error
missing an effect, concluding no relationship exists when it actually does
Measurement
assignment of scores to individuals so that they represent a characteristic
Psychometrics
the scientific study of the measurement of human abilities, attitudes, and traits
-often CONSTRUCTS, things that cannot be directly observed but whose general tendencies can be measured
conceptual definition
a researcher's definition of a variable at the theoretical level, how it relates to other variables
self-report measures
participants report own feelings, thoughts, actions (ex. rosenberg self esteem scale)
behavioral measures
aspect of participants behavior is recorded
physiological measures
record physiological processes (ex. hormone levels, blood pressure, etc.)
converging operations
the use of several research approaches to solve a single problem
-various operational definitions converge on the same construct
-when they support similar patterns, it is likely that the construct is useful and properly measured
-ex. different measures of stress are correlated with each other and related to immune system functioning
Levels of measurement
nominal, ordinal, interval, ratio
Nominal lvl
-for categorical variables, assigning scores as category labels
-asks whether two individuals are the same or different on the variable being measured
(ex. marital status)
-NO ORDERING
-lowest level of measurement
ordinal lvl
-quantitative variables
-rank order of individuals
-same vs different, and also whether they are higher or lower on a variable
-ex. satisfaction scale
-allows for comparisons about the variable
-BUT intervals are not necessarily of equal significance (ex. the gap between somewhat and very dissatisfied may differ from the gap between somewhat dissatisfied and somewhat satisfied); INTERPRETATION DIFFERS THROUGHOUT
interval lvl
-intervals have same interpretation throughout
-ex. the 10-degree interval between 80 and 90 degrees is the same as between 10 and 20 degrees
-no true 0, 0 represents something (Ex. 0 degrees is not the absence of temperature), does not allow for ratio comparisons
-ex. 40 degrees is not twice as hot as 20
ex. IQ
ratio lvl
true zero represents absence of quality
-ex. height, weight
-category, ordered, same interpretation, true zero
4 steps of measurement process
1. conceptually define constructs
2. operationally define constructs (ex. using a scale)
3. implement measure
-be aware of social desirability bias and eliminate demand cues that indicate desired responses
-guarantee anonymity, be clear and brief
4. evaluate measure
benefits of using existing measures
1. saves time
2. already evidence of the measure's validity
3. results are more easily comparable with previous findings
creating your own measure
-often requires modifying existing measures or repurposing
-should be clear and simple
-pilot testing
Research ethics
-must be addressed in initial stages of study and kept in mind in every phase
-first priority is ensuring participants are not harmed
-risk assessment is a key feature of research process (knowing what is acceptable and where to draw the line)
-must balance between pursuit of knowledge and mitigating harm
tri council policy statement
In Canada, the official statement of ethical conduct for research involving humans; researchers and institutions are expected to adhere to this document to receive federal research funds.
-exists to reduce unintentional harm, researchers may not recognize all risks
-developed due to large amount of unethical research
Research ethics board
-all Canadian research requires REB approval, which must be obtained BEFORE approaching participants and beginning research
-a study can be approved, modified, or rejected
ethics approval quantitative research
-easier to obtain, generally
-stated hypothesis and testing plan
-data gathered from one person at a time
-some REBs favour quantitative work bc it is seen as more scientific
ethics approval qualitative research
-flexibility for emerging themes means methods are not fully determined in advance
-may capture data on people who do not want their activities observed
-REBS can restrict project, prevent funding, or prevent project
TCPS 2 core overlapping principles
1. respect for persons
2. concern for welfare
3. justice
Respect for persons
-participants are not objects, basic rights need to be respected, includes dignified treatment by researchers
-MOST FUNDAMENTAL PRINCIPLE
Respect for persons - consent
-informed consent needed, should be given as much info as possible in order to agree to a study
-should be a collaborative relationship, consent is open and can be withdrawn at any time
-those with limited ability are to have a guardian provide consent
-information sheet outlining the research must state that the participant can leave the study at any time
Respect for persons - practical challenges
-difficult to give participants all the info needed for informed consent
-impractical in ethnography due to unforeseen changes, participant bias, unknown people entering the setting
-also in experiments, informed consent can skew results (reactive effects)
-deception can be justified but debriefing is important
Concern for welfare
-concern for well-being of the person or group involved in research; must avoid harm and embarrassment
-ensure right to PRIVACY
-confidentiality must always be maintained, even when online
-not a concern with public info
-randomized response technique - useful for sensitive topics, based on probability
Concern for welfare - qualitative
-dealing with confidentiality is different: despite using pseudonyms, detailed info about the person can signal who they are
-change identifiable info
-covert research can be intrusive, unconsensual, benefits must outweigh harm to participants
-requires after-the-fact consent to use info
-anonymity must be ensured
Concern for welfare - duty to report
-have a duty to report on certain activities like crime or child abuse, but boundaries must be clarified in advance
-REB review and approval required
-sponsors can influence research by setting limitations, expecting certain results, or controlling publication of results
Justice
-burdens and benefits of research should be spread across society
-no one should be exploited for research, or systematically excluded
-principle of no harm emphasized, informed consent
-harm should not outweigh benefits
3 sources of bias in sampling
1. not using a random sampling method
2. sampling frame being inadequate (excludes some cases or is inaccurate), even with random sampling
3. non-response (some selected for sample refuse to participate, cannot be contacted, etc.), there may be something different about people who say yes to a study
sampling error
-occurs when there is a discrepancy between the sample and the population
-occurs even with random sampling
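Sampling error can be illustrated with a small simulation (the population values are synthetic; this is a sketch, not a real dataset):

```python
import random

random.seed(42)  # reproducible illustration

# Synthetic population of 10,000 incomes.
population = [random.gauss(50_000, 12_000) for _ in range(10_000)]
population_mean = sum(population) / len(population)

# A properly random sample still misses the population mean slightly;
# that discrepancy is sampling error.
sample = random.sample(population, 100)
sample_mean = sum(sample) / len(sample)
sampling_error = sample_mean - population_mean
```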