true score theory
observed score = true score + measurement error (so true score = observed score − error)
error
sources of variability in a measure caused by things other than the IV
random vs systematic error
random: pushes DV around unpredictably
systematic: pushes DV around predictably if you know the pattern, and should be the same for all participants
confidence interval
gives the most likely range for the actual population value; as sample size increases, the confidence interval narrows because the estimate gets more precise
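The narrowing with sample size can be sketched numerically (a minimal Python sketch; the 1.96 multiplier for a 95% interval and the example SD of 10 are illustrative assumptions, not from the card):

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width of a 95% confidence interval for a mean (z = 1.96)."""
    return z * sd / math.sqrt(n)

# same SD, larger samples -> narrower interval
for n in (25, 100, 400):
    print(n, round(ci_half_width(sd=10, n=n), 2))  # 3.92, 1.96, 0.98
```

Quadrupling the sample size halves the interval width, since precision scales with the square root of n.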
measurement error
fluctuations in a measurement partly attributable to error in the measurement tool
how does within subject study help explain some error
measure participants multiple times to see the error causing variability, giving a better sense of what the actual score is
when is a design more sensitive
when we can account for some of the systematic errors allowing us to see the effect of IV more clearly. this allows us to detect more subtle effects or find effects with a smaller sample size
construct validity
match between a theoretical construct of interest and the measurement tool being used (how valid is its operationalisation)
face validity vs content validity
face: measure appears to reflect the construct being measured (based on subjective judgement)
content: measure captures all the necessary aspects of the construct and nothing more
convergent validity vs predictive validity vs discriminant validity vs concurrent
convergent: measure relates to other measures of the same/similar construct
predictive: measure score could relate to behaviour in the future
discriminant: scores on a measure not related to scores on conceptually unrelated measure
concurrent: measure relates to an established measure of the same construct taken at the same time
reliability
consistency or stability of measure
test-retest reliability + associated 2 challenges + expected relationship when plotted against each other
degree to which a measure gives the same results across repeated use
challenges: practice effect (get better) + demand characteristics (change behaviour)
when scores from each administration are plotted against each other, the correlation should always be positive
internal consistency reliability
degree to which items on a test measure the same construct (ie the items move together)
cronbach’s alpha
indicator of internal consistency assessed by examining the average correlation of each item in a measure with every other item (higher alpha = more reliable)
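Alpha is usually computed from item variances rather than pairwise correlations; a minimal sketch of that variance-based formula (the three items and their scores are hypothetical):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per test item, same respondents in each list."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # each respondent's total
    item_var = sum(variance(scores) for scores in items)  # sum of item variances
    return k / (k - 1) * (1 - item_var / variance(totals))

# three hypothetical items whose scores move together across 5 respondents
items = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [1, 3, 3, 5, 5]]
print(round(cronbach_alpha(items), 3))  # 0.987 — high alpha, internally consistent
```

When items move together, the variance of the totals is much larger than the sum of the item variances, pushing alpha toward 1.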
inter-rater reliability + 4 limitations
degree to which two or more judges agree on an observation
limitations: judges need to be trained; judges need to score independently of one another; judges should be blind to the study’s prediction; trade-off between complexity of scoring and inter-rater reliability (more complex = harder to get consistent results)
when do we need interrater reliability (3 instances)
behavioural coding (raters code a video)
personality measures (people interpret personality)
thematic/ content coding (whether a source of info has an agreed-upon theme overall, eg positive or negative tone)
intraclass correlation coefficient (ICC)
higher ICC = greater inter-rater reliability
participant reactivity
people behave in different ways when being watched, hence the observed behaviour may not reflect real-world responses
descriptive statistics
techniques to describe and summarise many data points
histogram + what it shows
indicates frequency of each score for a continuous variable
what it shows: highest and lowest scores + frequency (common/ rare scores) + outliers + spread
bar graph vs pie chart vs frequency polygon (what they are used for)
bar graph: for comparing groups and percentages across nominal categories
pie chart: nominal scale esp good for proportions
frequency polygon: use line to represent frequency for continuous variable
3 measures of central tendency and what data sets can use them
mean: arithmetic average which uses info from every score; interval/ ratio scale data
median: score that divides the group in half; good for ordinal and can also be used for interval/ ratio
mode: most frequently occurring score; all scales including nominal
3 effects of outliers
make more of a difference when the sample size is small
might choose to use the median instead since the mean may no longer reflect the middle
mean sometimes doesn’t correspond to an actual score on the scale
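The three measures and the outlier effect can be seen in one sketch (the score list is hypothetical, with one outlier of 30):

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 4, 5, 5, 5, 30]  # hypothetical data with one outlier (30)

print(mean(scores))    # 7.125 — pulled upward by the outlier, not an actual score
print(median(scores))  # 4.5  — still reflects the middle of the group
print(mode(scores))    # 5    — most frequent score, unaffected
```

Note the mean here illustrates both outlier effects at once: it sits above every non-outlier score and is not itself a possible score on the scale.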
variability
spread of the distribution of scores
range
difference between the maximum and minimum score
variance
sum of squared deviations around the mean divided by n−1; higher variance = more variability in the scores
standard deviation
index of how far away scores tend to be from the mean and square root of variance (score will be in same units as the mean)
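Both definitions can be computed directly (a minimal sketch; the five scores are hypothetical):

```python
import math

scores = [4, 8, 6, 5, 7]  # hypothetical scores
n = len(scores)
m = sum(scores) / n  # mean = 6.0

# sum of squared deviations around the mean, divided by n - 1
var = sum((x - m) ** 2 for x in scores) / (n - 1)
sd = math.sqrt(var)  # square root of variance, same units as the mean

print(var, round(sd, 2))  # 2.5 1.58
```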
normal distribution percentage per SD
1 SD: 34.1% each side of the mean; 68.2% total
2 SD: 47.7%; 95.4% total
3 SD: 49.9%; 99.7% total
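These percentages can be checked against the normal distribution itself (a minimal sketch using the standard library; small differences from the card's values are rounding):

```python
import math

def within(k):
    """Probability that a normally distributed score falls within k SDs of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(within(k) * 100, 1))  # ~68.3, 95.4, 99.7
```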
effect size + cohen’s d
magnitude of effect observed between groups in a study
cohen’s d: effect size estimate that is the standardised mean difference between 2 groups and can be used to compare studies with different units (difference in means = cohen’s d × SD); larger cohen’s d = larger difference between the 2 groups
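A minimal sketch of the standardised mean difference, using the pooled SD as the standardiser (the two groups' scores are hypothetical):

```python
import math

def cohens_d(group1, group2):
    """Standardised mean difference between two groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical treatment vs control scores
print(round(cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7]), 2))  # 1.26
```

Because d is in SD units rather than raw units, the same value means the same separation between groups regardless of the original measurement scale.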
coefficient of determination
measure of shared variance (correlation coefficient²); proportion of variability in one variable that can be accounted for by variability in another variable
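Squaring the correlation coefficient gives the shared-variance proportion directly (a minimal sketch; the study-hours and grades data are hypothetical):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]        # hypothetical study hours
grades = [52, 60, 63, 71, 74]  # hypothetical grades

r = pearson_r(hours, grades)
print(round(r ** 2, 3))  # 0.976 — share of grade variability accounted for by hours
```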
range restriction
when only a subset of a variable’s possible range is sampled/observed, the accuracy of the correlation coefficient is compromised (related to how we sample)
shared variance
coefficient of determination
regression
use correlation between variables to make predictions; use scores on a predictor variable to predict changes in a criterion variable, but cannot make causal claims
regression model
set of theoretically relevant predictors predicting a criterion variable; can include more than one predictor; y = bx + c
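A single-predictor model y = bx + c can be fit by least squares (a minimal sketch; the predictor/criterion data are hypothetical):

```python
def fit_line(xs, ys):
    """Least-squares slope b and intercept c for the model y = b*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = my - b * mx
    return b, c

# hypothetical predictor (study hours) and criterion (exam score)
b, c = fit_line([1, 2, 3, 4, 5], [52, 60, 63, 71, 74])
print(b, c)       # 5.5 47.5
print(b * 6 + c)  # predicted score for 6 hours: 80.5
```

The fitted line lets us predict the criterion from new predictor scores, but — as the card says — the prediction carries no causal claim.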
benefits of multiple correlation + squared multiple correlation coefficient
benefit: more predictors can increase the accuracy of the prediction
squared multiple correlation coefficient: proportion of variability that is accounted for by the combined set of predictor variables
partial correlation
calculate the unique shared variance between 2 variables while excluding any variance that is shared with a third variable, by finding how the third variable correlates with the 2 relevant variables (“we can partial out the effect of…”)
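The standard formula builds the partial correlation from the three pairwise correlations (a minimal sketch; the .50 and .40 values are hypothetical):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y after partialling out a third variable z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# hypothetical: x and y correlate .50, but each also correlates .40 with z
print(round(partial_corr(0.50, 0.40, 0.40), 3))  # 0.405 — the unique x-y association
```

The result is smaller than the raw .50 because part of the x–y relationship was carried by their shared relationship with z.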
nuremberg code 4 requirements + what kind of research is it used for
informed consent
human research based on prior animal work
benefits > risks
minimise discomfort and avoid injury
specific to medical research
belmont principles 3 points
beneficence (concern for welfare): benefits must outweigh risks, minimise risk, confidentiality
respect for people: respect for autonomy, need for informed consent
justice: equality in access to benefits and to participation in research process while avoiding exploitation of vulnerable group
what are the 3 roles of the research ethics board
reviews applications to see if they adhere to the Tri-Council Policy Statement
may ask for changes/ approve/ deny
any changes must be approved
risk benefit analysis
weighing benefits to participants and society against potential harm
4 types of harms
physical
psychological/ emotional
social risk
privacy and confidentiality
exempt vs minimal risk vs greater than minimal risk
exempt: does not involve REB review; includes naturalistic observation or utilising archival research (data previously collected and anonymised)
minimal: risk to participant no greater than would receive in daily life
greater than minimal risk: includes at risk populations and sensitive topics (esp for emotional harm)
confidential vs anonymity
confidential: data that are kept private and used only for the purposes promised by the researcher (who we share data with)
anonymity: protect the identity of participants by making them unidentifiable from the data
autonomy
participants are able to make deliberate decisions about participating and they must be given all the info that might influence their decisions about participating
deception: instances of omission (2) vs instances of commission in deception practices
commission: lying + leading participants to believe things that are not true
omission: leaving out some details (but it should not affect a participant’s decision on whether or not they want to participate)
3 demographics that would require extra consideration
minors
individuals with cognitive impairment
individuals with intellectual disabilities
what are the 2 things required for participation of vulnerable demographics
consent from caregiver/ decision-making proxy
assent from participant (indication that the participant is willing to participate)
3 forms of coercion
power differentials
incentives
participation is necessary to reach the next step
3 purposes of debriefing
explaining the purpose of the research at the end and why deception was required (staged manipulation, misleading participants about the study’s purpose etc)
important that participants leave feeling okay
maintain trust in people who perform psyc research
3 alternatives to deception
role playing: predicting response if they were in situation
simulation studies: highly involving and can effectively mimic elements of a real-life experience
honest studies: research design does not try to misinform/ hide info from participants and could use naturally occurring events that present unique research opportunities
4 issues related to justice
participant recruitment should be fair and have a sound rationale
one population should not bear all the risks of research
disempowered and socially vulnerable populations should be protected
if a specific group is researched, that group should have access to the benefits of that research
what safeguards are put in place for indigenous participants
The community must be consulted and have final say + researchers should respect culture
what happens when a group is undersampled or oversampled (2)
there is insufficient representation of a certain group, leading to effects observed in the oversampled group being generalised to all other populations; misconceptions may arise when such findings are communicated
4 factors that would affect representation
sexual orientation
poverty
rural areas
education
3 benefits of animal research
control the genetic makeup of the subjects being studied
easier to study the physiological, neural and genetic foundations of behaviour
reduce risk and harm to human subjects
4 conditions by TCPS for animal research
discomfort and stress minimised
benefits to humans and animals
alternative procedures unavail
conducted by trained scientists
4 types of questionable research practices
selectively reporting dependent variables
optional stopping (have multiple chances to find an effect)
failing to disclose failed conditions or studies
Harking: hypothesising after results are known
preregistration vs open data vs replication
preregistration: public statement of methods and data analysis plan
open data and materials: share all data collected to prevent fraud and unintentional errors
replication: take an existing study and run it in different contexts
2 types of scientific fraud
fabricating data
collecting data from participants but changing numbers to support the hypothesis
generative AI
algorithms that generate new content based on prior input
issues with training data
takes info from the most populated demographics on the internet, hence info may be biased
2 justice issues surrounding AI
those who are in charge of labelling of sensitive materials may come from low income countries
there may be interests by companies or stakeholders to control what comes up on generative AI which leads to non-standardised generation of ideas
4 autonomy issues of gen AI
AI may be built into softwares without user’s consent
inequality as incentives for groups of people
AI can produce higher-quality work, which affects academic standards
privacy of data may not be secure
beneficence of generative AI (1 pro 2 cons)
saves costs and increases accessibility
scams
improper citations in research leading to degradation of research
bullshit
no regard for the truth; making something up (which may or may not be true)
applications of AI considerations (2)
is accuracy important for task?
is accuracy of info easily verifiable?