psyc 217 midterm 2


66 Terms

1
New cards

true score theory

observed score - measurement error = true score (i.e. observed score = true score + measurement error)

2
New cards

error

sources of variability in measure caused by things other than IV

3
New cards

random vs systematic error

random: pushes the DV around unpredictably

systematic: pushes the DV around predictably if you know the pattern, and should be the same for all participants

4
New cards

confidence interval

the most likely range of values for the actual population; as sample size increases, the confidence interval narrows because the estimate becomes more precise
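the narrowing can be sketched numerically; a minimal sketch assuming a known SD of 15 and the normal-approximation formula mean ± 1.96 × SD/√n (all numbers hypothetical):

```python
import math

def ci_width(sd, n, z=1.96):
    """Full width of a 95% CI around a sample mean (normal approximation)."""
    margin = z * sd / math.sqrt(n)  # standard error times the critical value
    return 2 * margin

print(round(ci_width(15, 25), 2))   # small sample -> wide interval
print(round(ci_width(15, 400), 2))  # larger sample -> narrower interval
```

quadrupling n halves the width, since the standard error shrinks with √n.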

5
New cards

measurement error

fluctuations in measurement partly attributable to error in the measurement tool

6
New cards

how does within subject study help explain some error

measure each participant multiple times to see how much error contributes to variability, giving a better sense of what the actual score is

7
New cards

when is a design more sensitive

when we can account for some of the systematic errors allowing us to see the effect of IV more clearly. this allows us to detect more subtle effects or find effects with a smaller sample size

8
New cards

construct validity

match between a theoretical construct of interest and the measurement tool being used (how valid is its operationalisation)

9
New cards

face validity vs content validity

face: measure appears to reflect the construct being measured (based on subjective judgement)

content: measure captures all the necessary aspects of the construct and nothing more

10
New cards

convergent validity vs predictive validity vs discriminant validity vs concurrent

convergent: measure relates to other measures of the same/similar construct

predictive: measure score could relate to behaviour in the future

discriminant: scores on a measure not related to scores on conceptually unrelated measure

concurrent: whether scores on a measure relate to another measure of the same construct taken at the same time

11
New cards

reliability

consistency or stability of measure

12
New cards

test-retest reliability + associated 2 challenges + expected relationship when plotted against each other

degree to which a measure gives the same results across repeated use

challenges: practice effect (get better) + demand characteristics (change behaviour)

when scores from each administration are plotted against each other, the relationship should always be positive

13
New cards

internal consistency reliability

degree to which items on a test measure the same construct (i.e. the items move together)

14
New cards

cronbach’s alpha

indicator of internal consistency assessed by examining the average correlation of each item in a measure with every other item (higher alpha = more reliable)
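a minimal hand-rolled sketch of the alpha computation (made-up Likert responses; formula: alpha = k/(k−1) × (1 − sum of item variances / variance of total scores)):

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance (n-1)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, same respondents in order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # each respondent's total
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# 5 hypothetical respondents answering 3 Likert items (1-5):
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 2, 4, 3]]
print(round(cronbach_alpha(items), 2))  # items that move together give a high alpha
```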

15
New cards

inter-rater reliability + 4 limitations

degree to which two or more judges agree on an observation

limitations: judges need to be trained; judges need to score independently of one another; judges need to be blind to the study’s prediction; trade-off between complexity of scoring and inter-rater reliability (more complex = harder to get consistent results)

16
New cards

when do we need interrater reliability (3 instances)

  1. behavioural coding (raters code a video)

  2. personality measures (people interpret personality)

  3. thematic/content coding (whether one source of info has an agreed-upon theme overall, e.g. positive or negative tone)

17
New cards

intraclass correlation coefficient (ICC)

higher ICC = greater inter-rater reliability

18
New cards

participant reactivity

people behave differently when being watched, hence responses may not reflect real-world behaviour

19
New cards

descriptive statistics

techniques to describe and summarise many data points

20
New cards

histogram + what it shows

indicates frequency of each score for a continuous variable

what it shows: highest and lowest scores + frequency (common/ rare scores) + outliers + spread

21
New cards

bar graph vs pie chart vs frequency polygon (what they are used for)

bar graph: for comparing groups and % nominal categories

pie chart: nominal scale esp good for proportions

frequency polygon: use line to represent frequency for continuous variable

22
New cards

3 measures on central tendency and what data set could use them

mean: arithmetic average which uses info from every score; interval/ ratio scale data

median: score that divides group into half; good for ordinal and also be used for ratio/ interval

mode: most frequently occurring score; all scales including nominal

23
New cards

3 effects of outliers

  1. make more of a difference when the sample size is small

  2. might choose to use the median instead since mean may no longer be reflective of middle

  3. mean sometimes doesn’t correspond to an actual score on the scale

24
New cards

variability

spread of the distribution of scores

25
New cards

range

difference between the maximum and minimum scores

26
New cards

variance

sum of squared deviations around the mean divided by n-1; higher variance = more variability in the score

27
New cards

standard deviation

index of how far away scores tend to be from the mean and square root of variance (score will be in same units as the mean)
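the two definitions above can be checked with a short hand computation (made-up scores; n−1 denominator as on the variance card):

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical data set

mean = sum(scores) / len(scores)
# variance: sum of squared deviations around the mean, divided by n-1
variance = sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)
sd = math.sqrt(variance)  # back in the same units as the mean

print(round(variance, 2), round(sd, 2))
```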

28
New cards

normal distribution percentage per SD

±1 SD: 34.1% per side, 68.2% total

±2 SD: 47.7% per side, 95.4% total

±3 SD: 49.8% per side, 99.6% total
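these percentages come from the normal CDF; a sketch using the error function (`math.erf`), which gives the exact values that the card's rounded per-side slices approximate:

```python
import math

def pct_within(k):
    """Percentage of a normal distribution falling within ±k SD of the mean."""
    return 100 * math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(pct_within(k), 1))  # ~68.3, ~95.4, ~99.7
```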

29
New cards

effect size + cohen’s d

magnitude of effect observed between groups in a study

cohen’s d: effect size estimate that is the standardised mean difference between 2 groups and can be used to compare studies with different units (difference in means = Cohen’s d × SD); larger Cohen’s d = better evidence that the 2 groups are different
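a sketch of the Cohen's d computation on made-up treatment/control scores, using the pooled SD (one common variant of the formula):

```python
import math

def cohens_d(group1, group2):
    """Standardised mean difference between two groups, pooled-SD version."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    n1, n2 = len(group1), len(group2)
    pooled_sd = math.sqrt(((n1 - 1) * var(group1) + (n2 - 1) * var(group2))
                          / (n1 + n2 - 2))
    return (sum(group1) / n1 - sum(group2) / n2) / pooled_sd

# hypothetical DV scores for treatment vs control:
d = cohens_d([6, 7, 8, 9], [4, 5, 6, 7])
print(round(d, 2))
```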

30
New cards

coefficient of determination

measure of shared variance (correlation coefficient ²); proportion of variability in one variable that can be accounted for by variability in another variable
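sketch: squaring a hand-computed Pearson r gives the shared variance (hypothetical study-hours vs grades data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient for two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]          # hypothetical predictor
grades = [55, 60, 70, 72, 85]    # hypothetical outcome
r = pearson_r(hours, grades)
print(round(r ** 2, 2))  # proportion of variability shared between the two
```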

31
New cards

range restriction

when only a subset of a variable’s possible range is sampled/observed, the accuracy of the correlation coefficient is compromised (related to how we sample)

32
New cards

shared variance

coefficient of determination

33
New cards

regression

use correlation between variables to make predictions; use scores on the predictor variable to predict changes in the criterion variable, but cannot make causal claims

34
New cards

regression model

set of theoretically relevant predictors predicting a criterion variable and can include more than one predictor; y = bx + c
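the y = bx + c model can be sketched with a least-squares fit on toy data (hypothetical study hours predicting exam scores):

```python
def fit_line(xs, ys):
    """Least-squares slope b and intercept c for y = b*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = my - b * mx
    return b, c

# hypothetical predictor (study hours) and criterion (exam score):
b, c = fit_line([1, 2, 3, 4], [52, 58, 64, 70])
predicted = b * 5 + c  # predicted criterion score for a new predictor value
print(b, c, predicted)
```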

35
New cards

benefits of multiple correlation + squared multiple correlation coefficient

benefit: more predictors can increase the accuracy of the prediction

squared multiple correlation coefficient: proportion of variability that is accounted for by the combined set of predictor variables

36
New cards

partial correlation

calculate the unique shared variance between 2 variables while excluding any variance shared with a third variable, by finding how the third variable correlates with each of the 2 relevant variables (“we can partial out the effect of…”)
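the "partialling out" can be sketched with the standard formula built from three pairwise correlations (all r values hypothetical, e.g. ice-cream sales and drownings both driven by temperature):

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """Correlation of x and y with z partialled out, from pairwise rs."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# hypothetical rs: x-y = .6, but x-z = .8 and y-z = .7
print(round(partial_r(0.6, 0.8, 0.7), 2))  # near zero once z is removed
```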

37
New cards

nuremberg code 4 requirements + what kind of research is it used for

  1. informed consent

  2. human research based on prior animal work

  3. benefits > risks

  4. minimise discomfort and avoid injury

specific to medical research

38
New cards

belmont principles 3 points

  1. beneficence (concern for welfare): benefits must outweigh risks, minimise risk, confidentiality

  2. respect for people: respect for autonomy, need for informed consent

  3. justice: equality in access to benefits and to participation in research process while avoiding exploitation of vulnerable group

39
New cards

what are the 3 roles of the research ethics board

  1. reviews applications to see if they adhere to the tri-council policy statement

  2. may ask for changes/ approve/ deny

  3. any changes must be approved

40
New cards

risk benefit analysis

weighing benefits to participants and society against potential harm

41
New cards

4 types of harms

  1. physical

  2. psychological/ emotional

  3. social risk

  4. privacy and confidentiality

42
New cards

exempt vs minimal risk vs greater than minimal risk

exempt: does not involve REB review; includes naturalistic observation or utilising archival research (data previously collected and anonymised)

minimal: risk to participant no greater than would receive in daily life

greater than minimal risk: includes at risk populations and sensitive topics (esp for emotional harm)

43
New cards

confidentiality vs anonymity

confidential: data are kept private and used only for the purposes promised by the researcher (who we share data with)

anonymity: protect the identity of participants by making them unidentifiable from the data

44
New cards

autonomy

participants are able to make deliberate decisions about participating and they must be given all the info that might influence their decisions about participating

45
New cards

deception: instances of commission (2) vs instances of omission

commission: lying + leading participants to believe things that are not true

omission: leaving out some details (but it should not affect the participant’s decision on whether or not they want to participate)

46
New cards

3 demographics that would require extra consideration

  1. minors

  2. individuals with cognitive impairment

  3. individuals with intellectual disabilities

47
New cards

what are the 2 things required for participation of vulnerable demographics

  1. consent from caregiver/ decision-making proxy

  2. assent from participant (indication that participant is willing to participate)

48
New cards

3 forms of coercion

  1. power differentials

  2. incentives

  3. participation is necessary to reach the next step

49
New cards

3 purposes of debriefing

explaining the purpose of the research at the end and why deception was required (staged manipulation, misleading participants about study’s purpose etc)

important that participants leave feeling okay 

maintain trust in people who perform psyc research

50
New cards

3 alternatives to deception

  1. role playing: predicting response if they were in situation

  2. simulation studies: highly involving and can effectively mimic many elements of a real-life experience

  3. honest studies: research design does not try to misinform/hide info from participants and could use naturally occurring events that present unique research opportunities

51
New cards

4 issues related to justice

  1. participant recruitment should be fair and have a sound rationale

  2. one population should not bear all the risks of research

  3. disempowered and socially vulnerable populations should be protected

  4. if a specific group is researched, that group should have access to the benefits of that research

52
New cards

what safeguards are put in place for indigenous participants

The community must be consulted and have final say + researchers should respect culture

53
New cards

what happens when a group is undersampled or oversampled (2)

insufficient representation of a certain group leads to generalisation of effects observed in the oversampled group to all other populations; misconceptions may arise when such findings are communicated

54
New cards

4 factors that would affect representation

  1. sexual orientation

  2. poverty

  3. rural areas

  4. education

55
New cards

3 benefits of animal research

  1. control the genetic makeup of participants being studied

  2. easier to study the physiological, neural and genetic foundations of behaviour

  3. reduce risk and harm to human subjects

56
New cards

4 conditions by TCPS for animal research

  1. discomfort and stress minimised

  2. benefits to humans and animals

  3. alternative procedures unavail

  4. conducted by trained scientists

57
New cards

4 types of questionable research practices

  1. selectively reporting dependent variable

  2. optional stopping (have multiple chances to find an effect)

  3. failing to disclose failed conditions or studies

  4. HARKing: hypothesising after results are known

58
New cards

pre registrations vs open data vs replication

  1. preregistration: public statement of methods and data analysis plan

  2. open data and materials: share all data collected to prevent fraud and unintentional errors

  3. replication: take an existing study and run it in different contexts

59
New cards

2 types of scientific fraud

  1. fabricating data

  2. collecting data from participants but change numbers to support hypothesis

60
New cards

generative AI

algorithms that generate new content based on prior input

61
New cards

issues with training data

takes info from most populated demographic on the internet hence info may be biased

62
New cards

2 justice issues surrounding AI

  1. those who are in charge of labelling of sensitive materials may come from low income countries

  2. there may be interests by companies or stakeholders to control what comes up on generative AI which leads to non-standardised generation of ideas

63
New cards

4 autonomy issues of gen AI

  1. AI may be built into softwares without user’s consent

  2. inequality as incentives for groups of people

  3. higher quality work which affects academic standards

  4. privacy of data may not be secure

64
New cards

beneficience of generative AI (1 pro 2 cons)

  1. saves costs and increases accessibility

  2. scams

  3. improper citations in research leading to degradation of research

65
New cards

bullshit

making something up with no regard for the truth (it may or may not be true)

66
New cards

applications of AI considerations (2)

  1. is accuracy important for task?

  2. is accuracy of info easily verifiable?