Lec 6: Clinical vs Statistical Significance, Directional Tests, Reliability/Validity


35 Terms

1

one tailed test meaning

suggests the null hypothesis should be rejected when the test value is in the critical region on one side of the mean

  • can be left- or right-tailed, depending on the direction of the inequality in the alternative hypothesis

2

two tailed test meaning

null hypothesis should be rejected when the test value is in either of the two critical regions

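Not from the lecture: a minimal Python sketch of how the rejection decision differs for one- vs two-tailed tests. It assumes NumPy/SciPy are available, and the hypothesized mean, sigma, sample values, and alpha are invented for illustration.

```python
# Hypothetical z test: compare one- vs two-tailed critical regions.
# mu0, sigma, alpha, and the sample are made-up numbers for illustration only.
import numpy as np
from scipy import stats

mu0, sigma, alpha = 100.0, 15.0, 0.05                       # H0: mu = 100
sample = np.array([104, 110, 98, 107, 112, 101, 109, 105])
z = (sample.mean() - mu0) / (sigma / np.sqrt(len(sample)))

# Two-tailed (H1: mu != 100): critical regions in BOTH tails
reject_two_tailed = abs(z) > stats.norm.ppf(1 - alpha / 2)

# Right-tailed (H1: mu > 100): critical region in the upper tail only
reject_right_tailed = z > stats.norm.ppf(1 - alpha)

# Left-tailed (H1: mu < 100): critical region in the lower tail only
reject_left_tailed = z < stats.norm.ppf(alpha)

print(z, reject_two_tailed, reject_right_tailed, reject_left_tailed)
```
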
3

statistical significance vs clinical significance

a statistically significant result may not be clinically useful, whereas a clinically significant result is
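
A hedged numeric illustration (all numbers invented): with a very large sample, a clinically trivial difference can still come out statistically significant.

```python
# Invented example: a ~0.5 mmHg difference in systolic BP reaches p < 0.05
# only because n is huge; the effect is statistically, not clinically, significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=120.0, scale=15.0, size=50_000)
treated = rng.normal(loc=119.5, scale=15.0, size=50_000)    # only ~0.5 mmHg lower

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p = {p_value:.3g}")                                  # typically well below 0.05
print(f"mean diff = {treated.mean() - control.mean():.2f} mmHg")  # clinically trivial
```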

4

reliability def

extent to which a test consistently measures whatever it purports to measure

  • i.e., the measurement tool is consistent and repeatable

5

what does reliability depend on

  • variables are free from random errors

  • consistency when the measure is repeated

6

what does high reliability indicate

the measurement system produces similar results under the same conditions

7

stability def (characteristics)

  • consistent & enduring

  • does not change over time

  • high correlation coefficient when administered repeatedly (diff results should be similar)

  • evaluate stability at the beginning & throughout the study

8

homogeneity def/characteristics

extent to which items on a multi-item instrument are consistent with one another

  • aka internal consistency reliability (diff methods/questions = same results)

  • useful for a single concept

  • assessed with Cronbach’s alpha (ranges from 0 = no reliability to 1 = complete reliability)
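
A minimal sketch (not from the lecture) of Cronbach’s alpha computed from its standard formula; the item-response matrix is invented.

```python
# Cronbach's alpha from its standard formula; the response data are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# 6 respondents x 4 Likert-type items (made-up data)
responses = np.array([[4, 5, 4, 4],
                      [2, 3, 2, 3],
                      [5, 5, 4, 5],
                      [3, 3, 3, 2],
                      [4, 4, 5, 4],
                      [1, 2, 1, 2]])
print(round(cronbach_alpha(responses), 2))   # closer to 1 = more internally consistent
```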

9

Equivalence def

  • how well multiple forms or users of the instrument produce the same results

  • variation: diff forms of the tool, or user error/understanding

  • when multiple individuals are collecting data, ensure inter-rater reliability

10

what are the diff types of reliability

  • equivalency reliability

  • stability reliability

  • internal consistency

  • inter-rater reliability

  • intra-rater reliability

11

Equivalency Reliability def

extent to which two items measure the same concept at an identical level

12

Stability Reliability def

consistency when measuring a variable over time

13

Internal Consistency def

extent to which test items assess the same characteristic

14

Inter-rater Reliability def

extent to which two or more individuals (raters) agree

15

Intra-rater Reliability def

same assessment completed by same rater on two or more occasions

16

what does % Agreement or Kappa statistic measure

inter-rater reliability
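
A short sketch (ratings invented) of simple % agreement and Cohen’s kappa for two raters; kappa corrects the raw agreement for agreement expected by chance.

```python
# Two raters' categorical ratings (invented) -> % agreement and Cohen's kappa.
import numpy as np

rater_a = np.array(["yes", "yes", "no", "no", "yes", "no", "yes", "no",  "yes", "yes"])
rater_b = np.array(["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"])

p_observed = (rater_a == rater_b).mean()                 # simple % agreement

# Expected agreement by chance, from each rater's marginal category proportions
categories = np.union1d(rater_a, rater_b)
p_expected = sum((rater_a == c).mean() * (rater_b == c).mean() for c in categories)

kappa = (p_observed - p_expected) / (1 - p_expected)     # Cohen's kappa
print(f"% agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```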

17

list statistical measures of reliability

  • % agreement or kappa statistic

  • Cronbach’s alpha = correlation of items, ranges from 0 to 1, >0.8 good

  • factor analysis = relationships between items and item reduction

  • test-retest = correlation between test 1 and test 2
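
A minimal test-retest sketch (scores invented): the reliability estimate is the correlation between administration 1 and administration 2 for the same people.

```python
# Test-retest reliability as the correlation between two administrations (invented scores).
import numpy as np
from scipy import stats

test_1 = np.array([22, 30, 27, 35, 18, 25, 29, 33])   # same people, time 1
test_2 = np.array([24, 29, 28, 34, 20, 24, 31, 32])   # same people, time 2

r, p = stats.pearsonr(test_1, test_2)
print(f"test-retest r = {r:.2f}")   # closer to 1 = more stable over time
```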

18

what does validity/accuracy depend on

  • the form of test

  • the purpose of test

  • the population for whom it is intended

19

what is validity checking for

the appropriateness of the data rather than whether measurements are repeatable

20

list the diff types of validity

  • face validity/content validity

  • criterion validity

  • convergent validity

  • discriminant validity

21

face validity/content validity

depends on the judgement of the observer (interviews/focus groups)

22

criterion validity

examine correlations with variables that you expect to be linked (relate with standards)

23

convergent validity

examine correlations with existing tools/measures or instruments

  • comparing your results with those of a previously validated survey that measures the same thing

24

Discriminant validity def

doesn’t measure what it shouldn’t

25

validity def

extent to which a concept or variable is ACCURATELY measured

  • a correlation coefficient of ≥ 0.4 strengthens validity

26

predictive validity

when, after using your measurement tool, you discover that it can accurately predict the intended outcome (dependent variable)

27

divergent validity

when one tool measures the opposite variable of a previously validated tool

28

a measurement must be reliable first before it can be…

valid

  • for a test to be valid, it has to be reliable!

  • a test can be reliable, but that doesn’t mean it is valid/accurate

29

SELECTION OF MEASUREMENT TOOLS pic

IDEALLY = WANT HIGH ACCURACY AND HIGH PRECISION

  • Validity (Accuracy) = the degree to which you are measuring the true value. How close are you to measuring the true value?

  • Reliability (Precision) = how repeatable are the measurements. How close are the repeated measures to each other?

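A hedged illustration (true value and readings invented) of the accuracy/precision distinction: bias of the mean reflects validity/accuracy, spread of repeated measures reflects reliability/precision.

```python
# Invented readings from two instruments measuring a known true value of 100.
import numpy as np

true_value = 100.0
tool_a = np.array([100.2, 99.8, 100.1, 99.9, 100.0])    # accurate AND precise
tool_b = np.array([104.9, 105.2, 105.0, 105.1, 104.8])  # precise but NOT accurate (biased)

for name, readings in [("tool A", tool_a), ("tool B", tool_b)]:
    bias = readings.mean() - true_value      # validity/accuracy: closeness to the true value
    spread = readings.std(ddof=1)            # reliability/precision: closeness of repeats
    print(f"{name}: bias = {bias:+.2f}, spread (SD) = {spread:.2f}")
```
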
30

In order to determine if measurements are reliable and valid we need to examine

error

31

types of error

  • random = variations from day to day, moment to moment

  • systematic = errors associated with an incorrect measurement tool, study design, or bias

32

acceptability

  • extent to which the tool is acceptable to the target group

  • language and format

  • indicators: response time, response rate, missing data

33

Feasibility

  • how easy the tool is to use

  • practicality

  • time

  • cost

  • resources

34

construct validity

detects a difference that is known to exist in a population; confirms the concept

35

what 3 main factors make up reliability

  • stability

  • homogeneity

  • equivalence