Resistance Training Practical Exam



54 Terms

1

Reliability

a measure of the degree of consistency or repeatability of a test

  • may differ between groups based on differences in physical or emotional maturity and skill level

2

Example of a reliable test

If an athlete whose ability does not change is measured two times, the same score is obtained both times

3

Example of an unreliable test

an individual could obtain a high score on one day and a low score on another

4

Reliable

a test must be ___ to be valid, because highly variable results have little meaning

5

Determining reliability of a test - Administering the same test several times to the same group of athletes

Statistical measurements

  1. test-retest reliability - correlation of the scores from two administrations

  2. typical error of measurement - includes both the equipment error and biological variation of athletes

** any differences between the two sets of scores represent measurement error

6

Factors affecting measurement error

(difference between two sets of scores)

  1. Intrasubject (within subjects) variability

  2. Lack of interrater (between raters) reliability/agreement

  3. Intrarater (within raters) variability

  4. Failure of the test itself to provide consistent results

7

Intrasubject variability

a lack of consistent performance by the person being tested

8

Interrater reliability/Objectivity/Interrater agreement

the degree to which different raters agree in their test results over time or on repeated occasions - it is a measure of consistency

** particularly important if different scorers administer tests to different subgroups of athletes (lenient vs. less lenient scorers)

Enhancing this

  1. have a clearly defined scoring system

  2. have competent scorers who are trained and experienced with the test

  3. have the SAME SCORER(s) test a group at the beginning and the end of the training period

9

Sources of interrater differences

  • variations in calibrating testing devices

  • preparing athletes

  • running the test

  • different levels of motivation from athletes based on tester (personality, status, physical appearance, demeanor, sex)

10

Intrarater Variability

the lack of consistent scores by a given tester

causes:

  • inadequate training

  • inattentiveness/lack of concentration

  • failure to follow standardized procedures

11

Validity

the degree to which a test or test item measures what it is supposed to measure

** one of the most important characteristics of testing
