LESSON 1-2: RESEARCH INSTRUMENT & VALIDITY AND RELIABILITY (copy)

29 Terms

1

INSTRUMENT

This refers to the questionnaire or data-gathering tool to be constructed, validated, and administered.

2

STANDARDIZED, MODIFIED STANDARDIZED, RESEARCHER-MADE

3 Types of Research Instrument

3

STANDARDIZED INSTRUMENT

A pre-existing tool or research instrument that has been thoroughly tested for reliability and validity.

4

MODIFIED STANDARDIZED INSTRUMENT

Starts with a standardized instrument, which is then adapted to better fit the specific context of the study. Researchers might modify it to make it more relevant to their target population or study focus.

5

RESEARCHER-MADE INSTRUMENT

Designed from scratch by the researcher to address very specific research questions or hypotheses.

6

RELIABILITY

Scores from an instrument are stable and consistent (nearly the same when researchers administer the instrument multiple times at different times)

7

VALIDITY

The development of sound evidence to demonstrate that the test interpretation matches its proposed use.

8

FACE, CONTENT, CONSTRUCT, CRITERION

4 Types of Validity

9

TEST-RETEST, EQUIVALENT FORMS, INTERNAL CONSISTENCY, INTER-RATER

4 Types of Reliability

10

FACE VALIDITY

This is also known as logical validity, which refers to a subjective process of checking the actual face, or façade, of the instrument.

11

FACE VALIDITY

It is determined by looking at the font style, font size, spacing, and other details that might distract the respondents while answering.

12

CONTENT VALIDITY

This type of validity checks the questions to see if they can answer the preset research questions. In other words, the questions need to meet the objectives of the study.

13

CONTENT VALIDITY

It is not measured numerically; instead, experts rely on logical judgment.

14

CONTENT VALIDITY

It is a logical presumption that the questions will yield the answers the researcher expects to get. Hence, it is subject to the approval of a panel of experts who are knowledgeable about the topic.

Three to five experts are suggested to fill the panel, and their criticisms will be highly regarded in validating the content.

15

CONSTRUCT VALIDITY

This is the degree to which the instrument actually tests the hypothesis or theory the study is measuring as a whole. If the instrument has construct validity, it should detect, once the completed instruments are collected, what the theory predicts should appear in the analysis.

16

CONSTRUCT VALIDITY

As Barrot (2017) emphasized, a construct is an “intangible or abstract variable such as personality, intelligence, or moods” (p. 115). If the instrument is not able to detect these, it is not construct valid.

17

CRITERION VALIDITY

This predicts that the instrument produces the same results as those of other instruments in a related study. The correlation between the results of this instrument and those of others establishes this type of validity.

18

CONCURRENT VALIDITY

An instrument has this if it produces results consistent with those of instruments already validated in the past. For example, a division-wide math test is valid if students’ scores match those on the region-wide math test.

19

PREDICTIVE VALIDITY

An instrument has this if its results accurately forecast a future outcome. For instance, a student’s college entrance exam result in Mathematics is valid if the student’s actual grades in math subjects turn out parallel to it.

20

TEST-RETEST

This is established by giving the test to the same set of takers twice, at least two weeks apart. If the scores are consistent, the instrument is reliable.

21

EQUIVALENT FORMS RELIABILITY

Two sets of tests are administered to the participants. They have the same coverage and difficulty level but different wording. An example is giving a diagnostic test at the beginning of the school year and an achievement test at the end.

22

INTERNAL CONSISTENCY RELIABILITY

This measures how well the items within a single instrument measure the same construct.

23

INTERNAL CONSISTENCY RELIABILITY

According to Subong and Beldia (2005), this is defined as “Estimating or determining reliability of an instrument through single administration of an instrument.”

24

INTERNAL CONSISTENCY RELIABILITY

The respondents complete one instrument at a time; that is, it requires only a single administration of an instrument. For this reason, this is the easiest form of reliability to investigate.

25

CRONBACH ALPHA

Also known as coefficient alpha. It is based on the internal consistency of items in the test. It is flexible and can be used with test formats that have more than one correct answer.
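The alpha computation itself is short. A minimal sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using made-up Likert-scale responses for illustration:

```python
# Cronbach's alpha sketch (a common internal-consistency estimate).
# `responses` is hypothetical: each inner list is one respondent's
# answers to a 4-item Likert-scale questionnaire (1-5).
from statistics import variance

responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]

k = len(responses[0])                                # number of items
item_vars = [variance(col) for col in zip(*responses)]
total_var = variance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # 0.70 or higher is commonly read as acceptable
```

Statistical packages compute the same quantity; this sketch only shows what the coefficient is measuring: items that vary together (consistently) shrink the ratio of item variance to total variance, pushing alpha toward 1.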

26

LEE CRONBACH

He developed Cronbach’s alpha, or coefficient alpha.

27

LIKERT SCALE

A ________________ type of question is compatible with the Cronbach alpha. All the above-mentioned tests have software packages that students can use.

28

INTER-RATER RELIABILITY

To assure this type of reliability, two raters must provide consistent results. A Kappa coefficient value of 0.70 means that the instrument is reliable.

29

KAPPA COEFFICIENT

The most common statistical tool used for inter-rater reliability.
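For two raters assigning categorical judgments, Cohen's kappa compares observed agreement (p_o) with the agreement expected by chance (p_e): kappa = (p_o - p_e) / (1 - p_e). A minimal Python sketch with made-up ratings:

```python
# Cohen's kappa sketch for inter-rater reliability between two raters
# who each categorize the same 10 items. The labels are hypothetical.
from collections import Counter

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # p_o

# Chance agreement: for each category, the probability both raters
# pick it independently, summed over categories.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2  # p_e

kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")  # 0.70 or higher is commonly read as reliable agreement
```

Unlike simple percent agreement, kappa corrects for the agreement two raters would reach by guessing, which is why it is the usual choice for inter-rater reliability.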