INSTRUMENT
This refers to the questionnaire or data-gathering tool to be constructed, validated, and administered.
STANDARDIZED, MODIFIED STANDARDIZED, RESEARCHER-MADE
3 Types of Research Instrument
STANDARDIZED INSTRUMENT
A pre-existing tool or research instrument that has been thoroughly tested for reliability and validity.
MODIFIED STANDARDIZED INSTRUMENT
Starts with a standardized instrument, which is then adapted to better fit the specific context of the study. Researchers might modify it to make it more relevant to their target population or study focus.
RESEARCHER-MADE INSTRUMENT
Designed from scratch by the researcher to address very specific research questions or hypotheses.
RELIABILITY
Scores from an instrument are stable and consistent (nearly the same when researchers administer the instrument multiple times at different times)
VALIDITY
The development of sound evidence to demonstrate that the test interpretation matches its proposed use.
FACE, CONTENT, CONSTRUCT, CRITERION
4 Types of Validity
TEST-RETEST, EQUIVALENT FORMS, INTERNAL CONSISTENCY, INTER-RATER
4 Types of Reliability
FACE VALIDITY
This is also known as logical validity; it refers to a subjective process of checking the actual face, or façade, of the instrument.
FACE VALIDITY
It is determined by looking at the font style, font size, spacing, and other details that might distract the respondents while answering.
CONTENT VALIDITY
This type of validity checks the questions to see if they can answer the preset research questions. In other words, the questions need to meet the objectives of the study.
CONTENT VALIDITY
It is not measured numerically; instead, experts rely on logical judgment.
CONTENT VALIDITY
It is a logical presumption that the questions will yield the answers the researcher expects to get. Hence, it is subject to the approval of a panel of experts who are knowledgeable about the topic.
Three to five experts are suggested to compose the panel, and their criticisms will be highly regarded in validating the content.
CONSTRUCT VALIDITY
This is the degree to which the instrument actually tests the hypothesis or theory the study is measuring as a whole. If the instrument is construct valid, it should, theoretically, be able to detect what is expected to appear in the analysis once the questionnaires are retrieved.
CONSTRUCT VALIDITY
As Barrot (2017) emphasized, it is a form of “intangible or abstract variable such as personality, intelligence, or moods” (p. 115). If the instrument is not able to detect these, it is not construct valid.
CRITERION VALIDITY
This predicts that the instrument produces the same results as those of other instruments in related studies. The correlation between the results of this instrument and those of others guarantees this type of validity.
CONCURRENT VALIDITY
An instrument has this if it produces results consistent with those of instruments already validated in the past. For example, a division-wide math test is valid if students' scores match those on the region-wide math test.
PREDICTIVE VALIDITY
An instrument has this if it yields the same result in the future. For instance, a student's college entrance exam result in Mathematics is valid if the grades in his actual math subjects turn out to be parallel to it.
TEST-RETEST
This is realized if the test is given to the same set of takers after at least two weeks. If the scores are consistent, the instrument is reliable.
EQUIVALENT FORMS RELIABILITY
Two sets of tests are administered to the participants. The two forms have the same content in terms of coverage and difficulty level but differ in wording. An example is giving a diagnostic test at the beginning of the school year and an achievement test at the end.
INTERNAL CONSISTENCY RELIABILITY
This measures how well the items within an instrument measure the same construct.
INTERNAL CONSISTENCY RELIABILITY
According to Subong and Beldia (2005), this is defined as “Estimating or determining reliability of an instrument through single administration of an instrument.”
INTERNAL CONSISTENCY RELIABILITY
The respondents complete one instrument at a time; that is, it requires only a single administration of an instrument. For this reason, this is the easiest form of reliability to investigate.
CRONBACH ALPHA
Also known as coefficient alpha. It is based on the internal consistency of items in the test. It is flexible and can be used with test formats that have more than one correct answer.
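Cronbach's alpha can be sketched by hand from the item variances and the total-score variance; this Python snippet uses hypothetical Likert-type responses (the data are illustrative, not from the source):

```python
from statistics import pvariance


def cronbach_alpha(scores):
    """Cronbach's alpha for a table of scores:
    rows = respondents, columns = items."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # column-wise item scores
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Hypothetical Likert-type responses: 4 respondents, 3 items
responses = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

In practice researchers use statistical software packages rather than hand computation, but the formula above is what those packages implement.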
LEE CRONBACH
He developed Cronbach's alpha, or coefficient alpha.
LIKERT SCALE
A ________________ type of question is compatible with Cronbach's alpha. All of the above-mentioned tests have software packages that the student can use.
INTER-RATER RELIABILITY
To assure this type of reliability, two raters must provide consistent results. A Kappa coefficient value of 0.70 means that the instrument is reliable.
KAPPA COEFFICIENT
The most common statistical tool used for inter-rater reliability.
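Cohen's kappa, the usual Kappa statistic for two raters, compares the raters' observed agreement with the agreement expected by chance; a minimal sketch with hypothetical pass/fail ratings (data are illustrative, not from the source):

```python
from collections import Counter


def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments."""
    n = len(rater_a)
    # Proportion of cases where the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)


# Hypothetical pass/fail ratings of 10 essays by two raters
rater_a = ["pass", "pass", "fail", "pass", "fail",
           "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "pass", "fail", "fail", "fail",
           "pass", "pass", "fail", "pass", "pass"]
print(f"kappa = {cohen_kappa(rater_a, rater_b):.2f}")
```

For these hypothetical ratings the kappa works out to about 0.78, which clears the 0.70 threshold mentioned above.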