Reliability, Validity, Intervention, and Data Gathering
Lesson 1: Validity and Reliability
1.1 Concept of Validity
Definition: Validity pertains to how well a research instrument measures what it intends to measure.
Example: A questionnaire that incorrectly measures grammar rather than discourse management is not valid.
Importance: Ensures that results accurately reflect the construct the study intends to measure.
1.2 Concept of Reliability
Definition: Reliability refers to the consistency of results when a research instrument is administered multiple times.
Example: If the same test yields the same scores upon repeated administration, it is considered reliable.
Analogy: A reliable weighing scale gives consistent readings; a valid one reads the true weight. A scale can be consistent (reliable) yet consistently wrong (not valid).
1.3 Types of Validity
Face Validity: Subjective assessment of whether the instrument appears to measure what it intends to measure.
Content Validity: Degree to which the instrument covers the study objectives; established through expert assessment of whether the items are relevant to what is being measured.
Construct Validity: Alignment of the measurement method with the concept being measured; requires careful parameter development.
Concurrent Validity: Comparing new instrument results with existing validated instruments to check correlation.
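Concurrent validity is usually checked with a correlation coefficient. The sketch below computes a Pearson correlation between scores from a new instrument and an established one; the score lists are hypothetical illustration data, not values from any actual study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: the same respondents answer both instruments
new_instrument = [12, 15, 9, 18, 14, 11]
validated_instrument = [13, 16, 10, 19, 13, 12]

r = pearson_r(new_instrument, validated_instrument)
print(f"concurrent validity r = {r:.2f}")
```

A correlation close to 1 suggests the new instrument tracks the validated one; a low correlation would cast doubt on its concurrent validity.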
1.4 Establishing Reliability
Homogeneity/Internal Consistency: Measures how well all items of the instrument capture the same construct. High inter-item consistency indicates good reliability.
Techniques: Split-Half reliability, Kuder-Richardson, Cronbach's alpha.
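Cronbach's alpha, the most widely used of these internal-consistency coefficients, can be computed directly from per-item scores. The data below are hypothetical (four Likert-type items, five respondents); Kuder-Richardson (KR-20) is the same formula applied to dichotomous (0/1) items.

```python
def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, each holding every
    respondent's score on that item."""
    k = len(item_scores)                     # number of items
    n = len(item_scores[0])                  # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total score of each respondent across all items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 5-point Likert responses, one list per item
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 4],
    [5, 5, 3, 4, 4],
    [4, 5, 2, 5, 3],
]
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Alpha values of about .70 or higher are commonly treated as acceptable internal consistency.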
Stability: Reflects repeatability; instruments should yield similar scores over time.
Techniques: Test-Retest and Parallel-form reliability.
Equivalence: Consistency among different users or forms of the instrument.
Example: Randomly splitting the items into two halves that should show similar means and variances in responses.
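The random-split idea above is the basis of split-half reliability: items are divided into two halves, the half-scores are correlated, and the Spearman-Brown formula corrects the result up to full-test length. A minimal sketch with hypothetical per-item scores:

```python
import random
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: one row per respondent, one column per item
scores = [
    [4, 5, 4, 5, 3, 4],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 4, 5, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
]

item_ids = list(range(len(scores[0])))
random.seed(1)                 # fixed seed so the random split is reproducible
random.shuffle(item_ids)
half_a = item_ids[: len(item_ids) // 2]
half_b = item_ids[len(item_ids) // 2 :]

totals_a = [sum(row[i] for i in half_a) for row in scores]
totals_b = [sum(row[i] for i in half_b) for row in scores]

r_half = pearson_r(totals_a, totals_b)
# Spearman-Brown correction: estimate full-test reliability from the halves
r_full = (2 * r_half) / (1 + r_half)
print(f"split-half r = {r_half:.2f}, Spearman-Brown = {r_full:.2f}")
```

The Spearman-Brown estimate is always at least as high as the raw half-test correlation, because a longer test is more reliable than either half alone.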
Lesson 2: Describing Intervention
2.1 Overview of Experimental Research Design
Definition: In experimental research, the researcher manipulates an independent variable through an intervention to observe its effect on a dependent variable.
Purpose: To measure the effects of a treatment or intervention on a population.
2.2 Components of Describing Intervention
Background Information: Explanation of the intervention’s origins, relevance, context, and duration.
Differences and Similarities: Outline the expectations for experimental vs. control groups.
Procedures: Detailed description of how the intervention will be implemented with the experimental group.
Basis of Procedures: Justification for the choice of intervention, whether derived from past research or theoretical frameworks.
2.3 Example of Research Intervention
Research Framework: Task-based language teaching aimed at enhancing oral competence among Automotive Service students.
Module Details: Designed around learners' kinesthetic learning styles and structured by Ellis's task-based framework with pre-task, during-task, and post-task phases.
Implementation Timeline: Proposed for the second quarter of the academic year, beginning with a pre-test and concluding with a post-test.
Lesson 3: Planning Data Collection Procedure
3.1 Importance of Data Collection
Essential for testing hypotheses and gathering information-rich data.
Data should be reliable and valid for effective statistical analysis.
3.2 Phases of Data Collection
Before Data Collection:
Adapt or construct the research instrument.
Identify necessary authorities for permission.
Determine sample size and select respondents.
Obtain consent from participants (or parents for minors).
Validate the research instrument through expert feedback; pilot-test it if needed.
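For the sample-size step, one rule of thumb often taught for survey research is Slovin's formula, n = N / (1 + N e²), where N is the population size and e the margin of error. A minimal sketch (the population figure is a made-up example):

```python
import math

def slovin(population, margin_of_error=0.05):
    """Slovin's formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Hypothetical population of 1,000 students at a 5% margin of error
print(slovin(1000))  # → 286
```

Slovin's formula is a convenience shortcut; power analysis or published sample-size tables give more defensible figures when the study design is known.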
During Data Collection:
Administer the instrument or implement interventions.
Collect and record responses accurately.
After Data Collection:
Summarize data in tabular format for clarity.
Analyze the collected data to evaluate hypotheses.