Graduate-level course on research methodology; current unit: constructing questionnaires/surveys.
Purpose of the session:
Debrief on an earlier reflection exercise about coping with tragedy and being present for clients.
Clarify an administrative issue: Module 2 video lost its multi-item quiz; students should upload a screenshot of the single remaining question for credit.
Launch Module 4 assignment details: final questionnaire due in Module 4 along with pilot-test data.
Personal note from instructor: wearing new clear aligners as part of a TMJ treatment journey—shared to “arrive with a clean, clear smile.”
Assignment Logistics & Timeline
Deliverables
Draft questionnaire now (Module 3).
Pilot test between now and Module 4 submission date.
Submit final questionnaire + pilot feedback + preliminary data in Module 4.
Pilot requirements
Must administer to at least one set of laypersons (friends/family) to check readability and comprehension.
Optional but recommended: swap pilots with another student group, but avoid over-burdening any single cohort member; decline politely if you receive multiple requests.
Software options
Google Forms: simplest; meets most needs.
Qualtrics: free with Baylor login; better for complex logic (branching, piping, display logic, forced response). Recommended if your survey contains conditional branches or comparison groups; a minimal model of what branching does appears below.
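Branching is configured in each tool's settings rather than written as code, but a small Python sketch can make the behavior concrete. The question IDs and wording below are hypothetical, not from any tool's API.

```python
# Minimal model of survey branching logic (illustrative only; in Google Forms
# or Qualtrics this is configured in the tool's UI, not written as code).
def next_question(answers: dict) -> str:
    """Return the ID of the next question to display, given answers so far."""
    # Hypothetical screening item: only respondents who do group work
    # should see the follow-up about group-work attitudes.
    if answers.get("q1_does_group_work") == "Yes":
        return "q2_group_work_attitude"
    return "q3_study_environment"  # everyone else skips ahead

print(next_question({"q1_does_group_work": "Yes"}))  # q2_group_work_attitude
print(next_question({"q1_does_group_work": "No"}))   # q3_study_environment
```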
Big Picture: Why Pilot?
Pilot testing uncovers
Misinterpretations of wording (literal vs intended meaning).
Missing response options and overlapping categories.
Technical glitches (required fields, branching errors, mobile display issues).
Data-cleanliness concerns (e.g., unexpected text in numeric fields); a quick validation sketch follows this list.
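As a rough illustration of the data-cleanliness point, the sketch below scans mock pilot responses for numeric fields that will not parse. The field name ("days_per_week") and the 0-7 range are hypothetical.

```python
# Sketch: scan mock pilot responses for numeric fields that won't parse.
rows = [
    {"id": 1, "days_per_week": "3"},
    {"id": 2, "days_per_week": "every day"},  # the kind of entry a pilot catches
    {"id": 3, "days_per_week": "7"},
]

for row in rows:
    raw = row["days_per_week"]
    try:
        value = int(raw)
        if not 0 <= value <= 7:
            print(f"Respondent {row['id']}: out-of-range value {value}")
    except ValueError:
        print(f"Respondent {row['id']}: non-numeric value {raw!r}")
```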
Common Questionnaire Hazards (with Illustrative Examples)
1. Basic Grammar & Mechanics
Verify punctuation: statements end with a period, questions with a question mark; partial stems (e.g., “Because …”) have neither.
Spell-check manually: survey tools’ spell-checking is often disabled in question text boxes.
Case study: “Asynchronous material prepares me for synchronous sessions” lacked a period and misspelled “synchronous/asynchronous.”
2. Missing Instructions
Always tell participants what to do (rate, select, type, rank, etc.).
Add an introductory phrase when a block of questions shares the same directions.
Example fix: “Rate your agreement with the following statement.”
3. Jargon, Ambiguity & Literacy Level
Define or remove technical terms (e.g., “auditory study environment”).
Replace vague phrases like “most conducive to learning” with precise constructs (better focus? higher grades? greater retention?).
4. Wrong Question Type / Double-Barreled Items
Each item should measure one construct.
If you need both an explanation and a numeric rating, split the item into (a) a forced-choice question and (b) an open-ended follow-up.
Example: “Explain why you selected that option” was appended to a Likert question, creating two questions in one.
Use branching or display logic when follow-ups depend on prior answers.
5. Interesting but Irrelevant Questions
Every question must tie directly to the study’s research aims, demographic descriptors, or necessary screening.
Remove “nice to know” items that bloat length and lower completion rates.
6. Producing Murky / Unusable Data
Preference questions without context (e.g., “Do you prefer synchronous or asynchronous?”) yield meaningless data if underlying reasons aren’t captured.
Open-ended text boxes alone create coding burden; pair with structured choices + an "Other (please specify)" field.
7. Overlapping or Incomplete Response Options
Ranges must be mutually exclusive and exhaustive.
Bad: 1-2 days, 2-4 days (overlap at 2).
Good: 1 day or less, 2-3 days, 4-5 days, 6-7 days.
Example: “Three to five days” vs “One to two days” vs “Day it’s due” vs “Hour it’s due” omitted 6-7 days and overlapped at “day it’s due”/“hour it’s due” (the hour it’s due falls within the day it’s due). A quick programmatic check appears below.
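Mutual exclusivity and exhaustiveness can be checked mechanically. A minimal sketch, assuming the corrected day-range options above and a 0-7 day domain:

```python
# Sketch: verify the corrected day-range options are mutually exclusive and
# exhaustive over a 0-7 day domain (labels follow the "Good" example above).
options = {
    "1 day or less": range(0, 2),  # covers 0 and 1
    "2-3 days": range(2, 4),
    "4-5 days": range(4, 6),
    "6-7 days": range(6, 8),
}

covered = []
for label, days in options.items():
    for d in days:
        if d in covered:
            print(f"Overlap: day {d} appears in more than one option ({label})")
        covered.append(d)

missing = set(range(0, 8)) - set(covered)
print("Missing days:", missing or "none -- options are exhaustive")
```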
8. Uneven Scales (Response Cliffs)
Scales must be symmetric around a neutral point with equal intervals.
Faulty example: Neutral | Slightly Effective | Effective | Very Effective → no negative options at all.
Provide balanced anchors: Very Ineffective, Ineffective, Neutral, Effective, Very Effective (a sketch of the symmetric numeric coding follows).
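One reason balance matters for analysis: a symmetric scale codes to numbers centered on zero, so the instrument itself does not pull the mean toward either pole. A minimal sketch; the -2..+2 coding is one common convention, not a requirement.

```python
# Sketch: a balanced five-point scale maps to symmetric numeric codes,
# so evenly split attitudes average to the neutral point.
SCALE = {
    "Very Ineffective": -2,
    "Ineffective": -1,
    "Neutral": 0,
    "Effective": 1,
    "Very Effective": 2,
}

responses = ["Very Ineffective", "Neutral", "Very Effective"]
codes = [SCALE[r] for r in responses]
print(sum(codes) / len(codes))  # 0.0 -- no built-in lean in the scale
```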
9. Improper Use of “Other” Field
In forced-choice items, offer “Other” only when legitimate options cannot all be anticipated; otherwise force a valid choice.
Do not add “Other” on numeric or scalar items; off-scale answers cannot be analyzed quantitatively.
10. Assumptive Wording
Avoid presupposing behaviors or contexts.
“Which tool helps you manage time?” assumes respondent already uses a tool.
Better: “Which of the following tools, if any, do you use to help manage your time?”
11. Excessively Open-Ended Prompts
“List three words describing group projects” yields heterogeneous text (single words, phrases, hyphenations).
Hard to code; consider structured lists plus an “Other” textbox.
12. Data Entry & Coding Burden
Simulate data-entry: copy a sample of responses into rows/columns to see whether analysis will be straightforward (see the pandas sketch below).
If responses cannot be coded into tidy columns, refine the question.
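A minimal version of that simulation in pandas, using hypothetical column names and mock responses; the point is to spot columns that will not analyze cleanly before real data arrive.

```python
# Sketch of the simulate-data-entry step (requires pandas).
import pandas as pd

pilot = pd.DataFrame({
    "respondent": [1, 2, 3],
    "days_per_week": [3, 7, 2],  # tidy: numeric, one value per cell
    "time_tools": ["planner;calendar", "app", ""],  # messy: multi-select packed into one cell
})

print(pilot["days_per_week"].describe())  # straightforward to summarize

# The multi-select column needs splitting before it can be counted -- a sign
# the question (or its export format) should be refined.
print(pilot["time_tools"].str.split(";").explode().value_counts())
```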
Illustrative Fixes & Redesigns
Original: “Choose the auditory study environment most conducive to your learning.”
Fix: “Choose the study environment you use most often. Options: Loud (TV/music playing), Moderate background noise (coffee shop), Quiet (library), Silent (noise-canceling headphones).”
Original: “During your higher education pursuits (undergraduate and/or graduate school) please explain your opinions on group work as favorable …”
Fix split into:
Q1 (Likert): “During graduate studies, how would you rate your overall attitude toward group work?”
Q2 (Open-ended): “Please describe one factor that most influenced the rating above.”
Original: “What methods do you use to manage your time? paper/digital planner, paper/digital calendar…”
Fix: Break each medium into discrete options; allow multiple selection and an ‘Other (app name)’ textbox.
Scaled example: “On a scale of 1 (not beneficial) to 10 (very beneficial), rate how group projects enhance your learning.”
Good Question Characteristics (Positive Models)
Clear construct & timeframe: “How many days per week do you interact with course materials?” with answer set {0,1,2,3,4,5,6,7}.
Balanced numeric scale: 1-10 slider with labeled anchors.
Exhaustive list without overlap.
Directions embedded (“Select one,” “Check all that apply,” “Rate each statement”).
Avoid brand names unless brand is central to research; e.g., replace “Google Calendar” with “Digital calendar (e.g., Google, Outlook).”
Qualitative vs Quantitative Items
Qualitative (open text) useful for depth, motivation, unforeseen themes.
Quantitative (forced choice, Likert, numeric) needed for statistical comparisons.
Use mixed-methods design wisely: deploy open-ended follow-ups sparingly and plan the coding scheme in advance (Module 3 coding workshop upcoming); a rough sketch of keyword-based coding follows.
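To show what “plan the coding scheme in advance” can look like, here is a deliberately simple keyword-matching sketch. The codebook entries are hypothetical, and real content analysis (covered in Module 3) relies on a full codebook plus inter-rater checks rather than keyword hits.

```python
# Sketch of a pre-planned coding scheme for open-ended answers.
# Keyword matching is an oversimplification of content analysis.
CODEBOOK = {
    "scheduling": ["time", "schedule", "meet"],
    "workload": ["unfair", "carried", "effort"],
}

def code_response(text: str) -> list:
    """Return every code whose keywords appear in the response."""
    text = text.lower()
    hits = [code for code, words in CODEBOOK.items()
            if any(w in text for w in words)]
    return hits or ["uncoded"]

print(code_response("Hard to find a time everyone could meet"))  # ['scheduling']
print(code_response("I carried most of the effort"))             # ['workload']
```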
Inclusivity, Privacy & Participant Burden
Cultural inclusivity: avoid idioms (“hit the books”) and region-specific slang.
Privacy: skip collecting identifying info unless essential; if collected, store separately.
Participant burden: keep survey length short; remove extraneous items; pilot to gauge completion time.
Instructor Tips & Reminders
Force answers when the item is essential; otherwise allow skip.
For branching logic, Qualtrics > Google Forms.
When unaided recall is impossible (e.g., exact dates), offer ranges.
Always include an “I don’t know / Not applicable” when some respondents truly lack the information.
Connections to Previous & Future Modules
Module 1: Research questions & hypotheses set the blueprint; only include survey items that trace back to these.
Module 3 (next week): Qualitative coding techniques; will demonstrate how open-ended answers are analyzed (content analysis, coding trees, inter-rater reliability). No additional class time for questionnaire edits next week.
Module 4: Final questionnaire, pilot report, and preliminary descriptive stats due.
Real-World Relevance
Poorly designed surveys waste resources and can misinform policy/clinical practice.
Balanced, well-piloted questionnaires enhance evidence-based decision-making in occupational therapy and other disciplines.
Action Checklist Before Module 4 Submission
[ ] Review every item for grammar, spelling, clarity.
[ ] Verify response options are exhaustive & non-overlapping.
[ ] Ensure scales are even and anchored.
[ ] Remove brand names unless integral.
[ ] Explicitly state instructions for each item or block.
[ ] Mark required items appropriately; include NA/Other where justified.
[ ] Conduct at least one layperson pilot; collect feedback on confusion points.
[ ] Simulate data coding; confirm each response fits intended analysis.