M3 S2 AI Tools in Health-Science Research

Defining AI’s Role in Health-Science Research

  • AI = systems that simulate (but do not replace) human intelligence.
    • Boost efficiency, provide novel insights, shorten time to evidence, yet remain supplemental tools.
    • Applicable to literature analysis, data synthesis, writing support, brainstorming, quantitative & qualitative analytics.
  • Current state: powerful but limited; vendors publish clear “limitations” pages—typical accuracy ceiling ≈ 80 %.
  • Guiding principle: human discernment first; AI augments, verifies, accelerates, never drives the entire project.

Core Learning Objectives of the Session

  • Define AI’s relevance to health-science and occupational-therapy (OT) research.
  • Explore tools for literature review, data analysis, and writing.
  • Identify ethical considerations & emerging publication policies.
  • Practice applying AI tools to capstone / group projects.

Literature-Review Tools

  • Semantic search engines (analyze abstracts/full text via language models)
    • Elicit (Ought; draws on the Allen Institute’s Semantic Scholar corpus)
    • Consensus, Scite, SciSpace
  • Citation-based graph tools (map articles via reference lists)
    • Connected Papers
    • Research Rabbit
    • CiteAI

Comparison & Use-Cases

  • Semantic tools
    • Strengths: rapid “big-picture” answers early in topic exploration, finds phrasing variations.
    • Weaknesses: may mis-read context, hallucinate relations, limited to open-access corpora, ≈20 % error rate.
  • Citation tools
    • Strengths: grounded in actual reference lists ⇒ lower hallucination risk, visually shows clusters/gaps, good for chain-searching seminal works.
    • Weaknesses: inherits authors’ citation errors; omits conference papers, textbooks, grey literature.

Demonstrations Recap

Elicit

  • Prompt used: “Predictors of quality of life in long-COVID clients who have had occupational therapy.”
  • Workflow: PICO input → AI screens 50 papers → returns 7.
  • Limitations exposed:
    • Only 2 of 7 papers contained a predictor variable, and only 1 addressed QoL directly, yet the tool declared that “perceived exertion is top predictor.”
    • Highlights need for manual verification of inclusion/exclusion criteria.

Connected Papers

  • Input: keywords on long-COVID & OT predictors.
  • Output: force-directed graph clustering articles.
    • Central node: “Functional consequences of long-term COVID need to be addressed by OTs” (2024).
    • Nearby clusters surfaced fatigue literature → suggests potential gap or new focus.
  • User must interpret why nodes connect (reads abstracts, checks shared citations).

CiteAI

  • Requires a 7-day trial; provides a citation-based ranking list with direct PDF links.
  • Allows iterative refining (age filters, geography, etc.).

Best-Practice Tips for Literature Search

  • Start with clearly phrased human question; let AI refine, not create.
  • Triangulate: run same query in PubMed/Google Scholar & AI tool.
  • Keep prompt history for transparency & future methods section.

AI for Data Analysis

Quantitative (SPSS & AutoML)

  • ChatGPT can walk through SPSS procedures (e.g., the steps of a Kruskal–Wallis test).
  • AutoML platforms: automate missing-value imputation, scaling, model selection.
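The Kruskal–Wallis procedure that ChatGPT can walk through in SPSS has a one-line open-source analogue; a minimal sketch using SciPy’s `kruskal`, with made-up QoL scores for three hypothetical long-COVID groups (all numbers invented for illustration):

```python
from scipy.stats import kruskal

# Hypothetical quality-of-life scores for three long-COVID groups
# (invented values, purely illustrative)
group_a = [62, 70, 55, 68, 73]   # received OT
group_b = [48, 52, 60, 45, 50]   # no OT
group_c = [66, 58, 71, 64, 69]   # partial OT

stat, p = kruskal(group_a, group_b, group_c)
print(f"H = {stat:.2f}, p = {p:.4f}")
```

As with SPSS output, a significant H statistic only says the groups differ somewhere; post-hoc pairwise comparisons are still needed to locate the difference.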

ChatGPT Dataset Walk-through (example)

  • Dataset: n = 543 school districts, 52 variables (autism prevalence, tree-canopy %, demographics).
  • AI tasks executed:
    • Identified missing-data patterns.
    • Summarised means (e.g., mean autism rate reported) & simple correlations.
    • Indicated a weak negative association between tree canopy & autism (r ≈ −0.01).
    • Can auto-generate visualisations, though at the cost of heavy compute time.
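The canopy–autism correlation in the walk-through is easy to verify yourself once the data are in a tidy table; a sketch with a small invented DataFrame (column names and values are hypothetical, not the actual study data):

```python
import pandas as pd

# Hypothetical district-level data (values invented for illustration)
df = pd.DataFrame({
    "tree_canopy_pct": [10, 25, 40, 15, 30, 55, 20, 35],
    "autism_rate":     [2.1, 2.0, 1.9, 2.2, 2.0, 1.8, 2.1, 1.9],
})

r = df["tree_canopy_pct"].corr(df["autism_rate"])  # Pearson r by default
print(round(r, 2))
```

Recomputing an AI-reported statistic like this is a quick sanity check; note that an r ≈ −0.01 is effectively no association regardless of how the tool phrases it.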

Qualitative

  • Otter.ai – accurate audio transcription with speaker separation.
  • NVivo & QDA Miner / MAXQDA / “Porkos” – house transcripts; new AI add-ons learn from the first few human-coded interviews, then auto-code the remaining ones.
  • ChatGPT coding demo:
    • Input: full interviews from pandemic motherhood study.
    • Output: frequency tables → themed suggestions (“Juggling work/school/parenting”, “Emotional overload & burnout”).
    • Limitations: theme repetition, cannot decipher nuanced overlap; still needs researcher refinement & audit trail.
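The frequency tables in the coding demo boil down to counting code labels across transcripts; a minimal sketch with `collections.Counter` over hypothetical codes (labels invented for illustration):

```python
from collections import Counter

# Hypothetical codes applied to interview excerpts
codes = ["juggling_roles", "burnout", "juggling_roles",
         "isolation", "burnout", "juggling_roles"]

freq = Counter(codes)
print(freq.most_common(2))  # → [('juggling_roles', 3), ('burnout', 2)]
```

Counting is the easy, automatable part; deciding whether “juggling_roles” and “burnout” are distinct themes or one overlapping theme is exactly the nuance the notes say still needs the researcher.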

AI for Writing & Editing

  • Grammarly (free tier highly recommended).
  • QuillBot, MaddieLoves, Paperpal, JenniAI – alternatives focusing on scientific tone.
  • Brainstorming/outlining: ChatGPT, Claude.

Citation Generation Dangers

  • Example: ChatGPT produced 5 references on sleep in older adults; 4 were valid, 1 was fabricated (plausible journal & year, but the cited population was adolescents).
  • Always cross-check each DOI/PMID.
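Part of that cross-check can be scripted: doi.org fails to resolve identifiers that do not exist. A sketch (the function name and timeout are my own; a resolving DOI only proves the identifier exists, not that it matches the claimed title, authors, or population — that still needs a human look):

```python
import urllib.request


def doi_resolves(doi: str) -> bool:
    """Return True if doi.org can resolve the DOI (i.e., it exists)."""
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        method="HEAD",
        headers={"User-Agent": "citation-checker/0.1"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status < 400
    except Exception:  # 404 for unknown DOIs, network errors, malformed input
        return False
```

Run each AI-supplied DOI through a check like this first, then open the surviving ones to confirm the metadata.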

Ethical & Policy Landscape

  • Never input PHI / sensitive data.
  • Disclose AI use in cover letter and manuscript (e.g., Open Journal of OT policy excerpt):
    • Specify which tool, which section (e.g., grammar edit, thematic suggestion).
    • AI cannot be listed as an author.
    • Humans remain liable for plagiarism, factual accuracy, image originality.
  • Institution/IRB guidelines override tool convenience.
  • Over-reliance can reduce creativity & cognitive performance—balance is key.

Recommended Research Workflow with AI

  1. Draft human research question (PICO/PECO/PEO).
  2. Preliminary semantic search (Elicit) to sense scope.
  3. Citation graph (Connected Papers) to locate seminal works & clusters.
  4. Traditional database confirmation (PubMed, CINAHL, Google Scholar).
  5. Store prompts, decisions, AI outputs in an “AI lab notebook.”
  6. For quantitative data: verify any AI-suggested tests in statistical textbooks or peer-reviewed guides.
  7. For qualitative data: hand-code initial 2-3 transcripts → train NVivo AI → audit sample outputs.
  8. Draft manuscript; run Grammarly; cite all AI assistance in methods & acknowledgements.
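Step 5’s “AI lab notebook” can be as simple as an append-only JSON-lines file; a sketch (the function name and fields are hypothetical — adapt them to your program’s documentation requirements):

```python
import datetime
import json


def log_ai_use(tool: str, prompt: str, output_summary: str,
               notebook: str = "ai_lab_notebook.jsonl") -> None:
    """Append one timestamped AI interaction to a JSON-lines notebook."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "output_summary": output_summary,
    }
    with open(notebook, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

One line per prompt is enough to reconstruct your search strategy later for the methods section and any AI-use disclosure.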

Baylor-Specific Resources

  • University site: “Generative AI for Researchers” (webinars, tool list, policy updates).
  • Library add-in: one-click “Download via Baylor” for paywalled PDFs inside any browser search.

In-Class Knowledge Check (with answers)

  1. Best tool for visual map of related articles → Connected Papers.
  2. Key ethical consideration → Disclose AI use in methodology.
  3. Common qualitative transcription tool → Otter.ai.
  4. Main limitation → AI may hallucinate sources/references.
  5. Proper student usage → Augment critical thinking & tasks, not replace them.

Practical Session / Capstone Guidance

  • Groups instructed to
    • Finish transcript coding & theming.
    • Concentrate on survey question design (demographics, quantitative Likert items, and at least 1–3 open-ended qualitative items).
    • Await feedback on recruitment email & infographic (graded complete/incomplete; IRB-level scrutiny of language).
  • Breakout rooms used for collaboration; faculty provided office hours for SPSS/qualitative advice.

Take-Home Messages

  • AI is here to stay in OT research; mastery = competitive edge.
  • Use it as a co-pilot, but keep your hands on the wheel.
  • Verify, cite, and remain ethically transparent.
  • Continuous learning essential—policies & capabilities evolve monthly.