Foundational Information for Assessment in Speech-Language Pathology - Key Terms
Purpose of assessment in SLP: Systematically obtain information and use it to make judgments/decisions (diagnosis, prognosis, referrals, treatment needs, focus/frequency/duration, session structure).
Foundational integrity of assessment: Thorough, uses multiple methods, evidence-based, tailored to the individual client.
Thorough: incorporate as much relevant information as possible for accurate diagnoses and recommendations.
Multiple methods: interview/case history, observations, formal and informal testing.
Evidence-based: relies on valid and reliable approaches; findings reflect client abilities and disabilities.
Tailored: materials appropriate for client’s age, gender, skill level, ethnocultural background.
Professional Expectations for Clinicians
Clinicians must maintain professional integrity and achieve high clinical expertise; competence matters across populations and disorders.
Practice within areas of competence; distinguish impostor feelings from actual knowledge gaps; rely on education, training, and available resources.
When uncertain, seek knowledge from experienced colleagues, books/journals, podcasts, videos, reputable websites.
Bias Awareness and Professional Behavior
Be aware of personal and societal biases; biases should not affect client-clinician relationships or assessment outcomes.
Treat all clients with respect; refrain from letting negative attitudes influence impressions or decisions.
Code of Ethics (ASHA)
ASHA Code of Ethics provides an ethical framework for professional behavior and practice; ethics support client welfare and the integrity of the profession.
Resources and related guidelines exist on ASHA website, including:
Scope of Practice in Speech-Language Pathology: broad practice view; definitions; practice domains.
Preferred Practice Patterns for the Profession of Speech-Language Pathology: expectations for quality care; service provision, screening/assessment/intervention; clinical processes; documentation.
Position Statements: 40+ statements on practice issues (workload in schools, discredited techniques, racism, endoscopic swallowing assessment, etc.).
Practice Guidelines and Knowledge & Skills: evidence-based standards for defined practice areas; admission/discharge criteria, services for severe disabilities, neonatal consent/Medicaid guidance, NICU services, etc.
Practice Portal (ASHA)
Practice Portal describes and provides evidence-based recommendations for 60+ topics; outlines roles/responsibilities in practice areas (assessment guidelines, etc.). Topics include accent modification, autism, documentation, pediatric feeding, etc.
Principle of Ethics I: Welfare of Clients
Rules A–T (summarized):
A. Serve only within competence; use resources/referrals when needed.
B. Do not discriminate in services or research; uphold equity.
C. Do not misrepresent credentials; inform clients of names/roles/credentials of those providing services.
D–G. Supervision and delegation: CCC holders may delegate tasks only to adequately prepared and supervised personnel; preserve client welfare and do not delegate tasks that require unique professional skills or judgment.
H. Obtain informed consent; inform clients about risks, technology used, and expected outcomes; obtain authorization from a legally authorized representative when the client's decision-making capacity is diminished.
I. Include participants in research/teaching only with voluntary, informed consent.
J. Accurately represent purposes of services/products/research; follow guidelines and humane treatment in research.
K–P. CCC holders evaluate the effectiveness of services, use evidence-based clinical judgment, align telepractice with professional standards, maintain confidentiality, and secure client data.
Q–R. Do not guarantee outcomes; protect confidentiality; ensure records are timely and accurate; avoid letting personal issues interfere with services.
S. Report colleagues who cannot safely provide services to the appropriate authorities.
T. Provide continuity and alternatives when ceasing services.
Principle of Ethics II: Competence and Performance
Rules A–H (selected highlights):
A. Work within scope of practice/competence; consider certification, training, experience.
B. Individuals who do not hold the CCC may engage in clinical service activities only to the extent permitted by current laws and regulations.
C. Commit to lifelong learning; maintain professional competence.
D–H. Conduct research within regulatory structures; ensure staff do not perform beyond their competence; supervise staff appropriately; use appropriate technology and ensure proper calibration.
Principle of Ethics III: Honesty and Integrity
Rules A–P (selected highlights):
A. Do not misrepresent credentials/competence; ensure honest reporting of qualifications.
B. Avoid conflicts of interest; disclose/manage conflicts if avoidance is not possible.
C–G. Do not misrepresent diagnostic information, services, or research outcomes; report truthfully; ensure advertising and public statements align with professional standards and contain no misrepresentation.
M. Give credit proportionate to contributions; avoid plagiarism.
N. Do not engage in sexual activities with individuals over whom professional authority is exercised; maintain boundaries.
O–P. If violations suspected, address collaboratively or inform ethics board; ensure colleagues adhere to standards.
Principle of Ethics IV: Dignity, Autonomy, and Interprofessional Collaboration
Rules A–O (highlights):
A. Collaborate with colleagues and other professions to deliver high-quality care.
B. Exercise independent professional judgment when directives may impede welfare.
C–F. Communicate with colleagues following professional standards; avoid conduct that harms the profession; avoid harassment/power abuse.
G–H. Provide appropriate supervision to those you supervise; avoid false statements in applications; ensure fair credit and non-discrimination.
I–O. Do not engage in harassment or inappropriate conduct; report violations; address ethics complaints properly; comply with laws.
Ethics in Practice: Harassment, Complaints, and Reporting
Provisions against harassment; reporting mechanisms; procedures for complaints; truthful reporting in ethics processes.
Self-report criminal convictions or disciplinary actions to the ASHA Ethics Office within 60 days; include relevant documentation.
Code of Ethics: Practical Tools and References
Form 1–1: Release of Information (ROI) form for HIPAA-compliant sharing; consider when sharing PHI.
Form 1–2: Standardized Test Evaluation Form for assessing test manuals and psychometric quality.
Code of Fair Testing Practices in Education (JCTP)
Purpose: Ethical testing of all individuals regardless of background; applicable to tests in educational settings and beyond.
Focus: fairness, validity, and appropriate use of tests.
Code of Fair Testing Practices in Education (Sections A–D, summarized)
A. Selecting appropriate tests: define purpose, content, test-taker characteristics; review test content and quality; involve knowledgeable personnel; review technical quality and content of practice materials; ensure accommodations and bias checks; consider diverse subgroups.
B. Administering and Scoring: standardization of administration; provide accommodations; familiarize test formats; protect security; train scorers; correct scoring errors; maintain confidentiality of scores.
C. Reporting/Interpreting: interpret results with content, norms, evidence, limitations; consider modified tests; avoid inappropriate uses; report performance standards; avoid relying on a single score; provide interpretation for groups; document procedures and inclusions/exclusions; communicate results timely; monitor test use.
D. Informing Test Takers: communicate honestly and ethically about tests and how results will be used; avoid misrepresentation; disclose conflicts of interest; report truthfully; uphold professional standards in public statements about testing.
Other Foundational Documents and Guidelines
HIPAA (Health Insurance Portability and Accountability Act, 1996): protect PHI; rules for disclosures; minimum necessary information; NPI requirement; privacy policies; access controls; accounting of disclosures; business associates compliance.
HIPAA essentials relevant to SLPs:
NPI number required; copies of privacy policy provided to clients; client acknowledgment of receipt.
PHI handled confidentially; minimum necessary disclosure; ePHI standards; accounting of disclosures; business associates compliance.
ROI forms for compliant information sharing.
FERPA (Family Educational Rights and Privacy Act) applies in educational settings; HIPAA exemptions may apply depending on setting.
Psychometric Principles (Measurement Science)
Core idea: psychometrics is the science of measuring human traits, abilities, and processes; assessment must adhere to validity, reliability, standardization, and freedom from bias.
Validity: does the test measure what it claims to measure? Types:
Face validity: looks like it measures the intended construct; superficial judgment of appearance.
Content validity: test content represents the domain; judged by experts; related to but more rigorous than face validity.
Construct validity: test measures the theoretical construct it purports to measure.
Criterion validity: relationship to an external criterion; includes concurrent and predictive validity.
Concurrent validity: correlates with an established measure administered at about the same time.
Predictive validity: predicts performance on a future criterion (e.g., SAT predicting college performance).
Reliability: consistency of results across time, forms, or raters; types (a computation sketch follows this list):
Test-retest reliability: stability over time.
Internal consistency / split-half reliability: correlation between halves of a test; halves should be comparable.
Rater reliability: agreement among raters; intrarater (same rater) and interrater (different raters).
Alternate-form reliability (parallel forms): correlation between two equivalent forms of a test.
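As a concrete illustration of the coefficients above, here is a minimal sketch of split-half reliability computed from hypothetical item-level data for a single administration; the Spearman-Brown correction (a standard psychometric adjustment, not drawn from this text) is applied because the correlation between half-tests underestimates full-test reliability.

```python
from statistics import correlation  # Pearson r; Python 3.10+

# Hypothetical item scores (1 = correct, 0 = incorrect) for six examinees
items = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 1, 1],
]

# Split each examinee's items into odd- and even-numbered halves and total each half
odd_totals = [sum(row[0::2]) for row in items]
even_totals = [sum(row[1::2]) for row in items]

r_half = correlation(odd_totals, even_totals)  # correlation between the halves (~0.81 here)
r_full = (2 * r_half) / (1 + r_half)           # Spearman-Brown estimate for the full test (~0.89)
print(f"Half-test r = {r_half:.2f}; corrected split-half reliability = {r_full:.2f}")
```

The same `correlation` call works for test-retest reliability (scores from two occasions) or alternate-form reliability (scores from two parallel forms).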
Standardization: standardized tests have uniform administration and scoring; allow comparisons to normative groups.
Test manuals include: purpose, age range, test construction, administration/scoring, norms, normative sample demographics, validity and reliability evidence, standard error of measurement, confidence intervals.
Sensitivity and Specificity (diagnostic accuracy):
Sensitivity: probability of identifying a disorder when it is present; $\text{Sensitivity} = \frac{TP}{TP + FN}$.
Specificity: probability of identifying the absence of a disorder when it is truly absent; $\text{Specificity} = \frac{TN}{TN + FP}$.
Ideal values approach 1.0; a common clinical threshold is at least 0.80 (80%) for useful diagnostic decisions.
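A minimal sketch of how sensitivity and specificity fall out of a 2×2 diagnostic table; the counts below are hypothetical, not taken from any published test.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Probability the test identifies the disorder when it is truly present."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Probability the test rules out the disorder when it is truly absent."""
    return tn / (tn + fp)

# Hypothetical validation counts: 45 true positives, 5 false negatives,
# 88 true negatives, 12 false positives.
sens = sensitivity(tp=45, fn=5)   # 0.90 -- misses 10% of true cases
spec = specificity(tn=88, fp=12)  # 0.88 -- 12% of unaffected examinees flagged
print(f"Sensitivity = {sens:.2f}, Specificity = {spec:.2f}")
print("Meets the 0.80 benchmark:", sens >= 0.80 and spec >= 0.80)
```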
Freedom from Bias: test should be appropriate for individual; nondiscriminatory across gender, ethnicity, disability, culture, language, age, etc.
Types of bias:
Item bias: individual test items favor a group.
Intrinsic bias: overall test tends to favor a group.
Extrinsic bias: differences in outcomes due to society, not the test.
Assessment Methods (Data-Gathering Approaches)
Purpose: draw conclusions about an individual’s communicative abilities; combine multiple data sources.
Data sources include:
Information from clients and others (PROMs, caregiver input, teachers, doctors, coworkers, friends).
Case History Forms: background, medical/educational/developmental histories, current concerns.
Questionnaires and Inventories: open- or closed-ended questions about the behavior being assessed.
Rating Scales: predefined scale points used to rate the behavior being assessed.
Checklists: lists of target behaviors to be marked as observed.
Interviews: direct conversations; traditional clinician-led vs ethnographic interviews (informant-guided responses).
PROMs (Patient-Reported Outcome Measures): standardized tools to capture subjective experiences/perceptions.
Ethnographic Interview:
Open-ended, informant-driven questions; clinician restates responses to clarify; examples of questions:
"What is a typical morning like in your household?"
"Tell me about your daughter’s social playdates."
"In what ways does your stutter impact you at work?"
"What are some things you do when you can’t come up with the word you want?"
"Please give some examples of what he does when he is not understood."
Purpose: safeguard against clinician bias; view concerns from client’s perspective.
Observation:
Direct observation in natural or structured contexts; forms include:
Naturalistic Observation: observe during daily activities; gather video samples.
Systematic Observation/Contextual Analysis: observe a behavior across multiple situations to assess environmental effects.
Simulated Observations/Structured Play: create realistic but controlled scenarios to elicit responses.
Advantages: contextualized, functional, individualized.
Disadvantages: time-consuming; potential to miss behaviors; requires clinical experience; may be less efficient.
Speech-Language Sample Analysis:
Collect a spontaneous sample of 50–200 utterances, ideally across multiple settings; analyze it to understand functional abilities and difficulties.
Advantages: naturalistic; reveals functional deficits and differential effects of disorders; supports differential diagnosis.
Disadvantages: time-consuming; requires expertise; may be difficult to obtain representative samples; behaviors may be missed.
Dynamic Assessment:
Test-teach-retest approach: measure current performance; provide a mediated learning experience (MLE) to teach strategies; re-test and compare; identify which teaching strategies are effective.
Advantages: highlights learning potential; identifies effective strategies; good for differentiating disorder vs. difference (multicultural contexts); individualized.
Disadvantages: less objective; requires high clinical skill; planning-intensive; not efficient; may miss behaviors.
Standardized Tests (Formal Tests):
Most are norm-referenced; some are criterion-referenced; either type is standardized when administration and scoring procedures are uniform.
Norm-Referenced Tests: compare to a normative sample; establish a normal distribution via the norming group.
Criterion-Referenced Tests: compare against a defined criterion or baseline of performance.
Advantages: objective, efficient administration, broad comparison to peers or fixed standards, widely recognized for cross-professional communication.
Disadvantages: margins of error; limited individualization; static (measures what is known, not how learned); testing may be unnatural; limited functional impact data; strict adherence to manual required for validity.
Administering/Interpreting Standardized Tests:
Before administering, read the manual; understand purpose, population, psychometrics, and administration guidelines.
Consider sensitivity/specificity; use Form 1–2 (Standardized Test Evaluation Form) to assess diagnostic strength; consult test reviews (Buros Center).
Administration, Scoring, and Interpretation Details
Basals and Ceilings:
Basal: the item at which scoring/administration begins; ceiling: the item at which testing stops; rules vary by test; some tests have no basal or ceiling, others apply them only to certain subtests; determine via the test manual.
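Because basal and ceiling rules differ across instruments, the sketch below assumes a hypothetical ceiling rule of five consecutive incorrect items (a basal rule such as three consecutive correct items would be handled analogously); the specific test manual always takes precedence.

```python
def find_ceiling(responses, consecutive_wrong=5):
    """Return the index of the item at which a hypothetical ceiling rule
    (`consecutive_wrong` incorrect items in a row) is met.
    `responses` holds 1 (correct) / 0 (incorrect) in administration order."""
    streak = 0
    for i, r in enumerate(responses):
        streak = streak + 1 if r == 0 else 0
        if streak == consecutive_wrong:
            return i  # items after this index are not administered
    return len(responses) - 1  # no ceiling reached; all items given

# Hypothetical item-by-item record
responses = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1]
ceiling_index = find_ceiling(responses)
# Many manuals count correct items up to the ceiling (and credit items below the basal);
# the counting rule here is illustrative only.
raw_score = sum(responses[:ceiling_index + 1])
print(f"Ceiling reached at item {ceiling_index + 1}; raw score = {raw_score}")  # item 14, raw score 6
```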
Raw Score:
Initial count of correct/incorrect responses; some items may be partially correct; refer to manual for scoring rules; consider recording sessions to verify responses.
Normative Data and Norms:
Norms establish distribution for a population; normal distribution characterized by mean and standard deviation; bell curve visuals (Figure 1–1).
Norms derived from a standardization sample; represent population for whom the test is intended; large enough sample.
Understanding Normed Scores (conversions among the metrics below are sketched in code after this list):
Standard Score: mean 100, SD 15; about 68% of scores fall within 85–115.
Below-average and above-average ranges are defined by standard-score cutoffs (commonly more than 1 SD below or above the mean, i.e., below 85 or above 115; exact descriptive ranges vary by test).
Percentile Rank: percentage of peers scoring at or below a given score; median is 50th percentile.
A percentile above 84th is above average; below 16th is below average.
Scaled Score: mean 10, SD 3; used on subtests to reflect skill-specific performance; does not strictly follow a normal distribution.
Z-Score: number of standard deviations from the mean; mean 0, SD 1.
Stanine: 9-unit scale; mean 5, SD 2; most people score 4–6; extremes 1 or 9 are less common.
Age/Grade Equivalence: the age or grade at which a given raw score is the average performance; considered one of the least useful and potentially misleading measures; never rely on age or grade equivalents alone.
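Because the standard score, scaled score, z-score, and stanine metrics listed above are all re-expressions of distance from the normative mean, conversions among them can be sketched as follows; percentile rank is obtained from the normal CDF. The values are illustrative and do not come from any specific test's norm tables.

```python
from statistics import NormalDist

def z_from_standard(standard_score: float, mean: float = 100, sd: float = 15) -> float:
    """Convert a standard score (mean 100, SD 15) to a z-score."""
    return (standard_score - mean) / sd

def scaled_from_z(z: float) -> float:
    """Scaled-score metric: mean 10, SD 3."""
    return 10 + 3 * z

def stanine_from_z(z: float) -> int:
    """Stanine metric: mean 5, SD 2, clipped to the 1-9 range."""
    return max(1, min(9, round(5 + 2 * z)))

def percentile_from_z(z: float) -> float:
    """Percentage of the normative distribution scoring at or below this point."""
    return NormalDist().cdf(z) * 100

standard = 85                  # one SD below the mean
z = z_from_standard(standard)  # -1.0
print(f"Standard {standard} -> z {z:.1f}, scaled {scaled_from_z(z):.0f}, "
      f"stanine {stanine_from_z(z)}, percentile {percentile_from_z(z):.0f}")
# Output: Standard 85 -> z -1.0, scaled 7, stanine 3, percentile 16
```

Note how the 16th-percentile result lines up with the below-average cutoff described above.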
Confidence Intervals (CIs):
Provide a range within which the examinee's true score is likely to lie; typical confidence levels are 90% and 95%.
Higher confidence levels produce wider score ranges; CIs are especially useful in borderline cases to support eligibility and diagnostic decisions.
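A brief sketch of building a confidence interval around an obtained score using the standard error of measurement reported in the manual; the formula SEM = SD × √(1 − reliability) is standard psychometrics rather than something stated in this text, and the numbers are hypothetical.

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement from the test SD and a reliability coefficient."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(obtained: float, sem_value: float, level: float = 0.95):
    """Symmetric interval around the obtained score; z = 1.645 for 90%, 1.96 for 95%."""
    z = {0.90: 1.645, 0.95: 1.96}[level]
    half_width = z * sem_value
    return obtained - half_width, obtained + half_width

# Hypothetical test: SD = 15, reliability = 0.91, obtained standard score = 82
s = sem(sd=15, reliability=0.91)              # 4.5
low, high = confidence_interval(82, s, 0.95)  # roughly 73.2 to 90.8
print(f"SEM = {s:.1f}; 95% CI = {low:.1f} to {high:.1f}")
# The 90% interval (82 +/- 7.4) is narrower, illustrating why higher confidence -> wider range.
```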
Interpreting and Reporting Scores:
Interpret scores with content, norms, and limitations; consider modified administration effects on results;
Do not rely on a single score; integrate with other data; report group-level interpretations when applicable;
Communicate results clearly and promptly; document procedures for inclusions/exclusions and who was included; discuss how results will be used and who will have access to them.
Informing Test Takers:
Provide information about the test, rights, responsibilities, and score handling; include: test content, question formats, directions, strategies; optional tests: consequences of not taking the test and alternatives; rights to copies, retakes, rescoring, or invalidation; responsibilities of test-takers; data retention duration; how results will be released; procedures to resolve concerns; and how to obtain more information or file complaints.
Accommodations, Modifications, and Test Validity
Accommodations: minor changes that do not compromise standardized procedures (e.g., large-print stimuli, aids for responses); norm-referenced scores may still be valid if content is unchanged and administration remains consistent with manual.
Modifications: changes that alter standardized administration (e.g., simplified instructions, extra time, prompts, item skipping); typically invalidates normative scores; findings still valuable but test is no longer standardized.
Chronological Age, Adjusted Age, Basals/Ceilings, and Scoring Nuances
Chronological Age (CA): exact age in years, months, days; required to convert raw data to normed scores.
Adjusted (Due Date) Age: for prematurely born infants and toddlers; use the due date to adjust the age until about age 3; the adjustment becomes unnecessary after age 3.
Adjusted Age example: a 10-month-old born 8 weeks premature is developmentally similar to an 8-month-old.
Calculating CA: record administration date and birth date, subtract birth date from test date; use borrowing across months/days as needed.
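The subtraction-with-borrowing step can be sketched as below; conventions differ across manuals (e.g., whether a borrowed month counts as 30 or 31 days), so the test manual's own procedure governs. The prematurity adjustment at the end mirrors the 8-weeks-premature example above and is illustrative only.

```python
def chronological_age(birth, test):
    """Subtract birth date from test date, returning (years, months, days).
    Borrows 12 months from years and 30 days from months when needed
    (a common convention; confirm against the specific test manual)."""
    by, bm, bd = birth
    ty, tm, td = test
    years, months, days = ty - by, tm - bm, td - bd
    if days < 0:
        days += 30
        months -= 1
    if months < 0:
        months += 12
        years -= 1
    return years, months, days

# Example: born 2019-07-24, tested 2024-03-10 -> 4 years, 7 months, 16 days
print(chronological_age((2019, 7, 24), (2024, 3, 10)))

# Adjusted age for prematurity (used until about age 3): subtract the weeks premature,
# expressed in months, from the chronological age in months.
chron_months, weeks_premature = 10, 8
adjusted_months = chron_months - weeks_premature // 4  # ~8 months, as in the example above
print(adjusted_months)
```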
Raw Scores and Scoring Decisions: raw score counts; some items allow partial credit; refer to manual; consider audio-recording testing for accurate scoring.
Normed Scores and Interpretation: transform raw scores to standardized metrics using normative data; consult manual for exact conversion.
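In practice this conversion is a table lookup keyed to the examinee's age band in the manual's norm tables; the sketch below uses an invented table, age bands, and scores purely for illustration.

```python
# Hypothetical norm table: age band in months -> {raw score: standard score}
NORM_TABLE = {
    (48, 53): {20: 85, 21: 87, 22: 90, 23: 92, 24: 95},
    (54, 59): {20: 82, 21: 84, 22: 87, 23: 89, 24: 92},
}

def raw_to_standard(raw_score: int, age_in_months: int) -> int:
    """Look up the standard score for a raw score within the examinee's age band."""
    for (low, high), conversions in NORM_TABLE.items():
        if low <= age_in_months <= high:
            return conversions[raw_score]
    raise ValueError("Age falls outside the normed range of this hypothetical table")

# A child aged 4;9 (57 months) with a raw score of 22 -> standard score 87 in this mock table
print(raw_to_standard(raw_score=22, age_in_months=57))
```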
Norms, Confidence Intervals, and Interpretation (Expanded)
Norms define a population distribution; standard deviation and mean inform score interpretation.
Key score types: standard score (mean 100, SD 15); percentile rank; scaled score (mean 10, SD 3); z-score (mean 0, SD 1); stanine (mean 5, SD 2).
Confidence intervals provide a range for the true score; used for borderline cases to justify decisions such as therapy eligibility.
Reporting approach: present score ranges and context rather than a single deterministic value; document assumptions and measurement error.
Administration and Interpretation of Standardized Tests (Practical Tips)
Before testing: read the manual; confirm purpose, population, and psychometrics; understand standardization and reliability/validity evidence; check for potential biases and accessibility issues.
Use Form 1–2 to evaluate diagnostic strength of a test; consult test reviews (e.g., Buros Center) for critical appraisals.
Accommodations and Modifications (Operational Guidance)
Accommodations:
Minor adjustments that do not change the test’s standard procedures.
Examples: large-print stimuli, assistive devices that do not alter the nature of responses.
Modifications:
Changes to standardized administration; may invalidate normative data; results may still be informative but not strictly comparable to normative samples.