Notes on Work Analysis (Job Analysis) in Defense Context: Process, Concepts, and Real-World Application

Overview of the discussion

  • Real-world context for applying organizational psychology concepts through a guest interview with Becky, a defense scientist at the Department of National Defence (DND).
  • Focus on job analysis, which at DND is termed “work analysis.”
  • Emphasis on the feasibility, legality, and practical implementation of work analyses to support personnel selection and assessment.
  • Goal: connect theoretical concepts from organizational psychology with concrete, defensible processes used in a government context.

Becky’s background and role

  • Academic background in social psychology and personality; PhD completed with Dr. Oram at the University of Ottawa, about two years ago.
  • Current position: defense scientist in the personnel selection and assessment team at DND.
  • Work environment: large research organization within DND with many PhD holders doing diverse research.
  • Primary focus: conduct work analyses to inform personnel selection and assessment.
  • Terminology note: at DND, the term used is “work analysis” but it is equivalent to job analysis; sometimes they analyze “jobs” and sometimes “roles” (e.g., a military member’s full-time job plus extra roles).
  • The emphasis on job/work analysis is driven by the need for legally defensible selection decisions.

Key concepts introduced

  • Bona fide occupational requirements (BFORs): ensuring selection criteria are legitimately tied to job performance.
  • Legally defensible assessment: without a work analysis, the basis for assessments could be challenged legally; the analysis provides the data backing up decisions.
  • Meiorin case (British Columbia v. BCGSEU, 1999): highlighted the tension between physical fitness standards and equal opportunity in selection, and the importance of building legal defensibility and fairness into criteria (e.g., aerobic/cardiovascular standards for firefighters).
  • Competency frameworks as a practical alternative for organizations that cannot conduct full analyses; used to anchor job requirements and to contextualize task outputs.
  • The process blends a task-oriented (outputs and tasks) and a worker-oriented (competencies, KSAOs) approach—a hybrid method.

Why this work matters in large organizations and government

  • Government and large organizations often have stricter legal and regulatory scrutiny, which makes a thorough, defensible analysis crucial.
  • A defensible analysis helps prevent litigation and demonstrates a clear link between what is assessed and what is required to perform the job.
  • In larger contexts, there is typically a need for solid data to justify selection decisions and to inform job postings, environments, and expectations.

The process: an overview

  • Two main modules in Becky's team's process:
    1) Focus group work analysis (outputs and tasks on day one, competencies on day two)
    2) Incumbent survey to rate competencies for selection relevance
  • The goal is to build a defensible model that ties competencies directly to outputs/tasks and to decide what to assess in recruitment.
  • The process also includes a validation component and, when possible, cross-organizational benchmarking.

Phase 1: Focus groups for work analysis

  • Target participants: subject matter experts (SMEs) who are currently or recently in the job, focusing on entry-level perspectives to capture what a new hire must know/do.
  • Sample size: typically around six to seven SMEs per focus group.
  • Two-day focus group structure:
    • Day 1: Identify outputs (functional categories) of the job: broad functional areas such as "conduct mountain operations" or "conduct air crew operations" for SAR techs.
    • Day 2: Identify the competencies needed to perform those outputs (link individual competencies to outputs).
  • Outputs (functional categories): high-level areas describing what the job accomplishes rather than every micro-task; they serve as the backbone for mapping competencies.
  • Tasks: under each functional category, SMEs generate specific tasks that describe what is done in the job.
  • Functional job analysis concepts: tasks are organized under outputs; structured inventories or questionnaires may be used to speed elicitation (see the data-structure sketch after this list).
  • Tools mentioned: structured job analysis questionnaires and inventories to capture the range of tasks more efficiently.
  • Reference to prior materials: sometimes prior work analyses or existing task lists are used as a foundation to avoid starting from scratch.
  • The second day connects tasks to competencies (the worker-focused side).
  • The overall aim is to create a work analysis competency model that includes both abilities and other attributes, ensuring each competency is tied to at least one output.
  • Legal defensibility emphasis: every competency must map to a corresponding task/output to avoid ad hoc inclusions.
  • Example job discussed: SAR (search and rescue) technicians — high-risk, life-saving work with tasks including jumping from airplanes, climbing mountains, carrying people on their backs; emphasizes the importance of precise matching of competencies to critical outputs.
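
As an illustration only, a minimal sketch of how the focus-group products (outputs, tasks, and linked competencies) might be organized as data; the class names, fields, and task wording are assumptions for this example, not DND's actual tooling:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Output:
    """A high-level functional category identified on day one."""
    name: str
    tasks: list[str] = field(default_factory=list)   # specific tasks under this output

@dataclass
class Competency:
    """A worker-oriented attribute identified on day two."""
    name: str
    domain: str                                      # "ability", "knowledge", "skill", or "other"
    linked_outputs: set[str] = field(default_factory=set)  # names of outputs it supports

# Hypothetical content; the output names echo the SAR tech examples from the discussion
outputs = [
    Output("conduct mountain operations", ["climb technical terrain", "carry a casualty"]),
    Output("conduct air crew operations", ["parachute from an aircraft"]),
]
competencies = [
    Competency("muscular strength", "ability", {"conduct mountain operations"}),
    Competency("adaptability", "other", set()),      # no linked output; Phase 2 would drop it
]
```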

Phase 2: The work analysis competency model and linking outputs to competencies

  • Work analysis competency model components (as defined by organizational psychology experts):
    • Abilities (types):
      • Personality traits
      • Cognitive abilities
      • Physical traits (e.g., muscular strength, flexibility)
    • Knowledge
    • Skills
    • Other characteristics
  • The two-step focus in Phase 2:
    • Step A: Evaluate each proposed competency against the outputs/tasks identified in Phase 1; decide whether it is needed for the job.
    • Step B: Gather knowledge, skills, and other characteristics pertinent to the job and list them; this builds a comprehensive items inventory under each competency domain.
  • The linking requirement: every competency must be linked to at least one output; if a competency (e.g., adaptability) does not map to any output, it is dropped (see the filter sketch after this list).
  • Discussion about determining necessity, practicality, and importance of each competency:
    • Necessity: is the competency required for satisfactory performance at selection time?
    • Practicality: can it be realistically expected that a candidate possesses this competency at the time of selection?
    • Importance: how important is the competency for overall job performance?
  • Human factors and decision-making considerations:
    • The focus group often must reach unanimity; pushback occurs and must be negotiated.
    • Disagreements are common, especially on high-stakes roles (e.g., SAR techs) because participants feel the stakes involve life-or-death outcomes.
    • Role of expertise: researchers with psychology background help avoid misinterpretations (e.g., misdefining terms like self-control, conscientiousness).
  • Bias and limitations (acknowledged):
    • Researcher bias due to personal background and stakes; participants’ own stakes in outcomes influence ratings.
    • The challenge of achieving objectivity when participants have strong opinions or experiences.
    • Recognizing limitations of any single approach and the need for triangulation (qualitative and quantitative methods).
  • The role of the interviewer:
    • Facilitation quality depends on training in organizational psychology: trained facilitators interpret constructs correctly and keep lay participants from mis-defining concepts like perseverance or self-regulation.
  • Hybrid approach rationale:
    • Task-focused outputs provide concrete anchors for competencies.
    • Worker-focused competencies capture the behavioral and attribute requirements for successful performance.
  • Practical challenge: ensuring consistency across analyses and avoiding over- or under-inclusion of competencies.
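
The linkage rule lends itself to a simple filter: any competency with no mapped output is dropped. A minimal, illustrative sketch, continuing the structures from the Phase 1 sketch:

```python
def enforce_linkage(competencies, outputs):
    """Keep only competencies linked to at least one identified output.

    competencies: iterable of objects with .name and .linked_outputs (set of output names)
    outputs: iterable of objects with .name
    """
    output_names = {o.name for o in outputs}
    kept, dropped = [], []
    for comp in competencies:
        (kept if comp.linked_outputs & output_names else dropped).append(comp)
    return kept, dropped

# Continuing the Phase 1 sketch above:
kept, dropped = enforce_linkage(competencies, outputs)
print([c.name for c in dropped])   # -> ['adaptability']
```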

Phase 3: Incumbent survey and decision rules for selection measures

  • After the focus groups, Becky's team invites all current incumbents to participate in a broader survey.
  • Survey aims and questions:
    • Rate each identified competency on:
      • Whether it is necessary at selection time
      • Whether it is practical to expect a candidate to have it at selection time
      • How important it is for performance in the job
    • The goal is to determine which competencies meet predefined thresholds to justify assessing them in selection.
  • Decision criteria (thresholds and rules):
    • A competency is included in selection measures if at least $75\%$ of respondents rate it as both necessary and practical, and its importance rating is sufficiently high (see the decision-rule sketch after this section).
    • Based on these ratings, the team decides which competencies warrant development of a selection measure.
  • Output of Phase 3: a formal report documenting which competencies pass the criteria and should be assessed; the client (e.g., the SAR tech occupation or another client) may then develop selection measures for those competencies.
  • Selection methods possible (depending on resources):
    • Assessment center exercises that simulate job tasks and evaluate competencies in action
    • Situational Judgment Tests (SJTs)
    • Structured or semi-structured interviews
    • Cognitive ability tests and spatial ability tests
    • Other measures depending on the competency and the resources available
  • The process emphasizes that the analysis is not standalone; it guides recruitment and reduces risk, but the ultimate selection system must align with the identified competencies in a defensible way.
  • The connection to job postings and descriptions:
    • Job postings often reflect the outcomes, tasks, and competencies (KSAOs) identified in the work analysis to help candidates understand requirements and environmental demands.
    • If resources prevent a full, in-depth job analysis, organizations can use validated competency frameworks as a basis and adapt them to their context.
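
A minimal sketch of the survey decision rule under stated assumptions: the $75\%$ thresholds for necessity and practicality come from the discussion, while the importance cutoff (a mean of 4 on an assumed 5-point scale) and the response data are invented placeholders:

```python
def retain_for_selection(ratings, necessity_min=0.75, practicality_min=0.75, importance_min=4.0):
    """Apply the Phase 3 survey decision rule to one competency.

    ratings: dict with
      'necessary'  - 0/1 responses: needed at selection time?
      'practical'  - 0/1 responses: can candidates be expected to have it?
      'importance' - 1-5 ratings of importance for job performance
    """
    pct_necessary   = sum(ratings["necessary"]) / len(ratings["necessary"])
    pct_practical   = sum(ratings["practical"]) / len(ratings["practical"])
    mean_importance = sum(ratings["importance"]) / len(ratings["importance"])
    return (pct_necessary >= necessity_min
            and pct_practical >= practicality_min
            and mean_importance >= importance_min)

# Hypothetical incumbent responses for one competency
survey = {"necessary": [1, 1, 1, 0], "practical": [1, 1, 1, 1], "importance": [5, 4, 4, 5]}
print(retain_for_selection(survey))   # -> True (75% necessary, 100% practical, mean importance 4.5)
```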

Validation, data integration, and cross-organizational collaboration

  • Validation challenges due to small occupational populations (e.g., SAR techs may be $30$–$50$ people in a given context).
  • When samples are small, researchers may pool data or collaborate with other organizations to validate selection measures using broader datasets.
  • Validation approaches mentioned:
    • Bayesian methods and other advanced statistics to combine evidence across sources (see the pooling sketch after this list).
    • Mixed-methods approaches that integrate qualitative feedback with quantitative ratings.
  • The two-pronged validation emphasis:
    • Qualitative: confirm that competencies and tasks align with real-world job requirements and that agreed-upon mappings are credible.
    • Quantitative: confirm reliability and validity of the selection measures and the linkage to job performance outcomes.
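
The discussion mentions Bayesian methods without spelling them out, so as a stand-in this sketch pools validity correlations across organizations with a classical fixed-effect meta-analysis via Fisher's z; the correlations and sample sizes are invented:

```python
import math

def pool_validities(studies):
    """Fixed-effect pooling of validity correlations via Fisher's z transform.

    studies: list of (r, n) pairs - a correlation and its sample size.
    Each z is weighted by n - 3, the inverse of its sampling variance.
    """
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher's z transform of the correlation
        w = n - 3           # inverse-variance weight, since var(z) = 1 / (n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)   # back-transform the pooled z to a correlation

# Hypothetical validity estimates from three collaborating organizations
print(round(pool_validities([(0.30, 40), (0.25, 35), (0.40, 50)]), 3))   # -> 0.328
```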

Practical and ethical considerations

  • Legal defensibility remains central in government contexts; even widely accepted or intuitive competencies must be defensibly linked to outputs.
  • Pushback and disagreements are common, especially where stakeholders believe a competency should be included because it would help colleagues or safety, even if not strictly tied to outputs.
  • The interviewers and analysts must avoid overfitting the model to a single group’s preferences; cross-check with objective job outputs and standardized frameworks.
  • The interviewer’s expertise matters: researchers trained in organizational psychology are better equipped to define constructs precisely and communicate them to non-experts.
  • Potential trade-offs: cost and time constraints may limit in-depth analyses; competency frameworks can provide a practical alternative when resources are limited.
  • Real-world impact argument: accurate job descriptions and justified selection criteria improve job fit, reduce turnover, increase engagement, and enhance safety—especially for high-stakes roles like SAR technicians.

Connections to theory, practical relevance, and curriculum alignment

  • Job analysis (work analysis) is foundational to recruitment, selection, and broader organizational outcomes such as person-job fit, engagement, and productivity.
  • The process links theory (abilities, knowledge, skills, other characteristics) with practice (assessments, selection measures, and job postings).
  • For students: understanding how a theoretically informed, defensible analysis translates into concrete HR tools and practices in high-stakes contexts (e.g., military and defense).
  • The more accurate the job description and competency mapping, the better the alignment with outcomes like job satisfaction and reduced counterproductive work behaviors.
  • The conversation also highlights the value of cross-disciplinary expertise (psychology, statistics, program evaluation) in conducting rigorous workforce analytics.

Reflections on the most engaging aspects

  • The richness of real-world jobs: hearing about SAR techs, mountain operations, air crew operations, and the wide variety of roles within the military.
  • The practical impact: seeing theories from social psychology applied to life-saving, high-risk work, and the tension between theory and resource constraints.
  • The blend of qualitative and quantitative work: interviews, focus groups, surveys, and statistical validation, all in one process.
  • The human element: the sense that reliable job analysis directly affects who gets to do important work and how well they can perform under pressure, including the pragmatic concern of protecting lives.

Final takeaways for students studying organizational psychology

  • Job/work analysis is about producing the most valid, reliable description of a position to hire the right people and support effective work outcomes.
  • A defensible process requires linking every competency to concrete outputs/tasks and following data-driven decision rules for inclusion in assessments.
  • Hybrid approaches that combine task-oriented outputs with worker-oriented competencies tend to be robust and flexible across contexts.
  • Validation and cross-organizational collaboration can help overcome small-sample limitations and strengthen the evidence base for selection decisions.
  • The ethical and legal dimensions are not tangential; they are central to the design and implementation of any selection system, especially in government or high-stakes environments.

Quick reference notes (key numbers and terms)

  • Focus group duration: two days; SMEs per group: typically around $6$–$7$ participants.
  • Time horizon for updating analyses: every $5$ years per occupation, unless major changes occur earlier.
  • Survey decision thresholds: a competency is retained if at least $75\%$ of respondents rate it as necessary and practical; importance ratings are also considered.
  • Data integration: competencies must map to outputs; otherwise, they are discarded.
  • The gamut of possible selection measures includes: assessment centers, SJTs, structured interviews, cognitive and spatial ability tests, and other measures depending on resources.
  • Validation approach: mixed methods with potential Bayesian data integration to pool evidence across organizations when samples are small.

Closing takeaway

  • Job/work analysis is not just a theoretical exercise; it is a rigorous, transferable framework that ensures recruitment and selection are fair, defensible, and aligned with real job demands—an essential bridge between organizational psychology theory and impactful, real-world practice.