Lecture Notes: Lexical Processing, Semantics, and Semantic Memory (Overview)

Announcements & Logistics

  • Last week: some classrooms hit capacity limits (25 students) and some tutorials were redirected, causing some students to go to the wrong room.
  • Action item: check the eStudent portal for updated tutorial room assignments before this week's tutorials.
  • Reason for room changes: increased enrollment; opening a new room last minute was not feasible, so extra students were allocated to other tutorials, which caused capacity issues last week.
  • Apology: acknowledgment of the inconvenience for students who went to the wrong room.
  • If you haven’t registered for tutorials: email the course coordinator, Alice Wu, urgently to be allocated to a tutorial.
  • Tutorials are designed to be practical, interactive, and hands-on. If you can attend in person, please do; if you cannot attend, tutors will discuss alternative options with you.
  • Tutorial slides are essentially another version of the weekly activities; classroom interactions in tutorials unpack and practice the concepts beyond the slides.
  • Rough schedule: about 90–95% of weeks follow the posted plan, but some changes happen due to pacing; schedules per week include lecture slides and tutorial activities.
  • Week-specific note: Week 3 tutorial task focuses on thinking about the first assignment/first assessment; tutorial activities and weekly slides are posted under each week for access.

First Assessment & Article Critique

  • The first assessment is a critique of an article; an example from last year is provided to illustrate the expected form: concise, specific writing.
  • The current article for critique is posted under the critique section; the tutor will explain more during this week’s tutorial.
  • Deadline: post your written critique to the designated link.
  • Any questions about tutorials or the first assessment? Clarifications will be provided during tutorials.

This Week’s Focus & Structure

  • Core content: lexical processing and word recognition; small revision of last week’s material.
  • Levels of representation of a word: what are they? How do we decide if a string is a word? Examples:
    • An English letter string may be a word in English but not in Spanish.
    • Naming: how we label a picture with a word; measuring naming involves reaction time in milliseconds and accuracy.
  • Milliseconds concept:
    • One second = 1000 milliseconds, i.e. 1 s = 1000 ms.
  • Naming efficiency and lexical access: retrieval of a word from the mental lexicon (lexical access) and the post-lexical processing that verifies relationships between target and prime.
  • Eye-movement measures (brief review): two main variables in eye-tracking studies (fixations and saccades); additional measures like pupil dilation and perceptual span are also relevant.
  • Practical note: some students requested a review of the eye-tracking portion from last week; this is an opportunity to cover the core ideas again and fill in gaps.

Eye Movement Research in Reading

  • Why measure eye movements? The eyes are a window to cognitive processing during reading.
  • Core measures in reading research:
    • Saccades: rapid, ballistic eye movements between fixations; recorded at millisecond resolution.
    • Fixations: duration spent looking at a specific location (word/region) to process information.
    • Eye-movement data are highly time-resolved (ms level).
  • Additional measures:
    • Pupil size: changes can reflect cognitive load.
    • Perceptual span: the range of visual information that can be processed during a fixation.
    • Fovea vs. parafovea: the central, high-acuity region of vision vs. the surrounding lower-acuity region within the perceptual span.
  • The perceptual span defines how much of the upcoming text you can usually take in per fixation; this helps explain reading speed and comprehension.
  • Frequency effect: more frequent words are processed faster.
  • Priming in eye-tracking: exposure to a stimulus can facilitate processing of a subsequent stimulus, sometimes without conscious awareness (implicit priming). Repetition priming can be highly automatic.
  • Experimental relevance: eye-tracking provides richer data than simple reaction times, revealing when and where readers process information in text.
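The fixation and saccade measures above are typically extracted from a raw stream of timestamped gaze samples. A minimal sketch of a velocity-threshold classifier (an I-VT-style algorithm) is shown below; the threshold value and the synthetic gaze stream are illustrative, not from the lecture.

```python
# Minimal velocity-threshold (I-VT style) classifier: gaze samples whose
# point-to-point velocity is below a threshold are grouped into fixations;
# faster movements are treated as saccades. Threshold and data are illustrative.

def detect_fixations(samples, velocity_threshold=50.0):
    """samples: list of (t_ms, x, y); returns list of (start_ms, end_ms) fixations."""
    fixations = []
    start = None
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = (t1 - t0) / 1000.0                 # sample interval in seconds
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        velocity = dist / dt                    # pixels per second
        if velocity < velocity_threshold:       # slow movement: part of a fixation
            if start is None:
                start = t0
        else:                                   # fast movement: a saccade ends any open fixation
            if start is not None:
                fixations.append((start, t0))
                start = None
    if start is not None:
        fixations.append((start, samples[-1][0]))
    return fixations

# Synthetic gaze stream at 1 kHz: 200 ms fixation, a saccade, 200 ms fixation.
stream = [(t, 100.0, 100.0) for t in range(0, 200)]                          # word 1
stream += [(t, 100.0 + (t - 200) * 20.0, 100.0) for t in range(200, 230)]    # saccade
stream += [(t, 700.0, 100.0) for t in range(230, 430)]                       # word 2
print(detect_fixations(stream))  # → [(0, 200), (230, 429)]
```

Fixation durations (end minus start) are then the per-word processing measure discussed above; real systems add noise filtering and minimum-duration criteria.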

Theoretical Models of Lexical Processing

  • Two broad model families:
    • Search (or sequential) models: information is processed one step at a time (serial processing).
    • Interactive Activation (connectionist) models: parallel processing across multiple levels with bidirectional activation.
  • Key architectural idea (interactive activation): a three-level structure consisting of features (bottom), letters (intermediate), and words (top).
    • Processing is parallel across features, letters, and word forms, with activation spreading among levels.
  • Core concept: spreading activation and feedback loops between levels allow rapid mapping from low-level features to high-level word representations, and vice versa.
  • Empirical findings generally favor the interactive activation approach as the more accurate account of reading/lexical processing.
  • Notes on terminology in the lecture:
    • One model (search) processes letters in a left-to-right, time-locked manner.
    • The other model (interactive activation) posits parallel processing across letters and features, with top-down and bottom-up influences.
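The three-level interactive-activation architecture can be sketched in a few lines. The lexicon, weights, and update rule below are illustrative simplifications, not the parameters of any published model; the point is only the bidirectional flow (letters excite consistent words bottom-up, and words feed activation back to their letters top-down).

```python
# Toy interactive-activation sketch: letter units activate word units bottom-up,
# and word units feed activation back to their letters top-down. Lexicon and
# weights are illustrative only.

LEXICON = {"cat": ["c", "a", "t"], "car": ["c", "a", "r"], "dog": ["d", "o", "g"]}

def run(input_letters, steps=5, up=0.2, down=0.1):
    letters = {l: (1.0 if l in input_letters else 0.0)
               for w in LEXICON for l in LEXICON[w]}
    words = {w: 0.0 for w in LEXICON}
    for _ in range(steps):
        # Bottom-up: each word gains activation from its component letters.
        new_words = {w: words[w] + up * sum(letters[l] for l in ls)
                     for w, ls in LEXICON.items()}
        # Top-down: each letter gains activation from words that contain it.
        new_letters = {l: letters[l] + down * sum(new_words[w]
                                                  for w, ls in LEXICON.items() if l in ls)
                       for l in letters}
        words, letters = new_words, new_letters
    return max(words, key=words.get)  # most active word unit "wins"

print(run({"c", "a", "t"}))  # → cat
```

Even this toy version shows why partial or degraded input still settles on the most consistent word: all candidates are active in parallel, and feedback amplifies the best match.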

Semantics, Meaning, & Categorization

  • Semantics is hard to articulate because meaning involves more than orthography/phonology; it concerns meanings at the level of concepts, categories, and knowledge.
  • Denotation vs. connotation:
    • Denotation: the core, essential meaning or properties (e.g., a dog is an animal with certain features).
    • Connotation: culturally dependent associations, including attitudes, stereotypes, or broader contexts (e.g., dogs as friendly, dangerous, loyal).
  • Semantic networks (spreading activation): words are represented as nodes; activation spreads to related nodes. Stronger connections yield stronger priming effects (e.g., dog prime → related words evoke responses faster).
  • Classic critique of spreading activation: while powerful, it can be difficult to falsify because it can accommodate many patterns of data.
  • Feature-based theories of meaning (decomposition-based): meanings are built from a set of semantic features.
    • Perceptual features: what we can perceive with our senses (color, shape, sound).
    • Functional features: usefulness or typical functions of objects (can be eaten, used for sitting, etc.).
    • Defining features vs. characteristic (defining vs. typical) features: defining features are necessary for category membership; characteristic features are typical but not necessary.
    • Classic issues: some categories have fuzzy boundaries; some words are difficult to define with a fixed feature set (e.g., game, emotions).
    • Context-dependence: meaning can depend on context; some features are context-independent while others are context-dependent (e.g., piano can be heavy in the context of furniture, or light in a different context; a piano has keys, strings, etc.).
  • Prototype (family resemblance) theory: categories do not have hard boundaries; there are prototype exemplars that are more representative of a category, leading to graded membership.
    • Examples used in class to illustrate prototypes: which fruit is most representative (blueberry, kiwi, orange, etc.) and which odd number is most representative (3 vs. 57, 9 vs. 11, etc.). These judgments often reveal graded structure within categories.
    • Benefits: explains graded membership and category similarity; explains how people differ in category judgments due to culture or personal experience.
    • Challenges: identifying the best prototype for abstract terms (love, freedom) or for some categories where exemplars vary widely; cultural and contextual influences can shift prototype status (e.g., numbers in different cultures; the “4”/death link in Chinese culture).
  • Context and meaning: many word meanings are context-sensitive; a single word can have independent, stable senses or context-dependent senses depending on sentence-level cues and background knowledge.
  • Theory-theory (developmental view): knowledge about objects and categories is built as children learn; objects are understood through a coherent theory rather than a fixed feature list.
  • Implications: context, culture, development, and experiences shape semantic memory and category representations; no single fixed rule captures all lexical meaning.
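The spreading-activation idea above can be made concrete with a toy network: nodes are words, weighted edges are associative links, and activating a prime pre-activates its neighbours, decaying with distance. The network, weights, and decay parameter below are illustrative.

```python
# Toy spreading-activation network. Activating a prime spreads activation to
# linked nodes, decaying with each step, so strongly connected words end up
# pre-activated (a priming effect). All weights are illustrative.

NETWORK = {
    "dog":   {"cat": 0.8, "bone": 0.6, "loyal": 0.5},
    "cat":   {"dog": 0.8, "milk": 0.6},
    "bone":  {"dog": 0.6},
    "loyal": {"dog": 0.5},
    "milk":  {"cat": 0.6},
}

def spread(prime, decay=0.5, depth=2):
    activation = {prime: 1.0}
    frontier = {prime: 1.0}
    for _ in range(depth):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbour, weight in NETWORK[node].items():
                boost = act * weight * decay   # weaker with distance and link strength
                if boost > activation.get(neighbour, 0.0):
                    activation[neighbour] = boost
                    next_frontier[neighbour] = boost
        frontier = next_frontier
    return activation

acts = spread("dog")
print(acts)  # direct neighbours ("cat") end up more active than two-step ones ("milk")
```

The graded activations map onto graded priming: the more pre-activated a target node is when it appears, the faster the predicted response, which is the pattern lexical-decision priming studies report.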

Prototypes, Categorization Experiments, & Categorical Perception

  • Prototypicality and graded structure are supported by experiments where participants rate exemplars by representativeness (e.g., which fruit is most typical? which odd number is most typical?).
  • Categorical perception: stimulus pairs that straddle a category boundary are discriminated more easily than equally spaced pairs within a category (e.g., phoneme categories in speech perception; manipulating voice-onset time (VOT) shifts the perceived category).
  • Ambiguity and cross-category membership: some items (e.g., tomato, olive, avocado) straddle fruit/vegetable boundaries, illustrating fuzzy categories.
  • Summary points:
    • Classical feature theories struggle with fuzzy categories and abstract terms.
    • Prototype theory accounts for graded membership and natural category structure but faces challenges with abstract categories and cross-cultural variation.
    • Both approaches have explanatory power for different domains (e.g., concepts like fruit vs. abstract concepts).
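Graded typicality can be illustrated with a minimal prototype model: the prototype is the average of known exemplars' feature vectors, and typicality is similarity to that average. The features, values, and similarity measure below are illustrative choices, not a specific published model.

```python
# Toy prototype model: the category prototype is the average of exemplar
# feature vectors; typicality of an item is its similarity to the prototype,
# giving graded membership. Features and values are illustrative.

def prototype(exemplars):
    """Average the feature dicts of known category members."""
    keys = {k for e in exemplars for k in e}
    return {k: sum(e.get(k, 0.0) for e in exemplars) / len(exemplars) for k in keys}

def typicality(item, proto):
    """Similarity = 1 - mean absolute feature difference (1.0 = identical)."""
    keys = set(item) | set(proto)
    diff = sum(abs(item.get(k, 0.0) - proto.get(k, 0.0)) for k in keys)
    return 1.0 - diff / len(keys)

# Fruit exemplars described by a few perceptual/functional features (0-1 scale).
fruits = [
    {"sweet": 0.9, "round": 0.8, "grows_on_tree": 1.0},  # apple-like
    {"sweet": 0.8, "round": 0.9, "grows_on_tree": 1.0},  # orange-like
]
proto = prototype(fruits)
orange = {"sweet": 0.8, "round": 0.9, "grows_on_tree": 1.0}
tomato = {"sweet": 0.3, "round": 0.9, "grows_on_tree": 0.0}
print(typicality(orange, proto), typicality(tomato, proto))
```

The continuous typicality scores mirror the graded representativeness ratings from the classroom examples, and boundary cases like tomato fall between clear members and clear non-members rather than being cleanly in or out.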

Developmental & Theory-Theory Perspectives on Meaning

  • Children’s concepts often start with perceptual features and gradually integrate deeper category knowledge as they develop.
  • Classic experiments show that children treat some objects as having essential identities that persist despite superficial changes (e.g., dyeing a tiger to remove stripes still being a tiger for older children; younger children may treat such a change as a different animal).
  • Distinguishing living vs. non-living: changes to function or appearance have different implications for identity depending on whether the object is living or not.
  • The coffee pot vs. wine glass example illustrates a shift in category membership when function changes: adults tend to tie artifact identity to function and reclassify accordingly, while children may rely more heavily on perceptual features.
  • These developmental differences highlight how semantic meaning is shaped by experience, life-long learning, and social context.

Rogers et al.: A Connectionist Semantic Memory Model

  • Core idea: semantics are represented by high-level semantic units connected to lower-level feature pools (verbal and visual layers).
  • Structure of the model:
    • Verbal layer: units represent names, visual properties, functional properties, and encyclopedic properties (factual/world-knowledge aspects like location or typical storage of an object).
    • Visual layer: units represent visual/perceptual features of stimuli (colors, shapes, textures, etc.).
    • Semantic (high-level) units: integrate information across verbal and visual features and coordinate with encyclopedic knowledge.
  • Activation flow:
    • Input can be a word (verbal cue) or a visual feature; activation travels to semantic units and then to the other modality (visual or verbal) depending on experience and learning.
    • Activation strengths depend on prior learning and exposure (e.g., readers who learn primarily through text will show stronger verbal activation; visual learners may show stronger visual activation).
    • Activation can flow in both directions through semantic units, but the strength and patterns depend on the stimulus and prior experiences.
  • Key advantages:
    • You do not need to predefine fixed semantic features; meaning emerges from distributed activation across a network of detectors.
    • The model can simulate the interaction between perception and semantics, and the bidirectional mapping between words and pictures.
  • Practical implication: this framework explains how different modalities (visual vs. verbal) contribute to meaning and how context and learning shape semantic activation patterns.
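The hub-like architecture described above can be sketched as follows. The items, feature pools, and hand-set weights are illustrative (the actual Rogers et al. model learns its weights); the sketch only shows the key property that input in one modality reaches the other through shared semantic units.

```python
# Toy "hub" sketch in the spirit of the Rogers et al. model: verbal and visual
# feature pools connect to a shared semantic layer, so input in one modality
# can activate features in the other. All items and weights are illustrative.

VERBAL = ["name:dog", "fact:kept_as_pet"]
VISUAL = ["has_fur", "four_legs"]

# One semantic unit per concept, with weights to each feature pool.
SEMANTIC = {
    "DOG":   {"verbal": {"name:dog": 1.0, "fact:kept_as_pet": 0.8},
              "visual": {"has_fur": 0.9, "four_legs": 0.9}},
    "TABLE": {"verbal": {},
              "visual": {"four_legs": 0.7}},
}

def activate(input_units, input_pool, output_pool):
    """Spread activation: input pool -> semantic units -> the other pool."""
    sem = {c: sum(w[input_pool].get(u, 0.0) for u in input_units)
           for c, w in SEMANTIC.items()}
    out_units = VERBAL if output_pool == "verbal" else VISUAL
    return {u: sum(a * SEMANTIC[c][output_pool].get(u, 0.0)
                   for c, a in sem.items())
            for u in out_units}

# Hearing the word "dog" (verbal input) activates visual features via the hub.
print(activate({"name:dog"}, "verbal", "visual"))
```

Because no fixed feature list is stored per word, "meaning" here is just the pattern of cross-modal activation the semantic units produce, which is the model's central claim.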

Synthesis & Takeaways

  • There are multiple approaches to understanding lexical meaning:
    • Feature-based theories (classical and prototype variants): decompose meanings into perceptual, functional, and other features; acknowledge definitional vs. characteristic features and the issue of fuzzy boundaries. Context can shift which features matter.
    • Knowledge-based / theory-based perspectives: meanings arise from general knowledge and theories about objects and categories; emphasizes conceptual coherence over fixed feature sets.
    • Connectionist/activation-based models: meaning is distributed across networks; activation flows between perceptual, verbal, and encyclopedic components; no single feature list is necessary.
  • Central themes across the lecture:
    • Meaning is context-dependent and dynamic, not fixed; both denotation and connotation influence comprehension.
    • Categorization and meaning are shaped by experience, culture, and development; prototypes illustrate graded membership and typicality.
    • Semantic memory can be modeled as a network with interactions among perceptual, verbal, and encyclopedic knowledge.
    • Eye-tracking, priming, and frequency effects provide empirical windows into lexical processing and semantic access.

Next Week Preview

  • The course will move from lexical processing to sentence-level processing, building on the foundations of word recognition and meaning discussed today.