Feature Integration Theory Notes
Feature Integration Theory and Guided Search
Observations about Visual Search
- Observation #1:
- Visual search is easy if the target differs from non-targets in a simple feature.
- Visual search is difficult if the target differs from non-targets in a conjunction of features.
- Observation #2:
- At attended locations, features are correctly bound together.
- At unattended locations, features are often incorrectly bound together (illusory conjunctions).
- Observation #3:
- Textures made of colored shapes segregate into regions much more easily when the regions differ in a simple feature than when they differ in a conjunction of features.
Feature Integration Theory (FIT)
- A cognitive model explaining that we perceive coherent objects by binding simple features together at attended locations.
- Pre-attentive Stage:
- Simple features (e.g., color, orientation, edges, movement) are processed independently and in parallel across the visual field.
- Feature maps are created for each dimension, coding which features are at which locations.
- Examples include pre-attentive representations of:
- Blue
- Red
- Vertical orientations
- Horizontal orientations
- Edges
- Master Map of Locations:
- Codes where visual information is located but not what it is.
- Attentional Stage:
- Attention is directed, like a spotlight, randomly around the master map of locations.
- When a location is attended, features from different dimensions are bound together into an object representation (called an object file).
- Post-attentive Stage:
- Object representations (object files) are created one-by-one as attention is directed to locations in the master map.
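The sketch below (a toy Python illustration; the display, class names, and function names are all invented for this example, not part of the theory's formal statement) walks through the three components above: independent feature maps built pre-attentively, a master map of locations, and binding into an object file at an attended location.

```python
from dataclasses import dataclass

@dataclass
class Item:
    location: tuple   # (row, col)
    color: str
    shape: str

def preattentive_stage(display):
    """Pre-attentive stage: build independent feature maps (written as a loop
    here, standing in for parallel processing) plus a master map of locations."""
    color_map, shape_map = {}, {}
    for item in display:
        color_map.setdefault(item.color, set()).add(item.location)
        shape_map.setdefault(item.shape, set()).add(item.location)
    master_map = {item.location for item in display}   # where, but not what
    return color_map, shape_map, master_map

def attend(location, display):
    """Attentional stage: bind the features present at one attended location
    into an object representation (an 'object file')."""
    item = next(i for i in display if i.location == location)
    return {"location": location, "color": item.color, "shape": item.shape}

display = [Item((0, 0), "red", "circle"), Item((0, 1), "green", "diamond"),
           Item((1, 0), "green", "circle"), Item((1, 1), "red", "diamond")]

color_map, shape_map, master_map = preattentive_stage(display)
print(color_map["red"])         # all red locations, available without attention
print(attend((1, 1), display))  # features bound only at the attended location
```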
FIT Explanation of Visual Search
- Feature Search:
- Without attention, simple features are extracted simultaneously across the entire visual field.
- By monitoring the relevant feature map for activity corresponding to the target's known feature, the target can be detected immediately, or rejected as absent immediately, regardless of the number of items.
- Conjunction Search:
- Attention is required to bind features together.
- Each location must be attended in turn (in random order), binding its features across dimensions, until the target's conjunction is found, so search time grows with the number of items (see the simulation sketch below).
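A minimal simulation sketch of these two predictions, counting "cost" as the number of attended locations; the display sizes, colors, and shapes are illustrative choices, not values from the literature.

```python
import random

def feature_display(n):
    """Green circles plus one blue circle: the target differs in a single feature."""
    items = [("green", "circle")] * (n - 1) + [("blue", "circle")]
    random.shuffle(items)
    return items

def conjunction_display(n):
    """Green circles and red diamonds plus one red circle: the target shares
    a feature with every distractor type."""
    distractors = [("green", "circle") if i % 2 == 0 else ("red", "diamond")
                   for i in range(n - 1)]
    items = distractors + [("red", "circle")]
    random.shuffle(items)
    return items

def feature_search(display, target_color="blue"):
    """Check the pre-attentively built color map once: cost does not grow with n."""
    color_map = {color for color, _shape in display}
    return 1, (target_color in color_map)

def conjunction_search(display, target=("red", "circle")):
    """Attend locations in random order, binding features at each one,
    until the target's conjunction is found: cost grows with n."""
    order = list(range(len(display)))
    random.shuffle(order)
    for cost, idx in enumerate(order, start=1):
        if display[idx] == target:
            return cost, True
    return len(display), False   # exhaustive search when the target is absent

for n in (4, 16, 64):
    print(n, feature_search(feature_display(n))[0],
          conjunction_search(conjunction_display(n))[0])
# Feature-search cost stays at 1; conjunction-search cost scales with set size.
```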
Feature Binding and Attention
- Binding features together requires attention.
- No attention = features remain unbound to an object.
- If asked about features at unattended locations, one must guess how they were combined, which can produce illusory conjunctions (see the sketch below).
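A toy sketch of this claim (the display and function are invented for illustration): at an attended location the report reads out the bound object file, while at an unattended location the observer knows which features were registered but must guess how they were paired.

```python
import random

display = {  # location -> (color, shape)
    (0, 0): ("red", "circle"),
    (0, 1): ("green", "diamond"),
    (1, 0): ("blue", "square"),
}

def report(location, attended):
    """Attended: features are correctly bound. Unattended: features were
    registered pre-attentively, but their pairing must be guessed."""
    if attended:
        return display[location]
    colors = [c for c, _ in display.values()]
    shapes = [s for _, s in display.values()]
    return (random.choice(colors), random.choice(shapes))

print(report((0, 1), attended=True))    # always ('green', 'diamond')
print(report((0, 1), attended=False))   # e.g. ('red', 'square'): an illusory conjunction
```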
Texture Perception
- Binding features into an object representation requires attention.
- Without directed attention toward individual items, a texture display just looks like a bunch of simple features (e.g., circles, diamonds, reds, greens).
- Only after directing attention to individual items, binding their features, can one notice conjunction-defined differences between parts of the scene (see the sketch below).
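A small sketch of why this follows from FIT (the toy textures are invented for illustration): a region boundary defined by a single feature shows up in one pre-attentive feature map, while a conjunction-defined boundary does not appear in any single map.

```python
# Feature-defined texture: left half red circles, right half green circles.
feature_texture = [[("red", "circle")] * 4 + [("green", "circle")] * 4
                   for _ in range(4)]

# Conjunction-defined texture: left half mixes red circles / green diamonds,
# right half mixes green circles / red diamonds; every single feature
# (red, green, circle, diamond) appears on both sides.
left = [("red", "circle"), ("green", "diamond")]
right = [("green", "circle"), ("red", "diamond")]
conjunction_texture = [
    [left[(r + c) % 2] for c in range(4)] + [right[(r + c) % 2] for c in range(4)]
    for r in range(4)
]

def color_map(texture, color):
    """Pre-attentive map: 1 where the color is present, 0 elsewhere."""
    return [[int(col == color) for col, _shape in row] for row in texture]

# The 'red' map of the feature texture shows a sharp left/right boundary...
print(color_map(feature_texture, "red"))
# ...but the 'red' map of the conjunction texture is a checkerboard on both
# sides: no single feature map marks the boundary, so noticing it requires
# attending to individual items and binding their features.
print(color_map(conjunction_texture, "red"))
```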
Clicker Question: Role of Attention in FIT
- The correct answer is:
- C. Attention binds features together into coherent object representations by selecting a location on the master map.
Visual Search Difficulty Prediction
- Three visual scenes: A, B, and C.
- B is harder than C, which is harder than A.
- Strict FIT cannot predict this graded ordering: it allows only two difficulty classes, easy (parallel) feature search and hard (serial) conjunction search, so an intermediate level of difficulty like C's is unexplained.
- This limitation motivates Guided Search, in which pre-attentive feature information guides attention toward likely target locations rather than attention moving at random.