
History of Visual Perception

History of understanding visual perception

Ancient Greek theory could be split into two schools of thought: Intromission and Extramission.

Extramission: A beam of fire-like rays is emitted from the eyes and gathers information about the environment, a view endorsed by Plato and Euclid. Plato accepted that vision gives a person information from the world. However, Plato also saw the senses as unreliable, and therefore any knowledge gathered by them must be unreliable too. He thought that introspection and internal rationalisation would allow you to gain a more reliable understanding of the “Forms” that underlie reality.

Intromission: The belief that some portion of the environment enters the eye, giving an image of the world. Epicurus, an atomist scholar who specialised in thinking about substance at the atomic level, suggested that a film, one atom thick, peels off all objects in the environment and enters the eye, giving an image of the world.

One of the biggest critics of Extramission was Aristotle, who rejected the theory for its logical shortfalls: extramission cannot explain why we cannot see in the dark, where our vision is “quenched”. He then clapped back with the idea that light travels from the sun and reflects off objects into the eye; however, Aristotle suggested that light had to travel through a medium of air to be detected by the eye. So close, Aristotle!!!

Europe then fell into the Dark Ages with regards to scientific discovery. However, a certain legend in Cairo was developing some ideas of his own.

The scholar Alhazen (Ibn al-Haytham) was under house arrest in Cairo when he came up with these ideas:

1.      Discovered stereopsis and refraction.

2.      Developed the anatomical understanding of the eye.

3.      Deduced that vision starts in the brain, not the eye.

4.      Came up with some sick illusions too.

Whilst Alhazen was a bit of a legend, he was limited by the knowledge of his time: he could not get his head around how the eye formed an image.

This revelation came from 17th-century astronomy. Johannes Kepler deduced a theory of retinal image formation by likening the eye to the optics of a long-range telescope: vision begins with an image formed on the concave surface of the retina. Whilst the physics of image formation was now well worked out, Kepler struggled to understand how the neural processing of this retinal image corresponded to perception.

18th CENTURY BABEYY, and the million-dollar question at the time was how we discern a 3-D world from a 2-D retinal image. The British empiricists, like Locke and Berkeley, believed that depth cues were entirely learned by interacting with your environment. Kant got a bit spicy and suggested that, actually, there must be some innate basis for perception. He gave substance and causality as examples of innate perceptive knowledge. However, as much as they did a lot of yapping, 18th-century philosophers were doing precious little empirical work…experiments!!!

19th century time! Finally, some good, clean, systematic empiricism. Psychophysics was kicked off by Gustav Fechner, who was stimulated to develop his ideas while thinking about the mind-body problem. Fechner saw the physical brain and the concepts created by the mind as separate but heavily interrelated entities, like two sides of a coin. This led Fechner to publish The Elements of Psychophysics, a book that drove the gradual formation of psychology as its own discipline, away from philosophy and metaphysics.

Fechner liked the methods Helmholtz used to study physiology. However, Fechner wanted to look at the mind's processes of perception. This led Fechner to develop three methods for measuring responses to stimuli:

1.      Method of limits: Presenting a stimulus in ascending/descending order of size or intensity until the participant can/cannot detect it. The limits discovered can be inaccurate, as the participant can anticipate when they will detect the stimulus.

2.      Method of constant stimuli: Presents the stimulus at randomised intensities, in random order, with equal numbers of trials per intensity, removing the anticipation and habituation effects (see the sketch after this list).

3.      Method of adjustment: Presents the stimuli in a similar fashion to the method of limits; however, the participant controls the stimulus intensity.
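
To make the logic of these methods concrete, here is a minimal Python sketch of the method of constant stimuli (a toy simulation with made-up numbers, not a real experiment): shuffle the presentation order, tally detection rates per intensity, and read off the level detected about 50% of the time as the threshold.

```python
import random
from collections import Counter

def simulate_constant_stimuli(true_threshold=5.0, noise=1.0,
                              levels=(2, 3, 4, 5, 6, 7, 8),
                              trials_per_level=100):
    """Toy observer: a stimulus is 'detected' when its intensity plus
    Gaussian sensory noise exceeds a fixed internal threshold."""
    trials = [lvl for lvl in levels for _ in range(trials_per_level)]
    random.shuffle(trials)  # random order defeats anticipation/habituation
    hits = Counter()
    for lvl in trials:
        if lvl + random.gauss(0, noise) > true_threshold:
            hits[lvl] += 1
    return {lvl: hits[lvl] / trials_per_level for lvl in levels}

for lvl, rate in simulate_constant_stimuli().items():
    print(f"intensity {lvl}: detected on {rate:.0%} of trials")
# The threshold is conventionally the intensity detected on ~50% of
# trials (here, around 5).
```

Because the order is shuffled, a real participant could not anticipate the next intensity, which is exactly the point of the method.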

These methods allowed Fechner to come up with the idea of the Absolute Threshold: the minimum intensity needed for a stimulus to be detectable. Another concept developed by Fechner was the Difference Threshold: the minimum change in a physical stimulus required for the change to be perceived. Fechner was highly influenced by the work of Ernst Weber.

Ernst Weber created the rule of the Just Noticeable Difference (JND):

Along any sensory dimension, the JND is a constant proportion of the baseline magnitude (often around 3%), denoting that what is noticeable depends on the background it is compared against.

In short, you are more likely to notice a 3 cm extension to a 10 cm line than the same extension to a 100 cm one. Weber then formalised this in a rule stating that the difference required to perceive a change in a stimulus is proportional to the original magnitude of the stimulus (e.g. light intensity) multiplied by a constant:

ΔI = k * I, where ΔI is the just noticeable difference, I is the original stimulus intensity, and k is a constant (the Weber fraction).
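
As a quick numeric check of the line example above, here is a minimal Python sketch (the ~3% Weber fraction is the figure quoted in these notes, not a universal constant):

```python
# Weber's law: the just noticeable difference (JND) is a constant
# fraction k of the baseline stimulus magnitude.
def jnd(baseline, k=0.03):  # k ~ 3%, the Weber fraction quoted above
    return k * baseline

for line_length_cm in (10, 100):
    print(f"{line_length_cm} cm line: JND ~ {jnd(line_length_cm):.1f} cm")

# 10 cm line:  JND ~ 0.3 cm, so a 3 cm extension is obvious
# 100 cm line: JND ~ 3.0 cm, so the same 3 cm extension is right at threshold
```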

Now Fechner absolutely loved this. But where Weber related the just noticeable difference linearly to the physical stimulus, Fechner saw the relationship between perceived intensity and physical intensity as logarithmic. This became known as Fechner's law:

Perceived intensity = Constant * log(Stimulus intensity)

Fechner was suggesting that equal steps in perceived intensity require exponentially larger increases in actual stimulus intensity. In plain words: as a light gets brighter, the physical difference required to perceive a change gets bigger and bigger.
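
A minimal Python sketch of Fechner's law, showing that equal steps in perceived intensity correspond to multiplicative jumps in physical intensity (k = 1 is an arbitrary choice for illustration):

```python
import math

def perceived(intensity, k=1.0):
    """Fechner's law: perceived intensity = k * log(physical intensity)."""
    return k * math.log(intensity)

# Equal steps in perceived intensity require multiplicative (exponential)
# steps in physical intensity:
for i in (1, 10, 100, 1000):
    print(f"physical intensity {i:>4} -> perceived {perceived(i):.2f}")
# Every tenfold jump in physical intensity adds the same constant amount
# (~2.30 with k = 1) to the perceived intensity.
```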

Fechner’s methods are still widely used to this day. However, Fechner's own study of psychophysics only went this far: he considered himself a physicist (an old-school scientist) and did not apply his methods to other areas of cognition.

And BOOM, here came Helmholtz, a big-time genius who specialised in physiological optics. He was known for inventing the ophthalmoscope and for developing the idea of unconscious inference: perception is an indirect, informed process based on the cognitive relationship between action and sensation. Helmholtz also developed the trichromatic theory of colour vision and a theory of depth perception based on visual cues and felt spatial position (the action-sensation relationship again).

It is commonly argued that psychophysics is a foundation of modern psychology, which is fair if you consider that it was one of the first schools of thought to apply a rudimentary form of the scientific method to the human brain and its surrounding organs.

Born in 1832, Wilhelm Wundt was a German academic based in a philosophy department. After reading about Fechner’s work in The Elements of Psychophysics, Wundt began to adopt its practical methods for testing the mind’s function. This led Wundt to publish Principles of Physiological Psychology in 1874 and to establish the first ever psychology laboratory in Leipzig, Germany, five years later (1879). In 1881, Wundt started the first psychology journal, named ‘Philosophical Studies’ (Philosophische Studien).

Wundt split cognition into two categories of experience.

Immediate Experience: The mind’s immediate reaction to a stimulus, without the influence of abstraction or reflection on the stimulus.

Mediate Experience: The mind’s conception or memory of a stimulus.

Wundt wanted to measure immediate experiences; he used tools like introspection to push participants to describe the immediate sensations they were feeling. Wundt trained his students in introspection, warning them not to report sensations from memory. He also stressed that he wanted students to report their sensations (immediate observation of the stimulus) and not their perceptions (the mind consciously making sense of the stimulus). When perceptions were reported during introspection, this was known as a stimulus error.

Wundt’s approach was anti-metaphysical; he dealt only with the data he received from introspection and other methods, and had no interest in abstracting beyond the mechanical analyses of his academic work. Wundt also believed in mental chronometry, pioneered by Donders in the mid-19th century: like physical acts, mental acts require time, and that time can be measured.
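
A worked example of Donders' subtraction logic in Python (the reaction times below are hypothetical, purely to illustrate the arithmetic):

```python
# Donders' subtraction method: subtract the time for a simpler task from
# the time for a task with one extra mental stage. (Reaction times here
# are made-up illustration values, not real data.)
simple_rt = 0.220  # seconds: respond to any stimulus at all
choice_rt = 0.370  # seconds: respond only after identifying the stimulus

decision_time = choice_rt - simple_rt
print(f"time attributed to the extra mental stage: {decision_time * 1000:.0f} ms")
```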

Wundt had trained with Müller and Helmholtz, two other physiology boffins, before founding the lab. In founding it, Wundt was able to distinguish experimental psychology from social psychology. Wundt was a structuralist who used methods like introspection and Donders' chronometry to understand the perceptive elements of consciousness. Like many other psychologists at the time, Wundt focused on perception: it was easy to test in experiments, as the independent variable (the stimulus) was physical and could be manipulated easily.

Wundt would experiment with physical stimuli to understand how objects were mentally represented. These mental representations, or perceptions, were based on sensations. Sensations were never experienced individually, but as part of the perception of an object; an example of a visual sensation could be colourfulness.

Wundt did not believe in the psychophysical approach to studying sensation. He argued that the psychological and physical acts of sensation are separate entities, and that there is something more to perception than the stimulation of neural matter.

It must be noted that the First World War caused a lot of disruption to the studies taking place in the Leipzig laboratory.

As we move into the 20th century, we can split the study of visual perception in psychology into the three Gs:

1.      Gestalt psychologists

2.      Gibson

3.      Gregory

 

Other important players to note are Hubel and Wiesel, who used microelectrodes to record neural activity in cats. This gave us important insight into concepts like neuroplasticity and the cells that would come to be called feature detectors.

After the formation of the Leipzig lab, structuralist psychology moved abroad to the USA, with Titchener starting a laboratory of his own at Cornell (1896). In this lab, Titchener split visual perception into around 32,000 identifiable visual sensations using methods like introspection. Titchener believed that visual perception could be understood by breaking each scene of vision down into its constituent visual sensations.

Now, the Gestalt psychologists thought that was a load of bollocks. Wertheimer argued that whilst perception can be broken down into smaller elements, perception OCCURS as a wider organisation of those elements. In short, we perceive the whole, not the decomposed parts. Wertheimer used phenomena like phi movement to show that we see one object moving, not two individual objects flashing on a screen. The Gestalt psychologists used other optical illusions to further support their theory, and the study of these illusions led them to develop a series of principles of visual perception:

1.      Proximity: Elements that are close to each other are grouped together.

2.      Similarity: Similar elements are grouped together, separately from dissimilar ones.

3.      Common Fate: Things that move together are grouped together.

4.      Good Continuation: Elements that follow a smooth, continuous path are grouped together (the spatial analogue of common fate).

5.      Figure: Properties such as closure, relative size, surroundedness, orientation and symmetry determine which elements are seen as the figure and which as the background of the scene.

These are just a few of the principles they came up with. The rationale the Gestalt psychologists used to justify their theories was that there are field forces in the brain that automatically organise objects with a bias (a nativist approach). This idea is not too dissimilar from modern cognitive psychology: we organise visual experience based on the natural laws and reality of objects.

It should be noted that in the early 20th century, behaviourism was beginning to take off with the publication of J.B. Watson’s “Psychology as the Behaviorist Views It”, which criticised the study of cognition and cognitive perception. However, behaviourism was later dethroned because it could not explain more complex elements of human behaviour (see notes on Skinner vs Chomsky).

Our second G of the 20th century, J.J. Gibson, had a radical approach to perception. His theory stemmed from training WW2 pilots. He found that a good knowledge of pictorial depth cues really did not help pilots learn to land planes, which led Gibson to look for another theory of how we perceive depth. Gibson theorised that we live in a world of textures and gradients, and that these gradients are essential to our depth perception (1966).

Gibson then took this theory further and rejected the ideas of “pictorial cues”, “cognition” and “representation”: just “see” the world (a very bottom-up approach). Gibson characterised the world we see as an optic array. Perceptive puzzles like size constancy dissolve when we consider the gradient of texture.

To perceive the world, according to Gibson, you have to interact with it. Objects whose texture elements appear more widely spaced are closer to the eye; when the texture elements appear packed closer together, the object is further from the viewer. As we move around an object, its texture expands, giving us new information about it. The environment also contains ‘invariants’: aspects of the environment that do not change, such as linear geometry and texture gradients.

Optic flow was the term Gibson used for the way objects in your environment appear to flow outward from a point on the horizon that you are moving towards. Instead of perceiving individual objects within the scene and making inferences (as in top-down approaches to perception), we perceive the geometry and patterns of light interacting with objects; these patterns reach the eye as colours, intensities and frequencies. This structured geometry of light is what Gibson called the optic array. Furthermore, there is no processing of the stimulus in Gibson’s approach: for Gibson, sensation is perception; no inferences are made.
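
A minimal Python sketch of the geometry behind texture gradients (the spacing and distances are assumed values, and the small-angle projection is a simplification):

```python
import math

# Perspective projection: equally spaced ground texture projects to a
# smaller and smaller visual angle with distance, producing the texture
# gradient Gibson described.
texture_spacing = 0.5  # metres between texture elements (assumed)

for distance in (1, 2, 5, 10, 20):
    visual_angle = texture_spacing / distance  # radians, small-angle approx
    print(f"{distance:>2} m away: projected spacing ~ {math.degrees(visual_angle):.1f} degrees")
```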

One of the most famous experiments supporting Gibson’s view of perception was the swinging room. Infants were placed in a room and prompted to perform a reaching task. An experimenter would then move the walls of the room in one motion, keeping the floor stable. The visual cue of the walls moving was enough to knock the infants off balance and make them fall. This suggested that the flow of optic/visual information is more important for balance and perception than any top-down or mechanical knowledge. The experiment was later performed on adults doing a difficult balancing task (to increase load), with similar results (1974).

For the most part, Gibson’s approach had real positives. For tasks like flying and driving, invariants in the environment provide a sound explanation; on clear days, textures and linear geometry do appear to offer a sound basis for perception. Gibson also inspired other researchers to study the optic array, an element that had not previously been considered in the field.

Gibson’s approach drew some criticisms, though. The existence of visual illusions appeared to stump his theory, as they suggest that some level of neural processing must be happening for these perceptive errors to occur. Gibson brushed this off by saying that, because visual illusions are not ecologically valid, they paint an incorrect picture of perception in the environment. This is where he messed up: there are illusions that do occur in the environment. Motion aftereffects (the waterfall illusion), where staring at a waterfall and then looking at a stationary object makes it appear to drift in the opposite direction, happen ecologically. And if there is no analysis of individual elements in the environment, why do we perceive verticals more readily than horizontal geometry? Why do we overestimate the extent of vertical geometry in our environment? Gibson’s theory did not accommodate these environmental regularities.

Now, Richard Gregory was not a fan of Gibson. In 1970, Gregory argued for a top-down approach to perception in the brain: sensory input alone is too impoverished for us to understand the world, since most sensory information is lost during the sensory processing stage (well before it reaches the brain). Gregory argued that an important part of visual perception is supplied by past knowledge and memories: using the stimuli received from the sensory organs together with past knowledge, we form a hypothesis about the world around us. The stimulus input is ambiguous until the top-down processes of the brain give it context.

Illusions were also important to Gregory, as they gave insight into how the brain can form false hypotheses from ambiguous stimulus input, letting him probe what those hypotheses might be. They were also great evidence for his theory: ambiguous stimuli show that an object can support two different perceptions (hypotheses) whilst the stimulus input stays exactly the same. Famous examples include Gregory’s spinning masks and the Necker cube.

Objects like the Necker cube showed that neural processing of the stimulus input is essential to understanding the mystery of perception.

There are some criticisms of Gregory and the top-down, constructivist approach. Firstly, it is difficult to pin down the ‘nature’ of some of these hypotheses. In some instances, we can modify our hypotheses of visual input: ambiguous shapes may rapidly begin to form a face or an animal when we are told to look for one, causing our ‘hypotheses’ to change. In other instances, however, the hypotheses refuse to change: Gregory’s spinning mask (1974) shows that even once we learn how the illusion works, we cannot ‘turn it off’ in our heads, so to speak.

Another criticism is that constructivist theories tend to oppose the nativist approach to perception and other modes of cognition. Through the developmental lens, constructivists hold that a baby is born knowing nothing about visual perception, and that the infant’s vision is then shaped by learning. However, evidence over the years points towards a nativist view of neonate perception: infants show a good grasp of complex hypotheses like size constancy very soon after birth (Slater and Morrison, 1985).

Finally, Gregory’s approach neglects how rich sensory information is ecologically. Gregory used size constancy as an example of a hypothesis in the lab: as an object's retinal image shrinks, the brain maintains the object's size value, implying that a top-down assumption based on previous knowledge has been made. Ecologically, however, there are many more pictorial cues (like Gibson’s invariants) that support the understanding of depth and size, cues that do not rely on previous knowledge.
