The multi-store model of memory: sensory register, short-term memory and long-term memory
Always draw out diagrams of memory models if you have time as this gains you credit
The Multi-Store Model of Memory was devised by Atkinson and Shiffrin in 1968.
This theoretical model seeks to explain how information is transmitted from the outside world through different memory stores, and how it is processed at each point.
 Features of each store: coding, capacity and duration.
Coding is the format in which information is stored.
Capacity is how much information (the quantity) can be held in the memory store at any one time.
Duration is how long information stays in the memory store.
Function is what each store does.
The mind is flooded with environmental stimuli coming from the eyes, ears, taste receptors, touch receptors, etc.
This information arrives at a part of the mind called the Sensory Register. The information enters and is coded in the sensory register in whatever form it was perceived in (so it is modality specific), and each kind is stored in a different store within the sensory register depending on which sense organ the information came from. Visual memory is stored in the Iconic store, sound in the Echoic store, smell in the Olfactory store, touch in the Haptic store, and taste in the Gustatory store. As there is a lot of stimulation coming in all the time, the capacity of the sensory register is unlimited. As most of the sensory information that comes in doesn't need to be consciously considered, and there is a lot of it, the duration of sensory memory in the sensory register is only about 250 milliseconds.
Only sensory information that we pay attention to will move to the next store, and so the vast majority will be lost.
The Short-Term Memory store is the next store. It has a duration of about 18-30 seconds. Only information that is rehearsed (consciously repeated inside the mind) here will pass into the next store; the rest will be lost. Rehearsing information also keeps it in the store for longer than 18-30 seconds. Short-term memory is coded acoustically, as the information is rehearsed using our internal voice.
There are two kinds of rehearsal. Maintenance rehearsal keeps the information in the short term memory for longer, and it may pass into the long term memory eventually. Elaborative rehearsal is where we semantically encode the information (give it meaning), moving it directly from the short term memory to the long term memory.
Chunking is a way to improve short term memory by grouping items so that each group is treated as one item by the short term memory, thus improving recall as the overall number of 'items' is reduced. For example, the eleven digits 07911123456 can be remembered as three chunks: 07911 123 456.
The next store is the Long-Term Memory. Information in the long term memory has an unlimited duration and an unlimited capacity. Information in our long term memory may be lost (we lose access to it), but this access may be regained at a later date. Information in the long term memory is coded semantically (based on its meaning). To use the information in the long term memory we need to bring it out of the long term memory and back into the short term memory. This process is called retrieval.
Unlike the short and long term memory, the sensory register is not under conscious control; information is recorded automatically. Any information found in the short or long term memory was initially gathered by the sensory register.
Research on the sensory register
Capacity- Sperling (1960) flashed a grid of 12 letters onto a screen for a 20th of a second. When participants were asked to recall a randomly selected row of letters, recall was strong. This suggests that all the rows of letters were stored in the sensory register, as participants didn't know which row would be asked for, meaning that the iconic store in the sensory register has a large capacity.
Research on the short term memory
Coding- Baddeley (1966) gave four 10-word lists to four participant groups. The first list consisted of acoustically similar words (words that sound similar). The second list consisted of acoustically dissimilar words (words that sound different). The third list consisted of semantically similar words (words with similar meanings), and the fourth list consisted of semantically dissimilar words (words with unrelated meanings). Baddeley found that immediate recall was worst for list 1, and recall after 20 minutes was worst for list 3. This suggests that information in the short term memory is coded acoustically: immediate recall for list 1 was hardest because the words all sounded similar, so they were easily confused with one another in the store, making recall difficult.
Capacity- Jacobs (1887) presented participants with a list of letters or numbers. Participants then had to recall the list in order. Jacobs found that the capacity for letters was around 7 items and 9 for numbers, suggesting that the capacity of the short term memory is very limited.
Duration- Peterson and Peterson (1959) showed participants 3-letter trigrams (e.g. HFR, TKD). Trigrams with any meaning were avoided so that participants could not rehearse the information semantically. Participants then had to count backwards for a few seconds; this interference task was designed to stop maintenance rehearsal. Peterson and Peterson found that after 18 seconds recall was less than 10%. This suggests that information remains in the short term memory for only a matter of seconds before it disappears, unless it is rehearsed.
Research on long-term memory
Coding- Baddeley (1966) (see above)
Capacity- Wagenaar (1986) kept a diary over the course of six years which recorded over 2,400 events. He tested himself on the events and found 75% recall after 1 year and 45% recall after 5 years, suggesting that the capacity of the long term memory is very large, potentially limitless.
Duration- Bahrick (1975) showed old photographs and names (including those of old school friends) to participants aged 17-74. Recall was 90% after 15 years, and still 80% for names after 48 years. This suggests that the duration of the LTM is very long, potentially limitless.
Evaluation of the Multi-store Model
The short duration of the SR is supported by evolutionary theory, as in the wild quick reactions are vital for survival, so only important information would be retained and processed. Too much information would lead to slower reactions.
There is a large base of research that supports the idea of distinct STM and LTM systems. An example of this is the Shallice and Warrington (1970) study of KF, a brain-damaged case study patient whose STM was impaired following a motorcycle accident but whose LTM remained intact.
It makes sense that memories in the LTM are encoded semantically â i.e. you might recall the general message put across in a political speech, rather than all of the words as they were heard.
The MSM was a pioneering model of memory that inspired further research and consequently other influential models, such as the Working Memory Model.
Some research into STM duration has low mundane realism, as the stimuli participants were asked to remember bear little resemblance to items learned in real life, e.g. Peterson and Peterson (1959) used nonsense trigrams such as 'XQF' to investigate STM duration. Similarly, the ecological validity of many of the experiments is low as they were carried out in lab environments, so participant behaviour may not be the same as would be expected in a more natural environment. This can lead to low external validity for the findings.
The model is arguably over-simplified, as it sees each store as a single unit, but evidence suggests that there are multiple short and long-term memory stores, e.g. LTM can be split into Episodic, Procedural and Semantic memory.
It does not make much sense to think of procedural memory (a type of LTM) as being encoded semantically, i.e. knowing how to ride a bike through its meaning.
The capacity of the STM can be significantly altered by factors such as age (reduces) and practice (increases). This means that the view of a fixed STM capacity is incorrect.
It is only assumed that LTM has an unlimited capacity, as research has been unable to measure this accurately.
Types of long-term memory: episodic, semantic, procedural
Long term memory is the storage of memories over a long period of time. It is suggested that there are different kinds of LTM. The two main types are Declarative/Explicit (knowing what something is) and Nondeclarative/Implicit (knowing how something happens/is done).
There are then two types of Declarative/Explicit memory: episodic memory and semantic memory. An episodic memory is a memory of experiences and specific events. Semantic memory is the memory of facts, meanings, and concepts. Semantic memory is knowledge about the external world.
Episodic memories are stored with reference to the time and place where they occurred. They are recalled using conscious effort and can be expressed with words. They are autobiographical, and the emotions felt at the time the memory is encoded influence memory strength. According to brain scan evidence, episodic memories initially seem to be coded in the prefrontal cortex and are then stored across areas of the brain, though recall seems to occur through the hippocampus.
Semantic memories also require conscious effort to recall and can be expressed in words. These memories are stronger if processed more deeply (e.g. linked to other long term memories). Semantic long term memories last longer than episodic ones, and we may not recall when or where we learned/encoded them. Episodic memories become semantic over time. A number of brain areas are associated with semantic memories, such as the perirhinal cortex (an area at the base of the brain on the underside of the temporal lobe).
The one type of Non-declarative/Implicit memory is procedural memory (unconscious memory of skills/how to do things).
Examples of procedural memories are motor/muscle memories (e.g. how to ride a bike). They are often learnt in childhood and are hard to express in words or consider consciously. They are more resistant to forgetting than episodic or semantic memories. Semantic memory may have a connection to procedural memory due to our ability to produce automatic language (e.g. we can recall a well-known semantic memory, such as that Paris is the capital of France, with very little effort). Procedural memories are located across the brain, but particularly in the motor cortex, cerebellum, and prefrontal cortex.
Episodic memory research
Hertz (1997) took 100 males and females and gave them tasks testing either their episodic or semantic long term memories. Hertz found that females were better on episodic tasks, but there was no difference on semantic tasks. This difference in performance suggests that episodic and semantic memories are separate processes.
Vicari (2007) did a case study of CL, an eight-year-old girl who had brain damage after the removal of a tumour. Vicari found deficiencies in her episodic LTM functions, especially in creating new episodic memories, but she had no trouble creating or recalling semantic memories. This shows that semantic and episodic memories are separate and use different brain areas: the damaged area was the hippocampus, where her episodic memories were thought to be stored, but there was no damage to her perirhinal cortex, where semantic memories are thought to be stored.
Semantic memory evidence
Hertz and Vicari
Procedural memory evidence
Finke (2012) studied PM, a 68-year-old cellist with brain damage after encephalitis. He had amnesia, and it was found that his episodic and semantic memory were both badly affected, but his ability to play and read music was unaffected, including learning new pieces, showing his procedural LTM to be separate from his semantic and episodic LTM.
Evidence for all three
Sacks (2007): Clive Wearing has retrograde amnesia, so he cannot remember his musical education (episodic); however, he remembers facts about his life (semantic). He can also play the piano (procedural). He is unable to encode new episodic or semantic memories due to his anterograde amnesia, but under some experimental conditions he is able to gain some new procedural memories through repetition. This shows that semantic, procedural, and episodic memories are all separate systems that are perhaps found in different brain areas.
Evaluation of types of long term memory
Using case studies such as Sacks's study of Clive Wearing allows us to gain insights that would not be possible in an experimental setting, but they lack enough control to suggest a cause and effect relationship between brain areas and memory functions. It could be that the people in these case studies were different to the normal population in some way even before their brain damage (e.g. Clive Wearing and PM are both musicians, which may have affected how their brain damage altered their behaviour. Similarly, CL was a child, so perhaps her brain was more able to adapt than an adult brain would be).
The use of brain scanning in modern cognitive neuroscience has allowed researchers to study the brain and memory more scientifically, identifying the brain areas where different types of memory are located by observing which parts of the brain are activated during fMRI scanning.
As both episodic and semantic memories are declarative there may not be a true division between them.
The working memory model: central executive, phonological loop, visuo-spatial sketchpad and episodic buffer
The Working Memory Model was created by the scientists Baddeley and Hitch in 1974.
Baddeley and Hitch thought that the short term memory must be more complex than just a single store transferring information into the long-term memory. Due to the way that we use short term memory they felt that it must be an active processor, holding multiple pieces of information whilst those pieces of information are being worked on.
So the working memory model was meant to be a replacement for the Short-term Memory store. It initially had three components, but they added a fourth later.
So according to their model, short term memory still has to interact with long term memory to place information into it and to retrieve information from it.
Features of the model: coding and capacity
The first part of the Working Memory Model is the Central Executive. This is the director/head of the model; it receives and filters information before passing it on to two 'slave' systems. The Central Executive is limited in capacity, so it can only hold one piece of information at a time. It switches attention between different inputs.
The first slave system is the phonological loop. The phonological loop deals with auditory information. It has a limited capacity and a 2-second duration, so confusion can occur between similar-sounding words. It is split into the primary acoustic store ('inner ear', dealing with sounds recently heard and holding them in your short-term memory) and the articulatory process ('inner voice', which keeps information in your mind through subvocal repetition).
The other slave system is the visuospatial sketchpad. The visuospatial sketchpad is your 'inner eye'. It has a limited capacity and a 2-second duration, and stores visual and spatial information. It consists of the visual cache and the inner scribe. The visual cache is a passive store of the form and colour of objects, and the inner scribe is an active store that allows you to visualise the relationship between objects and rehearse what you have seen.
Both slave systems have a link to long term memory and are able to put memories into and take memories out of long term memory.
Baddeley and Hitch later thought that this model was somewhat limited, as the capacity of the phonological loop and visuospatial sketchpad was not enough to explain some research findings, so in 2000 the episodic buffer was added to the Working Memory Model. This is a general store that holds and combines information from the visuospatial sketchpad, phonological loop, and central executive, and can then pass that information into the long-term memory.
Testing the models
Memory models have been tested through experimental lab studies via interference tasks and dual-task experiments. These are usually well controlled, carefully run, and have large numbers of participants, meaning that their internal validity is usually quite high and they tend to produce strong results that are replicable. However, they lack ecological validity, as the laboratory is not participants' natural environment, and they may lack mundane realism, as tasks may not use memory in the way that it is naturally used, so the findings may lack external validity.
Memory models are also tested through clinical evidence, using brain-damaged individuals to show the separation of components of the memory model within the brain (i.e. because they have deficiencies in some areas of the model but not others).
Another method used to test memory models is brain scanning. Scanners such as fMRI scanners show different areas of brain activation during separate short or long term memory tasks, indicating the separate systems.
Research on the Central Executive of the Working Memory Model
 Braver (1997) provides biological evidence of the central executive via brain scans. Braver found an area of the prefrontal cortex that became more active the more demanding a task was for the central executive.
Baddeley (1996): participants were asked to verbally generate random lists of numbers while also attempting to switch between typing numbers and letters on a keyboard. Baddeley found that, compared to typing or generating a list of numbers on its own, participants performed considerably worse when completing both tasks at the same time. This shows that people have a low aptitude for dual-task performance, perhaps because the central executive is limited in capacity and can only cope with one type of information at a time.
An application of this finding is driving. Driving involves spatial awareness. If you are attempting to have a conversation with someone in the car whilst driving, then this might result in lower performance on both activities, potentially putting the safety of both you and the passenger at risk. However, this finding relates to activities that require conscious thought. Experienced drivers don't rely on conscious thought to drive (it becomes an automatic process, meaning the CE is freer to do other things), so perhaps only less experienced drivers need to heed this advice.
Research on the Visuospatial sketchpad
Klauer and Zhao (2004) gave groups of participants either a visual task of remembering Chinese ideographs, or a spatial task of remembering the location of dots on a screen. At the same time they also gave participants either a spatial or a visual interference task (an interference task is a task that uses the same kind of processing as the main task). They found that the spatial task was disrupted more by spatial interference than by visual interference, and the visual task was disrupted more by visual interference than by spatial interference. This shows that the visuospatial sketchpad has distinct visual and spatial components, because if visual and spatial memory were both processed by one single component then both kinds of interference would affect both tasks in the same way.
Trojani and Grossi (1995) studied SC; Shallice and Warrington studied KF. Both participants had short term memory issues after brain damage affecting the functioning of their phonological loops (they couldn't remember auditory information), but not the functioning of their visuospatial sketchpads (they could remember visual information). This suggests that the phonological loop and visuospatial sketchpad are separate and likely located in different brain regions, as otherwise they would have been affected equally.
Research on the Phonological loop
Baddeley (1975): participants were visually shown lists of 5 words with short exposure, then asked to write the words down in the same order. The first condition used monosyllabic words like 'bond' and 'yield'. The second condition used polysyllabic words like 'opportunity'. Baddeley found significantly better recall in the first condition with the shorter words. This shows that the capacity of the phonological loop is not affected by the number of distinct items as much as by the time it takes to say them. The time limit for the phonological loop is around 2 seconds' worth of speech; words taking longer than this to say are much less likely to be remembered. This is known as the word length effect.
Trojani and Grossi (1995), Shallice and Warrington (1970)
Research on the episodic buffer
Prabhakaran et al (2000) got participants to do tasks that involved equal amounts of spatial and verbal information whilst being fMRI scanned. In some tasks the spatial and verbal information was integrated, and in others it was not. The scans showed more activation in the prefrontal cortex when information was integrated, and more activation in posterior brain regions when information was not integrated. This is potential biological evidence for the episodic buffer being located in the prefrontal cortex, as this region appears to specialise in combining audio and visual information.
Evaluation of the Working Memory Model
The WMM provides an explanation for parallel processing (i.e. where processes involved in a cognitive task occur at once), unlike Atkinson and Shiffrin's MSM.
The multi-store model sees the short term memory as a passive store of information, while the working memory model more accurately describes it as an active processor. As a result, psychologists now often refer to the short term memory as the working memory.
The working memory model provides an explanation of processing deficits such as reading difficulties, so could potentially be applied to make therapies more effective for treatment of such conditions.
Research in this area of psychology is generally lab-based, therefore highly controlled with strong internal validity.
There is considerable brain imaging research showing that different areas of the brain are more active when performing different kinds of task. For example Braver (1997) provides biological evidence of the central executive via brain scans. Braver found an area of the prefrontal cortex that became more active the more demanding a task was for the central executive.
A Shallice and Warrington (1970) case study reported that brain-damaged patient KF could recall visual but not verbal information immediately after its presentation, which supports the WMM's claim that separate short-term stores manage short-term phonological and visual memories.
The model was developed based on evidence from laboratory experiments, so confounding variables could be carefully controlled to produce reliable results (that can be replicated).
Results from laboratory experiments researching the WMM will often have low mundane realism (i.e. they may not relate to real life), as tasks such as repeating 'the the the' are arguably not representative of our everyday activities, so they can't be generalised to memory in day to day life.
The central executive has been criticised by other psychologists as a vague concept without full explanation of its function and is quite difficult to test (not fully falsifiable). Baddeley admits that the concept needs development, and included the episodic buffer as part of this development.
Explanations for forgetting: proactive and retroactive interference and retrieval failure due to absence of cues
Interference is when two sets of information become confused in memory. This can be because one set disrupts the coding of the other in the long term memory. There are two kinds of interference: proactive and retroactive.
Proactive interference is when previously learnt information affects the recall of new information.
Retroactive interference is when new information affects the recall of previously learnt information, as it may overwrite or block the earlier information.
Interference is much more likely to occur when two pieces of information are similar, due to response competition. Interference is also time-sensitive, so it is much more likely to occur if both pieces of information are learnt close together in time (and less likely to occur if they are learned far apart).
Studies for interference
Schmidt (2000) investigated memories of childhood street names. The sample was 211 people aged 11-79 who responded to a questionnaire containing a map of the area around their old school without street names (the streets were numbered instead). They were asked to name the streets from memory. Schmidt found that the more times an individual had moved home, the fewer street names they recalled (a negative correlation). This demonstrates retroactive interference, as remembering new street names makes recalling old street names harder.
Baddeley and Hitch (1977) demonstrated retroactive interference as they found that the more games rugby players had played, the worse their recall of the names of teams they had played earlier in the season became. Memory of the newer team names had interfered with memory of the older team names.
Evaluation of interference
Loss of information due to interference may only be temporary, so interference may not be a complete explanation for forgetting, which implies more permanent loss. Retrieval failure (see below) may be a better explanation in such cases, as a memory may simply be harder to recall when the right cues aren't present.
There is a question as to whether interference leads to the overwriting of memories or not.
Issues of validity: evidence that interference can explain forgetting frequently comes from artificial laboratory experiments using artificial tasks, so interference may not occur to the same extent in more real-life scenarios, challenging interference as an explanation of forgetting.
Knowledge of how forgetting works has practical applications such as effective revision strategies.
Interference only explains forgetting if two sets of information are similar, so it cannot explain many day-to-day cases of forgetting.
Another explanation for forgetting is retrieval failure. Retrieval failure is a problem with retrieving/recalling the memory. This may be because, whilst the memory is in our long term memory, the information (cues/prompts) that makes us think of the memory is absent (cue-dependent forgetting).
The three kinds of cues that you can have for a memory are context-dependent cues, state-dependent cues, and category/organisation-dependent cues.
The context is the situation around you as you initially code the information. If you are then asked to recall that information in a different environment, your memory will be inhibited, as you don't have the environmental cues that you had when you first coded the memory.
The state is the mood, physical state or mental condition you were in when you learned the new information. Being in a different mood or physical/mental state (having a different internal environment) may inhibit recall of the information.
Category/organisation- it is easier to find a memory if our memories are clearly organised. By knowing the category that the information fits into when you remember it (e.g. that Paris is the capital city of France, rather than just that Paris is a capital city) you are more likely to be able to recall it in future. More effective cues have fewer things associated with them (it is easier to find Paris among the city names of France than amongst the city names of the world), so greater organisation aids memory.
Research on context-dependent cues
Godden and Baddeley (1975) studied divers who had learnt new material either on dry land or while underwater. They were then tested on the new information either on land or underwater. Recall was worst when participants were tested in a different context from the one they learned in, and best when they were tested in the same context. This suggests that being in the same context as you learned the information in when you are being tested on it aids recall.
Therefore it might be good for instructors to teach safety instructions to divers once they are already underwater so that they are more likely to remember them in an emergency.
Research on state-dependent cues
Overton (1972): participants learned material whilst either drunk or sober, and were then tested on the material when either drunk or sober. Overton found that recall was worse when participants were in a different internal state when tested than when they learnt the information (e.g. learnt it when drunk, tested on it when sober). Recall was best if they had the same internal state at recall as they had when learning. This shows that state-dependent cues can aid recall of information.
Research on category-dependent cues
Tulving and Pearlstone (1966) gave participants 48 words to learn, and found that the free recall group (no categories for the words) performed worse than the group given the list split into four 12-word categories. The categories therefore acted as a cue. This suggests that organisation improves recall.
Evaluation of retrieval failure due to absence of cues
Explains why we temporarily forget (correct cues may not be present in one situation but may be present in another).
The levels of processing theory, which states that the deeper you think about information the more likely it is to be recalled, supports this explanation of forgetting, as cues provide a richer level of detail and thus the chance of recall is increased.
Much of the research on this theory is laboratory based and therefore lacks ecological validity. The studies do not test 'everyday memory' and therefore it could be argued the evidence lacks validity.
The theory has been helpful to forensic and police work as it helps facilitate recall from eye witnesses. This means the theory has a real world practical application making it useful outside of the psychology laboratory.
Factors affecting the accuracy of eyewitness testimony: misleading information, including leading questions and post-event discussion; anxiety.
Eyewitness testimony is when observers of events are asked to recall from memory details of what they have seen.
Bartlett (1932) said that memories are not accurate 'snapshots' of events, but are 'reconstructions' of events that are influenced by attitudes and stereotypes (schemas). If recall is not objective then this is an issue for eyewitness testimony.
Reconstructive memory is memory that is not an accurate recording of events. It is reconstructed in recalling and may produce errors (confabulations).
Leading questions are questions that imply a particular answer, and can influence how a memory is recalled. This could be due to an actual change in memory (substitution bias explanation) or due to the witness feeling emotional pressure to give a particular response (response bias explanation). Memories could also be altered when recalled to be in line with a personâs pre-existing biases (schemas). Post event discussion can also alter the memory, as one witness recalling events may alter the accuracy of the recall of another witness who heard their recall due to memory conformity, where witnesses go along with the accounts of others for social approval.
Leading questions
Loftus and Palmer (1974) aimed to investigate the effect of leading questions on the accuracy of eyewitness testimony.
Their sample consisted of 45 American students, who were divided into five groups of nine. All of the participants watched a video of a car crash and were then asked a specific question about the speed of the cars. Loftus and Palmer manipulated the verb used in the question, for example: 'How fast were the cars going when they smashed / collided / bumped / hit / contacted with each other?'
They found that the estimated speed was affected by the verb used. The more extreme the verb, the faster the participants estimated that the cars had been going. For example, participants who were given the verb smashed reported an average speed of 40.5 mph, whereas participants who were given the word contacted reported an average speed of 31.8 mph, an overall difference of 8.7 mph.
The results clearly show that the accuracy of eyewitness testimony is affected by leading questions and that a single word in a question can significantly affect the accuracy of our judgements.
In a second experiment, Loftus and Palmer used a different sample of 150 American students, who were divided into three even groups. All the students watched a one-minute video depicting a car accident and were then given a questionnaire to complete.
One group was asked: 'How fast were the cars going when they smashed into each other?'
Another group was asked: 'How fast were the cars going when they hit each other?'
The final group (control) was not asked about the speed of the vehicles. One week later the participants were asked a series of questions via questionnaire about the accident. The critical question was: 'Did you see any broken glass?' 32% of the participants who were previously questioned using the verb smashed reported seeing broken glass; 14% of the participants who were previously questioned using the verb hit reported seeing broken glass; and 12% of the control group reported seeing broken glass.
There was no broken glass in the video clip, yet the participants who had been questioned using the verb smashed were significantly more likely to report seeing broken glass as a result of the earlier leading question. The verb smashed has connotations of faster speeds and broken glass, and this question led the participants to report seeing something that was not actually present. Their memory for the original event was distorted by the question used one week earlier (response substitution). In conclusion, this study demonstrates the power of leading questions, and implies that they can potentially have a longer term impact on the accuracy of recall of an incident.
Evaluation
Loftus and Palmer's research has questionable ecological validity. On the one hand, questioning participants about everyday events like a car crash appears to be a genuine measure of eyewitness testimony. However, the participants in their research watched a video of a car crash and witnessed the events unfold from start to finish. In everyday reports of car accidents, witnesses rarely see the whole event; they are either involved in the event directly, or see a small part of the event happen in their peripheral vision. Therefore, their results do not reflect everyday car accidents, and we are unable to conclude whether participants involved in real accidents, who would have a stronger emotional connection to the event, would be susceptible to leading questions in the same way.
A second weakness of Loftus and Palmer's research is that their study lacks population validity. Their two experiments consisted of 45 and 150 American students. It is reasonable to argue that the students in their experiments were less experienced drivers, who may be less accurate at estimating speeds. Consequently, we are unable to generalise the results to other populations; for example, older and more experienced drivers may be more accurate in their judgement of speeds and therefore not as susceptible to leading questions.
However, Loftus and Palmer's research took place in a laboratory at Washington University and was therefore highly controlled. This high degree of control reduces the chance of extraneous variables, increasing the validity of the results. Furthermore, it is easy for psychologists to replicate their research to see if the same results are achieved with a different population.
Gabbert (2003) used videos of a crime shot from different perspectives. Participants were tested in pairs, and each member of a pair was shown a different video, with unique information available in each film. In one condition the pairs were allowed to discuss what they had seen, and in another they weren't. 71% of participants who had been allowed to discuss what they had seen included aspects of the film they had not seen in their recollection of the video, compared to 0% of those who were not allowed to discuss what they had seen. This suggests that witnesses will change their accounts of crimes to match what other witnesses have said. This may be an attempt to seek social approval, resulting in memory conformity.
However, this was a laboratory situation unlike real life (low ecological validity), and there were no consequences for participants adapting their memories, as there would be in real life (low mundane realism), so memory conformity may not be as common in real life.
However, Bodner (2009) found that if witnesses are warned of the effects of post-event discussion then its impact can be reduced.
Another factor affecting eyewitness testimony is anxiety. Anxiety is a mental state of arousal that includes feelings of extreme concern and tension. This often accompanies physiological changes such as increased heart rate.
Eyewitness testimony is often used after violent crimes that have caused eyewitnesses high anxiety. Research on eyewitness testimony often has no emotional impact on the participants, resulting in low validity, so research that does not cause anxiety cannot be generalised to the courtroom, as the participants don't have the same mental state as real eyewitnesses.
Research indicates that high levels of anxiety produce poor recall of the perpetrator. However, this may be due to weapon focus: weapons are a cause of anxiety, so witnesses may be distracted by them, focusing their attention on the weapon rather than the criminal.
However, some anxiety may increase recall as it is a state of arousal so could improve general alertness/awareness of the situation and surroundings. The emotional aspect could also improve memory encoding.
The Yerkes-Dodson law of arousal suggests that these conflicting results could be explained by accuracy increasing as anxiety rises, up to an optimal point, after which anxiety becomes too high and more stress results in lower accuracy.
Research on anxiety and eyewitness testimony
Loftus (1979) placed participants outside a lab so that they could listen to conversations inside. In one condition there was a normal conversation about equipment failure, and a man walked out with greasy hands and a pen. In the other condition the conversation was hostile, with the sound of breaking glass and furniture being knocked over, and a man walked out with a knife covered in blood. Participants were then given 50 photographs and asked to identify the man who came out of the lab. More participants identified the man with the pen (49%) than the man with the knife (33%). This suggests that the high anxiety caused by the knife took focus away from the face, so this study acts as evidence for weapon focus.
Peters (1988) found that participants who visited a healthcare centre and interacted with a researcher and a nurse were better able to recognise the researcher than the nurse, because the nurse gave them an injection, suggesting that there was weapon focus on the needle. This study is stronger as it has high ecological validity (a normal setting) and high mundane realism (getting an injection is a normal procedure), so its results are more generalisable.
Yuille and Cutshall (1986) interviewed 13 witnesses of a real shooting in Canada four months later. Despite leading questions, recall was as high as 88%, and those who reported the highest levels of stress at the time gave the most accurate responses. This suggests that in real-life situations anxiety may not have the same effect on witnesses as on participants in a lab, making them more accurate, not less.
However, those that experienced the highest levels of stress may also have been closest to the crime scene, so they may have seen more information to recall.
This study is supported by Christianson and Hubinette (1993) who showed that 110 victims who had been directly threatened during 22 genuine bank robberies were more accurate and detailed in their responses than bystanders in the bank.
Evaluation of eyewitness testimony research
Research on the accuracy of eyewitness testimony has real-life applications, such as the development of better interview techniques such as the cognitive interview (see below).
There is research that suggests that individual factors such as personality and age can influence the impact of misleading information. For example, Warren et al (2005) found that misleading information such as leading questions is more effective on children than adults, which has implications for how children are treated as witnesses.
As studies such as Loftus's use films of staged crashes and crimes, and are conducted in a laboratory setting rather than in a courtroom, there is a lack of consequences for inaccurate eyewitness testimony. Lab-based studies also suffer from demand characteristics, as participants want to 'help' the researcher by giving them the responses that they expect.
Loftus's (1979) study can be seen as unethical as it was designed to create high levels of anxiety, which goes against the psychologist's duty to protect participants from (in this case mental) harm.
Improving the accuracy of eyewitness testimony, including the use of the cognitive interview
Numerous studies have indicated that eyewitness testimony lacks accuracy due to factors such as anxiety, leading questions, and post-event contamination.
Fisher (1987) studied techniques used by the police in Florida when interviewing witnesses. He identified the following factors in this 'standard interview' that he felt needed improvement:
Witnesses were given a large number of quick and direct closed questions in a short time.
The order of the questions was not matched to the witnesses' mental representation of the event.
Witnesses were not allowed to talk freely about their experience, and were frequently interrupted.
In response to these issues Fisher and Geiselman developed the Cognitive interview (1985) and suggested that police use it in order to improve the accuracy of the eyewitness testimony.
The four parts of the Cognitive Interview are: Context Reinstatement (mentally returning to the scene of the crime, including the environmental and emotional state, as these act as cues for the recall of more information); Report Everything (the witness should report all details, even those that seem irrelevant); Recall from a Changed Perspective (consider/mentally recreate how the crime would be recalled from the perspective of other witnesses and the perpetrator; this 'holistic' view of events is intended to minimise the biases of the witness and disrupt schemas which may be influencing the accuracy of their recall, and it is also much harder to lie in this way); and Recall in Reverse Order (recall events in different chronological orders, for example from end to beginning or from the middle to the start, to check the accuracy of the original statement and to prevent schemas from affecting recall).
An improved cognitive interview called the Enhanced Cognitive Interview was developed by Fisher and Geiselman in 1987. It focused on building trust between the interviewer and the witness so that the witness felt more comfortable disclosing information.
The Enhanced Cognitive Interview focuses on the interviewer not distracting the witness, the witness controlling the flow of information, asking open-ended questions, getting the witness to speak slowly so that everything they are saying can be understood, reminding witnesses not to guess and that 'Don't know' is an acceptable answer where necessary, offering comments to clarify statements, and finally reducing anxiety in the witness and getting them to relax.
Holliday (2003) adapted the cognitive interview to form the Modified Cognitive interview which can be used on children and people with learning difficulties.
Evaluation of the Cognitive Interview
Fisher, Geiselman, and Amador (1989) did field research in which 16 detectives from a Florida police department were divided into two groups matched on their previous performance at interviewing. One group received CI training and the other acted as a control group, receiving no additional training. After training, the CI group gained 63% more information in subsequent interviews than the untrained group. This suggests that the Cognitive Interview is an effective method for improving the information gained by real interviewing police officers operating in the field.
This is also supported by Memon (2010) who reviewed 57 studies comparing CI, ECI, or MCI to control groups. Memon found that CI interviews produced more accurate recall than non-CI interviews, especially in older people.
Geiselman and Fisher (1997) found that the cognitive interview greatly improved the accuracy of eyewitness testimony suggesting that it has higher validity than the standard interview.
A practical application of the cognitive interview has been the development of the Modified Cognitive interview which can be used so that children and people with learning difficulties can be interviewed more effectively.
However, CI is time-consuming and so more costly. The time consuming nature is problematic if you need information quickly for a fast-moving case, and often requires more time than officers have operationally available, so this limits its use even though it is found to be effective.
CI is not effective at improving recognition of suspects in identity parades and photographs, so this reduces its usefulness to a number of police activities involving witnesses.