The Gimli Glider: A Story of Engineering Near-Tragedy
Overview of the Incident
Date: July 23rd, 1983
Aircraft: Air Canada Boeing 767-233
Flight Plan: Montreal to Edmonton with a stop in Ottawa
Passengers: 61 passengers and 8 crew members
Pre-flight Checks and Fuel Issues
Initial Situation:
- Mechanics informed the pilots that fuel gauges were not working and the replacement parts would not arrive until the next day.
- Decision-making dilemma:
- Cancel the flight (leading to unhappy passengers).
- Check the fuel manually and reassess in Ottawa.
Calculating Fuel Requirements
Distance to Edmonton: 1850 miles
Fuel Efficiency: 0.15 km/kg
Conversion to Metric — Finding the Required Fuel:
- Converting a dipstick volume (liters) to fuel mass (kilograms) requires the fuel's density, which changes with temperature.
- The crew used a density conversion factor from English units (lb/L), which was not appropriate for the metric calculation.
Fuel Calculations by Crew
Initial Fuel Measurements (Incorrect Calculation):
- Initial fuel reported: 7,682 L
- Used the conversion factor 1.77, treating it as kg/L when it was actually lb/L:
- Calculation: 7,682 L × 1.77 "kg/L" = 13,597 kg
- Total fuel mass needed for the flight: 22,300 kg
- Additional fuel apparently required: 22,300 kg − 13,597 kg = 8,703 kg
- Fuel to be added, in liters: 8,703 kg ÷ 1.77 "kg/L" ≈ 4,916 L
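The crew's erroneous arithmetic can be reproduced in a few lines (a sketch using the figures above; the variable names are my own):

```python
# Reproduce the crew's mistake: 1.77 is lb/L, but it was applied as if kg/L.
fuel_onboard_L = 7682        # dipstick reading, liters
wrong_factor = 1.77          # actually lb/L, mistaken for kg/L
fuel_needed_kg = 22_300      # total fuel mass required for the flight

onboard_kg_wrong = fuel_onboard_L * wrong_factor      # "13,597 kg" (really pounds)
to_add_kg = fuel_needed_kg - round(onboard_kg_wrong)  # 8,703 kg still "needed"
to_add_L = to_add_kg / wrong_factor                   # liters ordered

print(round(onboard_kg_wrong), to_add_kg, round(to_add_L))  # 13597 8703 4917
```

The result agrees with the notes to within rounding (4,916 vs. 4,917 L); the arithmetic is flawless, and every number is wrong.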
Correct Fuel Calculation (Student Exercise)
Manual Calculation Task: Students are tasked with doing the same calculation with proper unit conversions to assess accuracy.
Transition to Flight and Errors
Fuel Verification: During the Ottawa stop, Bob Pearson re-checked the fuel quantity by dipstick and converted it again using the same wrong conversion factor.
Key Takeaway: Importance of understanding engineering designs and the danger of using known solutions without adaptability for new situations.
Warning on Complacency: The dangers of lazy safety checks leading to disastrous outcomes were highlighted.
Engine Failure Emergency Scenario
Flight Experience: At cruising altitude of 41,000 feet, both engine fuel pumps failed with alarm signals.
Decision: Pilots opted to divert to Winnipeg for an emergency landing once the engines were running on gravity feed alone.
Major Malfunction: Both engines then stopped unexpectedly and the instruments went blank, causing chaos on board. The flight manual contained no procedure for a double-engine failure.
Emergency Landing Preparation
Available Emergency Technologies:
- A ram-air turbine deployed to provide limited power to critical instruments, though not all required instruments were powered.
Navigational Calculations for Glide:
- Needed to compute glide ratio by checking altitude at two different times using the altimeter, asking control towers for distance flown, and determining if the plane could reach Winnipeg safely based on altitude drop calculations.
- Measurements: The altitude dropped from 35,000 ft to 30,000 ft over 12 miles, with Winnipeg positioned 100 miles away at 1000 ft above sea level.
Situational Challenges During Descent
Alternate Landing Choice: The pilots diverted to a decommissioned air force base at Gimli, only 12 miles away, and prepared for an emergency landing despite approaching too fast for the glide path.
Techniques to Slow Down Without Power:
- Adjusting aerodynamics by tilting the wings, changing the nose angle, and using flaps to alter the lift surfaces.
Unprecedented Actions: These maneuvers had never been trained or simulated for a 767.
Encountering Obstacles on Landing Approach
Emergency Dynamics: Controlling the aircraft without powered hydraulics was likened to steering a car whose power steering has failed — physically demanding.
Landing Gear Complications: The attempt to deploy landing gear manually through gravity failed, intensifying the urgency of the situation.
Visibility Challenges: Co-pilot spotted children playing on the runway, indicating a direct danger to public safety during landing.
Worst-Case Scenario
Landing Turmoil: The co-pilot noted that not just the runway but the surrounding area was populated with families enjoying a day at the park, further complicating the landing scenario.
Accident Avoidance: The unexpected crowd on the ground meant the emergency landing itself now posed a severe risk to the public.
Successful but Tumultuous Landing
Pilot's Execution: Pearson executed a critical maneuver to align the plane with the runway just before touchdown.
Braking Technique: He braked forcefully, blowing out the rear tires to shorten the stopping distance; the plane's nose struck a guardrail.
Visual and Sensory Effects: A shower of sparks was produced upon landing, highlighting the urgency and danger of the situation.
Emergency Response and Aftermath
Quick Actions Post-Landing: Parents from the crowd used fire extinguishers to put out the fire that resulted from the landing.
Passenger Evacuation Issues: The emergency slides deployed too high off the ground, causing minor injuries among passengers; no one was seriously hurt, and the evacuation was completed successfully.
Post-Incident Reflection: Bob Pearson faced temporary demotion, yet was also awarded for outstanding airmanship, emphasizing the dual narratives of accountability and achievement following the event.
Learnings from the Incident: Ethical reflections on decision-making errors leading to near disasters, serving as reminders for meticulous calculations in aviation contexts.
Restoration of Aircraft: The aircraft was repaired and returned to service within a week, and it flew safely across Canada for over 20 years; pilots still recalled the incident long afterward.
What this lecture is about
This lecture uses the Gimli Glider incident — one of the most dramatic engineering near-disasters in aviation history — to teach fundamental lessons about unit conversions, engineering calculation verification, safety culture, and what happens when engineers and operators make systematic errors under pressure. This is not just a story lecture. Every element connects to core engineering principles that apply directly to chemical process design.
The Setup — July 23, 1983
Air Canada Flight 143 was a brand new Boeing 767-233, one of the first metric aircraft Air Canada operated. The flight was from Montreal to Edmonton with a stop in Ottawa, carrying 61 passengers and 8 crew. Before departure, the crew discovered the fuel quantity gauges were inoperative — the computerized fuel management system was broken. The decision was made to fly anyway, checking fuel manually with dipsticks.
This first decision point is itself an engineering lesson. Operating with a known instrument failure creates a situation where there is no redundancy for a critical measurement. The proper response would be to cancel the flight until the instruments were repaired. The pressure of unhappy passengers overrode proper safety judgment — and this is exactly the kind of human factors failure that causes accidents. The Texas City explosion also involved operators making decisions under pressure that deviated from safe procedure.
The Unit Conversion Error — the heart of the incident
The new Boeing 767 used metric measurements. Fuel quantity was measured in kilograms. But the dipsticks in the fuel tanks measured volume in liters. To convert from liters to kilograms, you need the density of jet fuel.
The crew had always worked in English units. The conversion factor they knew was 1.77 lb/L — pounds per liter. But they needed kg/L. The correct density of jet fuel is approximately 0.803 kg/L (which you can derive by converting 1.77 lb/L using the fact that 1 kg = 2.205 lb, giving 1.77 / 2.205 = 0.803 kg/L).
Here is exactly what Bob Pearson the pilot did:
He found 7,682 L of fuel in the tanks. He multiplied by 1.77 — treating this as kg/L when it was actually lb/L — getting 13,597 kg. He needed 22,300 kg for the flight, so he calculated he needed to add 8,703 kg. He then divided 8,703 kg by 1.77 to get 4,916 L to add.
The catastrophic error: by using lb/L instead of kg/L, he applied a conversion factor 2.205 times too large. He thought the fuel already in the tanks was 2.205 times heavier than it actually was — and the plane ended up loaded with less than half the fuel it needed.
The correct calculation: 7,682 L × 0.803 kg/L = 6,169 kg already in tanks. He needed 22,300 - 6,169 = 16,131 kg more. At 0.803 kg/L, that's 20,088 L to add. He added only 4,916 L instead of 20,088 L — less than a quarter of what was needed.
The plane took off with roughly 22,300 lb of fuel instead of 22,300 kg — a difference of a factor of 2.205. The plane had less than half the fuel required.
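The corrected arithmetic can be checked the same way (a sketch; the 2.205 lb/kg factor and fuel figures come from the text above):

```python
LB_PER_KG = 2.205
density_kg_per_L = 1.77 / LB_PER_KG   # ≈ 0.803 kg/L, the correct density

fuel_onboard_L = 7682
fuel_needed_kg = 22_300

onboard_kg = fuel_onboard_L * density_kg_per_L                   # ≈ 6,167 kg aboard
should_add_L = (fuel_needed_kg - onboard_kg) / density_kg_per_L  # ≈ 20,100 L needed

actually_added_L = 4916                                          # what the crew loaded
total_kg = (fuel_onboard_L + actually_added_L) * density_kg_per_L

print(round(total_kg))   # 10113 — less than half of the 22,300 kg required
```

Small differences from the figures in the text (6,167 vs. 6,169 kg) come from rounding the density to 0.803 before multiplying.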
The Ottawa Stop — the second chance they missed
When the plane landed in Ottawa, Pearson re-checked the fuel with the dipsticks and converted again — using the same wrong conversion factor. The co-pilot checked the math. He verified the arithmetic was correct. But nobody checked whether the conversion factor itself was correct.
This is one of the most important lessons in the entire lecture and the professor states it explicitly: repeating known solution paths without understanding leads to an inability to cope with new situations. Lazy safety checks can lead to disaster.
In engineering this translates directly to: checking that the math is correct is not the same as checking that the approach is correct. You can perfectly execute a wrong method and get a wrong answer with mathematical precision. This is why engineering design reviews must check assumptions and methods, not just arithmetic.
In your own work — Aspen simulations, material balances, cost estimates — the lesson is to always verify that your conversion factors, your unit systems, and your fundamental assumptions are correct before trusting any numerical result.
The Emergency Unfolds
At 41,000 feet over the wilderness of southeastern Manitoba, the left engine fuel pump alarm sounded. Then the right engine fuel pump failed. The engines were still running on gravity feed, but the pilots knew this was serious and diverted toward Winnipeg.
Then a sound none of them had ever heard before — a loud "BONG" — went off. All instruments went blank. Both engines stopped simultaneously. The plane became a glider at 41,000 feet.
The pilots checked the manual for instructions on double engine failure. The manual writers had assumed a double engine failure was impossible and had written no procedure for it. This is another engineering lesson: designing procedures only for scenarios you consider possible leaves you completely without guidance for the scenarios you didn't anticipate. In process design this is the concept of the design envelope — you must think about failure modes outside the normal operating range, not just within it.
The Ram-Air Turbine — partial salvation
The plane had a ram-air turbine (RAT) — a small propeller that deploys from the belly of the plane and generates power from the airflow as the plane moves. This provided power for a few critical instruments.
But critically, the airspeed indicator and vertical speed indicator were not classified as "critical" instruments, so they were not powered by the RAT. This left the pilots unable to directly measure how fast they were descending or how fast they were flying.
The engineering lesson: the definition of "critical" for backup power purposes had not been thoroughly analyzed. A proper failure mode and effects analysis (FMEA) would have identified that knowing airspeed and descent rate are critical to managing a powerless glide. Incomplete safety analysis — deciding what matters under abnormal conditions — is a design failure.
The Glide Ratio Calculation — engineering under pressure
With no airspeed or vertical speed indicator, the crew had to calculate whether they could reach Winnipeg. They used the altimeter and time to derive the glide ratio:
They dropped from 35,000 ft to 30,000 ft — a loss of 5,000 ft — over 12 miles of horizontal distance. This gives a glide ratio of 12 miles / (5,000 ft × 1 mile/5,280 ft) = 12 / 0.947 = approximately 12.7 miles of forward travel per mile of altitude lost.
They were at 30,000 ft altitude and needed to reach Winnipeg at 1,000 ft elevation — meaning they needed to lose 29,000 ft of altitude. At 12 miles of forward travel per 5,000 ft of altitude lost, they could cover 29,000/5,000 × 12 ≈ 69.6 miles.
Winnipeg was 100 miles away. They could only glide about 70 miles. They could not make it to Winnipeg. The co-pilot correctly figured this out.
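That reasoning is short enough to check directly (a sketch with the numbers from the notes):

```python
FT_PER_MILE = 5280

# Observed during the powerless descent:
alt_lost_ft = 35_000 - 30_000   # 5,000 ft of altitude lost...
ground_miles = 12               # ...over 12 miles of ground track (from ATC)

# Glide ratio: feet forward per foot of altitude lost.
glide_ratio = ground_miles * FT_PER_MILE / alt_lost_ft   # ≈ 12.7

alt_to_spend_ft = 30_000 - 1_000   # Winnipeg sits at 1,000 ft elevation
range_miles = alt_to_spend_ft / alt_lost_ft * ground_miles

print(round(glide_ratio, 1), round(range_miles, 1))  # 12.7 69.6 — well short of 100
```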
The pilot then remembered an old Royal Canadian Air Force base at Gimli, only 12 miles away. Decision made — head for Gimli.
The Landing — everything goes wrong simultaneously
Flying too fast toward the runway, the pilot needed to slow down. With no engine power, the only options were aerodynamic: tilting the wings using a forward slip maneuver (crossing the ailerons and rudder to create drag while maintaining direction), changing nose angle, and deploying flaps. None of this had ever been done in a 767 in real conditions, nor had it been simulated in training. Pearson executed the maneuver purely on instinct and experience from smaller aircraft.
With no power steering, controlling the flight surfaces required enormous physical strength. This is analogous to a process plant where control valve actuators fail and operators must manually operate valves — it's physically demanding and requires the right knowledge.
The co-pilot tried to drop the landing gear by gravity (the normal hydraulic system was dead). The nose wheel deployed but failed to lock in place — meaning the nose wheel could collapse on landing.
Then the co-pilot looked down and saw two boys on bicycles on the runway. The decommissioned air force base had been converted to a drag racing strip — and it was Family Day. The grandstand at the end of the runway was full of spectators. Children were racing go-karts on the runway. They couldn't hear the silent, powerless jet descending toward them.
The Miraculous Outcome
Pearson lined up with the runway just seconds before touchdown. He stomped on the brakes, blowing out the rear tires deliberately to maximize braking force. The unlocked nose wheel collapsed on touchdown, the nose scraped the ground and hit a guardrail, producing a 150-foot shower of sparks. The plane skidded to a stop just a few feet from the occupied grandstand.
Go-kart parents used fire extinguishers to put out the resulting fire. Emergency slides deployed but were too high off the ground — a few passengers were bruised escaping. No passengers were seriously injured.
Bob Pearson was briefly demoted for his role in the fuel calculation error, but simultaneously won the Fédération Aéronautique Internationale Diploma for Outstanding Airmanship for the landing. These two outcomes together capture the complexity of engineering accountability — he made a catastrophic calculation error AND performed a nearly miraculous feat of airmanship to save everyone aboard.
The Engineering Lessons — what your exam will test
The lecture explicitly says: "Think about the ethical and calculational errors that lead to this incident. Such mistakes often lead to terrible loss of life." This is deliberate — the professor wants you to extract transferable engineering principles.
Unit conversion errors are catastrophic. In chemical engineering you work constantly with unit conversions — from English to metric, between different pressure units, between mass and molar quantities, between volumetric and mass flow rates. A factor of 2.205 error in fuel density caused this disaster. A similar error in a reactor design could cause a runaway. Always carry units through every calculation and verify that they cancel correctly. Never use a conversion factor without knowing what units it applies to.
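One way to make that discipline mechanical is to refuse any conversion whose units are not explicitly matched. A minimal sketch (my own illustration — the table, function, and names are not from the lecture):

```python
# Known conversion factors, keyed by (from_unit, to_unit). Anything not
# listed here is an error, never a silent guess.
CONVERSIONS = {
    ("lb", "kg"): 1 / 2.205,
    ("kg", "lb"): 2.205,
}

def convert(value, from_unit, to_unit):
    """Convert value between units, failing loudly on an unknown pair."""
    if from_unit == to_unit:
        return value
    try:
        return value * CONVERSIONS[(from_unit, to_unit)]
    except KeyError:
        raise ValueError(f"no conversion factor from {from_unit} to {to_unit}")

# The crew's remembered density, with its unit attached:
density_value, density_unit = 1.77, "lb"   # lb per liter

# Being forced to name the target unit surfaces the mistake immediately:
density_kg_per_L = convert(density_value, density_unit, "kg")   # ≈ 0.803 kg/L
onboard_kg = 7682 * density_kg_per_L                            # ≈ 6,167 kg
```

In practice a units-aware library does this bookkeeping for every quantity; the point is that units travel with the number instead of living in the engineer's head.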
Checking arithmetic is not the same as checking the method. The co-pilot verified Pearson's math perfectly — and the math was perfect. The conversion factor was wrong. In design review, you must check assumptions, unit systems, and fundamental approaches — not just the numbers.
Repeating known solution paths without understanding fails when conditions change. The crew knew how to convert fuel volume to mass — but they had always done it in English units. When the unit system changed, they applied the same procedure without recognizing that the conversion factor needed to change. Engineers who understand their methods can adapt. Engineers who follow procedures mechanically cannot.
Write procedures for failure modes, not just normal operations. The 767 manual had no procedure for double engine failure because the designers considered it impossible. In process design, HAZOP studies specifically examine what happens when things that "can't happen" do happen. Every failure mode needs a response.
Define "critical" systems carefully. The decision not to include airspeed and vertical speed indicators on the RAT power system was made by someone who didn't fully think through what information pilots need in a complete power failure. In process plant design, you must carefully analyze which instruments and control systems are genuinely safety-critical and ensure they have backup power and redundancy.
Human factors and social pressure override safety. The decision to fly with broken fuel gauges was driven by passenger pressure. In process plants, production pressure is the equivalent — the pressure to keep running rather than shut down for maintenance. This is how accidents begin.
Likely Exam Questions:
"What was the unit conversion error in the Gimli Glider incident?" — The crew used 1.77 lb/L as if it were kg/L. The correct jet fuel density is approximately 0.803 kg/L. This caused them to calculate they had 2.205 times more fuel than they actually had, loading less than half the fuel required.
"What engineering lesson does the co-pilot's math check illustrate?" — Verifying arithmetic is not sufficient — you must also verify that the method, assumptions, and unit systems are correct. A perfect calculation using a wrong conversion factor produces a perfectly wrong answer.
"What lesson does the absence of a double-engine failure procedure teach?" — Safety procedures must be written for failure modes that engineers consider improbable, not just for normal operations and expected failures. Assuming something can't happen means you have no plan when it does.
"What does 'repeating known solution paths without understanding' mean in engineering practice?" — Applying a procedure mechanically without understanding why it works means you cannot recognize when conditions have changed enough to make the procedure invalid. Engineers must understand their methods deeply enough to recognize when they no longer apply.
"How did the pilots calculate whether they could reach Winnipeg?" — They used altitude loss over a measured distance from air traffic control to calculate their glide ratio, then compared the distance they could glide to the distance to Winnipeg.
"What was the result of not including airspeed and vertical speed indicators on the ram-air turbine power?" — The pilots had no direct measurement of airspeed or descent rate, forcing them to estimate their glide performance indirectly from altimeter readings and ground distance from air traffic control.
"What are three engineering principles illustrated by the Gimli Glider?" — Any three of: unit conversion errors are catastrophic, checking math is not checking method, procedures must cover improbable failure modes, critical system definition must be thorough, social pressure overrides safety judgment, understanding must underpin procedure application.