We constantly make judgments about people, events, and behaviors in our environment.
One of the primary mechanisms involved in making judgments is inductive reasoning, which involves reasoning based on observations, or reaching conclusions from evidence.
Inductive reasoning is the foundation of scientific investigations, where observations are made, data is collected, and conclusions are drawn.
This type of reasoning is sometimes called "bottom-up reasoning" as it moves from specific observations to broader generalizations and theories.
Inductive reasoning is the opposite of deductive reasoning, also known as "top-down reasoning," which goes from the general to the specific.
Both types of reasoning are crucial for science.
Inductive reasoning leads to conclusions that are probable but not definitively true.
For example, concluding that John loves the outdoors because he's wearing a North Face jacket seems logical, but he might have bought it for the style or borrowed it.
Strong inductive arguments result in conclusions that are very likely to be true, while weak arguments result in conclusions that are less likely to be true.
Factors contributing to the strength of an inductive argument:
* Representativeness of observations: how well do the observations represent all members of the category? The argument "all the geese I have seen are white; therefore all geese are white" is weak if it considers only geese from England and the Netherlands.
* Number of observations: more observations strengthen the argument. The geese argument is strengthened by adding observations from the Netherlands and England, and the conclusion that the sun will rise in Amsterdam tomorrow is strong because it rests on a vast number of observations.
* Quality of the evidence: stronger evidence leads to stronger conclusions. The rising-sun conclusion is further strengthened by the scientific description of how Earth's rotation causes sunrise.
We often use inductive reasoning without realizing it.
Sarah anticipates an exam format based on a professor's past exams.
Sam expects good service from a company based on previous positive experiences.
Anytime we make a prediction about what will happen based on our observations about what has happened in the past, we are using inductive reasoning.
We predict and choose based on past experiences, particularly in familiar situations like studying or online shopping.
Inductive reasoning allows us to use past experiences to guide current behavior.
Most of the time reasoning happens automatically without awareness.
We often use shortcuts called heuristics to reach conclusions quickly.
Heuristics are "rules of thumb" that are likely to provide the correct answer, but are not foolproof.
The availability heuristic suggests that events that are more easily remembered are judged as more probable (Tversky & Kahneman, 1973).
When asked whether more English words begin with "r" or have "r" in the third position, people say more words begin with "r" because those words are easier to recall, even though in fact more words have "r" as the third letter.
Films, news, and advertisements can influence perceptions through the availability heuristic.
Seeing many news reports about child abduction might lead you to believe it's more common than it is.
Hearing about a lottery winner might cause you to overestimate your own chances of winning.
A study by Lichtenstein, Slovic, Fischhoff, Layman, and Combs (1978) showed that people misjudge the prevalence of certain causes of death.
People overestimate deaths from publicized events like tornados compared to less publicized causes like asthma.
Errors occur when less frequent events, such as tornados, stand out in memory because of media coverage.
Gunter and Wober (1983) found a relationship between television viewing and perceived risks of hazards like lightning, flooding, and terrorist attacks.
The availability heuristic does not always lead to errors; sometimes, we remember events that do occur more frequently.
Illusory correlations are apparent correlations between events that don't actually exist or are weaker than assumed.
These correlations occur when we expect two things to be related, leading us to believe they are even when they are not.
Expectations may take the form of stereotypes, which are oversimplified generalizations about groups that often focus on the negative.
Stereotypes can lead people to pay special attention to behaviors associated with the stereotype, reinforcing the illusory correlation.
Example: The stereotype that gay males are effeminate might lead someone to focus on instances that confirm this, ignoring those that don't.
Selective attention to stereotypical behaviors makes them more "available" in memory.
The representativeness heuristic involves making judgments based on how much one event resembles another.
The representativeness heuristic states that the probability of A belonging to class B is based on how well A's properties resemble those we associate with class B.
Example: given a description of Ella as compassionate and interested in alternative medicine, most people identify her as a holistic healer. In reality, it is far more likely that Ella is a school teacher, because there are far more school teachers than holistic healers in Europe.
When making occupation judgments, people rely on stereotypes and ignore base rates (the relative proportion of different classes in the population).
Tversky and Kahneman addressed this by providing participants with base rate information: in a group of 100 people, there are 70 lawyers and 30 engineers. What is the chance that a person picked at random from the group will be an engineer?
Participants correctly guessed that there would be a 30 per cent chance of picking an engineer when given this problem.
Tversky and Kahneman then added the following description:
* Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful and ambitious. He shows no interest in political and social issues and spends most of his free time on his many hobbies, which include home carpentry, sailing and mathematical puzzles.
This description caused participants to greatly increase their estimate of the chances that the randomly picked person (Jack, in this case) was an engineer.
When only base rate information is available, people use it to make their estimates. But when descriptive information is also available, people tend to disregard the base rates, which can lead to errors. Descriptive information can increase judgment accuracy when it is genuinely relevant, for example, learning that Jack's work involves determining the structural characteristics of bridges.
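A minimal sketch of how base rates and descriptive evidence should combine, using Bayes' rule. The likelihoods here are hypothetical numbers chosen for illustration; only the 70/30 base rate comes from the problem above.

```python
# Base rate from the problem: 30 engineers, 70 lawyers in a group of 100.
p_engineer = 0.30
p_lawyer = 0.70

# Assumed (hypothetical) likelihoods: suppose the Jack description is
# four times as likely to fit an engineer as a lawyer.
p_desc_given_engineer = 0.8
p_desc_given_lawyer = 0.2

# Bayes' rule: P(engineer | description)
posterior = (p_desc_given_engineer * p_engineer) / (
    p_desc_given_engineer * p_engineer + p_desc_given_lawyer * p_lawyer
)
print(round(posterior, 3))  # about 0.632
```

Even with strongly diagnostic evidence, the correct posterior stays well below certainty; jumping straight from the description to "engineer" while ignoring the 30 per cent base rate overstates the evidence.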
The following demonstration illustrates another characteristic of the representativeness heuristic.
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which statement is more likely?
1. Linda is a bank clerk.
2. Linda is a bank clerk and is active in the feminist movement.
The correct answer is that Statement 1 has the greater probability of being true: Statement 2 is a conjunction of two events, and a conjunction can never be more probable than either of its constituents. Yet when Tversky and Kahneman (1983) posed this problem to their participants, 85 per cent picked Statement 2. It is easy to see why they did this: they were influenced by the representativeness heuristic, because the description of Linda fits people's idea of a typical feminist. This error is known as the conjunction fallacy.
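The arithmetic behind the correct answer can be sketched with made-up probabilities: however the numbers are chosen, the probability of the conjunction "bank clerk and feminist" can never exceed the probability that Linda is a bank clerk alone.

```python
# Made-up probabilities, for illustration only.
p_clerk = 0.05                 # P(Linda is a bank clerk), assumed
p_feminist_given_clerk = 0.30  # P(feminist | bank clerk), assumed

# P(bank clerk AND feminist) = P(clerk) * P(feminist | clerk)
p_clerk_and_feminist = p_clerk * p_feminist_given_clerk

# The conjunction is necessarily no more probable than its constituent.
assert p_clerk_and_feminist <= p_clerk
print(p_clerk, p_clerk_and_feminist)
```

Because the conjunction probability is the clerk probability multiplied by a factor of at most 1, Statement 2 can tie Statement 1 at best, and is otherwise strictly less likely.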
Incorrectly assuming that small samples are representative

People also make errors in reasoning by ignoring the importance of the size of the sample on which observations are based.
Consider this problem: a large hospital and a small hospital each record the days on which more than 60% of the babies born are boys. Which hospital records more such days?
Many people answer that there would be no difference, presumably assuming that the birthrate for males and females in both hospitals would be representative of the overall birthrate for males and females.
The correct answer is the smaller hospital due to the law of large numbers.
The law of large numbers states that larger random samples are more representative of the population.
Small samples are less representative.
In the hospital example, percentages in the large hospital will be closer to 50% than in the small hospital.
People often assume representativeness holds for small samples, leading to reasoning errors.
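The hospital problem can be checked with a small simulation. The hospital sizes (15 and 45 births per day) are assumed figures in the spirit of the original problem, with each birth independently a boy with probability 0.5.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def days_over_60_percent_boys(births_per_day, n_days=365):
    """Count days on which more than 60% of births are boys,
    assuming each birth is independently a boy with probability 0.5."""
    count = 0
    for _ in range(n_days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            count += 1
    return count

small = days_over_60_percent_boys(15)   # hypothetical small hospital
large = days_over_60_percent_boys(45)   # hypothetical large hospital
print(small, large)  # the small hospital's count is much higher
```

With only 15 births, a short run of boys easily pushes the daily percentage past 60; with 45 births, the percentage stays much closer to 50, exactly as the law of large numbers predicts.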
Reasoning errors occur not only because of heuristics and ignored evidence; they are also influenced by our knowledge, attitudes, and preconceptions.
Lord, Ross, and Lepper (1979) showed how attitudes are affected by contradictory evidence.
Participants for and against the death penalty were presented with pro and con research studies.
Participants rated the studies that supported the views they held at the beginning of the experiment as more convincing, and exposure to the mixed evidence strengthened, rather than moderated, their original attitudes.