We constantly make judgments about people, events, and behaviors in our environment.
Definition of inductive reasoning: Reasoning based on observations, reaching conclusions from evidence.
Nature:
Basis of scientific investigations (observations, data collection, conclusions).
Also known as "bottom-up reasoning" (specific observations to broader generalizations and theories).
Inductive reasoning goes in the opposite direction of deductive reasoning.
Deductive reasoning is known as "top-down reasoning" (general to specific).
Both types are relevant for science.
Deductive reasoning is explained in more detail later in the text.
Conclusions are probably, but not definitely, true.
Example: Concluding John loves the outdoors because he wears a North Face jacket.
Conclusions are suggested with varying degrees of certainty but do not definitely follow from observations.
Example 1:
Observation: All geese seen in England are white. All geese seen in the Netherlands are white.
Conclusion: All geese are white.
Example 2:
Observation: The sun has risen every morning in Amsterdam.
Conclusion: The sun will rise in Amsterdam tomorrow.
The second argument is more convincing than the first.
Inductive arguments lead to what is probably true, not definitely true.
Strong inductive arguments: Conclusions are most likely to be true.
Weak inductive arguments: Conclusions are not as likely to be true.
Representativeness of observations: How well do observations represent all members of the category?
The geese example suffers from a lack of representativeness because it does not consider geese from other parts of Europe or the world.
Number of observations: Adding observations strengthens the argument.
Adding observations from the Netherlands strengthens the geese argument. However, further research revealed that geese can also be brown (in Canada) or grey (in Australia), so the conclusion "All geese are white" is not true.
The conclusion that the sun will rise again in Amsterdam tomorrow is extremely strong because it is supported by a very large number of observations (every day so far!).
Quality of the evidence: Stronger evidence results in stronger conclusions.
The conclusion "The sun will rise in Amsterdam" is even stronger when considering scientific descriptions of how the Earth rotates on its axis and revolves around the sun.
Thus, adding the observation "Scientific measurements of the rotation of the Earth indicate that every time the Earth rotates the sun will rise" strengthens the conclusion even further.
Inductive reasoning is often used without our realizing it.
Example: Sarah concludes that Professor X's exam will include questions about experiments based on a previous course.
Example: Sam places another order from an internet company based on previous good service.
Anytime we make a prediction based on past observations, we are using inductive reasoning.
It makes sense to make predictions and choices based on past experience.
We make so many assumptions about the world based on past experience that we are using inductive reasoning constantly, often without even realizing it.
Example: Assuming a chair will not collapse when you sit on it based on past experience with chairs.
It would be time-consuming to approach every experience as if we were having it for the first time.
Inductive reasoning provides the mechanism for using past experience to guide present behavior.
When people use past experience to guide present behaviour, they often use shortcuts to help them reach conclusions rapidly.
We don't have the time or energy to gather every bit of information to be 100% certain.
Definition: "Rules of thumb" that are likely to provide the correct answer but are not foolproof.
Availability heuristic: Events that are more easily remembered are judged as being more probable.
Example: People think more words begin with the letter r than have r as the third letter because words starting with r are easier to bring to mind.
Films, news, and advertisements may exert their influence through the availability heuristic.
Example: Seeing television programs on child abduction may lead you to think such events are relatively common, causing you to keep your children at home.
Example: Hearing that someone in your street won the lottery, you start overestimating your own chances of winning the jackpot, prompting you to buy some extra lottery tickets.
Participants judged the relative prevalence of various causes of death in the US.
Large numbers of errors were associated with causes publicized by the media.
Example: Many thought more deaths were caused by tornados than by asthma, when in reality asthma is more common.
The availability heuristic can mislead us into reaching the wrong conclusion when less frequently occurring events stand out, like a tornado.
A British survey study found a direct relationship between the amount of television viewing and perceived chances of facing a number of hazards in life.
The availability heuristic doesn't always lead to errors.
There are many situations where we remember events that actually do occur more frequently.
Illusory correlations occur when a correlation between two events appears to exist, but in reality, there is no correlation or it is much weaker than it is assumed to be.
Illusory correlations can occur when we expect two things to be related, so we fool ourselves into thinking they are related even when in reality they are not.
These expectations may take the form of a stereotype — an oversimplified generalization about a group or class of people that often focuses on the negative.
Stereotypes may lead people to pay special attention to behaviors associated with that stereotype.
This selective attention creates an illusory correlation that reinforces the stereotype.
The phenomenon is related to the availability heuristic because selective attention to stereotypes makes these behaviors more "available."
Example: Stereotype that all gay males are effeminate. People may pay particular attention to gay characters on TV or in movies.
Selectively taking into account only the situations that support preconceptions creates the illusion that a correlation exists when there may be only a weak correlation or none at all.
Representativeness heuristic: Related to the idea that people often make judgments based on how much one event resembles another event.
The representativeness heuristic states that the probability that A is a member of class B can be determined by how well the properties of A resemble the properties we usually associate with class B.
Ella is described as compassionate, caring, and interested in spirituality. Most people identify her as a holistic healer.
In reality, Ella is more likely to be a school teacher, based on the fact that in Europe (and probably also in the rest of the world) there are far more school teachers than holistic healers.
When making occupation judgments, people often rely too much on their stereotypic ideas, while ignoring another important source of information: the base rates, or the relative proportion of different classes in the population.
Tversky and Kahneman dealt with this problem by providing their participants with base rate information.
In a group of 100 people, there are 70 lawyers and 30 engineers. What is the chance that if we pick one person from the group at random that the person will be an engineer?
Participants correctly guessed that there would be a 30 percent chance of picking an engineer.
Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful and ambitious. He shows no interest in political and social issues and spends most of his free time on his many hobbies, which include home carpentry, sailing and mathematical puzzles.
Adding this description caused participants to greatly increase their estimate of the chances that the randomly picked person (Jack, in this case) was an engineer.
When only base rate information is available, people use that information to make their estimates.
When any descriptive information is available, people disregard the base rate information, and this can potentially cause errors in reasoning.
The right kind of descriptive information can increase the accuracy of a judgment.
If the description of Jack also noted that his last job involved determining the structural characteristics of a bridge, then this would greatly increase the chances that he was, in fact, an engineer.
When relevant information is available, applying the representativeness heuristic can thus lead to correct judgments.
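A minimal sketch (with made-up likelihoods, not figures from Tversky and Kahneman's study) of how the 30 percent base rate and a description like Jack's can be combined normatively using Bayes' rule:

```python
# Hypothetical illustration: combining a base rate with a description via Bayes' rule.
p_engineer = 0.30              # base rate given in the problem
p_lawyer = 0.70
p_desc_given_engineer = 0.60   # assumed: the description fits many engineers
p_desc_given_lawyer = 0.20     # assumed: the description fits fewer lawyers

posterior = (p_desc_given_engineer * p_engineer) / (
    p_desc_given_engineer * p_engineer + p_desc_given_lawyer * p_lawyer
)
print(round(posterior, 2))  # ~0.56: the description raises the estimate,
                            # but the 30% base rate still constrains it
```

The point of the sketch is that the base rate should still enter the judgment, whereas participants tended to discard it once any description was supplied.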
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which of the following statements is more likely?
1. Linda is a bank clerk. 2. Linda is a bank clerk and is active in the feminist movement.
Statement 1 has a greater probability of being true, but most people pick Statement 2.
They are influenced by the representativeness heuristic, because the description of Linda fits people's idea of a typical feminist.
They violate the conjunction rule, which states that the probability of a conjunction of two events (A and B) cannot be higher than the probability of the single constituents (A alone or B alone).
Because there are more bank clerks (A) than feminist bank clerks (A and B), stating that Linda is a bank clerk includes the possibility that she is a feminist bank clerk.
Similarly, the probability that Anne has a blue Audi cannot be greater than the probability that she has an Audi because the two constituents together (Audi and blue) define a smaller number of cars than one constituent (Audi) alone.
People tend to violate the conjunction rule even when it is clear that they understand it.
The culprit is the representativeness heuristic.
In the example, participants saw Linda's characteristics as more representative of "feminist bank clerk" than "bank clerk."
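A minimal numerical sketch of the conjunction rule; the probabilities below are made up purely for illustration:

```python
# Hypothetical illustration: a conjunction can never be more probable than its constituents.
p_bank_clerk = 0.02                  # assumed probability that Linda is a bank clerk
p_feminist_given_clerk = 0.10        # assumed probability she is a feminist, given that
p_feminist_bank_clerk = p_bank_clerk * p_feminist_given_clerk

print(p_feminist_bank_clerk)                   # 0.002
print(p_feminist_bank_clerk <= p_bank_clerk)   # True, whatever values are chosen
```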
A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50%, sometimes lower. For a period of 1 year, each hospital recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?
1. The larger hospital. 2. The smaller hospital. 3. About the same (that is, about the same number of days).
People make errors in reasoning by ignoring the importance of the size of the sample on which observations are based.
Most people state that there would be no difference between the two hospitals (i.e., they pick "about the same").
The correct answer is that there would be more days with over 60% male births in the small hospital.
This result can be explained by considering a statistical rule called the law of large numbers, which states that the larger the number of individuals that are randomly drawn from a population, the more representative the resulting group will be of the entire population.
Conversely, samples of small numbers of individuals will be less representative of the population.
Thus, in the hospital problem it is more likely that the percentage of boys or girls born on any given day will be near 50% in the large hospital and farther from 50% in the small hospital.
People often assume that representativeness holds for small samples, and this results in errors in reasoning.
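A minimal simulation sketch of the law of large numbers behind the hospital answer (the daily birth counts come from the problem; the simulation itself is only illustrative):

```python
# Hypothetical illustration: smaller samples stray further from the population proportion.
import random

def days_over_60_percent(births_per_day, n_days=365, p_boy=0.5):
    count = 0
    for _ in range(n_days):
        boys = sum(random.random() < p_boy for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            count += 1
    return count

random.seed(1)
print("Small hospital (15 births/day):", days_over_60_percent(15))
print("Large hospital (45 births/day):", days_over_60_percent(45))
# The small hospital typically records noticeably more such days.
```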
So far, we've seen that the reasoning involved in making judgments may sometimes be in error because we are seduced by various heuristics that lead us to ignore some evidence and reach faulty conclusions.
Another influence on judgment is the knowledge, attitudes and preconceptions that people bring to the situation.
One question is whether people's attitudes are affected by exposure to evidence that contradicts those attitudes.
Participants in favor of the death penalty and those against it were presented with research studies on the deterrent effects of the death penalty.
Their responses reflected their attitudes at the beginning of the experiment.
An article presenting evidence that supported the deterrent effect of the death penalty was rated as "convincing" by proponents of the death penalty and "unconvincing" by those against it.
People's prior beliefs may have caused them to focus on information that agreed with their beliefs and to disregard information that didn't.
The tendency for people to generate and evaluate evidence and test their hypotheses in a way that is biased toward their own opinions and attitudes is called the myside bias.
Myside bias has been demonstrated in many empirical studies.
Its magnitude does not seem to be related to intelligence and cognitive ability.
Myside bias remained present even when participants' scientific reasoning skills were primed.
The myside bias is a type of confirmation bias.
A confirmation bias is broader than the myside bias because it holds for any situation (not just for opinions or attitudes) in which information is favored that confirms a hypothesis.
You will be given three numbers which conform to a simple rule that I have in mind…. Your aim is to discover this rule by writing down sets of three numbers together with your reasons for your choice of them. After you have written down each set, I shall tell you whether your numbers conform to the rule or not. When you feel highly confident that you have discovered the rule, you are to write it down and tell me what it is.
After presenting the numbers 2, 4, and 6, participants created their own sets of three numbers and received feedback.
The most common initial hypothesis was "increasing intervals of two."
Participants tested this hypothesis by creating sequences like 4, 6, 8 or 22, 24, 26.
Wason's rule was "three numbers in increasing order of magnitude."
The mistake that participants made was that they only sought to confirm their hypothesis.
The secret to determining the correct rule is to try to create sequences that don't satisfy the person's current hypothesis, but do satisfy Wason's rule.
Determining that the sequence 2, 4, 5 is correct allows one to reject the initial "increasing intervals of two" hypothesis and formulate a new one.
Participants whose rule was correct on their first guess followed the strategy of testing a number of hypotheses themselves before giving their answer, by creating sequences that were designed to disconfirm their current hypothesis.
Participants who didn't guess the rule correctly on their first try tended to keep creating sequences that confirmed their current hypothesis.
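A minimal sketch (illustrative code, not Wason's procedure) of why a disconfirming test is more informative than a confirming one:

```python
# Hypothetical illustration of confirming vs. disconfirming tests in the 2-4-6 task.
def wasons_rule(seq):        # the experimenter's actual rule
    return seq[0] < seq[1] < seq[2]

def my_hypothesis(seq):      # the common initial hypothesis: increasing intervals of two
    return seq[1] - seq[0] == 2 and seq[2] - seq[1] == 2

confirming = (4, 6, 8)       # fits the hypothesis; a "yes" here cannot separate the two rules
disconfirming = (2, 4, 5)    # violates the hypothesis; a "yes" here shows the hypothesis is wrong

print(wasons_rule(confirming), my_hypothesis(confirming))        # True True
print(wasons_rule(disconfirming), my_hypothesis(disconfirming))  # True False
```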
The confirmation bias acts like a pair of blinkers: we see the world according to rules we think are correct and are never dissuaded from this view, because we seek out only evidence that confirms our rule.
Many internet programs and search engines filter information or make viewing suggestions on the basis of "what we might like," judging from what we have selected or viewed in the past. As a result, we become more likely to see and hear the same things and opinions over and over again, creating a rather one-sided or biased view of the world (the so-called filter bubble).
Judgments are often accurate and useful, but research has focused on situations in which errors occur.
Studying situations in which things go wrong turns out to be a useful way to learn about how the mind operates in general.
Recap of inductive reasoning: reasoning based on observations; conclusions can be probably, but not definitely, true.
In deductive reasoning, we determine whether a conclusion logically follows from statements called premises.
Sherlock Holmes Example:
Investigates the disappearance of a racehorse and the murder of the horse's trainer.
Holmes started his investigation with the general premise that "Dogs bark at strangers," and noticed that "Nobody had heard the watchdogs bark on the night of the murder."
Combining these premises, Holmes came to the conclusion that the dogs must have known the murderer.
In syllogistic form: the general premise "Dogs bark at strangers," the more specific premise "The dogs did not bark at whoever entered the stables," and the conclusion that logically follows: "Whoever entered the stables was not a stranger."
(Some people argue that Sherlock Holmes also uses a lot of inductive reasoning, taking observational cues as his starting point. The truth probably lies in the middle in that he uses a mixture of both.)
The father of deductive reasoning is Aristotle, who introduced the basic form of deductive reasoning called the syllogism.
A syllogism consists of two premises followed by a third statement called the conclusion.
We will first consider categorical syllogisms, in which the premises and conclusion are statements that begin with All, No, or Some.
Syllogism 1
Premise 1: All birds are animals. (All A are B)
Premise 2: All animals eat food. (All B are C)
Conclusion: Therefore, all birds eat food. (All A are C)
The A, B, C format is a useful way to compare the forms of different syllogisms.
The answer (to whether the conclusion follows from the two premises) involves considering the difference between validity and truth in syllogisms.
The word valid is often used in everyday conversation to mean that something is true or might be true.
A syllogism is valid when the form of the syllogism indicates that its conclusion follows logically from its two premises.
Validity is about whether the conclusion logically follows from the premises.
If it does, and the premises are true, as in Syllogism 1, then the conclusion will be true as well.
But if one or both of the premises are not true, the conclusion may not be true, even though the syllogism's reasoning is valid.
Syllogism 2
All birds are animals. (All A are B)
All animals have four legs. (All B are C)
All birds have four legs. (All A are C)
From the A, B, C notation we can see that this syllogism has the same form as Syllogism 1.
Because a syllogism's form is what determines its validity, and we saw that Syllogism 1 is valid, the conclusion of Syllogism 2 also follows from its premises, so Syllogism 2 is valid as well.
The difference between validity and truth can make it difficult to judge whether reasoning is "logical" or not.
Syllogism 3
All of the students are tired. (All A are B)
Some tired people are irritable. (Some B are C)
Some of the students are irritable. (Some A are C)
Is the reasoning behind this syllogism valid? Stop and consider this question before reading further.
The answer is that this syllogism is not valid — the conclusion does not follow from the two premises.
Syllogism 4
All of the students live in London. (All A are B)
Some people who live in London are millionaires. (Some B are C)
Some of the students are millionaires. (Some A are C)
One reason that people think Syllogism 3 is valid can be traced to the belief bias: the tendency to think a syllogism is valid if its conclusion is believable.
The mental model approach is one way of approaching deductive reasoning.
A mental model is a specific situation represented in a person's mind that can be used to help determine the validity of syllogisms in deductive reasoning.
According to this idea, people first create a model, or representation of the situation, when they encounter a reasoning problem.
Next, they generate a tentative conclusion based on this model and then look for exceptions that might falsify the model.
If they do find an exception, they modify the model. Eventually, if they can find no more exceptions and their current model matches the conclusion, they can decide that the syllogism is valid.
Syllogism 5
None of the artists are beekeepers. (No A are B)
All of the beekeepers are chemists. (All B are C)
Some of the chemists are not artists. (Some C are not A)
This example illustrates the basic principle behind the mental model theory: A conclusion is valid only if it cannot be refuted by any model of the premises.
The mental model theory is attractive because it can be applied without training in the rules of logic and because it makes less complicated predictions that can be tested more easily.
Conditional syllogisms have two premises and a conclusion like categorical syllogisms, but the first premise has the form "If … then."
This kind of deductive reasoning is common in everyday life.
Example:
If I lend Steve 20 Euros, then I won't get it back.
I lent Steve 20 Euros.
Therefore, I won't get my 20 Euros back.
The four major types of conditional syllogisms are listed in Table 13.3 in abstract form (using p and q).
Conditional syllogisms typically use the notations p and q instead of the A and B used in categorical syllogisms.
A. If I study, I'll get a good grade. I studied. Therefore, I'll get a good grade.
This form of syllogism, called modus ponens (Latin for, roughly translated, "the way that affirms by affirming"), is valid: the conclusion follows logically from the two premises.
People are often better at judging the validity of syllogisms when real-world examples are used instead of abstract symbols.
The Wason four-card problem: decide which card(s) need to be turned over to determine whether the following rule is being followed:
If there is a vowel on one side, then there is an even number on the other side.
Four cards are showing: E, K, 4, 7.
Participants indicated that the E must be turned over.
To test a rule, it is necessary to look for situations that would falsify the rule.
Only four percent of participants came up with the correct answer to the Wason four-card problem: that the second card that needs to be turned over is the 7.
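A minimal sketch of the falsification logic behind that answer (the helper function below is illustrative, not part of the original task):

```python
# Hypothetical illustration: a card matters only if its hidden side could break the rule
# "if there is a vowel on one side, then there is an even number on the other side".
cards = ["E", "K", "4", "7"]   # visible faces

def could_falsify(face):
    if face.isalpha():
        # Only a vowel card can falsify the rule (if it hides an odd number).
        return face in "AEIOU"
    # Only an odd-number card can falsify the rule (if it hides a vowel).
    return int(face) % 2 == 1

print([c for c in cards if could_falsify(c)])   # ['E', '7']
```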
Four cards are shown. Each card has an age on one side and the name of a beverage on the other side. Imagine you are a police officer who is applying the rule "If a person is drinking beer, then he or she must be over 19 years old."
Which of the cards in figure 13.8 must be turned over to determine whether the rule is being followed?
The right answers are "Beer" and "16 years old."
The beer/drinking-age version of Wason's problem is identical to the abstract version except that concrete everyday terms (beer and soda; younger and older ages) are substituted for the letters and numbers.
For this version of the problem, 73 per cent of their participants provided the correct response.
In contrast, none of their participants answered the abstract task correctly.
According to Griggs and Cox, the beer/drinking-age version of the task is easier because it involves regulations people are familiar with (i.e., the regulations are part of their knowledge structure).
Cheng and Holyoak's explanation was based on the idea that people think in terms of pragmatic schemas: knowledge about rules that govern their thoughts and actions.
The permission schema "If you are over 19, then you get to drink beer" is something that most of the participants in this experiment had learned, so they were able to apply that schema to the card task.
In the abstract task, the goal is to indicate whether an abstract statement about letters and numbers is true.
But in the beer/drinking-age task, the goal is to be sure that a person has permission to drink alcohol.
Activating the permission schema helps people focus attention on the card that would test that schema.
The cholera task: "If the form says 'Entering' on one side, then the other side includes 'cholera' among the list of diseases." Which of the following forms would you have to turn over to check to be sure?
Correct cards: "Entering" and "Typhoid, Hepatitis."
Instead of checking just to see whether the correct diseases are listed on the form, the immigration officer is checking to see whether the travellers have the inoculations necessary to give them permission to enter the country.
Results showed that this actually happened, because 91 per cent of the participants in this condition picked the correct cards.
The evolutionary perspective holds that adaptive characteristics will, over time, become basic characteristics of the organism.
An important aspect of human behaviour is the ability of two people to cooperate in a way that is beneficial to both.
It is essential, therefore, that people be able to detect cheating behaviour so they can avoid it.
According to the evolutionary approach, people who can do this well will have a better chance of surviving, so "detecting cheating" has become a part of the brain's cognitive makeup.
The evolutionary approach proposes that the Wason four-card problem can be best understood in terms of cheating detection.
People do well in the cholera task because they can detect someone who cheats by entering the country without a cholera shot.
Cosmides and Tooby (1992) designed scenarios in unfamiliar cultures to test if cheating, rather than learned permission schemas, is the key variable.
In a hypothetical culture called the Kulwane, participants were told: "If a man eats cassava root, then he must have a tattoo on his face."
Participants had to determine which cards to turn over: (1) Eats cassava roots, (2) Eats molo nuts, (3) Tattoo, and (4) No tattoo.
High performance was observed even with unfamiliar rules, suggesting cheating detection plays a significant role.
Other experiments showed better performance on statements involving cheating compared to those that didn't.
Manktelow and Over (1990) tested people with no medical background using the rule: "If you clean up spilt blood, you must wear gloves."
This unfamiliar permission statement improved performance compared to the abstract version of the Wason task.
The context within which conditional reasoning occurs is critical.
Familiar situations can improve reasoning, but familiarity is not always necessary or sufficient.
Controversies in this area highlight the complexity of the human mind.
Decision making involves choices between different courses of action.
Decisions range from unimportant (e.g., what to wear) to impactful (e.g., career choices).
The focus is on how people make choices, whether personal or professional.
Decisions involve both benefits and costs.
Expected utility theory assumes people are rational and aim to maximize utility.
Utility refers to outcomes that achieve a person's goals.
The theory is rooted in economics, where utility is linked to monetary value.
Good decision making maximizes monetary payoff.
Example: Deciding whether to drive or take the train based on traffic reports.
Expected Utility Calculation: The expected utility (EU) of an option is calculated by multiplying the value of each possible outcome by its probability and summing across outcomes. The option with the highest EU is the one the theory predicts the decision maker will go for.
EU = Σ (value of outcome × probability of outcome)
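A minimal worked sketch of the calculation for the drive-versus-train example above (the values and probabilities are made-up assumptions):

```python
# Hypothetical illustration: expected utility = sum of (value x probability) over outcomes.
def expected_utility(outcomes):
    return sum(value * prob for value, prob in outcomes)

drive = [(100, 0.7),   # assumed value of arriving on time despite traffic
         (-50, 0.3)]   # assumed cost of getting stuck in a jam
train = [(80, 0.95),   # assumed value of arriving on time by train
         (-20, 0.05)]  # assumed cost of a delay

print("EU(drive):", expected_utility(drive))   # 55.0
print("EU(train):", expected_utility(train))   # 75.0
# Expected utility theory predicts choosing the option with the higher EU (here, the train).
```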
People's decisions often don't maximize the probability of a good outcome.
Denes-Raj and Epstein (1994) found people chose a bowl with a smaller proportion of red jellybeans but more red beans overall.
Participants felt they had a better chance with more red beans, even knowing the probabilities were against them.
Contestants in "Deal or No Deal" are influenced by emotions and previous events, not just probabilities.
In "Deal or No Deal," contestants choose between a guaranteed amount from the bank or continuing the game.
Post et al. (2008) found that contestants' choices depend on what has happened leading up to their decision.
Contestants are more cautious if things are going well and more risky when doing poorly to avoid feeling like a loser.
Decisions are swayed by emotions triggered by preceding events.
People with damage to the prefrontal lobe show impaired decision making despite preserved intellectual abilities.
Flattened emotions may explain their difficulty in evaluating emotional outcomes.
Damasio's somatic marker hypothesis suggests emotion-related signals bias choices.
The ventromedial and orbitofrontal regions trigger somatic markers based on memories and knowledge.
These markers are indexed by changes in heart rate, blood pressure, gut motility, etc., related to a previously experienced consequence of a particular choice.
Somatic states allow emotional anticipation of outcomes.
Anxious people tend to avoid decisions with large negative consequences (risk avoidance).
Optimistic people may ignore negative information, leading to poor decisions.
Expected emotions (predicted feelings for a particular outcome) influence decision making.
Expected emotions determine risk aversion (avoiding risks).
Risk aversion increases when people believe a loss will have a greater impact than an equivalent gain.
Prospect theory (Kahneman and Tversky, 1979) suggests choices are based on values assigned to gains and losses.
Losing 100 euros feels worse than winning 100 euros, leading people to decline a 50-50 bet even if the potential win is larger.
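A schematic sketch of a prospect-theory-style value function; the curvature and loss-aversion parameters are illustrative assumptions, not values taken from Kahneman and Tversky (1979):

```python
# Hypothetical illustration: losses loom larger than equivalent gains.
def value(x, alpha=0.88, lam=2.25):   # alpha < 1: diminishing sensitivity; lam > 1: loss aversion
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

print(value(100))    # subjective value of winning 100 (about 57.5)
print(value(-100))   # subjective value of losing 100 (about -129.5, larger in magnitude)
# Because the loss weighs more than the gain, a 50-50 bet to win or lose 100
# has negative overall subjective value, so many people decline it.
```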
Kermer et al. found people overestimate the negative effect of losing.
People don't account for coping mechanisms when predicting emotional reactions.
Inability to predict emotional outcomes accurately leads to inefficient decision making.
Incidental emotions are unrelated to the decision itself (e.g., general mood, events earlier in the day, environment).
These emotions can still affect decisions.
Simonsohn (2007) found university admissions reviewers weighted academic attributes more on cloudy days and non-academic attributes on sunny days.
Prospective students were more likely to enroll at a university visited on a cloudy day (Simonsohn, 2009).
People experience different emotions in different weather conditions.
Happy feelings take away attention from academic achievement, while sadness has the opposite effect.
The sadder-but-wiser hypothesis suggests sadness is associated with careful decision making.
Sadness reduces biases from heuristics, reputation, and stereotypes.
Lerner, Small, and Loewenstein (2004) found sadness and disgust affected buying and selling prices.
Disgust is associated with a need to expel things, and sadness with a need for change.
Sad participants were willing to pay more for a set of pens, reflecting a need for reward replacement.
Increasing choice alternatives can influence decisions.
Redelmeier & Shafir (1995) found that physicians were less likely to prescribe arthritis medication when given two medication options compared to one.
Faced with a more difficult decision, people sometimes make no decision at all.
Consumer research shows the too-much-choice effect or choice overload.
Iyengar and Lepper (2000) found that more choice leads to less purchasing and less satisfaction.
More alternatives mean more have to be turned down, increasing search costs, time, and uncertainty.
Context affects medical decision making.
Shen, Rabinowitz, Geist, & Shafir (2010) found that physicians' decisions to perform a caesarean section were influenced by preceding cases.
More physicians recommended caesarean sections after reviewing serious cases compared to routine cases.
Opt-in vs. opt-out procedures affect organ donation rates.
Opt-in (informed consent): active registration required.
Opt-out (presumed consent): everyone is a donor unless they opt out.
Organ donations are higher in opt-out countries.
Celebrity endorsement and coordinators also influence organ donation rates.
The status quo bias is the tendency to do nothing when faced with a decision.
People stick with their current providers even when better options exist.
Slovic, Monahan, and MacGregor (2000) showed that the way information is presented affects decisions.
Statements framed as frequencies (e.g., 20 out of 100) led to different choices than statements framed as probabilities (e.g., 20 per cent chance).
Tversky and Kahneman (1981) found that choices are influenced by how they are stated or framed.
When choices are framed in terms of gains, people use a risk aversion strategy.
When choices are framed in terms of losses, people use a risk-taking strategy.
Framing highlights some features and de-emphasizes others.
Neuroeconomics combines psychology, neuroscience, and economics to study brain activation during decisions.
This research identifies brain areas associated with affective experiences.
Sanfey et al. (2003) used the ultimatum game to measure brain activity during decision making.
The ultimatum game involves a proposer and a responder splitting money.
Responders reject unfair offers, even though it means getting nothing.
Emotions, not just rational calculations, drive decisions in the ultimatum game.
The right anterior insula is activated more strongly when responders reject an offer, indicating negative emotional states.
The prefrontal cortex (PFC) is activated during the decision task, but activation is the same for accepted and rejected offers.
The PFC handles the cognitive demands of the task, such as accumulating money.
The anterior insula handles the emotional goal of resenting unfairness.