PHIL2125 - Rationality and Social Cooperation


199 Terms

1
New cards

Decision Theory

The study of instrumental rationality, or means-end rationality. Takes ends/preferences as given and studies how best to achieve those preferences.

2
New cards

Normative Decision Theory

Concerned with “oughts”

3
New cards

Descriptive Decision Theory

Describes how actors actually behave and make decisions

4
New cards

Game theory

The study of individuals making decisions when the outcomes of their actions depend on what others do as well.

5
New cards

Decision tables (aka decision matrices)

• Acts correspond to the rows
• States correspond to the columns
• The outcome of an action, given a state of the world, gets written in the box that’s in both the action-row and the state-column

6
New cards

In a decision under risk,

you can rationally assign probabilities to each state of the world.

7
New cards

In a decision under uncertainty (or ignorance),

you lack enough information to even assign probabilities to the different relevant states of the world.

8
New cards

Asymmetry conditions: O1

If xPy, then not yPx

9
New cards

Asymmetry conditions: O2

If xPy, then not xIy

10
New cards

Asymmetry conditions: O3

If xIy, then not xPy and not yPx

11
New cards

Connectivity condition: O4

xPy or yPx or xIy

12
New cards

Transitivity conditions: O5

If xPy and yPz, then xPz

13
New cards

Transitivity conditions: O6

If xPy and xIz, then zPy

14
New cards

Transitivity conditions: O7

If xPy and yIz, then xPz

15
New cards

Transitivity conditions: O8

If xIy and yIz, then xIz

16
New cards

Ordering Axioms

Asymmetry Conditions, Connectivity Condition, Transitivity Conditions

17
New cards

If your preferences satisfy the Ordering Axioms

then the relevant outcomes can be divided into indifference classes, such that you’re indifferent between any two members of a given indifference class.

18
New cards

Relation R is an equivalence relation

iff R is reflexive (xRx), symmetric (if xRy, then yRx), and transitive (if xRy and yRz, then xRz).

19
New cards

Connectivity is incompatible with incommensurability,

the thought that neither of two outcomes may be better than the other, but nor are they equally good.

20
New cards

Transitivity can also be questioned

This is the bees/electric chair example

21
New cards

Ordinal Utility Functions

a mapping from the set of outcomes to some numerical scale large enough for the purpose at hand, such that the preferred outcomes are assigned larger numbers.

22
New cards

Ordinal Utility Functions recording

u(x) > u(y) iff xPy and u(x) = u(y) iff xIy

23
New cards

Ordinal Transformation.

One that preserves order of underlying preferences. u* is an ordinal transformation of u iff, for all outcomes x and y, u(x) ≥ u(y) iff u*(x) ≥ u*(y)

24
New cards

Decision Rules for Decisions under Ignorance

Dominance, Maximin, MiniMax Regret,

25
New cards

Dominance Decision Rule

Select the Act that dominates (weakly or strongly) all other acts

26
New cards

Maximin decision rule

Find the worst possible outcome for each act, and choose the act whose worst possible outcome is best. Maximax is the reverse: choose the act whose best possible outcome is best.
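
Both rules can be sketched directly from a decision table. This is a minimal illustration; the acts, states, and utilities are invented, not from the course:

```python
# Hypothetical decision table: rows are acts, columns are states.
table = {
    "umbrella":    {"rain": 5, "sun": 3},
    "no_umbrella": {"rain": 0, "sun": 10},
}

# Maximin: choose the act whose WORST outcome is best.
maximin_act = max(table, key=lambda act: min(table[act].values()))

# Maximax (the reverse): choose the act whose BEST outcome is best.
maximax_act = max(table, key=lambda act: max(table[act].values()))

print(maximin_act)  # umbrella (its worst outcome, 3, beats 0)
print(maximax_act)  # no_umbrella (its best outcome, 10, beats 5)
```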

27
New cards

MiniMax Regret decision Rule

Construct a regret table and choose the act whose maximum possible regret is least.
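
A minimal sketch of the regret-table construction, with invented utilities: the regret of an act in a state is the gap between the best utility available in that state and the act's utility there.

```python
# Hypothetical decision table: acts -> {state: utility}.
table = {
    "A1": {"S1": 10, "S2": 2},
    "A2": {"S1": 7,  "S2": 6},
}
states = ["S1", "S2"]

# Best achievable utility in each state (the column maximum).
best_in_state = {s: max(table[a][s] for a in table) for s in states}

# Regret table: what you gave up by not choosing the state's best act.
regret = {a: {s: best_in_state[s] - table[a][s] for s in states}
          for a in table}

# MiniMax Regret: choose the act whose maximum possible regret is least.
choice = min(regret, key=lambda a: max(regret[a].values()))
print(choice)  # A2 (max regret 3, versus 4 for A1)
```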

28
New cards

MiniMax Regret decision Rule - Problems

Violates the Independence of Irrelevant Alternatives: adding or removing an act can change the ranking of the remaining acts. It is also not invariant under ordinal transformations.

29
New cards

Positive linear transformations

We can show that, where u* is a positive linear transformation of u: u(x) - u(y) ≥ u(z) - u(w) iff u*(x) - u*(y) ≥ u*(z) - u*(w). Positive Linear Transformations preserve relative distance.
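
A quick numerical check of this invariance, with made-up utilities (the particular numbers and the transformation u* = 3u + 7 are arbitrary illustrations of a positive linear transformation):

```python
# Made-up utilities for four outcomes.
u = {"x": 1.0, "y": 4.0, "z": 9.0, "w": 2.0}

# A positive linear transformation u* = a*u + b with a > 0.
a, b = 3.0, 7.0
u_star = {k: a * v + b for k, v in u.items()}

# Comparisons of utility DIFFERENCES survive the transformation.
before = (u["x"] - u["y"]) >= (u["z"] - u["w"])
after = (u_star["x"] - u_star["y"]) >= (u_star["z"] - u_star["w"])
print(before == after)  # True
```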

30
New cards

Optimism/Pessimism Rule

Combines the Maximin and Maximax rules to find a "compromise" value between the two extremes.

31
New cards

Optimism/Pessimism Rule Formula

a × MAX + (1 − a) × min, where a is an optimism index between 0 and 1.
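
A minimal sketch of the rule, with invented utilities (a is the optimism index):

```python
# Hypothetical utilities for one act across the possible states.
outcomes = [2, 7, 10]

def opt_pess_value(outcomes, a):
    """Optimism/Pessimism value: a * MAX + (1 - a) * min, 0 <= a <= 1."""
    return a * max(outcomes) + (1 - a) * min(outcomes)

print(opt_pess_value(outcomes, 1.0))  # 10.0 -- pure Maximax
print(opt_pess_value(outcomes, 0.0))  # 2.0  -- pure Maximin
print(opt_pess_value(outcomes, 0.5))  # 6.0  -- an even compromise
```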

33
New cards

Ordinal vs Interval Scales

For ordinal scales only the rank ordering is meaningful; for interval scales both the rank ordering and the ratios of differences in utility are meaningful.

34
New cards

Ordinal vs Interval Scales with transformations

u* is an ordinal transformation of u iff for all x and y: u(x) ≥ u(y) iff u*(x) ≥ u*(y), while interval scales require positive linear transformations.

35
New cards

Interval transformation

u* is a positive linear transformation of u iff u* = (a × u) + b, where a > 0

36
New cards

Ratio scales

The rank ordering, the ratios of differences in utility, and the zero point are all meaningful, e.g. kilograms, metres, kelvin. (Degrees Celsius, with its arbitrary zero point, is only an interval scale.)

37
New cards

Ratio scales Transformations

The choice of unit is arbitrary, so any similarity transformation u* of u is as good as u itself: u* is a similarity transformation of u iff u* = a × u, where a > 0

38
New cards

Probabilities understood as

objective chances vs. rational degrees of confidence.

39
New cards

Principle of Insufficient Reason: Nature

A proposal for assigning rational degrees of confidence. Tells you what belief state to be in.

40
New cards

Principle of Insufficient Reason: Principle

Given a set of n possibilities, where you have no evidence favouring any one of the n possibilities over any other, the probability of each possibility is 1/n.

41
New cards

we can combine the principle of insufficient reason with the principle of maximising expected utility.

The expected utility of act is a sum of products: for each possible state, take the probability of that state and multiply it by the utility of the outcome generated by that act in that state.

42
New cards

Expected Utility (where Oi,j is the outcome resulting from performing act Ai in state Sj):

EU(A1) = ∑j P(Sj) x u(O1,j)
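
The sum-of-products definition can be sketched as follows (the states, probabilities, and utilities are invented for illustration):

```python
# Hypothetical states with probabilities, and act A1's utility in each state.
probs = {"S1": 0.3, "S2": 0.7}
utils = {"S1": 10, "S2": 0}  # u(O_{1,j}) for act A1

# EU(A1) = sum over states Sj of P(Sj) * u(O_{1,j}).
eu = sum(probs[s] * utils[s] for s in probs)
print(eu)  # 3.0
```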

43
New cards

Problems with the Principle of Insufficient Reason. - 1

Resnik: “If there is no reason for assuming one set of probabilities, there is no reason for assuming that the states are equiprobable.”

44
New cards

Problems with the Principle of Insufficient Reason. - 1 - Defence

Conflates objective chances with degrees of confidence.

45
New cards

Problems with the Principle of Insufficient Reason. - 2

Resnik: “it could lead to bad results. For all we know, when we make a decision under ignorance, one state with a terrible outcome or that produces a large regret has the greatest chance of being the true one.”

46
New cards

Problems with the Principle of Insufficient Reason. - 2 - Defence

Slippery-slope, overdramatic, again conflates assumptions with reality

47
New cards

Problems with the Principle of Insufficient Reason. - 3

The principle is inconsistent. There generally are multiple ways of chopping up the set of states of the world, and applying the principle of insufficient reason to different ways of chopping up the states will yield different, conflicting results.

48
New cards

Problems with the Principle of Insufficient Reason. - 3 - Examples

van Fraassen’s cube factory example. See also Bertrand’s paradox

49
New cards

Why maximise expected utility?

People are utility-maximising, and maximising expected utility ensures we do as well as possible in expectation.

Recall: The expected utility of an act is not the amount of money (or utility) you think you’ll get if you perform that act.

50
New cards

The Probability Calculus

We write ‘P(S)=a’ to mean that the probability of S is a

51
New cards

The Probability Calculus Axioms

Non-Negativity, Normalisation, Finite Additivity.

52
New cards

The Probability Calculus Axioms - Non-negativity

For all S, P(S) ≥ 0

53
New cards

The Probability Calculus Axioms - Normalisation

If S is a tautology, then P(S) = 1

54
New cards

The Probability Calculus Axioms - Finite Additivity

If S1 and S2 are mutually exclusive, then P(S1 or S2) = P(S1) + P(S2)

55
New cards

Some probability axiomatizations also include countable additivity

extends the previous axiom to the (countably) infinite case.

56
New cards

Probability Calculus - Theorem 1

P(S) + P(not S) = 1

57
New cards

Probability Calculus - Theorem 1 - Proof

S and not S are mutually exclusive, so by Finite Additivity, P(S) + P(not S) = P(S or not S). ‘S or not S’ is a tautology, so P(S or not S) = 1 by Normalization. Therefore, P(S) + P(not S) = 1.

58
New cards

Probability Calculus - Theorem 2

If S1 and S2 are equivalent, then P(S1) = P(S2).

59
New cards

Probability Calculus - Theorem 2 - Proof

Suppose S1 and S2 are equivalent. Then,

(a) ‘not S1 or S2’ is a tautology, and

(b) not S1 and S2 are mutually exclusive. From (a) and Normalization, we get P(not S1 or S2) = 1.

From (b) and Finite Additivity, we get P(not S1 or S2) = P(not S1) + P(S2). From this and Theorem 1, we have 1 = 1 - P(S1) + P(S2). It follows immediately that P(S1) = P(S2).

60
New cards

Probability Calculus - Theorem 3

P(S1 or S2) = P(S1) + P(S2) - P(S1 and S2)

61
New cards

Conditional Probability

We write ‘P(S | Q) = a’ to mean that the conditional probability of S given Q is a.

62
New cards

Conditional Probability - Ratio Analysis

P(S | Q) =df P(S and Q)/P(Q) (provided that P(Q)>0; otherwise undefined)

63
New cards

Conditional Probability - Bayes’ Theorem:

P(Q | S) = P(Q) x P(S | Q)/P(S) (provided P(S)>0)
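
The theorem is simple arithmetic once the three inputs are fixed. The numbers below are invented purely to illustrate:

```python
# Hypothetical inputs: prior P(Q), likelihood P(S | Q), and marginal P(S).
p_q = 0.1          # prior probability of hypothesis Q
p_s_given_q = 0.8  # probability of evidence S given Q
p_s = 0.25         # total probability of S (must be > 0)

# Bayes' Theorem: P(Q | S) = P(Q) * P(S | Q) / P(S)
p_q_given_s = p_q * p_s_given_q / p_s
print(round(p_q_given_s, 2))  # 0.32
```

Here the evidence raises the probability of Q from 0.1 to 0.32, so S is positively relevant to Q.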

64
New cards

Conditional Probability - Probabilistic Independence

S is probabilistically independent of Q iff P(S) = P(S | Q)

65
New cards

From Probabilistic Independence - Multiplication

If S and Q are probabilistically independent relative to probability function P, then P(S and Q) = P(S) x P(Q).

66
New cards

Conditionalisation - Positive Relevance

S is positively relevant to Q iff P(Q | S) > P(Q)

67
New cards

Conditionalisation - Negative Relevance

S is negatively relevant to Q iff P(Q | S) < P(Q)

68
New cards

Probabilistic relevance and evidential support:

We can say that S provides evidence for Q iff S is positively relevant to Q

69
New cards

Two propositions can be (unconditionally) probabilistically independent but probabilistically dependent conditional on a third proposition.

This requires that:

P(Q | S) = P(Q), and either P(Q | S&R) > P(Q | R) or P(Q | S&R) < P(Q | R)

70
New cards

Screening off - Two propositions can be (unconditionally) probabilistically dependent but probabilistically independent conditional on a third proposition.

This requires that: P(Q | S) > P(Q) and P(Q | S&R) = P(Q | R)

71
New cards

Screening off

R screens off S from Q iff Q is unconditionally dependent on S but independent of S conditional on R.
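
A standard way this arises is a common-cause structure, which can be checked numerically. The model below is hypothetical (R causes both S and Q; all numbers invented): S and Q are unconditionally dependent, but R screens S off from Q.

```python
from itertools import product

# Hypothetical common cause: given R, S and Q each occur with prob 0.8
# (independently); given not-R, each occurs with prob 0.2. P(R) = 0.5.
def pr(r, s, q):
    p_s = (0.8 if r else 0.2) if s else (0.2 if r else 0.8)
    p_q = (0.8 if r else 0.2) if q else (0.2 if r else 0.8)
    return 0.5 * p_s * p_q

def prob(pred):
    # Sum the joint probability over all worlds where pred holds.
    return sum(pr(r, s, q)
               for r, s, q in product([True, False], repeat=3)
               if pred(r, s, q))

p_q = prob(lambda r, s, q: q)
p_q_given_s = prob(lambda r, s, q: q and s) / prob(lambda r, s, q: s)
p_q_given_sr = prob(lambda r, s, q: q and s and r) / prob(lambda r, s, q: s and r)
p_q_given_r = prob(lambda r, s, q: q and r) / prob(lambda r, s, q: r)

print(p_q_given_s > p_q)                       # True: S and Q are dependent
print(abs(p_q_given_sr - p_q_given_r) < 1e-9)  # True: R screens off S from Q
```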

72
New cards

The Bayesian norm of Conditionalisation states

when you learn E (and nothing more), your new probabilities should equal your old conditional probabilities - conditional on E.

73
New cards

Conditionalisation basis

Let P0 be your probability function at time t0 and let P1 be your probability function at time t1. If between t0 and t1 you learn (i.e. become certain of) E and nothing stronger, then it is a requirement of rationality that for all H, P1(H) = P0(H | E)

74
New cards

Conditionalisation

says that when you learn E (and nothing stronger), you should assign probability 0 to all of the not-E possibilities and then multiply each remaining nonzero probability by the same constant so that they all still sum to 1.
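
The zero-out-and-renormalize recipe can be sketched over a finite set of worlds (the worlds and probabilities below are invented):

```python
# Hypothetical worlds with prior probabilities; E is the set of worlds
# where the learned proposition is true.
prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
E = {"w1", "w2"}

# Conditionalization: zero out the not-E worlds, then rescale the rest
# by the same constant 1/P(E) so they again sum to 1.
p_e = sum(p for w, p in prior.items() if w in E)
posterior = {w: (p / p_e if w in E else 0.0) for w, p in prior.items()}

print(posterior)  # {'w1': 0.4, 'w2': 0.6, 'w3': 0.0}
```

Note the ratio 0.2 : 0.3 between the E-worlds is preserved as 0.4 : 0.6, as the next card requires.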

75
New cards

Conditionalisation means

that for all propositions S and Q that entail E, the ratio between your old probability for S and your old probability for Q will equal the ratio between your new probability for S and your new probability for Q.

76
New cards

Conditionalization properties:

• It is cumulative
• It is commutative
• It entails that learning nothing leaves your probabilities unchanged

77
New cards

Conditionalization properties: Cumulative

If you learn E1 at t1 and then learn E2 at t2 the result is the same as if you learn E1&E2 all at once.

78
New cards

Conditionalization properties: Commutative

The result of conditionalizing on E1 and then conditionalizing on E2 is the same as the result of conditionalizing on E2 and then conditionalizing on E1.

79
New cards

Conditionalization properties: Entails that

if you learn nothing between t1 and t2, your probabilities at t2 should be the same as your probabilities at t1. (Learning nothing is equivalent to ‘learning’ the tautology and nothing stronger.)

80
New cards

Is the Ratio Analysis Correct?

When you combine Conditionalization with the Ratio Analysis of conditional probability, you get the result that certainties are forever:

81
New cards

Certainties are forever:

Once you become certain of a proposition, you must remain certain forevermore. This is because, according to the Ratio Analysis, if the probability of H is 1, the conditional probability of H given any other proposition E is either 1 (if P(E)>0) or undefined (if P(E)=0). So there’s no proposition such that conditionalizing on that proposition will drop your probability for H from 1 to less than 1.

82
New cards

Ratio Analysis runs into further trouble with infinities.

If we endorse the Ratio Analysis then, for each point on the line segment, the unconditional probability that I hit that point is 0, so the conditional probability of anything given that the point is hit is undefined. Yet some point has to have been hit.

83
New cards

Improved Ratio Analysis

Take conditional probability as primitive and define unconditional probability from it: P(E) =df P(E | T), where T is a tautology. We retain P(H | E) = P(H&E)/P(E) when P(E) > 0, but allow conditional probabilities to still be defined in cases where P(E) = 0.

84
New cards

Ratio Analysis

P(H | E) = P(H&E)/P(E) when P(E)>0

85
New cards

Bayes’ Theorem

P(H | E) = P(E | H) x P(H) / P(E)

87
New cards

Bayesian Approach - prior probability function P

This function gives us both unconditional probabilities and conditional probabilities, standardly defined using the ratio analysis.

88
New cards

Evidential support is cashed out in terms of positive relevance:

E is positively relevant to H =df P(H | E) > P(H)

89
New cards

Statistical inference goes by Conditionalization:

Let P0 be your probability function at time t0 and let P1 be your probability function at time t1. If between t0 and t1 you learn (i.e. become certain of) E and nothing stronger, then it is a requirement of rationality that for all H, P1(H) = P0(H | E)

90
New cards

The term ‘frequentism’

refers both to a theory of statistical inference, and to an interpretation of probability (i.e. a claim about the meaning of statements of probability)

91
New cards

Frequentism in probability

‘The probability of H is n’ is true if and only if the relative frequency of H-events, within the relevant reference class of events, is n

92
New cards

Frequentism - step 1

Choose a null hypothesis H0

93
New cards

Frequentism - step 2

Figure out the possible outcomes of the experiment, and determine the probability of each outcome, on the assumption that the null hypothesis H0 is true

94
New cards

Frequentism - step 3

Given the actual outcome, calculate the probability, given the assumption of the null hypothesis, of getting the actual outcome or any outcome less probable than it. That is, sum the probabilities, given the null hypothesis, of the actual outcome and of each outcome less probable than the actual one.

95
New cards

Frequentism - step 4

The number obtained is known as a p-value. When the p-value is less than or equal to α, we say that your results are ‘statistically significant at level α’ and that you may ‘reject the null H0 at that significance level.’

96
New cards

Frequentism - p-values

Lower p-values are supposed to mean stronger grounds for rejecting the null hypothesis.
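
The four steps can be sketched with a hypothetical experiment (a fair-coin null hypothesis and invented data; this is an illustration, not an example from the course):

```python
from math import comb

# Step 1: null hypothesis H0 - the coin is fair (p = 0.5).
# Hypothetical data: 10 flips, 9 heads observed.
n, observed_heads = 10, 9

# Step 2: probability of each possible outcome (number of heads) under H0.
p_outcome = {k: comb(n, k) * 0.5**n for k in range(n + 1)}

# Step 3: sum the probabilities, given H0, of the actual outcome and of
# every outcome at most as probable as it.
p_actual = p_outcome[observed_heads]
p_value = sum(p for p in p_outcome.values() if p <= p_actual)

# Step 4: compare the p-value to the significance level.
print(p_value)         # 0.021484375 (i.e. 22/1024)
print(p_value <= 0.05) # True: reject H0 at significance level 0.05
```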

97
New cards

The Bayesian Critique of Frequentism - Objection 1

Objection 1: Frequentism involves the use of probabilistic modus tollens

98
New cards

Modus Tollens:

Premise 1: If A then B

Premise 2: not-B

Conclusion: not-A

99
New cards

Frequentist, Probabilistic Modus Tollens

Premise 1: If H0 then probably not-E

Premise 2: E

Conclusion: Probably not-H0

Unlike modus tollens, this inference pattern is invalid: evidence that is improbable given H0 does not by itself make H0 improbable.

100
New cards

The Bayesian Critique of Frequentism - Objection 2

Frequentism makes the mistake of weakening the evidence