CP214: Discrete Structures

Chapter 7: Discrete Probability

Instructor: Dr. Shimaa Abdelmeguid

Outline of Topics

  • Probability Theory

    • Independence

    • Bernoulli Trials

    • Random Variables

    • Expected Values


Probability Theory

Independence

  • Definition:

    • The events E and F are independent if and only if the following equation holds:
      p(E \cap F) = p(E) \cdot p(F)

    • The conditional probability of E given F (with p(F) > 0) is defined as:
      p(E | F) = \frac{p(E \cap F)}{p(F)}

    • When E and F are independent, this reduces to p(E | F) = p(E): knowing that F occurred tells us nothing about E.

    • Symmetry: if p(E \cap F) = p(E) \cdot p(F), then p(F | E) = p(F) as well.


Independence Examples

Example 1: Rolling Dice
  • Let E be the event of rolling an even number with an unbiased die, while F is the event that the resulting number is divisible by three.

  • Events defined as:

    • E = {2, 4, 6}

    • F = {3, 6}

    • E \cap F = {6}

  • Calculate probabilities:

    • p(E) = \frac{3}{6} = \frac{1}{2}

    • p(F) = \frac{2}{6} = \frac{1}{3}

    • p(E \cap F) = \frac{1}{6}

  • Verification:

    • p(E \cap F) = p(E) \cdot p(F):

    • \frac{1}{6} = \frac{1}{2} \cdot \frac{1}{3}

  • Conclusion: E and F are independent.
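As a sanity check, the independence condition can be verified by enumerating the six equally likely outcomes. A minimal Python sketch (the event definitions mirror the example above):

```python
from fractions import Fraction

outcomes = range(1, 7)                      # faces of an unbiased die
E = {n for n in outcomes if n % 2 == 0}     # even numbers: {2, 4, 6}
F = {n for n in outcomes if n % 3 == 0}     # divisible by 3: {3, 6}

def prob(event):
    """Probability of an event under the uniform distribution on the die."""
    return Fraction(len(event), 6)

# 1/6 == (1/2) * (1/3), so E and F are independent
assert prob(E & F) == prob(E) * prob(F)
```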


Example 2: Bit Strings
  • Let E be the event where a random bit string of length four begins with a 1. Let F be the event the bit string contains an even number of 1s.

  • Possible bit strings starting with 1:

    • {1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111}

  • Possible bit strings with an even number of 1s:

    • {0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111}

  • Calculate probabilities:

    • Total strings = 16

    • p(E) = p(F) = \frac{8}{16} = \frac{1}{2}

    • E \cap F = {1111, 1100, 1010, 1001}

    • p(E \cap F) = \frac{4}{16} = \frac{1}{4}

  • Verification:

    • p(E \cap F) = 1/4 = (1/2)(1/2) = p(E) \cdot p(F)

  • Conclusion: E and F are independent.
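The same check works for the bit-string example by generating all 16 strings. A short sketch:

```python
from fractions import Fraction
from itertools import product

# All 16 bit strings of length four
strings = ["".join(bits) for bits in product("01", repeat=4)]

E = {s for s in strings if s.startswith("1")}        # begins with a 1
F = {s for s in strings if s.count("1") % 2 == 0}    # even number of 1s

def prob(event):
    return Fraction(len(event), len(strings))

assert prob(E) == prob(F) == Fraction(1, 2)
assert prob(E & F) == Fraction(1, 4) == prob(E) * prob(F)  # independent
```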


Example 3: Family Children
  • For a family with two children, let E be the event of having two boys (BB) and F be the event of having at least one boy (BB, BG, GB).

    • E = {BB}, p(E) = 1/4

    • F = {BB, BG, GB}, p(F) = 3/4

    • E \cap F = {BB}, p(E \cap F) = 1/4

  • Independence check:

    • p(E) \cdot p(F) = (1/4)(3/4) = 3/16

  • Conclusion: E and F are not independent, since p(E \cap F) = 1/4 \neq 3/16 = p(E) \cdot p(F).
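Enumerating the four equally likely families shows the failure of independence directly (a minimal sketch following the example above):

```python
from fractions import Fraction
from itertools import product

families = ["".join(kids) for kids in product("BG", repeat=2)]  # BB, BG, GB, GG

E = {f for f in families if f == "BB"}       # two boys
F = {f for f in families if "B" in f}        # at least one boy

def prob(event):
    return Fraction(len(event), len(families))

assert prob(E & F) == Fraction(1, 4)
assert prob(E) * prob(F) == Fraction(3, 16)
assert prob(E & F) != prob(E) * prob(F)      # E and F are NOT independent
```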


Bernoulli Trials

Definition and Concepts

  • A Bernoulli trial is defined as a random experiment with two possible outcomes, commonly referred to as a success (S) or failure (F).

  • Let p be the probability of success, and q = 1 - p be the probability of failure.

  • The relationship between probabilities dictates:

    • p + q = 1

  • When conducting n independent Bernoulli trials, the probability of achieving exactly k successes is given by the formula:

    • P(X = k) = C(n, k) p^k q^{n-k}
      where C(n, k) represents the binomial coefficient, the number of ways to choose k successes from n trials.
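The binomial formula above translates directly into code; a minimal sketch using Python's standard-library `math.comb`:

```python
from math import comb

def bernoulli_pmf(n, k, p):
    """Probability of exactly k successes in n independent Bernoulli trials,
    each succeeding with probability p: C(n, k) * p**k * q**(n-k), q = 1 - p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The probabilities over all possible k must sum to 1
assert abs(sum(bernoulli_pmf(5, k, 0.3) for k in range(6)) - 1.0) < 1e-12
```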


Bernoulli Trials: Example and Calculation

Probability Calculation
  • Consider the probability of obtaining two successes in five Bernoulli trials.

  • Let’s denote:

    • Successful outcomes: S

    • Failure outcomes: F

  • Example sequence: SSFFF

  • Probability calculation for this specific sequence:

    • p^2 q^3

  • Any other sequence with two successes, e.g. FSFSF, has the same probability p^2 q^3 by the same reasoning.

  • The number of sequences with k successes in n trials is the binomial coefficient C(n, k), so the total probability here is C(5, 2) p^2 q^3 = 10 p^2 q^3.


Example of a Coin Toss
  • For a biased coin where the probability of heads (success) is 2/3, let's calculate the probability of exactly four heads in seven tosses.

  • Total ways to get four heads among seven tosses is given by the binomial coefficient:

    • C(7, 4)

  • The probability of each combination is:

    • (\frac{2}{3})^4 (\frac{1}{3})^3

  • Therefore, the total probability is given by the product:

    • P(X = 4) = C(7, 4) \left(\frac{2}{3}\right)^4 \left(\frac{1}{3}\right)^3 = \frac{560}{2187} \approx 0.2561
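The calculation above can be checked exactly with rational arithmetic; a short sketch:

```python
from fractions import Fraction
from math import comb

p = Fraction(2, 3)                       # probability of heads on each toss
q = 1 - p                                # probability of tails, 1/3
prob = comb(7, 4) * p**4 * q**3          # C(7,4) * (2/3)^4 * (1/3)^3

assert prob == Fraction(560, 2187)       # ≈ 0.2561
```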


Random Variables

Definition

  • A random variable is defined as a function that maps outcomes from the sample space of an experiment to real numbers.

  • It quantifies the outcomes for mathematical analysis.

Distribution of Random Variables
  • Definition: The distribution of a random variable X over sample space S consists of pairs denoted as {(r, P(X = r)) | r \in X(S)} where P(X = r) signifies the probability that X takes the value r.


Example of a Random Variable

  • Illustrative example: Flipping a coin three times, let X(s) represent the number of heads.

  • Outcomes defined:

    • X(HHH) = 3,

    • X(TTT) = 0,

    • X(HHT) = X(HTH) = X(THH) = 2,

    • X(TTH) = X(THT) = X(HTT) = 1.
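The distribution of this random variable can be computed by enumerating the eight equally likely outcomes. A minimal sketch, with X counting heads as above:

```python
from fractions import Fraction
from itertools import product

outcomes = ["".join(flips) for flips in product("HT", repeat=3)]  # 8 outcomes

def X(s):
    """Random variable: number of heads in the outcome string s."""
    return s.count("H")

# Distribution of X: pairs (r, P(X = r)) for each value r in X(S)
distribution = {r: Fraction(sum(1 for s in outcomes if X(s) == r), len(outcomes))
                for r in sorted({X(s) for s in outcomes})}

assert distribution == {0: Fraction(1, 8), 1: Fraction(3, 8),
                        2: Fraction(3, 8), 3: Fraction(1, 8)}
```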


Expected Values

Definition

  • The expected value of a random variable is a statistical measure that provides a weighted average of the possible outcomes when an experiment is repeated many times.

  • Each outcome contributes to the average based on its probability.

Calculation of Expected Values
  • For a random variable with outcomes 1 and 2, where 1 occurs with P = 0.1 and 2 occurs with P = 0.9, the plain arithmetic average (1 + 2)/2 = 1.5 is misleading because it ignores the probabilities.

  • True average value is computed as a weighted sum:

    • E(X) = 0.1 \times 1 + 0.9 \times 2 = 0.1 + 1.8 = 1.9

  • The generalized formula for the expected value is:

    • E(X) = \sum_{s \in S} P(s) X(s)
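The weighted sum above is a one-liner in code; a minimal sketch reproducing the two-outcome example:

```python
from fractions import Fraction

# Outcomes and their probabilities from the example above; X(s) = s itself
P = {1: Fraction(1, 10), 2: Fraction(9, 10)}

# E(X) = sum over outcomes s of P(s) * X(s)
expected = sum(p * s for s, p in P.items())

assert expected == Fraction(19, 10)   # 1.9, matching the weighted sum above
```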


Examples of Expected Values

Example I: Coin Flips
  • Flipping a fair coin three times:

  • Sample space S contains 8 outcomes. The expected value of the random variable X, which counts how many heads appear, is calculated as:

    • E(X) = \frac{1}{8} [X(HHH) + X(HHT) + X(HTH) + X(THH) + X(TTH) + X(THT) + X(HTT) + X(TTT)]

    • Substituting the values of X gives:

    • E(X) = \frac{1}{8}(3 + 2 + 2 + 2 + 1 + 1 + 1 + 0) = \frac{12}{8} = 1.5
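The same sum can be computed by enumerating all eight outcomes; a short sketch:

```python
from fractions import Fraction
from itertools import product

outcomes = ["".join(flips) for flips in product("HT", repeat=3)]  # 8 outcomes

# Each outcome is equally likely (probability 1/8); X counts heads
EX = sum(Fraction(1, 8) * s.count("H") for s in outcomes)

assert EX == Fraction(3, 2)   # E(X) = 1.5
```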


Example II: Fair Die
  • Random variable X gives the outcome of rolling a fair die. The faces show the values 1 through 6, each with probability \frac{1}{6}.

  • Expected value calculated as:

    • E(X) = \frac{1}{6} \cdot [1 + 2 + 3 + 4 + 5 + 6] = \frac{21}{6} = \frac{7}{2}


Example III: Sum of Dice
  • Random variable X denotes the sum of the numbers when rolling two dice. The sample space contains 36 equally likely outcomes.

  • Distribution analysis shows that while the 36 roll outcomes are equally likely, the values of X are not.

  • Probabilities of outcomes for specific X values:

    • P(X = 2) = \frac{1}{36}

    • P(X = 3) = \frac{2}{36} = \frac{1}{18}

    • P(X = 4) = \frac{3}{36} = \frac{1}{12}

    • P(X = 5) = \frac{4}{36} = \frac{1}{9}

    • P(X = 6) = \frac{5}{36}

    • P(X = 7) = \frac{6}{36} = \frac{1}{6}

    • P(X = 8) = \frac{5}{36}

    • P(X = 9) = \frac{4}{36} = \frac{1}{9}

    • P(X = 10) = \frac{3}{36} = \frac{1}{12}

    • P(X = 11) = \frac{2}{36} = \frac{1}{18}

    • P(X = 12) = \frac{1}{36}

  • Expected value is calculated as:

    • E(X) = 2 \cdot (\frac{1}{36}) + 3 \cdot (\frac{1}{18}) + 4 \cdot (\frac{1}{12}) + 5 \cdot (\frac{1}{9}) + 6 \cdot (\frac{5}{36}) + 7 \cdot (\frac{1}{6}) + 8 \cdot (\frac{5}{36}) + 9 \cdot (\frac{1}{9}) + 10 \cdot (\frac{1}{12}) + 11 \cdot (\frac{1}{18}) + 12 \cdot (\frac{1}{36}) = 7

  • Interpretation: over many repeated rolls, the average sum approaches 7.
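Both the distribution table and E(X) = 7 can be recomputed by brute force over the 36 outcomes; a minimal sketch:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # 36 equally likely (die1, die2) pairs

# Distribution of X = sum of the two dice
dist = {v: Fraction(sum(1 for r in rolls if sum(r) == v), 36) for v in range(2, 13)}

assert dist[7] == Fraction(1, 6)                 # the most likely sum
assert sum(v * p for v, p in dist.items()) == 7  # E(X) = 7
```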


Theorem Relating to Expected Values

  • Theorem:

    • If X and Y are random variables on a sample space S, and a and b are real constants:

    • E(X + Y) = E(X) + E(Y)

    • E(aX + b) = aE(X) + b

  • Following this theorem, the expected value of a sum of two dice can be simplified using their individual expected values:

    • For random variables X₁ and X₂ giving the values of the two dice (linearity holds even without independence), we find that:

    • E(X₁) = E(X₂) = \frac{7}{2}

    • Thus, E(X₁ + X₂) = E(X₁) + E(X₂) = 7
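Linearity of expectation can be checked numerically on the two-dice sample space; a short sketch:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # sample space for two dice

def E(X):
    """Expected value of X over the 36 equally likely roll outcomes."""
    return sum(Fraction(1, 36) * X(r) for r in rolls)

X1 = lambda r: r[0]   # value of the first die
X2 = lambda r: r[1]   # value of the second die

assert E(X1) == E(X2) == Fraction(7, 2)
assert E(lambda r: X1(r) + X2(r)) == E(X1) + E(X2) == 7   # E(X+Y) = E(X) + E(Y)
```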


Summary of Key Topics

  • Discussed concepts include:

    • Discrete Probability

    • Conditional Probability

    • Bernoulli Trials

    • Random Variables

    • Expected Values


End of Notes