Simulate 20 Bernoulli trials with a success probability of 0.75.

The Bernoulli trial probability mass function is f(k; p) = p^{k}q^{(1 - k)}, where k = 1 marks a success, k = 0 a failure, and q = 1 - p. Here p = 0.75 and q = 0.25.
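A simulation like the one tabulated below can be sketched in Python. The helper names `simulate_bernoulli` and `pmf` are illustrative, not part of the original exercise:

```python
import random

def simulate_bernoulli(n, p, seed=None):
    """Simulate n Bernoulli trials with success probability p.
    Returns a list of outcomes: 1 = success, 0 = failure."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def pmf(k, p):
    """Bernoulli PMF: p^k * q^(1 - k) for k in {0, 1}."""
    return p**k * (1 - p)**(1 - k)

trials = simulate_bernoulli(20, 0.75)
print(trials)
print([pmf(k, 0.75) for k in trials])  # 0.75 for each success, 0.25 for each failure
```

Because the draws are random, each run of the sketch produces a different sequence of successes and failures.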

| Trial # | Outcome | p^{k}q^{(1 - k)} | Simplified | Probability |
|---|---|---|---|---|
| 1 | Failure | 0.75^{0}0.25^{(1 - 0)} | 1 × 0.25 | 0.25 |
| 2 | Failure | 0.75^{0}0.25^{(1 - 0)} | 1 × 0.25 | 0.25 |
| 3 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 4 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 5 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 6 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 7 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 8 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 9 | Failure | 0.75^{0}0.25^{(1 - 0)} | 1 × 0.25 | 0.25 |
| 10 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 11 | Failure | 0.75^{0}0.25^{(1 - 0)} | 1 × 0.25 | 0.25 |
| 12 | Failure | 0.75^{0}0.25^{(1 - 0)} | 1 × 0.25 | 0.25 |
| 13 | Failure | 0.75^{0}0.25^{(1 - 0)} | 1 × 0.25 | 0.25 |
| 14 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 15 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 16 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 17 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 18 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 19 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |
| 20 | Success | 0.75^{1}0.25^{(1 - 1)} | 0.75 × 1 | 0.75 |

Given our success probability of 0.75, we would expect 0.75 × 20 = 15 successes.

Our actual results were 14 successes and 6 failures.

The median of a Bernoulli trial is determined as follows:

- If q > p, 0
- If q = p, 0.5
- If q < p, 1

Since q < p (0.25 < 0.75), our median is 1.
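The three-case rule above can be written directly as a small function (the name `bernoulli_median` is illustrative):

```python
def bernoulli_median(p):
    """Median of a Bernoulli(p) distribution, using the convention
    that the median is 0.5 when q == p."""
    q = 1 - p
    if q > p:
        return 0
    if q == p:
        return 0.5
    return 1

print(bernoulli_median(0.75))  # q = 0.25 < p = 0.75, so the median is 1
```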

Variance σ² = pq

Variance σ² = (0.75)(0.25)

Variance σ² = 0.1875
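The variance computation is a one-line check:

```python
p = 0.75
q = 1 - p
variance = p * q  # σ² = pq for a Bernoulli trial
print(variance)   # 0.1875
```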

Skewness = (q - p) / √(pq)

Skewness = (0.25 - 0.75) / √((0.75)(0.25))

Skewness = -0.5 / √0.1875

Skewness = -0.5 / 0.433

Skewness ≈ -1.15
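The skewness derivation above can be verified numerically:

```python
import math

p, q = 0.75, 0.25
skewness = (q - p) / math.sqrt(p * q)
print(round(skewness, 4))  # -1.1547
```

The negative sign reflects the distribution leaning toward the more likely outcome (success).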

Excess Kurtosis = (1 - 6pq) / (pq)

Excess Kurtosis = (1 - 6(0.75)(0.25)) / ((0.75)(0.25))

Excess Kurtosis = (1 - 1.125) / 0.1875

Excess Kurtosis = -0.125 / 0.1875

Excess Kurtosis ≈ -0.67
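The same check works for the excess kurtosis:

```python
p, q = 0.75, 0.25
excess_kurtosis = (1 - 6 * p * q) / (p * q)
print(round(excess_kurtosis, 4))  # -0.6667
```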

Entropy = -q ln(q) - p ln(p)

Entropy = -(0.25) ln(0.25) - (0.75) ln(0.75)

Entropy = -(0.25)(-1.386) - (0.75)(-0.288)

Entropy = 0.347 + 0.216

Entropy ≈ 0.56 nats
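Finally, the entropy (in nats, since the formula uses the natural logarithm) can be verified with:

```python
import math

p, q = 0.75, 0.25
entropy = -q * math.log(q) - p * math.log(p)
print(round(entropy, 4))  # 0.5623
```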