---
title: "R Notebook"
output: html_notebook
---
# Chapter 13 Probability
One of the goals of this part of the book is to help us understand how probability is useful to understand and describe real-world events when performing data analysis.
Probability theory is useful in many other contexts and, in particular, in areas that depend on data affected by chance in some way.
## 13.1 Discrete probability
We start by covering some basic principles related to categorical data. This subset of probability is referred to as discrete probability. It will help us understand the probability theory we will later introduce for numeric and continuous data, which is much more common in data science applications. Discrete probability is most useful in card games, so we use these as examples.
### 13.1.1 Relative frequency
A tangible way to think about the probability of an event is as the proportion of times the event occurs when we repeat the experiment an infinite number of times, independently, and under the same conditions.
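To make this concrete, here is a minimal sketch (ours, not the book's) that estimates the probability of heads for a fair coin as a relative frequency over many simulated tosses:
```{r}
# A minimal sketch: estimate Pr(heads) as the relative frequency
# across many independent simulated tosses of a fair coin.
tosses <- sample(c("heads", "tails"), 10000, replace = TRUE)
mean(tosses == "heads")  # should be close to 0.5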
### 13.1.2 Notation
We use the notation $Pr(A)$ to denote the probability of event A happening. We use the very general term event to refer to things that can happen when something occurs by chance.
In data science applications, we will often deal with continuous variables. These events will often be things like "is this person taller than 6 feet". In this case, we write events in a more mathematical form: $X \geq 6$. We will see more of these examples later. Here we focus on categorical data.
### 13.1.3 Probability distributions
If we know the relative frequency of the different categories, defining a distribution for categorical outcomes is relatively straightforward. We simply assign a probability to each category.
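For instance, a poll-like distribution over party preference (the numbers here are illustrative) can be stored as a named vector whose entries sum to 1:
```{r}
# Illustrative probabilities only; any set of non-negative numbers
# summing to 1 defines a valid categorical distribution.
party_prob <- c(Democrat = 0.44, Republican = 0.44, Undecided = 0.10, Green = 0.02)
sum(party_prob)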
## 13.2 Monte Carlo simulations for categorical data
Computers provide a way to actually perform the simple random experiment: pick a bead at random from a bag that contains three blue beads and two red ones. Random number generators permit us to mimic the process of picking at random.
An example is the `sample` function in R. We demonstrate its use in the code below. First, we use the function `rep` to generate the urn:
```{r}
beads <- rep(c("red", "blue"), times = c(2, 3))
beads
```
and then use `sample` to pick a bead at random:
```{r}
sample(beads, 1)
```
This line of code produces one random outcome. We want to repeat this experiment an infinite number of times, but it is impossible to repeat forever. Instead, we repeat the experiment a large enough number of times to make the results practically equivalent to repeating forever. __This is an example of a Monte Carlo simulation.__
Much of what mathematical and theoretical statisticians study, which we do not cover in this book, relates to providing rigorous definitions of “practically equivalent” as well as studying how close a large number of experiments gets us to what happens in the limit. Later in this section, we provide a practical approach to deciding what is “large enough”.
To perform our first Monte Carlo simulation, we use the replicate function, which permits us to repeat the same task any number of times. Here, we repeat the random event $B=10,000$ times:
```{r}
B <- 10000
events <- replicate(B, sample(beads, 1))
```
We can now see if our definition actually is in agreement with this Monte Carlo simulation approximation. We can use `table` to see the distribution:
```{r}
tab <- table(events)
tab
```
and `prop.table` gives us the proportions:
```{r}
prop.table(tab)
```
The numbers above are the estimated probabilities provided by this Monte Carlo simulation. Statistical theory, not covered here, tells us that as $B$ gets larger, the estimates get closer to $3/5 = 0.6$ and $2/5 = 0.4$.
Although this is a simple and not very useful example, we will use Monte Carlo simulations to estimate probabilities in cases in which it is harder to compute the exact ones. Before delving into more complex examples, we use simple ones to demonstrate the computing tools available in R.
### 13.2.1 Setting the random seed
Before we continue, we will briefly explain the following important line of code:
```{r}
set.seed(1986)
```
If you want to ensure that results are exactly the same every time you run them, you can set R's random number generation seed to a specific number. Above we set it to 1986. We want to avoid using the same seed every time. A popular way to pick the seed is the year - month - day. For example, we picked 1986 on December 20, 2018: $2018 - 12 - 20 = 1986$
```{r}
?set.seed
```
In the exercises, we may ask you to set the seed to ensure that the results you obtain are exactly what we expect them to be.
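As a quick demonstration (our own, not from the book), resetting the seed reproduces the same random draws:
```{r}
# Setting the same seed before each call reproduces the same draws.
set.seed(1986)
sample(beads, 3)
set.seed(1986)
sample(beads, 3)  # identical to the draw above
```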
### 13.2.2 With and without replacement
The function `sample` has an argument that permits us to pick more than one element from the urn. However, by default, this selection occurs without replacement: after a bead is selected, it is not put back in the bag. Notice what happens when we ask to randomly select five beads:
```{r}
sample(beads, 5)
sample(beads, 5)
sample(beads, 5)
```
This results in rearrangements that always have three blue and two red beads. If we ask that six beads be selected, we get an error:
```{r, error=TRUE}
sample(beads, 6)
```
However, the `sample` function can be used directly, without the use of `replicate`, to repeat the same experiment of picking 1 out of the 5 beads, continually, under the same conditions. To do this, we sample with replacement: return the bead back to the urn after selecting it. We can tell `sample` to do this by changing the `replace` argument, which defaults to `FALSE`, to `replace = TRUE`:
```{r}
events <- sample(beads, B, replace = TRUE)
prop.table(table(events))
```
Not surprisingly, we get results very similar to those previously obtained with `replicate`.
## 13.3 Independence
We say two events are independent if the outcome of one does not affect the other. The classic example is coin tosses. Every time we toss a fair coin, the probability of seeing heads is 1/2 regardless of what previous tosses have revealed. The same is true when we pick beads from an urn with replacement. In the example above, the probability of red is 0.40 regardless of previous draws.
Many examples of events that are not independent come from card games. When we deal the first card, the probability of getting a King is 1/13 since there are thirteen possibilities: Ace, Deuce, Three, …, Ten, Jack, Queen, and King. Now if we deal a King for the first card and don't replace it into the deck, the probability that the second card is a King is lower because there are only three Kings left: the probability is 3 out of 51. These events are therefore __not independent__: the first outcome affected the next one.
To see an extreme case of non-independent events, consider our example of drawing five beads at random __without__ replacement:
```{r}
x <- sample(beads, 5)
```
If you have to guess the color of the first bead, you will predict blue since blue has a 60% chance. But if I show you the result of the last four outcomes:
```{r}
x[2:5]
```
would you still guess blue? Of course not. Now you know that the probability of red is 1 since the only bead left is red. The events are not independent, so the probabilities change.
## 13.4 Conditional probabilities
When events are not independent, conditional probabilities are useful. We already saw an example of a conditional probability: we computed the probability that a second dealt card is a King given that the first was a King. In probability, we use the following notation:
$\Pr(\text{Card 2 is a King} \mid \text{Card 1 is a King}) = 3/51$
We use $\mid$ as shorthand for "given that" or "conditional on".
When two events, say A and B, are independent, we have:
$\Pr(A \mid B) = \Pr(A)$
This is the mathematical way of saying: the fact that B happened does not affect the probability of A happening. In fact, this can be considered the mathematical definition of independence.
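We can check this definition with a small simulation sketch (ours, not the book's): when sampling with replacement, the proportion of blue on the second draw is about 0.6 whether or not we condition on the first draw being blue:
```{r}
# A minimal sketch, assuming draws with replacement:
# Pr(second is blue | first is blue) should be close to Pr(second is blue).
first <- sample(beads, B, replace = TRUE)
second <- sample(beads, B, replace = TRUE)
mean(second == "blue")                   # Pr(A), about 0.6
mean(second[first == "blue"] == "blue")  # Pr(A | B), also about 0.6
```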
## 13.5 Addition and multiplication rules
### 13.5.1 Multiplication rule
If we want to know the probability of two events, say A and B, occurring, we can use the multiplication rule:
$\Pr(A \text{ and } B) = \Pr(A)\Pr(B \mid A)$
Let’s use Blackjack as an example. In Blackjack, you are assigned two random cards. After you see what you have, you can ask for more. The goal is to get closer to 21 than the dealer, without going over. Face cards are worth 10 points and Aces are worth 11 or 1 (you choose).
So, in a Blackjack game, to calculate the chances of getting a 21 by drawing an Ace and then a face card, we compute the probability of the first being an Ace and multiply by the probability of drawing a face card or a 10 given that the first was an Ace:
$1/13 \times 16/51 \approx 0.02$
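We can check this arithmetic directly in R (4 Aces out of 52 cards, then 16 face-or-ten cards out of the remaining 51):
```{r}
# 4 Aces out of 52, then 16 face cards or tens out of the remaining 51.
(4 / 52) * (16 / 51)
```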
The multiplication rule also applies to more than two events. We can use induction to expand for more events:
$\Pr(A \text{ and } B \text{ and } C) = \Pr(A)\Pr(B \mid A)\Pr(C \mid A \text{ and } B)$
### 13.5.2 Multiplication rule under independence
When we have independent events, then the multiplication rule becomes simpler:
$\Pr(A \text{ and } B \text{ and } C) = \Pr(A)\Pr(B)\Pr(C)$
But we have to be very careful before using this since assuming independence can result in very different and incorrect probability calculations when we don’t actually have independence.
As an example, imagine a court case in which the suspect was described as having a mustache and a beard. The defendant has a mustache and a beard and the prosecution brings in an "expert" to testify that 1/10 men have beards and 1/5 have mustaches, so using the multiplication rule we conclude that only $1/10 \times 1/5$, or 0.02, have both.
But to multiply like this we need to assume independence! Say the conditional probability of a man having a mustache given that he has a beard is 0.95. Then the correct probability is much higher: $1/10 \times 95/100 = 0.095$.
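Checking the corrected calculation in R:
```{r}
# Pr(beard) * Pr(mustache | beard), using the stated conditional probability.
(1 / 10) * (95 / 100)
```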
The multiplication rule also gives us a general formula for computing conditional probabilities:
$\Pr(B \mid A) = \Pr(A \text{ and } B) / \Pr(A)$
To illustrate how we use these formulas and concepts in practice, we will use several examples related to card games.
### 13.5.3 Addition rule
The addition rule tells us that:
$\Pr(A \text{ or } B) = \Pr(A) + \Pr(B) - \Pr(A \text{ and } B)$
This rule is intuitive: think of a Venn diagram. If we simply add the probabilities, we count the intersection twice, so we need to subtract one instance.
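As a small worked example (ours, not the book's): the probability that a single drawn card is a King or a Heart counts the King of Hearts only once:
```{r}
# Pr(King) + Pr(Heart) - Pr(King of Hearts), so the overlap is not double counted.
4 / 52 + 13 / 52 - 1 / 52
```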
## 13.6 Combinations and permutations
In our very first example, we imagined an urn with five beads. As a reminder, to compute the probability distribution of one draw, we simply listed out all the possibilities. There were five, so for each event we counted how many of these possibilities were associated with the event. The resulting probability of choosing a blue bead is 3/5 because out of the five possible outcomes, three were blue.
For more complicated cases, the computations are not as straightforward. For instance, if I draw five cards without replacement, what is the probability that they are all of the same suit, known as a "flush" in poker? In a discrete probability course you learn theory on how to make these computations. Here we focus on how to use R code to compute the answers.
First, let’s construct a deck of cards. For this, we will use the `expand.grid` and `paste` functions. We use `paste` to create strings by joining smaller strings. To do this, we take the number and suit of a card and create the card name like this:
```{r}
number <- "Three"
suit <- "Hearts"
paste(number, suit)
```
`paste` also works on pairs of vectors, performing the operation element-wise:
```{r}
paste(letters[1:5], as.character(1:5))
```
The function `expand.grid` gives us all the combinations of entries of two vectors. For example, if you have blue and black pants and white, grey, and plaid shirts, all your combinations are:
```{r}
expand.grid(pants = c("blue", "black"), shirt = c("white", "grey", "plaid"))
```
Here is how we generate a deck of cards:
```{r}
suits <- c("Diamonds", "Clubs", "Hearts", "Spades")
numbers <- c("Ace", "Deuce", "Three", "Four", "Five", "Six", "Seven",
             "Eight", "Nine", "Ten", "Jack", "Queen", "King")
deck <- expand.grid(number = numbers, suit = suits)
deck <- paste(deck$number, deck$suit)
deck
```
With the deck constructed, we can double check that the probability of a King in the first card is 1/13 by computing the proportion of possible outcomes that satisfy our condition:
```{r}
kings <- paste("King", suits)
mean(deck %in% kings)
```
Now, how about the conditional probability of the second card being a King given that the first was a King? Earlier, we deduced that if one King is already out of the deck and there are 51 left, then this probability is 3/51. Let’s confirm by listing out all possible outcomes.
To do this, we can use the `permutations` function from the __gtools__ package. For any list of size `n`, this function computes all the ordered ways we can select `r` items. Here are all the ways we can choose two numbers from a list consisting of `1, 2, 3`:
```{r}
library(gtools)
permutations(3, 2)
```
Notice that the order matters here: (3,1) is different from (1,3). Also, note that (1,1), (2,2), and (3,3) do not appear because once we pick a number, it can't appear again.
Optionally, we can add a vector of elements to select from through the argument `v`. If you want to see five random seven-digit phone numbers out of all possible phone numbers (without repeats), you can type:
```{r}
all_phone_numbers <- permutations(10, 7, v = 0:9)  # generate all possible numbers
n <- nrow(all_phone_numbers)                       # the number of possibilities
index <- sample(n, 5)                              # pick just 5 of them
all_phone_numbers[index, ]
```
Instead of using the numbers 1 through 10, the default, it uses what we provided through `v`: the digits 0 through 9.
To compute all possible ways we can choose two cards when the order matters, we type:
```{r}
hands <- permutations(52, 2, v = deck)
```
This is a matrix with two columns and 2652 rows. With a matrix we can get the first and second cards like this:
```{r}
first_card <- hands[, 1]
second_card <- hands[, 2]
```
Now the cases for which the first card was a King can be computed like this:
```{r}
kings <- paste("King", suits)
sum(first_card %in% kings)
```
To get the conditional probability, we compute what fraction of these have a King in the second card:
```{r}
sum(first_card %in% kings & second_card %in% kings) / sum(first_card %in% kings)
```
which is exactly 3/51, as we had already deduced. Notice that the code above is equivalent to:
```{r}
mean(first_card %in% kings & second_card %in% kings) / mean(first_card %in% kings)
```
which uses `mean` instead of `sum` and is an R version of:
$\Pr(A \text{ and } B) / \Pr(A)$
How about if the __order doesn’t matter__? For example, in Blackjack if you get an Ace and a face card in the first draw, it is called a Natural 21 and you win automatically. If we wanted to compute the probability of this happening, we would enumerate the combinations, not the permutations, since the order does not matter.
```{r}
combinations(3, 2)
```
In the second line, the outcome does not include (2,1) because (1,2) already was enumerated. The same applies to (3,1) and (3,2).
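As a sanity check (our addition), `combinations` produces `choose(52, 2)` = 1326 unordered two-card hands, half of the 2652 ordered pairs we obtained with `permutations`:
```{r}
# Unordered hands: choose(52, 2) = 1326, half of the 52 * 51 = 2652 ordered pairs.
nrow(combinations(52, 2, v = deck))
choose(52, 2)
```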
So to compute the probability of a Natural 21 in Blackjack, we can do this:
```{r}
aces <- paste("Ace", suits)
facecard <- c("King", "Queen", "Jack", "Ten")
facecard <- expand.grid(number = facecard, suit = suits)
facecard <- paste(facecard$number, facecard$suit)
hands <- combinations(52, 2, v = deck)
mean(hands[, 1] %in% aces & hands[, 2] %in% facecard)
```
In the last line, we assume the Ace comes first. This is only because we know the way `combinations` enumerates possibilities and it will list this case first. But to be safe, we could have written this and produced the same answer:
```{r}
mean((hands[, 1] %in% aces & hands[, 2] %in% facecard) |
     (hands[, 2] %in% aces & hands[, 1] %in% facecard))
```
### 13.6.1 Monte Carlo example
Instead of using `combinations` to deduce the exact probability of a Natural 21, we can use a Monte Carlo simulation to estimate this probability. In this case, we draw two cards over and over and keep track of how many 21s we get. We can use the function `sample` to draw two cards without replacement:
```{r}
hand <- sample(deck, 2)
hand
```
And then check if one card is an Ace and the other a face card or a 10. Going forward, we include 10 when we say face card. Now we need to check both possibilities:
```{r}
(hand[1] %in% aces & hand[2] %in% facecard) |
  (hand[2] %in% aces & hand[1] %in% facecard)
```
If we repeat this 10,000 times, we get a very good approximation of the probability of a Natural 21.
Let’s start by writing a function that draws a hand and returns TRUE if we get a 21. The function does not need any arguments because it uses objects defined in the global environment.
```{r}
blackjack <- function(){
  hand <- sample(deck, 2)
  (hand[1] %in% aces & hand[2] %in% facecard) |
    (hand[2] %in% aces & hand[1] %in% facecard)
}
```
Here we do have to check both possibilities: Ace first or Ace second because we are not using the `combinations` function. The function returns `TRUE` if we get a 21 and `FALSE` otherwise:
```{r}
blackjack()
```
Now we can play this game, say, 10,000 times:
```{r}
B <- 10000
results <- replicate(B, blackjack())
mean(results)
```
## 13.7 Examples
In this section, we describe two popular discrete probability examples: the Monty Hall problem and the birthday problem. We use R to help illustrate the mathematical concepts.
### 13.7.1 Monty Hall problem
In the 1970s, there was a game show called “Let’s Make a Deal” and Monty Hall was the host. At some point in the game, contestants were asked to pick one of three doors. Behind one door there was a prize. The other doors had a goat behind them to show the contestant they had lost. After the contestant picked a door, before revealing whether the chosen door contained a prize, Monty Hall would open one of the two remaining doors and show the contestant there was no prize behind that door. Then he would ask “Do you want to switch doors?” What would you do?
We can use probability to show that if you stick with the original door choice, your chances of winning a prize remain 1 in 3. However, if you switch to the other door, your chances of winning double to 2 in 3! This seems counterintuitive. Many people incorrectly think both chances are 1 in 2 since you are choosing between 2 options. You can watch a detailed mathematical explanation on Khan Academy or read one on Wikipedia. Below we use a Monte Carlo simulation to see which strategy is better. Note that the code below is longer than it needs to be, for pedagogical purposes.
Let’s start with the stick strategy:
```{r}
B <- 10000
monty_hall <- function(strategy){
  doors <- as.character(1:3)
  prize <- sample(c("car", "goat", "goat"))  # randomly assign the prize
  prize_door <- doors[prize == "car"]
  my_pick <- sample(doors, 1)  # the contestant's initial choice
  show <- sample(doors[!doors %in% c(my_pick, prize_door)], 1)  # Monty opens a losing door
  stick <- my_pick
  stick == prize_door
  switch <- doors[!doors %in% c(my_pick, show)]  # the remaining unopened door
  choice <- ifelse(strategy == "stick", stick, switch)
  choice == prize_door  # TRUE if the chosen strategy wins
}
stick <- replicate(B, monty_hall("stick"))
mean(stick)
switch <- replicate(B, monty_hall("switch"))
mean(switch)
```
As we write the code, we note that the lines starting with `my_pick` and `show` have no influence on the last logical operation when we stick to our original choice. From this we should realize that the chance of winning is 1 in 3, which is what we began with. When we switch, the Monte Carlo estimate confirms the 2/3 calculation. This helps us gain some insight by showing that we are removing from our choices a door, `show`, that is definitely not a winner. We also see that when we switch, unless we pick the right door on our first try, we win: $1 - 1/3 = 2/3$.
### 13.7.2 Birthday problem
Suppose you are in a classroom with 50 people. If we assume this is a randomly selected group of 50 people, what is the chance that at least two people have the same birthday? Although it is somewhat advanced, we can deduce this mathematically. We will do this later. Here we use a Monte Carlo simulation. For simplicity, we assume nobody was born on February 29. This actually doesn’t change the answer much.
First, note that birthdays can be represented as numbers between 1 and 365, so a sample of 50 birthdays can be obtained like this:
```{r}
n <- 50
bdays <- sample(1:365, n, replace = TRUE)
```
To check if in this particular set of 50 people we have at least two with the same birthday, we can use the function `duplicated`, which returns TRUE whenever an element of a vector is a duplicate. Here is an example:
```{r}
duplicated(c(1, 2, 3, 1, 4, 3, 5))
```
The second time 1 and 3 appear, we get a `TRUE`. So to check if two birthdays were the same, we simply use the `any` and `duplicated` functions like this:
```{r}
any(duplicated(bdays))
```
In this case, we see that it did happen. At least two people had the same birthday.
To estimate the probability of a shared birthday in the group, we repeat this experiment by sampling sets of 50 birthdays over and over:
```{r}
B <- 10000
same_birthday <- function(n){
bdays <- sample(1:365, n, replace=TRUE)
any(duplicated(bdays))
}
results <- replicate(B, same_birthday(50))
mean(results)
```
Were you expecting the probability to be this high?
People tend to underestimate these probabilities. To get an intuition as to why it is so high, think about what happens when the group size is close to 365. At this stage, we run out of days and the probability is one.
Say we want to use this knowledge to bet with friends about two people having the same birthday in a group of people. When are the chances larger than 50%? Larger than 75%?
Let’s create a look-up table. We can quickly create a function to compute this for any group size:
```{r}
compute_prob <- function(n, B=10000){
results <- replicate(B, same_birthday(n))
mean(results)
}
```
Using the function `sapply`, we can apply any function element-wise to a vector:
```{r}
n <- seq(1,60)
prob <- sapply(n, compute_prob)
```
We can now make a plot of the estimated probabilities of two people having the same birthday in a group of size n:
```{r}
library(tidyverse)
qplot(n, prob)
```
Now let’s compute the exact probabilities rather than use Monte Carlo approximations. Not only do we get the exact answer using math, but the computations are much faster since we don’t have to generate experiments.
To make the math simpler, instead of computing the probability of it happening, we will compute the probability of it not happening. For this, we use the multiplication rule.
Let's start with the first person. The probability that person 1 has a unique birthday is 1. The probability that person 2 has a unique birthday, given that person 1 already took one, is 364/365. Then, given that the first two people have unique birthdays, person 3 is left with 363 days to choose from. We continue this way and find that the chance of all $n$ people having unique birthdays is:
$1 \times \frac{364}{365} \times \frac{363}{365} \times \cdots \times \frac{365 - n + 1}{365}$
We can write a function that computes the probability of at least one shared birthday (one minus the product above) for any group size:
```{r}
exact_prob <- function(n){
  prob_unique <- seq(365, 365 - n + 1) / 365  # per-person probabilities of a unique birthday
  1 - prod(prob_unique)                       # probability of at least one shared birthday
}
eprob <- sapply(n, exact_prob)
qplot(n, prob) + geom_line(aes(n, eprob), col = "red")
```
This plot shows that the Monte Carlo simulation provided a very good estimate of the exact probability. Had it not been possible to compute the exact probabilities, we would have still been able to accurately estimate the probabilities.
## 13.8 Infinity in practice
The theory described here requires repeating experiments over and over forever. In practice we can't do this. In the examples above, we used $B = 10{,}000$ Monte Carlo experiments and it turned out that this provided accurate estimates. The larger this number, the more accurate the estimate becomes, until the approximation is so good that your computer can't tell the difference. But in more complex calculations, 10,000 may not be nearly enough. Also, for some calculations, 10,000 experiments might not be computationally feasible. In practice, we won't know what the answer is, so we won't know if our Monte Carlo estimate is accurate. We know that the larger $B$ is, the better the approximation. But how big do we need it to be? This is actually a challenging question, and answering it often requires advanced theoretical statistics training.
One practical approach we will describe here is to check for the stability of the estimate. The following is an example with the birthday problem for a group of 25 people.
```{r}
B <- 10^seq(1, 5, len = 100)
compute_prob <- function(B, n=25){
same_day <- replicate(B, same_birthday(n))
mean(same_day)
}
prob <- sapply(B, compute_prob)
qplot(log10(B), prob, geom = "line")
```
In this plot, we can see that the values start to stabilize (that is, they vary less than .01) around 1000. Note that the exact probability, which we know in this case, is 0.569.
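As a cross-check (our addition), the `exact_prob` function defined earlier returns the value the Monte Carlo estimates stabilize around:
```{r}
# Exact probability of a shared birthday among 25 people, about 0.569.
exact_prob(25)
```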
## 13.9 Exercises
1. One ball will be drawn at random from a box containing: 3 cyan balls, 5 magenta balls, and 7 yellow balls. What is the probability that the ball will be cyan?
__3/15__
2. What is the probability that the ball will not be cyan?
__12/15__
3. Instead of taking just one draw, consider taking two draws. You take the second draw without returning the first draw to the box. We call this sampling without replacement. What is the probability that the first draw is cyan and that the second draw is not cyan?
__$3/15 \times 12/14 = 6/35$__
4. Now repeat the experiment, but this time, after taking the first draw and recording the color, return it to the box and shake the box. We call this sampling with replacement. What is the probability that the first draw is cyan and that the second draw is not cyan?
__$3/15 \times 12/15 = 4/25$__
5. Two events A and B are independent if Pr(A and B)=Pr(A)Pr(B). Under which situation are the draws independent?
a. You don’t replace the draw.
__b. You replace the draw.__
c. Neither
d. Both
6. Say you’ve drawn 5 balls from the box, with replacement, and all have been yellow. What is the probability that the next one is yellow?
__7/15__
7. If you roll a 6-sided die six times, what is the probability of not seeing a 6?
$(5/6)^6$
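A quick Monte Carlo check of this answer (our addition), since $(5/6)^6 \approx 0.335$:
```{r}
# Simulate six rolls many times and count the runs with no six at all.
no_six <- replicate(10000, !any(sample(1:6, 6, replace = TRUE) == 6))
mean(no_six)
```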
8. Two teams, say the Celtics and the Cavs, are playing a seven game series. The Cavs are a better team and have a 60% chance of winning each game. What is the probability that the Celtics win at least one game?
Since the series ends as soon as one team wins four games, the Celtics fail to win any game only if the Cavs win the first four: $1-(0.6)^4 \approx 0.87$
9. Create a Monte Carlo simulation to confirm your answer to the previous problem. Use `B <- 10000` simulations. Hint: use the following code to generate the results of the first four games:
```{r}
celtic_wins <- sample(c(0,1), 4, replace = TRUE, prob = c(0.6, 0.4))
```
The Celtics must win one of these 4 games.
```{r}
celtics_wins4 <- function(){
celtic_wins <- sample(c(0,1), 4, replace = TRUE, prob = c(0.6, 0.4))
any(celtic_wins == 1)
}
mean(replicate(B, celtics_wins4()))
```
10. Two teams, say the Cavs and the Warriors, are playing a seven game championship series. The first to win four games, therefore, wins the series. The teams are equally good so they each have a 50-50 chance of winning each game. If the Cavs lose the first game, what is the probability that they win the series?
After losing the first game, the Cavs win the series if and only if they win at least 4 of the remaining 6 games: $\left[\binom{6}{4}+\binom{6}{5}+\binom{6}{6}\right](0.5)^6 = 22/64 \approx 0.34$
11. Confirm the results of the previous question with a Monte Carlo simulation.
```{r}
cavs_win <- function(){
  cavs_wins <- sample(c(0, 1), 6, replace = TRUE, prob = c(0.5, 0.5))  # the remaining 6 games
  sum(cavs_wins) >= 4  # the Cavs need at least 4 more wins
}
mean(replicate(B, cavs_win()))
```
12. Two teams, A and B, are playing a seven game series. Team A is better than team B and has a p>0.5 chance of winning each game. Given a value p, the probability of winning the series for the underdog team B can be computed with the following function based on a Monte Carlo simulation:
```{r}
prob_win <- function(p){
B <- 10000
result <- replicate(B, {
b_win <- sample(c(1,0), 7, replace = TRUE, prob = c(1-p, p))
sum(b_win)>=4
})
mean(result)
}
```
Use the function `sapply` to compute the probability, call it `Pr`, of winning for `p <- seq(0.5, 0.95, 0.025)`. Then plot the result.
```{r}
p <- seq(0.5, 0.95, 0.025)
Pr <- sapply(p, prob_win)
qplot(p, Pr, geom = "line")
```
13. Repeat the exercise above, but now keep the probability fixed at `p <- 0.75` and compute the probability for different series lengths: best of 1 game, 3 games, 5 games,… Specifically, `N <- seq(1, 25, 2)`. Hint: use this function:
```{r}
prob_win <- function(N, p=0.75){
B <- 10000
result <- replicate(B, {
b_win <- sample(c(1,0), N, replace = TRUE, prob = c(1-p, p))
sum(b_win)>=(N+1)/2
})
mean(result)
}
```
```{r}
N <- seq(1, 25, 2)
Pr <- sapply(N, prob_win)
qplot(N, Pr, geom = "line")
```