Deploying to gh-pages from @ ca445e2 🚀
danhalligan committed Aug 22, 2024
1 parent 4277885 commit b2bcd45
Showing 14 changed files with 94 additions and 88 deletions.
4 changes: 2 additions & 2 deletions 03-linear-regression.md

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions 04-classification.md
@@ -679,16 +679,16 @@ fit <- knn(
```
##
## fit Down Up
## Down 21 29
## Up 22 32
## Down 21 30
## Up 22 31
```

``` r
sum(diag(t)) / sum(t)
```

```
## [1] 0.5096154
## [1] 0.5
```
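The accuracy above is the sum of the diagonal of the confusion table divided by its grand total. A minimal Python sketch of the same arithmetic using the updated counts (an illustrative mirror of the R one-liner, not the book's code):

```python
# Confusion matrix from the KNN fit above (rows: predicted, cols: actual).
t = [[21, 30],   # predicted Down: 21 actual Down, 30 actual Up
     [22, 31]]   # predicted Up:   22 actual Down, 31 actual Up

correct = t[0][0] + t[1][1]            # diagonal: correct predictions
total = sum(sum(row) for row in t)     # all 104 test observations
print(correct / total)                 # -> 0.5
```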

> h. Repeat (d) using naive Bayes.
2 changes: 1 addition & 1 deletion 05-resampling-methods.md
@@ -170,7 +170,7 @@ mean(store)
```
```
## [1] 0.6424
## [1] 0.6308
```

The probability of including $4$ when resampling numbers $1...100$ is close to
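The value above can be checked in closed form: the chance that a particular observation appears in a bootstrap sample of size $n$ is $1 - (1 - 1/n)^n$, which tends to $1 - 1/e \approx 0.632$. A quick Python check (illustrative only, not the book's code):

```python
import random

n = 100
exact = 1 - (1 - 1 / n) ** n           # closed-form inclusion probability
print(round(exact, 4))                 # -> 0.634

# Monte Carlo estimate, analogous to mean(store) in the R snippet above
random.seed(1)
trials = 10_000
hits = sum(4 in {random.randint(1, n) for _ in range(n)}
           for _ in range(trials))
print(hits / trials)                   # close to the exact value
```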
2 changes: 1 addition & 1 deletion 08-tree-based-methods.md
@@ -1150,7 +1150,7 @@ bart <- gbart(College[train, pred], College[train, "Outstate"],
## done 800 (out of 1100)
## done 900 (out of 1100)
## done 1000 (out of 1100)
## time: 3s
## time: 4s
## trcnt,tecnt: 1000,1000
```

66 changes: 33 additions & 33 deletions 10-deep-learning.md
@@ -393,15 +393,15 @@ npred <- predict(nn, x[testid, ])
```

```
## 6/6 - 0s - 54ms/epoch - 9ms/step
## 6/6 - 0s - 61ms/epoch - 10ms/step
```

``` r
mean(abs(y[testid] - npred))
```

```
## [1] 2.334041
## [1] 2.219039
```
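The figure just printed is the test-set mean absolute error, `mean(abs(y[testid] - npred))` in the R above. The same metric in Python, on toy inputs rather than the book's data:

```python
def mae(y_true, y_pred):
    """Mean absolute error: average of |actual - predicted|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: absolute errors of 1.0, 0.5 and 2.0 average to about 1.167
print(mae([5.0, 3.0, 6.0], [4.0, 3.5, 8.0]))
```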

In this case, the neural network outperforms logistic regression, having a lower
@@ -433,7 +433,7 @@ model <- application_resnet50(weights = "imagenet")

```
## Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
## 8192/102967424 [..............................] - ETA: 0s 8085504/102967424 [=>............................] - ETA: 0s 21987328/102967424 [=====>........................] - ETA: 0s 36618240/102967424 [=========>....................] - ETA: 0s 51453952/102967424 [=============>................] - ETA: 0s 66551808/102967424 [==================>...........] - ETA: 0s 80912384/102967424 [======================>.......] - ETA: 0s 95641600/102967424 [==========================>...] - ETA: 0s102967424/102967424 [==============================] - 0s 0us/step
## 8192/102967424 [..............................] - ETA: 0s 3956736/102967424 [>.............................] - ETA: 1s 4202496/102967424 [>.............................] - ETA: 2s 8396800/102967424 [=>............................] - ETA: 1s 16785408/102967424 [===>..........................] - ETA: 1s 25174016/102967424 [======>.......................] - ETA: 1s 33562624/102967424 [========>.....................] - ETA: 0s 41951232/102967424 [===========>..................] - ETA: 0s 50905088/102967424 [=============>................] - ETA: 0s 58728448/102967424 [================>.............] - ETA: 0s 67117056/102967424 [==================>...........] - ETA: 0s 83894272/102967424 [=======================>......] - ETA: 0s101908480/102967424 [============================>.] - ETA: 0s102967424/102967424 [==============================] - 1s 0us/step
```

``` r
@@ -729,7 +729,7 @@ kpred <- predict(model, xrnn[!istrain,, ])
```

```
## [1] 0.4133125
## [1] 0.412886
```

Both models estimate the same number of coefficients/weights (16):
@@ -762,25 +762,25 @@ model$get_weights()

```
## [[1]]
## [,1]
## [1,] -0.03262059
## [2,] 0.09806149
## [3,] 0.19123746
## [4,] -0.00672294
## [5,] 0.11956818
## [6,] -0.08616812
## [7,] 0.03884261
## [8,] 0.07576967
## [9,] 0.16982540
## [10,] -0.02789208
## [11,] 0.02615459
## [12,] -0.76362336
## [13,] 0.09488130
## [14,] 0.51370680
## [15,] 0.48065400
## [,1]
## [1,] -0.031145222
## [2,] 0.101065643
## [3,] 0.141815767
## [4,] -0.004181504
## [5,] 0.116010934
## [6,] -0.003764492
## [7,] 0.038601257
## [8,] 0.078083567
## [9,] 0.137415737
## [10,] -0.029184511
## [11,] 0.036070298
## [12,] -0.821708620
## [13,] 0.095548652
## [14,] 0.511229098
## [15,] 0.521453559
##
## [[2]]
## [1] -0.005785846
## [1] -0.006889343
```

The flattened RNN has a lower $R^2$ on the test data than our `lm` model
@@ -833,11 +833,11 @@ xfun::cache_rds({
```

```
## 56/56 - 0s - 64ms/epoch - 1ms/step
## 56/56 - 0s - 66ms/epoch - 1ms/step
```

```
## [1] 0.4267343
## [1] 0.4271516
```

This approach improves our $R^2$ over the linear model above.
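The $R^2$ quoted in these comparisons is one minus the residual sum of squares over the total sum of squares on the held-out data. A small Python sketch of that formula (toy numbers, not the book's data):

```python
def r_squared(y_true, y_pred):
    """1 - RSS/TSS evaluated on test observations."""
    mean_y = sum(y_true) / len(y_true)
    rss = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    tss = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - rss / tss

print(r_squared([3.0, 1.0, 2.0], [2.5, 0.5, 2.0]))  # -> 0.75
```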
@@ -906,11 +906,11 @@ xfun::cache_rds({
```

```
## 56/56 - 0s - 136ms/epoch - 2ms/step
## 56/56 - 0s - 133ms/epoch - 2ms/step
```

```
## [1] 0.4447892
## [1] 0.4405331
```

### Question 13
@@ -966,21 +966,21 @@ xfun::cache_rds({

```
## Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
## 8192/17464789 [..............................] - ETA: 0s 7127040/17464789 [===========>..................] - ETA: 0s 8396800/17464789 [=============>................] - ETA: 0s17464789/17464789 [==============================] - 0s 0us/step
## 8192/17464789 [..............................] - ETA: 0s 3784704/17464789 [=====>........................] - ETA: 0s 4202496/17464789 [======>.......................] - ETA: 0s 8396800/17464789 [=============>................] - ETA: 0s17464789/17464789 [==============================] - 0s 0us/step
## 782/782 - 16s - 16s/epoch - 20ms/step
## 782/782 - 16s - 16s/epoch - 20ms/step
## 782/782 - 16s - 16s/epoch - 20ms/step
## 782/782 - 16s - 16s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
```



| Max Features| Accuracy|
|------------:|--------:|
| 1000| 0.84516|
| 3000| 0.87840|
| 5000| 0.86400|
| 10000| 0.87200|
| 1000| 0.86084|
| 3000| 0.87224|
| 5000| 0.87460|
| 10000| 0.86180|

Varying the dictionary size does not have a substantial impact on our estimates
of accuracy. However, the models do take a substantial amount of time to fit and
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-12-1.png
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-21-1.png
6 changes: 3 additions & 3 deletions classification.html
@@ -1016,10 +1016,10 @@ <h3><span class="header-section-number">4.2.1</span> Question 13<a href="classif
<span id="cb199-6"><a href="classification.html#cb199-6" aria-hidden="true"></a>(t &lt;-<span class="st"> </span><span class="kw">table</span>(fit, Weekly[<span class="op">!</span>train, ]<span class="op">$</span>Direction))</span></code></pre></div>
<pre><code>##
## fit Down Up
## Down 21 29
## Up 22 32</code></pre>
## Down 21 30
## Up 22 31</code></pre>
<div class="sourceCode" id="cb201"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb201-1"><a href="classification.html#cb201-1" aria-hidden="true"></a><span class="kw">sum</span>(<span class="kw">diag</span>(t)) <span class="op">/</span><span class="st"> </span><span class="kw">sum</span>(t)</span></code></pre></div>
<pre><code>## [1] 0.5096154</code></pre>
<pre><code>## [1] 0.5</code></pre>
<blockquote>
<ol start="8" style="list-style-type: lower-alpha">
<li>Repeat (d) using naive Bayes.</li>