Commit dac8710

Deploying to gh-pages from @ ca445e2 🚀

danhalligan committed Aug 22, 2024
1 parent a9c1456 commit dac8710
Showing 12 changed files with 79 additions and 75 deletions.
4 changes: 2 additions & 2 deletions 03-linear-regression.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion 05-resampling-methods.md
@@ -170,7 +170,7 @@ mean(store)
```
```
## [1] 0.6307
## [1] 0.6294
```

The probability of including $4$ when resampling numbers $1...100$ is close to
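
For reference, a minimal sketch of the analytic value this simulation approximates, assuming $100$ draws with replacement from $1...100$:

``` r
# P(a given value appears at least once in a bootstrap sample of size n
# drawn with replacement from n items) is 1 - (1 - 1/n)^n,
# which approaches 1 - 1/e (about 0.632) as n grows.
n <- 100
1 - (1 - 1/n)^n  # roughly 0.634 for n = 100
```
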
2 changes: 1 addition & 1 deletion 08-tree-based-methods.md
@@ -509,7 +509,7 @@ bartfit <- gbart(Carseats[train, 2:11], Carseats[train, 1],
## done 800 (out of 1100)
## done 900 (out of 1100)
## done 1000 (out of 1100)
## time: 3s
## time: 2s
## trcnt,tecnt: 1000,1000
```

62 changes: 31 additions & 31 deletions 10-deep-learning.md
@@ -393,15 +393,15 @@ npred <- predict(nn, x[testid, ])
```

```
## 6/6 - 0s - 54ms/epoch - 9ms/step
## 6/6 - 0s - 53ms/epoch - 9ms/step
```

``` r
mean(abs(y[testid] - npred))
```

```
## [1] 2.30611
## [1] 2.324471
```

In this case, the neural network outperforms logistic regression having a lower
@@ -433,7 +433,7 @@ model <- application_resnet50(weights = "imagenet")

```
## Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
## 8192/102967424 [..............................] - ETA: 0s 4235264/102967424 [>.............................] - ETA: 1s 18759680/102967424 [====>.........................] - ETA: 0s 36265984/102967424 [=========>....................] - ETA: 0s 53231616/102967424 [==============>...............] - ETA: 0s 70893568/102967424 [===================>..........] - ETA: 0s 86900736/102967424 [========================>.....] - ETA: 0s102967424/102967424 [==============================] - 0s 0us/step
## 8192/102967424 [..............................] - ETA: 0s 2531328/102967424 [..............................] - ETA: 1s 15474688/102967424 [===>..........................] - ETA: 0s 26615808/102967424 [======>.......................] - ETA: 0s 41254912/102967424 [===========>..................] - ETA: 0s 56344576/102967424 [===============>..............] - ETA: 0s 70606848/102967424 [===================>..........] - ETA: 0s 85114880/102967424 [=======================>......] - ETA: 0s 99860480/102967424 [============================>.] - ETA: 0s102967424/102967424 [==============================] - 0s 0us/step
```

``` r
@@ -729,7 +729,7 @@ kpred <- predict(model, xrnn[!istrain,, ])
```

```
## [1] 0.4118947
## [1] 0.4126973
```

Both models estimate the same number of coefficients/weights (16):
@@ -762,25 +762,25 @@ model$get_weights()

```
## [[1]]
## [,1]
## [1,] -0.030813020
## [2,] 0.101270206
## [3,] 0.179615468
## [4,] -0.006129819
## [5,] 0.124265596
## [6,] -0.068290100
## [7,] 0.037529659
## [8,] 0.077909760
## [9,] 0.204727829
## [10,] -0.032089159
## [11,] 0.034063924
## [12,] -0.847255647
## [13,] 0.096230716
## [14,] 0.511620998
## [15,] 0.529515982
## [,1]
## [1,] -0.03103472
## [2,] 0.09990595
## [3,] 0.16059186
## [4,] -0.00529294
## [5,] 0.12038019
## [6,] -0.04571687
## [7,] 0.03948567
## [8,] 0.07816564
## [9,] 0.18490496
## [10,] -0.02650242
## [11,] 0.03655238
## [12,] -0.83607799
## [13,] 0.10024448
## [14,] 0.51236451
## [15,] 0.52258915
##
## [[2]]
## [1] -0.006476609
## [1] -0.004416319
```

The flattened RNN has a lower $R^2$ on the test data than our `lm` model
@@ -833,11 +833,11 @@ xfun::cache_rds({
```

```
## 56/56 - 0s - 63ms/epoch - 1ms/step
## 56/56 - 0s - 64ms/epoch - 1ms/step
```

```
## [1] 0.4221822
## [1] 0.4276608
```

This approach improves our $R^2$ over the linear model above.
@@ -910,7 +910,7 @@ xfun::cache_rds({
```

```
## [1] 0.451564
## [1] 0.4472675
```

### Question 13
@@ -966,9 +966,9 @@ xfun::cache_rds({

```
## Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
## 8192/17464789 [..............................] - ETA: 0s 4202496/17464789 [======>.......................] - ETA: 0s17464789/17464789 [==============================] - 0s 0us/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 8192/17464789 [..............................] - ETA: 0s 3227648/17464789 [====>.........................] - ETA: 0s 4202496/17464789 [======>.......................] - ETA: 0s 8396800/17464789 [=============>................] - ETA: 0s17464789/17464789 [==============================] - 0s 0us/step
## 782/782 - 15s - 15s/epoch - 19ms/step
## 782/782 - 15s - 15s/epoch - 19ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
```
@@ -977,10 +977,10 @@

| Max Features| Accuracy|
|------------:|--------:|
| 1000| 0.84476|
| 3000| 0.86616|
| 5000| 0.86156|
| 10000| 0.87264|
| 1000| 0.80568|
| 3000| 0.87264|
| 5000| 0.87600|
| 10000| 0.87420|

Varying the dictionary size does not make a substantial impact on our estimates
of accuracy. However, the models do take a substantial amount of time to fit and
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-12-1.png
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-21-1.png
74 changes: 39 additions & 35 deletions deep-learning.html
@@ -750,9 +750,9 @@ <h3><span class="header-section-number">10.2.2</span> Question 7<a href="deep-le
<span id="cb707-19"><a href="deep-learning.html#cb707-19" aria-hidden="true"></a><span class="kw">plot</span>(history, <span class="dt">smooth =</span> <span class="ot">FALSE</span>)</span></code></pre></div>
<p><img src="10-deep-learning_files/figure-html/unnamed-chunk-12-1.png" width="672" /></p>
<div class="sourceCode" id="cb708"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb708-1"><a href="deep-learning.html#cb708-1" aria-hidden="true"></a>npred &lt;-<span class="st"> </span><span class="kw">predict</span>(nn, x[testid, ])</span></code></pre></div>
<pre><code>## 6/6 - 0s - 54ms/epoch - 9ms/step</code></pre>
<pre><code>## 6/6 - 0s - 53ms/epoch - 9ms/step</code></pre>
<div class="sourceCode" id="cb710"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb710-1"><a href="deep-learning.html#cb710-1" aria-hidden="true"></a><span class="kw">mean</span>(<span class="kw">abs</span>(y[testid] <span class="op">-</span><span class="st"> </span>npred))</span></code></pre></div>
<pre><code>## [1] 2.30611</code></pre>
<pre><code>## [1] 2.324471</code></pre>
<p>In this case, the neural network outperforms logistic regression having a lower
absolute error rate on the test data.</p>
</div>
@@ -779,12 +779,14 @@ <h3><span class="header-section-number">10.2.3</span> Question 8<a href="deep-le
<pre><code>## Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
##
8192/102967424 [..............................] - ETA: 0s
4235264/102967424 [&gt;.............................] - ETA: 1s
18759680/102967424 [====&gt;.........................] - ETA: 0s
36265984/102967424 [=========&gt;....................] - ETA: 0s
53231616/102967424 [==============&gt;...............] - ETA: 0s
70893568/102967424 [===================&gt;..........] - ETA: 0s
86900736/102967424 [========================&gt;.....] - ETA: 0s
2531328/102967424 [..............................] - ETA: 1s
15474688/102967424 [===&gt;..........................] - ETA: 0s
26615808/102967424 [======&gt;.......................] - ETA: 0s
41254912/102967424 [===========&gt;..................] - ETA: 0s
56344576/102967424 [===============&gt;..............] - ETA: 0s
70606848/102967424 [===================&gt;..........] - ETA: 0s
85114880/102967424 [=======================&gt;......] - ETA: 0s
99860480/102967424 [============================&gt;.] - ETA: 0s
102967424/102967424 [==============================] - 0s 0us/step</code></pre>
<div class="sourceCode" id="cb714"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb714-1"><a href="deep-learning.html#cb714-1" aria-hidden="true"></a>pred &lt;-<span class="st"> </span>model <span class="op">|</span><span class="er">&gt;</span></span>
<span id="cb714-2"><a href="deep-learning.html#cb714-2" aria-hidden="true"></a><span class="st"> </span><span class="kw">predict</span>(x) <span class="op">|</span><span class="er">&gt;</span></span>
@@ -1003,7 +1005,7 @@ <h3><span class="header-section-number">10.2.5</span> Question 10<a href="deep-l
<div class="sourceCode" id="cb733"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb733-1"><a href="deep-learning.html#cb733-1" aria-hidden="true"></a>kpred &lt;-<span class="st"> </span><span class="kw">predict</span>(model, xrnn[<span class="op">!</span>istrain,, ])</span></code></pre></div>
<pre><code>## 56/56 - 0s - 58ms/epoch - 1ms/step</code></pre>
<div class="sourceCode" id="cb735"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb735-1"><a href="deep-learning.html#cb735-1" aria-hidden="true"></a><span class="dv">1</span> <span class="op">-</span><span class="st"> </span><span class="kw">mean</span>((kpred <span class="op">-</span><span class="st"> </span>arframe[<span class="op">!</span>istrain, <span class="st">&quot;log_volume&quot;</span>])<span class="op">^</span><span class="dv">2</span>) <span class="op">/</span><span class="st"> </span>V0</span></code></pre></div>
<pre><code>## [1] 0.4118947</code></pre>
<pre><code>## [1] 0.4126973</code></pre>
<p>Both models estimate the same number of coefficients/weights (16):</p>
<div class="sourceCode" id="cb737"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb737-1"><a href="deep-learning.html#cb737-1" aria-hidden="true"></a><span class="kw">coef</span>(arfit)</span></code></pre></div>
<pre><code>## (Intercept) L1.DJ_return L1.log_volume L1.log_volatility
@@ -1022,25 +1024,25 @@ <h3><span class="header-section-number">10.2.5</span> Question 10<a href="deep-l
## -0.017206826 -0.037298183 0.008361380</code></pre>
<div class="sourceCode" id="cb739"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb739-1"><a href="deep-learning.html#cb739-1" aria-hidden="true"></a>model<span class="op">$</span><span class="kw">get_weights</span>()</span></code></pre></div>
<pre><code>## [[1]]
## [,1]
## [1,] -0.030813020
## [2,] 0.101270206
## [3,] 0.179615468
## [4,] -0.006129819
## [5,] 0.124265596
## [6,] -0.068290100
## [7,] 0.037529659
## [8,] 0.077909760
## [9,] 0.204727829
## [10,] -0.032089159
## [11,] 0.034063924
## [12,] -0.847255647
## [13,] 0.096230716
## [14,] 0.511620998
## [15,] 0.529515982
## [,1]
## [1,] -0.03103472
## [2,] 0.09990595
## [3,] 0.16059186
## [4,] -0.00529294
## [5,] 0.12038019
## [6,] -0.04571687
## [7,] 0.03948567
## [8,] 0.07816564
## [9,] 0.18490496
## [10,] -0.02650242
## [11,] 0.03655238
## [12,] -0.83607799
## [13,] 0.10024448
## [14,] 0.51236451
## [15,] 0.52258915
##
## [[2]]
## [1] -0.006476609</code></pre>
## [1] -0.004416319</code></pre>
<p>The flattened RNN has a lower <span class="math inline">\(R^2\)</span> on the test data than our <code>lm</code> model
above. The <code>lm</code> model is quicker to fit and conceptually simpler also
giving us the ability to inspect the coefficients for different variables.</p>
@@ -1086,8 +1088,8 @@ <h3><span class="header-section-number">10.2.6</span> Question 11<a href="deep-l
<span id="cb741-27"><a href="deep-learning.html#cb741-27" aria-hidden="true"></a> <span class="dv">1</span> <span class="op">-</span><span class="st"> </span><span class="kw">mean</span>((kpred <span class="op">-</span><span class="st"> </span>arframe[<span class="op">!</span>istrain, <span class="st">&quot;log_volume&quot;</span>])<span class="op">^</span><span class="dv">2</span>) <span class="op">/</span><span class="st"> </span>V0</span>
<span id="cb741-28"><a href="deep-learning.html#cb741-28" aria-hidden="true"></a></span>
<span id="cb741-29"><a href="deep-learning.html#cb741-29" aria-hidden="true"></a>})</span></code></pre></div>
<pre><code>## 56/56 - 0s - 63ms/epoch - 1ms/step</code></pre>
<pre><code>## [1] 0.4221822</code></pre>
<pre><code>## 56/56 - 0s - 64ms/epoch - 1ms/step</code></pre>
<pre><code>## [1] 0.4276608</code></pre>
<p>This approach improves our <span class="math inline">\(R^2\)</span> over the linear model above.</p>
</div>
<div id="question-12-4" class="section level3 hasAnchor" number="10.2.7">
@@ -1150,7 +1152,7 @@ <h3><span class="header-section-number">10.2.7</span> Question 12<a href="deep-l
<span id="cb744-48"><a href="deep-learning.html#cb744-48" aria-hidden="true"></a></span>
<span id="cb744-49"><a href="deep-learning.html#cb744-49" aria-hidden="true"></a>})</span></code></pre></div>
<pre><code>## 56/56 - 0s - 135ms/epoch - 2ms/step</code></pre>
<pre><code>## [1] 0.451564</code></pre>
<pre><code>## [1] 0.4472675</code></pre>
</div>
<div id="question-13-3" class="section level3 hasAnchor" number="10.2.8">
<h3><span class="header-section-number">10.2.8</span> Question 13<a href="deep-learning.html#question-13-3" class="anchor-section" aria-label="Anchor link to header"></a></h3>
@@ -1203,10 +1205,12 @@ <h3><span class="header-section-number">10.2.8</span> Question 13<a href="deep-l
<pre><code>## Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
##
8192/17464789 [..............................] - ETA: 0s
3227648/17464789 [====&gt;.........................] - ETA: 0s
4202496/17464789 [======&gt;.......................] - ETA: 0s
8396800/17464789 [=============&gt;................] - ETA: 0s
17464789/17464789 [==============================] - 0s 0us/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 19ms/step
## 782/782 - 15s - 15s/epoch - 19ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step</code></pre>
<table>
@@ -1219,19 +1223,19 @@ <h3><span class="header-section-number">10.2.8</span> Question 13<a href="deep-l
<tbody>
<tr class="odd">
<td align="right">1000</td>
<td align="right">0.84476</td>
<td align="right">0.80568</td>
</tr>
<tr class="even">
<td align="right">3000</td>
<td align="right">0.86616</td>
<td align="right">0.87264</td>
</tr>
<tr class="odd">
<td align="right">5000</td>
<td align="right">0.86156</td>
<td align="right">0.87600</td>
</tr>
<tr class="even">
<td align="right">10000</td>
<td align="right">0.87264</td>
<td align="right">0.87420</td>
</tr>
</tbody>
</table>