Rework all tutorials #500
Conversation
Left two minor comments, but it looks good to me. Feel free to merge.
" for batch_ind, batch in enumerate(dataloader):\n", | ||
" current_batch = batch[0].numpy()\n", |
Could also be changed to
" current_batch, label_batch = batch\n", |
" dataloader = Dataset(inputs, labels)\n", | ||
" dataloader = dataloader.shuffle(seed=epoch).batch(batch_size)\n", |
Can we change this to
dataset = Dataset(inputs, labels)
for epoch in range(10):
    epoch_loss = 0.0
    # Our simple dummy dataloader must be shuffled at every epoch.
    dataloader = dataset.shuffle(seed=epoch).batch(batch_size)
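For illustration, here is a runnable sketch of that structure; the tiny Dataset class below is a hypothetical stand-in for the tutorial's dummy dataloader and only mimics the shuffle(seed)/batch(batch_size) interface used above:

import numpy as np

class Dataset:
    # Hypothetical stand-in, not the tutorial's actual implementation.
    def __init__(self, inputs, labels):
        self.inputs = np.asarray(inputs)
        self.labels = np.asarray(labels)

    def shuffle(self, seed):
        # Return a copy with rows permuted by a seed-dependent permutation.
        perm = np.random.default_rng(seed).permutation(len(self.inputs))
        return Dataset(self.inputs[perm], self.labels[perm])

    def batch(self, batch_size):
        # Yield (input_batch, label_batch) pairs.
        for start in range(0, len(self.inputs), batch_size):
            yield self.inputs[start:start + batch_size], self.labels[start:start + batch_size]

inputs = np.random.rand(16, 4).astype(np.float32)
labels = np.random.randint(0, 2, size=(16,))
batch_size = 4

dataset = Dataset(inputs, labels)
for epoch in range(10):
    epoch_loss = 0.0
    # Our simple dummy dataloader must be shuffled at every epoch.
    dataloader = dataset.shuffle(seed=epoch).batch(batch_size)
    for current_batch, label_batch in dataloader:
        epoch_loss += float(np.mean(current_batch))  # placeholder for the real loss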
Alternatively, you could also increment the seed after the last yield or something similar; then you would not have to re-initialize the dataloader every epoch.
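To sketch what that idea could look like (purely hypothetical, none of these names come from the tutorial): a loader that reshuffles itself on every pass and bumps its internal seed after the last yield, so it never has to be re-created:

import numpy as np

class SelfShufflingLoader:
    # Hypothetical illustration of the "increment the seed after the last yield" idea.
    def __init__(self, inputs, labels, batch_size, seed=0):
        self.inputs = np.asarray(inputs)
        self.labels = np.asarray(labels)
        self.batch_size = batch_size
        self.seed = seed

    def __iter__(self):
        # Shuffle with the current seed at the start of each pass.
        perm = np.random.default_rng(self.seed).permutation(len(self.inputs))
        for start in range(0, len(perm), self.batch_size):
            idx = perm[start:start + self.batch_size]
            yield self.inputs[idx], self.labels[idx]
        # Increment after the last yield, so the next epoch gets a fresh order
        # without re-initializing the loader.
        self.seed += 1

loader = SelfShufflingLoader(np.arange(8).reshape(8, 1), np.arange(8), batch_size=4)
for epoch in range(2):
    for current_batch, label_batch in loader:  # same object, new shuffle each epoch
        pass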
The above does not work. I will leave it as it is for now, feel free to change it later.
I removed 03_setting_parameters as it had very little content and I want to keep the number of tutorials small. I added its main content into tutorials 01_ and 02_.