
remove output
Signed-off-by: YunLiu <[email protected]>
KumoLiu committed Mar 20, 2024
1 parent 6f94f08 commit f07fb66
Showing 1 changed file with 2 additions and 201 deletions.
203 changes: 2 additions & 201 deletions pathology/tumor_detection/ignite/profiling_camelyon_pipeline.ipynb
@@ -152,208 +152,9 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'root': './', 'train_file': './training.csv', 'valid_file': './training.csv', 'logdir': './logs/', 'region_size': 768, 'grid_shape': 3, 'patch_size': 224, 'batch_size': 10, 'n_epochs': 10, 'lr': 0.001, 'use_openslide': False, 'amp': True, 'novograd': True, 'pretrain': False, 'num_workers': 0, 'gpu': '0'}\n",
"Logs and model are saved at './logs/240320-141645_resnet18_ps224_bs10_ep10_lr0.001'.\n",
"CUDA is being used with GPU Id(s): 0\n",
"image: \n",
" shape torch.Size([90, 3, 224, 224])\n",
" type: <class 'monai.data.meta_tensor.MetaTensor'>\n",
" dtype: torch.float32\n",
"labels: \n",
" shape torch.Size([90, 1, 1, 1])\n",
" type: <class 'monai.data.meta_tensor.MetaTensor'>\n",
" dtype: torch.float32\n",
"batch size: 10\n",
"train number of batches: 300\n",
"valid number of batches: 300\n",
"INFO:ignite.engine.engine.SupervisedTrainer:Engine run resuming from iteration 0, epoch 0 until 10 epochs\n",
"2024-03-20 14:16:46,951 - INFO - Epoch: 1/10, Iter: 1/300 -- train_loss: 0.6897 \n",
"2024-03-20 14:16:47,378 - INFO - Epoch: 1/10, Iter: 2/300 -- train_loss: 0.6739 \n",
"2024-03-20 14:16:47,810 - INFO - Epoch: 1/10, Iter: 3/300 -- train_loss: 0.6583 \n",
"2024-03-20 14:16:48,240 - INFO - Epoch: 1/10, Iter: 4/300 -- train_loss: 0.6184 \n",
"2024-03-20 14:16:48,662 - INFO - Epoch: 1/10, Iter: 5/300 -- train_loss: 0.6225 \n",
"2024-03-20 14:16:49,099 - INFO - Epoch: 1/10, Iter: 6/300 -- train_loss: 0.5696 \n",
"2024-03-20 14:16:49,531 - INFO - Epoch: 1/10, Iter: 7/300 -- train_loss: 0.6489 \n",
"2024-03-20 14:16:49,974 - INFO - Epoch: 1/10, Iter: 8/300 -- train_loss: 0.6573 \n",
"2024-03-20 14:16:50,418 - INFO - Epoch: 1/10, Iter: 9/300 -- train_loss: 0.7667 \n",
"2024-03-20 14:16:50,851 - INFO - Epoch: 1/10, Iter: 10/300 -- train_loss: 0.8260 \n",
"2024-03-20 14:16:51,280 - INFO - Epoch: 1/10, Iter: 11/300 -- train_loss: 0.5951 \n",
"2024-03-20 14:16:51,711 - INFO - Epoch: 1/10, Iter: 12/300 -- train_loss: 0.6774 \n",
"2024-03-20 14:16:52,150 - INFO - Epoch: 1/10, Iter: 13/300 -- train_loss: 0.4473 \n",
"2024-03-20 14:16:52,595 - INFO - Epoch: 1/10, Iter: 14/300 -- train_loss: 0.4918 \n",
"2024-03-20 14:16:53,071 - INFO - Epoch: 1/10, Iter: 15/300 -- train_loss: 0.6562 \n",
"2024-03-20 14:16:53,502 - INFO - Epoch: 1/10, Iter: 16/300 -- train_loss: 0.6771 \n",
"2024-03-20 14:16:53,937 - INFO - Epoch: 1/10, Iter: 17/300 -- train_loss: 0.5090 \n",
"2024-03-20 14:16:54,383 - INFO - Epoch: 1/10, Iter: 18/300 -- train_loss: 0.5514 \n",
"2024-03-20 14:16:54,816 - INFO - Epoch: 1/10, Iter: 19/300 -- train_loss: 0.7518 \n",
"2024-03-20 14:16:55,249 - INFO - Epoch: 1/10, Iter: 20/300 -- train_loss: 0.5011 \n",
"2024-03-20 14:16:55,685 - INFO - Epoch: 1/10, Iter: 21/300 -- train_loss: 0.6055 \n",
"2024-03-20 14:16:56,132 - INFO - Epoch: 1/10, Iter: 22/300 -- train_loss: 0.5906 \n",
"2024-03-20 14:16:56,671 - INFO - Epoch: 1/10, Iter: 23/300 -- train_loss: 0.5019 \n",
"2024-03-20 14:16:57,166 - INFO - Epoch: 1/10, Iter: 24/300 -- train_loss: 0.5021 \n",
"2024-03-20 14:16:57,663 - INFO - Epoch: 1/10, Iter: 25/300 -- train_loss: 0.5797 \n",
"2024-03-20 14:16:58,177 - INFO - Epoch: 1/10, Iter: 26/300 -- train_loss: 0.5811 \n",
"2024-03-20 14:16:58,694 - INFO - Epoch: 1/10, Iter: 27/300 -- train_loss: 0.4111 \n",
"2024-03-20 14:16:59,197 - INFO - Epoch: 1/10, Iter: 28/300 -- train_loss: 0.5173 \n",
"2024-03-20 14:16:59,702 - INFO - Epoch: 1/10, Iter: 29/300 -- train_loss: 0.5535 \n",
"2024-03-20 14:17:00,209 - INFO - Epoch: 1/10, Iter: 30/300 -- train_loss: 0.5420 \n",
"2024-03-20 14:17:00,717 - INFO - Epoch: 1/10, Iter: 31/300 -- train_loss: 0.4536 \n",
"2024-03-20 14:17:01,216 - INFO - Epoch: 1/10, Iter: 32/300 -- train_loss: 0.4920 \n",
"2024-03-20 14:17:01,739 - INFO - Epoch: 1/10, Iter: 33/300 -- train_loss: 0.4702 \n",
"2024-03-20 14:17:02,189 - INFO - Epoch: 1/10, Iter: 34/300 -- train_loss: 0.4269 \n",
"2024-03-20 14:17:02,648 - INFO - Epoch: 1/10, Iter: 35/300 -- train_loss: 0.5104 \n",
"2024-03-20 14:17:03,102 - INFO - Epoch: 1/10, Iter: 36/300 -- train_loss: 0.3928 \n",
"2024-03-20 14:17:03,557 - INFO - Epoch: 1/10, Iter: 37/300 -- train_loss: 0.4403 \n",
"2024-03-20 14:17:04,005 - INFO - Epoch: 1/10, Iter: 38/300 -- train_loss: 0.5788 \n",
"2024-03-20 14:17:04,452 - INFO - Epoch: 1/10, Iter: 39/300 -- train_loss: 0.5200 \n",
"2024-03-20 14:17:04,902 - INFO - Epoch: 1/10, Iter: 40/300 -- train_loss: 0.5064 \n",
"2024-03-20 14:17:05,369 - INFO - Epoch: 1/10, Iter: 41/300 -- train_loss: 0.5104 \n",
"2024-03-20 14:17:05,810 - INFO - Epoch: 1/10, Iter: 42/300 -- train_loss: 0.5819 \n",
"2024-03-20 14:17:06,247 - INFO - Epoch: 1/10, Iter: 43/300 -- train_loss: 0.5133 \n",
"2024-03-20 14:17:06,705 - INFO - Epoch: 1/10, Iter: 44/300 -- train_loss: 0.4662 \n",
"2024-03-20 14:17:07,243 - INFO - Epoch: 1/10, Iter: 45/300 -- train_loss: 0.3947 \n",
"2024-03-20 14:17:07,745 - INFO - Epoch: 1/10, Iter: 46/300 -- train_loss: 0.4615 \n",
"2024-03-20 14:17:08,185 - INFO - Epoch: 1/10, Iter: 47/300 -- train_loss: 0.3967 \n",
"2024-03-20 14:17:08,622 - INFO - Epoch: 1/10, Iter: 48/300 -- train_loss: 0.5709 \n",
"2024-03-20 14:17:09,057 - INFO - Epoch: 1/10, Iter: 49/300 -- train_loss: 0.4810 \n",
"2024-03-20 14:17:09,510 - INFO - Epoch: 1/10, Iter: 50/300 -- train_loss: 0.5632 \n",
"2024-03-20 14:17:09,947 - INFO - Epoch: 1/10, Iter: 51/300 -- train_loss: 0.4619 \n",
"2024-03-20 14:17:10,385 - INFO - Epoch: 1/10, Iter: 52/300 -- train_loss: 0.4463 \n",
"2024-03-20 14:17:10,826 - INFO - Epoch: 1/10, Iter: 53/300 -- train_loss: 0.3804 \n",
"2024-03-20 14:17:11,264 - INFO - Epoch: 1/10, Iter: 54/300 -- train_loss: 0.4547 \n",
"2024-03-20 14:17:11,722 - INFO - Epoch: 1/10, Iter: 55/300 -- train_loss: 0.6589 \n",
"2024-03-20 14:17:12,175 - INFO - Epoch: 1/10, Iter: 56/300 -- train_loss: 0.3543 \n",
"2024-03-20 14:17:12,635 - INFO - Epoch: 1/10, Iter: 57/300 -- train_loss: 0.4939 \n",
"2024-03-20 14:17:13,107 - INFO - Epoch: 1/10, Iter: 58/300 -- train_loss: 0.3955 \n",
"2024-03-20 14:17:13,585 - INFO - Epoch: 1/10, Iter: 59/300 -- train_loss: 0.5342 \n",
"2024-03-20 14:17:14,037 - INFO - Epoch: 1/10, Iter: 60/300 -- train_loss: 0.5049 \n",
"2024-03-20 14:17:14,481 - INFO - Epoch: 1/10, Iter: 61/300 -- train_loss: 0.4974 \n",
"2024-03-20 14:17:14,931 - INFO - Epoch: 1/10, Iter: 62/300 -- train_loss: 0.4929 \n",
"2024-03-20 14:17:15,398 - INFO - Epoch: 1/10, Iter: 63/300 -- train_loss: 0.5162 \n",
"2024-03-20 14:17:15,872 - INFO - Epoch: 1/10, Iter: 64/300 -- train_loss: 0.4614 \n",
"2024-03-20 14:17:16,321 - INFO - Epoch: 1/10, Iter: 65/300 -- train_loss: 0.4094 \n",
"2024-03-20 14:17:16,770 - INFO - Epoch: 1/10, Iter: 66/300 -- train_loss: 0.4510 \n",
"2024-03-20 14:17:17,228 - INFO - Epoch: 1/10, Iter: 67/300 -- train_loss: 0.5105 \n",
"2024-03-20 14:17:17,698 - INFO - Epoch: 1/10, Iter: 68/300 -- train_loss: 0.4204 \n",
"2024-03-20 14:17:18,164 - INFO - Epoch: 1/10, Iter: 69/300 -- train_loss: 0.5269 \n",
"2024-03-20 14:17:18,613 - INFO - Epoch: 1/10, Iter: 70/300 -- train_loss: 0.3737 \n",
"2024-03-20 14:17:19,064 - INFO - Epoch: 1/10, Iter: 71/300 -- train_loss: 0.4546 \n",
"2024-03-20 14:17:19,516 - INFO - Epoch: 1/10, Iter: 72/300 -- train_loss: 0.4940 \n",
"2024-03-20 14:17:19,986 - INFO - Epoch: 1/10, Iter: 73/300 -- train_loss: 0.6451 \n",
"2024-03-20 14:17:20,434 - INFO - Epoch: 1/10, Iter: 74/300 -- train_loss: 0.4267 \n",
"2024-03-20 14:17:20,886 - INFO - Epoch: 1/10, Iter: 75/300 -- train_loss: 0.4723 \n",
"2024-03-20 14:17:21,340 - INFO - Epoch: 1/10, Iter: 76/300 -- train_loss: 0.4265 \n",
"2024-03-20 14:17:21,782 - INFO - Epoch: 1/10, Iter: 77/300 -- train_loss: 0.4092 \n",
"2024-03-20 14:17:22,225 - INFO - Epoch: 1/10, Iter: 78/300 -- train_loss: 0.6169 \n",
"2024-03-20 14:17:22,671 - INFO - Epoch: 1/10, Iter: 79/300 -- train_loss: 0.4281 \n",
"2024-03-20 14:17:23,124 - INFO - Epoch: 1/10, Iter: 80/300 -- train_loss: 0.4716 \n",
"2024-03-20 14:17:23,568 - INFO - Epoch: 1/10, Iter: 81/300 -- train_loss: 0.4013 \n",
"2024-03-20 14:17:24,028 - INFO - Epoch: 1/10, Iter: 82/300 -- train_loss: 0.5401 \n",
"2024-03-20 14:17:24,470 - INFO - Epoch: 1/10, Iter: 83/300 -- train_loss: 0.4899 \n",
"2024-03-20 14:17:24,922 - INFO - Epoch: 1/10, Iter: 84/300 -- train_loss: 0.3732 \n",
"2024-03-20 14:17:25,366 - INFO - Epoch: 1/10, Iter: 85/300 -- train_loss: 0.2542 \n",
"2024-03-20 14:17:25,817 - INFO - Epoch: 1/10, Iter: 86/300 -- train_loss: 0.3251 \n",
"2024-03-20 14:17:26,282 - INFO - Epoch: 1/10, Iter: 87/300 -- train_loss: 0.4266 \n",
"2024-03-20 14:17:26,726 - INFO - Epoch: 1/10, Iter: 88/300 -- train_loss: 0.3316 \n",
"2024-03-20 14:17:27,167 - INFO - Epoch: 1/10, Iter: 89/300 -- train_loss: 0.3020 \n",
"2024-03-20 14:17:27,624 - INFO - Epoch: 1/10, Iter: 90/300 -- train_loss: 0.3618 \n",
"2024-03-20 14:17:28,078 - INFO - Epoch: 1/10, Iter: 91/300 -- train_loss: 0.6237 \n",
"2024-03-20 14:17:28,520 - INFO - Epoch: 1/10, Iter: 92/300 -- train_loss: 0.5555 \n",
"2024-03-20 14:17:28,969 - INFO - Epoch: 1/10, Iter: 93/300 -- train_loss: 0.3036 \n",
"2024-03-20 14:17:29,453 - INFO - Epoch: 1/10, Iter: 94/300 -- train_loss: 0.3390 \n",
"2024-03-20 14:17:29,899 - INFO - Epoch: 1/10, Iter: 95/300 -- train_loss: 0.3101 \n",
"2024-03-20 14:17:30,401 - INFO - Epoch: 1/10, Iter: 96/300 -- train_loss: 0.3370 \n",
"2024-03-20 14:17:30,902 - INFO - Epoch: 1/10, Iter: 97/300 -- train_loss: 0.5054 \n",
"2024-03-20 14:17:31,407 - INFO - Epoch: 1/10, Iter: 98/300 -- train_loss: 0.3124 \n",
"2024-03-20 14:17:31,906 - INFO - Epoch: 1/10, Iter: 99/300 -- train_loss: 0.4102 \n",
"2024-03-20 14:17:32,427 - INFO - Epoch: 1/10, Iter: 100/300 -- train_loss: 0.3778 \n",
"2024-03-20 14:17:32,939 - INFO - Epoch: 1/10, Iter: 101/300 -- train_loss: 0.4827 \n",
"2024-03-20 14:17:33,444 - INFO - Epoch: 1/10, Iter: 102/300 -- train_loss: 0.3077 \n",
"2024-03-20 14:17:33,945 - INFO - Epoch: 1/10, Iter: 103/300 -- train_loss: 0.3601 \n",
"2024-03-20 14:17:34,454 - INFO - Epoch: 1/10, Iter: 104/300 -- train_loss: 0.3990 \n",
"2024-03-20 14:17:34,951 - INFO - Epoch: 1/10, Iter: 105/300 -- train_loss: 0.4200 \n",
"2024-03-20 14:17:35,452 - INFO - Epoch: 1/10, Iter: 106/300 -- train_loss: 0.4044 \n",
"2024-03-20 14:17:35,955 - INFO - Epoch: 1/10, Iter: 107/300 -- train_loss: 0.6881 \n",
"2024-03-20 14:17:36,454 - INFO - Epoch: 1/10, Iter: 108/300 -- train_loss: 0.4339 \n",
"2024-03-20 14:17:36,953 - INFO - Epoch: 1/10, Iter: 109/300 -- train_loss: 0.4136 \n",
"2024-03-20 14:17:37,456 - INFO - Epoch: 1/10, Iter: 110/300 -- train_loss: 0.3997 \n",
"2024-03-20 14:17:37,957 - INFO - Epoch: 1/10, Iter: 111/300 -- train_loss: 0.3197 \n",
"2024-03-20 14:17:38,477 - INFO - Epoch: 1/10, Iter: 112/300 -- train_loss: 0.6071 \n",
"2024-03-20 14:17:38,987 - INFO - Epoch: 1/10, Iter: 113/300 -- train_loss: 0.4153 \n",
"2024-03-20 14:17:39,459 - INFO - Epoch: 1/10, Iter: 114/300 -- train_loss: 0.4086 \n",
"2024-03-20 14:17:39,930 - INFO - Epoch: 1/10, Iter: 115/300 -- train_loss: 0.3210 \n",
"2024-03-20 14:17:40,388 - INFO - Epoch: 1/10, Iter: 116/300 -- train_loss: 0.4856 \n",
"2024-03-20 14:17:40,869 - INFO - Epoch: 1/10, Iter: 117/300 -- train_loss: 0.4541 \n",
"2024-03-20 14:17:41,318 - INFO - Epoch: 1/10, Iter: 118/300 -- train_loss: 0.3256 \n",
"2024-03-20 14:17:41,774 - INFO - Epoch: 1/10, Iter: 119/300 -- train_loss: 0.3684 \n",
"2024-03-20 14:17:42,234 - INFO - Epoch: 1/10, Iter: 120/300 -- train_loss: 0.3328 \n",
"2024-03-20 14:17:42,704 - INFO - Epoch: 1/10, Iter: 121/300 -- train_loss: 0.2817 \n",
"2024-03-20 14:17:43,155 - INFO - Epoch: 1/10, Iter: 122/300 -- train_loss: 0.3444 \n",
"2024-03-20 14:17:43,616 - INFO - Epoch: 1/10, Iter: 123/300 -- train_loss: 0.3701 \n",
"2024-03-20 14:17:44,092 - INFO - Epoch: 1/10, Iter: 124/300 -- train_loss: 0.4369 \n",
"2024-03-20 14:17:44,556 - INFO - Epoch: 1/10, Iter: 125/300 -- train_loss: 0.3276 \n",
"2024-03-20 14:17:45,021 - INFO - Epoch: 1/10, Iter: 126/300 -- train_loss: 0.3739 \n",
"2024-03-20 14:17:45,472 - INFO - Epoch: 1/10, Iter: 127/300 -- train_loss: 0.3354 \n",
"2024-03-20 14:17:45,927 - INFO - Epoch: 1/10, Iter: 128/300 -- train_loss: 0.3925 \n",
"2024-03-20 14:17:46,376 - INFO - Epoch: 1/10, Iter: 129/300 -- train_loss: 0.2664 \n",
"2024-03-20 14:17:46,957 - INFO - Epoch: 1/10, Iter: 130/300 -- train_loss: 0.3081 \n",
"2024-03-20 14:17:47,407 - INFO - Epoch: 1/10, Iter: 131/300 -- train_loss: 0.2207 \n",
"2024-03-20 14:17:47,845 - INFO - Epoch: 1/10, Iter: 132/300 -- train_loss: 0.6715 \n",
"2024-03-20 14:17:48,287 - INFO - Epoch: 1/10, Iter: 133/300 -- train_loss: 0.2616 \n",
"2024-03-20 14:17:48,924 - INFO - Epoch: 1/10, Iter: 134/300 -- train_loss: 0.2184 \n",
"2024-03-20 14:17:49,366 - INFO - Epoch: 1/10, Iter: 135/300 -- train_loss: 0.2279 \n",
"2024-03-20 14:17:49,796 - INFO - Epoch: 1/10, Iter: 136/300 -- train_loss: 0.3357 \n",
"2024-03-20 14:17:50,237 - INFO - Epoch: 1/10, Iter: 137/300 -- train_loss: 0.3705 \n",
"2024-03-20 14:17:50,671 - INFO - Epoch: 1/10, Iter: 138/300 -- train_loss: 0.2834 \n",
"2024-03-20 14:17:51,118 - INFO - Epoch: 1/10, Iter: 139/300 -- train_loss: 0.2815 \n",
"2024-03-20 14:17:51,561 - INFO - Epoch: 1/10, Iter: 140/300 -- train_loss: 0.5719 \n",
"2024-03-20 14:17:52,001 - INFO - Epoch: 1/10, Iter: 141/300 -- train_loss: 0.3141 \n",
"2024-03-20 14:17:52,453 - INFO - Epoch: 1/10, Iter: 142/300 -- train_loss: 0.2696 \n",
"2024-03-20 14:17:52,897 - INFO - Epoch: 1/10, Iter: 143/300 -- train_loss: 0.3604 \n",
"2024-03-20 14:17:53,352 - INFO - Epoch: 1/10, Iter: 144/300 -- train_loss: 0.3254 \n",
"2024-03-20 14:17:53,786 - INFO - Epoch: 1/10, Iter: 145/300 -- train_loss: 0.3528 \n",
"2024-03-20 14:17:54,225 - INFO - Epoch: 1/10, Iter: 146/300 -- train_loss: 0.3061 \n",
"2024-03-20 14:17:54,658 - INFO - Epoch: 1/10, Iter: 147/300 -- train_loss: 0.2493 \n",
"2024-03-20 14:17:55,089 - INFO - Epoch: 1/10, Iter: 148/300 -- train_loss: 0.3965 \n",
"2024-03-20 14:17:55,522 - INFO - Epoch: 1/10, Iter: 149/300 -- train_loss: 0.3770 \n",
"2024-03-20 14:17:55,955 - INFO - Epoch: 1/10, Iter: 150/300 -- train_loss: 0.2256 \n",
"2024-03-20 14:17:56,398 - INFO - Epoch: 1/10, Iter: 151/300 -- train_loss: 0.5284 \n",
"2024-03-20 14:17:56,871 - INFO - Epoch: 1/10, Iter: 152/300 -- train_loss: 0.3276 \n",
"2024-03-20 14:17:57,308 - INFO - Epoch: 1/10, Iter: 153/300 -- train_loss: 0.2415 \n",
"2024-03-20 14:17:57,749 - INFO - Epoch: 1/10, Iter: 154/300 -- train_loss: 0.5705 \n",
"2024-03-20 14:17:58,182 - INFO - Epoch: 1/10, Iter: 155/300 -- train_loss: 0.4241 \n",
"2024-03-20 14:17:58,617 - INFO - Epoch: 1/10, Iter: 156/300 -- train_loss: 0.2897 \n",
"2024-03-20 14:17:59,048 - INFO - Epoch: 1/10, Iter: 157/300 -- train_loss: 0.3383 \n",
"2024-03-20 14:17:59,748 - INFO - Epoch: 1/10, Iter: 158/300 -- train_loss: 0.2568 \n",
"2024-03-20 14:18:00,205 - INFO - Epoch: 1/10, Iter: 159/300 -- train_loss: 0.3865 \n",
"2024-03-20 14:18:00,664 - INFO - Epoch: 1/10, Iter: 160/300 -- train_loss: 0.3544 \n",
"2024-03-20 14:18:01,116 - INFO - Epoch: 1/10, Iter: 161/300 -- train_loss: 0.5614 \n",
"2024-03-20 14:18:01,571 - INFO - Epoch: 1/10, Iter: 162/300 -- train_loss: 0.6000 \n",
"2024-03-20 14:18:02,026 - INFO - Epoch: 1/10, Iter: 163/300 -- train_loss: 0.4404 \n",
"2024-03-20 14:18:02,469 - INFO - Epoch: 1/10, Iter: 164/300 -- train_loss: 0.6065 \n",
"2024-03-20 14:18:02,914 - INFO - Epoch: 1/10, Iter: 165/300 -- train_loss: 0.3703 \n",
"2024-03-20 14:18:03,409 - INFO - Epoch: 1/10, Iter: 166/300 -- train_loss: 0.2962 \n",
"2024-03-20 14:18:03,875 - INFO - Epoch: 1/10, Iter: 167/300 -- train_loss: 0.3598 \n",
"2024-03-20 14:18:04,327 - INFO - Epoch: 1/10, Iter: 168/300 -- train_loss: 0.4526 \n",
"2024-03-20 14:18:04,833 - INFO - Epoch: 1/10, Iter: 169/300 -- train_loss: 0.3000 \n",
"2024-03-20 14:18:05,355 - INFO - Epoch: 1/10, Iter: 170/300 -- train_loss: 0.5047 \n",
"2024-03-20 14:18:05,869 - INFO - Epoch: 1/10, Iter: 171/300 -- train_loss: 0.3094 \n",
"2024-03-20 14:18:06,489 - INFO - Epoch: 1/10, Iter: 172/300 -- train_loss: 0.3837 \n",
"2024-03-20 14:18:06,986 - INFO - Epoch: 1/10, Iter: 173/300 -- train_loss: 0.4154 \n",
"Generating '/tmp/nsys-report-bac5.qdstrm'\n",
"[1/1] [========================100%] profile_report.nsys-rep\n",
"Generated:\n",
" /workspace/Code/tutorials/pathology/tumor_detection/ignite/profile_report.nsys-rep\n"
]
}
],
"outputs": [],
"source": [
"!nsys profile \\\n",
" --trace nvtx,osrt,cudnn,cuda, \\\n",
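For context, the cell being stripped of its output invokes NVIDIA Nsight Systems to profile the Camelyon training pipeline; only the first lines of the command are visible in this hunk. A minimal sketch of a comparable invocation is below; it is not the notebook's exact command. The script name profile_camelyon.py is a hypothetical export of the notebook, while the output name follows the profile_report.nsys-rep file mentioned in the removed output.

    # Trace NVTX ranges plus OS runtime, cuDNN, and CUDA API calls.
    # Flags are illustrative; the full command in the notebook is truncated in this diff.
    !nsys profile \
        --trace nvtx,osrt,cudnn,cuda \
        --output ./profile_report \
        --force-overwrite true \
        python profile_camelyon.py

The resulting profile_report.nsys-rep can then be opened in the Nsight Systems GUI to inspect data-loading and kernel timelines.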
