Reducing CUDA Memory #20
Comments
I have the same problem; mine is a 1080 Ti. I have changed num_pts and chunk_size to 1, but it still does not work. Could the author publish a small dataset that fits in 12 GB of video memory?
Sorry for the confusion. num_pts and chunk_size only control the memory usage for training, not for preprocessing. Could you provide more details about which part of the preprocessing code leads to the OOM error? One likely reason is that lines like this consume too much memory.
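As a rough illustration of why preprocessing can exceed 12 GB even when the training knobs are small, here is a minimal sketch of computing per-pair flows under torch.no_grad() and moving each result off the GPU immediately; flow_model, frames, and pairs are placeholder names, not the repository's actual API.

```python
import torch

# Minimal sketch (placeholder names, not the repo's API): compute pairwise
# flows one pair at a time, without building an autograd graph, and move
# each result to CPU right away so GPU usage stays roughly constant.
@torch.no_grad()
def compute_pairwise_flows(flow_model, frames, pairs, device="cuda"):
    flows = []
    for i, j in pairs:
        src = frames[i].unsqueeze(0).to(device)
        dst = frames[j].unsqueeze(0).to(device)
        flow = flow_model(src, dst)   # forward pass only
        flows.append(flow.cpu())      # keep results on CPU, not GPU
        del src, dst, flow
        torch.cuda.empty_cache()      # release cached blocks between pairs
    return flows
```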
Ah, I see. Thank you. Yes, I am able to compute the DINO features, but the moment it begins computing the pairwise optical flows it instantly crashes; that step needs a very large amount of memory. Can we scale this down or modify it as well in preprocessing? Thank you for any advice you may be able to provide!
I changed num_pairs to 4, and it runs successfully on a 3060 with 12 GB of memory.
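For readers hitting the same limit, here is a hypothetical sketch of what lowering num_pairs means in practice: only the parameter name comes from the comment above, and the pairing scheme shown is an assumption, not taken from the repository.

```python
# Hypothetical illustration: with num_pairs = 4, each frame is matched
# only against its 4 following neighbors instead of every other frame,
# so far fewer optical flows are computed and held in memory at once.
def make_frame_pairs(num_frames, num_pairs=4):
    pairs = []
    for i in range(num_frames):
        for offset in range(1, num_pairs + 1):
            if i + offset < num_frames:
                pairs.append((i, i + offset))
    return pairs

pairs = make_frame_pairs(num_frames=60, num_pairs=4)  # 4 reportedly fits in 12 GB on a 3060
```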
I am trying to train on some videos of mosquitoes and am running the preprocessing. I am running into:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 12.00 GiB total capacity; 9.91 GiB already allocated; 0 bytes free; 11.27 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I am on a 3080 Ti. Inside config.py I reduced the default number of points to 20 and the chunk size to just 100, yet I still get memory errors. Any suggestions? I ran nvidia-smi and nothing else is hogging the GPU. I am trying to squeeze this down since I can't afford an A100!
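Two things follow directly from the error message and this thread: the allocator hint the message itself suggests, and the config.py values mentioned above (keeping in mind the maintainer's note that num_pts and chunk_size only affect training memory, not preprocessing). A hedged sketch, with the specific split size being an assumed starting value:

```python
import os

# Allocator option suggested by the error message; it must be set before
# the first CUDA allocation (or exported in the shell instead).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 is an assumed starting value

# Values mentioned in this thread for config.py. Per the maintainer's reply,
# these reduce training memory only, not preprocessing memory.
num_pts = 20
chunk_size = 100
```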