
Error while executing /inference.py: PYTORCH_CUDA_ALLOC_CONF #34

Open
jeevitesh opened this issue Oct 1, 2023 · 0 comments

jeevitesh commented Oct 1, 2023

Hi,
Please help me resolve this issue. The moment I run the program, I get the following error:
new_node1 = torch.matmul(res_feature_after_view1, self.node_fea_for_res)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.41 GiB (GPU 0; 2.00 GiB total capacity; 330.44 MiB already allocated; 1.23 GiB free; 410.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Approaches tried:

  1. import os
     os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:2000" -> didn't work
  2. os.environ["PYTORCH_CUDA_ALLOC_CONF"] = 'max_split_size_mb:2000' -> didn't work
  3. torch.cuda.empty_cache() -> at the beginning of the file and in several places within the loop
     for iii, sample_batched in enumerate(zip(testloader_list, testloader_flip_list)):
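A note on why approaches 1 and 2 above may not help: PYTORCH_CUDA_ALLOC_CONF is only read when PyTorch's CUDA allocator is first initialised, so it must be set before `import torch` runs anywhere in the process. Also, on a 2.00 GiB GPU a single 1.41 GiB allocation with only 1.23 GiB free cannot succeed regardless of max_split_size_mb; reducing per-forward-pass memory (no_grad, half precision, smaller inputs) is the more realistic direction. A minimal sketch, where `model` and `sample_batched` are hypothetical stand-ins for whatever inference.py actually defines:

```python
import os

# Must happen BEFORE the first `import torch`: the allocator reads this
# variable once at initialisation, so setting it after import has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # only after the line above
#
# # Hypothetical inference sketch -- names are placeholders:
# model = model.half().cuda().eval()          # FP16 roughly halves activations
# with torch.no_grad():                       # no autograd buffers at inference
#     out = model(sample_batched.half().cuda())
```

torch.cuda.empty_cache() (approach 3) releases cached, unused blocks back to the driver but does not shrink live tensors, so it rarely helps when a single allocation is larger than the free memory.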
@jeevitesh jeevitesh changed the title PYTORCH_CUDA_ALLOC_CONF error will executing /inference.py Error PYTORCH_CUDA_ALLOC_CONF Oct 1, 2023