Hi,

Please help me resolve this issue. The moment I run the program, it fails at this line:

new_node1 = torch.matmul(res_feature_after_view1, self.node_fea_for_res)

with the following error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.41 GiB (GPU 0; 2.00 GiB total capacity; 330.44 MiB already allocated; 1.23 GiB free; 410.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
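For context, the numbers in the error message already show why no allocator tuning can help here. A quick back-of-the-envelope check (values copied from the message above):

```python
# Values copied from the CUDA OOM message above, converted to MiB.
total_capacity = 2.00 * 1024   # 2.00 GiB card
requested      = 1.41 * 1024   # allocation the matmul asked for
free           = 1.23 * 1024   # MiB reported free

# The single requested block is ~70% of the whole card and slightly
# larger than the reported free memory, so the allocation cannot
# succeed no matter how the allocator splits its cached blocks.
print(f"requested {requested:.0f} MiB vs free {free:.0f} MiB")
print(requested > free)
```

Since the request exceeds the free memory outright, the tensor itself has to get smaller (smaller batch, half precision, or moving the op to CPU); max_split_size_mb only addresses fragmentation, not a shortfall.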
Approaches tried:

import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:2000" -> didn't work (tried with both double and single quotes)
torch.cuda.empty_cache() -> called at the beginning of the file and in several places within the loop:
for iii, sample_batched in enumerate(zip(testloader_list, testloader_flip_list)):
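Two things worth checking in the approaches above (a sketch, assuming the env var was being set after `import torch` in the script). PYTORCH_CUDA_ALLOC_CONF is read when the CUDA caching allocator initializes, so it must be set before torch is imported, or at least before the first CUDA call. Also, max_split_size_mb:2000 is as large as the whole 2 GiB card, which effectively disables splitting; a small cap such as 128 is the usual starting point on small GPUs:

```python
import os

# Must run BEFORE `import torch`: the caching allocator reads this
# variable only once, when CUDA is first initialized, so setting it
# mid-script has no effect.
# 128 MB is an illustrative small cap; 2000 MB exceeds the 2 GiB card.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])

# import torch  # only after the variable is set
```

Even with the variable set correctly, a 1.41 GiB request still cannot fit in 1.23 GiB of free memory, so the realistic fixes on a 2 GiB GPU are reducing the batch size fed into testloader_list, wrapping the inference loop in torch.no_grad() so no autograd buffers are kept, or running the model in half precision.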
jeevitesh changed the title from "PYTORCH_CUDA_ALLOC_CONF" to "Error while executing /inference.py: Error PYTORCH_CUDA_ALLOC_CONF" on Oct 1, 2023