Hi, thanks for sharing this project.
I prepared all the required h5py files and caption annotation files for COCO Caption fine-tuning as instructed in the README. Training proceeded normally at first, but was killed (bus error (core dumped)) after around 70k–100k iterations.
I suspect an out-of-memory issue caused by data loading: memory usage grew steadily as training progressed, perhaps because more and more image features were being read from the h5py files. Calling del or gc.collect() did not free the memory held by unreferenced objects.
Is there a good way to reduce memory usage during multimodal training, or any idea what was going on in my case? Thanks a lot!