For environments with large observation spaces and numerous agents (such as MAgent), storing the replay buffer in RAM may become infeasible due to hardware limitations. To address this issue, TorchRL offers an alternative storage option: LazyMemmapStorage.
As explained in the documentation, LazyMemmapStorage functions similarly to LazyTensorStorage but utilizes disk files instead of RAM. This approach allows for handling extremely large datasets while maintaining efficient, contiguous data access.
It would be highly beneficial if BenchMARL provided greater flexibility in choosing between storing the replay buffer in RAM or on the hard disk, depending on the use case and resource availability.
I created PR #155 with the solution I implemented for myself, in case it helps. With this change, I only need to pass one extra argument when creating my experiment.
Relevant links: https://pytorch.org/rl/stable/reference/generated/torchrl.data.replay_buffers.LazyMemmapStorage.html