
Flexibility for replay buffer storage #154

Open
JoseLuisC99 opened this issue Dec 31, 2024 · 2 comments

Comments

@JoseLuisC99
Contributor

For environments with large observation spaces and numerous agents (such as MAgent), storing the replay buffer in RAM may become infeasible due to hardware limitations. To address this issue, TorchRL offers an alternative storage option: LazyMemmapStorage.

As explained in the documentation, LazyMemmapStorage functions similarly to LazyTensorStorage but utilizes disk files instead of RAM. This approach allows for handling extremely large datasets while maintaining efficient, contiguous data access.

It would be highly beneficial if BenchMARL provided greater flexibility in choosing between storing the replay buffer in RAM or on the hard disk, depending on the use case and resource availability.

Relevant link: https://pytorch.org/rl/stable/reference/generated/torchrl.data.replay_buffers.LazyMemmapStorage.html

@JoseLuisC99
Contributor Author

JoseLuisC99 commented Dec 31, 2024

I created PR #155 with the solution I implemented for my own use; it may be helpful here. With this change, I only need to add one extra argument when creating my experiment:

experiment = Experiment(
  task=task_config,
  algorithm_config=VdnConfig.get_from_yaml(),
  model_config=model_config,
  critic_model_config=model_config,
  seed=seed,
  config=experiment_config,
  replay_buffer_storage=PhysicalStorage.DISK,
)

@matteobettini
Collaborator

Thanks a mil for this! Yes, very useful! I'll work towards merging it.
