Support for different algorithms for different groups #156
Hello! Thanks for opening this! While there is currently no support in the interface for doing this, we have structured the library so that we can allow it easily. In fact, I already use something like this for my own runs. Essentially, different groups already have separate replay buffers, losses, optimizers, etc. If you look at BenchMARL/benchmarl/experiment/experiment.py (lines 476 to 505 in e910a83), you can see that every function of the algorithm class already takes the group as an argument.

I'll first discuss one possibility for having different algorithms for different groups, and then different models.

Different algorithms for each group

The easiest way to do this is to create a new custom algorithm (let's say IppoIsac):

```python
class IppoIsac(Algorithm):
    def __init__(self, **kwargs):
        self.ippo = Ippo(...)
        self.isac = Isac(...)

    def _get_loss(
        self, group: str, policy_for_loss: TensorDictModule, continuous: bool
    ) -> Tuple[LossModule, bool]:
        if group == "attackers":
            return self.ippo._get_loss(group, policy_for_loss, continuous)
        elif group == "defenders":
            return self.isac._get_loss(group, policy_for_loss, continuous)
```

Different models for each group

For models it is even simpler: you can create a custom model config that reimplements the model-creation method, dispatching to a different inner config depending on the group.

Note that the choices of using different algorithms and models for different groups are completely decoupled, and they can be made independently of each other, following the BenchMARL philosophy (you could use the same algorithm for all groups with different models, or vice versa).

We could provide this

Eventually, what I could do is provide custom ensemble classes that take a map from group names to component configs. But I tend to think that this would be a bit beyond the scope of BenchMARL, as it would be really hard to configure through hydra and it would require users to know what groups there are (i.e., know what env is being used). I'll think about it, but in the meantime let me know if the proposed solution works for you. |
Thanks for the quick reply, this solution should work for my use case (I have yet to implement and test it, but at first glance this looks like the solution I was hoping for). |
I think what I can do is implement them and provide them to the user. To configure them, you can just pass the map of group names to component configs. Since they are hard to configure via hydra, I can in the first instance expose them only via the Python interface and write a section in the docs on how to use them for interested users. |
I have implemented the ensemble model and will soon do the algorithm in #159. As you can see, it is just a few lines, because I already wanted to allow this in the lib. |
Thank you! I will take a look soon. |
I have also pushed the algorithm ensemble. There is a point I overlooked regarding this: since the training loops are currently shared by all algorithms, it is not currently possible to mix on-policy and off-policy algorithms in the same ensemble. This is not impossible to overcome, but it would require a major restructuring of the lib, so for now I am keeping this constraint. However, it is now possible to run stuff like this:

```python
from benchmarl.algorithms import EnsembleAlgorithmConfig, IppoConfig, MappoConfig
from benchmarl.environments import VmasTask
from benchmarl.experiment import Experiment, ExperimentConfig
from benchmarl.models import DeepsetsConfig, EnsembleModelConfig, GnnConfig, MlpConfig

if __name__ == "__main__":
    # Loads from "benchmarl/conf/experiment/base_experiment.yaml"
    experiment_config = ExperimentConfig.get_from_yaml()
    # Loads from "benchmarl/conf/task/vmas/simple_tag.yaml"
    task = VmasTask.SIMPLE_TAG.get_from_yaml()
    algorithm_config = EnsembleAlgorithmConfig(
        {"agent": MappoConfig.get_from_yaml(), "adversary": IppoConfig.get_from_yaml()}
    )
    model_config = EnsembleModelConfig(
        {"agent": MlpConfig.get_from_yaml(), "adversary": GnnConfig.get_from_yaml()}
    )
    critic_model_config = EnsembleModelConfig(
        {
            "agent": DeepsetsConfig.get_from_yaml(),
            "adversary": MlpConfig.get_from_yaml(),
        }
    )
    experiment = Experiment(
        task=task,
        algorithm_config=algorithm_config,
        model_config=model_config,
        critic_model_config=critic_model_config,
        seed=0,
        config=experiment_config,
    )
    experiment.run()
``` |
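The on-policy/off-policy constraint mentioned above could be enforced with an up-front check over the group-to-config map. This is a hypothetical sketch, not BenchMARL code: the `AlgoConfigStub` class and its `on_policy` attribute are illustrative assumptions standing in for real algorithm configs.

```python
# Sketch of the compatibility rule implied above: every algorithm config
# in an ensemble must agree on whether training is on-policy.
# AlgoConfigStub and its fields are illustrative, not BenchMARL's API.
from dataclasses import dataclass


@dataclass
class AlgoConfigStub:
    name: str
    on_policy: bool


def validate_ensemble(configs: dict) -> bool:
    # Collect the distinct on-policy flags across all groups.
    flags = {cfg.on_policy for cfg in configs.values()}
    if len(flags) > 1:
        raise ValueError(
            "Cannot mix on-policy and off-policy algorithms in one ensemble"
        )
    return True


# MAPPO and IPPO are both on-policy, so this ensemble is valid.
ok = validate_ensemble({
    "agent": AlgoConfigStub("mappo", on_policy=True),
    "adversary": AlgoConfigStub("ippo", on_policy=True),
})

# Mixing in an off-policy algorithm (e.g. ISAC) is rejected.
try:
    validate_ensemble({
        "agent": AlgoConfigStub("mappo", on_policy=True),
        "adversary": AlgoConfigStub("isac", on_policy=False),
    })
    mixed_ok = True
except ValueError:
    mixed_ok = False
```

This kind of validation fails fast at configuration time rather than deep inside a shared training loop, which is why the MAPPO/IPPO pairing in the example above is valid while an IPPO/ISAC ensemble is not.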
Thanks, I think this should be good for my use case right now. |
Thanks for this amazing repository, this is a great addition to the MARL research community.
I would like to know if there is support for using different algorithms for different groups (e.g., when half the agents use IPPO and the other half use ISAC in the same environment) in the same experiment. By grouping I am referring to the shared abstraction introduced in BenchMARL/TorchRL. By extension, this would also require support for using different components (models, replay buffers, etc.) per group. If there is no current support, what steps might one need to take to get this functionality?