Running inference on server #840
Hi NVFlare team, I'm wondering whether it is possible to have a (global) test set on the server and run inference with the current global model on that server-side dataset to evaluate its performance. Thank you already!
Hi @lyscho
Thanks for raising this discussion.
Yes, it is possible.
There are several places where you can do this to integrate it into the NVFlare system:
Add the logic into a "workflow", for example, if you are using the scatter and gather workflow, you could add the evaluation code at the end of each round: https://github.com/NVIDIA/NVFlare/blob/2.1.4/nvflare/app_common/workflows/scatter_and_gather.py#L237, you can copy these codes into the custom folder in your app, then add codes to them, just need to change your config_fed_server.json to use it.
Write a "widget/FLComponent" that listen to the
AppEventType.AFTER_LEARNABLE_PERSIST
event ((https://github.com/NVIDIA/NVFlare/blob/2.1.4/nvflare/a…
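For reference, here is a minimal sketch of the event-listener approach mentioned above. It is an illustration under assumptions, not NVFlare's own code: in particular, the AppConstants.GLOBAL_MODEL property key used to fetch the model from the FLContext should be verified against your NVFlare version, and run_server_side_evaluation is a placeholder for your own inference code.

```python
from nvflare.apis.fl_component import FLComponent
from nvflare.apis.fl_context import FLContext
from nvflare.app_common.app_constant import AppConstants
from nvflare.app_common.app_event_type import AppEventType


def run_server_side_evaluation(global_model) -> float:
    """Placeholder: load the weights from `global_model` and run inference on your server test set."""
    raise NotImplementedError("plug in your own evaluation code here")


class GlobalModelEvaluator(FLComponent):
    """Hypothetical widget that evaluates the global model each time it is persisted on the server."""

    def handle_event(self, event_type: str, fl_ctx: FLContext):
        if event_type == AppEventType.AFTER_LEARNABLE_PERSIST:
            # Assumption: the workflow stores the current global ModelLearnable
            # under AppConstants.GLOBAL_MODEL in the FLContext.
            global_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)
            if global_model is None:
                self.log_warning(fl_ctx, "no global model found in FLContext, skipping evaluation")
                return
            metric = run_server_side_evaluation(global_model)
            self.log_info(fl_ctx, f"server-side evaluation result: {metric}")
```

To use such a component, you would register it in the components section of your config_fed_server.json so that it receives server-side events.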