
Running inference on server #840

Closed · Answered by YuanTingHsieh
lyscho asked this question in Q&A

Hi @lyscho

Thanks for raising this discussion.

Yes, it is possible.

There are several places where you can add this logic to integrate it into the NVFlare system:

  1. Add the logic to a "workflow". For example, if you are using the scatter-and-gather workflow, you can add the evaluation code at the end of each round: https://github.com/NVIDIA/NVFlare/blob/2.1.4/nvflare/app_common/workflows/scatter_and_gather.py#L237. Copy that file into the custom folder of your app, add your evaluation code to it, and change your config_fed_server.json to use the customized class (see the first sketch after this list).

  2. Write a "widget/FLComponent" that listens to the AppEventType.AFTER_LEARNABLE_PERSIST event (https://github.com/NVIDIA/NVFlare/blob/2.1.4/nvflare/a…). A sketch of such a component is shown after this list.
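
For option 1, here is a minimal sketch of the kind of evaluation helper you could place in your app's custom folder and call at the end of each round in your copied scatter_and_gather.py (around the linked line). The function name, the AppConstants.GLOBAL_MODEL prop key, and the evaluation body are assumptions you would replace with your own logic; this is not part of NVFlare.

```python
# Hypothetical helper, placed next to your copied scatter_and_gather.py in the
# custom folder and called at the end of each training round.
from nvflare.apis.fl_context import FLContext
from nvflare.app_common.app_constant import AppConstants


def evaluate_global_model(fl_ctx: FLContext, current_round: int) -> dict:
    """Evaluate the current global model on a server-side dataset.

    Assumes the aggregated global weights are available on the FLContext
    under AppConstants.GLOBAL_MODEL; adjust the key to match your copy.
    """
    global_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)
    if global_model is None:
        return {}

    # Rebuild your framework's model from the Learnable weights and run it on
    # your held-out validation data here; the dict below is only a placeholder.
    metrics = {"round": current_round, "accuracy": None}
    return metrics
```

Your config_fed_server.json would then reference the customized workflow class in the custom folder instead of the built-in one.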

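For option 2, here is a minimal sketch of such a widget, assuming the persisted global model is available on the FLContext under AppConstants.GLOBAL_MODEL (the exact prop key can differ between NVFlare versions). GlobalModelEvaluator is a hypothetical class name, and the evaluation itself is left as a placeholder.

```python
# Hypothetical server-side evaluator widget; not an NVFlare class.
from nvflare.apis.fl_component import FLComponent
from nvflare.apis.fl_context import FLContext
from nvflare.app_common.app_constant import AppConstants
from nvflare.app_common.app_event_type import AppEventType


class GlobalModelEvaluator(FLComponent):
    def handle_event(self, event_type: str, fl_ctx: FLContext):
        # Fired by the workflow after the persistor saves the global model.
        if event_type == AppEventType.AFTER_LEARNABLE_PERSIST:
            global_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)  # assumed prop key
            if global_model is None:
                self.log_warning(fl_ctx, "no global model found in FLContext")
                return
            metrics = self._evaluate(global_model)
            self.log_info(fl_ctx, f"server-side evaluation: {metrics}")

    def _evaluate(self, global_model) -> dict:
        # Placeholder: convert the Learnable weights into your framework's
        # model and run inference/evaluation on your server-side dataset.
        return {}
```

You would register this component in the components section of config_fed_server.json so the server creates it and it starts receiving events.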

Answer selected by lyscho