
[API] redesign of logging and monitoring for 2.0 #1700

Open
dr-upsilon opened this issue Oct 22, 2024 · 4 comments
Labels: API design (API design & software architecture)

Comments


dr-upsilon commented Oct 22, 2024

Discussion on API design of monitoring and logging layer for version 2.0.

Top level umbrella issue: #1736

Converted from the original issue below, by @dr-upsilon.


pytorch-forecasting seemingly unnecessarily saves loss and logging_metrics multiple times

  • PyTorch-Forecasting version: 1.1.1
  • PyTorch version: 2.4.1
  • Python version: 3.12.5
  • Operating System: Windows

C:\...miniconda3\envs\envpt\Lib\site-packages\lightning\pytorch\utilities\parsing.py:208: Attribute 'logging_metrics' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['logging_metrics'])`.

This is caused by `self.save_hyperparameters()` in the `__init__` method of `TemporalFusionTransformer`: `save_hyperparameters()` uses `inspect` and frame introspection to identify all the hyperparameters.

What's the reason to keep it, or shall we add handling in `__init__`?
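
For illustration, a minimal sketch of the `__init__` handling suggested above, using a hypothetical model class rather than the actual `TemporalFusionTransformer` code: Lightning's `save_hyperparameters(ignore=...)` skips the named attributes, so `loss` and `logging_metrics` are serialized only once, via the module's state dict:

```python
import torch.nn as nn
from lightning.pytorch import LightningModule


class MyForecastingModel(LightningModule):
    """Hypothetical model illustrating the suggested __init__ handling."""

    def __init__(self, hidden_size: int, loss: nn.Module, logging_metrics: nn.ModuleList):
        super().__init__()
        # nn.Module attributes are already stored in the checkpoint's
        # state_dict; excluding them from hparams avoids saving them
        # twice and silences the Lightning warning quoted above.
        self.save_hyperparameters(ignore=["loss", "logging_metrics"])
        self.loss = loss
        self.logging_metrics = logging_metrics
        self.output_layer = nn.Linear(hidden_size, 1)
```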

@dr-upsilon dr-upsilon changed the title Is there a reason that pytorch-forecasting saves loss and logging_metrics multiple times? Is there a reason that pytorch-forecasting seemingly unnecessarily saves loss and logging_metrics multiple times? Oct 22, 2024
@fkiraly fkiraly added the question Further information is requested label Oct 22, 2024
fkiraly (Collaborator) commented Oct 22, 2024

Maybe a question for @jdb78.

@dr-upsilon, is this happening in a way that is duplicative with lightning core?

dr-upsilon (Author) commented

> Maybe a question for @jdb78.
>
> @dr-upsilon, is this happening in a way that is duplicative with lightning core?

Yes, I think so. `BaseModel` inherits from `HyperparametersMixin` in `lightning.pytorch.core.mixins.hparams_mixin`.
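
A minimal reproduction sketch using plain Lightning (no pytorch-forecasting involved; the class name here is hypothetical), which suggests the behavior comes from lightning core itself: `save_hyperparameters()` inspects the caller's frame and records every `__init__` argument, including `nn.Module` instances that checkpointing already covers:

```python
import torch.nn as nn
from lightning.pytorch import LightningModule


class Duplicating(LightningModule):
    """Hypothetical minimal module; not pytorch-forecasting code."""

    def __init__(self, logging_metrics: nn.Module):
        super().__init__()
        # With no ignore list, every __init__ argument is captured via
        # frame inspection -- nn.Module instances included, which emits
        # the parsing.py warning quoted in the original issue.
        self.save_hyperparameters()
        self.logging_metrics = logging_metrics


model = Duplicating(logging_metrics=nn.ModuleList([nn.Linear(4, 1)]))
print("logging_metrics" in model.hparams)  # True: kept in hparams as well
```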

fkiraly (Collaborator) commented Nov 7, 2024

Looked through this multiple times and I cannot come up with a good reason (without hearing @jdb78's original rationale).

I suppose it is something that changed between different versions of lightning, so it made sense once but is now duplicative.

Would you like to contribute a PR?

@fkiraly fkiraly changed the title Is there a reason that pytorch-forecasting seemingly unnecessarily saves loss and logging_metrics multiple times? [API] pytorch-forecasting seemingly unnecessarily saves loss and logging_metrics multiple times Jan 5, 2025
@fkiraly fkiraly added API design API design & software architecture and removed question Further information is requested labels Jan 5, 2025
@fkiraly fkiraly changed the title [API] pytorch-forecasting seemingly unnecessarily saves loss and logging_metrics multiple times [API] redesign of logging and monitoring for 2.0 Jan 5, 2025
fkiraly (Collaborator) commented Jan 5, 2025

@dr-upsilon, monitoring and logging have emerged as a top priority item for the API redesign.

I have hence converted this issue into a more general API redesign issue for monitoring and logging.

  • your opinions on the "should" state would be much appreciated!
  • if you are interested in contributing more deeply to this, consider joining the sktime discord and the pytorch-forecasting channel, and/or interacting with the higher-level issues.
