In our current design, the model details, fit settings, and fit results are all bundled into the Model class. This brings some complexity with it:

- We always have to think about what happens when a model is "refit", especially if a different redshift is specified and features that were deactivated because of the wavelength range have to be reactivated.
- Storing the uncertainties is an issue: they are only meaningful for a fit result, not for a fit setup.
- The bounds are stored alongside the result, but they are only meaningful for a fit setup, not for the result (although they are still informative to the user, e.g. to check whether a bound has been hit).
Alternative design
- Model: stores the features table provided at construction, but does not change it afterwards. Every call to fit returns a new Result object. Model objects cannot be saved.
- Result: contains the fitted features table, the uncertainties, and maybe even a copy of the user-supplied spectrum. This would be the new home of the plot function and of the various extraction/analysis tools. Results can be saved and loaded. (A rough sketch of this split follows below.)
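A minimal sketch of what this split could look like, purely for illustration; the class names (Model, FitResult), the run_fitter placeholder, and the method signatures are assumptions here, not the actual PAHFIT API:

```python
# Hypothetical sketch of the proposed split -- class names, signatures, and
# the run_fitter placeholder are illustrative, not the actual PAHFIT API.
from dataclasses import dataclass
from typing import Any, Optional


def run_fitter(features, spectrum, redshift):
    """Placeholder for the actual fitting machinery."""
    fitted_features = features          # in practice: a fitted *copy* of the table
    uncertainties = {}                  # in practice: per-parameter uncertainties
    return fitted_features, uncertainties


@dataclass(frozen=True)
class FitResult:
    """Everything tied to one fit: fitted values, uncertainties, input spectrum."""
    features: Any                       # fitted features table
    uncertainties: Any                  # uncertainties on the fitted cells
    spectrum: Optional[Any] = None      # copy of the user-supplied spectrum

    def plot(self, ax=None):
        """Plotting lives here, since it needs both the fit and the data."""
        raise NotImplementedError

    def save(self, filename):
        """Results can be saved and loaded; Model objects cannot."""
        raise NotImplementedError


class Model:
    """Stores the features table provided at construction; never mutated by fit()."""

    def __init__(self, features):
        self.features = features        # e.g. a Features table read from a YAML pack

    def fit(self, spectrum, redshift=0.0) -> FitResult:
        """Each call returns a fresh FitResult; the Model itself stays unchanged."""
        fitted, unc = run_fitter(self.features, spectrum, redshift)
        return FitResult(fitted, unc, spectrum)
```

The key point is that fit never mutates the Model, so a "refit" is simply another call with fresh inputs, and the uncertainties live only on the Result, where they are meaningful.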
Possible issues with this:
What does "guess" do then? Change Model or return a Result?
Don't we want to be able to plot unfitted models / initial guesses too?
How to use a result as initial conditions for another fit (Answer: pass it to the guess function).
Maybe Model doesn't even need to be a class, but just a fit-function that takes yaml pack + spectrum, or a Result + spectrum (for re-fitting).
The user has to learn two classes instead of one.
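For the "no Model class" variant and the "result as initial conditions" question above, a rough sketch (reusing the FitResult and run_fitter placeholders from the previous snippet; fit, read_pack, the pack filename, and the spectrum variables are likewise made-up names):

```python
# Sketch of the function-only alternative: fit() accepts either a science-pack
# file or a previous FitResult as its starting point. All names here are
# illustrative, not the actual PAHFIT API.
def read_pack(pack_file):
    """Placeholder: parse a science pack (YAML) into a features table."""
    return {}                          # would return a Features table in practice


def fit(start, spectrum, redshift=0.0):
    """Fit `spectrum`, starting from a pack file name or a previous FitResult."""
    if isinstance(start, FitResult):
        features = start.features     # re-fit: previous output becomes the guess
    else:
        features = read_pack(start)   # fresh fit straight from a science pack
    fitted, unc = run_fitter(features, spectrum, redshift)
    return FitResult(fitted, unc, spectrum)


# Typical cube workflow: each result seeds the fit of the next spectrum over
# (first_spectrum / remaining_spectra stand in for the user's cube spectra).
result = fit("my_science_pack.yaml", first_spectrum)
for spec in remaining_spectra:
    result = fit(result, spec)
```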
We're not going to change this now, since we really need to get PAHFIT running. But I really wanted to write this down, and we should keep this type of redesign in mind if we run into more symptoms of the same underlying conceptual problem.
Yeah, guess is pretty much required to change the model, since it is a warm-up to fit. And fit can be a prelude to another fit, e.g. on the next spectrum over in the cube. We can leave this open for a while, but I think a better approach is to tweak Features so that it gives an indication of:
- the provenance and history of the Features, which could go in features.meta['history'] or similar. Updated when: read from YAML (noting which one), guessed with a spectrum (mentioning which guess model and which spectrum/instrument(s)), fitted to a spectrum, etc. (A sketch of such history records follows after this list.)
- any auto-disabled features (e.g. ones that fell off the range given the redshift). These can be re-enabled if you fit using a different redshift.
- feature values (e.g. FWHM) that have been altered by the instrument pack. This is "opt-in" as I understand it; the user must explicitly request that the Features' FWHM line values be used.
- uncertainties on fitted output cells.
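As a rough illustration of the history idea, assuming Features behaves like an astropy Table with a meta dict; the add_history helper, the record format, and the example pack/instrument/spectrum names are all hypothetical, not an agreed-upon scheme:

```python
# Rough illustration of provenance tracking in features.meta['history'].
# Helper name, record format, and example values are assumptions.
from datetime import datetime, timezone

from astropy.table import Table


def add_history(features, action, **details):
    """Append one provenance record to the table's metadata."""
    features.meta.setdefault("history", []).append(
        {"when": datetime.now(timezone.utc).isoformat(), "action": action, **details}
    )


features = Table()  # stand-in for a PAHFIT Features table
add_history(features, "read", pack="my_science_pack.yaml")
add_history(features, "guess", instrument="spitzer.irs.sl.2", spectrum="example spectrum")
add_history(features, "fit", spectrum="example spectrum", redshift=0.0)

for record in features.meta["history"]:
    print(record)
```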
jdtsmith changed the title from "Possible API change: Separate fit results from Model" to "Possible API change: Separate fit results from Model/add history to Features Table" on May 20, 2024.