
Swell workflows for common comparisons #442

Open

ashiklom opened this issue Oct 17, 2024 · 2 comments
Two common use cases (but there may be others):

  • Compare two different sets of pinned versions of JEDI
  • Compare two different versions of GEOS

Each of these comparisons would probably be a separate workflow (e.g., swell create compare_jedi; swell create compare_geos).

Key to these workflows is not only making sure that both workflows execute without error, but also calculating relevant comparison diagnostics (e.g., absolute diffs; maybe some maps / time series).
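For the diagnostics step, a minimal sketch of what "absolute diffs plus maps / time series" could look like, assuming both experiments write comparable NetCDF output (all paths, variable, and dimension names below are hypothetical, not actual swell outputs):

```python
# Minimal sketch: absolute-difference map and time series between two
# experiments. Paths, variable name, and dimension names are hypothetical.
import xarray as xr
import matplotlib.pyplot as plt

ds_a = xr.open_dataset("experiment_a/analysis.nc")  # hypothetical path
ds_b = xr.open_dataset("experiment_b/analysis.nc")  # hypothetical path

abs_diff = abs(ds_a["tocn"] - ds_b["tocn"])  # hypothetical variable

# Map of the absolute difference at the first time step.
abs_diff.isel(time=0).plot()
plt.savefig("absdiff_map.png")
plt.close()

# Area-mean time series of the absolute difference.
abs_diff.mean(dim=["lat", "lon"]).plot()
plt.savefig("absdiff_timeseries.png")
```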


Dooruk commented Oct 31, 2024

Some quick thoughts:

  1. (optional) A simple preliminary test could be running the ctests inside the JEDI build directories (build-intel-release/soca, build-intel-release/fv3-jedi) and seeing whether the same number of tests pass between the two versions (it's never a 100% pass on the JEDI side!). Of course, different cmake tests will be added occasionally. A sketch of this comparison follows the command list below.

From my notes,

Running ctests:

Request interactive compute nodes and source modules in build-intel-release. Navigate to build-intel-release/soca and then run the tests by issuing ctest. This will confirm that everything was installed successfully. Some useful commands:

  • ctest -N (list all the tests)
  • ctest -V -I (run all the tests displaying full (verbose) output)
  • ctest -R <name> (run a particular test using the name of the test)
  • ctest -I 4,6 (run test numbers 4,5,6)
  • ctest -V -I 5,5 (run test 5 with all the output)
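A minimal sketch of that pass-count comparison, assuming ctest is run from inside each build directory on a compute node with the modules already sourced (the old/new directory names are placeholders):

```python
# Hedged sketch: compare ctest pass counts between two JEDI builds.
# Directory names are placeholders; adjust to the actual build locations.
import re
import subprocess

def ctest_summary(build_dir: str) -> tuple[int, int]:
    """Run ctest in build_dir and return (passed, total) from its summary line."""
    # ctest returns nonzero when tests fail, so we parse stdout regardless
    # of the return code. The summary line looks like:
    #   "100% tests passed, 0 tests failed out of 249"
    result = subprocess.run(["ctest"], cwd=build_dir, capture_output=True, text=True)
    m = re.search(r"(\d+) tests failed out of (\d+)", result.stdout)
    if m is None:
        raise RuntimeError(f"no ctest summary found in {build_dir}")
    failed, total = int(m.group(1)), int(m.group(2))
    return total - failed, total

for component in ("soca", "fv3-jedi"):
    old = ctest_summary(f"old/build-intel-release/{component}")  # placeholder path
    new = ctest_summary(f"new/build-intel-release/{component}")  # placeholder path
    print(f"{component}: {old[0]}/{old[1]} passed (old) vs {new[0]}/{new[1]} passed (new)")
```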
  2. For variational tests, at the end of the jedi_variational_log file, OOPS prints the following cost function stats for each observer (this one is from SOCA, hence marine observations), plus first-guess-minus-background (FG-BG) stats for the state variables:
End Jo Observations Errors
CostJo   : Nonlinear Jo(adt_3a_egm2008) = 266.798601, nobs = 5802, Jo/n = 0.045984, err = 0.100000
CostJo   : Nonlinear Jo(adt_3b_egm2008) = 256.433901, nobs = 5900, Jo/n = 0.043463, err = 0.100000
CostJo   : Nonlinear Jo(adt_c2_egm2008) = 165.992805, nobs = 3529, Jo/n = 0.047037, err = 0.100000
CostJo   : Nonlinear Jo(adt_coperl4) = 0.000000 --- No Observations
CostJo   : Nonlinear Jo(adt_j3_egm2008) = 328.166562, nobs = 7318, Jo/n = 0.044844, err = 0.100000
CostJo   : Nonlinear Jo(adt_sa_egm2008) = 187.663495, nobs = 5166, Jo/n = 0.036327, err = 0.100000
CostJo   : Nonlinear Jo(sst_gmi_l3u) = 11277.829275, nobs = 63718, Jo/n = 0.176996, err = 1.035731
CostJo   : Nonlinear Jo(sss_smos_esa) = 31144.634467, nobs = 92514, Jo/n = 0.336648, err = 0.674588
CostJo   : Nonlinear Jo = 43627.519106
CostJb: FG-BG
Valid time: 2021-07-02T00:00:00Z
socn   min=   -2.277945   max=    2.484253   mean=   -0.000148
tocn   min=   -6.935245   max=    6.080743   mean=    0.000608
ssh   min=   -0.190825   max=    0.222546   mean=   -0.000202
hocn   min=    0.000000   max=    0.000000   mean=    0.000000
mld   min=    0.000000   max=    0.000000   mean=    0.000000
layer_depth   min=    0.000000   max=    0.000000   mean=    0.000000
CostFunction: Nonlinear J = 43627.519106
OOPS_STATS Variational end                          - Runtime:   107.77 sec,  Local Memory:     5.80 Gb

eva-jedi_log already parses some of this log file, so another step could be added to EVA to parse this part. Afterwards, the two JEDI versions could be compared against each other, with tolerances for nobs and Jo. This is what @rtodling and I are doing currently. I already wrote a crappy Python regex script for parsing this a while back, but there has to be a better approach. I can share what I have, though.
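For illustration, a stripped-down version of that kind of regex parsing (not the actual script; the tolerance values are made up), matching the CostJo lines shown in the log excerpt above:

```python
# Rough sketch: pull Jo, nobs, and Jo/n per observer out of a variational log
# and compare two versions with tolerances. Paths and tolerances are made up.
import re

COSTJO = re.compile(
    r"CostJo\s*:\s*Nonlinear Jo\((?P<obs>[^)]+)\)\s*=\s*(?P<jo>[\d.]+),"
    r"\s*nobs\s*=\s*(?P<nobs>\d+),\s*Jo/n\s*=\s*(?P<jon>[\d.]+)"
)

def parse_costjo(logfile: str) -> dict[str, dict[str, float]]:
    # Lines like "Nonlinear Jo(adt_coperl4) = 0.000000 --- No Observations"
    # carry no nobs and are intentionally skipped by the regex.
    stats = {}
    with open(logfile) as f:
        for line in f:
            m = COSTJO.search(line)
            if m:
                stats[m.group("obs")] = {
                    "jo": float(m.group("jo")),
                    "nobs": int(m.group("nobs")),
                    "jo_n": float(m.group("jon")),
                }
    return stats

current = parse_costjo("current/jedi_variational_log.log")    # placeholder path
candidate = parse_costjo("candidate/jedi_variational_log.log")  # placeholder path
for obs in sorted(current.keys() & candidate.keys()):
    dnobs = abs(current[obs]["nobs"] - candidate[obs]["nobs"])
    djon = abs(current[obs]["jo_n"] - candidate[obs]["jo_n"])
    if dnobs > 0 or djon > 1e-3:  # made-up tolerance
        print(f"{obs}: nobs diff={dnobs}, Jo/n diff={djon:.6f}")
```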

At some point there was a thought about saving these stats to a file using JEDI/OOPS directly, which would save us from the hassle of writing a parser in EVA. There is an EMC/GMAO tag-up on Monday and I will ask about this.

  3. As for GEOS version updates, that's a bit tricky since some versions come with a different setup, but as a first step we could compare the diag outputs of two forecast versions (MAPL History has some cool settings that would standardize these). A rough sketch of such a comparison follows below.
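As an example of that first step, a hedged sketch scanning one shared History collection from two GEOS versions (the file paths and collection name are hypothetical):

```python
# Hedged sketch: compare every variable shared by two versions of a MAPL
# History collection and report the largest absolute difference per variable.
# Paths and the collection name are hypothetical.
import numpy as np
import xarray as xr

ds_old = xr.open_dataset("geos_old/geosgcm_prog.20211212_0000z.nc4")  # hypothetical
ds_new = xr.open_dataset("geos_new/geosgcm_prog.20211212_0000z.nc4")  # hypothetical

for name in sorted(set(ds_old.data_vars) & set(ds_new.data_vars)):
    diff = np.abs(ds_old[name].values - ds_new[name].values)
    print(f"{name}: max abs diff = {np.nanmax(diff):.6e}")
```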


Dooruk commented Nov 21, 2024

I thought the following simple commands might be useful for this issue, in response to this comment:

Ricardo and I simply run two experiments (3dfgat_atmos and 3dvar) with the current and candidate JEDI builds and execute these commands to compare the convergences:

fgrep "Residual norm" current_3dfgat/run/20211212T000000Z/geos_atmosphere/jedi_variational_log.log
fgrep "Residual norm" current_3dvar/run/20210701T120000Z/geos_ocean/jedi_variational_log.log
